Bragg spectroscopy of clean and disordered lattice bosons in one dimension: a spectral fingerprint of the Bose glass Guillaume Roux1, Anna Minguzzi2, Tommaso Roscilde3 New J. Phys. 15, 055003 (2013) We study the dynamic structure factor of a one-dimensional Bose gas confined in an optical lattice and modeled by the Bose-Hubbard Hamiltonian, using a variety of numerical and analytical approaches. The dynamic structure factor, experimentally measurable by Bragg spectroscopy, is studied in three relevant cases: in the clean regime, featuring either a superfluid or a Mott phase; and in the presence of two types of (quasi-)disordered external potentials: a quasi-periodic potential obtained from a bichromatic superlattice and a random-box disorder - both featuring a Bose glass phase. In the clean case, we show the emergence of a gapped doublon mode (corresponding to a repulsively bound state) for incommensurate filling, well separated from the low-energy acoustic mode. In the disordered case, we show that the dynamic structure factor provides a direct insight into the spatial structure of the excitations, unveiling their localized nature, which represents a fundamental signature of the Bose glass phase. Furthermore, it provides a clear fingerprint of the very nature of the localization mechanism which differs for the two kinds of disorder potentials we consider. In special cases, the dynamic structure factor may provide an estimate of the position of the localization transition from superfluid to Bose glass, in a complementary manner to the information deduced from the momentum distribution. 2. Université Grenoble-Alpes and CNRS, Laboratoire de Physique et Modélisation des Milieux Condensés, UMR 5493, Maison des Magistères, BP 166, F-38042 Grenoble, France Laboratoire de Physique, CNRS UMR 5672, Ecole Normale Supérieure de Lyon, Université de Lyon, 46 Allée d'Italie, Lyon F-69364, France Influence of liquid structure on the thermodynamics of freezing Pierre Ronceray 1, 2, Peter Harrowell 3 Physical Review E 87 (2013) 052313 The ordering transitions of a 2D lattice liquid characterized by a single favoured local structure (FLS) are studied using Monte Carlo simulations. All eight distinct geometries for the FLS are considered and we find a variety of ordering transitions - first order, continuous and multi-step transitions. Using a microcanonical representation of the freezing transition we resolve the dual influence of the local structure on the ordering transition, i.e. via the energy of the crystal and the entropy cost of structure in the liquid. 2. Ecole Normale Supérieure de Paris (ENS Paris), École normale supérieure [ENS] - Paris 3. Faculty of Sciences, A method for calculating spectral statistics based on random-matrix universality with an application to the three-point correlations of the Riemann zeros E. Bogomolny 1, J. P. Keating 2 We illustrate a general method for calculating spectral statistics that combines the universal (Random Matrix Theory limit) and the non-universal (trace-formula-related) contributions by giving a heuristic derivation of the three-point correlation function for the zeros of the Riemann zeta function. The main idea is to construct a generalized Hermitian random matrix ensemble whose mean eigenvalue density coincides with a large but finite portion of the actual density of the spectrum or the Riemann zeros. 
Averaging the random matrix result over remaining oscillatory terms related, in the case of the zeta function, to small primes leads to a formula for the three-point correlation function that is in agreement with results from other heuristic methods. This provides support for these different methods. The advantage of the approach we set out here is that it incorporates the determinantal structure of the Random Matrix limit. 1 : Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS) 2 : School of Mathematics [Bristol] A note on weakly discontinuous dynamical transitions Silvio Franz 1, Giorgio Parisi 2, Federico Ricci-Tersenghi 2, Tommaso Rizzo 2, Pierfrancesco Urbani 1 Journal of Chemical Physics 138 (2013) 064504 We analyze the Mode Coupling discontinuous transition in the limit of vanishing discontinuity, approaching the so-called "$A_3$" point. Under these conditions, structural relaxation and fluctuations appear to have a universal form, independent of the details of the system. The analysis of this limiting case suggests new ways of looking at the Mode Coupling equations in the general case. 2 : Dipartimento di Fisica and INFM A stochastic model of long-range interacting particles Shamik Gupta 1, Thierry Dauxois 2, Stefano Ruffo 2 We introduce a model of long-range interacting particles evolving under a stochastic Monte Carlo dynamics, in which possible increases or decreases in the values of the dynamical variables are accepted with preassigned probabilities. For symmetric increments, the system at long times settles to the Gibbs equilibrium state, while for asymmetric updates, the steady state is out of equilibrium. For the associated Fokker-Planck dynamics in the thermodynamic limit, we compute exactly the phase space distribution in the nonequilibrium steady state, and find that it has a nontrivial form that reduces to the familiar Gibbsian measure in the equilibrium limit. 2 : Laboratoire de Physique de l'ENS Lyon (Phys-ENS) CNRS : UMR5672 – École Normale Supérieure (ENS) - Lyon Anomalous fluctuations of currents in Sinai-type random chains with strongly correlated disorder Gleb Oshanin 1, Alberto Rosso 2, Gregory Schehr 2 We study properties of a random walk in a generalized Sinai model, in which a quenched random potential is a trajectory of a fractional Brownian motion with arbitrary Hurst parameter H, 0 < H < 1, so that the random force field displays strong spatial correlations. In this case, the disorder-averaged mean-square displacement grows in proportion to log^{2/H}(n), n being time. We prove that moments of arbitrary order k of the steady-state current J_L through a finite segment of length L of such a chain decay as L^{-(1-H)}, independently of k, which suggests that despite a logarithmic confinement the average current is much higher than its Fickian counterpart in homogeneous systems. Our results reveal a paradoxical behavior such that, for fixed n and L, the mean square displacement decreases when one varies H from 0 to 1, while the average current increases. This counter-intuitive behavior is explained via an analysis of representative realizations of disorder. 1 : Laboratoire de Physique Théorique de la Matière Condensée (LPTMC) CNRS : UMR7600 – Université Pierre et Marie Curie (UPMC) - Paris VI Mesoscopic fluctuations in the Fermi-liquid regime of the Kondo problem Denis Ullmo 1,*, Dong E. Liu 2, Sébastien Burdin 3, Harold U. Baranger 2 European Physical Journal B: Condensed Matter and Complex Systems, Springer-Verlag, 2013, 86 (8), pp.353. 
<10.1140/epjb/e2013-40418-3> We consider the low temperature regime of the mesoscopic Kondo problem, and in particular the relevance of a Fermi-liquid description of this regime. Mesoscopic fluctuations of both the quasiparticle energy levels and the corresponding wavefunctions are large in this case. These mesoscopic fluctuations make the traditional approach to Fermi-liquids impracticable, as it assumes the existence of a limited number of relevant parameters. We show here how this difficulty can be overcome and discuss the relationship between the resulting Fermi-liquid description "à la Nozières" and the mean field slave fermion approximation. 2. Duke Physics 3. LOMA - Laboratoire Ondes et Matière d'Aquitaine Asymmetric Lévy flights in the presence of absorbing boundaries Clélia de Mulatier 1, 2, Alberto Rosso 1, Gregory Schehr 1 We consider a one dimensional asymmetric random walk whose jumps are independent, identically distributed, and drawn from a distribution \phi(\eta) displaying asymmetric power law tails (i.e. \phi(\eta) \sim c/\eta^{\alpha +1} for large positive jumps and \phi(\eta) \sim c/(\gamma |\eta|^{\alpha +1}) for large negative jumps, with 0 < \alpha < 2). In the absence of boundaries and after a large number of steps n, the probability density function (PDF) of the walker position, x_n, converges to an asymmetric Lévy stable law of stability index \alpha and skewness parameter \beta=(\gamma-1)/(\gamma+1). In particular the right tail of this PDF decays as c n/x_n^{1+\alpha}. Much less is known when the walker is confined, or partially confined, in a region of space. In this paper we first study the case of a walker constrained to move on the positive semi-axis and absorbed once it changes sign. In this case, the persistence exponent \theta_+, which characterizes the algebraic large time decay of the survival probability, can be computed exactly and we show that the tail of the PDF of the walker position decays as c \, n/[(1-\theta_+) \, x_n^{1+\alpha}]. This last result can be generalized to higher dimensions, such as a planar Lévy walker confined in a wedge with absorbing walls. Our results are corroborated by precise numerical simulations. 2 : Département de Modélisation des Systèmes et Structures (DM2S), CEA : DEN/DM2S Belief Propagation Reconstruction for Discrete Tomography Emmanuelle Gouillart 1, Florent Krzakala 2, Marc Mezard 3, Lenka Zdeborová 4 Inverse Problems 29, 3 (2013) 035003 We consider the reconstruction of a two-dimensional discrete image from a set of tomographic measurements corresponding to the Radon projection. Assuming that the image has a structure where neighbouring pixels have a larger probability of taking the same value, we follow a Bayesian approach and introduce a fast message-passing reconstruction algorithm based on belief propagation. For numerical results, we specialize to the case of binary tomography. We test the algorithm on binary synthetic images with different length scales and compare our results against a more usual convex optimization approach. We investigate the reconstruction error as a function of the number of tomographic measurements, corresponding to the number of projection angles. The belief propagation algorithm turns out to be more efficient than the convex-optimization algorithm, both in terms of recovery bounds for noise-free projections, and in terms of reconstruction quality when moderate Gaussian noise is added to the projections. 
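As a rough numerical illustration of the entry "Asymmetric Lévy flights in the presence of absorbing boundaries" above, the following minimal Python sketch (numpy assumed; the Pareto-tailed jump law, the asymmetry parameter, the walker numbers, the starting point and the fitting window are illustrative choices, not those of the paper) estimates the survival probability of a heavy-tailed asymmetric walker absorbed once it changes sign, and extracts an effective persistence exponent from its late-time algebraic decay.

import numpy as np

rng = np.random.default_rng(0)

# Jump distribution with asymmetric power-law tails of index alpha (< 1 here,
# so no finite-mean drift): positive jumps with probability q, negative with 1-q,
# magnitudes drawn from a Pareto law of index alpha.
alpha, gamma_asym = 0.8, 3.0
q = gamma_asym / (1.0 + gamma_asym)

def jumps(n_walkers):
    mag = rng.pareto(alpha, size=n_walkers) + 1.0
    sign = np.where(rng.random(n_walkers) < q, 1.0, -1.0)
    return sign * mag

n_walkers, n_steps, x0 = 200_000, 1_000, 10.0
x = np.full(n_walkers, x0)
alive = np.ones(n_walkers, dtype=bool)
survival = np.empty(n_steps)

for step in range(n_steps):
    x[alive] += jumps(alive.sum())
    alive &= (x > 0.0)            # absorbed once the walker changes sign
    survival[step] = alive.mean()

# The survival probability is expected to decay algebraically, S(n) ~ n^{-theta_+};
# estimate an effective theta_+ from the slope of log S versus log n at late times.
n = np.arange(1, n_steps + 1)
late = n > 100
theta_plus = -np.polyfit(np.log(n[late]), np.log(survival[late]), 1)[0]
print(f"estimated persistence exponent theta_+ ~ {theta_plus:.2f}")

The fitted exponent is only indicative: the paper computes theta_+ exactly, while this sketch merely shows how the algebraic decay can be probed by direct simulation.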
1 : Surface du Verre et Interfaces (SVI) CNRS : UMR125 2 : Laboratoire de Physico-Chimie Théorique (LPCT) 4 : Institut de Physique Théorique (ex SPhT) (IPHT) Calculation of mean spectral density for statistically uniform tree-like random models E. Bogomolny 1, O. Giraud 1 For random matrices with tree-like structure there exists a recursive relation for the local Green functions whose solution permits one to find directly many important quantities in the limit of infinite matrix dimensions. The purpose of this note is to investigate and compare expressions for the spectral density of random regular graphs, based on easy approximations for real solutions of the recursive relation valid for trees with large coordination number. The obtained formulas are in good agreement with the results of numerical calculations even for small coordination number. Coherent topological defect dynamics and collective modes in superconductors and electronic crystals Dragan Mihailovic 1, 2, Tomaz Mertelj 1, Viktor V Kabanov 1, Serguei Brazovskii 3 Journal of Physics: Condensed Matter 25 (2013) 404206 The control of condensed matter systems out of equilibrium by laser pulses allows us to investigate the system trajectories through symmetry-breaking phase transitions. Thus the evolution of both collective modes and single particle excitations can be followed through diverse phase transitions with femtosecond resolution. Here we present experimental observations of the order parameter trajectory in the normal-superconductor transition and charge-density wave ordering transitions. Of particular interest is the coherent evolution of topological defects forming during the transition via the Kibble-Zurek mechanism, which appears to be measurable in optical pump-probe experiments. Experiments on CDW systems reveal some new phenomena, such as coherent oscillations of the order parameter, the creation and emission of dispersive amplitudon modes upon the annihilation of topological defects, and mixing with weakly coupled finite-frequency (massive) bosons. 1 : Jozef Stefan Institute 2 : CENN Nanocenter Conformal Field Theory of Critical Casimir Interactions in 2D G. Bimonte 1, T. Emig 2, M. Kardar 3 Europhysics Letters 104 (2013) 21001 Thermal fluctuations of a critical system induce long-ranged Casimir forces between objects that couple to the underlying field. For two dimensional (2D) conformal field theories (CFT) we derive an exact result for the Casimir interaction between two objects of arbitrary shape, in terms of (1) the free energy of a circular ring whose radii are determined by the mutual capacitance of two conductors with the objects' shape; and (2) a purely geometric energy that is proportional to the conformal charge of the CFT, but otherwise super-universal in that it depends only on the shapes and is independent of boundary conditions and other details. 1 : Istituto Nazionale di Fisica Nucleare, Sezione di Napoli (INFN, Sezione di Napoli) 3 : Department of Physics Connectivities of Potts Fortuin-Kasteleyn clusters and time-like Liouville correlator Marco Picco 1, Raoul Santachiara 2, Jacopo Viti 3, Gesualdo Delfino 4 Nuclear Physics B 875, 3 (2013) 719 Recently, two of us argued that the probability that an FK cluster in the Q-state Potts model connects three given points is related to the time-like Liouville three-point correlation function. 
Moreover, they predicted that the FK three-point connectivity has a prefactor which unveils the effects of a discrete symmetry, reminiscent of the S_Q permutation symmetry of the Q=2,3,4 Potts model. Their theoretical prediction has been checked numerically for the case of percolation, corresponding to Q=1. We introduce the time-like Liouville correlator as the only consistent analytic continuation of the minimal model structure constants and present strong numerical tests of the relation between the time-like Liouville correlator and percolative properties of the FK clusters for real values of Q. 1 : Laboratoire de Physique Théorique et Hautes Energies (LPTHE) CNRS : UMR7589 – Université Pierre et Marie Curie (UPMC) - Paris VI – Université Paris VII - Paris Diderot 3 : Laboratoire de Physique Théorique de l'ENS (LPTENS) CNRS : UMR8549 – Université Pierre et Marie Curie (UPMC) - Paris VI – École normale supérieure [ENS] - Paris 4 : Scuola Internazionale Superiore di Studi Avanzati / International School for Advanced Studies (SISSA / ISAS) Counter-ion density profile around charged cylinders: the strong-coupling needle limit Emmanuel Trizac 1, Gabriel Tellez 2, Juan Pablo Mallarino 2 Journal of Physical Chemistry B 117, 42 (2013) 12702-12716 Charged rod-like polymers are not able to bind all their neutralizing counter-ions: a fraction of them evaporates while the others are said to be condensed. We study here counter-ion condensation and its ramifications, both numerically by means of Monte Carlo simulations employing a previously introduced powerful logarithmic sampling of radial coordinates, and analytically, with special emphasis on the strong-coupling regime. We focus on the thin rod, or needle limit, that is naturally reached under strong coulombic couplings, where the typical inter-particle spacing $a'$ along the rod is much larger than its radius R. This regime is complementary and opposite to the simpler thick rod case where $a'\ll R$. We show that due account of counter-ion evaporation, a universal phenomenon in the sense that it occurs in the same guise for both weakly and strongly coupled systems, allows one to obtain excellent agreement between the numerical simulations and the strong-coupling calculations. 2 : Departamento de Fisica De la frustration et du désordre dans les chaînes et les échelles de spins quantiques Arthur Lavarelo 1 Université Paris Sud - Paris XI (19/07/2013), Guillaume Roux (Dir.) In quantum spin systems, frustration and low dimensionality generate quantum fluctuations and give rise to exotic phases. This thesis studies a spin-ladder model with frustrating couplings along the legs, motivated by experiments on the cuprate BiCu$_2$PO$_6$. We first present an original variational method to describe the low-energy excitations of a single frustrated chain. The phase diagram of two coupled chains is then established using numerical methods. The model exhibits a quantum phase transition between a dimerized phase and a resonating valence-bond (RVB) phase. The physics of the RVB phase, and in particular the onset of incommensurability, is studied numerically and by a mean-field treatment. We then study the effects of non-magnetic impurities on the magnetization curve and the Curie law at low temperature. 
These magnetic properties are first discussed at zero temperature using probabilistic arguments. An effective low-energy model is then derived within linear-response theory and accounts for the magnetic properties at finite temperature. Finally, we study the effect of bond disorder on a single frustrated chain. The variational method, introduced in the disorder-free case, provides a weak-disorder picture of the instability of the dimerized phase, which consists in the formation of Imry-Ma domains delimited by localized spinons. This result is finally discussed in the light of strong-disorder real-space renormalization. Demonstration of Angle Dependent Casimir Force Between Corrugations A. A. Banishev 1, J. Wagner 1, T. Emig 2, R. Zandi 1, U. Mohideen 1 The normal Casimir force between a sinusoidally corrugated gold-coated plate and a sphere was measured at various angles between the corrugations using an atomic force microscope. A strong dependence on the orientation angle of the corrugation is found. The measured forces were found to deviate from the proximity force approximation and are in agreement with the theory based on the gradient expansion including correlation effects of geometry and material properties. We analyze the role of temperature. The obtained results open new opportunities for control of the Casimir effect in micromechanical systems. 1 : Department of Physics and Astronomy Distribution of the ratio of consecutive level spacings in random matrix ensembles Y. Y. Atas 1, E. Bogomolny 1, O. Giraud 1, G. Roux 1 We derive expressions for the probability distribution of the ratio of two consecutive level spacings for the classical ensembles of random matrices. This ratio distribution was recently introduced to study spectral properties of many-body problems, as, contrary to the standard level spacing distributions, it does not depend on the local density of states. Our Wigner-like surmises are shown to be very accurate when compared to numerics and exact calculations in the large matrix size limit. Quantitative improvements are found through a polynomial expansion. Examples from a quantum many-body lattice model and from zeros of the Riemann zeta function are presented. Dynamical tunneling with ultracold atoms in magnetic microtraps Martin Lenz1,2, Sebastian Wüster1,3, Christopher J. Vale1,4, Norman R. Heckenberg1, Halina Rubinsztein-Dunlop1, C. A. Holmes1, G. J. Milburn1, and Matthew J. Davis1 Phys. Rev. A 88, 013635 (2013) The study of dynamical tunneling in a periodically driven anharmonic potential probes the quantum-classical transition via the experimental control of the effective Planck's constant for the system. In this paper we consider the prospects for observing dynamical tunneling with ultracold atoms in magnetic microtraps on atom chips. We outline the driven anharmonic potentials that are possible using standard magnetic traps and find the Floquet spectrum for one of these as a function of the potential strength, modulation, and effective Planck's constant. We develop an integrable approximation to the nonintegrable Hamiltonian and find that it can explain the behavior of the tunneling rate as a function of the effective Planck's constant in the regular region of parameter space. In the chaotic region we compare our results with the predictions of models that describe chaos-assisted tunneling. 
Finally, we examine the practicality of performing these experiments in the laboratory with Bose-Einstein condensates. 1 The University of Queensland, School of Mathematics and Physics, Brisbane, Queensland 4072, Australia 2 University Paris Sud, CNRS, LPTMS, UMR 8626, Orsay 91405, France 3 Max Planck Institute for the Physics of Complex Systems, Nöthnitzer Strasse 38, 01187 Dresden, Germany 4 ARC Centre of Excellence for Quantum-Atom Optics and Centre for Atom Optics and Ultrafast Spectroscopy, Swinburne University of Technology, Melbourne, Victoria 3122, Australia Dynamics of a tagged monomer: Effects of elastic pinning and harmonic absorption Shamik Gupta 1, Alberto Rosso 1, Christophe Texier 1, 2 We study the dynamics of a tagged monomer of a Rouse polymer for different initial configurations. In the case of free evolution, the monomer displays subdiffusive behavior with strong memory of the initial state. In the presence of either elastic pinning or harmonic absorption, we show that the steady state is independent of the initial condition, which however strongly affects the transient regime, resulting in non-monotonic behavior and power-law relaxation with varying exponents. 2 : Laboratoire de Physique des Solides (LPS) Edge usage, motifs and regulatory logic for cell cycling genetic networks M. Zagorski 1, A. Krzywicki 2, O. C. Martin 3, 4 The cell cycle is a tightly controlled process, yet its underlying genetic network shows marked differences across species. Which of the associated structural features follow solely from the ability to impose the appropriate gene expression patterns? We tackle this question in silico by examining the ensemble of all regulatory networks which satisfy the constraint of producing a given sequence of gene expressions. We focus on three cell cycle profiles coming from baker's yeast, fission yeast and mammals. First, we show that the networks in each of the ensembles rely on just a few interactions that are repeatedly reused as building blocks. Second, we find an enrichment in network motifs that is similar in the two yeast cell cycle systems investigated. These motifs do not have autonomous functions, but nevertheless they reveal a regulatory logic for cell cycling based on a feed-forward cascade of activating interactions. 1 : Marian Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Center 2 : Laboratoire de Physique Théorique d'Orsay (LPT) 4 : Génétique Végétale (GV) CNRS : UMR8120 – Institut national de la recherche agronomique (INRA) : UMR0320 – Université Paris XI - Paris Sud – AgroParisTech Effect of Partial Absorption on Diffusion with Resetting Justin Whitehouse 1, Martin R. Evans 1, Satya N. Majumdar 2 The effect of partial absorption on a diffusive particle which stochastically resets its position with a finite rate $r$ is considered. The particle is absorbed by a target at the origin with absorption 'velocity' $a$; as the velocity $a$ approaches $\infty$ the absorption property of the target approaches that of a perfectly-absorbing target. The effect of partial absorption on first-passage time problems is studied; in particular, it is shown that the mean time to absorption (MTA) is increased by an additive term proportional to $1/a$. The results are extended to multiparticle systems where independent searchers, initially uniformly distributed with a given density, look for a single immobile target. 
It is found that the average survival probability $P^{av}$ is modified by a multiplicative factor which is a function of $1/a$, whereas the decay rate of the typical survival probability $P^{typ}$ is decreased by an additive term proportional to $1/a$. 1 : SUPA, School of Physics, University of Edinburgh Effect of winding edge currents Stephane Ouvry 1, Leonid Pastur 2, Andrey Yanovsky 2 Physics Letters A 377 (2013) 804-809 We discuss persistent currents for particles with internal degrees of freedom. The currents arise because of winding properties essential for the chaotic motion of the particles in a confined geometry. The currents do not change the particle concentrations or thermodynamics, similar to the skipping orbits in a magnetic field. 2 : Low Temperature Physics Institute, Kharkiv National Academy of Sciences of Ukraine Einstein relation in systems with anomalous diffusion G. Gradenigo a,b, A. Sarracino a, D. Villamaina c, A. Vulpiani a Acta Physica Polonica B 44 (2013) 899-912 We discuss the role of non-equilibrium conditions in the context of anomalous dynamics. We study in detail the response properties in different models, featuring subdiffusion and superdiffusion: in such models, the presence of currents induces a violation of the Einstein relation. We show how in some of them it is possible to find a correlation function proportional to the linear response; in other words, we have a generalized fluctuation-response relation. a. CNR-ISC and Dipartimento di Fisica, Università Sapienza p.le A. Moro 2, 00185, Roma, Italy b. Institut Physique Théorique (IPhT), CEA Saclay and CNRS URA 2306 91191 Gif Sur Yvette, France c. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), Emergence of clones in sexual populations Richard A. Neher 1, Marija Vucelja 2, Marc Mézard 3, Boris I. Shraiman 4 In sexual populations, recombination reshuffles genetic variation and produces novel combinations of existing alleles, while selection amplifies the fittest genotypes in the population. If recombination is more rapid than selection, populations consist of a diverse mixture of many genotypes, as is observed in many populations. In the opposite regime, which is realized for example in the facultatively sexual populations that outcross in only a fraction of reproductive cycles, selection can amplify individual genotypes into large clones. Such clones emerge when the fitness advantage of some of the genotypes is large enough that they grow to a significant fraction of the population despite being broken down by recombination. The occurrence of this "clonal condensation" depends, in addition to the outcrossing rate, on the heritability of fitness. Clonal condensation leads to a strong genetic heterogeneity of the population which is not adequately described by traditional population genetics measures, such as Linkage Disequilibrium. Here we point out the similarity between clonal condensation and the freezing transition in the Random Energy Model of spin glasses. Guided by this analogy we explicitly calculate the probability, Y, that two individuals are genetically identical as a function of the key parameters of the model. While Y is the analog of the spin-glass order parameter, it is also closely related to the rate of coalescence in population genetics: Two individuals that are part of the same clone have a recent common ancestor. 
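To make the order parameter Y of the entry "Emergence of clones in sexual populations" above concrete, here is a minimal toy simulation in Python (numpy assumed; the population size, number of loci, additive fitness model, selection strength and outcrossing rate are illustrative choices and not the model solved in the paper). It evolves a facultatively outcrossing population under selection and measures Y as the probability that two randomly drawn individuals carry identical genotypes.

import numpy as np

rng = np.random.default_rng(0)

N, L = 1000, 20          # population size, number of loci
r = 0.05                 # outcrossing probability per reproduction event
sigma = 0.5              # scale of additive fitness effects (strong selection)
effects = rng.normal(0.0, sigma, size=L)

pop = rng.integers(0, 2, size=(N, L))   # binary genotypes

def step(pop):
    # selection: parents chosen with probability proportional to exp(fitness)
    fitness = pop @ effects
    w = np.exp(fitness - fitness.max())
    p = w / w.sum()
    parents = rng.choice(N, size=N, p=p)
    children = pop[parents].copy()
    # a fraction r of offspring are produced sexually: recombine with a second parent
    sexual = rng.random(N) < r
    mates = rng.choice(N, size=N, p=p)
    mask = rng.integers(0, 2, size=(N, L)).astype(bool)
    children[sexual] = np.where(mask[sexual], children[sexual], pop[mates][sexual])
    return children

for generation in range(300):
    pop = step(pop)

# Y: probability that two randomly chosen individuals are genetically identical
_, counts = np.unique(pop, axis=0, return_counts=True)
freqs = counts / N
Y = np.sum(freqs ** 2)
print(f"Y = {Y:.3f}")

With rare outcrossing and heritable fitness differences the population typically condenses onto a few large clones and Y stays of order one, whereas frequent recombination keeps Y small; the paper computes this dependence analytically in a Random-Energy-Model-like framework.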
1 : Max Planck Institute for Developmental Biology Max Planck Institute for Developmental Biology 2 : Courant Institute for Mathematical Sciences 4 : Kavli Institute for Theoretical Physics and Department of Physics Entanglement production in non-ideal cavities and optimal opacity Dario Villamaina 1, Pierpaolo Vivo 1 Physical Review B (Condensed Matter) 88 (2013) 041301 We compute analytically the distributions of concurrence $\bm{\mathcal{C}}$ and squared norm $\bm{\mathcal{N}}$ for the production of electronic entanglement in a chaotic quantum dot. The dot is connected to the external world via one ideal and one partially transparent lead, characterized by the opacity $\gamma$. The average concurrence increases with $\gamma$ while the average squared norm of the entangled state decreases, making it less likely to be detected. When a minimal detectable norm $\bm{\mathcal{N}}_0$ is required, the average concurrence is maximal for an optimal value of the opacity $\gamma^\star(\bm{\mathcal{N}}_0)$ which is explicitly computed as a function of $\bm{\mathcal{N}}_0$. If $\bm{\mathcal{N}}_0$ is larger than the critical value $\bm{\mathcal{N}}_0^\star\simeq 0.3693\dots$, the average entanglement production is maximal for the completely ideal case, a direct consequence of an interesting bifurcation effect. Ergodic vs diffusive decoherence in mesoscopic devices Thibaut Capron 1, 2, Christophe Texier 3, 4, Gilles Montambaux 4, Dominique Mailly 5, Andreas D. Wieck 6, Christopher Bäuerle 1, Laurent Saminadayar 1 Physical Review B (Condensed Matter) 87, 4 (2013) 041307 We report on the measurement of phase coherence length in a high mobility two-dimensional electron gas patterned in two different geometries, a wire and a ring. The phase coherence length is extracted both from the weak localization correction in long wires and from the amplitude of the Aharonov-Bohm oscillations in a single ring, in a low temperature regime when decoherence is dominated by electronic interactions. We show that these two measurements lead to different phase coherence lengths, namely $L_{\Phi}^\mathrm{wire}\propto T^{-1/3}$ and $L_{\Phi}^\mathrm{ring}\propto T^{-1/2}$. This difference reflects the fact that the electrons winding around the ring necessarily explore the whole sample (ergodic trajectories), while in a long wire the electrons lose their phase coherence before reaching the edges of the sample (diffusive regime). 1 : Institut Néel (NEEL) CNRS : UPR2940 – Université Joseph Fourier - Grenoble I – Institut National Polytechnique de Grenoble (INPG) 2 : Université de Grenoble Université Joseph Fourier - Grenoble I – Université Stendhal - Grenoble III – Université Pierre-Mendès-France - Grenoble II 5 : Laboratoire de photonique et de nanostructures (LPN) 6 : Lehrstuhl für Angewandte Festkörperphysik Lehrstuhl für Angewandte Festkörperphysik Exact distributions of the number of distinct and common sites visited by N independent random walkers Anupam Kundu 1, Satya N. Majumdar 1, Gregory Schehr 1 We study the number of distinct sites S_N(t) and common sites W_N(t) visited by N independent one dimensional random walkers, all starting at the origin, after t time steps. We show that these two random variables can be mapped onto extreme value quantities associated to N independent random walkers. Using this mapping, we compute exactly their probability distributions P_N^d(S,t) and P_N^d(W,t) for any value of N in the limit of large time t, where the random walkers can be described by Brownian motions. 
In the large N limit one finds that S_N(t)/\sqrt{t} \propto 2 \sqrt{\log N} + \widetilde{s}/(2 \sqrt{\log N}) and W_N(t)/\sqrt{t} \propto \widetilde{w}/N where \widetilde{s} and \widetilde{w} are random variables whose probability density functions (pdfs) are computed exactly and are found to be nontrivial. We verify our results through direct numerical simulations. Exact results for the spectra of interacting bosons and fermions on the lowest Landau level Stefan Mashkevich 1, 2, Sergey Matveenko 3, Stéphane Ouvry 4 Journal of Statistical Mechanics (2013) P02013 A system of N interacting bosons or fermions in a two-dimensional harmonic potential (or, equivalently, magnetic field) whose states are projected onto the lowest Landau level is considered. Generic expressions are derived for matrix elements of any interaction, in the basis of angular momentum eigenstates. For the fermion "ground state" (ν=1 Laughlin state), this makes it possible to exactly calculate its energy all the way up to the mesoscopic regime N ~ 1000. It is also shown that for N = 3 and Coulomb interaction, several rational low-lying values of energy exist, for bosons and fermions alike. 1 : Schrodinger 2 : Bogolyubov Institute for Theoretical Physics 3 : Landau Institute for Theoretical Physics Exact Statistics of the Gap and Time Interval Between the First Two Maxima of Random Walks Satya N. Majumdar 1, Philippe Mounaix 2, Gregory Schehr 1 We investigate the statistics of the gap, G_n, between the two rightmost positions of a Markovian one-dimensional random walker (RW) after n time steps and of the duration, L_n, which separates the occurrence of these two extremal positions. The distribution of the jumps \eta_i of the RW, f(\eta), is symmetric and its Fourier transform has the small k behavior 1-\hat{f}(k)\sim| k|^\mu with 0 < \mu \leq 2. We compute the joint probability density function (pdf) P_n(g,l) of G_n and L_n and show that, when n \to \infty, it approaches a limiting pdf p(g,l). The corresponding marginal pdf of the gap, p_{\rm gap}(g), is found to behave like p_{\rm gap}(g) \sim g^{-1 - \mu} for g \gg 1 and 0<\mu < 2. We show that the limiting marginal distribution of L_n, p_{\rm time}(l), has an algebraic tail p_{\rm time}(l) \sim l^{-\gamma(\mu)} for l \gg 1 with \gamma(1<\mu \leq 2) = 1 + 1/\mu, and \gamma(0<\mu<1) = 2. For l, g \gg 1 with fixed l g^{-\mu}, p(g,l) takes the scaling form p(g,l) \sim g^{-1-2\mu} \tilde p_\mu(l g^{-\mu}) where \tilde p_\mu(y) is a (\mu-dependent) scaling function. We also present numerical simulations which verify our analytic results. 2 : Centre de Physique Théorique (CPHT) CNRS : UMR7644 – Polytechnique - X Exact theory of dense amorphous hard spheres in high dimension. II. The high density regime and the Gardner transition Jorge Kurchan 1, Giorgio Parisi 2, Pierfrancesco Urbani 3, Francesco Zamponi 4 The Journal of Physical Chemistry B 117 (2013) 12979-12994 We consider the theory of the glass phase and jamming of hard spheres in the large space dimension limit. Building upon the exact expression for the free-energy functional obtained previously, we find that the Random First Order Transition (RFOT) scenario is realized here with two thermodynamic transitions: the usual Kauzmann point associated with the entropy crisis, and a further transition at higher pressures in which a glassy structure of micro-states is developed within each amorphous state. 
This kind of glass-glass transition into a phase dominating the higher densities was described years ago by Elisabeth Gardner, and may well be a generic feature of RFOT. Microstates that are small excitations of an amorphous matrix -- separated by low entropic or energetic barriers -- thus emerge naturally, and modify the high pressure (or low temperature) limit of the thermodynamic functions. 1 : Physique et mécanique des milieux hétérogènes (PMMH) CNRS : UMR7636 – Université Pierre et Marie Curie (UPMC) - Paris VI – Université Paris VII - Paris Diderot – ESPCI ParisTech Experimental Observation of Localized Modes in a Dielectric Square Resonator S. Bittner 1, E. Bogomolny 2, B. Dietz 3, M. Miski-Oglu 3, A. Richter 4 We investigated the frequency spectra and field distributions of a dielectric square resonator in a microwave experiment. Since such systems cannot be treated analytically, experimental studies of their properties are indispensable. The momentum representation of the measured field distributions shows that all resonant modes are localized on specific classical tori of the square billiard. Based on these observations a semiclassical model was developed. It shows excellent agreement with all but a single class of measured field distributions that will be treated separately. 1 : Institut für Kernphysik 4 : Institute of Environmental Physics Finite-size critical fluctuations in microscopic models of mode-coupling theory Silvio Franz 1, Mauro Sellitto 2 Facilitated spin models on random graphs provide an ideal microscopic realization of the mode-coupling theory of supercooled liquids: they undergo a purely dynamic glass transition with no thermodynamic singularity. In this paper we study the fluctuations of dynamical heterogeneity and their finite-size scaling properties in the beta-relaxation regime of such microscopic spin models. We compare the critical fluctuation behavior for two distinct measures of correlations with the results of a recently proposed field theoretical description based on quasi-equilibrium ideas. We find that the theoretical predictions perfectly fit the numerical simulation data once the relevant order parameter is identified with the persistence function of the spins. Fluctuations quantiques et effets non-linéaires dans les condensats de Bose-Einstein : des ondes de choc dispersives au rayonnement de Hawking acoustique Pierre-Élie Larré 1 Université Paris Sud - Paris XI (20/09/2013), Nicolas Pavloff (Dir.) This thesis is devoted to the study of the analogue of Hawking radiation in Bose-Einstein condensates. The first chapter presents new configurations of experimental interest that make it possible to realize the acoustic equivalent of a gravitational black hole in the flow of a one-dimensional atomic condensate. In each case we give an analytical description of the flow profile, of the associated quantum fluctuations, and of the spectrum of the Hawking radiation. The analysis of the two-body density correlations in position and momentum space reveals the emergence of signals revealing the Hawking effect in our systems. By proving a sum rule satisfied by the connected two-body density matrix, we show that the long-range density correlations must be associated with diagonal modifications of the two-body density matrix when the condensate flow exhibits an acoustic horizon. 
Motivated by recent experimental studies of wave profiles generated in polariton condensates in semiconductor microcavities, in a second chapter we analyze the superfluid and dissipative characteristics of the flow of a one-dimensional, incoherently pumped polariton condensate past a localized obstacle. We examine the response of the condensate in the limit of weak perturbations and, by means of Whitham theory, in the nonlinear regime. We identify a time-dependent regime separating two types of stationary, dissipative flow: one mainly viscous at low velocity, and another characterized by Cherenkov radiation of density waves at high velocity. We finally present polarization effects obtained by including the polariton spin in the description of the condensate and show in the third chapter that similar effects in the presence of an acoustic horizon could be used to demonstrate Hawking radiation experimentally in polariton condensates. Flux-based classification of reactions reveals a functional bow-tie organization of complex metabolic networks Shalini Singh 1, 2, Areejit Samal 1, 3, 4, Varun Giri 1, Sandeep Krishna 5, Nandula Raghuram 6, Sanjay Jain 1, 7, 8 Unraveling the structure of complex biological networks and relating it to their functional role is an important task in systems biology. Here we attempt to characterize the functional organization of the large-scale metabolic networks of three microorganisms. We apply flux balance analysis to study the optimal growth states of these organisms in different environments. By investigating the differential usage of reactions across flux patterns for different environments, we observe a striking bimodal distribution in the activity of reactions. Motivated by this, we propose a simple algorithm to decompose the metabolic network into three sub-networks. It turns out that our reaction classifier, which is blind to the biochemical role of pathways, leads to three functionally relevant sub-networks that correspond to input, output and intermediate parts of the metabolic network with distinct structural characteristics. Our decomposition method unveils a functional bow-tie organization of metabolic networks that is different from the bow-tie structure determined by graph-theoretic methods that do not incorporate functionality. 1 : Department of Physics and Astrophysics 2 : Department of Genetics 3 : Max Planck Institute for Mathematics in the Sciences (MPI-MIS) 5 : National Centre for Biological Sciences UAS-GKVK Campus 6 : School of Biotechnology GGS Indraprastha University 7 : Jawaharlal Nehru Centre for Advanced Scientific Research 8 : Santa Fe Institute Fractal globule as a molecular machine V. A. Avetisov 1, 2, V. A. Ivanov 3, D. A. Meshkov 1, S. K. Nechaev 4, 5 JETP Letters 98, 4 (2013) 242-246 The relaxation of an elastic network, constructed from a contact map of a fractal (crumpled) polymer globule, is investigated. We found that: i) the slowest mode of the network is separated from the rest of the spectrum by a wide gap, and ii) the network quickly relaxes to a low-dimensional (one-dimensional, in our demonstration) manifold spanned by the slowest degrees of freedom with a large basin of attraction, and then slowly approaches equilibrium without escaping this manifold. 
In these dynamic properties, the fractal-globule elastic network is similar to real biological molecular machines, such as myosin. We have demonstrated that unfolding of a fractal globule can be described as a cascade of equilibrium phase transitions in a hierarchical system. Unfolding manifests itself in a sequential loss of stability of hierarchical levels with the temperature change. 1 : The Semenov Institute of Chemical Physics 2 : National Research University Higher School of Economics 3 : Moscow State University 5 : P. N. Lebedev Physical Institute Russian Academy of Science Freezing Transitions in Liquids Characterized by a Favoured Local Structure 2 : Ecole Normale Supérieure de Paris (ENS Paris) 3 : Faculty of Sciences From elongated spanning trees to vicious random walks A. Gorsky 1, S. Nechaev 2, 3, V. S. Poghosyan 4, V. B. Priezzhev 5 Nuclear Physics B 870 (2013) 55-77 Given a spanning forest on a large square lattice, we consider by combinatorial methods a correlation function of $k$ paths ($k$ is odd) along branches of trees or, equivalently, $k$ loop-erased random walks. Starting and ending points of the paths are grouped in the fashion of a $k$-leg watermelon. For large distance $r$ between groups of starting and ending points, the ratio of the number of watermelon configurations to the total number of spanning trees behaves as $r^{-\nu} \log r$ with $\nu = (k^2-1)/2$. Considering the spanning forest stretched along the meridian of this watermelon, we see that the two-dimensional $k$-leg loop-erased watermelon exponent $\nu$ is converted into the scaling exponent for the reunion probability (at a given point) of $k$ (1+1)-dimensional vicious walkers, $\tilde{\nu} = k^2/2$. We also formulate conjectures about a possible relation to integrable systems. 1 : ITEP 3 : P.N. Lebedev Physical Institute of the Russian Academy of Sciences 4 : Institute for Informatics and Automation Problems NAS of Armenia, 5 : Bogolubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research Glassy Critical Points and Random Field Ising Model Silvio Franz 1, Giorgio Parisi 2, Federico Ricci-Tersenghi 2 Journal of Statistical Mechanics (2013) L02001 We consider the critical properties of points of continuous glass transition as one can find in liquids in the presence of constraints or in liquids in porous media. Through a one-loop analysis we show that the critical Replica Field Theory describing these points can be mapped onto the $\phi^4$-Random Field Ising Model. We confirm our analysis by studying the finite-size scaling of the $p$-spin model defined on a sparse random graph, where a fraction of variables is frozen such that the phase transition is of a continuous kind. Hawking radiation in a two-component Bose-Einstein condensate P. -É. Larré 1, N. Pavloff 1 EPL 103 (2013) 60001 We consider a simple realization of an event horizon in the flow of a one-dimensional two-component Bose-Einstein condensate. Such a condensate has two types of quasiparticles; in the system we study, one corresponds to density fluctuations and the other to polarization fluctuations. We treat the case where a horizon occurs only for one type of quasiparticles (the polarization ones). We study the one- and two-body signal associated with the analog of spontaneous Hawking radiation and demonstrate by explicit computation that it consists only in the emission of polarization waves. 
We discuss the experimental consequences of the present results in the domain of atomic Bose-Einstein condensates and also for the physics of exciton-polaritons in semiconductor microcavities. Invariant $\beta$-Wishart ensembles, crossover densities and asymptotic corrections to the Marchenko-Pastur law Romain Allez 1, Jean-Philippe Bouchaud 2, Satya N. Majumdar 3, Pierpaolo Vivo 3 Journal of Physics A: Mathematical and Theoretical 46 (2013) 015001 We construct a diffusive matrix model for the $\beta$-Wishart (or Laguerre) ensemble for general continuous $\beta\in [0,2]$, which preserves invariance under the orthogonal/unitary group transformation. Scaling the Dyson index $\beta$ with the largest size $M$ of the data matrix as $\beta=2c/M$ (with $c$ a fixed positive constant), we obtain a family of spectral densities parametrized by $c$. As $c$ is varied, this density interpolates continuously between the Marchenko-Pastur ($c\to \infty$ limit) and the Gamma law ($c\to 0$ limit). Analyzing the full Stieltjes transform (resolvent) equation, we obtain as a byproduct the correction to the Marchenko-Pastur density in the bulk up to order 1/M for all $\beta$ and up to order $1/M^2$ for the particular cases $\beta=1,2$. 1 : CEntre de REcherches en MAthématiques de la DEcision (CEREMADE) CNRS : UMR7534 – Université Paris IX - Paris Dauphine 2 : Science et Finance Inverse inference in the asymmetric Ising model Jason Sakellariou 1 Université Paris Sud - Paris XI (22/02/2013), Marc Mézard (Dir.) Recent experimental techniques in biology have made possible the acquisition of overwhelming amounts of data concerning complex biological networks, such as neural networks, gene regulation networks and protein-protein interaction networks. These techniques are able to record states of individual components of such networks (neurons, genes, proteins) for a large number of configurations. However, the most biologically relevant information lies in their connectivity and in the way their components interact, information that these techniques are not able to record directly. The aim of this thesis is to study statistical methods for inferring information about the connectivity of complex networks starting from experimental data. The subject is approached from a statistical physics point of view, drawing from the arsenal of methods developed in the study of spin glasses. Spin glasses are prototypes of networks of discrete variables interacting in a complex way and are widely used to model biological networks. After an introduction of the models used and a discussion on the biological motivation of the thesis, all known methods of network inference are introduced and analysed from the point of view of their performance. Then, in the third part of the thesis, a new method is proposed which relies on the observation that the interactions in biology are not necessarily symmetric (i.e. the interaction from node A to node B is not the same as the one from B to A). It is shown that this assumption leads to methods that are both exact and efficient. This means that the interactions can be computed exactly, given a sufficient amount of data, and in a reasonable amount of time. This is an important original contribution since no other method is known to be both exact and efficient. Joint probability densities of level spacing ratios in random matrices Y. Y. Atas 1, E. Bogomolny 1, O. Giraud 1, P. Vivo 1, E. 
Vivo 2 We calculate analytically, for finite-size matrices, joint probability densities of ratios of level spacings in ensembles of random matrices characterized by their associated confining potential. We focus on the ratios of two spacings between three consecutive real eigenvalues, as well as certain generalizations such as the overlapping ratios. The resulting formulas are further analyzed in detail in two specific cases: the beta-Hermite and the beta-Laguerre cases, for which we offer explicit calculations for small N. The analytical results are in excellent agreement with numerical simulations of usual random matrix ensembles, and with the level statistics of a quantum many-body lattice model and zeros of the Riemann zeta function. 2 : Departamento de Matemáticas and Grupo Interdisciplinar de Sistemas Complejos (GISC) Large deviations of the top eigenvalue of large Cauchy random matrices Satya N. Majumdar 1, Gregory Schehr 1, Dario Villamaina 1, Pierpaolo Vivo 1 We compute analytically the probability density function (pdf) of the largest eigenvalue $\lambda_{\max}$ in rotationally invariant Cauchy ensembles of $N\times N$ matrices. We consider unitary ($\beta = 2$), orthogonal ($\beta =1$) and symplectic ($\beta=4$) ensembles of such heavy-tailed random matrices. We show that a central non-Gaussian regime for $\lambda_{\max} \sim \mathcal{O}(N)$ is flanked by large deviation tails on both sides which we compute here exactly for any value of $\beta$. By matching these tails with the central regime, we obtain the exact leading asymptotic behaviors of the pdf in the central regime, which generalizes the Tracy-Widom distribution known for Gaussian ensembles, both at small and large arguments and for any $\beta$. Our analytical results are confirmed by numerical simulations. Linear hydrodynamics for driven granular gases M. I. Garcia de Soria 1, P. Maynar 1, E. Trizac 2 We study the dynamics of a granular gas heated by a stochastic thermostat. From a Boltzmann description, we derive the hydrodynamic equations for small perturbations around the stationary state that is reached in the long time limit. Transport coefficients are identified as Green-Kubo formulas, and we obtain explicit expressions for them as a function of the inelasticity and the spatial dimension. 1 : Fisica Teorica, Universidad de Sevilla Localization of spinons in random Majumdar-Ghosh chains Arthur Lavarélo 1, Guillaume Roux 1 We study the effect of disorder on frustrated dimerized spin-1/2 chains at the Majumdar-Ghosh point. Using variational methods and density-matrix renormalization group approaches, we identify two localization mechanisms for spinons, which are the deconfined fractional elementary excitations of these chains. The first one belongs to the Anderson localization class and dominates at the random Majumdar-Ghosh (RMG) point. There, spinons are almost independent, remain gapped, and localize in Lifshitz states whose localization length is analytically obtained. The RMG point then displays a quantum phase transition to a phase of localized spinons at large disorder. The other mechanism is a random confinement mechanism which induces an effective interaction between spinons and brings the chain into a gapless and partially polarized phase for arbitrarily small disorder. 
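Several entries above (the Wigner-like surmises for the distribution of the ratio of consecutive level spacings, and the joint densities of spacing ratios) revolve around the statistic r_n = s_{n+1}/s_n, which does not require unfolding the spectrum. The following short Python check (numpy assumed; matrix size and sample counts are arbitrary) compares <min(r, 1/r)> for GOE matrices and for uncorrelated (Poisson) levels with the values quoted in that literature: the GOE surmise gives 4 - 2*sqrt(3) ≈ 0.536 (large-matrix numerics are slightly lower, around 0.53), while Poisson statistics give 2*ln(2) - 1 ≈ 0.386.

import numpy as np

rng = np.random.default_rng(1)

def spacing_ratios(levels):
    s = np.diff(np.sort(levels))
    return s[1:] / s[:-1]

def goe_levels(n):
    a = rng.normal(size=(n, n))
    h = (a + a.T) / 2.0          # real symmetric matrix drawn from the GOE
    return np.linalg.eigvalsh(h)

n, samples = 200, 200
r_goe = np.concatenate([spacing_ratios(goe_levels(n)) for _ in range(samples)])
r_poisson = np.concatenate([spacing_ratios(rng.uniform(0, n, size=n))
                            for _ in range(samples)])

# <min(r, 1/r)>: around 0.53 for the GOE, around 0.39 for Poisson levels
for name, r in [("GOE", r_goe), ("Poisson", r_poisson)]:
    rtilde = np.minimum(r, 1.0 / r)
    print(f"{name}: <min(r,1/r)> = {rtilde.mean():.3f}")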
Low-dimensional physics of ultracold gases with bound states and the sine-Gordon model Thierry Jolicoeur 1, Evgeni Burovski 2, Giuliano Orso 3 European Physical Journal - Special Topics 217 (2013) 3-12 One-dimensional systems of interacting atoms are an ideal laboratory to study the Kosterlitz-Thouless phase transition. In the renormalization group picture there is essentially a two-parameter phase diagram to explore. We first present how detailed experiments have shown direct evidence for the theoretical treatment of this transition. Then generalization to the case of two-component systems with bound state formation is discussed. Trimer formation in the asymmetric attractive Hubbard model involves this kind of physics in a crucial way. 3 : Matériaux et Phénomènes Quantiques (MPQ) CNRS : FR2437 – Université Paris VII - Paris Diderot Lyapunov exponents, one-dimensional Anderson localisation and products of random matrices Alain Comtet 1, 2, Christophe Texier 1, 3, Yves Tourigny 4 The concept of Lyapunov exponent has long occupied a central place in the theory of Anderson localisation; its interest in this particular context is that it provides a reasonable measure of the localisation length. The Lyapunov exponent also features prominently in the theory of products of random matrices pioneered by Furstenberg. After a brief historical survey, we describe some recent work that exploits the close connections between these topics. We review the known solvable cases of disordered quantum mechanics involving random point scatterers and discuss a new solvable case. Finally, we point out some limitations of the Lyapunov exponent as a means of studying localisation properties. 2 : Unite mixte de service de l'institut Henri Poincaré (UMSIHP) CNRS : UMS839 – Université Pierre et Marie Curie (UPMC) - Paris VI Magnetic responses of randomly depleted spin ladders Arthur Lavarélo 1, Guillaume Roux 1, Nicolas Laflorencie 2 Physical Review B (Condensed Matter) 88, 13 (2013) 134420-134442 The magnetic responses of a spin-1/2 ladder doped with non-magnetic impurities are studied using various methods and including the regime where frustration induces incommensurability. Several improvements are made on the results of the seminal work of Sigrist and Furusaki [J. Phys. Soc. Jpn. 65, 2385 (1996)]. Deviations from the Brillouin magnetic curve due to interactions are also analyzed. First, the magnetic profile around a single impurity and effective interactions between impurities are analyzed within the bond-operator mean-field theory and compared to density-matrix renormalization group calculations. Then, the temperature behavior of the Curie constant is studied in detail. At zero temperature, we give doping-dependent corrections to the results of Sigrist and Furusaki on a general bipartite lattice and compute exactly the distribution of ladder clusters due to chain-breaking effects. Using exact diagonalization and quantum Monte-Carlo methods on the effective model, the temperature dependence of the Curie constant is compared to a random dimer model and a real-space renormalization group scenario. Next, the low part of the magnetic curve corresponding to the contribution of impurities is computed using exact diagonalization. The random dimer model is shown to capture the bulk of the curve, accounting for the deviation from the Brillouin response. At zero temperature, the effective model prediction agrees relatively well with density-matrix renormalization group calculations. 
Finite-temperature effects are displayed within the effective model and for large depleted ladder models using quantum Monte-Carlo simulations. In all, incommensurability does not have a strong qualitative effect on either the magnetic susceptibility or the magnetic curve. Consequences for experiments on the BiCu2PO6 compound and other spin-gapped materials are briefly discussed. 2 : Laboratoire de Physique Théorique - IRSAMC (LPT) CNRS : UMR5152 – Université Paul Sabatier (UPS) - Toulouse III Many-body study of a quantum point contact in the fractional quantum Hall regime at ν=5/2 Paul Soulé 1 Thierry Jolicoeur 1 Philippe Lecheminant 2 Physical Review B (Condensed Matter), American Physical Society, 2013, 88, pp.235107 We study a quantum point contact in the fractional quantum Hall regime at Landau level filling factors 1/3 and 5/2. By using exact diagonalizations in the cylinder geometry we identify the edge modes in the presence of a parabolic confining potential. By changing the sign of the potential we can access both the tunneling through the bulk of the fluid and the tunneling between spatially separated droplets. This geometry is realized in the quantum point contact geometry for two-dimensional electron gases. In the case of the model Moore-Read Pfaffian state at filling factor 5/2 we identify the conformal towers of many-body eigenstates including the non-Abelian sector. By a Monte-Carlo technique we compute the various scaling exponents that characterize the edge modes. In the case of hard-core interactions whose ground states are exact model wavefunctions we find equality of neutral and charged velocities for the Pfaffian state, both bosonic and fermionic. 2. LPTM - Laboratoire de Physique Théorique et Modélisation Material Dependence of the Wire-Particle Casimir Interaction E. Noruzifar 1, P. Rodriguez-Lopez 2, T. Emig 3, R. Zandi 1 Physical Review A 87 (2013) 042504 We study the Casimir interaction between a metallic cylindrical wire and a metallic spherical particle by employing the scattering formalism. At large separations, we derive the asymptotic form of the interaction. In addition, we find the interaction between a metallic wire and an isotropic atom, both in the non-retarded and retarded limits. We identify the conditions under which the asymptotic Casimir interaction does not depend on the material properties of the metallic wire and the particle. Moreover, we compute the exact Casimir interaction between the particle and the wire numerically. We show that there is a complete agreement between the numerics and the asymptotic energies at large separations. For short separations, our numerical results show good agreement with the proximity force approximation. 2 : Departamento de Fisica Aplicada Universidad Complutense Modeling of dynamics of field-induced transformations in charge density waves T. Yi 1,3, N. Kirova 2,4, and S. Brazovskii 1,4 Eur. Phys. J. Special Topics 222, 1035-1046 (2013) We present a modeling of stationary states and their transient dynamics for charge density waves in restricted geometries of realistic junctions under the applied voltage or the passing current. The model takes into account multiple fields in mutual nonlinear interactions: the amplitude and the phase of the charge density wave complex order parameter, distributions of the electric field, the density and the current of normal carriers. 
The results show that stationary states with dislocations are formed after an initial turbulent multi-vortex process. Static dislocations multiply stepwise when the voltage across or the current through the junction exceeds a threshold. The dislocation core forms a charge dipole which concentrates a steep drop of the voltage, thus working as a self-tuned microscopic tunnelling junction. That can give rise to features observed in experiments on the inter-layer tunneling in mesa-junctions. 1 CNRS, LPTMS, UMR 8502, Université Paris-Sud, 91405 Orsay, France 2 CNRS, LPS, UMR 8626, Université Paris-Sud, 91405 Orsay, France 3 Department of Physics, South University of Science and Technology of China, Shenzhen, Guangdong 518055, China 4 International Institute of Physics, 59078-400 Natal, Rio Grande do Norte, Brazil Morphology transition at depinning in a solvable model of interface growth in a random medium Hiroki Ohta 1 Martin-Luc Rosinberg 2 Gilles Tarjus 2 Europhysics Letters, EDP Sciences, 2013, 104, pp.16003 We propose a simple, exactly solvable, model of interface growth in a random medium that is a variant of the zero-temperature random-field Ising model on the Cayley tree. This model is shown to have a phase diagram (critical depinning field versus disorder strength) qualitatively similar to that obtained numerically on the cubic lattice. We then introduce a specifically tailored random graph that allows an exact asymptotic analysis of the height and width of the interface. We characterize the change of morphology of the interface as a function of the disorder strength, a change that is found to take place at a multicritical point along the depinning-transition line. 2. LPTMC - Laboratoire de Physique Théorique de la Matière Condensée Near-extreme statistics of Brownian motion Anthony Perret 1, Alain Comtet 1, 2, Satya N. Majumdar 1, Gregory Schehr 1 We study the statistics of near-extreme events of Brownian motion (BM) on the time interval [0,t]. We focus on the density of states (DOS) near the maximum, $\rho(r,t)$, which is the amount of time spent by the process at a distance r from the maximum. We develop a path integral approach to study functionals of the maximum of BM, which allows us to study the full probability density function (PDF) of $\rho(r,t)$ and obtain an explicit expression for the moments, $\langle [\rho(r,t)]^k \rangle$, for arbitrary integer k. We also study near-extremes of constrained BM, like the Brownian bridge. Finally, we present numerical simulations to check our analytical results. Network function shapes network structure: the case of the Arabidopsis flower organ specification genetic network Adrien Henry ab , Françoise Monéger c , Areejit Samal* ad and Olivier C. Martin* ab Molecular Biosystems 9 (2013) 1726-1735 The reconstruction of many biological networks has allowed detailed studies of their structural properties. Several features exhibited by these networks have been interpreted to be the result of evolutionary dynamics. For instance, the degree distributions may follow from a preferential attachment of new genes to older ones during evolution. Here we argue that even in the absence of any evolutionary dynamics, the presence of atypical features may follow from the fact that the network implements certain functions. To examine this "network function shapes network structure" scenario, we focus on the Arabidopsis genetic network controlling early flower organogenesis, in which gene expression dynamics has been modelled using a Boolean framework.
Specifically, for a system with 15 master genes, the phenotype consists of 10 experimentally determined steady-state expression patterns, considered here as the functional constraints on the network. The space of genetic networks satisfying these constraints is sometimes referred to as the neutral or genotype network. We sample this space using Markov chain Monte Carlo, which allows us to exhibit how the functional (phenotypic) constraints shape the gene network structure. We find that this shaping is strongest for the edge (interaction) usage, with effects that are functionally interpretable. In contrast, higher order features such as degree assortativity and network motifs are hardly shaped by the phenotypic constraints. a. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), b. UMR de Génétique Végétale du Moulon, UMR 0320/UMR 8120, INRA / CNRS / Université Paris-Sud, 91190 Gif-sur-Yvette, France. * E-mail: [email protected]; Fax: +33 16933 2340 c. Laboratoire Reproduction et Developpement des Plantes, UMR 5667, ENS / CNRS / INRA / Univ. Lyon I, 69364 Lyon cedex 07, France d. Max Planck Institute for Mathematics in the Sciences, Inselstr. 22, 04103 Leipzig, Germany. * E-mail: [email protected]; Fax: +49 341 9959 658 New phase transition in random planar diagrams and RNA-type matching Andrey Y. Lokhov 1, Olga V. Valba 1, 2, Mikhail V. Tamm 3, Sergei K. Nechaev 1, 4 We study the planar matching problem, defined by a symmetric random matrix with independent identically distributed entries, taking values 0 and 1. We show that the existence of a perfect planar matching structure is possible only above a certain critical density, $p_{c}$, of allowed contacts (i.e. of '1'). Using a formulation of the problem in terms of Dyck paths and a matrix model of planar contact structures, we provide an analytical estimation for the value of the transition point, $p_{c}$, in the thermodynamic limit. This estimation is close to the critical value, $p_{c} \approx 0.379$, obtained in numerical simulations based on an exact dynamic programming algorithm. We characterize the corresponding critical behavior of the model and discuss the relation of the perfect-imperfect matching transition to the known molten-glass transition in the context of random RNA secondary structure formation. In particular, we provide strong evidence supporting the conjecture that the molten-glass transition at T=0 occurs at $p_{c}$. 2 : Moscow Institute of Physics and Technology 3 : Lomonosov State University 4 : P.N. Lebedev Physical Institute Non-Hermitian β-ensemble with real eigenvalues O. Bohigas 1 and M. P. Pato 2 AIP Advances 3, 032130 (2013) By removing the Hermitian condition of the so-called β-ensemble of tridiagonal matrices, an ensemble of non-Hermitian random matrices is constructed whose eigenvalues are all real. It is shown that they belong to the class of pseudo-Hermitian operators. Its statistical properties are investigated. 1 CNRS, Université Paris-Sud, UMR8626, LPTMS, Orsay Cedex, F-91405, France 2 Instituto de Física, Universidade de São Paulo, Caixa Postal 66318, 05314-970 Sao Paulo, S.P., Brazil Numerical approaches on driven elastic interfaces in random media Ezequiel E. Ferrero 1 Sebastian Bustingorry 1 Alejandro B. Kolton 1 Alberto Rosso 2 Comptes Rendus Physique, Elsevier Masson, 2013, 14 (8), pp.641 - 650. <10.1016/j.crhy.2013.08.002> 1.
CONICET Centro Atomico Bariloche On the role of electron-nucleus contact and microwave saturation in Thermal Mixing DNP Sonia Colombo Serra 1, Alberto Rosso 2, Fabio Tedoldi 1 Physical Chemistry Chemical Physics 15 (2013) 8416-8428 We have explored the manifold of physical scenarios emerging from a model of Dynamic Nuclear Polarization (DNP) via thermal mixing under the hypothesis of highly effective electron-electron interaction. When the electron and nuclear reservoirs are also assumed to be in strong thermal contact and the microwave irradiation saturates the target electron transition, the enhancement of the nuclear polarization is expected to be considerably high even if the irradiation frequency is set far away from the centre of the ESR line (as already observed by Borghini), and the typical polarization time is reduced on moving towards the boundaries of said line. More reasonable behaviours are obtained by reducing the level of microwave saturation or the contact between electrons and nuclei in the presence of nuclear leakage. In both cases the function describing the dependency of the steady-state nuclear polarization on the frequency of irradiation becomes sharper at the edges and the build-up rate decreases on moving off-resonance. While qualitatively similar in terms of the effects produced on nuclear polarization, the degree of microwave saturation and of electron-nucleus contact have totally different impacts on electron polarization, which is of course strongly correlated to the effectiveness of saturation and almost insensitive, at the steady state, to the magnitude of the interactions between the two spin reservoirs. The likelihood of the different scenarios is discussed in the light of the experimental data currently available in the literature, to point out which aspects are and which are not suitably accounted for by the variants of thermal mixing DNP considered here. 1 : Centro Ricerche Bracco Optimal diffusive search: nonequilibrium resetting versus equilibrium dynamics Martin R. Evans 1, Satya N. Majumdar 2, Kirone Mallick 3 We study first-passage time problems for a diffusive particle with stochastic resetting at a finite rate $r$. The optimal search time is compared quantitatively with that of an effective equilibrium Langevin process with the same stationary distribution. It is shown that the intermittent, nonequilibrium strategy with non-vanishing resetting rate is more efficient than the equilibrium dynamics. Our results are extended to multiparticle systems where a team of independent searchers, initially uniformly distributed with a given density, looks for a single immobile target. Both the average and the typical survival probability of the target are smaller in the case of nonequilibrium dynamics. Persistence and First-Passage Properties in Non-equilibrium Systems Alan J. Bray 1, Satya N. Majumdar 2, G. Schehr 2 Advances in Physics 62, 3 (2013) 225-361 In this review we discuss the persistence and the related first-passage properties in extended many-body nonequilibrium systems. Starting with simple systems with one or few degrees of freedom, such as random walk and random acceleration problems, we progressively discuss the persistence properties in systems with many degrees of freedom. These systems include spin models undergoing phase-ordering dynamics, the diffusion equation, fluctuating interfaces, etc. Persistence properties are nontrivial in these systems as the effective underlying stochastic process is non-Markovian.
Several exact and approximate methods have been developed to compute the persistence of such non-Markov processes over the last two decades, as reviewed in this article. We also discuss various generalisations of the local site persistence probability. Persistence in systems with quenched disorder is discussed briefly. Although the main emphasis of this review is on the theoretical developments on persistence, we briefly touch upon various experimental systems as well. 1 : School of Physics and Astronomy Phase Diagram and Approximate Message Passing for Blind Calibration and Dictionary Learning Florent Krzakala 1 Marc Mézard 2 Lenka Zdeborová 3 IEEE Xplore, 2013, Information Theory Proceedings (ISIT), 2013 IEEE International Symposium, pp.659-663 <10.1109/ISIT.2013.6620308> We consider dictionary learning and blind calibration for signals and matrices created from a random ensemble. We study the mean-squared error in the limit of large signal dimension using the replica method and unveil the appearance of phase transitions delimiting impossible, possible-but-hard and possible inference regions. We also introduce an approximate message passing algorithm that asymptotically matches the theoretical performance, and show through numerical tests that it performs very well, for the calibration problem, for tractable system sizes. 1. LPCT - Laboratoire de Physico-Chimie Théorique 3. IPHT - Institut de Physique Théorique - UMR CNRS 3681 Planar diagrams from optimization for concave potentials S. K. Nechaev 1, 2, A. N. Sobolevski 3, 4, O. V. Valba 1, 5 We propose a new toy model of a heteropolymer chain capable of forming planar secondary structures typical for RNA molecules. In this model the sequential intervals between neighboring monomers along a chain are considered as quenched random variables. Using the optimization procedure for a special class of concave-type potentials, borrowed from optimal transport analysis, we derive the local difference equation for the ground-state free energy of the chain with the planar (RNA-like) architecture of paired links. We consider various distribution functions of intervals between neighboring monomers (truncated Gaussian and scale-free) and demonstrate the existence of a topological crossover from sequential to essentially embedded (nested) configurations of paired links. 3 : Higher School of Economics 4 : Kharkevich Institute for Information Transmission Problems 5 : Moscow Institute of Physics and Technology (MIPT) Polarization hydrodynamics in a one-dimensional polariton condensate P.-É. Larré 1 N. Pavloff 1 A. M. Kamchatnov 2 We study the hydrodynamics of a nonresonantly pumped polariton condensate in a quasi-one-dimensional quantum wire, taking into account the spin degree of freedom. We clarify the relevance of the Landau criterion for superfluidity in this dissipative two-component system. Two Cherenkov-like critical velocities are identified, corresponding to the opening of different channels of radiation: one of (damped) density fluctuations and another of (weakly damped) polarization fluctuations. We determine the drag force exerted on an external obstacle and propose experimentally measurable consequences of the specific features of the fluctuations of polarization. 2.
Institute of Spectroscopy Quasi-equilibrium in glassy dynamics: an algebraic view Silvio Franz 1, Giorgio Parisi 2 We study a chain of identical glassy systems in a constrained equilibrium where each bond of the chain is forced to remain at a preassigned distance from the previous one. We apply this description to mean-field glassy systems in the limit of a long chain where each bond is close to the previous one. We show that under specific conditions this pseudo-dynamic process can formally describe real relaxational dynamics at long times. In particular, in mean-field spin glass models we can recover in this way the equations of Langevin dynamics in the long time limit at the dynamical transition temperature and below. We interpret this formal identity as evidence that in these situations the configuration space is explored in a quasi-equilibrium fashion. Our general formalism, which relates dynamics to equilibrium, puts slow dynamics in a new perspective and opens the way to the computation of new dynamical quantities in glassy systems. Quasistationarity in a long-range interacting model of particles moving on a sphere Shamik Gupta 1, David Mukamel 2 We consider a long-range interacting system of $N$ particles moving on a spherical surface under an attractive Heisenberg-like interaction of infinite range, and evolving under deterministic Hamilton dynamics. The system may also be viewed as one of globally coupled Heisenberg spins. In equilibrium, the system has a continuous phase transition from a low-energy magnetized phase, in which the particles are clustered on the spherical surface, to a high-energy homogeneous phase. The dynamical behavior of the model is studied analytically by analyzing the Vlasov equation for the evolution of the single-particle distribution, and numerically by direct simulations. The model is found to exhibit long-lived non-magnetized quasistationary states (QSSs) which in the thermodynamic limit are dynamically stable within an energy range where the equilibrium state is magnetized. For finite $N$, these states relax to equilibrium over a time that increases algebraically with $N$. In the dynamically unstable regime, non-magnetized states relax fast to equilibrium over a time that scales as $\log N$. These features are retained in the presence of a global anisotropy in the magnetization. 2 : Weizmann Institute Random Aharonov-Bohm vortices and some exact families of integrals: Part III Stephane Ouvry 1 Journal of Statistical Mechanics: Theory and Experiment, Institute of Physics: Hybrid Open Access, 2013, pp.P02002 As a sequel to [1] and [2], I present some recent progress on Bessel integrals $\int_0^{\infty}{\rm d}u\; uK_0(u)^{n}$, $\int_0^{\infty}{\rm d}u\; u^{3}K_0(u)^{n}$, ... where the power of the integration variable is odd and where $n$, the Bessel weight, is a positive integer. Some of these integrals for weights n=3 and n=4 are known to be intimately related to the zeta numbers zeta(2) and zeta(3). Starting from a Feynman-diagram-inspired representation in terms of n-dimensional multiple integrals on an infinite domain, one shows how to partially integrate to (n-2)-dimensional multiple integrals on a finite domain. In this process the Bessel integrals are shown to be periods. Interestingly enough, these "reduced" multiple integrals can be considered in parallel with some simple integral representations of zeta numbers.
One also generalizes the construction of [2] on a particular sum of double nested Bessel integrals to a whole family of double nested integrals. Finally a strong PSLQ numerical evidence is shown to support a surprisingly simple expression of zeta(5) as a linear combination with rational coefficients of Bessel integrals of weight n= 8. Record-breaking statistics for random walks in the presence of measurement error and noise Yaniv Edery 1, Alexander B. Kostinski 2, Satya N. Majumdar 3, Brian Berkowitz 1 We address the question of distance record-setting by a random walker in the presence of measurement error, $\delta$, and additive noise, $\gamma$ and show that the mean number of (upper) records up to $n$ steps still grows universally as $< R_n> \sim n^{1/2}$ for large $n$ for all jump distributions, including Lévy flights, and for all $\delta$ and $\gamma$. In contrast to the universal growth exponent of 1/2, the pace of record setting, measured by the pre-factor of $n^{1/2}$, depends on $\delta$ and $\gamma$. In the absence of noise ($\gamma=0$), the pre-factor $S(\delta)$ is evaluated explicitly for arbitrary jump distributions and it decreases monotonically with increasing $\delta$ whereas, in case of perfect measurement $(\delta=0)$, the corresponding pre-factor $T(\gamma)$ increases with $\gamma$. Our analytical results are supported by extensive numerical simulations and qualitatively similar results are found in two and three dimensions. 1 : Department of Environmental Sciences and Energy Research Weizmann Institute of Science, Relaxation dynamics of the Kuramoto model with uniformly distributed natural frequencies Anandamohan Ghosh 1, Shamik Gupta 2 Physica A: Statistical Mechanics and its Applications 392 (2013) 3812-3818 The Kuramoto model describes a system of globally coupled phase-only oscillators with distributed natural frequencies. The model in the steady state exhibits a phase transition as a function of the coupling strength, between a low-coupling incoherent phase in which the oscillators oscillate independently and a high-coupling synchronized phase. Here, we consider a uniform distribution for the natural frequencies, for which the phase transition is known to be of first order. We study how the system close to the phase transition in the supercritical regime relaxes in time to the steady state while starting from an initial incoherent state. In this case, numerical simulations of finite systems have demonstrated that the relaxation occurs as a step-like jump in the order parameter from the initial to the final steady state value, hinting at the existence of metastable states. We provide numerical evidence to suggest that the observed metastability is a finite-size effect, becoming an increasingly rare event with increasing system size. 1 : Indian Institute of Science Education and Research Indian Institute of Science Education and Research Reunion probability of N vicious walkers: typical and large fluctuations for large N Gregory Schehr 1, Satya N. Majumdar 1, Alain Comtet 1, 2, Peter J. Forrester 3 We consider three different models of N non-intersecting Brownian motions on a line segment [0,L] with absorbing (model A), periodic (model B) and reflecting (model C) boundary conditions. In these three cases we study a properly normalized reunion probability, which, in model A, can also be interpreted as the maximal height of N non-intersecting Brownian excursions on the unit time interval. 
We provide a detailed derivation of the exact formula for these reunion probabilities for finite N using a Fermionic path integral technique. We then analyse the asymptotic behavior of this reunion probability for large N using two complementary techniques: (i) a saddle point analysis of the underlying Coulomb gas and (ii) the orthogonal polynomial method. These two methods are complementary in the sense that they work in two different regimes, respectively for $L\ll O(\sqrt{N})$ and $L\geq O(\sqrt{N})$. A striking feature of the large N limit of the reunion probability in the three models is that it exhibits a third-order phase transition when the system size L crosses a critical value $L=L_c(N)\sim \sqrt{N}$. This transition is akin to the Douglas-Kazakov transition in two-dimensional continuum Yang-Mills theory. While the central part of the reunion probability, for $L \sim L_c(N)$, is described in terms of the Tracy-Widom distributions (associated with GOE and GUE depending on the model), the emphasis of the present study is on the large deviations of these reunion probabilities, both in the right [$L \gg L_c(N)$] and the left [$L \ll L_c(N)$] tails. In particular, for model B, we find that the matching between the different regimes corresponding to typical fluctuations $L \sim L_c(N)$ and atypical fluctuations in the right tail $L \gg L_c(N)$ is rather unconventional, compared to the usual behavior found for the distribution of the largest eigenvalue of GUE random matrices. 3 : Department of Mathematics and Statistics [Melbourne] Sampling fractional Brownian motion in presence of absorption: a Markov chain method Alexander K. Hartmann 1, Satya N. Majumdar 2, Alberto Rosso 2 We study fractional Brownian motion (fBm) characterized by the Hurst exponent H. Using a Monte Carlo sampling technique, we are able to numerically generate fBm processes with an absorbing boundary at the origin at discrete times for a large number of $10^7$ time steps, even for small values like H=1/4. The results are compatible with previous analytical results that the distribution of (rescaled) endpoints y follows a power law $P(y) \sim y^\phi$ with $\phi=(1-H)/H$, even for small values of H. Furthermore, for the case H=0.5 we also study analytically the finite-length corrections to first order, namely a plateau of P(y) for $y\to 0$ which decreases with increasing process length. These corrections are compatible with the numerical results. 1 : Institute of Physics University of Oldenburg Spatial extent of an outbreak in animal epidemics Eric Dumonteil 1, Satya N. Majumdar 2, Alberto Rosso 2, Andrea Zoia 1 Proceedings of the National Academy of Sciences 110 (2013) 4239-4244 Characterizing the spatial extent of epidemics at the outbreak stage is key to controlling the evolution of the disease. At the outbreak, the number of infected individuals is typically small, so that fluctuations around their average are important: then, it is commonly assumed that the susceptible-infected-recovered (SIR) mechanism can be described by a stochastic birth-death process of Galton-Watson type. The displacements of the infected individuals can be modelled by resorting to Brownian motion, which is applicable when long-range movements and complex network interactions can be safely neglected, as in the case of animal epidemics. In this context, the spatial extent of an epidemic can be assessed by computing the convex hull enclosing the infected individuals at a given time.
We derive the exact evolution equations for the mean perimeter and the mean area of the convex hull, and compare them with Monte Carlo simulations. 1 : CEA/Saclay Spin clusters and conformal field theory Gesualdo Delfino 1 Marco Picco 2 Raoul Santachiara 3 Jacopo Viti 4 Journal of Statistical Mechanics: Theory and Experiment, IOP Science, 2013, pp.P11011. <10.1088/1742-5468/2013/11/P11011> We study numerically the fractal dimensions and the bulk three-point connectivity for the spin clusters of the Q-state Potts model in two dimensions with $1\leq Q\leq 4$. We check that the usually invoked correspondence between FK clusters and spin clusters works at the level of fractal dimensions. However, the fine structure of the conformal field theories describing critical clusters first manifests itself at the level of the three-point connectivities. Contrary to what was recently found for FK clusters, no obvious relation emerges for generic Q between the spin cluster connectivity and the structure constants obtained from analytic continuation of the minimal model ones. The numerical results then strongly suggest that spin and FK clusters are described by conformal field theories with different realizations of the color symmetry of the Potts model. 1. SISSA / ISAS - Scuola Internazionale Superiore di Studi Avanzati / International School for Advanced Studies 2. LPTHE - Laboratoire de Physique Théorique et Hautes Energies 4. LPTENS - Laboratoire de Physique Théorique de l'ENS Static replica approach to critical correlations in glassy systems Silvio Franz 1, Hugo Jacquin 2, Giorgio Parisi 3, Pierfrancesco Urbani 1, Francesco Zamponi 4 Journal of Chemical Physics 138 (2013) 12A540 We discuss the slow relaxation phenomenon in glassy systems by means of replicas, constructing a static field-theory approach to the problem. At the mean-field level we study how criticality in the four-point correlation functions arises because of the presence of soft modes, and we derive an effective replica field theory for these critical fluctuations. Using this theory at the Gaussian level, we obtain many physical quantities: the correlation length, the exponent parameter that controls the Mode-Coupling dynamical exponents for the two-point correlation functions, and the prefactor of the critical part of the four-point correlation functions. Moreover, we perform a one-loop computation in order to identify the region in which the mean-field Gaussian approximation is valid. The result is a Ginzburg criterion for the glass transition. We define and compute in this way a proper Ginzburg number. Finally, we present numerical values of all these quantities obtained from the Hypernetted Chain approximation for the replicated liquid theory. 2 : Matière et Systèmes Complexes (MSC) Statistical analysis of networks and biophysical systems of complex architecture Olga Valba 1 Université Paris Sud - Paris XI (15/10/2013), Sergeï Nechaev (Dir.) Complex organization is found in many biological systems. For example, biopolymers can possess a very hierarchical structure, which provides their functional peculiarity. Understanding such complex organization allows one to describe biological phenomena and to predict molecular functions. Besides, we can try to characterize the specific phenomenon by some probabilistic quantities (variances, means, etc.), assuming the primary biopolymer structure to be randomly formed according to some statistical distribution.
Such a formulation is oriented toward evolutionary problems. Artificially constructed biological networks are another common object of statistical physics with rich functional properties. The behavior of cells is a consequence of complex interactions between their numerous components, such as DNA, RNA, proteins and small molecules. Cells use signaling pathways and regulatory mechanisms to coordinate multiple processes, allowing them to respond and to adapt to a changing environment. Recent theoretical advances allow us to describe cellular network structure using graph concepts to reveal the principal organizational features shared with numerous non-biological networks. The aim of this thesis is to develop a set of methods for studying statistical and dynamic objects of complex architecture and, in particular, scale-free structures, which have no characteristic spatial and/or time scale. For such systems, the use of standard mathematical methods, relying on the average behavior of the whole system, is often incorrect or useless, while a detailed many-body description is almost hopeless because of the combinatorial complexity of the problem. Here we focus on two problems. The first part addresses the statistical analysis of random biopolymers. Apart from the evolutionary context, our studies cover more general problems of planar topology appearing in the description of various systems, ranging from gauge theory to biophysics. We investigate analytically and numerically a phase transition of a generic planar matching problem, from the regime where almost all the vertices are paired to the situation where a finite fraction of them remains unmatched. The second part of this work focuses on the statistical properties of networks. We demonstrate the possibility of defining co-expression gene clusters within a network context from their specific motif distribution signatures. We also show how a method based on the shortest path function (SPF) can be applied to gene-interaction sub-networks of co-expression gene clusters to efficiently predict novel regulatory transcription factors (TFs). The biological significance of this method is demonstrated by applying it to groups of genes with a shared regulatory locus, found by genetic genomics. Finally, we discuss the formation of stable patterns of motifs in networks under selective evolution, in the context of the creation of islands of "superfamilies". Statistics of quantum transport in weakly non-ideal chaotic cavities Sergio Rodriguez-Perez 1, Ricardo Marino 2, Marcel Novaes 1, Pierpaolo Vivo 2 We consider statistics of electronic transport in chaotic cavities where time-reversal symmetry is broken and one of the leads is weakly non-ideal, i.e. it contains tunnel barriers characterized by tunneling probabilities $\Gamma_i$. Using symmetric function expansions and a generalized Selberg integral, we develop a systematic perturbation theory in $1-\Gamma_i$ valid for an arbitrary number of channels, and obtain explicit formulas up to second order for the average and variance of the conductance, and for the average shot-noise. Higher moments of the conductance are considered to leading order. Universidade Federal de Sao Carlos - UFSCar (BRAZIL) The Lyapunov exponent of products of random $2\times2$ matrices close to the identity A. Comtet 1, 2, J. M. Luck 3, C. Texier 1, 4, Y. Tourigny 5 We study products of arbitrary random real $2 \times 2$ matrices that are close to the identity matrix.
Using the Iwasawa decomposition of $\text{SL}(2,{\mathbb R})$, we identify a continuum regime where the mean values and the covariances of the three Iwasawa parameters are simultaneously small. In this regime, the Lyapunov exponent of the product is shown to assume a scaling form. In the general case, the corresponding scaling function is expressed in terms of Gauss' hypergeometric function. A number of particular cases are also considered, where the scaling function of the Lyapunov exponent involves other special functions (Airy, Bessel, Whittaker, elliptic). The general solution thus obtained allows us, among other things, to recover in a unified framework many results known previously from exactly solvable models of one-dimensional disordered systems. 5 : Department of Mathematics [Bristol] University of Bristol – University Walk Thick Filament Length and Isoform Composition Determine Self-Organized Contractile Units in Actomyosin Bundles Todd Thoresen 1 Martin Lenz 2, 1, 3 Margaret Gardel 1, 3 Biophysical Journal, Biophysical Society, 2013, 104, pp.655-665. <10.1016/j.bpj.2012.12.042> Diverse myosin II isoforms regulate contractility of actomyosin bundles in disparate physiological processes by variations in both motor mechanochemistry and the extent to which motors are clustered into thick filaments. Although the role of mechanochemistry is well appreciated, the extent to which thick filament length regulates actomyosin contractility is unknown. Here, we study the contractility of minimal actomyosin bundles formed in vitro by mixtures of F-actin and thick filaments of nonmuscle, smooth, and skeletal muscle myosin isoforms with varied length. Diverse myosin II isoforms guide the self-organization of distinct contractile units within in vitro bundles with shortening rates similar to those of in vivo myofibrils and stress fibers. The tendency to form contractile units increases with the thick filament length, resulting in a bundle shortening rate proportional to the length of constituent myosin thick filament. We develop a model that describes our data, providing a framework in which to understand how diverse myosin II isoforms regulate the contractile behaviors of disordered actomyosin bundles found in muscle and nonmuscle cells. These experiments provide insight into physiological processes that use dynamic regulation of thick filament length, such as smooth muscle contraction. 1. The James Franck Institute, Institute Biophysical Dynamics 3. Department of Physics - Department of Physics Tradeoffs for number-squeezing in collisions of Bose-Einstein condensates P. Deuar 1, T. Wasak 2, P. Zin 3, 4, J. Chwedenczuk 2, M. Trippenbach 2, 3 We investigate the factors that influence the usefulness of supersonic collisions of Bose-Einstein condensates as a potential source of entangled atomic pairs by analyzing the reduction of the number difference fluctuations between regions of opposite momenta. We show that non-monochromaticity of the mother clouds is typically the leading limitation on number squeezing, and that the squeezing becomes less robust to this effect as the density of pairs grows. We develop a simple model that explains the relationship between density correlations and the number squeezing, allows one to estimate the squeezing from properties of the correlation peaks, and shows how the multi-mode nature of the scattering must be taken into account to understand the behavior of the pairing. 
We analyze the impact of Bose enhancement on the number squeezing by introducing a simplified low-gain model. We conclude that, as far as squeezing is concerned, the preferable configuration occurs when atoms are scattered not uniformly but rather into two well-separated regions. 1 : Institute of Physics 2 : Institute of Theoretical Physics 3 : Andrzej Soltan Institute for Nuclear Studies Transition between Hermitian and non-Hermitian Gaussian ensembles J. Phys. A: Math. Theor. 46 (2013) 115001 (11pp) The transition between Hermitian and non-Hermitian matrices of the Gaussian unitary ensemble is revisited. An expression for the kernel of the rescaled Hermite polynomials is derived which expresses the sum in terms of the highest-order polynomials. From this Christoffel–Darboux-like formula some results are derived, including an extension to the complex plane of the Airy kernel. 2 Instituto de Física, Universidade de São Paulo, Caixa Postal 66318, 05314-970 Sao Paulo, S.P., Brazil Two-dimensional dipolar Bose gas with the roton-maxon excitation spectrum Abdelâali Boudjemaâ 1, G. V. Shlyapnikov 2 We discuss fluctuations in a dilute two-dimensional Bose-condensed dipolar gas, which has a roton-maxon character of the excitation spectrum. We calculate the density-density correlation function, fluctuation corrections to the chemical potential, compressibility, and the normal (superfluid) fraction. It is shown that the presence of the roton strongly enhances fluctuations of the density, and we establish the validity criterion of the Bogoliubov approach. At T=0 the condensate depletion becomes significant if the roton minimum is sufficiently close to zero. At finite temperatures exceeding the roton energy, the effect of thermal fluctuations is stronger and it may lead to a large normal fraction of the gas and compressibility. Hassiba Benbouali University of Chlef Two-point correlation function for Dirichlet L-functions E Bogomolny 1,2 and J P Keating 3 The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy–Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question. PACS numbers: 02.10.De, 03.65.Sq, 02.10.Yn 1. Laboratoire de Physique Théorique et Modèles Statistiques, Université Paris-Sud, Orsay, F-91405, France 2 CNRS, UMR8626, Orsay, F-91405, France 3 School of Mathematics, University of Bristol, Bristol BS8 1TW, UK Uniqueness of the thermodynamic limit for driven disordered elastic interfaces A Kolton 1 S Bustingorry 1 E E Ferrero 1 A Rosso 2 Journal of Statistical Mechanics: Theory and Experiment, IOP Science, 2013, 2013 (12), <10.1088/1742-5468/2013/12/P12004> Universality Classes of Critical Points in Constrained Glasses We analyze critical points that can be induced in glassy systems by the presence of constraints. These critical points are predicted by the mean-field thermodynamic approach and they are precursors of the standard glass transition in the absence of constraints.
Through a deep analysis of the soft modes appearing in the replica field theory we can establish the universality class of these points. In the case of the "annealed potential" of a symmetric coupling between two copies of the system, the critical point is in the Ising universality class. More interesting is the case of the "quenched potential", where a single copy is coupled with an equilibrium reference configuration, or the "pinned particle" case, where a fraction of particles is frozen in fixed positions. In these cases we find the Random Field Ising Model (RFIM) universality class. The effective random field is a "self-generated" disorder that reflects the random choice of the reference configuration. The RFIM representation of the critical theory predicts non-trivial relations governing the leading singular behavior of relevant correlation functions, which can be tested in numerical simulations. Wigner time-delay distribution in chaotic cavities and freezing transition Christophe Texier 1, 2, Satya N. Majumdar 1 Using the joint distribution for proper time-delays of a chaotic cavity derived by Brouwer, Frahm & Beenakker [Phys. Rev. Lett. 78, 4737 (1997)], we obtain, in the limit of a large number of channels $N$, the large deviation function for the distribution of the Wigner time-delay (the sum of proper times) by a Coulomb gas method. We show that the existence of a power-law tail originates from narrow resonance contributions, related to a (second order) freezing transition in the Coulomb gas. Zero sound in a two-dimensional dipolar Fermi gas Zhen-Kai Lu 1, S. I. Matveenko 2, 3, G. V. Shlyapnikov 2, 4, 5 We study zero sound in a weakly interacting 2D gas of single-component fermionic dipoles (polar molecules or atoms with a large magnetic moment) tilted with respect to the plane of their translational motion. It is shown that the propagation of zero sound is provided by both mean-field and many-body (beyond mean field) effects, and the anisotropy of the sound velocity is the same as that of the Fermi velocity. The damping of zero sound modes can be much slower than that of quasiparticle excitations of the same energy. One thus has wide possibilities for the observation of zero sound modes in experiments with 2D fermionic dipoles, although the zero sound peak in the structure function is very close to the particle-hole continuum. 1 : Max-Planck-Institut für Quantenoptik 3 : L. D. Landau Institute for Theoretical Physics 4 : Van der Waals-Zeeman Institute 5 : Kavli Institute for Theoretical Physics A one-year comprehensive chemical characterisation of fine aerosol (PM2.5) at urban, suburban and rural background sites in the region of Paris (France) M. Bressi 1, 2 J. Sciare 1 V. Ghersi N. Bonnaire 1 J. B. Nicolas 1, 2 J.-E. Petit 1 S. Moukhtar 3 A. Rosso 4 N. Mihalopoulos 5 A. Féron 1 M. Bressi, J. Sciare, V. Ghersi, N. Bonnaire, J. B. Nicolas, et al.. A one-year comprehensive chemical characterisation of fine aerosol (PM2.5) at urban, suburban and rural background sites in the region of Paris (France). Atmospheric Chemistry and Physics, European Geosciences Union, 2013, 13 (15), pp.7825-7844. ⟨10.5194/acp-13-7825-2013⟩.
⟨hal-02923974⟩ Studies describing the chemical composition of fine aerosol (PM2.5) in urban areas are often conducted for a few weeks only and at one sole site, giving thus a narrow view of their temporal and spatial characteristics. This paper presents a one-year (11 September 2009–10 September 2010) survey of the daily chemical composition of PM2.5 in the region of Paris, which is the second most populated "Larger Urban Zone" in Europe. Five sampling sites representative of suburban (SUB), urban (URB), northeast (NER), northwest (NWR) and south (SOR) rural backgrounds were implemented. The major chemical components of PM2.5 were determined including elemental carbon (EC), organic carbon (OC), and the major ions. OC was converted to organic matter (OM) using the chemical mass closure methodology, which leads to conversion factors of 1.95 for the SUB and URB sites, and 2.05 for the three rural ones. On average, gravimetrically determined PM2.5 annual mass concentrations are 15.2, 14.8, 12.6, 11.7 and 10.8 μg m−3 for SUB, URB, NER, NWR and SOR sites, respectively. The chemical composition of fine aerosol is very homogeneous at the five sites and is composed of OM (38–47%), nitrate (17–22%), non-sea-salt sulfate (13–16%), ammonium (10–12%), EC (4–10%), mineral dust (2–5%) and sea salt (3–4%). This chemical composition is in agreement with those reported in the literature for most European environments. On an annual scale, Paris (URB and SUB sites) exhibits its highest PM2.5 concentrations during late autumn, winter and early spring (higher than 15 μg m−3 on average, from December to April), intermediates during late spring and early autumn (between 10 and 15 μg m−3 during May, June, September, October, and November) and the lowest during summer (below 10 μg m−3 during July and August). PM levels are mostly homogeneous on a regional scale, during the whole project (e.g. for URB plotted against NER sites: slope = 1.06, r2=0.84, n=330), suggesting the importance of mid- or long-range transport, and regional instead of local scale phenomena. During this one-year project, two thirds of the days exceeding the PM2.5 2015 EU annual limit value of 25 μg m−3 were due to continental import from countries located northeast, east of France. This result questions the efficiency of local, regional and even national abatement strategies during pollution episodes, pointing to the need for a wider collaborative work with the neighbouring countries on these topics. Nevertheless, emissions of local anthropogenic sources lead to higher levels at the URB and SUB sites compared to the others (e.g. 26% higher on average at the URB than at the NWR site for PM2.5, during the whole campaign), which can even be emphasised by specific meteorological conditions such as low boundary layer heights. OM and secondary inorganic species (nitrate, non-sea-salt sulfate and ammonium, noted SIA) are mainly imported by mid- or long-range transport (e.g. for NWR plotted against URB sites: slope = 0.79, r2=0.72, n=335 for OM, and slope = 0.91, r2=0.89, n=335 for SIA) whereas EC is primarily locally emitted (e.g. for SOR plotted against URB sites: slope = 0.27; r2=0.03; n=335). This database will serve as a basis for investigating carbonaceous aerosols, metals as well as the main sources and geographical origins of PM in the region of Paris. 1. LSCE - Laboratoire des Sciences du Climat et de l'Environnement [Gif-sur-Yvette] 2. ADEME - Agence de l'Environnement et de la Maîtrise de l'Energie 3. 
Centre for Atmospheric Chemistry [Toronto] 5. ECPL - Environmental Chemical Processes Laboratory [Heraklion] Archive ouverte HAL – Optimal diffusive search: nonequilibrium resetting versus equilibrium dynamics Martin R. Evans 1 Satya N. Majumdar 2 Kirone Mallick 3 Martin R. Evans, Satya N. Majumdar, Kirone Mallick. Optimal diffusive search: nonequilibrium resetting versus equilibrium dynamics. Journal of Physics A: Mathematical and Theoretical, IOP Publishing, 2013, 46, pp.185001. ⟨10.1088/1751-8113/46/18/185001⟩. ⟨hal-00825684⟩ 1. SUPA School of Physics and Astronomy [Edinburgh] Archive ouverte HAL – On the origin of the halo stabilization Martin Trulsson 1 Bo Jönsson 2 Christophe Labbez 3 Martin Trulsson, Bo Jönsson, Christophe Labbez. On the origin of the halo stabilization. Physical Chemistry Chemical Physics, Royal Society of Chemistry, 2013, 15 (2), pp.541-545. ⟨10.1039/c2cp42404e⟩. ⟨hal-02376063⟩ 2. THEORETICAL CHEMISTRY - Theoretical Chemistry 3. LICB - Laboratoire Interdisciplinaire Carnot de Bourgogne
Non-invasive in situ monitoring of bone scaffold activity by speckle pattern analysis Vahideh Farzam Rad,1 Majid Panahi,2 Ramin Jamali,1 Ahmad Darudi,2 and Ali-Reza Moradi1,3,* 1Department of Physics, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan 45137-66731, Iran 2Department of Physics, Faculty of Science, University of Zanjan, Zanjan 45371-38791, Iran 3School of Nano Science, Institute for Research in Fundamental Sciences (IPM), Tehran 19395-5531, Iran *Corresponding author: [email protected] Vahideh Farzam Rad, Majid Panahi, Ramin Jamali, Ahmad Darudi, and Ali-Reza Moradi, "Non-invasive in situ monitoring of bone scaffold activity by speckle pattern analysis," Biomed. Opt. Express 11, 6324-6336 (2020), https://doi.org/10.1364/BOE.401740 Scaffold-based bone tissue engineering aims to develop 3D scaffolds that mimic the extracellular matrix to regenerate bone defects and damages. In this paper, we provide a laser speckle analysis to characterize the highly porous scaffold. The experimental procedure includes in situ acquisition of speckle patterns of the bone scaffold at different times under preserved environmental conditions, and follow-up statistical post-processing toward examining its internal activity. The activity and overall viscoelastic properties of scaffolds are expressed via several statistical parameters, and the variations in the computed parameters are attributed to the time-varying activity of the samples during their internal substructure migration. Bone is a basic element of the human skeleton, supporting and protecting various vital organs within it [1–3]. It undergoes continuous remodeling during our lifetime and is made of four types of cells: osteoblasts, osteoclasts, osteocytes, and bone lining cells, which regulate bone homeostasis dynamically [4,5]. Creating tissue constructs that can be replaced by the bone in both structure and function has received intensive attention [6]. A bone graft or scaffold, while providing all the necessary environmental cues found in natural bone, should mimic the structure and mechano-chemical properties of the natural bone extracellular matrix. This is where bone tissue engineering becomes important. Different methods use various synthetic or natural, biodegradable or non-biodegradable materials in the fabrication of bone scaffolds [7,8].
The scaffold, after implantation, should elicit a desirable local or systemic response in the host, whilst undesirable effects may also reduce healing or cause rejection by the body [9–11]. A successfully fabricated scaffold must possess a few key attributes: (1) it degrades over time while providing the necessary structural support, (2) it eventually allows the original cells to replace the implanted scaffold, (3) its degraded parts must be non-toxic and able to exit the body without interfering with other organs [12,13], (4) it has a high porosity ensuring cellular penetration and adequate diffusion of nutrients to cells within the construct [14,15], and (5) it has mechanical properties similar to those of the implantation site [16–18]. Materials used for scaffolds in bone tissue engineering are categorized into four classes: ceramic, composite, polymeric, and metallic scaffolds [10]. Polymeric materials provide more control over the physicochemical characteristics of scaffolds, such as porosity, solubility, biocompatibility, enzymatic reactions, and allergic response. The other materials come with serious drawbacks: ceramics suffer from disadvantages such as brittleness, low fracture strength, difficulty of fabrication, and high density. The biocompatibility of composite materials is often insufficient, and even though metallic scaffolds have high mechanical strength, they are non-biodegradable [7,10,19]. The superiority of polymeric materials and their excellent mechanical properties have led to the consideration of synthetic polymers for scaffolds [20]. Materials such as collagen, gelatin, alginate, hyaluronic acid, chitin, and chitosan (CS) are examples of naturally derived polymers [21]. Among these materials, CS has gained great significance in the field of scaffold fabrication. This is because CS possesses several important properties, such as biodegradability, biocompatibility, bioadhesivity, solubility, and non-toxicity to humans [22–24]. The degradation of scaffolds has been studied under different conditions to understand the influence of external effects or constituents on their activity. For example, the degradation behavior of the scaffold has been investigated inside micro-channels and in static or shaking incubators to understand the role of shear stress in the changes of scaffolds, in terms of weight loss, water uptake, pH value, porosity, and morphology [25]. In this research, we also use CS, owing to the aforementioned properties, to fabricate the scaffolds. Common polymer characterization methods normally include measurements of water uptake, mass loss, porosity change, etc. [26,27], or complementary imaging either at the macro-scale, such as macromorphology, which provides very low resolution and low magnification images [26], or at high resolution, such as scanning electron microscopy (SEM) [28,29]. These methods have their own limitations. SEM, for example, although high resolution and suitable for 3D imaging, is expensive and must be housed in an area free of any possible electric, magnetic, or vibration interference. In SEM, sample preparation is required, which can result in artifacts and errors and might be damaging, and the method can be applied only to solid inorganic samples small enough to fit inside the vacuum chamber. Atomic force microscopy (AFM), in comparison to SEM, is free of special sample treatment, an expensive vacuum environment, and high maintenance, can be higher in resolution, and can also be used for liquids [30,31].
However, its scanning speed is slow, the proximity of its tip can be destructive to the sample, and it is also unable to work with dynamic specimens. Other studies have been pursued by placing the scaffolds under different conditions to understand their degradation and the effect of external constituents and stimuli, e.g., inside micro-channels and by applying external movement to investigate the role of shear stress in the changes of scaffolds [25,32]. Here, by the use of dynamic speckle pattern analysis, we interrogate the activities on the surface structure of the scaffolds. Speckle light patterns are high-contrast, fine-scale granular patterns that are the result of the interference of a large number of dephased but coherent monochromatic light waves propagating in different directions [33]. Such interference can be formed through different processes, such as scattering of laser light from a rough surface or mode-mixing in a multimode fiber [34]. Despite the intrinsic randomness of these patterns, the analysis of speckle patterns can be remarkably useful, and methodologies based on speckle patterns have been considered versatile tools to investigate numerous physical, chemical and biological phenomena [35,36]. In the case of speckle patterns that are obtained through the scattering of laser light from a rough surface, each point on the illuminated area acts as a source of secondary waves and contains information about the surface. Therefore, at the image recording device, overall information about the surface will be present. Dynamic speckle patterns occur when an illuminated surface includes any kind of activity. Based on the origins and characteristics of dynamic speckles, knowledge about the inner dynamics of the phenomena may be gained. The better the inner dynamics of the samples are known, the better the insight that can be obtained in controlled experiments and simulations to assess how these dynamics show up in the speckle evolution. Several types of research have been carried out involving dynamic speckle analysis: monitoring and investigation of blood flow [37–39], seed health [40], fruit ripeness [41], paper crumpling [42], tissue viscoelastic properties evaluation [43,44], adhesive drying [45], and parasite motility [46] are among the applications, to name a few. The advantages of the presented methodology over common polymer characterization techniques include the possibility of dynamic and live acquisition of information about the samples. It is a nondestructive and noncontact method and provides spatiotemporal information integration. In addition, this technique is free of phototoxic effects on the sample, since a very low laser power is used to illuminate the samples. Dynamic speckle analysis can be applied in the same manner to all the classes of materials that are used in bone tissue engineering, i.e., ceramic, composite, polymeric, and metallic scaffolds. Here we focus on polymeric materials for their excellent mechanical properties. The method can be used for long-period studies, even for days, provided that the environmental conditions for the samples are preserved. The application of the dynamic speckle analysis method to the CS scaffold is addressed in this paper. In Section 2 the experimental procedure and the theoretical background on the statistical analysis of dynamic speckle patterns are described. In Section 3 the experimental results are presented and the analysis results are discussed.
The paper is concluded in Section 4. 2.1 Sample preparation Several methodologies exist for producing porous scaffolds [47]. Freeze-casting is one of the most common, since it is environmentally and economically friendly [48]. The samples in this study are prepared by the freeze-casting method. CS (Acros Chemical Co.) with a deacetylation degree of 85% and a molecular weight of 100,000–300,000 is dissolved in 1 wt$\%$ acetic acid (Merck Inc.) by stirring for 24 h at room temperature to prepare 2 wt$\%$ CS solutions. The solution is then poured into self-made plastic molds and kept frozen at -20 $^{\circ }$C for 24 h, followed by freeze-drying for 24 h to obtain the porous structure of the 3D CS scaffolds. The prepared scaffolds are immersed in 1$\%$ NaOH (Merck Inc.) for 2 h to remove any residual acetic acid and then washed several times with sterile water until reaching pH = 7.0. The CS samples are lyophilized again overnight in a freeze dryer until dry. Finally, scaffolds of 10 mm in diameter and 3 mm in thickness are obtained. 2.2 Experimental procedure The experimental procedure for recording dynamic speckle patterns is shown schematically in Fig. 1(a). The laser beam (He-Ne laser, 632.8 nm, 5 mW) passes through a spatial filter (SF), which removes unwanted spatial frequencies by means of a pinhole placed in the focal plane of the lens. The emerging beam is highly divergent and is collimated by another lens (L$_1$) to provide a uniform beam profile. The collimated beam is directed by mirror M onto the sample (S). The speckle pattern is formed and recorded on a digital camera (DCC1545M, Thorlabs, 8-bit dynamic range, 5.2 $\mu$m pixel pitch) through a collecting lens (L$_2$) with a focal length of 10 cm and an f$^{\#}$ of 2. The camera is set to record 700$\times$700 pixels in its central area with an exposure time of 0.20 ms. The He-Ne laser possesses sufficient coherence and stability. The laser is switched on at least half an hour before the experiment to ensure intensity stability, which is of high importance in speckle analysis. The laser is kept on during the experiments, and the beam is blocked with a laser shutter (Sh) so that the elements of the setup are not touched; when required, the shutter is opened and the data are acquired. The uniformity of the beam is checked by placing a mirror at the sample position and collecting the reflected light with the camera for about a minute. Fig. 1. (a) Experimental setup for dynamic speckle acquisition; Sh: shutter, SF: spatial filter, L1: collimating lens, M: mirror, L2: collecting lens. (b) SEM image of a porous bone scaffold sample. (c) Photograph of the overall scaffold sample. The sample is placed in a dish containing phosphate buffered saline (PBS) solution. PBS closely mimics the pH, osmolarity, and ion concentrations of the human body; therefore, a PBS solution with a pH of 7.4 at 37 $^{\circ }$C is used here to simulate the conditions of a healthy human body [49,50]. To fix the scaffold in the sample dish, it is pinned with a pair of thin rods supported by magnets beneath the dish. The sample dish has a lid to prevent evaporation of the PBS during the experiment. The dish is placed inside a container filled with liquid paraffin, which is heated by a feedback-controlled plate heater to establish thermal equilibrium and maintain the sample temperature at 37 $^{\circ }$C. The closed chamber is not in direct contact with the ambient air.
In addition, the experiments were performed inside an isolated room with controlled temperature and humidity, and these parameters were monitored live; the humidity level was kept at 45%$\pm$5%. The setup is built on a pneumatic optical table and the laboratory is located at ground level, so vibrational noise is strongly damped. Figure 1(b) shows an SEM image of a typical porous bone scaffold sample. Figure 1(c) shows a photograph of the overall sample along with its dimensions; it also demonstrates that the sample is not transparent, so a reflective-mode speckle pattern analysis setup suits its investigation. Once the laser intensity is stabilized and the sample chamber has reached a fixed thermal condition, the sample is illuminated by the laser beam of uniform intensity. Different parts of the fabricated scaffold are prepared and kept inside the PBS solution. The laser light scattered from the samples is collected and forms the speckle patterns. Since the time evolution of the samples is rather slow, each experiment on a sample lasts several hours. The camera is programmed to acquire 100 successive speckle patterns at 25 fps every 30 min, and the speckle pattern dynamics of several samples are recorded for 5 h. For quantitative assessment, at least 5 samples are examined and post-processed, following the same experimental protocol for all samples. To ensure clean data and a negligible effect of noise sources, such as contamination of the optical elements, we also conduct control experiments. For these control experiments, in order to preserve the experimental conditions and avoid introducing unknown noise sources, we do not remove the sample or replace it with a reference one. Instead, since only part of the field of view is occupied by the scaffold, an empty area in the vicinity of the sample, which is also observed by the detector, is taken as a reference surface, and the speckle analysis procedure is applied to it. The results of a typical control experiment are presented in Supplement 1, Fig. S1. 2.3 Numerical processing Characterization through dynamic speckle analysis provides a useful description of the surface properties of the sample under study. The activity of the sample, especially in biomaterials, may be revealed through different analysis outputs. The aim of the present work is to characterize the activity of the bone scaffold as a function of time. This is done by numerically processing the recorded speckle patterns and calculating the statistical parameters defined in this section. Dynamic laser speckle occurs when an object undergoing some activity is illuminated by laser light; the high spatial and temporal coherence of the illumination keeps the speckle pattern stable as long as the scatterers do not move. The activity observed by dynamic laser speckle in biomaterials can therefore be attributed to their internal features, such as growth and cell division, cytoplasmic movement and biochemical reactions, as well as water-related activities [51–56]. Endogenous motions may also be used as a source of deformation, which allows optical mapping of the viscoelastic properties within biological tissue [56,57]. These motions can be influenced by several factors, ranging from the mechanical properties of the surrounding extracellular matrix to chemokinetic responses [58].
In biodegradable scaffolds, the degradation, which can be quantified in terms of mass loss, porosity change, body distortion, etc., is related to the structure of the scaffolds [19]. We refer to these structural variations as "internal activities". The time history speckle pattern (THSP) is a 2D matrix that represents the time evolution of a set of $M$ points, often called image datapack points, through successive speckle patterns. Each column of the THSP contains the intensities of the $M$ points in one speckle pattern, and each row tracks the time evolution of a single point. The set of $M$ points is chosen randomly from the initial pattern to build the first column of the THSP; the corresponding points in the successive patterns form the remaining columns. The THSP provides an immediate graphical indication of the sample activity level: larger variations along the THSP rows correspond to samples with higher dynamics [59]. The THSP is the basis of further numerical quantities, such as the autocorrelation (AC), the inertia moment (IM), the absolute value of the differences (AVD), and the co-occurrence matrix (COM) [59]. Furthermore, several parameters that do not use the THSP, such as contrast, homogeneity, and roughness parameters (skewness, kurtosis, etc.), may also be evaluated [7]. Here, we consider and define parameters from both sets. The COM is an intermediary, still graphical, matrix for evaluating the dispersion of consecutive pixel values in a THSP of $M$ points monitored through $N$ speckle patterns. It represents a transition histogram of intensities: (1)$$\textrm{COM}(i,j)=\sum_{m=1}^{M}\sum_{n=1}^{N-1} \begin{cases} 1, & \textrm{if} ~~ \textrm{THSP} (m,n) = i \\ & \textrm{and}~~\textrm{THSP}(m,n+1) = j,\\ 0, & \textrm{otherwise.} \end{cases}$$ The IM is a numerical activity indicator defined as: (2)$$\textrm{IM}=\sum_{i}\sum_{j} \frac{\textrm{COM}(i,j)}{\sum_{m}\textrm{COM}(i,m)}|i-j|^2,$$ where the normalization reduces the effect of inhomogeneities in the analyzed images and is performed so that the sum of the values in each row of the COM equals 1. The name "inertia moment" comes from the mechanical analogue of this operation. The autocorrelation (AC) is an important quantity for deriving the mean square displacement (MSD), which characterizes the statistical diffusion of the scatterers in the samples and is particularly meaningful for biological samples [44]. Typically, the speckle intensity temporal autocorrelation curve, AC$(t)$, is obtained by correlating the pixel intensities of the first speckle frame with those of the subsequent frames over the imaging duration or image sequence [60]. Here, the correlation is calculated between the THSP column at instant $i$, THSP$(:,i)$, and the column at instant $i+j$, THSP$(:,i+j)$: (3)$$\textrm{AC}({i,j})=\langle \textrm{THSP}(:,i),\textrm{THSP}(:,i+j)\rangle,$$ where $\langle ~\rangle$ denotes the mean. In the context of diffusing wave spectroscopy, it has been shown that the AC of the speckle pattern can be quantitatively related to the scatterers embedded in a solid matrix, such as the present specimens [61]. Fluctuations in their refractive index or local density, or, more generally, any deformation of them, cause changes in the intensity AC [62].
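For concreteness, the construction of the THSP, the COM of Eq. (1), and the IM of Eq. (2) can be sketched as in the following Python/NumPy snippet; the synthetic frame stack, the number of tracked points, and the 8-bit intensity range are illustrative assumptions, and the snippet is a minimal sketch rather than the processing code used in this work.

```python
import numpy as np

def build_thsp(frames, n_points=200, seed=0):
    """Track n_points random pixels through a stack of speckle frames.

    frames: array of shape (N, H, W) with 8-bit intensities.
    Returns a THSP of shape (n_points, N): each row follows one pixel in
    time, each column corresponds to one frame.
    """
    rng = np.random.default_rng(seed)
    n_frames, h, w = frames.shape
    rows = rng.integers(0, h, n_points)
    cols = rng.integers(0, w, n_points)
    return frames[:, rows, cols].T  # shape (n_points, n_frames)

def co_occurrence(thsp, levels=256):
    """Co-occurrence matrix of intensity transitions along each THSP row (Eq. 1)."""
    com = np.zeros((levels, levels))
    i = thsp[:, :-1].ravel()  # intensity at instant n
    j = thsp[:, 1:].ravel()   # intensity at instant n + 1
    np.add.at(com, (i, j), 1)
    return com

def inertia_moment(com):
    """Inertia moment (Eq. 2): each COM row is normalized to unit sum first."""
    row_sums = com.sum(axis=1, keepdims=True)
    norm = np.divide(com, row_sums, out=np.zeros_like(com), where=row_sums > 0)
    i, j = np.indices(com.shape)
    return np.sum(norm * (i - j) ** 2)

# Example with synthetic data: 100 frames of 700x700 pixels, 8-bit.
frames = np.random.randint(0, 256, size=(100, 700, 700), dtype=np.uint8)
thsp = build_thsp(frames)
im = inertia_moment(co_occurrence(thsp))
print(f"Inertia moment: {im:.1f}")
```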
The AC of the THSP, like the AC of the intensity patterns, is directly related to the MSD of the scatterers between the instants $i$ and $i+j$ [63]: (4)$$\textrm{AC}(i,j)=e^{-{2k\gamma\sqrt{\langle{\Delta r^2(i,j)}\rangle}}},$$ where $k$ is the wave vector of the incident light and $\gamma$ is a coefficient that depends on the polarization state of the light. From the AC curve, the MSD, i.e., $\langle \Delta r^2(i,j)\rangle$, of the scatterers that produce the speckle patterns may therefore be retrieved. Equation (4) is an approximate expression valid for moderately scattering samples with light absorption; the general expression is discussed in detail in [62]. In practice, the MSDs are obtained from curves fitted to the experimental data points. Furthermore, statistical processing of the speckle intensity patterns yields roughness parameters, i.e., different moments of the deviation of the intensities from their mean value throughout the patterns. The average roughness ($S_1$) represents the mean absolute deviation of the intensities from the mean value, and the root mean square ($S_2$) represents the standard deviation of the distribution: (5)$$\begin{aligned} S_1 = \frac{1}{P~Q} \sum_{p=1}^{P} \sum_{q=1}^{Q}|I(p,q) - \langle{I}(p,q)\rangle|, \end{aligned}$$ (6)$$\begin{aligned} S_2 = \bigg[\frac{1}{P~Q} \sum_{p=1}^{P} \sum_{q=1}^{Q}\left[I(p,q) - \langle{I}(p,q)\rangle\right]^2\bigg]^{\frac{1}{2}}, \end{aligned}$$ where $P$ and $Q$ are the horizontal and vertical sizes of the speckle patterns, $p$ and $q$ index the pixels, and $I$ is the intensity of the speckle patterns. These metrics provide a general estimate of the roughness of the distribution. Similarly, skewness, $S_3$, and kurtosis, $S_4$, are other common roughness parameters used to evaluate the samples: (7)$$\begin{aligned} S_3 = \frac{1}{P~Q~S_2^3} \sum_{p=1}^{P} \sum_{q=1}^{Q}\left[I(p,q) - \langle{I}(p,q)\rangle\right]^3, \end{aligned}$$ (8)$$\begin{aligned} S_4 = \frac{1}{P~Q~S_2^4} \sum_{p=1}^{P} \sum_{q=1}^{Q}\left[I(p,q) - \langle{I}(p,q)\rangle\right]^4. \end{aligned}$$ Being the third moment of the deviation from the mean value, $S_3$ measures the degree of symmetry of the intensity distribution. Negative skew indicates a predominance of valleys, i.e., low intensities, while positive skew indicates a "peaky" distribution. $S_3 = 0$ indicates a symmetric intensity distribution, and $S_3 > 1$ ($S_3 < -1$) indicates the presence of extreme peaks (valleys) in the pattern [64]. Kurtosis, $S_4$, measures the sharpness of the distribution throughout the pattern: for a random intensity distribution with a Gaussian probability density function, $S_4 = 3.0$. Kurtosis is related to the width of the intensity distribution; values smaller than 3.0 indicate broader distributions, corresponding to gradually varying speckle patterns free of extreme peaks or valleys, whereas values greater than 3.0 indicate the presence of inordinately high peaks or deep valleys. Because it retains the sign of the deviations from the mean value, we choose skewness as the representative parameter for the speckle pattern roughness analysis. There are, of course, further parameters that can retrieve additional information about the samples through statistical analysis.
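A minimal sketch of how the roughness parameters of Eqs. (5)–(8) and the exponential decay of Eq. (4) might be evaluated is given below; the normalized autocorrelation estimator, the use of scipy.optimize.curve_fit, and the assumption that $2k\gamma\sqrt{\textrm{MSD}}$ grows linearly in time are illustrative choices made for this sketch, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def roughness(pattern):
    """Average roughness S1, RMS S2, skewness S3 and kurtosis S4 of a
    speckle intensity pattern (Eqs. 5-8)."""
    dev = pattern - pattern.mean()
    s1 = np.mean(np.abs(dev))
    s2 = np.sqrt(np.mean(dev ** 2))
    s3 = np.mean(dev ** 3) / s2 ** 3
    s4 = np.mean(dev ** 4) / s2 ** 4
    return s1, s2, s3, s4

def speckle_ac(thsp):
    """Temporal autocorrelation of the THSP columns versus lag.

    Eq. (3) leaves the normalization implicit; here each value is divided
    by the zero-lag value so that AC(0) = 1.
    """
    thsp = thsp.astype(float)
    n = thsp.shape[1]
    ac = np.array([np.mean(thsp[:, : n - lag] * thsp[:, lag:]) for lag in range(n)])
    return ac / ac[0]

def fit_msd_slope(ac, fps=25.0):
    """Fit AC(t) = exp(-a t), i.e. Eq. (4) with 2*k*gamma*sqrt(MSD) assumed
    to grow linearly in time, and return the slope a."""
    t = np.arange(len(ac)) / fps
    popt, _ = curve_fit(lambda t, a: np.exp(-a * t), t, ac, p0=[1.0])
    return popt[0]

# Example with a synthetic intensity pattern.
pattern = np.random.rand(700, 700)
print("S1, S2, S3, S4 =", roughness(pattern))
```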
Among these further parameters, for example, some involve the distribution of the derivatives of the intensity distribution, which provide complementary information about its roughness [64]. It should be noted that the roughness parameters are calculated for the "intensity" distribution throughout the speckle field and are assumed to reflect the roughness of the sample surface. The two sets of parameters are inherently different, as the former is associated with intensity fluctuations and the latter with height fluctuations across the sample surface. However, it is the rough surface that produces the speckle pattern when illuminated with the laser beam, through the scattering of the light rays, and similar trends of the two sets of parameters have already been studied and reported for various samples [65,66]. Figures 2(a)–2(f) show the THSP matrices for a typical scaffold sample at the beginning of the experiment and 1 to 5 h after the experiment starts. The THSPs are built by tracking the intensity of 200 random pixels through a collection of 100 speckle patterns. Large fluctuations in the intensity of these points therefore reflect high internal activity of the scaffold. Recognizable bright horizontal lines appearing and disappearing, as well as the appearance of broken lines in the THSPs as time passes, indicate a growing activity of the scaffold during the 5 h of sample examination. In cases of very high activity the THSP turns into an ordinary speckle pattern, in which bright lines can no longer be recognized and the structure resembles a random light field. To better understand the changes in activity, we calculated the associated COM matrices of the samples at the aforementioned time intervals. Figures 3(a)–3(f) show the 3D plot and 2D map of the COM matrix of the THSPs of the scaffold's time evolution (depicted in Fig. 2) at every 1 h after the experiment starts. Reference level and comparison level correspond to the intensity levels $i$ and $j$ in Eq. (1), respectively. Two observations can be made from Fig. 3: (1) the points spread farther away from the principal diagonal of the COM and the matrix increasingly resembles a cloud as time passes, and (2) the number of points with very high COM values decreases at longer times. According to the definitions of the COM and the THSP, higher activities are associated with more frequent and larger departures from the diagonal per unit time. Therefore, distributions concentrated around the principal diagonal correspond to homogeneous samples, while the appearance of nonzero elements far from the diagonal indicates strong fluctuations in the sample. Usually, the COM values are normalized, giving the "modified co-occurrence matrix" [67], which represents the transition probability matrix between intensity values in the THSP; the number of points with high COM values therefore does not provide additional information about the sample. To provide a more quantitative assessment of the spread of the COM values around the principal diagonal as an activity indicator for the scaffold samples, we computed the IM values of several samples as a function of time. Figure 4 shows the average IM value of the 5 samples at every 30 min. Since the IM is defined as a sum of the squared distances to the principal diagonal of the COM, weighted row by row, it is an appropriate quantitative measure of the "cloudiness", i.e., the spread of the COM distribution around its principal diagonal.
Figure 4 shows the increase of the IM values, and hence of the activity of the scaffold samples, as time passes during the experiment. The error bars correspond to averaging over the five IM values associated with the five samples. The increase of the IM can be attributed to the scaffold activity during the migration of its internal substructure. However, the increase does not follow a single linear behavior over the whole experiment; at longer times after the experiment onset the upward trend flattens. Moreover, the error bars also grow slightly during the experiment. Fig. 2. Time history speckle pattern (THSP), formed by tracking 200 random points throughout a collection of 100 speckle patterns of the scaffold (image datapack points). The results at (a) the beginning of the experiment, and (b) t=1 h, (c) t=2 h, (d) t=3 h, (e) t=4 h, and (f) t=5 h after the experiment starts are shown. Fig. 3. (a-f) 3D plot and 2D map of the COM matrix of the associated THSPs in Fig. 2 at every one hour after the experiment starts. Reference level and comparison level show the intensity levels $i$ and $j$ in Eq. (1), respectively. Fig. 4. Average inertia moment over the scaffold samples' THSPs as a function of time. The fact that the scatterers in the bone scaffold can also move during morphogenesis [62] enables us to consider rheology-related indicators [44]. Typically, the speckle intensity temporal autocorrelation curve is obtained by correlating the pixel intensities of the first speckle frame with those of the subsequent frames over the acquisition duration, i.e., from the constructed THSPs. Figure 5(a) shows the AC as a function of time for the 100 speckle patterns acquired at every 1 h; the different bluish colors depict different evaluation times. To each set of calculated AC values we fit an exponential function according to Eq. (4). The fitted functions are depicted by reddish lines for the different evaluation times, and the resulting fitting parameters are shown in the legend of Fig. 5(a) (Y1 to Y5). According to Eq. (4), a linear variation of $2k\gamma {\sqrt {\textrm {MSD}}}$ with time is expected; accordingly, the fitted curves plotted in Fig. 5(b) are linear functions of time. They are plotted over 4 seconds, since the 100 patterns are acquired within 4 seconds. The curves show that the scatterers in the samples move apart as time passes, which is another indicator of the increasing activity of the samples. Fig. 5. (a) Autocorrelation as a function of time for 100 speckle patterns and best fitting functions. (b) $2k\gamma \sqrt {\textrm {MSD}}$ as a function of time extracted from the AC curves. Furthermore, we examined the roughness parameters of the samples. These parameters provide overall information about the surface structure, which is important for porous structures such as the bone scaffold studied here. Figure 6 shows the variations of the skewness parameter. Each data point is obtained by averaging the skewness values of the 100 speckle patterns associated with each sample. The error bars correspond to averaging over the five mean skewness values, each of which is itself an average over the 100 associated speckle patterns of one sample. The variation of the skewness shows that the surfaces of the samples become rougher at longer times owing to the internal activities occurring in the samples.
However, similar to Fig. 4, the increase in the skewness values is larger in the initial stages than in the final stages. In Supplement 1, Fig. S4, we also present a similar examination of the average roughness, root mean square, and kurtosis parameters. Fig. 6. Average skewness of 100 speckle patterns of the scaffold surface as a function of time. The internal activities of the bone scaffolds revealed by the parameters extracted from the speckle pattern analysis are attributed to the interactions between the scaffold material and the PBS solution, which cause degradation over long times. These effects can also be studied at microscopic and submicroscopic scales by several imaging technologies; notably, the degradation rate of scaffolds can be assessed through porosity change, surface wrinkling, body distortion, and pore size change, while the water uptake of the scaffold increases over time. The present methodology, however, captures the collective effect, including both the internal mobilities and the frequent fluctuations of the scaffold matrix. This paper presents an investigation of scaffold morphology, including its surface characteristics and internal structure, based on dynamic speckle pattern analysis. We show that scaffold dynamics can be measured by calculating several statistical and morphological parameters. The results show that the internal activities of the bone scaffolds increase with time, which is attributed to the interactions between the scaffold material and the PBS solution that cause degradation over long times. The authors would like to thank Hamideh Aghahoseini for the preparation of the samples, and Mohammad-Reza Yaftian for his assistance in performing the experiments. See Supplement 1 for supporting content. 1. M. Brotto and M. L. Johnson, "Endocrine crosstalk between muscle and bone," Curr. Osteoporos. Reports 12(2), 135–141 (2014). [CrossRef] 2. H.-W. Kang, S. J. Lee, I. K. Ko, C. Kengla, J. J. Yoo, and A. Atala, "A 3d bioprinting system to produce human-scale tissue constructs with structural integrity," Nat. Biotechnol. 34(3), 312–319 (2016). [CrossRef] 3. S. H. Rao, B. Harini, R. P. K. Shadamarshan, K. Balagangadharan, and N. Selvamurugan, "Natural and synthetic polymers/bioceramics/bioactive compounds-mediated cell signalling in bone tissue engineering," Int. J. Biol. Macromol. 110, 88–96 (2018). [CrossRef] 4. A. M. Mohamed, "An overview of bone cells and their regulating factors of differentiation," The Malaysian Journal of Medical Sciences: MJMS 15(1), 4 (2008). 5. H. Nakamura, "Morphology, function, and differentiation of bone cells," J. Hard Tissue Biol. 16(1), 15–22 (2007). [CrossRef] 6. C. W. Patrick, A. G. Mikos, and L. V. McIntire, Frontiers in Tissue Engineering (Elsevier, 1998). 7. B. Dhandayuthapani, Y. Yoshida, T. Maekawa, and D. S. Kumar, "Polymeric scaffolds in tissue engineering application: a review," Int. J. Polym. Sci. 2011, 1–19 (2011). [CrossRef] 8. A. F. Khan, A. Afzal, A. A. Chaudhary, M. Saleem, L. Shahzadi, A. Jamal, M. Yar, A. Habib, and I. ur Rehman, "Synthesis of highly porous composite scaffolds for trabecular bone repair applications," Sci. Adv. Mater. 7(6), 1177–1186 (2015). [CrossRef] 9. S. Mallick, S. Tripathi, and P. Srivastava, "Advancement in scaffolds for bone tissue engineering: a review," IOSR J Pharm Biol Sci 10(1), 37–54 (2015). [CrossRef] 10. M. A. Velasco, C. A. Narváez-Tovar, and D. A. Garzón-Alvarado, "Design, materials, and mechanobiology of biodegradable scaffolds for bone tissue engineering," BioMed Res. Int.
2015, 1–21 (2015). [CrossRef] 11. M. S. Cortizo and M. S. Belluzo, "Biodegradable polymers for bone tissue engineering," in Industrial Applications of Renewable Biomass Products, (Springer, 2017), pp. 47–74. 12. R. Niranjan, C. Koushik, S. Saravanan, A. Moorthi, M. Vairamani, and N. Selvamurugan, "A novel injectable temperature-sensitive zinc doped chitosan/β-glycerophosphate hydrogel for bone tissue engineering," Int. J. Biol. Macromol. 54, 24–29 (2013). [CrossRef] 13. Q. L. Loh and C. Choong, "Three-dimensional scaffolds for tissue engineering applications: role of porosity and pore size," Tissue Eng. Part B: Rev. 19(6), 485–502 (2013). [CrossRef] 14. A. Wheelton, J. Mace, W. S Khan, and S. Anand, "Biomaterials and fabrication to optimise scaffold properties for musculoskeletal tissue engineering," Curr. Stem Cell Res. & Ther. 11(7), 578–584 (2016). [CrossRef] 15. K. Balagangadharan, S. V. Chandran, B. Arumugam, S. Saravanan, G. D. Venkatasubbu, and N. Selvamurugan, "Chitosan/nano-hydroxyapatite/nano-zirconium dioxide scaffolds with mir-590-5p for bone regeneration," Int. J. Biol. Macromol. 111, 953–958 (2018). [CrossRef] 16. S. Saravanan, R. Leena, and N. Selvamurugan, "Chitosan based biocomposite scaffolds for bone tissue engineering," Int. J. Biol. Macromol. 93, 1354–1365 (2016). [CrossRef] 17. S. Prasadh and R. C. W. Wong, "Unraveling the mechanical strength of biomaterials used as a bone scaffold in oral and maxillofacial defects," Oral Sci. Int. 15(2), 48–55 (2018). [CrossRef] 18. A. Sughanthy, M. Ansari, A. Siva, and M. Ansari, "A review on bone scaffold fabrication methods," Int. Res. J. Eng. Technol 2(6), 1232–1238 (2015). 19. P. Chocholata, V. Kulda, and V. Babuska, "Fabrication of scaffolds for bone-tissue regeneration," Materials 12(4), 568 (2019). [CrossRef] 20. T. Ghassemi, A. Shahroodi, M. H. Ebrahimzadeh, A. Mousavian, J. Movaffagh, and A. Moradi, "Current concepts in scaffolding for bone tissue engineering," Arch. Bone Jt. Surg. 6(2), 90 (2018). [CrossRef] 21. S. P. Soundarya, A. H. Menon, S. V. Chandran, and N. Selvamurugan, "Bone tissue engineering: Scaffold preparation using chitosan and other biomaterials with different design and fabrication techniques," Int. J. Biol. Macromol. 119, 1228–1239 (2018). [CrossRef] 22. A. M. Ferreira, P. Gentile, V. Chiono, and G. Ciardelli, "Collagen for bone tissue regeneration," Acta Biomater. 8(9), 3191–3200 (2012). [CrossRef] 23. R. LogithKumar, A. KeshavNarayan, S. Dhivya, A. Chawla, S. Saravanan, and N. Selvamurugan, "A review of chitosan and its derivatives in bone tissue engineering," Carbohydr. Polym. 151, 172–188 (2016). [CrossRef] 24. K. Balagangadharan, S. Dhivya, and N. Selvamurugan, "Chitosan based nanofibers in bone tissue engineering," Int. J. Biol. Macromol. 104, 1372–1382 (2017). [CrossRef] 25. C. Ma, H. Zhang, S. Yang, R. Yin, X. Yao, and W. Zhang, "Comparison of the degradation behavior of plga scaffolds in micro-channel, shaking, and static conditions," Biomicrofluidics 12(3), 034106 (2018). [CrossRef] 26. D. Campbell, R. A. Pethrick, and J. R. White, Polymer Characterization: Physical Techniques (CRC Press, 2000). 27. A. Bismarck, I. Aranberri-Askargorta, J. Springer, T. Lampke, B. Wielage, A. Stamboulis, I. Shenderovich, and H.-H. Limbach, "Surface characterization of flax, hemp and cellulose fibers; surface properties and the water uptake behavior," Polym. Compos. 23(5), 872–894 (2002). [CrossRef] 28. B. P. Santana, F. Nedel, C. Perelló Ferrúa, R. M. e Silva, A. F. da Silva, F. F. Demarco, and N. 
Lenin Villarreal Carreño, "Comparing different methods to fix and to dehydrate cells on alginate hydrogel scaffolds using scanning electron microscopy," Microsc. Res. Tech. 78(7), 553–561 (2015). [CrossRef] 29. J. Z. Kovacs, K. Andresen, J. R. Pauls, C. P. Garcia, M. Schossig, K. Schulte, and W. Bauhofer, "Analyzing the quality of carbon nanotube dispersions in polymers using scanning electron microscopy," Carbon 45(6), 1279–1288 (2007). [CrossRef] 30. V. Bellitto, Atomic Force Microscopy: Imaging, Measuring and Manipulating Surfaces at the Atomic Scale (BoD–Books on Demand, 2012). 31. K. C. Khulbe, C. Feng, and T. Matsuura, Synthetic Polymeric Membranes: Characterization by Atomic Force Microscopy (Springer Science & Business Media, 2007). 32. A. Dawson, C. Dyer, J. Macfie, J. Davies, L. Karsai, J. Greenman, and M. Jacobsen, "A microfluidic chip based model for the study of full thickness human intestinal tissue using dual flow," Biomicrofluidics 10(6), 064101 (2016). [CrossRef] 33. J.-P. Bouchaud and A. Georges, "Anomalous diffusion in disordered media: statistical mechanisms, models and physical applications," Phys. Rep. 195(4-5), 127–293 (1990). [CrossRef] 34. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts and Company Publishers, 2007). 35. Y. a. Aizu and T. Asakura, "Bio-speckle phenomena and their application to the evaluation of blood flow," Opt. Laser Technol. 23(4), 205–219 (1991). [CrossRef] 36. N. Fujisawa, S. Aiura, M. Ohkubo, and T. Shimizu, "Temperature measurement of dilute hydrogen flame by digital laser-speckle technique," J. Visualization 12(1), 57–64 (2009). [CrossRef] 37. H. Fujii, T. Asakura, K. Nohira, Y. Shintomi, and T. Ohura, "Blood flow observed by time-varying laser speckle," Opt. Lett. 10(3), 104–106 (1985). [CrossRef] 38. H. Fujii, K. Nohira, Y. Yamamoto, H. Ikawa, and T. Ohura, "Evaluation of blood flow by laser speckle image sensing. part 1," Appl. Opt. 26(24), 5321–5325 (1987). [CrossRef] 39. H. Fujii, "Visualisation of retinal blood flow by laser speckle flowgraphy," Med. Biol. Eng. Comput. 32(3), 302–304 (1994). [CrossRef] 40. H. J. Rabal and R. A. Braga Jr, Dynamic Laser Speckle and Applications (CRC Press, 2008). 41. R. A. Braga, I. M. Dal Fabbro, F. M. Borem, G. Rabelo, R. Arizaga, H. J. Rabal, and M. Trivi, "Assessment of seed viability by laser speckle techniques," Biosyst. Eng. 86(3), 287–294 (2003). [CrossRef] 42. V. F. Rad, E. E. Ramírez-Miquet, H. Cabrera, M. Habibi, and A.-R. Moradi, "Speckle pattern analysis of crumpled papers," Appl. Opt. 58(24), 6549–6554 (2019). [CrossRef] 43. Z. Hajjarian, H. T. Nia, S. Ahn, A. J. Grodzinsky, R. K. Jain, and S. K. Nadkarni, "Laser speckle rheology for evaluating the viscoelastic properties of hydrogel scaffolds," Sci. Rep. 6(1), 37949 (2016). [CrossRef] 44. Z. Hajjarian and S. K. Nadkarni, "Tutorial on laser speckle rheology: technology, applications, and opportunities," J. Biomed. Opt. 25(05), 1 (2020). [CrossRef] 45. M. Z. Ansari and A. K. Nirala, "Following the drying process of fevicol (adhesive) by dynamic speckle measurement," J. Opt. 45(4), 357–363 (2016). [CrossRef] 46. J. Pomarico, H. Di Rocco, L. Alvarez, C. Lanusse, L. Mottier, C. Saumell, R. Arizaga, H. Rabal, and M. Trivi, "Speckle interferometry applied to pharmacodynamic studies: evaluation of parasite motility," Eur. Biophys. J. 33(8), 694–699 (2004). [CrossRef] 47. G. Chen, T. Ushida, and T. Tateishi, "Development of biodegradable porous scaffolds for tissue engineering," Mater. Sci. 
Eng., C 17(1-2), 63–69 (2001). [CrossRef] 48. S. Deville, E. Saiz, and A. P. Tomsia, "Freeze casting of hydroxyapatite scaffolds for bone tissue engineering," Biomaterials 27(32), 5480–5489 (2006). [CrossRef] 49. P. Handzlik and K. Fitzner, "Corrosion resistance of ti and ti–pd alloy in phosphate buffered saline solutions with and without h2o2 addition," Trans. Nonferrous Met. Soc. China 23(3), 866–875 (2013). [CrossRef] 50. S. Du, K. Kendall, P. Toloueinia, Y. Mehrabadi, G. Gupta, and J. Newton, "Aggregation and adhesion of gold nanoparticles in phosphate buffered saline," J. Nanopart. Res. 14(3), 758 (2012). [CrossRef] 51. R. A. Braga, L. Dupuy, M. Pasqual, and R. Cardoso, "Live biospeckle laser imaging of root tissues," Eur. Biophys. J. 38(5), 679–686 (2009). [CrossRef] 52. R. Cardoso and R. Braga, "Enhancement of the robustness on dynamic speckle laser numerical analysis," Opt. Lasers Eng. 63, 19–24 (2014). [CrossRef] 53. H. C. Grassi, L. C. García, M. L. Lobo-Sulbarán, A. Velásquez, F. A. Andrades-Grassi, H. Cabrera, J. E. Andrades-Grassi, and E. D. Andrades, "Quantitative laser biospeckle method for the evaluation of the activity of trypanosoma cruzi using vdrl plates and digital analysis," PLoS Neglected Trop. Dis. 10(12), e0005169 (2016). [CrossRef] 54. E. E. Ramírez-Miquet, H. Cabrera, H. C. Grassi, E. d. J. Andrades, I. Otero, D. Rodríguez, and J. G. Darias, "Digital imaging information technology for biospeckle activity assessment relative to bacteria and parasites," Lasers Med. Sci. 32(6), 1375–1386 (2017). [CrossRef] 55. A. Zdunek, A. Adamiak, P. M. Pieczywek, and A. Kurenda, "The biospeckle method for the investigation of agricultural crops: A review," Opt. Lasers Eng. 52, 276–285 (2014). [CrossRef] 56. A. Vladimirov, A. Baharev, A. Malygin, J. Mikhailova, I. Novoselova, and D. Yakin, "Applicaton of speckle dynamics for studies of cell metabolism," in Optical Methods for Inspection, Characterization, and Imaging of Biomaterials II, vol. 9529 (International Society for Optics and Photonics, 2015), p. 95291F. 57. R. A. Braga, R. J. González-Peña, D. C. Viana, and F. P. Rivera, "Dynamic laser speckle analyzed considering inhomogeneities in the biological sample," J. Biomed. Opt. 22(4), 045010 (2017). [CrossRef] 58. S. Chung, R. Sudo, P. J. Mack, C.-R. Wan, V. Vickerman, and R. D. Kamm, "Cell migration into scaffolds under co-culture conditions in a microfluidic platform," Lab Chip 9(2), 269–275 (2009). [CrossRef] 59. R. Braga, W. Silva, T. Sáfadi, and C. Nobre, "Time history speckle pattern under statistical view," Opt. Commun. 281(9), 2443–2448 (2008). [CrossRef] 60. Z. Xu, C. Joenathan, and B. M. Khorana, "Temporal and spatial properties of the time-varying speckles of botanical specimens," Opt. Eng. 34(5), 1487–1503 (1995). [CrossRef] 61. D. Bicout and R. Maynard, "Diffusing wave spectroscopy in inhomogeneous flows," Phys. A 199(3-4), 387–411 (1993). [CrossRef] 62. M.-Y. Nagazi, G. Brambilla, G. Meunier, P. Marguerès, J.-N. Périé, and L. Cipelletti, "Space-resolved diffusing wave spectroscopy measurements of the macroscopic deformation and the microscopic dynamics in tensile strain tests," Opt. Lasers Eng. 88, 5–12 (2017). [CrossRef] 63. Z. Hajjarian and S. K. Nadkarni, "Evaluating the viscoelastic properties of tissue from laser speckle fluctuations," Sci. Rep. 2(1), 316 (2012). [CrossRef] 64. E. Gadelmawla, M. Koura, T. Maksoud, I. Elewa, and H. Soliman, "Roughness parameters," J. Mater. Process. Technol. 123(1), 133–145 (2002). [CrossRef] 65. H. Peregrina-Barreto, E. 
Perez-Corona, J. Rangel-Magdaleno, R. Ramos-Garcia, R. Chiu, and J. C. Ramirez-San-Juan, "Use of kurtosis for locating deep blood vessels in raw speckle imaging using a homogeneity representation," J. Biomed. Opt. 22(6), 066004 (2017). [CrossRef] 66. N. V. Petrov, P. V. Pavlov, and A. N. Malov, "Numerical simulation of optical vortex propagation and reflection by the methods of scalar diffraction theory," Quantum Electron. 43(6), 582–587 (2013). [CrossRef] 67. R. Arizaga, M. Trivi, and H. Rabal, "Speckle time evolution characterization by the co-occurrence matrix analysis," Opt. Laser Technol. 31(2), 163–169 (1999). [CrossRef]
Supplementary Material: Supplement 1, control experiment results and calculation of roughness parameters.
3D flame reconstruction and error calculation from a schlieren projection C. Alvarez-Herrera1, J. L. Herrera-Aguilar1, D. Moreno-Hernández2, A. V. Contreras-García1 & J. G. Murillo-Ramírez3 In this work a 3D flame reconstruction is performed from a 2D projection of the hot gases of a combustion flame. The projection is obtained using an optical schlieren technique, in which a schlieren image is integrated linearly to obtain the hot gas distribution and, from it, a temperature field. Each row of the matrix representing the temperature distribution is fitted with a specific function, and the corresponding fitting error is calculated, so that the projected matrix can be represented by the fitted functions. A slice of the flame is then obtained by assuming cylindrical symmetry and multiplying the fitted function by itself. Finally, the mean error of the temperature values reconstructed under the cylindrical-symmetry assumption was evaluated, giving an accuracy of 96%, which validates the efficiency of our method. Combustion flames are an important subject of study owing to their technological importance, fluctuating oil prices, and the air pollution produced by combustion processes; these are strong motivations for seeking more efficient flames. Furthermore, fossil fuels constitute a finite energy reservoir that is continuously being depleted [1]. For these reasons, increasing the efficiency of combustion flames is important, as it reduces fuel consumption and thereby both greenhouse gas emissions and cost. Hence, to gain a better understanding of combustion flames, several nondestructive full-field optical techniques are used, such as interferometric tomography, shadowgraphy, and schlieren imaging [2]. Schlieren techniques quantify the ray deviation at the observation plane; the rays deviate because of changes in the refractive index caused by temperature variations. This technique is used in this manuscript to analyze the hot gases produced by the combustion in flames and to obtain the shape of the flame. The schlieren technique is commonly used in wind tunnels for aeronautics and in flame diagnostics, since it is suitable and easy to implement [2, 3]. Real combustion flames consist of hot gases distributed in a volume, and the flame temperature is one of the parameters that can be used to determine the efficiency of the combustion process. However, the schlieren technique provides information integrated along the optical path: it quantifies, in a projection plane, the refractive index gradients caused by the hot gases distributed inside a volume. The resulting image at the observation plane therefore contains three-dimensional information about the hot gases. Although the information in the projected plane is commonly useful, it is important to recover the three-dimensional information of the volumetric object. To this end, some authors have used projections at different angles to reconstruct a volumetric object using the Radon transform [4]. Other authors have used the assumption of cylindrical geometry to apply the Abel transform [5], as is done in this work to obtain a volumetric reconstruction of the object.
It is well known that flames do not have a perfectly cylindrical shape, so calculating the error is essential to obtain a realistic measure of the accuracy of the reconstruction. In this work, the error of the three-dimensional flame reconstruction was evaluated. The main goal of this work was to perform a three-dimensional reconstruction of a flame, starting from a 2D projection of the hot gases, using a simple and novel mathematical method based on the schlieren technique and on the assumption of cylindrical symmetry of the spatial temperature distribution in the flame. Theoretical background The three-dimensional reconstruction of the flame in the present work uses the schlieren optical technique for data acquisition in the form of refractive index gradients, and it relies on the assumption of cylindrical geometry to process the results. Schlieren technique theory The schlieren technique was used to visualize the two-dimensional refractive index gradients caused by the hot gases of a combustion flame. Light ray propagation through a transparent medium is used to describe how the schlieren technique works, as shown in Fig. 1. When a light ray travels through a transparent medium with thickness W=ζ2−ζ1 and refractive index n=n(x,y,z), it experiences a certain deflection angle, forming a projection of the object in the image plane. Ray propagation in an inhomogeneous medium is well described by the eikonal equation [1, 2, 6]: $$ \frac{d}{ds}\left(n \frac{d{\textbf{r}}}{ds} \right)= \nabla n\ . $$ Light ray propagation through the test volume with non-uniform refractive index Here ds is the arc length, defined by ds2=dx2+dy2+dz2, and r is a position vector. The light ray passes through the inhomogeneous medium in the z direction from ζ1 to ζ2, so ds can be replaced by dz, and each displacement dz generates a small angular deflection. Assuming the configuration depicted in Fig. 1, Eq. (1) takes the form: $$ \frac{\partial}{\partial z}\left(n\frac{\partial x}{\partial z}\right) =\frac{\partial n}{\partial x}\ , $$ $$ \frac{\partial}{\partial z}\left(n\frac{\partial y}{\partial z}\right) =\frac{\partial n}{\partial y} \ . $$ Integrating Eqs. (2) and (3) on both sides from ζ1 to ζ2 gives: $$ n\left(\frac{dx}{dz}\right)_{\zeta_{2}}-n\left(\frac{dx}{dz}\right)_{\zeta_{1}} =\int_{\zeta_{1}}^{\zeta_{2}}\frac{\partial n}{\partial x}dz $$ $$ n\left(\frac{dy}{dz}\right)_{\zeta_{2}}-n\left(\frac{dy}{dz}\right)_{\zeta_{1}} =\int_{\zeta_{1}}^{\zeta_{2}}\frac{\partial n}{\partial y}dz \ . $$ However, when the ray reaches ζ1 there is no deflection yet, so $$ \left(\frac{dx}{dz}\right)_{\zeta_{1}}=0= \left(\frac{dy}{dz}\right)_{\zeta_{1}} \ , $$ as shown in Fig. 1. Substituting the condition of Eq. (6) into Eqs. (4) and (5), respectively, yields: $$ \left(\frac{dx}{dz}\right)_{\zeta_{2}}=\frac{\Delta x}{f}\ , $$ $$ \left(\frac{dy}{dz}\right)_{\zeta_{2}}=\frac{\Delta y}{f}\ , $$ where Δx and Δy are the infinitesimal displacements subtended by the angles εx and εy in the Z−X and Z−Y planes, respectively. Considering the hot combustion gases of the flame as the object of study, and given the cylindrical shape of the flame, the information obtained in the x direction can be assumed to be more significant than that in the y direction; for this reason the x direction of Eq. (7) is used in this work. Under the small-angle approximation, tan(εx) can be replaced by εx.
1, tan(εx)=Δx/f, with f the focal length of the optical system that projects the image of the test volume. Using the approximation εx=Δx/f as in [6] and employing Eqs. (4) and (7), one obtains: $$ \epsilon_{x} = \int_{\zeta_{1}}^{\zeta_{2}}\frac{1}{n}\frac{\partial n}{\partial x}dz. $$ On the other hand, from the Gladstone–Dale relation (n−1)=Kρ it is possible to obtain ∂ρ/∂x, which can be substituted into Eq. (9) to get: $$ \frac{\partial \rho}{\partial x} = \frac{\delta x}{KWf}; $$ where K is the Gladstone–Dale constant, which depends on the medium and on the wavelength used in the measurement, ρ is the medium density, W is the object width, δx is the displacement in the x direction experienced by a light ray passing through the inhomogeneous medium, which is linearly related to the intensity obtained in the image plane according to Figs. 2 and 3, and f is the focal length of the second mirror. Integrating the density gradient then gives: $$ \rho(x) = \rho_{0}+ \frac{1}{KWf}\int_{x_{1}}^{x_{2}}\delta x\, dx, $$ Schematic diagram of the Z-type schlieren experimental set-up used in this research to obtain the 2D gradient of the hot-gas density induced by the flame Close-up of the focal-plane slit in the schlieren experimental set-up used in this research to obtain the 2D gradient of the hot-gas density induced by the flame where ρ(x) is the density of the hot combustion gases along the x axis and ρ0 is the nominal density of air at room temperature. This result is relevant because it can be used to obtain the temperature T(x) along the x axis through the following relation, where T0 stands for the reference temperature: $$ T\left(x\right) = \frac{\rho_{0}}{\rho(x)}T_{0}\ . $$ As can be observed in Eq. (12), the temperature can be obtained from a projection of the volumetric object, as mentioned above. That is, the two-dimensional projection of the hot gases of the flame carries intrinsic 3D information about the combustion process, and recovering it is precisely the main objective of this work. 3D reconstruction of flame The temperature function of Eq. (12) was obtained with the schlieren technique, depicted in Figs. 2 and 3, applied to the volumetric object; for an object with cylindrical symmetry, this projection is described by the Abel transform: $$ g\left(x\right) =A\left[ \widehat{f} \left(r \right) \right] =\int_{x}^{\infty}\frac{\widehat{f} \left(r\right)rdr}{\sqrt{r^{2}-x^{2}}}\ . $$ where g(x) is the Abel transform of f̂(r). To recover the 3D information, the inverse Abel transform should then be used [7, 8]; several authors apply the inverse Abel transform to reconstruct a volume in three dimensions [7, 9–12]. The procedure proposed in this research for the 3D reconstruction is an alternative to the use of the Abel transform and is easy to implement. For this purpose, the outer product of two coordinate vectors x and y is used, in the form [10]: $$ A = A + xy^{t}, \quad A \in \mathbb{R}^{m\times n},\ x \in \mathbb{R}^{m},\ y \in \mathbb{R}^{n}\ , $$ where x and y are column vectors, so that the product of the m×1 vector x with the 1×n vector yt is an m×n matrix, as shown in the following example: $$ xy^{t}=\left(\begin{array}{c} 1\\ 2\\ 1 \end{array}\right)\left(1,\ 2,\ 1\right) =\left(\begin{array}{ccc} 1 & 2 & 1\\ 2 & 4 & 2\\ 1 & 2 & 1 \end{array}\right)\ . $$ In this work, x contains one row of the temperature field, associated with one height of the flame, and yt is the transpose of that same row [10]. Then, using Eq. (15), the temperature slice at a specific height is obtained.
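As a concrete illustration of Eqs. (10)–(12) and (14), the following Python sketch (not from the paper) integrates a synthetic deflection profile δx into a density row via Eq. (11), converts it to a temperature row via Eq. (12), and then forms one reconstructed slice as the outer product of that row with itself. The Gladstone–Dale constant, the geometry, the reference values and the deflection profile are all assumed, illustrative numbers.

```python
import numpy as np

# All constants below are illustrative assumptions, not the experimental values.
K = 2.26e-4                 # Gladstone-Dale constant (m^3/kg), assumed
W = 0.05                    # object (flame) width (m), assumed
f = 1.54                    # focal length of the second mirror (m)
rho0, T0 = 1.204, 293.15    # ambient air density (kg/m^3) and temperature (K)

x = np.linspace(-0.02, 0.02, 200)          # lateral coordinate (m)
dx = x[1] - x[0]
# synthetic deflection signal standing in for one row of the schlieren image
delta_x = -np.gradient(np.exp(-x ** 2 / (2 * 0.005 ** 2)), x) * 1e-7

# Eq. (11): cumulative integration of the deflection along x gives the density
rho = rho0 + np.cumsum(delta_x) * dx / (K * W * f)

# Eq. (12): conversion of density to temperature
T = rho0 * T0 / rho

# Eq. (14): outer product of the (normalized) temperature row with itself
# gives one horizontal slice of the cylindrically symmetric reconstruction
T_norm = (T - T.min()) / (T.max() - T.min())
slice_2d = np.outer(T_norm, T_norm)
print(T.max() - T0, slice_2d.shape)        # peak heating (K) and slice size
```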
Finally, by concatenating all the temperature slices, the flame volume is reconstructed. Under the cylindrical-symmetry hypothesis, the shape of the flame h at a given height was obtained by fitting a Gaussian to the corresponding row: $$ h(x)=a e^{-(x-b)^{2}/2c^{2}}\ , $$ where a, b and c were calculated for each row by minimizing the mean squared error between the estimated curve and the experimental data, as shown in Fig. 8a, b, c and d. On the other hand, it is well known that a fitting process introduces an error [8]; for this reason, in order to validate the approximation used, the mean square error between the fitted curve and the experimental data was computed. The function h(x) and the Abel transform g(x) theoretically give the same shape when applied in one dimension; that is, both give the projection of a light sheet that passes through a cylindrical transparent medium, cutting a circular slice. These functions describe the temperature profile in the x direction obtained with the schlieren technique. 2D flame reconstruction process Figure 2 shows the top view of the Z-type schlieren experimental set-up, which includes a white-light diode as light source, a metallic stop sheet with a 2-mm-diameter hole to define the dynamic range of the schlieren set-up, two spherical mirrors with diameter D=0.15 m and focal distance f=1.54 m, and a knife edge located at the focal plane. In this set-up, the knife edge is placed vertically at the focal plane and displaced along the x direction, perpendicular to the z axis, as shown in Fig. 3, which illustrates a small displacement δx that produces an intensity change at the image plane located at the camera position. The procedure used to evaluate δx is important because it is related to the intensity in the image plane and to the sensitivity of the schlieren set-up. In this research, the knife edge was displaced along the x direction in small increments δx until half of the light at the focal plane was cut off. Once the knife edge reached the required position it was fixed, so that half of the intensity was blocked and half crossed the focal plane to reach the image plane. The schlieren images were captured at this fixed knife-edge position. The brightness of the schlieren images was evaluated only qualitatively; for this reason, normalized temperature fields were also obtained, and no optical calibration of the schlieren system was performed. Instead, the changes in the light intensity detected by the camera allow the gradient of the hot-gas density to be obtained, as shown in Fig. 5. To form the intensity images, a Lumenera digital camera was used with a Navitar lens with a focal length of 50 mm. The camera provides 30 frames per second with 640 x 640 pixels. The burner was located in the middle of the schlieren set-up, and the hot gases produced by the combustion flame flow in the vertical direction, the same direction in which the knife edge is oriented, so that the schlieren set-up is sensitive in the x direction [13]. Four experimental conditions labeled as "One Open Slot", "Two Open Slots", "Three Open Slots" and "Four Open Slots", hereafter called 1OS, 2OS, 3OS and 4OS respectively, were considered.
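To make the per-row fitting step of Eq. (16) concrete before turning to the individual burner conditions, the following Python sketch (not from the paper) fits a Gaussian to one synthetic row of the normalized temperature field by least squares and reports the mean discrepancy between the fit and the data, in the spirit of the error maps discussed later. The row itself and the initial parameter guesses are assumptions made only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def h(x, a, b, c):
    # Gaussian model of Eq. (16)
    return a * np.exp(-(x - b) ** 2 / (2 * c ** 2))

x = np.arange(400, dtype=float)            # pixel coordinate along the row
rng = np.random.default_rng(0)
row = h(x, 1.0, 200.0, 40.0) + 0.02 * rng.standard_normal(x.size)   # synthetic data

# least-squares estimates of a, b, c for this row (minimizes the squared error)
popt, _ = curve_fit(h, x, row, p0=(row.max(), float(x[row.argmax()]), 50.0))

# mean absolute discrepancy between the fitted curve and the data
mean_err = np.mean(np.abs(h(x, *popt) - row))
print(popt, round(mean_err, 4))
```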
The experimental condition "One Open Slot" means that only one slot (hole) was opened in the burner producing the flame, "Two Open Slots" means that two slots were opened, and so on. During data acquisition, the gas admission pressure to the burner was controlled and adjusted frequently to keep the burning conditions as similar as possible in every case considered. Figure 4 shows a photograph of the butane gas burner used in this research; at the bottom are the circular holes (slots) that allow air admission to the burner. Each experimental condition studied with the schlieren technique produced a data set consisting of a mesh containing the temperature intensity distribution associated with each pixel of the flame image. Processing of the acquired data finally allows the 3D reconstruction of the flame. Photograph of the butane gas burner used to perform the flame temperature measurements. The photograph shows, at the bottom, the circular holes (slots) that allow air admission to the burner and, at the top, the tip of the thermocouple used to measure the flame temperature Gradient of the density of hot gases obtained from the schlieren measurements corresponding to the experimental conditions 1OS, 2OS, 3OS and 4OS of air intake at the burner, respectively. The gas reaction zone depends on the amount of premixed air and fuel, as described by the conical structures located at the base and close to the center of the flame Figure 5 shows the gradient of the hot-gas density obtained from the schlieren measurements corresponding to the experimental conditions 1OS, 2OS, 3OS, and 4OS of air admission in the burner, respectively. As can be observed in Fig. 5, the gradient distribution of the hot gases has characteristic features for each experimental condition; for example, the 2D flame shape tends to be more symmetric, keeping a cylindrical geometry, for the cases corresponding to an even number of opened slots. It is also noticeable how the shape of the reaction zone depends on the premixed air–fuel quantity, as described by the conical structures located at the base and close to the center of the flame. Figure 6 shows processed images of the original 2D data projections of Fig. 5, obtained from the schlieren technique for the experimental conditions: a) One Open Slot (1OS), b) Two Open Slots (2OS), c) Three Open Slots (3OS), and d) Four Open Slots (4OS). In Fig. 6 the temperature intensity distribution was normalized from the schlieren temperature intensities T using the relation Tnormalized=(T−Tmin)/(Tmax−Tmin), where Tmin and Tmax are the minimum and maximum temperature intensities. The normalized temperature distribution was then displayed with a color scale going from blue (cold) to red (hot). The same normalization procedure was used for Figs. 7 and 8. Processed images of the original 2D data projections shown in Fig. 5 obtained from the schlieren technique for the experimental conditions: a) One Open Slot (1OS), b) Two Open Slots (2OS), c) Three Open Slots (3OS), and d) Four Open Slots (4OS). Colors correspond to the temperature intensity distribution, going from blue (cold) to red (hot). The horizontal black lines in the images mark the different flame heights at which slices of the temperature distribution on a horizontal plane were obtained, as shown in Fig.
8. Slices of the reconstructed 3D volume of a flame (the cuts were made at the half value, corresponding to pixel 100 on the x axis). The reconstructed flame volume represents the temperature distribution in 3D for the four data sets: a) 1OS, b) 2OS, c) 3OS and d) 4OS; colors correspond to the temperature intensity, going from blue (cold) to red (hot). The x and z axes correspond to the original x and y axes of the experimental data, respectively. The new axis obtained from the 3D reconstruction is the rotation axis Once the reconstruction of the flame in two dimensions was performed, the reconstruction of the flame in three dimensions was carried out, for which it was necessary to apply the outer product of each coordinate row by itself. In the first step of the 3D flame reconstruction process, a 2D Gaussian-shaped slice was obtained for each height, and these slices were then concatenated to obtain the complete volume. This reconstructed volume represents the temperature distribution in 3D, as shown in Fig. 7 for the four sets 1OS, 2OS, 3OS and 4OS, with the temperature intensity in the flame described by colors going from blue (cold) to red (hot). In Fig. 7 the new x and z axes correspond to the original x and y axes of the experimental data, respectively. The new axis obtained from the 3D reconstruction is called the rotation axis (new y axis) in order to avoid confusion. It is important to note that the flame reconstruction process expands the information on the particular characteristics of the flame with respect to the two-dimensional image originally measured. Error estimation in the 3D reconstruction process To estimate the error of the three-dimensional reconstruction process, regarding the assumption of cylindrical symmetry of the flame temperature around the vertical (rotation) axis, a numerical fit was made, assuming a Gaussian distribution of the flame temperature at various fixed heights, to the data entering the volume reconstruction, which were obtained directly from the experimental schlieren measurements. Several slices were considered at fixed heights in the flame, corresponding to the pixels marked in Fig. 8a, b, c and d, and shown in Fig. 8e, f, g and h, respectively. The mean absolute discrepancy between the fitted Gaussian curve and the experimental data, computed for each of the 400 rows (each row corresponding to a fixed value on the y axis) constituting the flame, is quite small, as shown in Fig. 9 for some representative cases. Nevertheless, in Fig. 9b, corresponding to the 2OS condition, the mean error rises at the beginning of the graph as a consequence of the perturbations occurring at the top of the experimental data, as shown in Figs. 5b and 6b. However, the data associated with the perturbations appearing at the top of the flame in Fig. 5b are far away from the hottest area, which makes it possible to ignore them. Disregarding the disturbance described at the beginning of Fig. 9b, in general terms the mean error was less than 0.04, which means that the reconstruction process has an estimated accuracy of about 96%. Thus, all calculations performed in this research support the validity of our hypothesis of a cylindrically symmetric temperature distribution in the flame.
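A minimal sketch of the volume-assembly step described above: each row (height) of the 2D temperature field yields one outer-product slice, and the slices are stacked along the vertical axis to form the 3D volume. A small synthetic field is used here in place of the experimental 400-row image, so the sizes and the field itself are illustrative assumptions.

```python
import numpy as np

n_rows, n_cols = 100, 128                  # small stand-in for the real image
y = np.arange(n_rows)[:, None]
x = np.arange(n_cols)[None, :]
# synthetic normalized temperature field: one Gaussian row per flame height
field = np.exp(-(x - n_cols / 2) ** 2 / (2 * (10.0 + 0.1 * y) ** 2))

# volume[k] is the slice at height k: the outer product of row k with itself
volume = np.stack([np.outer(row, row) for row in field])
print(volume.shape)                        # (100, 128, 128): height, x, rotation axis
```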
Figures a), b), c) and d) show experimental data (blue circles) for selected pixels on the rotation (y) axis, defining slices at different flame heights that describe the temperature distribution on a horizontal plane (from top to bottom: 50, 200, 325, 350, 365, 370 and 380, respectively), for the four experimental conditions 1OS, 2OS, 3OS and 4OS, respectively. The solid black lines are the fitted Gaussian curves. Panels (e), (f), (g) and (h) are the slices at different flame heights describing the temperature distribution on a horizontal plane, associated with panels (a), (b), (c) and (d), respectively. The temperature intensity goes from blue (cold) to red (hot) Mean relative error between the fitted Gaussian curve and the experimental data (y axis) for each of the 400 rows at fixed values of the y coordinate (x axis) for: a) 1OS, b) 2OS, c) 3OS and d) 4OS experimental conditions Temperature distribution measurement Temperature measurement in the flame using a thermocouple In this research, the temperature distribution of the flame was also measured directly in order to further validate the assumption of cylindrical symmetry. The temperature measurement was carried out for the four conditions (1OS, 2OS, 3OS and 4OS) considered in this work, using a type-K thermocouple as measurement probe connected to a Keithley 2110 digital multimeter. The temperature was measured at three specific flame heights, with respect to the base of the butane gas burner, for each experimental condition. The thermocouple was attached to a vertical translation stage mounted on a rail that allowed horizontal translation, so as to perform a sequential sweep of observation points every two millimeters along a sampling line at a given flame height. Figure 10 shows the temperature distribution as a function of the horizontal spatial coordinate x, measured at three different heights above the base of the flame at the gas burner, for the 1OS, 2OS, 3OS and 4OS experimental conditions. The zero position (x=0) corresponds to the center of the burner. As can be seen in Fig. 10a, the flame in the 1OS condition has a slightly asymmetric temperature distribution, shifted to the right of the burner center, because the air for combustion enters through a single side slot in the "One Open Slot" condition. In the 2OS condition, shown in Fig. 10b, the temperature distribution of the flame is more symmetric around the center of the burner than in the 1OS condition. As can also be seen in Fig. 10b, the temperature distribution measured at the lowest height (0.5 cm) above the burner, located below the flame reaction zone, shows a valley reaching a minimum around the x=0 position. This behavior occurs because the fresh mixture of air and butane gas admitted to the burner has not yet reacted there. Figure 10c, corresponding to the 3OS condition, shows a highly symmetric temperature distribution around the burner position, approximately Gaussian in shape. Clearly, the air admission using three open slots (3OS) provides high stability to the combustion process. It is worth noting that the temperature reached by the flame (≈390∘C) at the minimum height considered (0.5 cm) at x=0 in the 3OS condition was higher than the value measured at the same height for the 2OS experimental condition.
Finally, in the case of the 4OS experimental condition, the temperature spatial distribution of the flame maintains a good degree of symmetry around x=0. Furthermore, the 4OS condition showed a remarkably interesting behavior of the temperature distribution at the minimum height considered (0.5 cm): at x=0 (corresponding to the valleys) the flame reached temperature values higher than those reached at the maximum height considered (4.2 cm), a behavior not found in any of the other experimental conditions. The temperature distribution for the 4OS condition still shows a valley around x=0 for the measurements done at the minimum height considered (0.5 cm), as in the other experimental conditions, but at higher temperatures than those obtained in the 1OS, 2OS and 3OS conditions. In fact, the highest flame temperatures overall were reached at some positions of the x coordinate in this condition. Temperature distribution as a function of the spatial coordinate x in the horizontal direction, measured at three different flame heights with respect to the bottom of the flame at the gas burner, for the experimental conditions: a) 1OS, b) 2OS, c) 3OS and d) 4OS. The zero position (x=0) represents the center of the burner. The solid lines were obtained by interpolation and are only a guide to the eye Temperature measurement from the schlieren technique The temperature distribution of the flame was also obtained from the schlieren technique for one of the representative cases considered in this work. Figure 5 shows the gradient of the hot combustion-gas density obtained from the schlieren measurements corresponding to the experimental conditions 1OS, 2OS, 3OS and 4OS of air admission in the burner generating the flame. As can be observed in Fig. 5, the hot-gas distribution in the flame is easy to appreciate in each experimental condition through the variations of the hot-gas density. In particular, the gas reaction zone is described by the conical structures located at the base and close to the center of the flame in Fig. 5; the size of these conical structures depends on the premixed air–fuel quantity. The temperature distribution of the flame was obtained by first calculating the density of the hot gases along the x direction using Eq. (11) of section 2.1 and then substituting it into Eq. (12). Figure 11 shows a comparison of the temperature spatial distribution at a height of 1.4 cm above the base of the burner, for the 4OS experimental condition, obtained by the schlieren technique and by the measurements done with the type-K thermocouple. For the same height, the normalized schlieren temperature Tnormalized was calibrated against the temperature measured with the thermocouple, using the maximum thermocouple temperature TThermocouple,max and the minimum thermocouple temperature, corresponding to room temperature TThermocouple,min=T0=20∘C, through the relation Tcalibrated,schlieren=Tnormalized(TThermocouple,max−T0)+T0. The calibration in the x direction was done with a pixel size of Pix=0.031053 cm to scale the x axis in centimeters. To help visualize the behavior of the flame temperature points measured with the thermocouple, an interpolation was performed using the Octave function interp1 with the Piecewise Cubic Hermite Interpolating Polynomial (pchip) option.
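The calibration and interpolation step described above can be sketched as follows in Python (the original processing used Octave's interp1 with the pchip option; SciPy's PchipInterpolator plays the same role here). The thermocouple readings, probe positions and the normalized schlieren profile are invented placeholder values; only the calibration relation and the pixel size follow the text.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

T0 = 20.0                                          # room temperature (deg C)
thermo_x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # probe positions (cm), assumed
thermo_T = np.array([120.0, 380.0, 350.0, 390.0, 150.0])   # readings (deg C), assumed

# calibrate the normalized schlieren profile against the thermocouple maximum
pix = 0.031053                                     # pixel size (cm)
x_cm = (np.arange(200) - 100) * pix                # pixel axis converted to cm
T_norm = np.exp(-x_cm ** 2 / (2 * 0.8 ** 2))       # stand-in normalized profile
T_cal = T_norm * (thermo_T.max() - T0) + T0        # T_calibrated,schlieren

# monotone cubic (pchip) interpolation of the sparse thermocouple points,
# equivalent in purpose to Octave's interp1(..., 'pchip')
dense_x = np.linspace(thermo_x.min(), thermo_x.max(), 200)
thermo_dense = PchipInterpolator(thermo_x, thermo_T)(dense_x)
print(round(T_cal.max(), 1), round(thermo_dense.max(), 1))
```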
Comparison of the temperature spatial distribution at a height of 1.4 cm above the base of the burner, for the 4OS experimental condition, obtained by the schlieren technique and by the measurements done with the type-K thermocouple. Both temperature measurements are very similar, considering the uncertainty bars associated with the values obtained using the thermocouple. The continuous line joining the points measured with the thermocouple was obtained by interpolation and is only a guide to the eye As shown in Fig. 11, there is good agreement between the flame temperature data obtained from the schlieren measurements and the direct measurements using the thermocouple, once the uncertainty shown in Fig. 11 for the thermocouple values is taken into account. In this research, a three-dimensional reconstruction of a flame was developed from two-dimensional temperature measurements obtained from the gradient of the hot-gas density, using a simple and efficient method based on the schlieren technique. The reconstruction process was based on the hypothesis of cylindrical symmetry of the spatial temperature distribution in the flame. The method proposed in this work to reconstruct the 3D characteristics of the flame only requires the calculation of the outer product of one sample row of the temperature field with its own transpose to obtain one temperature slice of the flame. Once a collection of temperature slices has been obtained, the temperature distribution in the flame volume can be reconstructed. The mean error of the calculated temperature intensity of each flame under the cylindrical-symmetry assumption was evaluated by comparison with a Gaussian distribution, giving an accuracy of 96%. The temperature distribution of the flame obtained from the schlieren measurements was also calculated, and good agreement was found between these calculations and direct measurements of the flame temperature using a thermocouple. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Vanek, FM, Albright, LD: Energy Systems Engineering Evaluation and Implementation. McGraw-Hill, United States of America (2008). Merzkirch, W: Flow Visualization. 2nd edn. Academic Press, Inc, Orlando (1987). Settles, GS: Schlieren and Shadowgraph Techniques. 1st edn. Springer, Berlin (2001). Deans, SR: The Radon Transform and Some of Its Applications. John Wiley & Sons, Ltd, New York (N.Y.) (1983). Weisstein, EW: CRC Concise Encyclopedia of Mathematics. 2nd edn. Chapman & Hall/CRC, Boca Raton (2003). Hanson, KM, Wecksung, GW: Local basis-function approach to computed tomography. Appl. Opt. 24, 4028–4039 (1985). Dribinski, V, Ossadtchi, A, Mandelshtam, VA, Reisler, H: Reconstruction of Abel-transformable images: The Gaussian basis-set expansion Abel transform method. Rev. Sci. Instrum. 73(7), 2634–2642 (2002). Jain, AK: Fundamentals of Digital Image Processing. Prentice-Hall, Inc, Upper Saddle River, NJ, USA (1989). De La Rosa-Miranda, E, et al: An alternative approach to the tomographic reconstruction of smooth refractive index distributions. J. Eur. Opt. Soc. Rapid Publ. 8, 13036 (2013). ISSN 1990-2573. Golub, GH, Van Loan, CF: Matrix Computations. 3rd edn. Johns Hopkins University Press, Baltimore, MD, USA (1996).
Press, WH, Teukolsky, SA, Vetterling, WT, Flannery, BP: Numerical Recipes: The Art of Scientific Computing. 3rd edn. Cambridge University Press, New York, NY, USA (2007). Sen, A, Srivastava, M: Regression Analysis: Theory, Methods, and Applications. Springer Science & Business Media (2012). Alvarez-Herrera, C, Moreno-Hernández, D, Barrientos-García, B: Temperature measurement of an axisymmetric flame by using a schlieren system. J. Opt. A Pure Appl. Opt. 10, 10 (2008). https://doi.org/10.1088/1464-4258/10/10/104014. The first and second authors would like to thank CONACYT, Facultad de Ingeniería UACH, and PRODEP for the support given to this project. Facultad de Ingeniería, Universidad Autónoma de Chihuahua, Nuevo Campus Universitario, Circuito Universitario S/N, Chihuahua, Chih., 31125, México C. Alvarez-Herrera, J. L. Herrera-Aguilar & A. V. Contreras-García Centro de Investigaciones en Óptica A.C., Lomas del Bosque 115, Lomas del Campestre, León, Guanajuato, C.P. 37150, México D. Moreno-Hernández Centro de Investigación en Materiales Avanzados S.C., Miguel de Cervantes 120, C.P. 31136, Chihuahua, Chih., México J. G. Murillo-Ramírez C. Alvarez-Herrera J. L. Herrera-Aguilar A. V. Contreras-García CAH: design of the work, acquisition, analysis and interpretation of data, and drafting or substantive revision of the manuscript. JLHA: acquisition, analysis and interpretation of data, and drafting or substantive revision of the manuscript. DMH: design of the work, and acquisition, analysis and interpretation of data. AVCG: interpretation of data, and drafting or substantive revision of the manuscript. JGMR: interpretation of data, and drafting or substantive revision of the manuscript. The author(s) read and approved the final manuscript. Correspondence to C. Alvarez-Herrera. The authors declare that they have no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Alvarez-Herrera, C., Herrera-Aguilar, J.L., Moreno-Hernández, D. et al. 3D flame reconstruction and error calculation from a schlieren projection. J. Eur. Opt. Soc.-Rapid Publ. 16, 21 (2020). https://doi.org/10.1186/s41476-020-00141-8 Accepted: 05 August 2020 3D flame reconstruction Schlieren technique
Treatment of Middle East respiratory syndrome with a combination of lopinavir/ritonavir and interferon-β1b (MIRACLE trial): statistical analysis plan for a recursive two-stage group sequential randomized controlled trial Yaseen M. Arabi ORCID: orcid.org/0000-0001-5735-62411,2, Ayed Y. Asiri3, Abdullah M. Assiri4, Hani A. Aziz Jokhdar5, Adel Alothman1,6, Hanan H. Balkhy1,7, Sameera AlJohani1,8, Shmeylan Al Harbi9,10, Suleiman Kojan1,6, Majed Al Jeraisy9,10, Ahmad M. Deeb ORCID: orcid.org/0000-0002-3680-733811,12, Ziad A. Memish13,14, Sameeh Ghazal3, Sarah Al Faraj3, Fahad Al-Hameed ORCID: orcid.org/0000-0003-0231-871X15,16, Asim AlSaedi15,17, Yasser Mandourah18, Ghaleb A. Al Mekhlafi19, Nisreen Murad Sherbeeni20, Fatehi Elnour Elzein20, Abdullah Almotairi21, Ali Al Bshabshe22, Ayman Kharaba23, Jesna Jose24, Abdulrahman Al Harthy25, Mohammed Al Sulaiman26, Ahmed Mady27,28, Robert A. Fowler29,30, Frederick G. Hayden31, Abdulaziz Al-Dawood1,2, Mohamed Abdelzaher32,33, Wail Bajhmom34, Mohamed A. Hussein12,24 & and the Saudi Critical Care Trials group Trials volume 21, Article number: 8 (2020) Cite this article A Study protocol to this article was published on 30 January 2018 The MIRACLE trial (MERS-CoV Infection tReated with A Combination of Lopinavir/ritonavir and intErferon-β1b) investigates the efficacy of a combination therapy of lopinavir/ritonavir and recombinant interferon-β1b provided with standard supportive care, compared to placebo provided with standard supportive care, in hospitalized patients with laboratory-confirmed MERS. The MIRACLE trial is designed as a recursive, two-stage, group sequential, multicenter, placebo-controlled, double-blind randomized controlled trial. The aim of this article is to describe the statistical analysis plan for the MIRACLE trial. The primary outcome is 90-day mortality. The primary analysis will follow the intention-to-treat principle. The MIRACLE trial is the first randomized controlled trial for MERS treatment. ClinicalTrials.gov, NCT02845843. Registered on 27 July 2016. Middle East respiratory syndrome (MERS) is a viral respiratory disease caused by the Middle East respiratory syndrome coronavirus (MERS-CoV). MERS cases continue to occur and are often associated with respiratory and multiorgan failure [1]. There is no antiviral treatment with proven efficacy at present [1, 2]. The MIRACLE trial (MERS-CoV Infection tReated with A Combination of Lopinavir/ritonavir and intErferon-β1b) is the first randomized controlled trial for MERS treatment. The study protocol has been previously published [3]. There are several challenges in a trial for treatment of a disease like MERS: (1) there is not enough information on the effect size of the lopinavir/ritonavir and interferon-β1b provided with standard supportive care compared to placebo provided with standard supportive care to conduct adequate planning for the study sample size; (2) MERS is a sporadic, unpredictable, and rare disease, which makes it difficult to plan a separate pilot study to collect the necessary information needed for the planning of the main trial. To overcome these challenges, we designed the MIRACLE trial as a recursive two-stage adaptive trial, which is a relatively new method for group sequential trials [4,5,6,7]. The approach is based on the conditional error principle, which allows for flexible and continuous adjustment of the trial parameters using data observed during prior stages without inflation of the type I error [8]. 
Another advantage of this method is the flexibility in setting the timing and the number of needed interim analyses. Such flexibility is necessary in a situation where recruitment rate is unpredictable and a sudden flux in recruitment of patients could happen at any time. Finally, the design takes advantage of the accumulated information throughout the trial from every single recruited patient as opposed to a traditional two-study approach (pilot followed by the main trial). In this article, we describe the MIRACLE trial statistical analysis plan (SAP) in advance of trial completion. We identify the procedures to be followed for the primary and secondary analyses for the trial. The SAP was written by the study steering committee members led by the principal investigator, who remains blinded to both group allocation and to study results until after completing patient recruitment, patient follow-up, and completion and locking of the database. The final study report will follow the guidelines of the Consolidated Standards of Reporting Trials (CONSORT) for reporting randomized controlled trials [9, 10]. The trial is being conducted according to the standard requirements of Good Clinical Practice E6 [11]. The SAP was developed in accordance with the International Council for Harmonisation guidelines (E9 Statistical principles for clinical trials and E3 clinical study reports guidelines) [12, 13] and with the Guidelines for the Content of Statistical Analysis Plans in Clinical Trials [14]. The MIRACLE trial is a recursive, two-stage, group sequential, multicenter, randomized, placebo-controlled, double-blind trial. The trial includes hospitalized patients who are 18 years old or older with laboratory-confirmed MERS in addition to evidence of acute organ dysfunction that is judged related to MERS. Inclusion and exclusion criteria have been detailed in a previously published protocol manuscript [3]. Patients are randomized to receive lopinavir/ritonavir and recombinant interferon-β1b or placebo. Randomization is stratified according to center and according to whether the patients require mechanical ventilation (invasive or non-invasive) at the time of enrollment, as mechanical ventilation is a major, but pragmatic, surrogate for severity of illness. The study interventions continue for 14 days or until hospital discharge. Patients are followed up daily until day 28 or hospital discharge and then at day 90. A CONSORT flow diagram of the trial progress will be constructed (Fig. 1). The number of randomized patients to each group will be reported as well as the number of randomized patients who received the interventions. We will also report the number of screened patients (defined as all hospitalized patients with MERS) who met the eligibility criteria but were not enrolled and the reasons for non-enrollment. CONSORT flow chart for the MIRACLE trial The intention-to-treat population consists of all enrolled patients whether or not they received the allocated intervention, and will be used for the primary analysis. A per-protocol analysis will be conducted for patients who received the allocated interventions (defined by any dose of the study intervention). 
Baseline characteristics will be presented for the two study groups (Additional file 1: Table S1), including age, sex, and body mass index, the presence of co-infections, nosocomial versus community-acquired MERS infection, Acute Physiology and Chronic Health Evaluation (APACHE) II scores, Sequential Organ Failure Assessment scores, and the Karnofsky Performance Status Scale score [3]. We will report comorbidities and the interventions received before randomization for the patients in each group. We will report baseline laboratory values (international normalization ratio, platelet count, hemoglobin, white blood cell count, lymphocyte count, liver enzymes, glucose, serum amylase, blood urea nitrogen, creatinine, creatine kinase, lactate) and respiratory and vital parameters, in addition to the location of the patient at the time of randomization. Intervention data For each group we will report the time from hospital admission to randomization and the time from randomization to the first dose received of the study drugs. We will report the study intervention received and its duration for each group, in addition to missing or incomplete doses and protocol violations (Additional file 1: Tables S2 and S8). Co-interventions We will compare any use of vasopressors, renal replacement therapy, neuromuscular blockade, mechanical ventilation, extracorporeal membrane oxygenation (ECMO), nitric oxide, prone ventilation, and tracheostomy. We will also compare the use of intravenous immunoglobulin, antiviral therapy, antibiotics, corticosteroids, and statins (Additional file 1: Table S2). Primary and secondary outcomes The primary outcome is 90-day mortality (Additional file 1: Table S3). The primary outcome is defined as all-cause mortality within 90 days after enrollment in the trial, as either an inpatient or outpatient. Secondary outcomes and subgroups are defined as presented in Table 1 and Additional file 1: Tables S4 and S5. Table 1 Secondary Outcomes in the MIRACLE trial In addition, we will compare the physiological parameters among patients in the treatment group and the control group. All analyses will be performed using SAS 9.4 with specially written code for the analysis of the primary outcome that accounts for the recursive design, as described in Chang [4]. Data and Safety Monitoring Board and interim analyses A detailed interim analysis plan is reported in the MIRACLE protocol [3]. The trial is designed as a recursive, two-stage, group sequential randomized trial. The first interim analysis will be conducted when 34 subjects (17 per group) have completed 90 days of follow-up. This is about 17.5% of the total sample size needed for the classical design (a classic two-group design requires a total of 194 subjects (97 subjects per group) to have 80% power, at a significance level of 5% using a one-sided Z test for the difference in proportions, to detect a 20% absolute risk reduction in 90-day mortality among subjects receiving treatment (20%) compared to the control group (40%)). A Data and Safety Monitoring Board (DSMB) will be convened to review the unblinded data (efficacy and safety) and advise on continuation or termination of the trial. The stopping boundaries of the first two-stage design were calculated using the conditional power method based on summing the stage-wise p values.
At the first interim analysis, the DSMB will determine whether the trial should be terminated for futility, using the following boundaries and their corresponding decisions (Table 2). Table 2 Stopping boundaries in the MIRACLE trial Demographics and clinical characteristics We will summarize and report the demographics and baseline clinical characteristics using descriptive statistics. As appropriate, the chi-square test or Fisher's exact test will be used to compare the categorical variables, which will be reported as numbers and percentages. Student's t test or the Mann–Whitney U test will be used as appropriate to compare the continuous variables, which will be reported as means and standard deviations or as medians and interquartile ranges. Analysis of adverse events All adverse events will be grouped using the Common Terminology Criteria for Adverse Events (CTCAE) Version 4 of the National Institutes of Health (NIH) (Additional file 1: Table S6). Adverse events will be grouped into aggregate groups and reported for the entire study period (Additional file 1: Table S7). All results will be summarized in terms of frequency and percentage and will be compared across study arms using Fisher's exact test. All results will be declared statistically significant at a p value < 0.05. Analysis of the primary outcome and continuous planning of the trial Let K be the number of stages needed to complete the current clinical trial and i ∈ {1, 2} be the index for the two-stage design within the kth stage. Let r1ki and r2ki be the proportions of 90-day mortality in the standard-of-care and treatment groups, respectively. Then the Z test statistic for the difference in proportions is calculated as follows: $$ {Z}_{ki}=\frac{\delta_{ki}}{\sigma_{ki}}\sqrt{\frac{n_{ki}}{2}}, $$ $$ {\sigma}_{ki}=\sqrt{\left[{r}_{1 ki}\left(1-{r}_{1 ki}\right)+{r}_{2 ki}\left(1-{r}_{2 ki}\right)\right]/2}, $$ $$ {\delta}_{ki}={r}_{1 ki}-{r}_{2 ki}, $$ where nki is the per-group sample size for the ith part of the two-stage design at the kth stage. At the interim analysis (i.e., at i = 1 of the kth two-stage design), the primary outcome will be evaluated and the trial sample size will be re-estimated for the subsequent stage based on the observed effect size, using the following formula with a conditional power of 80% (Pc = 0.8), to decide whether the trial should continue: $$ {n}_{k,2}={\left[\frac{\sqrt{2}{\sigma}_{k1}}{{\delta}_{k1}}\left({\Phi}^{-1}\left(1-{\alpha}_{k,2}+{p}_{k,1}\right)-{\Phi}^{-1}\left(1- Pc\right)\right)\right]}^2. $$ Here αk, 2 is the precalculated rejection boundary for efficacy at the second stage of the two-stage design at the kth stage, and pk, 1 is the raw (normal-table) p value corresponding to the Zk1 statistic. At the first interim analysis, should the data suggest that another two-stage step is required, we will recalculate the conditional error and new boundaries will be calculated for K = 2. Let βk + 1, 1, αk + 1, 1 be the rejection boundaries for futility and efficacy for the first part (i = 1) of the two-stage step of the (k + 1)th stage. Then the conditional error is: $$ A\left({p}_{k,1}\right)={\alpha}_{k+1,1}+{\alpha}_{k+1,2\kern0.5em }\left({\beta}_{k+1,1}-{\alpha}_{k+1,1}\ \right)-\frac{1}{2}\left({\beta}_{k+1,1}^2-{\alpha}_{k+1,1}^2\right),k=0,1,\dots K, $$ where A(p0, 1) is the type I error, which is set to 0.05.
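The sample-size re-estimation formula above can be sketched numerically as follows; the interim event counts, the conditional power target and the stage-2 efficacy boundary α_{k,2} used here are hypothetical numbers for illustration, not the trial's actual boundaries.

```python
import numpy as np
from scipy.stats import norm

# hypothetical interim data: 17 patients per arm at the first look
n_k1 = 17
r1, r2 = 7 / n_k1, 4 / n_k1        # assumed 90-day mortality, control vs treatment
Pc = 0.80                          # target conditional power
alpha_k2 = 0.205                   # assumed stage-2 efficacy boundary (p-value sum scale)

delta = r1 - r2
sigma = np.sqrt((r1 * (1 - r1) + r2 * (1 - r2)) / 2)

# stage-wise Z statistic and one-sided p value at the interim (formulas above)
z1 = delta / sigma * np.sqrt(n_k1 / 2)
p1 = 1 - norm.cdf(z1)

# per-group sample size of the second stage needed for conditional power Pc
n_k2 = (np.sqrt(2) * sigma / delta *
        (norm.ppf(1 - alpha_k2 + p1) - norm.ppf(1 - Pc))) ** 2
print(round(z1, 3), round(p1, 3), int(np.ceil(n_k2)))
```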
The new αk + 1, 2 boundary for the kth + 1 stage for pre chosen βk + 1, 1, αk + 1, 1 will be calculated as follows: $$ {\alpha}_{k+1,2}=\frac{A\left({p}_{k,1}\right)+\frac{1}{2}\left({\beta}_{k+1,1}^2-{\alpha}_{k+1,1}^2\right)-{\alpha}_{k+1,1}}{\beta_{k+1,1}-{\alpha}_{k+1,1}\ }. $$ At the end of the trial, the treatment will be declared efficacious if the calculated stage-wise ordered p value pk, 2 is less than αk, 2. The adjusted p value will be obtained using backward recursion as follows: $$ \left\{\begin{array}{c}{P}_{K_0-1,2}=\left\{\begin{array}{c}t\kern23.5em for\ k=1,\\ {}\ {\alpha}_{k_0,1}+t\left({\beta}_{k_0,1}-{\alpha}_{k_0,1}\right)-\frac{1}{2}\left({\beta^2}_{k_0,1}-{\alpha^2}_{k_0,1}\right)\kern1.5em for\ k=2,\end{array}\right.\\ {}{P}_{i-1,2}={\alpha}_{i,1}+\left({p}_{i,1}+{p}_{i,2}\right)\left({\beta}_{i,1}-{\alpha}_{i,1}\right)-\frac{1}{2}\left({\beta^2}_{i,1}-{\alpha^2}_{i,1}\right)\kern2.75em for\ i=1,\dots, {K}_{0-1}\end{array}\right.\operatorname{} $$ where K0 is the total number of two-stage stages, and t is the sum of stage-wise raw p values. Finally, the adjusted overall 95% one-sided confidence interval will be calculated by: $$ {\delta}_c=\underset{1\le \mathrm{i}\le {K}_0-1}{\max}\left\{{\delta}_{i,1},{\delta}_{k_0,2}\right\}, $$ where δi, 1 and \( {\delta}_{k_0,2} \) are the stage-wise and the last stage of the kth two-stage design confidence interval bound. The last stage confidence bound \( {\delta}_{k_0,2} \) can be found by solving the following equation numerically for \( {\delta}_{k_0,2} \): $$ \Phi \left(\frac{\delta_{k_0,2}}{\overset{\sim }{\sigma }}\sqrt{\frac{n_{k0,1}}{2}}-{z}_{1-{p}_{k0,1}}\right)+\Phi \left(\frac{\delta_{k_0,2}}{\overset{\sim }{\sigma }}\sqrt{\frac{n_{k0,2}}{2}}-{z}_{1-{p}_{k0,2}}\right)={\alpha}_{k_0,2}, $$ where nk0, 1 and nk0, 2 are the sample sizes for the first and second stage of the last kth two-stage design, and pk0, 1, pk0, 2 are the stage-wise adjusted p values. In order to stay consistent with the method that was used in calculating the boundaries for the trial, we will not account for stratification in the primary outcome analysis. In general, this approach is acceptable and it preserves both type I and type II errors as long as the weighted average of the effect size stays close to the hypothesized effect size [15]. Furthermore, as long as the sample size re-estimation at the interim analysis was based on the weighted average of the effect size, the overall power of the trial will be preserved. Secondary analyses of the primary outcome, secondary outcomes, and subgroups With the exception of the analysis of the primary outcome, all other analyses will be tested using regular statistical methods and will be two-sided. A secondary adjusted analysis will be conducted using multiple logistic regression analysis, in which death within 90 days will be modeled as the dependent variable, and a set of baseline variables that are strongly believed to affect the outcome of MERS will be included as independent variables. Those variables will include at minimum the following: age, community-acquired versus hospital-acquired infection, mechanical ventilation, center, and Sequential Organ Failure Assessment score. Ninety-day median survival time will be summarized and reported using Kaplan–Meier curves and will be compared between the study groups using the log-rank test (Additional file 1: Figure S1). Analysis of secondary outcomes will be compared in the intention-to-treat cohort only. 
Subgroup analyses will be conducted if patient numbers permit (e.g., no fewer than five patients in the subgroups of interest) in a priori defined subgroups (Additional file 1: Table S5). Multivariable logistic regression will be used to report the results of tests of interaction for these subgroups. Handling of missing data All missing data will be reviewed and characterized in terms of their pattern (e.g., missing completely at random, missing at random, etc.). For data missing completely at random, all analyses will be based on a list-wise deletion approach in which only observations with complete values will be considered for analysis. For variables with values missing at random, multiple imputation techniques will be utilized to impute the missing values, as suggested by Rubin [16]. Adjustment for multiple testing in the secondary analyses To adjust for multiple testing, we will use the false discovery rate (FDR) procedure described by Benjamini and Hochberg [17]. In this procedure, all hypothesis tests are sorted in ascending order of their calculated p values. All hypothesis tests up to an index K will be rejected, where K is calculated as follows: $$ K=\mathit{\max}\left\{i:p(i)\le \frac{i}{m}.q\right\}, $$ where i = m, … ,1, m is the total number of tested hypotheses, and q = 0.05. Additional details about the SAP are available in Additional file 2. The MIRACLE trial investigates the efficacy of a combination therapy of lopinavir/ritonavir and recombinant interferon-β1b provided with standard supportive care, compared to placebo provided with standard supportive care, in hospitalized patients with laboratory-confirmed MERS. The first patient was enrolled in November 2016. At present, 14 sites are actively screening for eligible patients. The recruitment rate in the MIRACLE trial has been slow, mainly because of the decline in the number of MERS cases in Saudi Arabia. Because of the uncertainty about the efficacy of the treatment and about the recruitment rate, the trial is designed as a recursive, two-stage, group sequential randomized trial [4]. Several methods could be used to build an adaptive trial; however, most of them require one to specify a priori the time and type of adjustments that will take place in the trial. For a disease such as MERS, many factors limit the ability to specify those elements a priori; thus, the recursive two-stage design is a natural choice. This type of design provides enough flexibility to introduce different adjustments while learning from the observed data, without inflating the type I error. Reporting the SAP for the MIRACLE trial in advance of trial completion will enhance the evaluation of the clinical data and support confidence in the final results and conclusions. Prior specification of the statistical methods and outcome analyses will facilitate unbiased analyses of these important clinical data. Recruitment started in November 2016 and is currently ongoing. The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
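A small Python sketch of the Benjamini–Hochberg step described above (the p values are made up for illustration): all hypotheses up to the largest index i with p(i) ≤ (i/m)·q are rejected.

```python
import numpy as np

def bh_reject(pvalues, q=0.05):
    """Benjamini-Hochberg: boolean mask of rejected hypotheses at FDR level q."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index meeting the bound
        rejected[order[:k + 1]] = True
    return rejected

print(bh_reject([0.001, 0.008, 0.039, 0.041, 0.17, 0.52]))
# -> [ True  True False False False False]
```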
APACHE II: Acute Physiology and Chronic Health Evaluation II CONSORT: Consolidated Standards of Reporting Trials CTCAE: Common Terminology Criteria for Adverse Events DSMB: Data and Safety Monitoring Board ECMO: MERS: Middle East respiratory syndrome MERS-CoV: Middle East respiratory syndrome coronavirus MIRACLE trial: MERS-CoV Infection tReated with A Combination of Lopinavir/ritonavir and intErferon-β1b NIH: SAP: Statistical analysis plan SFDA: TEAE: Treatment-emergent adverse event Arabi YM, Balkhy HH, Hayden FG, Bouchama A, Luke T, Baillie JK, Al-Omari A, Hajeer AH, Senga M, Denison MR, et al. Middle East respiratory syndrome. N Engl J Med. 2017;376:584–94. Arabi Y, Shalhoub S, Mandourah Y, Al-Hameed F, Al-Omari A, Al Qasim E, Jose J, Alraddadi B, Almotairi A, Al Khatib K, Abdulmomen A, Qushmaq I, Sindi AA, Mady A, Solaiman O, Al-Raddadi R, Maghrabi K, Ragab A, Al Mekhlafi GA, Balkhy HH, Al Harthy A, Kharaba A, Gramish JA, Al-Aithan AM, Al-Dawood A, Merson L, Hayden FG, Fowler R, Saudi Critical Care Trials Group. Ribavirin and interferon therapy for critically ill patients with the Middle East respiratory syndrome: a multicenter observational study, Clin Infect Dis 2019; ciz544. doi: https://doi.org/10.1093/cid/ciz544. Arabi YM, Alothman A, Balkhy HH, Al-Dawood A, AlJohani S, Al Harbi S, Kojan S, Al Jeraisy M, Deeb AM, Assiri AM, et al. Treatment of Middle East respiratory syndrome with a combination of lopinavir-ritonavir and interferon-beta1b (MIRACLE trial): study protocol for a randomized controlled trial. Trials. 2018;19:81. Chang M. Adaptive design theory and implementation using SAS and R. 2nd ed. Boca Raton: Chapman & Hall/CRC; 2014. p. 153–80. Liu Q, Anderson KM, Pledger GW. Benefit–risk evaluation of multi-stage adaptive designs. Seq Anal. 2004;23:317–31. US Food and Drug Administration (FDA). Draft guidance for industry on enrichment strategies for clinical trials to support approval of human drugs and biological products; availability. 2012. https://www.federalregister.gov/documents/2012/12/17/2012-30274/draft-guidance-for-industry-on-enrichment-strategies-for-clinical-trials-to-support-approval-of. Accessed 26 June 2019. Wassmer G, Eisebitt R, Coburger S. Flexible interim analyses in clinical trials using multistage adaptive test designs. Drug Inf J. 2001;35:1131–46. Schafer H, Muller HH. Modification of the sample-size and the schedule of interim analyses in survival trials based on data inspections. Stat Med. 2001;20:3741–51. Moher D, Hopewell S, Schulz KF, Montori V, Gotzsche PC, Devereaux PJ, Elbourne D, Egger M, Altman DG. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c869. Schulz KF, Altman DG, Moher D, Group C. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use: Good Clinical Practice (GCP) guideline. https://database.ich.org/sites/default/files/E6_R2_Addendum.pdf. The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH): statistical principles for clinical trials. https://database.ich.org/sites/default/files/E9_Guideline.pdf. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use: E3 - structure and content of clinical study reports. https://database.ich.org/sites/default/files/E3_Guideline.pdf. 
Gamble C, Krishan A, Stocken D, Lewis S, Juszczak E, Dore C, Williamson PR, Altman DG, Montgomery A, Lim P, et al. Guidelines for the content of statistical analysis plans in clinical trials. JAMA. 2017;318:2337–43. Srivastava DK, Rai SN, Pan J. Robustness of an odds-ratio test in a stratified group sequential trial with a binary outcome measure. Biom J. 2007;49:351–64. Rubin DB. Multiple imputation after 18+ years. J Am Stat Assoc. 1996;91:473–89. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B Methodol. 1995;57:289–300. We would like to acknowledge the following Data Safety Monitoring Board (DSMB) Chair and members: Greg S Martin, MD, MSc (Chair) Executive Associate Division Director Division of Pulmonary, Allergy, Critical Care and Sleep Medicine Research Director, Emory Critical Care Center Section Chief and Director, Medical and Coronary Intensive Care Grady Memorial Hospital David A Schoenfeld, PhD (Member) Professor of Medicine, Harvard Medical School Professor in the Department of Biostatistics Sharon L Walmsley, MSc, MD, FRCPC (Member) Senior Scientist, Toronto General Research Institute (TGRI) Shannon Carson, MD (Member) Division Chief, Pulmonary Diseases and Critical Care Medicine University of North Carolina at Chapel Hill School of Medicine Chapel Hill, North Carolina, USA The MIRACLE trial is funded by King Abdullah International Medical Research Center, Riyadh, Kingdom of Saudi Arabia. The study sponsor does not have any role in the study design, collection, management, analysis or interpretation of the data, or in writing the report. College of Medicine, King Saud Bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia Yaseen M. Arabi, Adel Alothman, Hanan H. Balkhy, Sameera AlJohani, Suleiman Kojan & Abdulaziz Al-Dawood Intensive Care Department, Ministry of the National Guard - Health Affairs, ICU 1425, P.O. Box 22490, Riyadh, 11426, Saudi Arabia Yaseen M. Arabi & Abdulaziz Al-Dawood Prince Mohammed bin Abdulaziz Hospital, Riyadh, Saudi Arabia Ayed Y. Asiri, Sameeh Ghazal & Sarah Al Faraj Infection Prevention and Control, Assistant Deputy Minister, Preventive Health, Ministry of Health, Riyadh, Saudi Arabia Abdullah M. Assiri Deputy Minister for Public Health, Ministry of Health, Riyadh, Saudi Arabia Hani A. Aziz Jokhdar Department of Medicine, Ministry of the National Guard - Health Affairs, Riyadh, Saudi Arabia Adel Alothman & Suleiman Kojan Department of Infection Prevention and Control, Ministry of the National Guard - Health Affairs, Riyadh, Saudi Arabia Hanan H. Balkhy Department of Pathology and Laboratory Medicine, Ministry of the National Guard - Health Affairs, Riyadh, Saudi Arabia Sameera AlJohani College of Pharmacy, King Saud Bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia Shmeylan Al Harbi & Majed Al Jeraisy Pharmaceutical Care Department, Ministry of the National Guard - Health Affairs, Riyadh, Saudi Arabia King Saud Bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Research Office, Riyadh, Saudi Arabia Ahmad M. Deeb Ministry of the National Guard - Health Affairs, Riyadh, Saudi Arabia Ahmad M. Deeb & Mohamed A. Hussein Prince Mohammed bin Abdulaziz Hospital, Ministry of Health & College of Medicine, Alfaisal University, Riyadh, Saudi Arabia Ziad A. 
Memish Hubert Department of Global Health, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA College of Medicine, King Saud Bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Jeddah, Saudi Arabia Fahad Al-Hameed & Asim AlSaedi Intensive Care Department, Ministry of the National Guard - Health Affairs, Jeddah, Saudi Arabia Fahad Al-Hameed Department of Infection Prevention and Control, Ministry of the National Guard - Health Affairs, Jeddah, Saudi Arabia Asim AlSaedi Military Medical Services, Ministry of Defense, Prince Sultan Military Medical City, Riyadh, Saudi Arabia Yasser Mandourah Department of Intensive Care Services, Prince Sultan Military Medical City, Riyadh, Saudi Arabia Ghaleb A. Al Mekhlafi Infectious Diseases Division, Prince Sultan Military Medical City, Riyadh, Saudi Arabia Nisreen Murad Sherbeeni & Fatehi Elnour Elzein Department of Critical Care Medicine, King Fahad Medical City, Riyadh, Saudi Arabia Abdullah Almotairi Department of Critical Care Medicine, King Khalid University, Aseer Central Hospital, Abha, Saudi Arabia Ali Al Bshabshe Department of Critical Care, King Fahad Hospital, Ohoud Hospital, Al-Madinah Al-Monawarah, Saudi Arabia Ayman Kharaba Department Biostatistics and Bioinformatics, King Saud Bin Abdulaziz University for Health Sciences, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia Jesna Jose & Mohamed A. Hussein Intensive Care Unit, King Saud Medical City, Riyadh, Saudi Arabia Abdulrahman Al Harthy Infectious Disease, King Saud Medical City, Riyadh, Saudi Arabia Mohammed Al Sulaiman Intensive Care Department, King Saud Medical City, Riyadh, Saudi Arabia Ahmed Mady Department of Anesthesiology and Intensive Care, Tanta University Hospitals, Tanta, Egypt Institute of Health Policy Management and Evaluation, University of Toronto, Toronto, Canada Robert A. Fowler Department of Critical Care Medicine and Department of Medicine, Sunnybrook Hospital, Bayview Avenue, Room D478, Toronto, 2075, Canada International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC), Division of Infectious Diseases and International Health, Department of Medicine, University of Virginia School of Medicine, Charlottesville, Virginia, USA Frederick G. Hayden Critical Care Medicine Department, King Abdullah Medical Complex, Jeddah, Saudi Arabia Mohamed Abdelzaher Critical Care Medicine Department, Cairo University Hospital, Cairo, Egypt Internal Medicine Department, King Fahad General Hospital, Ministry of Health, Jeddah, Saudi Arabia Wail Bajhmom Yaseen M. Arabi Ayed Y. Asiri Adel Alothman Shmeylan Al Harbi Suleiman Kojan Majed Al Jeraisy Sameeh Ghazal Sarah Al Faraj Nisreen Murad Sherbeeni Fatehi Elnour Elzein Jesna Jose Abdulaziz Al-Dawood Mohamed A. Hussein YA, AA1, HB, SJ, SH, SK, MJ, AMD, JJ, RAF, FGH, and MAH conceived and designed the plan. YA, AMD, and JJ drafted the manuscript. YA, JJ, and MAH developed the analytical plan. YA, AA, AMA, HAJ, AA1, HB, SJ, SH, SK, MJ, AMD, ZAM, SG, SF, FH, AS, YM, GM, NMS, FEZ, AM, AB, AK, JJ, AH, MS, AM, RAF, FGH, AD, MA, WB, and MAH were involved in trial sites management, critical revision of the manuscript for important intellectual content, and approval of the final version to be published, and they agree to be accountable for all aspects of the work. All authors read and approved the final version of the manuscript. Correspondence to Yaseen M. Arabi. 
The MIRACLE study is approved by the Scientific Committee and the Institutional Review Board at the National Guard Health Affairs, Riyadh, Saudi Arabia (RC15/142) and all participating sites and registered at the Saudi Food and Drug Authority (SFDA), Riyadh, Saudi Arabia. Patients who meet the eligibility criteria or substitute decision-makers (for patients lacking decision-making capacity) of eligible patients will be approached to obtain informed consent for enrollment. The authors declare that they have no competing interests. YA and FGH are unpaid consultants on antivirals active for MERS for Gilead Sciences, SAB Biotherapeutics, and Regeneron. Baseline characteristics of intention-to-treat (ITT) population. Table S2. Summary of interventions and co-interventions. Table S3. Primary outcome: 90-day mortality. Table S4. Secondary outcomes. Table S5. Subgroup analyses. Table S6. Classification of adverse events in the MIRACLE trial (MERS-CoV Infection tReated with A Combination of Lopinavir/ritonavir and intErferon-β1b) using the NIH Common Terminology Criteria for Adverse Events (CTCAE), Version 4.0. Table S7. Summary of adverse events by severity. Table S8. Summary of protocol violations. Additional file 2. Statistical analysis plan document. Arabi, Y.M., Asiri, A.Y., Assiri, A.M. et al. Treatment of Middle East respiratory syndrome with a combination of lopinavir/ritonavir and interferon-β1b (MIRACLE trial): statistical analysis plan for a recursive two-stage group sequential randomized controlled trial. Trials 21, 8 (2020). https://doi.org/10.1186/s13063-019-3846-x Lopinavir/ritonavir Interferon-β1b
CommonCrawl
Turing machine examples Example of Turing Machine - Javatpoin Example 1: Construct a TM for the language L = {0 n 1 n 2 n } where n≥1. L = {0 n 1 n 2 n | n≥1} represents language where we use only 3 character, i.e., 0, 1 and 2. In this, some number of 0's followed by an equal number of 1's and then followed by an equal number of 2's. Any type of string which falls in this category will be accepted by this. Examples of Turing Machines - p.21/22. Marking tape symbols In stage two the machine places a mark above a symbol, in this case. In the actual implementation the machine has two different symbols, and in the tape alphabet Thus, when machine places a mark above symbol it actually writes the marked symbo In this Turing machine example, you can see the machine following a simple set of steps to count in binary. How the Turing Machine Counts. Counting is really just repeatedly adding one to a number. If we count in decimal like most humans do, we keep adding one to a number Language of Turing Machines Now, we define the language of Turing machines: Definition 3. Let M =(Q, ⌃, , ,q 0,B,F) be a Turing machine. Then the language accepted by M is L(M )={w 2 ⌃+ | q 0w `⇤ x 1q f x 2 for some q f 2 F,x 1,x 2 2 ⇤} That is, the Turing machine accepts the string w if the initial configuration goes to a final. Practice designing and working with Turing machines. 1 JFlap. Review the Turing machines section of the Tutorial. Construct the TM from examples 8.2/8.3. Use it to solve Exercise 8.2.1. Construct your own Turing machine to solve Exercise 8.2.2a. (Note that this language is not a CFL.) 2 New Ways to Solve Old Problems 2.1 Contains 10 Turing Machine. Turing machine was invented in 1936 by Alan Turing. It is an accepting device which accepts Recursive Enumerable Language generated by type 0 grammar. There are various features of the Turing machine: It has an external memory which remembers arbitrary long sequence of input. It has unlimited memory capability The Church-Turing Thesis)Various definitions of algorithms were shown to be equivalent in the 1930s)Church-Turing Thesis: The intuitive notion of algorithms equals Turing machine algorithms ¼Turing machines serve as a precise formal model for the intuitive notion of an algorithm)Any computation on a digital computer is equivalent t Designing a Turing Machine. The basic guidelines of designing a Turing machine have been explained below with the help of a couple of examples. Example 1. Design a TM to recognize all strings consisting of an odd number of α's. Solution. The Turing machine M can be constructed by the following moves − Let q 1 be the initial state Example of Turing machine. Turing machine M = (Q, X, ∑, δ, q 0, B, F) with. Q = {q 0, q 1, q 2, q f} X = {a, b} ∑ = {1} q 0 = {q 0} B = blank symbol; F = {q f} δ is given by � Overview. A Turing machine is a general example of a central processing unit (CPU) that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine capable of enumerating some arbitrary subset of valid strings of an alphabet; these strings are part of a recursively enumerable set Turing Machine was invented by Alan Turing in 1936 and it is used to accept Recursive Enumerable Languages (generated by Type-0 Grammar). A turing machine consists of a tape of infinite length on which read and writes operation can be performed. The tape consists of infinite cells on which each cell either contains input symbol or a special. 
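The "odd number of α's" design sketched above can be made concrete as a two-state rule table plus a tiny driver. The Python sketch below is purely illustrative: it uses the letter a in place of α, and the state names, blank symbol and step limit are my own choices rather than anything taken from the quoted sources.

```python
# Minimal sketch of the "odd number of a's" recogniser described above.
# delta maps (state, symbol read) -> (symbol written, head move, next state).
BLANK = "_"

delta = {
    ("even", "a"): ("a", +1, "odd"),       # each a flips the parity seen so far
    ("odd",  "a"): ("a", +1, "even"),
    ("odd",  BLANK): (BLANK, 0, "accept"),  # end of input after an odd number of a's
    ("even", BLANK): (BLANK, 0, "reject"),
}

def run(word, max_steps=1_000):
    tape = dict(enumerate(word))
    state, head = "even", 0
    for _ in range(max_steps):
        if state in ("accept", "reject"):
            return state
        key = (state, tape.get(head, BLANK))
        if key not in delta:
            return "reject"                 # any symbol other than a rejects
        write, move, state = delta[key]
        tape[head] = write
        head += move
    return "did not halt"

print(run("aaa"))   # accept
print(run("aa"))    # reject
```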
For example, 5 will be represented by a sequence of five zeroes or five ones. 5 = 1 1 1 1 1 or 0 0 0 0 0. Lets use zeroes for representation. For adding 2 numbers using a Turing machine, both these numbers are given as input to the Turing machine separated by a c For a 3-State machine, the maximum number of '1's that it can print is proven to be 6, and it takes 14 steps for the Turing machine to do so. The state table for the program is shown below. Since only 2 symbols are required, the instructions for the '0' symbol are left as the default settings Function Computed by Turing Machine 1. Require that the machine halt on input x iff f(x) is defined. 2. Interpret the content of the tape(s) of the Turing machine, after it halts, as the output of f. There are various ways of doing this, for example, by just considering the non-blank content of the first (only) tape. - p. 8/5 Turing Machine examples. This app contains a set of examples of Turing Machines written in Elixir. All the examples can be found inside the TuringMachine module. Examples. Binary Number incrementer. TuringMachine. BinaryNumberIncrementer. increment ( 1011 ) # 1100 About In the Turing machine example of Fig. 4.18, the tape system and control engine are each atomic Classic DEVS. For the tape system, a state is a triple ( t a p e , p o s , m v ) where the tape is an infinite sequence of zeros and ones (symbols), pos represents the position of the head, and mv is a specification for moving left or right Uses of the Turing machine. The Turing machine has been, for example, used as a language generator, because this type of machine has several tapes including an output tape that is empty at first and then filled with words of language.It is also used in compilers I and II, state machines, automaton machines and code generators.. In the antiquity it was used in machines as the Bombe that. sample turing machine programs Problem 1: Find Right Hand End of Tape The example solves the problem of finding the right hand end of a tape starting anywhere within the non-blank characters on a tape with initial state A. Note that having skipped over the possible characters on the tape and finding a blank, it is necessary to move back one. An example 3-state, 2-color Turing machine is illustrated above (Wolfram 2002, p. 78). It has a total of rules, which describe the machine behavior for all possible states. In general, an -state, -color Turing machine requires rules to specify its behavior. Although any number of these rules may specify a halting condition, the most commonly. A simple demonstration. As a trivial example to demonstrate these operations, let's try printing the symbols 1 1 0 on an initially blank tape: First, we write a 1 on the square under the head: Next, we move the tape left by one square: Now, write a 1 on the new square under the head A Turing Machine Program Examples The Turing Machine A Turing machine consists of three parts: A finite-state control that issues commands, an infinite tape for input and scratch space, and a tape head that can read and write a single tape cell. At each step, the Turing machine writes a symbol to the tape cell under the tape head, changes state, and moves the tape head to the left or to the right Turing machine examples: | The following are examples to supplement the article |Turing machine|. 
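The three-state claim above (a maximum of six 1s, printed in 14 steps) can be checked by direct simulation. The rule table below is one commonly quoted candidate for that 3-state, 2-symbol busy beaver; it is an illustrative reconstruction rather than a transcription of the state table referred to in the text, but running it from a blank tape does reproduce exactly 14 steps and six 1s.

```python
# A 3-state, 2-symbol busy beaver table (states A, B, C; halting state H).
# Each entry: (state, symbol read) -> (symbol written, head move, next state).
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, +1, "H"),   # the halting transition
    ("B", 0): (0, +1, "C"),
    ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "C"),
    ("C", 1): (1, -1, "A"),
}

tape, head, state, steps = {}, 0, "A", 0
while state != "H":
    symbol = tape.get(head, 0)              # blank cells read as 0
    write, move, state = RULES[(state, symbol)]
    tape[head] = write
    head += move
    steps += 1

print("steps:", steps)                      # 14
print("ones on tape:", sum(tape.values()))  # 6
```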
| | World Heritage Encyclopedia, the aggregation of the largest online encyclopedias available, and the most definitive collection ever assembled A Turing machine then, or a computing machine as Turing called it, in Turing's original definition is a machine capable of a finite set of configurations \(q_{1},\ldots,q_{n}\) (the states of the machine, called m-configurations by Turing). It is supplied with a one-way infinite and one-dimensional tape divided into squares each capable of. Universal Turing Machine Manolis Kamvysselis - [email protected]. A Turing Machine is the mathematical tool equivalent to a digital computer. It was suggested by the mathematician Turing in the 30s, and has been since then the most widely used model of computation in computability and complexity theory Turing Machine Basics: The Turing machine is an invention of a mathematician Alan Turing. Turing machine is a very powerful machine. Any computer problem can be solved through Turing Machine. Just like FA, Turing machine also has some states and some transition. Starting and ending states are also the part of Turing Machine Example Turing machine to compute the truncated subtraction (monus), after John E. Hopcroft and Jeffrey D. Ullman (1979). In our case, the finite control table corresponds to the learned Q-table applied on a numerical list on the tape. Q — states, 32 An animation of the chosen machine BASIC [] Sinclair ZX81 BASIC [] The universal machine []. This program expects to find: • R$(), an array of rules; • T$, an input tape (where an empty string stands for a blank tape); • B$, a character to use as a blank; • S$, an initial state; • H$, a halting state. It will execute the Turing machine these parameters describe, animating the process. A 3-state example takes 14 steps while this 4-state example takes 107 steps. Increasing from there, a 5-state example has been found that takes 47,176,870 steps, and a 6-state example that takes 2.584 x10 2879 steps A Turing Machine that Seeks by Rewriting. This final Turing machine will seek out the 'H' symbol in either direction on a two-way-infinite tape. By rewriting the cells, it avoids the infinite loops that could entrap the first example. Since it writes, it is not a finite state automaton. *State #0 is the halt state Models of Computation, 2020 4 Slide 5 Turing Machine, formally A Turing machine is specified by a quadruple M = (Q,Σ,s,δ) where • Q is a finite set of machine states; • Σ is a finite set of tape symbols, containing distinguished symbol , called blank; • an initial state s ∈ Q; • a partial transition function δ ∈ (Q × Σ)⇀(Q × Σ × {L,R}) The machine has finite internal. Title: Turing Machines - Definition and Examples 1 Turing Machines Definition and Examples. Lecture 23 ; Section 3.1 ; Mon, Oct 15, 2007; 2 Computation. Can a DFA or a PDA compute that 1 1 2? 3 Computation. The nearest they can come is to read input of the form a b c, with a, b, and c in binary, and accept or reject it. Accept the input 1 1 2 Turing machine. Assume we already compiled the code and loaded the string '0100'. Figure 2 depicts The machine panel at the beginning of the run. Martin Ugarte Page 1 of 3 Programming example for TURING MACHINE Figure 1. At this point the state is qEven and the head is reading a 0, the instruction of the first transition would be applied. In the example above, the input consists of 6 A's and the Turing machine writes the binary number 110 to the tape. 
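The Sinclair ZX81 universal machine described above treats the machine to be executed as data: an array of rules, a tape, a blank character, an initial state and a halting state. Below is a minimal Python analogue of that driver, assuming a rules dictionary in place of R$() and a set of halting states; the qEven/qOdd parity machine used to exercise it is a hypothetical stand-in suggested by the '0100' example mentioned above, not a transcription of the machine in that figure.

```python
# A rule-table-driven runner: the machine description is plain data.
def execute(rules, tape_string, blank, start, halting, max_steps=10_000):
    tape = dict(enumerate(tape_string))
    state, head = start, 0
    for _ in range(max_steps):
        if state in halting:
            return state, "".join(tape[i] for i in sorted(tape))
        write, move, state = rules[(state, tape.get(head, blank))]
        tape[head] = write
        head += {"L": -1, "R": +1, "S": 0}[move]
    raise RuntimeError("step limit reached")

# Hypothetical example machine: track the parity of 1s while scanning right.
parity_rules = {
    ("qEven", "0"): ("0", "R", "qEven"),
    ("qEven", "1"): ("1", "R", "qOdd"),
    ("qOdd",  "0"): ("0", "R", "qOdd"),
    ("qOdd",  "1"): ("1", "R", "qEven"),
    ("qEven", "_"): ("_", "S", "haltEven"),
    ("qOdd",  "_"): ("_", "S", "haltOdd"),
}

state, _tape = execute(parity_rules, "0100", "_", "qEven", {"haltEven", "haltOdd"})
print(state)   # haltOdd: '0100' contains an odd number of 1s
```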
To describe how this is accomplished, we first review an algorithm for incrementing a binary integer by 1: scan the bits from right to left, changing 1's to 0's until you see a 0 Turing Machines and Languages The set of strings accepted by a Turing machine M is the language recognised by M, L(M). A language A is Turing-recognisable or computably enumerable (c.e.) or recursively enumerable (r.e.) (or semi-decidable) iff A = L(M) for some Turing machine M. Three behaviours are possible for M on input w By the Church-Turing thesis, any effective model of computation is equivalent in power to a Turing machine. Therefore, if there is any algorithm for deciding membership in the language, there is a decider for it. Therefore, the language is in R. A language is in R if and only if there is an algorithm for deciding membership in tha Turing Machines We want to study computable functions, or algorithms. In particular, we will look at algorithms for answering certain questions. A question is decidable if and only if an algorithm exists to answer it. Example question: Is the complement of an arbitrary CFL also a CFL? This question is undecidable—there is no algorithm. Example: Turing Machine This TM scans its input right, turning each 0 into a 1. If it ever finds a 1, it goes to final reject state r, goes right on square, and halts. If it reaches a blank, it changes moves left and accepts. Itslanguageis0 A Turing machine is an abstract device to model computation as rote symbol manipulation. Each machine has a finite number of states, and a finite number of possible symbols. These are fixed before the machine starts, and do not change as the machine runs. There are an infinite number of tape cells, however, extending endlessly to the left and. A Turing Machine A Turing Machine (TM) has three components: •An infinite tape divided into cells. Each cell contains one symbol. •A head that accesses one cell at a time, and which can both read from and write on the tape, and can move both left and right. •A memory that is in one of a fixed finite num- ber of states • A Turing Machine (TM) has finite-state control (like PDA), and an infinite read-write tape.The tape serves as both input and unbounded storage device. • The tape is divided into cells, and each cell holds one symbol from the tape alphabet. • There is a special blank symbol B. At any instant, all bu Turing Machine The rst question to ask is, what is a turing machine? I will start with a simple explanation of a turing machine rst before I go into a more rigorous de nition. A turing machine, in essence, is a mathematically model of a machine that mechanically operates on a tape. It has four main components: 1 Intro to Turing Machines • A Turing Machine (TM) has finite-state control (like PDA), and an infinite read-write tape. The tape serves as both input and unbounded storage device. • The tape is divided into cells, and each cell holds one symbol from the tape alphabet. • There is a special blank symbol B. At any instant, all bu • Church-Turing Thesis: There is an effective procedure for solving a problem if and only if there is a TM that halts for all inputs and solves the problem. - There are many other computing models, but all are equivalent to or subsumed by TMs. There is no more powerful machine (Technically cannot be proved) JFLAP TM examples Turing machine which adds unary numbers. That is, 11111+1111 becomes 111111111 and the tape head points to the first 1. Search example 1. 
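The increment routine reviewed at the top of this passage (run to the right-hand end, then sweep left turning 1s into 0s until a 0, or a blank, can be turned into a 1) translates directly into a small rule table. The sketch below is an assumed rendering of that routine rather than code from any of the quoted sources; on the tape 1011 it produces 1100, matching the binary number incrementer example quoted earlier on this page.

```python
# Binary increment as a two-phase Turing machine:
# phase "seek" runs right to the first blank; phase "carry" sweeps left adding 1.
BLANK = "_"
RULES = {
    ("seek", "0"): ("0", +1, "seek"),
    ("seek", "1"): ("1", +1, "seek"),
    ("seek", BLANK): (BLANK, -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),   # 1 plus carry becomes 0, keep carrying
    ("carry", "0"): ("1", 0, "done"),     # absorb the carry here
    ("carry", BLANK): ("1", 0, "done"),   # carried past the leftmost bit
}

def increment(bits):
    tape = dict(enumerate(bits))
    state, head = "seek", 0
    while state != "done":
        write, move, state = RULES[(state, tape.get(head, BLANK))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != BLANK)

print(increment("1011"))  # 1100
print(increment("100"))   # 101
print(increment("111"))   # 1000
```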
This Turing machine searches for the right end of a string of a's. Search example 2. This Turing machine searches for the right end of a string of a's and b's. Example 9.1 Examples of Turing Machines - Korea Universit deciding whether an input string is a palindrome can be solved in time O(n) on a two-tape Turing machine, but requires time (n2) on a one-tape Turing machine. 1.2 An example We take an example directly out of Sipser's book [3]. The turing machine M L accepts the language L= f02njn 0g. M L uses the alphabet = ft;0;xg. Recall that the. Start the Turing Machine operation.;} /* Notifications */ notification halted {description The Turing Machine has halted. This means that there is no: transition rule for the current state and tape symbol.; leaf state {type state-index; mandatory true; description The state of the control unit in which the machine has: halted.;}} The following table is Turings very first example Alan Turing 1937: 1. A machine can be constructed to compute the sequence 0 1 0 1 0 1. 0 1 0. Undecidable p. Turing machines 1. Turing Machines 2. Invented by Alan Turing in 1936.A simple mathematical model of a general purpose computer.It is capable of performing any calculation which can be performed by any computing machine. Another Turing Machine Example n n Turing machine for the language {a b } q4 y y, R y y, Ly y, R a a, R a a, L ,L y y, R. Turing machine for a number of a's multiplied by the number of b's and equals to the number of c's Read More Examples of Turing Machine. Turing Machine to copy a string: with animations; Turing Machine of numbers divisible by 3: with animations; Turing machine for anbncn: with animations; Turing machine of two equal binary strings: with. A Turing machine T is said to decide a language L if and only if T writes yes and halts if a string is in L and T writes no and halts if a string is not in L. For example the Turing machine of Example 1 above goes through the following sequence of configurations to accept the string aba: ( q 0, aba ) ( q 1, aba ) ( q 2, aba ) ( q 3, aba. These are trivial examples of non-deterministic Turing Machines, but examples nonetheless. Also, in the example you provided, slide 3 explicitly lists how acceptance and rejection work: a word is accepted if any run succeeds and is rejected if all runs succeed. - Welbog Sep 9 '19 at 13:56 Examples of Turing Machine Simulations. In order to illustrate the use of the TM simulator, I will present some interesting TMs. The text files for all the TMs are also provided. A TM that accepts the non-regular language 0^n1^n For example, if a Turing machine has two states, when the head reads an A symbol in state 1 1 1, the machine might do one thing, and if the head reads an A symbol in state 2 2 2, it can do a different thing. Transition functions are often represented in a table form Examples of mechanical Turing Machines . A Turing Machine in the classic style has an excellent video depicting the operation of a physical Turing machine; Simulations . Turing Kara has some excellent instructions to help you get to grips with the basic operations of a turing machine; Example Question Turing Machines: Examples - Old Dominion Universit istic Turing Machines Turing machines are a model of computation. It is believed that anything that can be computed can be computed by a Turing Machine. 
The de nition won't look like much, and won't be used much; however, it is good to have a rigorous de nition t A Turing Machine is a mathematical model imagined by Alan Turing in 1936, and is considered as a fundamental concept of computer science. It is made of an infinite tape on which are written binary. For example, your PC is a Turing Machine (more or less) and many languages are Turing-complete, meaning they can simulate Turing Machines. This is important for some categories of languages. The reason you can write a Turing Machine simulator in c# is that it is Turing Complete ate with a certain result, and. Automata Turing Machine - Javatpoin A Turing machine refers to a hypothetical machine proposed by Alan M. Turing (1912--1954) in 1936 whose computations are intended to give an operational and formal definition of the intuitive notion of computability in the discrete domain. It is a digital device and sufficiently simple to be amenable to theoretical analysis and sufficiently powerful to embrace everything in the discrete domain. 1. Explain why nondeterministic Turing machines are unsuitable for defining functions.. 2. Let L be the set of all words on the alphabet {a, b}that contain at least two consecutive occurrences of b. Construct a nondeterministic Turing machine that never moves left and accepts L.. 3. Show that the nondeterministic Turing machine used as an example in this section accepts the set {1 [2 n]} A Turing machine is a device that manipulates symbols on a strip of tape according to a table of rules. | Review and cite TURING MACHINE protocol, troubleshooting and other methodology information. Here you can see the basic ideas of Turing machines illustrated by some very simple examples. CLICK on one of these: Machine 1: unary addition Machine 2: divisibility Machine 3: primalit Do you have a question regarding this example, TikZ or LaTeX in general? Just ask in the LaTeX Forum. Oder frag auf Deutsch auf TeXwelt.de.En français: TeXnique.fr Turing provides Hamiltonian Monte Carlo sampling for differentiable posterior distributions, Particle MCMC sampling for complex posterior distributions involving discrete variables and stochastic control flow, and Gibbs sampling which combines particle MCMC, HMC and many other MCMC algorithms This is a Turing machine simulator. To use it: Load one of the example programs, or write your own in the Turing machine program area. See below for syntax. Enter something in the 'Input' area - this will be written on the tape initially as input to the machine. Click 'Reset' to initialise the machine Assuming Turing machine is a general topic | Use as referring to a mathematical definition or a historical event or a word instead Examples for Turing Machines Turing Machines This will prove that a two-track Turing machine is equivalent to a standard Turing machine. This can be generalized to a n-track Turing machine. Let L be a recursively enumerable language. Let M= \langle Q, \Sigma, \Gamma, \delta, q_0, F \rangle be standard Turing machine that accepts L. Let M' is a two-track Turing machine Every now and then, we see some headline about Turing Completeness of something. For example, Minecraft or Dwarf Fortress, or even Minesweeper are famous examples of accidentally Turing complete systems. If you know what a Turing machine is (and you should) you will have an intuitive idea of the claim: you know that X can compute any computable function Accepted Language & Decided Language - Tutorialspoin How to Create a Turing Machine. 
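Exercise 2 above (accept the words over {a, b} containing two consecutive b's, with a machine that never moves left) has a natural nondeterministic solution: on each b the machine may either keep scanning or guess that the pair starts there, and the word is accepted if any branch reaches the accepting state. The breadth-first sketch below checks exactly that; the rule table and state names are my own illustration, not the exercise's published solution.

```python
from collections import deque

# Nondeterministic rules: (state, symbol) -> list of (write, move, next state).
# The head only ever moves right, as the exercise requires, so writes can be
# ignored in the search: a cell is never revisited after being written.
RULES = {
    ("q0", "a"): [("a", +1, "q0")],
    ("q0", "b"): [("b", +1, "q0"),      # keep scanning, or ...
                  ("b", +1, "q1")],     # ... guess this b starts the pair
    ("q1", "b"): [("b", +1, "accept")],
}

def accepts(word, max_steps=1_000):
    """Breadth-first search over configurations: accept if ANY branch accepts."""
    frontier = deque([("q0", 0, 0)])    # (state, head position, steps taken)
    while frontier:
        state, head, steps = frontier.popleft()
        if state == "accept":
            return True
        if steps >= max_steps:
            continue
        symbol = word[head] if head < len(word) else "_"
        for _write, move, nxt in RULES.get((state, symbol), []):
            frontier.append((nxt, head + move, steps + 1))
    return False                         # every branch died without accepting

print(accepts("abba"))   # True
print(accepts("abab"))   # False
```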
For knowledge of many of the general tools, menus, and windows used to create an automaton, one should first read the tutorial on finite automata. This tutorial will principally focus on features and options differing from the finite automaton walkthrough, and offer an example of constructing a Turing machine ing whether a Turing machine halts on any input is undecidable. Which of the following options is correct ? A. Both S1 and S2 are correct. B Turing Machine Example. 예를 통해 튜링 머신이 어떻게 동작하는지를 살펴보자. 아래 그림은 앞서 설명한 튜링 머신 구성 요소들을 그림으로 나타낸것이다. 위 튜링 머신은 \($\) 문자열을 맨 뒤로 옮기는 작업을 수행하는 튜링 머신이다 Transition function might take the turing machine into an infinite loop on a particular input even when the input length is finite. Consider the transition function below with states Q0 and Q1 Q0 is the halt state If Q1 reads 'L', then stay in Q1 and go left one cell Multiple choice questions on Formal Languages and Automata Theory topic Introduction to Turing Machines. Practice these MCQ questions and answers for preparation of various competitive and entrance exams For example, suppose his tape was not blank. What would happen? The Turing machine would read different values than the intended values; this is a important subroutine used in the multiply routine. The example Turing machine handles a string of 1s, with 0 represented by the blank symbol Turing Machine Introduction - Tutorialspoin Below are some example programs you can use at the Turing machine simulator at morphett.info/turing Turing machines as language accepters Program that accepts strings of the form a n b n where n>=1 0 a x r 1 1 a a r 1 1 y y r 1 1 b y l 2 2 y y l 2 2 a a l 2 2 x x r 0 0 y y r 3 3 y y r 3 3 _ _ l halt-accept Same program as above, with extra transitions that specify reason for rejection 0 a x r 1 1 a a r 1 1 y y r 1 1 b y l 2 2 y y l 2 2 a a l 2 2 x x r 0 0 y y r 3 3 y y r 3 3 _ _ l halt. Example Example A Turing machine M 2 that decides A = f02 njn 0g, the language consisting of all strings of 0s whose length is a power of 2. M 2 = \On input string w: 1.Sweep left to right across the tape, crossing o every other 0. 2.If in stage 1 the tape contained a single 0, accept Possible example models include Turing machines with multiple tapes and C++ programs. We will show that model AM is equivalent to our basic Turing machine model as follows: For every machine or program M 1 in model AM , there exists a Turing machine M such that L(M 1 ) = L(M) Example Configuration: 1011q 7 01111. Turing Machine Formal Definition of Computation M receives input w = w 1 w 2 w n ∈ Σ* on the leftmost n squares of the tape, and the rest of the tape is blank. Configuration C 1 yields configuration C 2 if the T Example: the Lin and Rado's Turing machine given above, with 3 states and 2 symbols, takes 14 steps to stop, and leaves 6 symbols 1 on the tape when stopping. So here: s(M) = 14 and sigma(M) = 6. This machine is the winner for the most non-blank symbols left on the tape, so we have Sigma(3,2) = 6 These are the transitions. δ (q,σ) = δ 1 (q,σ), q ∈ K 1 -H 1 δ (q,σ) = δ 2 (q,σ), q ∈ K 2 -H 2 δ ( h ,a) = (s 2 ,a), h ∈ H 1 δ ( h ,σ) = (h 2 ,σ), σ ≠ a, h ∈ H 1, h 2 ∈ H 2. In English, run M1 until it halts (if it halts) if the r/w head is reading the symbol a, leave read head alone and go to state s2 in M 2 A linear bounded automaton is a nondeterministic Turing machine the length of whose tape is bounded by some fixed constant k times the length of the input. 
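The listing just above uses one rule per line in the form state, symbol read, symbol written, head move, new state. It can be transcribed mechanically into a table and executed; the sketch below does so for the a^n b^n accepter, with the assumption (my own) that a configuration with no matching rule counts as rejection.

```python
# The a^n b^n program from the listing above, one rule per line:
# <state> <read> <write> <move> <new state>
PROGRAM = """
0 a x r 1
1 a a r 1
1 y y r 1
1 b y l 2
2 y y l 2
2 a a l 2
2 x x r 0
0 y y r 3
3 y y r 3
3 _ _ l halt-accept
"""

rules = {}
for line in PROGRAM.split("\n"):
    if line.strip():
        state, read, write, move, new = line.split()
        rules[(state, read)] = (write, move, new)

def accepts(word, max_steps=10_000):
    tape, head, state = dict(enumerate(word)), 0, "0"
    for _ in range(max_steps):
        if state == "halt-accept":
            return True
        key = (state, tape.get(head, "_"))
        if key not in rules:
            return False          # no matching rule: treat as rejection
        write, move, state = rules[key]
        tape[head] = write
        head += -1 if move == "l" else 1
    return False

for w in ["ab", "aabb", "aab", "abb", ""]:
    print(w or "(empty)", accepts(w))
```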
Example: L = {a n b n c n : n ≥ 0 Turing's thesis: Any computation carried out by mechanical means can be performed by a Turing Machine (1930) Computer Science Law: A computation is mechanical if and only if it can be performed by a Turing Machine There is no known model of computation more powerful than Turing Machines Definition of Algorithm: An algorithm for function is Turing Machines as Transducers A Turing machine can be used as a transducer. The most obvious way to do this is to treat the entire nonblank portion of the initial tape as input, and to treat the entire nonblank portion of the tape when the machine halts as output. In other words, a Turing machine defines a function y=f(x) for strings x, y * if. Turing machine - Wikipedi 2 Examples of non-deterministic Turing Machines Example 1 Given a set S = {a1,...,an} of integers, determine if there is a subset T ⊆ S such that X ai∈T ai = X ai∈S−T ai. The task is to construct an NDTM which accepts a language L corresponding to the problem. Language: L = {a1#a2#...am# : ∃T ⊆ S, such that X ai∈T ai = X ai∈S−T ai. Prerequisite - Turing Machine Problem-1: Draw a Turing machine which subtract two numbers. Example: Steps: Step-1. If 0 found convert 0 into X and go right then convert all 0's into 0's and go right. Step-2. Then convert C into C and go right then convert all X into X and go right. Step-3 Examples Here is a set of rules for a simple 6-state Turing Machine which, when started on the leftmost end of a block of 1s on a tape that otherwise contains only 0s, will make a new block of 1s double the length of the original one, destroying the original in the process Introduction. The Alan Turing Institute and the British Library, together with researchers from a range of universities, have been awarded £9.2 million from the UKRI's Strategic Priorities Fund for a major new project.. Led by the Arts and Humanities Research Council (AHRC), 'Living with Machines' will take place over five years and is set to be one of the biggest and most ambitious. Turing Machine in TOC - GeeksforGeek Example of a Turing Machine. To demonstrate and illustrate the concept of the Turing Machine, we will look at an example. Hopefully, you will see what a simple and elegant mechanism it is. We have already seen that a TM is defined by a set of transition functions from a state/symbol pair to a new state/symbol pair and a direction in which to move Turing didn't come up with a machine. Turing came up with an abstract model for all machines. In other words, any algorithm ever can be built on a Turing machine. From 2 + 2 all the way to the latest Assassin's Creed, a Turing machine can run it. (But the latter will require a lot of tape. Like, a lot a lot.) It's a theoretical description of. Turing Machine. Turing Machine was invented by Alan Turing in 1936. It is an accepting device that accept recursively enumerable set of languages generated by type 0 grammars. The Turing machine can be imagined as a finite automaton or control unit equipped with an infinite storage A Turing Machine (TM) M = (Q, ∑, , , q 0,B,F) This is like the CPU & program t Finite control coun er Tape is the Infinite tape with tape symbols Tape head Tape is the memor Turing Machine for addition - GeeksforGeek According to Wikipedia, a Turing machine is a 'hypothetical device that manipulates symbols on a strip of tape'. The strip of tape is infinite and the Turing machine can read and write it, but also maintains an internal state. 
Based on the current symbol of the tape, the Turing machine can change its current state. An example of a very. g language. Recursion and parameter passing are some typical examples. A Turing machine can also be used to simplify the statements of an algorithm Turing machines review problems. Draw the state diagram for a Turing machine that increments a binary number. Assume that the input tape contains at least one non-blank symbol. For example, if the binary representation of 4 is initially on the tape.b100b.. then the output should be the binary representation of 5,.b101b. Turing Machines: An Introduction A Turing machine is somewhat similar to a finite automaton, but there are important differences: 1. A Turing machine can both write on the tape and read from it. 2. The read-write head can move both to the left and to the right. 3. The tape is infinite. 4 It is a very simple Turing Machine as it is limited to 3 states and 3 symbols. It is shown in this picture starting with a 2 1's on the tape to the right. It will stop with twice this number on the right. The Tape is implemented as 2 stacks and the machine has been provided with 6 stack cells. This can be expanded as required for the calculation 1. DeÞnitions of the Turing Machine 1.1 TuringÕs DeÞnition 1.2 PostÕs DeÞnition 1.3 The DeÞnition Formalized 1.4 Describing the Behavior of a Turing Machine 2. Computing with Turing Machines 2.1 Some (Simple) Examples 2.2 Computable Numbers and Problems 2.3 TuringÕs Universal Machine 2.3.1 Interchangeability of program and behavior: a. A Turing machine is the simplest form of a computer. The concept was invented by Alan Turing in 1936. This was the first computer invented (on paper only). I- Principles of a Turing machine. In its simplest form, a Turing machine is composed of a tape, a ribbon of paper of indefinite length Turing machine Introduction. Till now we have seen machines which can move in one direction: left to right. But Turing machine can move in both directions and also it can read from TAPE as well as write on it. Turing machine can accept recursive enumerable language Turing Machine: A Turing machine is a theoretical machine that manipulates symbols on a tape strip, based on a table of rules. Even though the Turing machine is simple, it can be tailored to replicate the logic associated with any computer algorithm. It is also particularly useful for describing the CPU functions within a computer. Alan Turing. 2 Examples of Turing machines Example 1. As our rst example, let's construct a Turing machine that takes a binary string and appends 0 to the left side of the string. The machine has four states: s;r 0;r 1;'. State sis the starting state, in state r 0 and r 1 it is moving right and preparing to write a 0 or 1, respectively An example of a non-terminating Turing machine program is a program that calculates sequentially each digit of the decimal representation of pi (say by using one of the standard power series expressions for pi). A Turing machine running this program will spend all eternity writing out the decimal representation of pi digit by digit, 3.14159 . . A Turing machine can also perform a special action - it can stop or halt - and surprisingly it is this behaviour that attracts a great deal of attention. 
For example, a Turing machine is said to recognise a sequence of symbols written on the tape if it is started on the tape and halts in a special state called a final state A Turing machine which, by appropriate programming using a finite length of input tape, can act as any Turing machine whatsoever. In his seminal paper, Turing himself gave the first construction for a universal Turing machine (Turing 1937, 1938). Shannon (1956) showed that two colors were sufficient, so long as enough states were used. Minsky (1962) discovered a 7-state 4-color universal.
CommonCrawl
Straw Bale Construction/Print version < Straw Bale Construction This is the print version of Straw Bale Construction You won't see this message or any elements not part of the book's content when you print or preview this page. Straw Bale Construction/Front cover 1 Introduction to Building with Straw Bales 1.2 Current Perspective and Regulations 1.2.3 New Zealand 2.1 Straw sources, bale dimensions and compression 2.2 Choosing your bales 2.3 Keeping your bales dry before building 4 Acoustics of straw bale structures 4.2 Insulation 4.4 US 4.5 Thermal mass 4.7 Availability, types and cost 4.7.1 Availability 4.7.2 Types 4.8 Resistance to pests 4.9 Resistance to fire 4.10 Limits to structural strength 4.11 UK Structural Design of Loadbearing Straw Bale Buildings 4.11.1 Traditional Construction 4.11.2 Prefabricated 4.12 Design and construction challenges 4.13 Foundations 4.14 Toe up 4.15 Underfloor insulation 4.16 Heating 4.17 US specific 4.18 Walls 4.18.1 Straw Bale Infill 4.18.2 Load Bearing Walls 4.18.3 Curved Walls 4.18.4 Structural Capabilities of Bale Walls 4.19 Finishes 4.19.1 Cement/ sand stucco 4.19.2 Clay plaster 4.19.3 Lime plaster 4.19.4 Tadelakt 4.19.5 Floor finishes 4.20 Openings 4.21 Roofing 4.21.1 Overhangs 4.21.2 Hip roof 4.21.3 Weight limitations 4.21.4 Ecological considerations 4.21.5 The Green Roof 4.21.6 Roof and Ceiling Insulation 4.21.7 Related Links 4.22 Pushing the Limit 4.22.1 Arches and vaults 4.22.1.1 Designing the catenary arch 4.22.1.1.1 The gravity approach 4.22.1.1.2 The mathematical approach 4.22.1.2 Building the formwork 4.22.2 Domes 4.23 Related Resources 4.23.1 Content 4.24 Useful Software 4.24.1 Technical Studies, Reports and Tests 4.24.1.2 Acoustics 4.24.1.3 Insulation 4.24.1.4 Fire Safety 4.24.1.5 Building Codes 4.24.1.6 Construction Strength 4.24.1.7 Moisture 4.24.1.8 Studies in other languages 4.24.2 Worldwide organisations and contacts 4.24.2.1 The Americas 4.24.2.2 Europe 4.24.2.3 South Pacific 4.24.2.4 Email Lists 4.24.3 Straw Bale Building Registries 4.25 Resources on the internet 4.25.1 Wikipedia, the free encyclopedia 4.25.2 Wikibooks, the open-content textbooks collection 4.25.3 External links 6 Glossary of Terms Introduction to Building with Straw BalesEdit While use of grass-family plant fibers has long been a part of building methods worldwide, dating far back into prehistory, actual straw-bale construction was pioneered in Nebraska in the United States, in the late 19th/early 20th century, in response the then-new availability of baling machines and the lack of significant amounts of timber or buildable sod needed to build barns and housing in the Sandhills region. Under the Homestead Act of 1862 and the Kinkaid Act of 1904, the "sod-busters" were required to develop and live on their new property for five years in order to maintain ownership; building housing was a legal requirement. The straw-bale house was first seen simply as a make-shift structure, to provide temporary lodging, until enough funds were available to pay for the shipping in of timbers, to build a "real" house. However, these homes quickly proved to be comfortable, durable, and affordable, and so became regarded as permanent housing. Over the past century they have indeed outlived many neighbouring timber-frame buildings, and a number are in continuing use today and beginning their second century. After World War II a scattering of U.S. 
veterans turned to straw-bale for shelter, but modern straw-bale construction experienced a re-emergence in the late 1970s, after the 1973 energy crisis helped bring issues of real sustainability to the forefront, with first examples built primarily in the southwestern United States. Now, they are being built the world around, from northern Canada, Mongolia and post-Chernobyl Russia, to Mexico, Australia and New Zealand. Because it is based on an inexpensive and renewable so-called "agricultural waste product," with a technique relatively simple for beginners to implement, involving few synthetic chemicals and providing effective energy-conserving insulation, it continues to grow in popularity, especially with do-it-yourself-ers "owner-designer-builders" and other proponents of sustainability. Current Perspective and RegulationsEdit (Please help to expand this section!) Building with straw bales is slowly but surely gaining ever wider acceptance across America, Europe and Australasia. With some charitable groups using it in poorer countries it is also beginning to appear in South America and Eastern Europe. Government bodies are in general less hostile than you might at first expect. In many instances government bodies actively welcome sustainable building projects and straw bale building is readily recognized as a sustainable building method. Generally there is some reluctance to accept non standard 'alternative building methods by building officials. Besides adopted straw bale building codes there are now extensive resources available based on officially executed laboratory tests, studies and reports making it much easier to win them over (see the later section entitled Technical Studies, Reports and Tests for a compilation of links to such literature). Since the initial research done in the United States to support the adopted straw bale buildings in several counties, tests have been executed in such places as Australia, Austria, Canada, Denmark, Germany, France, Hungary and New Zealand, resulting in the adoption of either local codes or the establishment of a frame work of best practices. AustraliaEdit Around 1996 the Standards New Zealand/Standards Australia Joint Technical Committee for Earth Building (BD-083), chaired by New Zealand architect Graeme North, decided against publishing any specific regulatory standards. Following the conclusion of the committee, in 2001 Standards Australia published an Earth Building Handbook authored by Dr Peter Walker (University of Bath, UK) with the assistance of Stephen Dobson, David Baetge, Kevan Heathcote, Chris Howe and David Oliver.[1] CanadaEdit Several researchers and government bodies in Canada, including the Canadian Mortgage and Housing Company, have been testing the abilities of straw-bale construction.[2] They hope to develop building code to be included in the next revision of the Canadian Building Code. Building permits can be issued for alternate building methods approved by an engineer.[citation needed] New ZealandEdit Early European settlers used clay to make wattle and daub, sod or cob houses. Chimneys were made from tree fern trunks heavily plastered with clay. Māori began to use sod to build whare paruparu (dirt houses) takitaki (walls).[3] The oldest known standing earth building in New Zealand was built in 1842, the two storey Pompallier House in Russell, Bay of Islands. Today the popularity of straw bale construction is increasing, with a growing number of specialist businesses dedicated to the area. 
Straw bale constructions are required to meet the standard New Zealand building regulations.[4] Around 1996 Graeme North, Chair of the Standards New Zealand/Standards Australia Joint Technical Committee for Earth Building (BD-083), rejected an approach to enlarge the committee's work to write strawbale standards for Australasia despite the wide range of building methods that utilised both earth and straw. The rejection was based on several reasons: Earth buildings rely on the binding properties of clay. Once this is absent, then you have another material and set of "rules"; Strawbale was relatively recent in New Zealand and Australia and did not have a large number of local examples or performance history to draw upon; There was no adequate funding available to enable them to do the work. Some members of the committee had no experience or interest in strawbale. Despite this outcome, a few years later the 'New Zealand Earth Building Standards' were published in late 1998/1999. Standards development was managed by Standards New Zealand and supported by the members of a voluntary Technical Committee. The publications are available from Standards New Zealand: NZS 4297:1998 Engineering Design of Earth Buildings (Specific Design) NZS 4298:1998 Incl Amendment#1 2000 Materials & Workmanship for Earth Buildings NZS 4299:1998 Incl Amendment#1 1999 Earth Buildings Not Requiring specific Designs In mid-2012, access to these three standards documents cost over NZD $300 to purchase,[5] however they are usually easy to locate in a local library via services such as WorldCat and are not particularly long documents. The Earth Building Association of New Zealand suggests that the final document (NZS 4299:1998 Incl Amendment#1 1999 Earth Buildings Not Requiring specific Designs) may be useful for earth builders seeking a way to legally avoid dealing with certain planning expenses. Also in 1998 the Building Research Association of New Zealand published an article entitled Guidelines For Strawbale Building In New Zealand by G. North, R. Walker, B. Gilkison, N. Crocker, A. Alcorn, T. Drupsteen in its BUILD magazine. This was apparently later republished as a 'bulletin' on straw bale building best practices[6], most recently updated in December 2010[7]. One of the authors of that paper, architect Graeme North, presented a paper entitled Strawbale Building Guidelines for Wet and Humid Climates (Such as New Zealand's) at an international conference in 2002. For further information on local regulations and conditions, consider visiting the Earth Building Association of New Zealand. IrelandEdit The Building Regulations in Ireland require you to obtain a fire certificate and with strawbale construction, this is not an easy process because there are so few straw bale buildings in Ireland (with the majority being domestic houses which do not require fire certificates). Fire Officers have no reference point and as yet are reluctant to accept that this type of construction is occuring across the world and can meet the standards set out in the Building Regulations. ↑ The handbook is available online for over AUD $100, a sample is also available here. ↑ Canada Mortgage and Housing Corporation - Documents matching 'straw' ↑ Rock, limestone and clay - Clay, Te Ara - the Encyclopedia of New Zealand. Carl Walrond, 2009-03-02.] ↑ Unit Standard 13033 - Alternative Building, Matt Thompson. 2011-08-02. 
↑ NZS 4297, NZS 4298 & NZS 4299 - Earth Building Set ↑ BRANZ Bulletin #398: Strawbale Construction, Building Research Association of New Zealand, 2000. ↑ BRANZ Bulletin #530: Strawbale Construction, Building Research Association of New Zealand, December 2010. This page may need to verify facts by citing reliable publications. You can help by adding references to reliable publications, or by correcting statements cited as fact. MaterialsEdit Straw sources, bale dimensions and compressionEdit Straw-bales can be made from a range of plant fibers, not only grass-family species like wheat, rye, barley, blue-grass and rice, but also flax, hemp, etc. (Bales of recycled materials like paper, pasteboard, waxed cardboard, crushed plastics, whole tires and used carpeting have also all been used or are currently being explored for building.) Basic straw-bales are produced on farms and referred to as "field-bales". These come in a range of sizes, from small "two-string" ones 18 in (460 mm) wide, by either 14 or 16 in (350 to 400 mm) high, and 32 to 48 in (0.8 to 1.2 m) long, to three-string "commercial bales" 21 in wide, by 16 in high, by 3 to 4 ft long. These sizes range from 40 to as much as 100 pounds (18 to 45 kg). Even larger "bulk" bales are now becoming common, 3 by 3 ft (1 by 1 m), or 3 x 4 ft (1 m by 1.2 m) by 6 ft (2 m) long and even 4 x 4 x 8 ft (1.2 by 1.2 by 2.4 m) long, weighing up to a ton, plus rolled round bales 4 to 5 ft (1.2 to 1.5 m) in diameter. All of these "economy-size" units also offer unique potential for imaginative designers. A newer trend is the use of high-density recompressed bales, sometimes called strawblocks, offering far higher compression strength. These bales, "remade" from field bales, in massive stationary presses producing up to 1 million pounds of force (4 MN), were originally developed for cargo-container transport to over-seas markets. But innovators soon discovered that where a wall of "conventional field bales" is able to support a roof load of 600 pounds per foot (900 kg/m), the high-density bales can support up to 3,000 to 4,500 pounds per foot (4,500 to 7,000 kg/m). This makes them particularly suited to load-bearing multi-storey or "living-roofed" designs, and they may be faced with siding, gyp-board or paneling and have cabinetry hung directly from them with long sheet-rock screws. They are available in a range of sizes from different companies' presses but 2' long by 2' high by 18" wide might be considered "typical"; because they are bound with horizontally ties or straps, at 3" or 4" intervals vertically, they may be recut with a chain-saw at a range of heights. They are usually used in "stacked bond", with the straws running vertically for greatest strength and tied with "re-mesh" on both sides before stucco application. Choosing your balesEdit straw source twine tightness Keeping your bales dry before buildingEdit Keeping your bales dry is extremely important. This can be a challenge, especially when building load bearing. A dry barn on site is a big advantage. Or try to negotiate to leave a trailer with bales on site for a month. If neither of those is an option, some hints: Have your bales delivered only when ready to build Put a double layer of pallets under your bale pile. Make multiple local piles of bales. Put those near the places where you will need bales. This will speed up construction, and thus limit the times bales are exposed. Stack the bales in pyramid shape. Cover them in tarps. 
The pyramid shape will help rain water drain off the pile. You don't expect windy rain? Then leave the bottom part of the tarp exposed. Wind will help moisture (condensation, ...) that did get to the bales dry up. CharacteristicsEdit The thick walls (typically 21 to 26 inches (530 mm) when stuccoed/plastered), result in deeper window and door "reveals", similar to stone and adobe buildings. Since the bales are irregular and may be shaped easily, they are readily adaptable to curved designs, and when plastered, tend toward a relaxed, imperfect texture and shape. If flat, straight walls are desired, this can be achieved, as well, by the application of more plaster. Passive solar Availability, types and cost Resistance to pests Resistance to fire Design and construction challenges Acoustics of straw bale structuresEdit A report carried out in Denmark (Halmhuse - Udformning og materialeegenskaber Straw_Bale_Construction/Resources/Technical_Studies) measured the sound insulation performance of a wall in an existing home. The measurements were carried out in a wall with both horizontal strawbales (where the straws were perpendicular to the plane of the wall) and on a wall with vertical strawbales (where the straws were parallel with the plane of the wall). In both instances there were approximately 40 mm of clay rendering on each side of the wall. In the first instance the sound insulation (expressed with the sound reduction R'w) was found to be R'w=52 dB and in the second instance to be R'w=46 dB. The second result is affected considerably by bed-lofts in both rooms that were carried by a wooden framework in the wall. DELTA [1] estimates that an construction focusing on reducing the transmission due to openness in the construction (flank transmission), would be able to obtain values of 53-54 dB, regardless of the direction of the straws. For comparison it can be mentioned that the requirement of the Danish Building Regulations in 2004 for a wall that separates apartments in housing blocks is 52 dB, while the requirement for walls between non-detached houses built in accordance with the Danish Building Regulations for Small Houses is 55 dB. It should be mentioned that walls that only just satisfy these code requirements are not always perceived as satisfactory by the residents. For most other applications the strawbale-walls will have satisfactory sound insulation performance. Within a dwelling the sound reduction is particularly satisfactory and the actual sound insulation will most likely be determined by the doors. A less formal study was made of a recording studio in Sydney by John Glassford of Huff 'n' Constructions. This has yet to be added. LinksEdit These links are what I have for SB recording studio's http://www.alteredstaterecords.com/strawbalestudio.html http://www.aaronenglish.com/aaronbio.html http://www.johari.co.uk/music.html http://web.archive.org/20020228092424/www.geocities.com/dreamingbones/music.html http://www.pindropclub.co.uk/strawdio http://sbregistry.greenbuilder.com/search.straw?RID=110 InsulationEdit UKEdit Straw in steady state conditions is an unexceptional insulator in the context of insulating materials that are typically considered when designing a building envelope. 
Typical Thermal Conductivity Values, W/mK Thickness required in a frame construction to provide U = 0.2 W/m2K Hemp-lime 0.06 - 0.09[citation needed] 450mm Straw 0.05 - 0.07[citation needed] 350mm Wood/Hemp/Wool Fibre 0.038 - 0.040[citation needed] 200mm Mineral Wool 0.032 - 0.040 175mm Rigid Foam 0.022 - 0.030 150mm Note: of the materials given in the table above straw is the only one that can also be used as the principle structural element. Note: U = 0.2 W/m2K is a steady state target proposed in case studies for the 2013 Part L draft, the new regulations do allow dynamic methods of analysis where natural fibre insulation materials perform significantly better. However, the first point to note is that in common with all natural fibre insulation, straw performs better under dynamic conditions. So if a typical day/night cycle is considered better in-situ performance will be recorded than would be predicted from a typical U value calculation. The next point to consider is that straw is used in bale form and typically laid flat to maximise stability during construction. So more straw is put into the building to fascilitate the construction sequence than would be needed to provide adequate thermal performance. Finally straw is low density and the higher density renders are thin. So thermal mass and thermal lag time are less than for a traditional masonry construction. However natural fibres interact with moisture in the air which causes a 'virtual' thermal mass effect as water vapour changes phase, this has been more thoroughly investigated in hemp-lime than straw, but is still not fully understood. USEdit A carefully constructed straw-bale building has excellent thermal performance because of its combination of the bales' high insulative value and the thermal mass provided by the interior's thick plaster coating. (Read the section on thermal mass for more on the advantages of a high mass construction.) A good starting point is a discussion of what R-value is, and what it is not. It is not an absolute measure of how energy efficient your building is. It is not even a perfect way of predicting the wall's contribution to thermal comfort. It is one piece of information about the wall that, with other information, can enable you to estimate the heat loss and heat gain through the walls. R-value is the inverse of U-factor (R = 1 / U). U-factor is a measure of thermal conductance, or how easily a material (or system) allows heat to pass through it. This is how U-factor is defined (in the U.S.): the number of British thermal units that pass through one square foot of a material (or system) per hour with a one degree Fahrenheit temperature difference between the two sides of the material. Mathematically: U = ( B t u h × a × F ) {\displaystyle U=\left({\frac {Btu}{h\times a\times F}}\right)} Btu = British thermal units, a = area in square feet, F = temperature fahrenheit In most other countries U-factor is defined in terms of Watts per square meter per Kelvin [W/(m2*K)]. To convert metric (SI) U-factors to inch-pound (IP) U-factors multiply by 0.1761; to convert the other way, simply divide by 0.1761. To convert IP R-values to metric R-values, multiply by 5.6783. When a laboratory tests a material (or system) to determine its thermal conductance or resistance, they determine the heat flow from one side to the other on the basis of measured surface temperatures and heat energy required on the warm side of the wall to maintain a steady heat flow. 
This provides the U-factor, which is then converted to R-value for some purposes. (Nehemiah Stone, 2003) (the following comments are imperial R value) The theoretical R-value (thermal resistance) for a 16.5 inch (420 mm) straw bale was calculated by Joseph McCabe as 52 (RSI-9.2). This is compared with a theoretical R-value for 3.5 inch (90 mm) of fibreglass (the conventional insulation material used in home construction) of 13 (RSI-2.3). This means fibreglass has an R-value of about 3.7 per inch (RSI-0.26 per centimeter) and straw bales have about 3.2 per inch (RSI-0.22 per centimeter). Some lab tests of straw-bale assemblies have found significantly lower R-values in practice. However, the more conservative of these results still suggests an R-value of 28 for an 18" wall [2], which is a significant improvement over the R-14 of an energy-efficient insulated 2x6 wall[3]. Straw-bale experts suggest that it is possible to approach theoretical R-values by giving more attention to detailing, but this has never been documented. Tests have shown a range of values from R-17 (for an 18" bale wall) to R-55 (for a 23" bale). Analysis at Oak Ridge National Lab, among other places, has shown that R-values for insulation materials used in "standard" walls are generally much higher than the R-value for the wall as an assembly of disparate materials. Joe McCabe recently postulated that the same phenomenon could account for the difference between the high values from his testing of bales and the lower values obtained in the 1998 Oak Ridge test of a straw bale wall system. While it is possible that the relatively low densities where bales abut each other might contribute to greater heat loss than would be measured through an individual bale, it is unlikely that this would account for the entire difference. This difference between bales and bale walls is similar to the difference between standard insulation and what is found in stud framed walls (insulation voids, thermal bridges, uninsulated headers, and other faults). It is noteworthy that all tests of straw bale wall systems prior to the Oak Ridge test in 1998 had potentially significant shortcomings and should not be considered particularly reliable. The last Oak Ridge test had no identified deficiencies and is considered by most to be an accurate determination of the thermal resistance of straw bale walls. ORNL determined the R-value to be R-27.5 (or R-1.45/inch), or R-33 for three string (23") bale wall systems. Shaving a bit off the top just for conservatism's sake, the California Energy Commision officially regards a plastered straw bale wall to have an R-value of 30. Thermal massEdit Main page: Thermal mass The interior plaster on a straw bale wall works as an excellent thermal mass on a diurnal cycle. Thermal mass reduces temperature swings due to daytime warming and night time cooling, by absorbing and then gradually releasing heat. This can result in a direct reduction in the need for fuel or electricity to regulate temperature, and indirectly in savings through lifestyle adjustments: occupants of a moderate environment, with only gradual temperature swings, are less likely to use artificial heating and cooling. This is most easily achieved at high desert altitudes where a clear sky contributes to both warm days (solar gain) and cool nights (nighttime cooling), but the principle still works in other climates as well. 
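The unit bookkeeping in this section (imperial U-factors and R-values versus SI U-factors and RSI, plus per-inch ratings) is easy to get wrong, so the helpers below collect the conversion factors quoted above and re-derive a few of the figures in the text. They are a sketch for checking arithmetic only, not an energy model, and the thickness formula deliberately ignores surface films and any other layers in the wall.

```python
# Conversion factors quoted above: R(IP) = 5.6783 x RSI, and U(IP) = 0.1761 x U(SI).
def rsi_from_r(r_ip):
    return r_ip / 5.6783

def r_per_inch(r_total, thickness_inches):
    return r_total / thickness_inches

# McCabe's theoretical figure: R-52 for a 16.5 inch (420 mm) bale.
print(round(rsi_from_r(52), 1))        # ~9.2, the RSI value quoted above
print(round(r_per_inch(52, 16.5), 1))  # ~3.2 per inch

# The fibreglass comparison: R-13 for 3.5 inches.
print(round(r_per_inch(13, 3.5), 1))   # ~3.7 per inch

# Thickness of a single insulation layer needed for a target U, ignoring surface
# films and other layers: d = conductivity / U.  Reproduces the UK table above,
# e.g. straw at 0.07 W/mK and rigid foam at 0.03 W/mK for U = 0.2 W/m2K.
def thickness_mm(conductivity_w_per_mk, u_target=0.2):
    return 1000 * conductivity_w_per_mk / u_target

print(round(thickness_mm(0.07)))       # 350 mm
print(round(thickness_mm(0.03)))       # 150 mm
```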
(Please help expand this section, specifically by adding comparative data.)

Straw bales, like all other organic insulation materials, are better able to buffer heat than inorganic insulation[citation needed]. Basically this improves thermal comfort within a building: exterior temperature swings are delayed and damped. Having thermal mass is the difference between the interior comfort experienced within a cathedral and in a corrugated shed, whether in summer or winter. The fact that a straw bale wall should be plastered on inner and outer surfaces greatly enhances the heat-buffering effect by substantially adding to the thermal mass of the building. The combination of insulation with sufficient thermal mass creates the high level of comfort experienced in most straw bale buildings. Adding extra thermal mass through even thicker plaster layers on inner surfaces runs into diminishing returns: doubling the thickness beyond 35 mm (which seems to be the optimum) increases comfort by an insignificant amount. Depending on the location, overheating can take place if there is too much equator-facing glass. This can be combated by adding extra thermal mass. Shading, though, is far more effective, and correctly dimensioning the amount of glazing is better still, as it avoids the heat loss through the glazing at night and on overcast days.

(Passive solar should be moved to a more appropriate section.)

Passive solar refers to buildings designed to maximise the heating and cooling effects of the environment around them. They are called passive because there are no (or few) parts of the design that require energy to operate. The most common technique for passively taking advantage of the environment is maximising solar gain by exposing interior surfaces to the sun's warmth and then designing the building to best contain that warmth. At the other end of the scale, where climates are hot and passive cooling is what's needed, one technique is using rising warm air to draw basement-cooled air up through a building. Any building taking advantage of passive solar gain must have well-insulated interior surfaces which are exposed to sunlight and have enough mass to store daytime heat and release it at night. How suited a straw bale house is to taking advantage of solar gain depends on the mass (think of thickness) of the inside plaster coating, though some maintain that straw bale constructions are inherently unsuitable for passive solar gain [4] (although the article seems to neglect the surface plaster). It should be stressed that straw bale homes are not inherently good for passive solar gain; they need to be designed to make use of it, it doesn't just happen. The same is true of any building material or system. Following are the basic features that distinguish straw bale buildings designed to maximise passive (think of free and sustainable) heating and cooling:

Limited exterior wall surface with high insulation.
Equator-facing, east and west roof overhangs correctly sized to block the summer sun (angle) and still allow the lower winter sun angle to provide heating of interior thermal mass (see the sizing sketch below).
Passive preheating/precooling of external air by drawing it through cellars, porches, glass houses and heat exchangers.

Features specific to cold climates
Large (super-insulated, low-e) glass surfaces with high orientation for maximum sun exposure of the building's interior. In straw bale buildings the inside plastered surface of the bales is a great surface for collecting the sunlight's heat and radiating it slowly back to the inside space. See also: solar gain (Wikipedia).
Superinsulated doors, windows and frames. Glazing with low-emissivity glass coatings facing outwards.
Position doors for minimum wind exposure, preferably with an enclosed porch. External postbox, not a slot through the door.
Building envelope air-tightness (see below).
For extra winter heating the focus is on renewable fuels (plant oils, charcoal and wood) or sun-heated systems (solar collectors or heat pumps).

Features specific to hot climates
Glass openings (and leisure areas) need to be protected from heat radiated from surrounding objects like sun-baked sand or earth; outside planting can greatly reduce radiated ground heat.
Shading and orientation to avoid sun exposure, especially of the building's interior.
Position windows where they can make the most efficient use of the prevailing wind for cooling and ventilation.

One common source of confusion when talking about 'passive' construction is the term 'breathe', which is more accurately known as "vapor permeability". People talk about straw bale walls breathing, but this has nothing to do with air moving through the wall; it's about moisture moving through the wall. Really it is better to refer to it as moisture permeability. In this way walls that can transport odour-filled moisture to the outside contribute to high air quality, without air moving through the wall.
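Sizing the equator-facing overhangs listed above is a small geometry problem. The sketch below is only a first approximation under stated assumptions: solar-noon sun altitudes taken as 90 - latitude +/- 23.5 degrees, the overhang placed at the window head, and no account of shoulder seasons or site shading. It is not a substitute for a proper solar study.

```python
import math

# Rough sizing of an equator-facing roof overhang from solar-noon sun altitudes.
# Assumptions: altitudes at the solstices taken as 90 - latitude +/- 23.5 degrees,
# overhang at the window head, window height measured below it.

def noon_altitude(latitude_deg, declination_deg):
    return 90.0 - latitude_deg + declination_deg

def overhang_projection(window_height_m, summer_altitude_deg):
    """Horizontal projection that just shades the full window at summer noon."""
    return window_height_m / math.tan(math.radians(summer_altitude_deg))

latitude = 45.0                            # example site
summer = noon_altitude(latitude, +23.5)    # ~68.5 degrees
winter = noon_altitude(latitude, -23.5)    # ~21.5 degrees, sun passes under the overhang
proj = overhang_projection(1.8, summer)    # 1.8 m tall glazing (example)

print(f"summer noon altitude {summer:.1f} deg, winter {winter:.1f} deg")
print(f"overhang projection ~{proj:.2f} m")   # ~0.71 m for these numbers
```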
Availability, types and cost

Availability

Straw is an agricultural waste product, a by-product of grain harvesting. Many different kinds of straw are baled and can be used for construction. Straw is widely available, and is generally an abundant, renewable resource. Relatively little energy is consumed in harvesting, baling and transporting bales to a building site. In bulk, straw bales are generally sold for close to the cost of baling and transport. Farmers will sometimes sell bales for under cost in order to clear storage sheds prior to a harvest. In most regions, straw is baled only once each year, and so must be kept dry and stored for use at other times of the year. Straw production and demand are relatively constant; however, high demand for bales used for erosion control following forest fires can create a temporary shortage.

Types

Bales are rectangular compressed blocks of straw, bound by strings or wires. Straw bales come in all shapes and sizes; rectangular bales are the only bales suitable for building. The round bales that are now becoming popular would require re-baling before use, which is not recommended. Three-string bales (585 x 405 x 1070 mm), common in the western USA, have an average weight of 29 kg. The two-string bales (460 x 350-450 x 960 mm), which are common in the rest of the USA and most of the world, are easier to handle and have a weight ranging from 15 to 19 kg. Besides these traditionally sized bales, big jumbo bales are also becoming popular. There are basically two sizes in use: the real jumbo is 1200 x 760 x 2400 mm, and the mini-jumbo is 800 mm wide and available in various lengths and heights depending on the baling machine used. The jumbo bales are appropriate for bigger industrial buildings, where they show definite advantages due to their high load-carrying capacity of up to 300 kg per bale for the 1200 mm wide variety. Only machine handling is possible due to their weight. The greater stability and bigger size of jumbo bales compared to conventional bales favour rapid and easy construction. Small bales range in price from 1.50 USD to 6 USD per bale. Prices go up rapidly when you take transportation into account.
Jumbo bales, including transportation, range in cost from 15 USD to 30 USD per bale. This depends upon when you purchase the bales, how far they need to be transported, and the type of bale - whether it's wheat straw, flax straw, or rice straw. Different "waste" products have different values for farmers and some are less usable than others for agricultural purposes. The best way to get an idea of prices in your area is to look in the [[../../Resources/Worldwide_Contacts|Worldwide organisations and contacts]] section of this book. Of course, if you find some useful information, come back and add it to this page.

Resistance to pests

Straw bales are thick and dense enough to keep out many kinds of pests. As well, the outer layer of plaster makes them unattractive or impenetrable to animals and insects. Finally, because straw contains little nutrient value for most animals and insects, it does not attract pests. Termites like moist, damp conditions. While a wall is kept dry, there is little danger that termites will take any interest in it. When termites do manage to enter a wall, they tend to bypass the straw and attack any wooden studs. In North America, termite attacks on straw bale houses have been very rare.

Resistance to fire

Although loose straw is quite flammable, once packed into a bale it is too dense to allow enough air for combustion. By analogy, it is easy to light a single piece of paper on fire, but difficult and time-consuming to burn an entire phone book. In construction it is critical to have, at a minimum, a parge coat of plaster on all surfaces of the wall. Parge coating the wall involves trowelling on a thin coating of mortar and brushing it smooth. A typical failure of straw-bale homes involves frame walls set against straw-bale walls without a parge coat. A spark from an electrical short or an error by a plumber ignites the hair-like fuzz on the exposed bale, and the flame spreads upward and sets the wood framing on fire. The typical fire results in little fire damage to the bales, but extensive water damage due to the fire suppression activities. In 1993 a plastered straw-bale wall assembly passed the ASTM E-119 fire resistance test as a 2-hour fire-wall assembly. In this test a gas flame blows on one side of the wall at approximately 2000 degrees Fahrenheit (1100 degrees Celsius) while the temperature of the other side of the wall is continuously measured. The test showed no burn-through and a maximum temperature rise of 60 degrees Fahrenheit (33.3 degrees Celsius).

Limits to structural strength

Load-bearing straw-bale walls are typically used only in single-storey or occasionally double-storey structures. A dug foundation (basement) is uncommon. An all-straw vaulted building was designed and built in Joshua Tree, California, and greatly exceeded the structural requirements for this highly active seismic zone. Post-and-beam straw-bale structures have been used for buildings as large as 14,000 square feet (1,300 m²) and even for a United States Post Office, in Corrales, NM [5].

UK Structural Design of Loadbearing Straw Bale Buildings

In the UK there are currently two main ways of building with straw: onsite or prefabricated. In both cases the straw is covered by an earth or lime render - note, NEVER a cement render! In the UK the recommended minimum density for a straw bale is 110 kg/m^3; this is to ensure a good bond between the straw and render by improving the dimensional stability of the straw.
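A quick way to check bales against the 110 kg/m^3 guideline is to compute density from measured size and weight. The sketch below simply divides mass by volume, using the nominal bale sizes and weights quoted earlier in this book as example inputs; real field bales vary, so weigh and measure a sample rather than relying on nominal figures.

```python
# Check as-delivered bale density against the 110 kg/m^3 UK guideline,
# using the nominal bale sizes and weights quoted earlier as example inputs.

def bale_density(length_mm, width_mm, height_mm, mass_kg):
    """Density in kg/m^3 from overall bale dimensions and weight."""
    volume_m3 = (length_mm / 1000) * (width_mm / 1000) * (height_mm / 1000)
    return mass_kg / volume_m3

# Three-string bale: 1070 x 585 x 405 mm at ~29 kg
print(round(bale_density(1070, 585, 405, 29), 1))   # ~114 kg/m^3 - meets the guideline

# Two-string bale: 960 x 460 x 400 mm at ~17 kg (mid-range figures)
print(round(bale_density(960, 460, 400, 17), 1))    # ~96 kg/m^3 - would need denser bales
```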
The following is based on the Natural Building Materials lecture series given at the University of Bath (BRE Centre for Innovative Construction Materials).

Traditional Construction

Vertical Load Capacity
Current practice is to ignore the strength of the straw and only consider the strength of the render. Recent testing at the University of Bath has shown that the render, more than any other variable, affects strength and stiffness. Consider a typical wall section for a one-storey building that uses standard two-string bales laid flat (450 mm) and 35 mm of render internally and externally. A timber base and header plate are positioned to ensure uniform load distribution. Assume that there is sufficient composite behaviour that the render will be restrained from buckling by the straw, that the wall is plumb, and that the load is concentric. Then:

Area of render, A = 2 × 35 × 1000 = 70×10^3 mm^2 per m run of wall.
So even a low-strength mortar with σc = 0.5 N/mm^2 gives a theoretical allowable force F = A × σc = 35 kN per m run of wall.

However, it is unlikely that a wall will ever be built perfectly plumb, so a practical limit of 10 kN/m run is applied. Additional checks should be done if the wall is to be eccentrically loaded, as this will result in unequal load share between the internal and external renders.

Lateral Load Capacity
Once again the strength of the straw is ignored and composite behaviour is assumed. Failure is taken to be when the render cracks and leaves the straw vulnerable.

Calculate the second moment of area, I = 2[b(t^3)/12 + bt(d^2)] (from the parallel axis theorem).
Calculate the section modulus, z = I/y (y is the distance from the centroid to the extreme edge).
The moment resistance of the wall is given by M = z × σb (σb of the render; ignore the straw).
Rearrange M = w(l^2)/8 for w to find the failure load.

(A numerical sketch of these vertical and lateral checks is given below, after the design and construction notes.)

Racking Resistance
Straw constructions that are one or two storeys high generally do not have racking issues, provided the walls are continuous and window openings are restricted to 50% of the wall area.

Prefabricated

The main supplier of prefabricated straw bale units in the UK is Modcell. The panels are made near to site in a "flying factory", allowing the use of local straw, which minimises embodied carbon due to transportation. The frame is machine-cut glulam with steel corner bracing to improve racking resistance. These can be load bearing up to three storeys or used non-structurally as cladding panels.

Design and construction challenges

Straw-bale construction is still considered experimental in many jurisdictions. Building codes may not include it, local authorities may not recognise it, and most contractors will probably not be experienced in its use. Straw-bale buildings must be carefully designed to eliminate the possibility of moisture entering the walls, especially from above. Successful designs often incorporate roof overhangs that are wider than normal and roof shapes and detailing that minimise the risk of water splashing against walls. Because straw-bale walls are much thicker than normal walls, there is sometimes a compromise between the size of the building's footprint and the amount of living space.
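The sketch below works through the vertical and lateral capacity checks set out under Traditional Construction above. The render stresses (0.5 N/mm^2 in compression, 0.2 N/mm^2 in bending) and the 2.4 m storey height are illustrative assumptions for the example, not values taken from any code or test report.

```python
# Sketch of the vertical and lateral capacity checks described above for a
# rendered straw bale wall, per metre run. The render stresses and the storey
# height are assumed example values, not figures from any code.

def vertical_capacity_kn_per_m(render_t_mm=35.0, sigma_c=0.5):
    """Theoretical axial capacity (kN per m run): render area x allowable
    compressive stress, ignoring the straw. Practice caps this at ~10 kN/m."""
    area_mm2 = 2 * render_t_mm * 1000.0        # two skins over a 1 m run
    return area_mm2 * sigma_c / 1000.0         # N -> kN

def lateral_capacity_kn_per_m2(render_t_mm=35.0, wall_t_mm=450.0,
                               sigma_b=0.2, height_m=2.4):
    """Uniform lateral load (kN/m^2) at which the render skins reach their
    allowable bending stress, for a simply supported span (M = w*l^2/8)."""
    b = 1000.0                                  # mm, per metre run of wall
    d = (wall_t_mm - render_t_mm) / 2.0         # skin centroid offset from wall centroid
    i = 2 * (b * render_t_mm**3 / 12.0 + b * render_t_mm * d**2)   # mm^4
    z = i / (wall_t_mm / 2.0)                   # section modulus, mm^3
    m = z * sigma_b                             # moment capacity, N*mm per m run
    l = height_m * 1000.0                       # span, mm
    return 8.0 * m / l**2                       # N/mm per m run == kN/m^2

print(vertical_capacity_kn_per_m())             # 35.0 kN/m (theory)
print(round(lateral_capacity_kn_per_m2(), 2))   # ~3.73 kN/m^2 with these assumptions
```

With these inputs the axial figure reproduces the theoretical 35 kN per metre run derived above (capped at 10 kN/m in practice), and the lateral check gives roughly 3.7 kN/m^2.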
Foundations: frost, soil types, insulation
Walls: load bearing, non-load bearing, curved
Finishes: clay plaster, cement render, lime-based plaster, mechanical application
Openings: waterproofing, tightness, design considerations, location, natural lighting
Roofing: green roofs, straw insulated, seashell insulated
Non-residential Buildings
Pushing the Limit: arches, domes, stringless bales
Building Services: electrical cables, plumbing, heating and cooling

Foundations

The first rule of building with straw is to keep it dry ("a good hat and good boots"). This includes the foundations. Moisture will eventually find its way into even the best wall, so foundations must allow any moisture to drain away. An impermeable foundation will trap moisture near the straw and cause it to degrade quickly. Care should also be taken positioning membranes, for the same reason. One possible solution is building a stone, lime mortar and rubble infill foundation. Build it up above ground level to protect against rising damp and rain splash. Other alternatives include rubble piers or rammed earth, depending on the site and the desired aesthetic. Car tires have recently gained in popularity as a cheap but labour-intensive foundation material. Concrete footings/foundations or thickened-edge slab-on-grade foundations are another option. A building with concrete foundations is generally much easier to attain building approval for. While this is the most common solution, producing the cement used in concrete requires large amounts of energy. Concrete foundations contribute up to 70% of the ecological footprint of a straw bale house.

Toe-up

To further prevent water damage, on top of the foundations one will generally find a toe-up, the base for the straw bale wall. It is made of two parallel 5x10 cm (2x4") or 10x10 cm (4x4") beams. These are spaced evenly, slightly less than the width of one bale apart; this is important to avoid water getting into the bales. The space between the toe-up beams is filled with somewhat water-resistant insulation like perlite, sheep wool, crushed shells, or alternatives less friendly to the environment. Pier foundations with joists raised well above ground level are a relatively common option in Australia and Germany. Even if the piers are poured or pre-fab concrete, a vast saving on concrete is made. This technique also has the added bonus of allowing the use of straw bales as underfloor insulation, as they are raised well above grade. Bales can also be stacked over stem walls with joisted floors. With load-bearing straw-bale homes, rubble trench foundations or Earthbag construction foundations are increasingly used as an alternative to conventional footings. Some pioneer designers are even using rock-filled gabions or earth-filled "bastions" in lieu of concrete.

Underfloor insulation

The use of straw as underfloor insulation is usually discouraged because straw will rot and grow mold if it gets damp (>18% moisture content). Avoiding moisture is especially important in kitchens and bathrooms, where flooding is possible due to plumbing leaks, broken washing machines, overflowing bathtubs, etc. A commonly used option for insulating joisted floors is sheep's wool. A bed of shells has been used with much success in Denmark as a combined rubble bed and insulation: at a thickness of between 119.4 and 124.9 mm the conductivity is between 0.120 and 0.112 W/mK. Compared to industrial products (such as expanded ceramic or spun glass or rock), shells therefore provide good insulation as a nearly carbon-neutral industrial waste product.
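For a single homogeneous layer the thermal resistance follows directly from R = thickness / conductivity. The snippet below applies this to mid-range values from the Danish shell figures just quoted, with straw at 0.06 W/mK (from the conductivity table at the start of this chapter) for comparison; it ignores surface resistances, moisture and compaction, so treat it as a rough sketch only.

```python
# Thermal resistance of a single homogeneous layer: R = thickness / conductivity.
# Mid-range values from the Danish shell figures above; straw at 0.06 W/mK
# (from the conductivity table earlier) is shown for comparison.

def r_value_si(thickness_mm, conductivity_w_mk):
    """Resistance in m^2*K/W of one homogeneous layer."""
    return (thickness_mm / 1000.0) / conductivity_w_mk

print(round(r_value_si(120, 0.116), 2))   # shell bed: ~1.03 m^2*K/W
print(round(r_value_si(120, 0.06), 2))    # same thickness of straw: ~2.0 m^2*K/W
```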
Another practical alternative for underfloor insulation is hempcrete, consisting of hemp fibres mixed with natural hydraulic lime. As this is a fairly lightweight mix, it can also be applied as roof or ceiling insulation. Recycled foam glass is yet another option. It's cheap, insulating and hydrophobic: no capillary effect, and no loss of insulation value if placed in a high-humidity environment. One experimental building in Belgium tried to avoid moisture problems in a straw-insulated floor with a clever foundation floor structure. From bottom to top: big stones, sand, strong plastic tarp, sand, recycled building bricks (spaced so air can flow between them), straw blocks, and rammed earth for the floor.

Heating

While thinking about the design of your foundations, or more specifically the foundation pad, this is the time to think about heating options. One of the options gaining popularity is in-floor radiant heating. You can read more about this in the section on building services under heating and cooling.

US specific

Foundations can still be a major cost, as most building codes still require a footing of at least 12 inches or to the frost line, whichever is deeper. They then require that a pad be poured that is at least the width of the bales being used (possibly three inches less if you are going to use rigid insulation on the outside of the foundation) for at least 8 inches above final grade. This is the least restrictive code that has been written to date. If you are not bound by code (rural area) you might be able to get away with using something much less energy-intensive than concrete. Note: definitely check with the local code compliance office or county property appraiser to get their input. Give them a bit of the information here and in other places to warm them to the idea. If you are going to be bound by code you need to know that and follow it. Or alternatively, sell that piece of land and move elsewhere.

Jay H. Crandell, Design Guidelines for Frost Protected Shallow Foundations (2Mb PDF), 1994, U.S. Department of Housing and Urban Development
Thermal insulation of mussel shells, three different densities (2001, Jørgen Munch-Andersen, Birte Møller Andersen and Danish Building and Urban Research). These tests, carried out in 2001, measured the conductivity of mussel shells and can be downloaded in Danish from the Straw_Bale_Construction/Resources/Technical_Studies Technical studies section of this book. The shells were dried in a 60 °C oven before the tests. The tests were carried out following EN 822, 823 (1994) and ISO 8301 (1991). The margin of error is ±2%. There is an article about the tests in The Last Straw Journal (2005 Issue #52). Part of "Straw Bale Houses - design and material properties".

Walls

Straw Bale Infill

The original "Nebraska" straw-bale building technique was one in which walls of straw bales actually provided the support for the roof structure above, so these are now referred to as load-bearing, and straw-bale homes of this style continue to be built and permitted. An alternative method of construction uses a post-and-beam framing system to carry roof, wind and seismic loads. Once that structure is in place, the walls are then stuffed with straw bales for insulation.
This type of structure is popular because it allows bale placement to be accomplished with the roof already in place, "in the dry", and can easily be demonstrated to conform to building codes, using conventional engineering techniques or a pre-engineered pole-structure design. Some projects best lend themselves to a combination of both techniques, with load-bearing perimeter walls and pole or stick-frame support at the interior or ridge; this is termed a "hybrid" structural system. The building code in the State of New Mexico (1994 ed.) required that all straw-bale homes there be built with rigid structural frames, while other state or regional building codes lack this restriction (see codes for California, Pima County Arizona, etc.) In other jurisdictions without specific "straw-bale codes", straw bale construction is often approved under the building code provisions for alternate methods and materials. Plans are commonly required to be stamped by a licenced structural engineer. Field bales are often laid in stretcher bond like bricks. They are easily retied to make half or custom sized bales. They may also be easily "pinned" internally or on both surfaces (with bamboo, reed, rebar or wood). Bale stacking is often done in community "bale raisings", where family and friends pitch in together to raise the walls in a weekend or two. Novice owner/builders and their friends can continue the work through lathing and plastering of the bales, giving the house their own special imprint, and achieving savings in construction costs, as well. Load Bearing WallsEdit As in the original Nebraska straw bale homes, bales are so compact that they can successfully be used as the structure of the building itself. Strictly speaking it is the outer surface of the bales which provides most of the structure. This matrix of straw fibres on the surface of bales is locked together by the stucco of whatever plaster is being used. Much like the reinforcing bars set into concrete, but over the whole surface and pointing in all directions. Wooden stakes are pounded vertically into the straw bales. They connect the layers of bales, adding stability to the wall. Popular choices are hazel and bamboo. Sometimes, metal rebar stakes are also used. Metal rebar should be avoided because temperature differences between the metal and the bales cause condensation inside the bales. Vertical compression of the bales into one surface is just as important for stability. The most popular way to do this is to run plastic straps vertically around the bales, compressing them between the top plate and the bottom plate. This compression not only strengthens the structure. It also helps the load bearing wall settle much more quickly, enabling a quicker installation of windows for example. The last and least effective option to reinforce (citation needed) is to "cage" the bales on one or both faces with pre-welded or woven mesh, to increase pre-stuccoed wall stability. Do avoid using metal mesh, as it can crack the surface render. This is both because of rust and temperature differences between the organic materials and the metal. These temperature differences also causes condensation inside the bales. The finish you use also has a great effect on the structural integrity of the finished wall. See the section on finishes. Curved WallsEdit An example of a tightly curved wall with flat bales (project: John Swearingen) "Curved walls are fun, pleasing to the eye, and create glorious light patterns. But they are deceptively time consuming! 
I can build three flat walls for the price of one curved wall. And it has all to do with the foundation, curbs, window bucks, window flashing, roof details." (Straw Bale contractor Frank Tettemer of Living Sol) As the above quote points to, time, and details, are an important consideration when deciding if your building will have any curved walls. How will you put the gutter on, what about the roof structure, the foundation? Some people also find any aesthetic advantages outweighed by the problems of using the rounded shapes on the inside. So, what needs to be considered? For gentle curves the bales can be laid against a wall and kicked, as you would if you were breaking a small branch. This can be done with bales laid flat or on edge. Of course it's best if the bales on all walls are lying the same way, but it's not a strict necessity. For larger walls flat bales would be more prudent, especially if the wall is bearing some weight. Bales placed on edge (largest face outwards) can be shaped well before placing into the wall, and hold their shape well. (The insulation value is almost the same as for bales laid flat.) If the curve is very tight the exposed strings could be a problem. Any such problems are solved if you use some form of surface mesh on both sides of the wall (plastic or metal) which you tie to each other through the wall. The round bale layout results in pie-shaped gaps between the bales. These are best filled with a mixture of clay and straw, the clay serving to hold the straw together. Mesh on the outside of the wall will add additional restraint to the tendency of the bales to "explode" outwards. (for discussion see John Swearingen). An additional way to increase the strength of a curved wall is to add large horizontal straps to each row of bales on the outer face, fixing these to something stable. Curved walls are, by their geometry, inherently less prone to overturning than straight walls. The composite of mesh (tension) and plaster (compression), along with the geometry of the wall, can result in a strong, stable building, if the continuity of the bale wall isn't broken by large openings. Swearingen, John., Archive of the Strawbale Construction Discussion List, accessed March 3rd, 2006. Tettemer, Frank., Archive of the Strawbale Construction Discussion List, accessed March 3rd, 2006. Example of a round building: [6] Structural Capabilities of Bale WallsEdit The bale assembly can do a number of things, depending upon the structural design of the building: Hold itself up, be self-supporting and resist tipping. Keep out the wind; inhibiting air/moisture infiltration. Resist heat transfer (insulation) Reduce water intrusion and migration, store and transfer moisture within the wall. Keep the assembly from buckling, under a compressive load. Keep the assembly from deflecting in a strong wind, when pushed from the sides or end. Keep the assembly from bursting apart in an earthquake, when pushed and pulled from all directions. Hold the plaster at least while it's curing. Keep the plaster from cracking after it's cured, from shrinkage or movement. Support the plaster skins from buckling. Transfer and absorb loads to and from the plaster. Support the roof load (compression). Reduce damage or failure from high winds (ductility). Reduce damage or failure from earthquakes (ductility). Stop bullets and/or flying debris. FinishesEdit Straw-bale walls are most typically plastered on the outside with lime, clay, or a cement and lime mix. 
Inside surfaces are typically lime, clay, plasterboard (gypsum) or Structolite, a US Gypsum product that is formulated for thick applications (Wanek, Catherine). Structural analysis has shown that the straw-bale/stucco assembly behaves much like a sandwich panel, with the rigid stucco skins initially bearing most of the load and adding considerable strength to the wall. An important consideration when choosing a finish is that the outside surface of the walls must be more permeable to moisture than the inside surface. Failing to follow this rule will result in moisture accumulating in the wall, which will eventually rot the bales, just as it would rot anything untreated. As two extreme examples: if you choose to finish the inside surface with cement plaster and seal it with acrylic or latex paint, then any moisture in the wall can effectively only move outwards (assuming the outside is not also painted). If you do the opposite and use natural finishes on the inside but paint the outside with plastic paint, then you are trapping moisture in the walls and rotting is likely.

Cement/sand stucco

Stucco for straw-bale walls can be cement/sand-based, although mixes containing earth or clay, and/or with a high percentage of lime replacing part or all of the cement, are increasingly popular. (Advocates of sustainable construction are becoming increasingly concerned with the fact that for every ton of cement manufactured and used, another ton of climate-changing fossil CO2 is released into the atmosphere.) Avoid cement stucco on bale structures: cement is not breathable, and humidity - the arch-enemy of any straw bale construction - will accumulate inside bales covered in cement stucco.

Clay plaster

Clay plaster allows higher water vapour permeability through the walls than lime plaster, which in turn allows much more than cement plasters. This means the right type of wall will dry quickly when wetted by rain and will effectively transfer any moisture which accumulates in the wall, whether from a leak or from normal day-to-day living (a significant amount). Clay plasters are great regulators of the indoor climate: they 'breathe', which means moisture is absorbed and released - it does not mean that air trickles through the wall. On the inside of a house this property makes clay plaster well suited to damp areas like kitchens and bathrooms; it will absorb periodic moisture and to some extent odour, and slowly release it again. Because clay plaster is typically quite thick, it also serves to regulate temperature by warming and cooling quite slowly. On the outside of the house this effect can even mean that the clay will wick (pull) moisture out of the straw and release it to the exterior air (Wanek, Catherine).
Information about earth plaster systems for straw bale

Lime plaster

(This section needs improvement)
Lime plaster performs in a similar way to clay plaster. Lime plasters consist of lime, aggregate and other additives. They are more resistant to weather, mold and impact than clay plasters, but are more time-consuming and challenging to finish. A good compromise between breathability and water resistance, they are an ideal outside finish for a house, while clay plasters are more appropriate for the inside.
Interview with Andy deGruchy about lime
Lime plaster on straw bale

Tadelakt

This bright, waterproof lime plaster can be used on the inside of buildings and on the outside. It is the traditional coating of the palaces, hammams and bathrooms of the riads in Morocco.
It is characteristically polished with a river stone and treated with a soft soap to acquire its final appearance. Tadelakt has a soft appearance with undulations due to the work of the stone; it is water-tight, which also makes it suitable for making bathtubs and washbasins, and confers great decorative capacities. Tadelakt is generally produced with lime from the area of Marrakesh, but other types of lime can also be appropriate.

Further online reading
Nice pictures of Tadelakt
A very informative page about Tadelakt
Steps of the preparation of Tadelakt
Tadelakt - A world beyond tile...
Discussion on the REPP list, Tadelakt and natural plasters: http://listserv.repp.org/pipermail/strawbale/2006-March/038980.html
Magnesite, or magnesium oxychloride cement, patented in the 1800s as Sorel's cement [7]. http://en.wikipedia.org/wiki/Magnesite

Openings

(Please help us write this section)
This section could take as its starting point the discussion about waterproofing of window openings archived here: http://finance.groups.yahoo.com/group/GSBN-Greenbuilder/message/608

Roofing

Overhangs

Straw bale walls need ample roof overhangs to protect them from humidity, much more so than more common construction techniques.

Hip roof

A hip roof provides ample overhang on all sides of the construction. While the roof itself is more complex to build, it rests directly on the bond beam on top of the straw bale walls. This considerably simplifies and speeds up construction, and no triangular areas have to be filled with straw bales. An excellent place to help you calculate and build your hip roof yourself is http://www.blocklayer.com/Roof/default.aspx .

Weight limitations

If you want to build a green roof or a similarly heavy roof on top of a load-bearing straw bale construction, make sure the walls can carry it. Have it calculated!

Ecological considerations

If your reason for choosing to build with straw bales includes an element of environmental concern, then some options quickly become more attractive than others. In this context the main concerns are the embodied energy of the roofing system and the potential for reuse of the materials at a later stage. For example, the production of new roofing tiles uses very large amounts of energy, which contributes to our burden on the environment: clay/terracotta tiles require large amounts of energy to bake, and concrete tiles must take the burden of the energy used in the extraction and heating of lime to make cement. On the other hand, some types of roofing tiles can easily be removed and used again and again for several hundred years. In many cases, and depending on where you live, collecting rainwater or minimising roof runoff can be important. If you are not collecting your roof water then a green or living roof can be an option. Who wouldn't like a roof garden? Another direction for roofs in straw bale buildings is to make arches and vaults of straw bales so that they all press on each other, giving a stable compressive structure just like the stone arches of ancient Roman times. There is more about arches in the section on Straw Bale Construction/Pushing_the_Limit. The most typical solution is a conventional roof structure attached to a load-distributing plate or beam running all the way along the top of the bale walls.
The Green RoofEdit A green roof with native planting (Please help write this section) One advantage of green roofs is the stability they add to the temperature of the roof. Because of their size and mass they are slow to warm up and cool down. In places where there is a large temperature change from day to night this can be a great advantage. The actual insulation value of a green roof (and here we're talking about not more than 30cm thick) is unclear. The presence or absence of water and roots makes such a large difference that it is hard to generalise. Naturally if the layer of soil is thick enough it will provide more insulation and a considerable amount of stability to the temperature. Roof and Ceiling InsulationEdit One of the easiest and most effective places to add extra insulation is in the space between your ceiling and the roof. So don't overlook this important part of the overall design. Conventional roof structures may be insulated with straw bales, taking advantage of their high insulation values and good acoustic properties. Other alternative insulation includes rice-hulls, cotton or wool batts, soy-based foam and recycled cellulose. According to comments on an unrecorded test by Tim Owen-Kennedy of Vital Systems [8] rice hulls perform just as flame retardant as borate treated cellulose or better, without being treated. According to The Rice Hull House ([[../../Bibliography|Paul A. Olivier]]) from around 2004; ...rice hulls are unique within nature. They contain approximately 20% opaline silica in combination with a large amount of the phenyl propanoid structural polymer called lignin. This abundant agricultural waste has all of the properties one could ever expect of some of the best insulating materials. Recent ASTM testing conducted R&D Services of Cookville, Tennessee, reveals that rice hulls do not flame or smolder very easily, they are highly resistant to moisture penetration and fungal decomposition, they do not transfer heat very well, they do not smell or emit gases, and they are not corrosive with respect to aluminum, copper or steel. (The following quote from a reference in the same article needs to be followed up) "Rice hull has a thermal conductivity of about 0.0359 W/(m.°C); the values compare well with the thermal conductivity of excellent insulating materials (Houston, 1972)." Juliano (1985), p, 696. The thermal conductivity of rice hull ash is reported to be 0.062 W.m-1.K-1. See UNIDO, p. 21. A more recent test done by R&D services of Cookville, Tennessee, indicates a 3.024 R-per-inch." Many of these natural products have a very low impact on the environment and perform excellently, sometimes better than synthetic insulation like rock wool or fibre glass insulation. If straw bales are used in the roof, their weight needs to be considered. Moisture is another consideration, and there is a fire risk if any loose straw is left exposed. Weight considerations are overcome by the fact that web-beams built to the height of the bales can easily bear their weight. To avoid moisture problems, it is important that the bales be treated just as walls are. They need to have good ventilation on the outer surface (a ventilation space) and should be coated with some plaster (typically clay or lime plaster) that can absorb, redistribute and release to the air any moisture. It cannot be overemphasised that no straw should be left exposed, plastering should be done in such a way that it acts as a suitable fire retardant. 
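As a quick plausibility check on per-inch insulation figures like those quoted for rice hulls above, a thermal conductivity can be converted to an IP R-value per inch using the 5.6783 factor introduced earlier. A minimal sketch; note that the quoted rice hull numbers do not all agree with one another, which supports the note above that the reference needs following up.

```python
# Convert a thermal conductivity (W/m*K) into an IP R-value per inch, to sanity
# check quoted per-inch insulation figures such as those for rice hulls above.

INCH_M = 0.0254
RSI_TO_R_IP = 5.6783

def r_per_inch(conductivity_w_mk):
    """IP R-value of one inch of material with the given conductivity."""
    return (INCH_M / conductivity_w_mk) * RSI_TO_R_IP

print(round(r_per_inch(0.0359), 2))   # ~4.02 per inch for the quoted 0.0359 W/mK
print(round(r_per_inch(0.062), 2))    # ~2.33 per inch for rice hull ash at 0.062 W/mK
```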
Related Links

http://www.greenspec.co.uk/html/design/materials/pitchedroofs.html "Pitched roofing materials compared" is part of the National Green Spec website for encouraging sustainable building practices.
http://www.greenroofs.org/ "In 1999, Green Roofs for Healthy Cities, a small network consisting of public and private organizations, was founded as a direct result of a research project on the benefits of green roofs and barriers to industry development entitled "Greenbacks from Green Roofs" prepared by Steven Peck, Monica Kuhn, Dr. Brad Bass and Chris Callaghan. Green Roofs for Healthy Cities - North America Inc. is now a rapidly growing not-for-profit industry association working to promote the industry throughout North America."
http://www.newbuilder.co.uk/archive/sustainable_roofing.asp "Sustainable Roofing" is part of the website of The Green Building Press

To Do: Read w:Green roofs, Green Roofs for Healthy Cities, Scandinavian Green Roof Institute and incorporate relevant material. See the [[../../Bibliography|bibliography]].

Pushing the Limit

(Please help us write this section; you could use the following links to get started.)

Arches and vaults

Designing the catenary arch

A catenary shape helps you build the sturdiest vault construction.

The gravity approach

You will need:
a heavy rope at least as long as the circumference of your arch (thin metal wire rope would be ideal)
an even, rectangular wall at least as high and wide as the arch you want to build. Hint: you can make one out of plywood, pallets or anything you want.
screws and a screwdriver (or nails and a hammer)
a wide roll of paper
a spirit level

If you haven't got the required wall, build one from lightweight materials. Just make sure it's perfectly vertical and level.
Tape the paper to the wall. Hints: vertical strips are easier; have them overlap a few centimeters; number the segments.
Mark a level line on the paper a bit higher than the arch you want to build. Double check the line is level.
Mark a second line on the paper exactly as far down as the height of the arch you want to build.
Mark two points on the top line as far apart as the width of the arch you want to build, and fix a screw or nail at each point.
Fix the rope to one nail, as close to the paper as possible, and lower it from the other nail until it reaches the base line.
Spray paint the rope. Hint: you may also want to quickly spray paint the overlap between paper rolls.

The mathematical approach

You will need: a computer, a pencil, some exact measuring skills, a roll of paper, a long measuring device, adhesive tape, and a flat surface the size of your arch.

Calculate the coordinates of your catenary. You can either use a catenary designer like https://github.com/ddjokic/pyCatenary-NoElast/blob/master/pycatenary.py or http://www.cgl.uwaterloo.ca/~smann/catenary/ , or use the cosh(x) function yourself (a small numerical sketch follows below).
Tape the paper together to form a big paper surface that could cover the entire catenary's surface.
Start out by setting out the triangle defined by the top and the two base points of the catenary.
Precisely mark the coordinates of one point every 50 cm approximately. Keep in mind that the precision of this drawing is key to the stability of your vault, so take your time.
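If you want to "use the cosh(x) function yourself", the sketch below computes set-out heights for a catenary arch of a given span and rise; the curve is y = a*cosh(x/a) - a, and the parameter a is found by simple bisection. The 4 m span, 3 m rise and 50 cm spacing are example numbers only.

```python
import math

# Set-out points for a catenary arch of given span and rise, for the
# "use the cosh(x) function yourself" step above. y = a*cosh(x/a) - a is the
# hanging curve; here it is inverted so heights are measured above the base.
# Span, rise and spacing below are example values only.

def solve_a(span, rise):
    """Find the catenary parameter a (by bisection) so the curve drops 'rise'
    over half the span. Lengths in metres."""
    f = lambda a: a * (math.cosh(span / (2 * a)) - 1) - rise
    lo, hi = span / 1000.0, 1.0e6     # f(lo) is large positive, f(hi) negative
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def setout_points(span, rise, step=0.5):
    """(x, height) pairs along the arch, x measured from one base point."""
    a = solve_a(span, rise)
    points = []
    x = 0.0
    while x <= span + 1e-9:
        d = x - span / 2                          # horizontal distance from the crown
        y = rise - a * (math.cosh(d / a) - 1)     # height above the base line
        points.append((round(x, 2), round(y, 3)))
        x += step
    return points

for x, y in setout_points(span=4.0, rise=3.0, step=0.5):
    print(x, y)    # 0.0 at the bases, 3.0 at the crown
```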
Building the formwork

You will need:
planks. They don't need to be thick, but they do need to be high. Recycled pallet planks are a perfect fit.
a saw. A circular saw will work best, but a hand saw will do.
nails and a hammer, or (preferably) screws and an electric screwdriver
your catenary arch design
straw bales (if available)

Let's start building the arches:
Put the paper arch design on a flat and dry surface (the floor?).
Put the planks on the inside of the arch design. You will have to saw the planks a bit for this, especially at the top.
Put a second layer of planks on top of the first one. Screw them to the first layer. The idea is to have them overlap with the first layer to get some basic rigidity.
Mark every 25 cm on the outside of the arch. This is where the horizontal battens will link the three arches.
Reinforce the arch by screwing two planks in an X shape to the inside of the arch. These should also make carrying the final formwork easier.
Build three arches by repeating the above steps three times. The arches should be almost identical in shape.
Measure the length of your bales. Space your three arches with a distance of one bale's length between each arch. Hint: in some cases it might be easiest to lay the first arch on a flat and dry surface, put bales on top as spacers (keep the length side vertical), lay the second arch on top of that, lay another row of bales, then lay the third arch.
Triple check the arches are all 100% level and perfectly aligned.
Screw the battens to the three arches, removing any spacer bales as you proceed.

Domes

The following link can help: http://minke-strawbaledome.blogspot.com
In France, at least three "paligloo" straw/pallet domes have been built: http://paligloo.free.fr .

Related Resources

Here you can find technical reports to help you convince your friends and building inspectors, contacts to help you and give you advice and support, and some registries of straw bale buildings so you can see what others have done before you.

Content

Technical Studies, Reports and Tests: studies of thermal properties, fire safety, construction strength, moisture issues, links to some building codes and a few bits and pieces.
Worldwide organisations and contacts: a comprehensive list of contacts from around the world so you can find someone near you, and you can use one of the email lists to ask for help and advice.
Straw Bale Building Registries: many examples of straw bale buildings in different climates and regions.
Resources on the internet are many and varied; this is a collection of links to sites with useful information for straw bale builders.

Useful Software

ESP-r "ESP-r is an integrated modelling tool for the simulation of the thermal, visual and acoustic performance of buildings and the assessment of the energy use and gaseous emissions associated with the environmental control systems and constructional materials. In undertaking its assessments, the system is equipped to model heat, air, moisture and electrical power flows at user determined resolution. The system is designed for the Unix operating system, with supported implementations for Solaris and Linux, and is made available at no cost under an Open Source licence." ESP-r is Open Source free software and will run on Linux, OSX and Windows using Cygwin (for advanced nerds).

TRACE 700 "Trane's TRACE 700 software - the latest version of Trane Air Conditioning Economics - brings the algorithms recommended by the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) to the familiar Windows operating environment.
Use it to assess the energy and economic impacts of building-related selections such as architectural features, comfort-system design, HVAC equipment selections, operating schedules, and financial options." (Building Energy Software Tools Directory)

Energy 10 "ENERGY-10™ software analyzes—and illustrates—the energy and cost savings that can be achieved by applying up to a dozen sustainable design strategies. Hourly energy simulations quantify, assess, and clearly depict the benefits of: Daylighting, Passive solar heating and cooling, Natural ventilation, Well-insulated building envelopes, High-performance windows, High-performance lighting systems, High-performance mechanical equipment, And more...". ENERGY-10™ is commercial software. The program can be purchased more cheaply at an educational rate; taking a course online through the Solar Energy Institute may make it possible to get the educational rate.

HOT2000 "HOT2000TM is a low-rise residential energy analysis and design software. Up-to-date heat loss or gain and system performance models provide an accurate way of evaluating building designs. This evaluation takes into account the thermal effectiveness of the building and its components, the passive solar heating owing to the location of the building and the operation and performance of the building's ventilation, heating and cooling systems." (Free with registration)

Technical Studies, Reports and Tests

General

English slides from the Danish report. Part of "Straw Bale Houses - design and material properties" prepared by Jørgen Munch-Andersen, Birte Møller Andersen and Danish Building and Urban Research. Slide show of Danish results

Acoustics

Air-sound-insulation of clay plastered non-loadbearing sb-wall. In Danish only, direct translation requests to user:DuLithgow. Part of "Straw Bale Houses - design and material properties" prepared by Jørgen Munch-Andersen, Birte Møller Andersen and Danish Building and Urban Research. Air-sound-insulation of clay plastered non-loadbearing sb-wall

Thermal Performance of Straw Bale Wall Systems "In this analysis we provide a summary of the results of research that has been done, examine the implications of each to residential thermal comfort, and suggest a reasonable thermal performance value for plastered straw bale walls as a synthesis of the data." Nehemiah Stone, USA, 2003. Thermal Performance of Straw Bale Wall Systems PDF ?Kb. This document is made available by the Ecological Building Network, for which they request a donation.

The Rice Hull House "The rice hulls are unique within nature. They contain approximately 20% opaline silica in combination with a large amount of the phenyl propanoid structural polymer called lignin. This abundant agricultural waste has all of the properties one could ever expect of some of the best insulating materials. Recent ASTM testing conducted R&D Services of Cookville, Tennessee, reveals that rice hulls do not flame or smolder very easily ..." Paul A. Olivier, USA, 2004. The Rice Hull House (PDF 225Kb)

Thermal insulation of earthplastered sb-wall, bale lying flat. In Danish only, direct translation requests to user:DuLithgow. Part of "Straw Bale Houses - design and material properties" prepared by Jørgen Munch-Andersen, Birte Møller Andersen and Danish Building and Urban Research. Thermal insulation of earthplastered sb-wall, bale lying flat

Thermal insulation of earthplastered sb-wall, bale on edge. In Danish only, direct translation requests to user:DuLithgow.
Part of "Straw Bale Houses - design and material properties" prepared by Jørgen Munch-Andersen, Birte Møller Andersen and Danish Building and Urban Research. Thermal insulation of earthplastered sb-wall, bale on edge Thermal insulation of non plastered straw bale, on edge, flat, two different densities In Danish only, direct translation requests to user:DuLithgow. Part of "Straw Bale Houses - design and material properties" prepared by Jørgen Munch-Andersen, Birte Møller Andersen and Danish Building and Urban Research. Thermal insulation of non plastered straw bale, on edge, flat, two different densities Thermal insulation of mussell shells, three different densities In Danish only, direct translation requests to user:DuLithgow. Part of "Straw Bale Houses - design and material properties" prepared by Jørgen Munch-Andersen, Birte Møller Andersen and Danish Building and Urban Research. Thermal insulation of mussell shells, three different densities Fire SafetyEdit Summary of New Mexico ASTM E-119 Small Scale Fire Tests On Straw Bale Wall Assemblies This American document is a compilation of information regarding testing done by SHB Agra Engineering and Environmental Services Laboratory in Albuquerque, New Mexico in 1993. Small Scale Fire Tests (PDF 262 KB) Straw Bale Fire Safety The ability of plastered and unplastered straw bale walls to resist fire is presented, based on a number of tests and field reports to date. Field and laboratory experience show plastered bale walls to be highly resistant to fire damage, flame spread and combustion. Bob Theis, 2003. Straw Bale Fire Safety (PDF 100 KB) ASTM E84-98 Surface Burning Characteristics report Report prepared for Katrina Hayes by Omega Point Laboratories in 2000 (USA). Surface Burning Characteristics report (PDF 452 KB) Fire test of clay as a surface cover material In Danish only, direct translation requests to user:DuLithgow. Part of "Straw Bale Houses - design and material properties" prepared by Jørgen Munch-Andersen, Birte Møller Andersen and Danish Building and Urban Research. Fire test of clay as a surface cover material 30min fire test of clay plastered non-loadbearing sb-wall In Danish only, direct translation requests to user:DuLithgow. Part of "Straw Bale Houses - design and material properties" prepared by Jørgen Munch-Andersen, Birte Møller Andersen and Danish Building and Urban Research. 30min fire test of clay plastered non-loadbearing sb-wall Building CodesEdit City of Cortex Straw Bale Code (Colorado, USA) The City of Cortex ordinance which Dion Hollenbeck scanned and converted to html. City of Cortex Straw Bale Code California Straw Bale Code (USA) This code is from 1995 and has some very general requirements for bearing and non-load bearing constructions, there are also notes on fire safety requirements. California Straw Bale Code (PDF 18 KB) Austin Straw Bale Code (Texas, USA) Austin Straw Bale Code (PDF 18 KB) Boulder Straw Bale Code (Colorado, USA) The purpose of this chapter is to establish minimum prescriptive standards of safety for the construction of structures which use baled straw as a load bearing or non-load bearing material. This code was added to existing legislation in 1981. Boulder Straw Bale Code (PDF 16 KB) Tucson/Pima County SB Code (Arizona, USA) Tucson/Pima County SB Code (PDF 22 KB) Construction StrengthEdit A Pilot Study examining the Strength, Compressibility and Serviceability of Rendered Straw Bale Walls for Two Storey Load Bearing Construction. 
"A pilot study of a wall constructed from straw bales was carried out. The objective was to examine the suitability of such walls for two-storey residential construction. The emphases were placed on the strength, compressibility and serviceability of the rendered straw bale wall. The full-scale wall was tested to failure in laboratory conditions. The result shows that it is feasible to construct a two-storey wall using such system. The test results were compared with the recommendation provided by some of the codes of practice. It was found that the wall has adequate capacity for a two-storey wall construction. Other issues, such as constructability, detailing, and compressibility were also examined in this paper." Michael Faine and Dr. John Zhang, University of Western Sydney, Australia, 2000 Rendered Straw Bale Walls for Two Storey Load Bearing Construction (PDF 507Kb) Compression load testing straw bale walls Peter Walker, Dept. Architecture & Civil Engineering, University of Bath, England, 2004. Compression load testing straw bale walls (PDF ) Settling of non-loadbearing and loadbearing sb-walls after two moisture cycles In Danish only, direct translation requests to user:DuLithgow. Part of "Straw Bale Houses - design and material properties" prepared by Jørgen Munch-Andersen, Birte Møller Andersen and Danish Building and Urban Research. [Settling of non-loadbearing and loadbearing sb-walls after two moisture cycles http://www.by-og-byg.dk/download/pdf/423-8d.pdf] Load-bearing straw bale construction One hundred years of experience with load-bearing plastered straw bale structures, along with a number of laboratory tests worldwide, show these wall systems to be capable of supporting substantial service loads. When properly baled, stacked, and detailed, and plastered both sides with cement, lime, or earthen renders, straw bale walls can support at least residential scale loads, and meet typical building code criteria for strength, serviceability, creep, and durability. Bruce King, USA, 2003. Load-bearing straw bale construction (PDF). This document is made available by the Ecological Building Network, for which they request a donation. Structural Testing of Plasters for Straw Bale Construction Over the past hundred years, plastered straw bale construction has shown itself to be strong and durable in both load-bearing and post-and-beam structures. In load-bearing straw bale systems, the relatively strong, stiff plaster plays a significant role as it works together with the ductile straw bale core to function as a stress skin panel, resisting compressive, in-plane and out-ofplane loading. Kelly Lerner and Kevin Donahue, USA, 2003. Structural Testing of Plasters for Straw Bale Construction. This document is made available by the Ecological Building Network, for which they request a donation. Creep in Bale Walls The tests are aimed at determining the vertical creep or settlement of various bale walls loaded vertically for 12 months. In the base group are two stacks of 6 unplastered rice 3- string bales which are tested with uniform low (100plf) and high (400plf) loads. Dan Smith, USA, 2003. Creep in Bale Walls (PDF). This document is made available by the Ecological Building Network, for which they request a donation. 
Testing of Straw Bale Walls with out of Plane Loads 3-string rice-straw bales (16" x 24" x 4'-0") laid flat and stacked to create 2'x4'x8' straw bale walls plastered with 1" stucco, 2" earth plaster or unplastered were loaded out-of-plane as follows: air-pressure was added to a 4'x8' plastic waterbed bladder placed in a 2" gap between a 4'x8' 2x10@16" stud wall with 3/4" plywood both sides. Kevin Donahue, USA, 2003. Testing of Straw Bale Walls with out of Plane Loads. This document is made available by the Ecological Building Network, for which they request a donation. In-Plane Cyclic Tests of Plastered Straw Bale Wall Assemblies The construction and testing of six full-scale plastered straw bale wall assemblies is described in this report. The specimens consisted of three cement stucco skinned walls and three earth plaster skinned walls representing varying levels of reinforcement detailing. All walls were tested in-plane under either cyclic or monotonic lateral loadings. Measured behavior is presented in this report, along with recommendations for future work. Cale Ash, Mark Aschheim and David Mar, USA (date unknown). In-Plane Cyclic Tests of Plastered Straw Bale Wall Assemblies. This document is made available by the Ecological Building Network, for which they request a donation. Design Approach for Load-Bearing Strawbale Walls "In addition to presenting background information about loadbearing strawbale wall systems this paper presents the results of a series of tests conducted to gain more insight into the various parameters needed for the design of strawbale structures. These parameters include: dead load behaviour, bale response to over time, shear between straw and stucco, and axial load capacity of the stucco skin. Based on the test results the paper presents a design example for comparison of test values with design values." Kris J. Dick, M.G. (Ron) Britton. For presentation at the AIC 2002 Meeting CSAE/SCGR Program Saskatoon, Saskatchewan July 14 - 17, 2002. Design Approach for Load-Bearing Strawbale Walls The Effects of Plastered Skin Confinement on the Performance of Straw Bale Wall Systems "This project will continue this investigation and will include the results of compressive tests of confined straw bale specimens. It will also include an evaluation of construction details for confining skins, and how further research can clarify which techniques are most beneficial when building straw bale structures." Adrianne Wheeler, David Riley and Thomas Boothby. Pennsylvania State University Summer Research Opportunities Program 2004 The Effects of Plastered Skin Confinement on the Performance of Straw Bale Wall Systems MoistureEdit Straw Bale House Moisture Research "Researchers and builders do not know how well straw bale walls deal with moisture. What happens if you build with wet straw? Does it dry out over time? Is straw naturally better able to deal with water than building products such as wood? Will house humidity levels affect straw bale walls—especially during long Canadian winters? Would a vapour barrier help? If rain wets the stucco, does the straw underneath get wet? How do you keep the wall dry by a window when there is no drainage plane behind the stucco to carry the water away?" Canada Mortgage and Housing Corporation. The report is from before 2003 Straw Bale House Moisture Research Pilot Study of Moisture Control in Stuccoed Straw Bale Walls This study was made for the Canada Mortgage and Housing Corporation by Bob Platts of Fibrehouse Limited, USA, 1997. 
Moisture Control in Stuccoed Straw Bale Walls. Pilot Study of Moisture Control in Stuccoed Straw Bale Walls Monitoring the Hygrothermal Performance of Strawbale Walls "A California winery, interested in quality buildings and sustainable action, commissioned the construction of a large strawbale building to be used as a tasting room, barrel storage room, and tank farm on a site adjoining one of their vineyards. They offered access to this unique building for a comprehensive enclosure wall monitoring program." John Straube and Chris Schumacher, USA, 2003. Monitoring the Hygrothermal Performance of Strawbale Walls (PDF). This document is made available by the Ecological Building Network, for which they request a donation. How Straw Decomposes: Implications for Straw Bale Construction "Straw is a natural fiber that can last many thousands of years under certain conditions. Intact straw has been found in dry Egyptian tombs and buried in layers of frozen glacial ice. However, under typical conditions straw will slowly degrade as do all natural fiber materials like wood, paper, cotton fabric, etc." Matthew D. Summers, Sherry L. Blunk and Bryan M. Jenkins, USA, 2003. How Straw Decomposes (PDF). This document is made available by the Ecological Building Network, for which they request a donation. Moisture properties of straw and plaster/straw assemblies "This report is a draft summary of the results of the moisture property testing of a range of plaster types that might be installed over strawbale walls. It reviews the literature for previous data, describes the test protocols, and summarizes the results." John Straube, USA (date unknown). Moisture properties of straw and plaster/straw assemblies. This document is made available by the Ecological Building Network, for which they request a donation. Water vapour transmission properties of clay plaster with various surface treatments / additives In Danish only, direct translation requests to user:DuLithgow. Part of "Straw Bale Houses - design and material properties" prepared by Jørgen Munch-Andersen, Birte Møller Andersen and Danish Building and Urban Research. [Water vapour transmission properties of clay plaster with various surface treatments / additives http://www.by-og-byg.dk/download/pdf/423-8a.pdf] Water vapour transmission properties of straw In Danish only, direct translation requests to user:DuLithgow. Part of "Straw Bale Houses - design and material properties" prepared by Jørgen Munch-Andersen, Birte Møller Andersen and Danish Building and Urban Research. [Water vapour transmission properties of straw http://www.by-og-byg.dk/download/pdf/423-8b.pdf] Moisture accumulation of sb-walls plastered with clay plaster on the inside (warm side) and clay plaster or lime plaster on the outside (cold side) In Danish only, direct translation requests to user:DuLithgow. Part of "Straw Bale Houses - design and material properties" prepared by Jørgen Munch-Andersen, Birte Møller Andersen and Danish Building and Urban Research. [moisture accumulation of sb-walls plastered with clay plaster on the inside (warm side) and clay plaster or lime plaster on the outside (cold side) http://www.by-og-byg.dk/download/pdf/423-8c.pdf] Humidity in straw bale walls and its effect on the decomposition of straw Jakub Wihan explores the physics of moisture in walls in relation to the degradation of straw. He considers practical experience through case studies of straw bale houses and compares simple design calculations with computer simulation. 
The conclusions are compared to knowledge from 27 cases by professional straw bale builders to give guidelines for future work. Humidity in straw bale walls and its effect on the decomposition of straw Studies in other languagesEdit Utilisation de la Paille en Parois de Maisons Individuelles a Ossature Bois "Le programme de recherche comporte le suivi technique de la construction, l'expérimentation du comportement thermique des logements et de l'humidité au sein des parois, ainsi que la détermination en laboratoire des caractéristiques des matériaux et la validation technique de ces procédés constructifs." Alain Grelat, 2004 Utilisation de la Paille en Parois de Maisons Individuelles a Ossature Bois (PDF 824Kb) Halmhuse - Udformning og materialeegenskaber "Laster, Længderetning, Tværretning, Halmhuse og dagslys, Varmeakkumulering, Fugtakkumulering, Lufttæthed og dampspærre, Forebyggelse af svampeangreb, Tage, Varmeisolering, Brand, Tagdækning, Skivevirkning, Terrændæk, Vinduer, Fundamenter, Ydervægge, Fugt, Trækonstruktion, Rundtømmerløsning, Skjult konstruktion, Rammekonstruktion, Muslingeskaller, Varmeisolering for halmvægge." Jørgen Munch-Andersen og Birte Møller Andersen, 2004 Halmhuse - Udformning og materialeegenskaber (PDF 3.0MB) Mikrobielle Empfindlichkeit von Bau-Strohballen "Im Zuge des zunehmenden Umweltbewußtseins der Bevölkerung hat sich auch ein umweltbewußtes Wohnen verbreitet. Viele legen mehr Wert auf ein gesundes Raumklima da sie begreifen, dass sie einen Großteil ihres Leben in ihren Wohnungen zubringen. Zudem hat es in der letzten Zeit eine Reihe von Umweltskandalen gegeben die zu einem Umdenken geführt haben. Die Folge war eine Hinwendung zu Bauprodukten die sowohl gesundheits- als auch umweltschonend sind. Diese Entwicklung wurde und wird auch von der Bundesregierung gefördert (Gütesiegel Natureplus). Zur diesen Produkten gehören vor allem auch Dämmstoffe aus nachwachsenden Rohstoffen. In diesem Bereich wurde in den letzten 10 Jahren viel Forschungsarbeit geleistet u diese Materialien konkurrenzfähig gegenüber konventionellen Dämmstoffen zu machen." Hansjörg Wieland, 2004 Mikrobielle Empfindlichkeit von Bau-Strohballen (PDF 408KB) Worldwide organisations and contactsEdit If you have a contact to add please click edit and use this template: Contact: FirstName LastName, Organisation, Address. Telephone: +1 23 3456789. Email: [email protected] Website: http://somewhere.org The AmericasEdit Contact: California Straw Building Association, The Tides Center, P.O. Box 1293, Angels Camp, CA 95222-1293. USA. Telephone: +1 209 7857077. Website: http://www.strawbuilding.org/ Contact: The Last Straw blog, Email: [email protected] Website: http://tls.buildearth.org Contact: Buildearth.org, Email: [email protected] Website: http://www.buildearth.org Contact: The Last Straw Journal, PO Box 22706 Lincoln Nebraska 68542-2706 USA. USA. Telephone: +1 402 4835135. Email: [email protected] Website: http://www.the.thelaststraw.org Contact: Colorado strawbale association, 2010 Hermosa Dr. Boulder, Colorado 80304. Telephone: +1 303 4446027 Email: [email protected] Website: http://www.coloradostrawbale.org Contact: Straw Bale Association of Nebraska and the MidAmerica Straw Bale Association. Email: [email protected] Website: http://www.strawhomes.com/sban Contact: Ontario Straw Bale Building Coalition, Hank and Anita Carr, 2025 Ventnor Road, RR3, Spencerville ON, K0E 1X0, Canada. 
Website: http://www.strawbalebuilding.ca/ Telephone: +1-87-STRAWBALE Email: [email protected] Email discussion list: http://groups.yahoo.com/group/practical-sbc/ Contact: Straw Bale Association of Texas, P.O. Box 4211 Austin, TX 78763-4211. USA. Telephone: +1 512-3026766 Website: http://www.greenbuilder.com/sbat/ EuropeEdit (Listed alphabetically under their English spelling) European Strawbale Network (Europäisches Strohballen-Netzwerk) Website: http://www.baubiologie.at/europe/ Austria (Republik Österreich) Contact: Herbert Gruber, A-3720 Baierdorf 6. Österreichisches Strohballen-Netzwerk (Austrian Straw Bale Network). Website: http://www.baubiologie.at/asbn/ Belgium (Koninkrijk België) Contact: Geert Goffin (Gigi), Casa Calida, Grootmeers 14, 3700 Tongeren, email: info [at] casacalida [dot] be Website: www.casacalida.be Contact: Herwig van Soom, Kouterstraat 7, B-3052 Blanden. Email: orcaherwig [at] skynet [dot] be Czech Republic (Česká republika) Contact: Jan (Jenik) Hollan. Email: jhollan [at] amper [dot] ped [dot] muni [dot] cz Contact: Lars Keller, Friland 12 B, 8410 Roende. Email: larskeller [at] livinghouses [dot] net Contact: Kermo Jürmann, Tallinn. Email: kermo [at] ehituslahendused dot] ee Website:http://www.ehituslahendused.ee/strawbalebuilding, http://www.ehituslahendused.ee/strawbalebuilding/estonia.htm Contact: Coralie & André de Bouter, La Maison en Paille, Le Trezidoux, 16290 Champmillon. Telephone: +33 545 662768. Email: accueil [at] lamaisonenpaille [dot] com Website: http://www.lamaisonenpaille.com/ and Contact: Philippe Liboureau, "Les Compaillons". Website: http://compaillons.fr Webforum: http://compaillons.naturalforum.net Germany (Bundesrepublik Deutschland) Contact: Dirk Scharmer, Fachverband Strohballenbau Deutschland, Sieben Linden 1, D-38486 Bandau. Email: info [at] fasba [dot] de Website: http://www.fasba.de/ Contact: Sven Eweleit, anderssehn, Seiler Str.16, 30171 Hannover. Telephone: +49 511 33644780. Email: info [at] anderssehn [dot] de Website: http://anderssehn.de Holland (Netherland) Strobouwacademie, contact via website: http://www.strobouwacademie.nl +31-(0)35-5238900 Contact: Michel Post, VIBA werkgroep strobouw. Email: info [at] purplex [dot] nl Website: http://www.strobouw.nl Hungary (Magyar Köztársaság) Contact: Attila Meszaros. Email: tilla [dot] szalmahaz [dot] hu Website: http://www.szalmahaz.hu Phone: +36 20 9772258 Contact: EDILPAGLIA - Italian Association Straw Bale Building, via delle Vigne 12, 51016 Montecatini Terme (pt) Website: http://www.edilpaglia.it Contact: Centro di Permacultura LA BOA - Stefano Soldati, via Boa, 29 30020 Pramaggiore (VE) Website: http://www.laboa.org Italy/ Southern Tyrol Contact: Margareth Schwarz, via d.corse 6, o-Rennweg, I-39012 Meran. Telephone: +39 473 230023 or +39 348 3634054. Faximilie: +39 473 230035. Email: arch.schwarz [at] rolmail [dot] net Contact: Piet Jensen, Norsk jord- og halmbyggeforening, Værnhus, 1540 Vestby. Telephone: +47 64 952246 Email: njh [at] halmhus [dot] no Website: http://halmhus.virkelighet.net/ Romania (România) Contact: Catalina Grigore, Ritmului 16, Bucuresti. Email: catalina [at] eco-habitat [dot] ro Website: http://www.eco-habitat.ro/ Slovenia (Republika Slovenija) Contact: Habjanic Stojan, Slowenien, Biogradnja s.p.., Brezovci 72/a, Sl-9201, Puconci. Email: stojan.oikia[at] siol [dot] net Spain (Espana) Contact: Rikki Nitzkin-Lleida, Spanish Strawbale Network. 
Email: rikkinitzkin [at] earthlink [dot] net Website: http://www.casasdepaja.org Discussion email lists: casasdepaja [at] yahoo [dot] es paja [at] amper [dot] ped [dot] muni [dot] cz Contact: Ulf Henningsson, Rosenvingegatan 1, 431 63 MÖLNDAL. Telephone: +46 31 27 60 70. Email: ulf-lennart [at] rocketmail [dot] com Website: http://www.naturligt-byggeri.org/ Werner Schmidt, Fabrikareal Nr 119, 7180 Trun. Email: atelier_schmidt [at] bluewin [dot] ch Turkey (Türkiye Cumhuriyeti) Contact: Demet Irkli Eryldiz. Email: irkli [at] mmf [dot] gazi [dot] edu [dot] tr [This email is not active and requires updating] United Kingdom (Wales, Ireland, Scotland and England) Contact: Straw Bale Building Association (WISE), Hollinroyd Farm, Butts Lane, Todmorden, OL14 8RJ. Telephone: +44 1442 825421 or +44 1706 814696. Email: [email protected] Website: http://www.strawbalebuildingassociation.org.uk/ South PacificEdit Contact: Andrew Webb, Australasian Straw Bale Building Association. Website: http://www.ausbale.org/. Telephone: +61 7 54852720. Email: [email protected] Contact: Ian Redfern. Email: [email protected] Contact: Graeme North. Email: [email protected] Email ListsEdit Strawbale Construction Discussion List This discussion email list is hosted by the US based Renewable Energy Policy Project (REPP). list webpage list archive pre 2003 archive Straw Bale Social Club When the CREST hosted list disappeared (it is now hosted by REPP) this list was set up to fill the gap. Their welcome message describes it as "a discussion forum ostensibly created to share experiences and thoughts about building with straw and earth, but it's really just a place where the strawbale community can talk about common interests, often not having to do with building at all." Mailing list website and archive Practical Straw Bale Construction Focused on the how-to of straw bale construction and associated systems and technologies in Ontario, Canada and around the world. Mailing list website and archive European strawbale building discussions "Discussions, announcements, enquiries, etc., about building with strawbales in Europe. Straw is generally a waste product, therefore building with strawbales has very low environmental impact, uses locally available materials, is cheap, and is a very fun sociable way to build!" list website list archive The Global Straw Building Network "A private forum restricted to organizations involved in the promotion of straw as a building material. (If you are a member of such a group which is not yet represented in GSBN, email [email protected] with SUBSCRIBE in the SUBJECT line. Your request will be forwarded to a moderator for consideration.)" public list archive Straw Bale Building RegistriesEdit International Straw Bale Building Registry Greenbuilder.com, The Last Straw Journal, The Straw Bale Association of Texas, and the Development Center for Appropriate Technology, along with a number of regional strawbale organizations, are working together to build a database of buildings constructed using straw bale. [Visit the registry] United Kingdom Database of Strawbale Buildings in the U.K. 
Maintained by Chug, email: [email protected] International Registry using Google Maps Contributions from self-builders, architects and organisations around the world with clickable icons linking to the owner's websites [Visit the registry] Resources on the internetEdit Wikipedia, the free encyclopediaEdit Voluntary simplicity Wikibooks, the open-content textbooks collectionEdit Vaulted Straw Bale workshops, tours, instruction booklets Straw Bale construction story Straw Bale construction pictures and commentary Straw Bale Buildings around the World Location of notable strawbale buildings. Contributions to [email protected] Straw Bale Construction Certification, Training, Building codes, etc. Design Forward Straw Bale Design The Last Straw, the international quarterly journal of straw bale and natural building Surfin' StrawBale, a compendium of straw bale construction links The Canelo Project Building With Awareness, a how-to DVD video showing the construction of a straw bale house from start to finish 50 Straw Bale House Plans Department of Energy: Insulation fact sheet Canada Mortgage and Housing Corporation: Energy use in straw bale houses Amazon Nails Straw Bale Building (UK) The Pangea Partnership - Straw bale workshops in the developing world LA CONSTRUCTION EN BOTTES DE PAILLE Straw bale design and construction in California The Australasian Straw Bale Building Association (AUSBALE) naturalhomes.org straw bale learning calendar, links and owner-built natural homes Conscious Construction Building homes using only straw bales in the wall structures Pictures and commentary on straw bale extension to a brick build BibliographyEdit This is a bibliography of texts referred to in this book, there is also a section on Technical Studies, Reports and Tests. Some of these texts are available on the internet. City of Cortex, City of Cortex Straw Bale Code Colorado, USA Development Center for Appropriate Technology, 1993, Summary of New Mexico ASTM E-119 Small Scale Fire Tests On Straw Bale Wall Assemblies SHB Agra Engineering and Environmental Services Laboratory, New Mexico, USA Hayes, Katrina 2000 ASTM E84-98 Surface Burning Characteristics report Omega Point Laboratories, USA Jones, Barbara 2001 Information guide to straw bale building Amazon Nails, UK Olivier, Paul A., The Rice Hull House around 2004 Stone, Nehemiah 2003, Thermal Performance of Straw Bale Wall Systems Ecological Building Network, USA Theis, Bob, 2003 Straw Bale Fire Safety - A review of testing and experience to date Ecological Building Network, USA Wanek, Catherine, 2003 The New Strawbale Home Gibbs Smith Publisher, Layton, Utah, USA http://www.gibbs-smith.com/ Glossary of TermsEdit Bale Needle A pointed metal rod or plate with a handle at one end and a hole at the other used to push twine through the bales and stitch them from one side to the other, holding mesh tightly to each surface. BTU or British Thermal Unit This is a unit for measuring energy which is now mostly replaced by the joule. One BTU is the amount of heat required to raise the temperature of one pound avoirdupois of water by one degree Fahrenheit. One BTU is approximately 1054–1060 joules. Cold bridge If a structure is made a various materials some of which insulate more than others, any part of the structure which is a potential path warmth can use to escape is a cold bridge. A common example is a well insulated house with solid aluminium windows which transfer large amounts of heat throughout the structure. 
The land, air and water that a city or nation needs to produce all of its resources and to dispose of all its waste. It is a way to determine if the lifestyle of a community is sustainable. It shows if a city or nation is utilizing more or less than its fair sustainable share of the world's resources. The total energy used to bring a product or material to its present phase in its life cycle. It includes the energy required to extract or produce raw materials, their transport to the place of production, and the energy used for manufacturing. It can also include the energy used in the distribution and retail chain, for maintenance processes, for repair, etc. It is measured in MJ per kg or GJ per tonne. There is a list of the embodied energy of various material on the Embodied energy page of Australians Governments Your Home Design Guide. End-of-Life (EoL) The moment when a product ceases to fulfil the tasks it was designed for. The end-of-life of a product is not the end of its life cycle, since its environmental impact has not yet come to an end; the disassembly, recycling, incineration, and/or disposal phases still remain. A metal cage full of some hard material, typically stones. Often used for retaining wall especially on river sides. Can be successfully used as part of a building foundation. Heating, Ventilation and Air Conditioning. Straw bales used between the vertical elements of a structure to form non-bearing walls and act as insulation. Often an option where building regulations are otherwise too restrictive. Life Cycle Analysis or Life Cycle Assessment (LCA) A calculation of the environmental impact of a product over its complete life cycle. It starts with an inventory of the 'input' (all resources and energy consumption) and 'output' (emissions, solid waste, waste water). The elements in this inventory are grouped into environmental categories, which are quantified according to their environmental impact. The goal is to compare different design strategies within a category. Load-bearing A load bearing wall is one where all or most of the weight of the building is taken by the straw bale walls. The walls are 'bearing' the 'load'. Modified Post and Beam See Post and Beam. Modified simply means that the dimensions are modified to suit the dimensions of your straw bales. A construction using vertical elements (posts) and horizontal elements (beams) to form a structural framework. The term often refers to using a smaller number of larger than normal timbers compared to conventional 'baloon' timber framing. R value Standard insulation value which measures the Resistance in a material to the passage of heat. An R-value is the inverse of a U-value which measures the conductance in a material of heat. Rubble trench foundation A trench dug into the ground and filled with a rigid material such as demolition rubble, gravel or sea shells. Important functions of such a wall are: it cannot be further compressed, and moisture will not rise through the rubble into the wall. Precompression When stress it put into your structure some elements will be compressed. Pre-compression adds this stress to the building before it is finished to stop the structure suddenly settling, or the plaster suddenly cracking, once finished. Stem walls A stemwall is the part of the foundation between the floor level and ground level, and may rest on and be attached to a rubble trench or whatever else is in the ground. It can be made of concrete blocks or such or be concrete poured into forms. Think raised foundation. 
Subgrade The ground of the site and anything below the surface. Subgrade foundation insulations is therefore insulation below the finished ground level to insulate the foundation from the surrounding temperature. plate (normally wood) used to precompress bale wall, use as a roof connection, and help distribute roof weight. web-beam A beam made up of a large number of small elements, typically in a criss-cross pattern, which together perform as one large beam. These heating specific definitions should be edited and added into the rest alphabetically A.F.U.E.: Annual Fuel Utilization Efficiency represents the percentage of fuel that is converted into usable heating energy - the balance is vented through your chimney or other venting systems. It is an industry agreed upon standard. All furnaces and brands are tested the same way to provide "apples to apples" comparisons. Air Conditioner - a device used to decrease the temperature and humidity of air, which moves through it. Typical air conditioners include central air conditioning which utilizes existing forced air ductwork and "Ductless Splits". Anode Rod - a sacrificial metal used to protect against corrosion in a hot water heater. Baseboard Heating - heating elements located around the perimeter of a room, used to warm room air by transferring the heat from the hot water circulating through them. Blower – an air handling device used with a furnace to circulate air through a network of ducts. Boiler: A heating appliance that heats water to a pre-set temperature and feeds it to a circulator, which transfers the water to radiant heating units including some or all of cast iron radiators, slim baseboard radiators, under floor tubing or wall panels. Some boilers produce steam for heating purposes. B.T.U.: British Thermal Units are the standard efficiency comparison between heating fuels. One BTU is the amount of heating energy that will raise one pound of water one degree Fahrenheit. B.T.U./Hr.: British Thermal Units Per Hour. Used to express capacities of furnaces and boilers. Burner - a device which supplies a mixture of air and fuel to the combustion area. Cast Iron - a durable metal with an exceptional capability to hold and transfer heat. Chimney Liner: A clay-tile or metal liner that is inserted into a chimney. Chimney Venting - a vertical vent used to transfer products of combustion from a furnace or boiler to the outdoors. Combustion - the process of converting fuel into heat. This requires oxygen. Combustion Air: An air supply brought into the furnace's combustion chamber - supplied from within the basement, or from outdoors. Combustion air is necessary to burn fuel. Controls: Devices such as a thermostat that regulate a heating or cooling system. Convection: The transfer of heat through a moving gas (air) and a surface, or the transfer of heat from one point to another within a gas. In hydronic heating, cool air falls to the floor where it is heated by metal fins in a baseboard radiator and then rises to transfer heat to the environment through natural convection. Convective Heat: the natural circulation of air across a heat source to heat the air. Degree Days: A system by which heating oil dealers measure and record the daily temperature. This information is compared to what they know about your heating system to ensure automatic delivery before your system uses all of the oil in the storage tank. 
Direct Venting: A process in which the products of combustion are vented to the outdoors via sidewall venting, (without the use of a chimney). Direct Vent - a furnace or boiler design where all the air for combustion is taken from the outdoors and all exhaust products are released to the outdoors, also known as sealed combustion. Direct Vent is also known as balanced flue venting in oil furnaces and boilers. Distribution System: The component of a heating or cooling system that delivers warmed or cooled air, or warmed water, to the living space. Draft Hood - a device that prevents a backdraft from entering the heating unit or excessive chimney draw from affecting the operation of the boiler or furnace. Ductless Split A/C System - A system that cools and dehumidifies air without the use of conventional duct work. The equipment location is split, with the condenser and heat pump outside of the home and the air handler and controls inside. Efficiency Rating - the ratio of heat actually generated versus the amount of heat Theoretically possible from the amount of fuel inputted. Flame Retention Burner: A modern oil burner which retains the flame near the mouth of the burner, for improved efficiency and operational savings. Flue: An enclosed passage that is designed to convey hot flue gases. (Also known as a breech). Flue Gases: The gases (eg. carbon dioxide, water vapour and nitrogen) that are formed when the fuel oil, natural gas, or propane is burned with the air. (Products of combustion are technically all of the flue gases less the nitrogen that was present before combustion). Forced Air: A distribution system in which a fan circulates air from the heating or cooling unit to the rooms through a network of supply air and return air ducts. Furnace: A heating appliance that warms air around a heat exchanger. The air is conveyed by fan, into a central duct system to distribute warm air to all areas of the home or building. Heat Exchanger: A structure that transfers heat from the hot combustion gases inside the furnace heat exchanger to the circulating room air flowing across the exterior of the heat exchanger. Heat Loss: Term used for all areas of your home where heated air may escape due to construction styles, age of house, windows, weather-stripping, etc. All homes will experience some level of heat loss. Heat Loss Calculation: This is the means by which a heating contractor will determine the required capacity of a furnace or boiler to adequately heat the home (or building). Heat Recovery Ventilator (HRV): A device used in central ventilation systems to reduce the amount of heat that is lost as household air is replaced with outside air. As fresh air enters the house, it is warmed as it passes through a heat exchanger, heated by the warm outgoing air stream. Heat Transfer: the transmission of heat from the source (flame) to air or water. Heating Capacity: the amount of usable heat produced by a heating unit. High-boy: a term used to describe a furnace which has a small "footprint" but is tall. The blower is under the heat exchanger. This is also known as an upflow furnace. Hot Water Boiler: a heating unit that uses water circulated throughout the home in a system of baseboard heating units, radiators, and/or in-floor radiant tubing. Hot Water Heater: a unit with its own energy source that generates and stores hot water. 
Hydronics: Hydronics, or heating with water, consists of a compact boiler (fired by any fuel) that heats water, which is distributed to a network of slim baseboard, panel or space radiators, or under floor tubing by a circulator. This term also applies to the science of heating (or cooling) with water. Indirect Hot Water Storage Tank: a unit that works in conjunction with a boiler to generate and store domestic hot water, it does not require its own energy source. In-floor Radiant Tubing: tubing, typically plastic or rubber, used in conjunction with heated boiler water to heat floors. Low-boy: a term used to describe a furnace which has a low profile. The blower is located on the same level plane as the heat exchanger. This furnace style has both the return air plenum and the supply air plenum on the top of the furnace. This furnace style is sometimes called a console-style furnace. Low Water Cut-off: a device used to shut down a boiler in the event that a low water condition exists. This is required whenever radiators are located at a lower level than the boiler. Some jurisdictions require them on all boiler installations. Natural Gas: any gas found in the earth (e.g. methane gas) as opposed to gases which are manufactured. Nozzle: A burner component that atomizes, meters and patterns fuel oil into the heat exchanger / fire-pot. Oil Heating: the production of heat by burning oil. Propane: a manufactured gas typically used for cooking or heating. This is also known as L.P. gas. (liquid petroleum). Push Nipples: machined metal sleeves used to join adjacent sections of a boiler. Radiant Floor Heating: Under floor heat is provided by flexible, long-lasting tubing. The continuous tubing can be placed under any flooring, and circulated hot water provides invisible heat anywhere in the home, swimming pool or driveway. Radiant Heating: the method of heating the walls, floors or ceilings in order to transfer heat to the occupants of a room. Radiator: a heating element, typically metal, used in conjunction with water or steam to give off heat. Retrofit: Replacement of one or more components of an existing system. Safety Shut-off Device: any device used to shut down a heating appliance in the event an unsafe condition exists. Seasonal Efficiency: A performance rating that considers the heat actually delivered to the living space, the total energy available in the fuel consumed, and the impact the equipment itself has on the total heating load through an entire heating season. S.E.E.R.: Seasonal Energy Efficiency Rating. The standards by which equipment is measured. The higher the S.E.E.R., the more efficient the equipment (especially air conditioning). Sealed Combustion: a furnace or boiler design where all the air for combustion is taken from the outside atmosphere and all exhaust products are released to the outside atmosphere, also known as direct vent. Steam Boiler: a heating unit designed to heat by boiling water, producing steam, and circulating it to radiators or steam baseboard units throughout the home. Stack Damper: a device installed in the venting system that will automatically close when the appliance shuts down. This device is used to reduce the amount of warm indoor air being drawn up the chimney between heating cycles. Supply Tapping: opening in a boiler by which hot water enters the heating system. Setback Thermostat: A programmable thermostat with a built-in timer. You can adjust it to vary household temperature automatically. 
Tankless Heater: a copper coil submerged into the heated boiler water used to transfer heat to domestic water. Venting: An opening for combustion gases to exit the house. Can be a chimney or a vent through the wall of the house. Includes all parts of the venting system - vent connector, chimney, etc. Zone Control: A heating control system in which the space to be heated is divided into zones and each zone is controlled by a separate thermostat.
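To show how several of the terms above fit together (R value, the U-value as its inverse, B.T.U./Hr., and the Heat Loss Calculation), here is a minimal illustrative sketch of a steady-state conduction estimate. It is not taken from any of the studies referenced in this book; the envelope areas and R-values are placeholder assumptions, not recommended design values.

# Minimal sketch: simplified steady-state heat-loss estimate.
# All areas and R-values below are illustrative placeholders.

def u_value(r_value):
    """U-value (W/m^2.K) is the inverse of the R-value (m^2.K/W)."""
    return 1.0 / r_value

def conduction_loss(elements, delta_t):
    """Sum U*A*dT over envelope elements; ignores air leakage and solar gains."""
    return sum(u_value(r) * area for area, r in elements) * delta_t

# (area in m^2, R-value in m^2.K/W) -- placeholder numbers
envelope = [
    (120.0, 7.0),   # straw bale walls (high R-value)
    (80.0,  6.0),   # insulated roof
    (12.0,  0.5),   # windows (low R-value, a potential cold bridge)
]

watts = conduction_loss(envelope, delta_t=25.0)   # indoor-outdoor difference in K
print(f"Approximate conduction loss: {watts:.0f} W "
      f"({watts * 3.412:.0f} BTU/hr)")            # 1 W = 3.412 BTU/hr

A real heat loss calculation would also account for air leakage, ventilation, ground losses and solar gains.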
Perfect grating-Mie-metamaterial based spectrally selective solar absorbers
Yanpei Tian, Xiaojie Liu, Fangqi Chen, and Yi Zheng*
Department of Mechanical and Industrial Engineering, Northeastern University, Boston, MA 02115, USA
*Corresponding author: [email protected]
OSA Continuum 2, 3223-3239 (2019), https://doi.org/10.1364/OSAC.2.003223
Spectrally selective solar absorbers are widely employed in solar thermal energy systems. This work theoretically investigates the thermal radiative properties of metamaterials consisting of 1-D and 2-D grating-Mie-metamaterials (tungsten nanoparticles embedded in alumina) on top of multilayered refractory materials (tungsten-silicon nitride-tungsten) as promising selective solar absorbers. The proposed metamaterials show high absorptance from the ultraviolet to the near-infrared, while exhibiting low emittance in the mid-infrared regime owing to Mie resonances, surface plasmon polaritons, and metal-dielectric-metal resonance. The designed metamaterial solar absorbers are angularly independent up to 75° and polarization insensitive. The total absorptances of the 1-D and 2-D grating-Mie-metamaterials are 90.59% and 94.11%, respectively, while the total emittances are 2.89% and 3.2%, respectively. The photon-to-heat conversion efficiency is theoretically investigated under various operational temperatures and concentration factors. The thermal performance of the grating-Mie-metamaterials is greatly enhanced within a one-day cycle, and the stagnation temperature under different concentration factors demonstrates their potential feasibility in mid- and high-temperature solar thermal engineering.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Solar energy, as the most abundant and clean renewable energy resource, is one of the most promising pathways to solve the energy crisis of the past decades while reducing the environmental impact of fossil fuels. Solar thermal technology converts incident photons into heat, which can be used for domestic or industrial heating and air conditioning systems [1–4]. It can also be employed for electricity generation through concentrated solar power (CSP) via the Rankine cycle [5], solar thermoelectric generators (STEGs) [2], and solar thermophotovoltaics (STPVs) [6–8]. In the abovementioned approaches, the solar absorber, the component that converts incident sunlight into thermal energy, substantially affects the overall performance of solar thermal systems, since solar absorbers re-emit thermal energy when heated [9].
Therefore, it is significant to develop an ideal spectrally selective solar absorber that enhances the photon-to-heat conversion efficiency. An ideal selective solar absorber should exhibit an unity absorptance over the UV, visible, and near-infrared regimes, where solar radiation energy distributes, to convert most radiation into heat, accompanying with zero emittance in the mid-infrared region to minimize thermal leakage from spontaneous blackbody radiation. This spectral selectivity is highly demanded for further improvement of conversion efficiency for solar thermal systems. In addition, a selective solar absorber is required to be omnidirectional diffuse and polarization insensitive, since the solar radiation is randomly distributed and angular time-dependent [10]. Furthermore, the optimized cut-off wavelength, at which it absorptance spectrum changes steeply, of a selective absorber, shifts according to its operational temperature because of the shifting nature of blackbody radiation as stated by Wien's displacement law [11]. Consequently, it is worthwhile to design a perfectly selective solar absorber according to the operational temperature and concentration factor (CF) to maximize the absorption of solar energy. Selective absorptance or emittance exists in natural materials, such as black carbon paint, black chrome [12–14], and Pyromark [15], however, their high emittance in mid-infrared regime as well as limited tunability of their cut-off wavelengths confine their practicability in various solar thermal applications. Recently, metamaterial absorbers have attracted great attention because of their high selective absorption due to exciting plasmonic resonance at particular wavelengths inside the structures [16,17], which provides alternative methods to modify the wavelength selective properties for sunlight trapping. Advances in micro/nanofabrications have promoted enormous metamaterials based on sub-wavelength metallic patterns on a metal film separated by a dielectric spacer and make the manipulation of sunlight absorption more feasibly. Excitation of plasmonic resonance at different wavelengths can be achieved by one dimensional (1-D) or two dimensional (2-D) surface gratings [18–26]. Liu et al. experimentally demonstrated the absorption of 97% at the wavelength of 6 $\mu$m in a sub-wavelength perfect absorber consisting of a film-coupled crossbar structure [27]. An absorption peak of 88% at the wavelength of 1.58 $\mu$m is obtained in a plasmonic absorber made of a layer of 200 nm gold patch array on a thin Al$_2$O$_3$ layer and a gold film [28]. By depositing a 2-D Ag grating with a period of 300 nm on a 60 nm SiO$_2$ and a Ag film, Aydin et al. reveal an ultra-thin plasmonic absorber in the visible spectrum [29]. Strong visible light absorption has also been achieved by film-coupled colloidal nanoantennas [30], circular plasmonic resonators [31] by exciting magnetic resonance inside the metamaterial absorbers. Mie-resonance metamaterials are another class of artificial materials which utilize Mie-resonance of inclusions for the shaping of absorptance spectra. Ghanekar et al. theoretically and experimentally demonstrated a selective absorber/emitter that consists of nanoparticles embedded thin film to shape the thermal emission spectrum due to Mie-resonance [7,8,32,33]. Dai et al. 
systematically investigated a type of plasmonic light absorber based on a monolayer of gold nano-spheres with less than 30 nm in diameters deposited on top of a gold substrate and obtained an absorptance of approximately 90% around 810 nm [34]. However, it is still a challenge to design a perfect broadband solar absorber that is angular, polarization-independent with unity absorptance from UV to near-infrared lights and zero emittance in the mid-infrared region. In addition, literature is rare pertaining to nanoparticles embedded surface gratings based broadband selective absorbers. In present study, we theoretically propose metamaterial structures made of 1-D and 2-D tungsten (W) nanoparticles embedded alumina (Al$_2$O$_3$) surface gratings on top of W-silicon nitride (Si$_3$N$_4$)-W multilayer stacks as perfect selective solar absorbers. The thermal radiative properties of proposed metamaterials are illustrated in a broad region from UV to mid-infrared regime. The influence of geometric parameters, such as the volume fraction of W nanoparticles and grating configurations, on the spectral selectivity of grating-Mie-metamaterials are studied. The effects of oblique incidence and polarization states on the spectral absorptance are explored and it is proved the proposed structures are angular independence of up to 75$^{\circ }$ and polarization insensitive for transverse electric (TE) and transverse magnetic (TM) polarizations. The photon-to-heat conversion efficiency of proposed structures is theoretically investigated under various operational temperature and concentration factors. Thermal performance of grating-Mie-metamaterials is greatly enhanced within a one-day cycle and the stagnation temperature under different concentration factors is simulated to manifest the potential practicability in mid and high -temperature solar thermal engineering. 2. Theoretical fundamentals 2.1 Photon-to-heat conversion efficiency analysis of the solar absorber A schematic of a typical solar thermal engineering system is shown in Fig. 1A. In such solar thermal to electrical techniques, the sunlight incident on the absorber (Q$_{abs}$) is converted into a heat flux (Q$_{h}$) and delivered to the thermal system, where it generates desired work (heating, cooling, electricity, etc.). Waste heat (Q$_{c}$) is transported to a heat sink. The spontaneous blackbody thermal radiation (Q$_{re-emit}$) leaks at the surface as radiative (Q$_{rad}$) and convective (Q$_{conv}$) thermal losses. Therefore, the solar absorber must be dark to sun and be ideally reflective to the mid-infrared light so as to depress the thermal leakage by convection and radiation. Fig. 1. (A) A typical solar thermal energy conversion system. (B) Solar spectral irradiance (AM1.5, global tilt), radiative heat flux of blackbody thermal radiation at 200 $^{\circ }C$ and 500 $^{\circ }C$, and reflectivity spectrum of ideal selective solar absorber and black surface. To quantitatively evaluate the performance of solar absorbers, the photon-to-heat conversion efficiency, $\eta _{abs}$, is given by: (1)$$\eta_{\mathrm{abs}}=\alpha_{\mathrm{abs}}-\epsilon_{\mathrm{abs}} \frac{\sigma\left(T_{\mathrm{abs}}^{4}-T_{\mathrm{amb}}^{4}\right)}{C F \cdot Q_{\mathrm{abs}}}$$ where CF is the concentration factors, and Q$_{abs}$ is the solar radiative heat flux at AM 1.5 (global tilt) [35]. $\sigma$ is the Stefan-Boltzmann constant. $T_{abs}$ and $T_{amb}$ are the operational temperature of solar absorber and the environment, respectively. 
The solar absorptance, $\alpha _{abs}$, is expressed as follows: (2)$$\alpha_{abs}=\frac{\int_{0.3 \mu m}^{4.0 \mu m} I_{sun}(\lambda,\theta,\phi) \alpha(\lambda,\theta, \phi) d \lambda}{\int_{0.3 \mu m}^{4.0 \mu m} I_{sun}(\lambda,\theta, \phi) d \lambda}=\frac{\int_{0.3 \mu m}^{4.0 \mu m} I_{sun}(\lambda,\theta, \phi)[1-R(\lambda,\theta, \phi)] d \lambda}{\int_{0.3 \mu m}^{4.0 \mu m} I_{sun}(\lambda,\theta, \phi) d \lambda}$$ where $\lambda$ is the wavelength of solar radiation, $\phi$ is the azimuthal angle, and $\theta$ is the polar angle. $\alpha (\lambda ,\theta , \phi )$ and $R(\lambda ,\theta , \phi )$ are the spectral directional absorptance and reflectance at a certain operational temperature. $I_{sun}$ is the incident solar intensity at AM 1.5 (global tilt) [35]. The numerator of this equation is the total absorbed solar energy, and the denominator is the incident solar heat flux, $Q_{abs}$. Since the available data of AM 1.5 are confined from 0.3 $\mu$m to 4.0 $\mu$m [35], which includes 95% of the solar radiation, the integration interval is limited from 0.3 $\mu$m to 4.0 $\mu$m. The total thermal emittance, $\epsilon _{abs}$, is given by: (3)$$\epsilon_{\textrm{abs}}=\frac{\int_{2.5 \mu m}^{20 \mu m} I_{bb}(\lambda,\theta, \phi) \epsilon(\lambda,\theta, \phi) d \lambda}{\int_{2.5 \mu m}^{20 \mu m} I_{bb}(\lambda,\theta, \phi) d \lambda}=\frac{\int_{2.5 \mu m}^{20 \mu m} I_{bb}(\lambda,\theta, \phi)[1-R(\lambda,\theta, \phi)] d \lambda}{\int_{2.5 \mu m}^{20 \mu m} I_{bb}(\lambda,\theta, \phi) d \lambda}$$ where $I_{bb}(\lambda ,\theta , \phi )$ is the blackbody radiation intensity given by Planck's law. $\epsilon (\lambda ,\theta ,\phi )$ is the spectral directional emittance at a certain operational temperature. Since our proposed structures are opaque within the wavelengths of interest (0.3 $\mu$m to 20 $\mu$m), we take $\alpha (\lambda ,\theta , \phi )$ = $\epsilon (\lambda ,\theta , \phi )$ = 1 – $R(\lambda ,\theta , \phi )$ according to Kirchhoff's law of thermal radiation. It can be seen from Fig. 1B that most of the solar radiation is distributed over the UV, visible, and near-infrared region, while most of the blackbody thermal radiation lies within the mid-infrared regime according to Wien's displacement law. The spectral radiant emittance of a 500 $^{\circ }C$ blackbody, a temperature at which most mid-temperature solar applications operate, is roughly double the peak spectral irradiance intensity of AM 1.5. Consequently, it is necessary to design spectrally selective solar absorbers. An ideal spectrally selective absorber has unity absorptance over the solar radiation regime and zero emittance within the mid-infrared thermal region, along with a very sharp transition between these two regions (as expressed by the dashed black line in Fig. 1B). The cut-off wavelength, $\lambda _{cut-off}$, at which the transition happens, should be where the total blackbody intensity begins to surpass the total solar radiation intensity. The $\lambda _{cut-off}$ shifts to shorter wavelengths (a blue-shift) when the operational temperature goes up, as the peak of the blackbody thermal radiation also moves to shorter wavelengths. The $\lambda _{cut-off}$ shifts to longer wavelengths (a red-shift) when the absorber is under higher concentration factors, since the solar energy then accounts for a larger proportion of the conversion efficiency. Note that the optical properties of solar absorbers depend on the operational temperature, and so does the optimal cut-off wavelength.
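Before discussing how the cut-off wavelength shifts, it is useful to note that Eqs. (1)–(3) reduce to simple numerical integrations once a spectral reflectivity R(λ) is available (measured or simulated). The following is a minimal sketch, assuming the user supplies the AM 1.5 data and the computed reflectivity of the structure; the step-like reflectivity and the 5778 K blackbody used here are placeholders, not the spectra of the proposed absorbers.

import numpy as np

H, C, KB, SIGMA = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8

def integrate(y, x):
    # trapezoidal rule, written out to avoid numpy version differences
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

def planck(lam, T):
    # blackbody spectral radiant emittance, W m^-3 (lam in m)
    return 2*np.pi*H*C**2/lam**5/(np.exp(H*C/(lam*KB*T)) - 1.0)

def alpha_total(lam, R, I_sun):
    # Eq. (2): solar-weighted absorptance over the supplied wavelength grid
    return integrate(I_sun*(1.0 - R), lam)/integrate(I_sun, lam)

def eps_total(lam, R, T_abs):
    # Eq. (3): blackbody-weighted emittance at the operating temperature
    I_bb = planck(lam, T_abs)
    return integrate(I_bb*(1.0 - R), lam)/integrate(I_bb, lam)

def efficiency(alpha, eps, T_abs, T_amb, CF, Q_abs=1000.0):
    # Eq. (1); Q_abs ~ 1000 W/m^2 as a stand-in for the integrated AM 1.5 flux
    return alpha - eps*SIGMA*(T_abs**4 - T_amb**4)/(CF*Q_abs)

# placeholder spectra: idealized step reflectivity, blackbody-shaped "sun"
lam = np.linspace(0.3e-6, 20e-6, 4000)
R = np.where(lam < 1.7e-6, 0.05, 0.97)
I_sun = planck(lam, 5778.0)
sol = lam < 4e-6                      # solar band used in Eq. (2)
a = alpha_total(lam[sol], R[sol], I_sun[sol])
e = eps_total(lam, R, T_abs=773.0)    # absorber at 500 C
print(a, e, efficiency(a, e, 773.0, 293.0, CF=100))

With tabulated AM 1.5 data in place of the blackbody stand-in, and the computed reflectivity of the structures, the same three functions give the quantities entering Eq. (1).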
It will shift a little according to various temperatures at difference thermal loads of diverse working conditions, so the photon-to-heat efficiency of solar absorbers varies as a function of $\lambda _{cut-off}$, CF, and $T_{abs}$. Consequently, it is a trade off to design a perfectly selective solar absorber for diverse solar thermal engineering. 2.2 Theoretical method of radiative property calculations As our proposed structures involve 1-D grating structure of Al$_2$O$_3$, we employ the second order approximation of effective medium theory to obtain the dielectric properties given by the expressions [36–38]: (4a)$$\varepsilon_{TE,2}\!=\!\varepsilon_{TE,0}\!\left[\! 1\!+\!\frac{\pi^2}{3}\!\left(\frac{\Lambda}{\lambda}\right)^2\!\phi^2(1-\phi)^2\frac{(\!\varepsilon_{A}\!-\!\varepsilon_{B}\!)^2}{\varepsilon_{TE,0}}\right]$$ (4b)$$\!\varepsilon_{TM,2}\!=\!\varepsilon_{TM,0}\!\left[\!1\!+\!\frac{\!\pi^2}{\!3}\!\left(\!\frac{\!\Lambda}{\!\lambda}\! \right)^2\!\phi^2(\!1\!-\!\phi)^2 (\!\varepsilon_{\!A}\!-\!\varepsilon_{\!B}\!)^2\varepsilon_{TE,0}\!\left(\!\frac{\varepsilon_{TM,0}}{\varepsilon_{\!A}\varepsilon_{\!B}}\!\right)^2\right]$$ where $\varepsilon _{A}$ and $\varepsilon _{B}$ are dielectric functions of two media (Al$_2$O$_3$ and vacuum) in surface gratings, $\Lambda$ is grating period and filling ratio $\phi =w/\Lambda$ where $w$ is width of Al$_2$O$_3$ segment. The expressions for zeroth order effective dielectric functions $\varepsilon _{TE,0}$ and $\varepsilon _{TM,0}$ are given by [36,39]: (5a)$$\varepsilon_{TE,0}\!=\phi\varepsilon_{A}+(1-\phi)\varepsilon_{B}$$ (5b)$$\varepsilon_{TM,0}\!=\left(\frac{\phi}{\varepsilon_{A}}+\frac{1-\phi}{\varepsilon_{B}}\right)^{{-}1}$$ For symmetric 2-D grating consisting two media having dielectric functions $\varepsilon _A$ and $\varepsilon _B$ (refractive indices $n_A$ and $n_B$, respectively) and filling ratio $f$ of medium B in medium A, the effective refractive index of the medium is given by [40]: (6)$$n_{2-D}=[\bar{n}+2\hat{n}_{2-D}+2\check{n}_{2-D}]/5$$ where $\bar {n}$, $\hat {n}_{2-D}$ and $\check {n}_{2-D}$ can be obtained using (7)$$\bar{n}=(1-f^2)n_A+f^2n_B$$ (8a)$$\hat{\varepsilon}_{2-D}=(1-f)\varepsilon_{A}+f\varepsilon_{\bot}$$ (8b)$$1/\check{\varepsilon}_{2-D}=(1-f)/\varepsilon_{A}+f/\varepsilon_{\|}$$ where $\varepsilon _{\bot }$ and $\varepsilon _{\|}$ are given by: (9a)$$\varepsilon_{\|}=(1-f)\varepsilon_{A}+f\varepsilon_{B}$$ (9b)$$1/\varepsilon_{\bot}=(1-f)/\varepsilon_{A}+f/\varepsilon_{B}$$ For 1-D and 2-D triangular gratings as shown in Fig. 2, gratings can be treated as a composition of multiple layers of rectangular gratings that each has decreasing filling ratio and period equal to that of parent grating [39]. Slicing the triangular structure into 100 layers is sufficient to achieve converging values of near-field radiative heat flux. As the effective medium approximation is valid for grating periods smaller than the wavelength of interest, $\Lambda / \lambda <1 /\left (n +\sin \theta \right )$, where $\theta$ is the incident angle, $n$ is the refractive index of layer underneath the surface grating structure [38], so we limit ourselves the period of $\Lambda =$ 100 nm and $\Lambda =$ 200 nm for 1-D and 2-D gratings, respectively, and keep this parameter fixed during optimization. Fig. 2. Schematics of 1-D and 2-D grating-Mie-metamaterial based solar absorbers. 
(A) 1-D triangular Al$_2$O$_3$ surface gratings of height, $h$ = 150 nm, period, $\Lambda$ = 100 nm, on top of W-Si$_3$N$_4$-W stacks with the thickness of $t_1$ = 12 nm, $t_2$ = 35 nm, and $t_3$ = 500 nm, respectively. The Al$_2$O$_3$ triangular grating is doped with 5 nm in radius W nanoparticles with a volume fraction, $f$, of 25%. (B) 2-D pyramid encapsulated with W nanoparticles ($r$ = 5 nm in radius with a volume fraction, $f$, of 25%) sits on stockpiles of W-Al$_2$O$_3$-W. The thickness of W, Al$_2$O$_3$, and W is 10 nm, 40 nm, and 500 nm, respectively. The height of the surface grating layer is 200 nm and the period $\Lambda$ = 200 nm in both $x$ and $y$ direction. In order to calculate the effective dielectric function the Mie-metamaterial, we apply the Clausius-Mossotti equation [41,42]: (10)$$\varepsilon_{eff}=\varepsilon_{m}\left(\frac{r^{3}+2\alpha_{r} f}{r^{3}-\alpha_r f}\right)$$ where $\varepsilon _{m}$ is the dielectric function of the matrix, $\alpha _r$ is the electric dipole polarizability, $r$ and $f$ are the radius and volume fraction of nanoparticles respectively. To consider the size effects of nanoparticle inclusions, the Maxwell Garnett formula is employed with the expression for electric dipole polarizability using Mie theory [43], $\alpha _r=3jc^{3}a_{1,r}/2\omega ^{3}\varepsilon _{m}^{3/2}$, where $a_{1,r}$ is the first electric Mie coefficient given by: (11)$$a_{1,r}\!=\!\frac{\sqrt{\varepsilon_{np}}\psi_{1}(x_{np})\psi_{1}^{'}(x_{m})\!-\!\sqrt{\varepsilon_{m}}\psi_{1}(x_{m})\psi_{1}^{'}(x_{np}) }{\sqrt{\varepsilon_{np}}\psi_{1}(x_{np})\xi_{1}^{'}(x_{m})\!-\!\sqrt{\varepsilon_{m}}\xi_{1}(x_{m})\psi_{1}^{'}(x_{np})}$$ where $\psi _{1}$ and $\xi _{1}$ are Riccati-Bessel functions of the first order given by $\psi _{1}(x)=xj_{1}(x)$ and $\xi _{1}(x)=xh_{1}^{(1)}(x)$ where $j_{1}$ and $h_{1}^{(1)}$ are first order spherical Bessel functions and spherical Hankel functions of the first kind, respectively. Here, '$'$' indicates the first derivative. $x_{m}=\omega r\sqrt {\varepsilon _{m}}/c$ and $x_{np}=\omega r\sqrt {\varepsilon _{np}}/c$ are the size parameters of the matrix and the nanoparticles, respectively; $\varepsilon _{np}$ is the dielectric function of nanoparticles. It is worth to mention that Maxwell-Garnett-Mie theory is trustworthy when the average distance between inclusions is much smaller than the wavelength of interest [44]. Additionally, nanoparticle diameter (5 nm) considered in this work is much smaller than the thickness of thin films (100 nm and 200 nm). Thus, the effective medium theory holds true for the calculations presented here. The hemispherical emissivity of the thermal emitter can be expressed as [45] (12)$$\epsilon(\omega)=\frac{c^{2}}{\omega^{2}}\int_0^{\omega/c}dk_{\rho}k_{\rho}\sum_{\mu=s,p}(1-|\widetilde{R}_{h}^{(\mu)}|^{2}-|\widetilde{T}_{h}^{(\mu)}|^{2})$$ where c is the speed of light in vacuum, $\omega$ is the angular frequency and $k_{\rho }$ is the magnitude of inplane wave vector. $\widetilde {R}_{h}^{(\mu )}$ and $\widetilde {T}_{h}^{(\mu )}$ are the polarization dependent effective reflection and transmission coefficients which can be calculated using the recursive relations of Fresnel coefficients of each interface [46]. The dielectric functions can be related to real $(n)$ and imaginary $(\kappa )$ parts of refractive index as $\sqrt {\varepsilon }=n+j\kappa$. Dielectric functions of the materials (Al$_2$O$_3$, W and Si$_3$N$_4$) utilized in this work are taken from literature [47–49]. 
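As a concrete illustration of Eqs. (10)–(11), the short sketch below evaluates the Maxwell-Garnett-Mie effective permittivity. The first-order Riccati-Bessel functions are written in closed form, and their derivatives use the standard recurrence psi_1'(x) = psi_0(x) - psi_1(x)/x (and likewise for xi_1). The complex dielectric constants passed in are illustrative numbers only, not the tabulated data of W and Al$_2$O$_3$ from Refs. [47–49].

import numpy as np

C0 = 2.998e8  # speed of light, m/s

def psi1(x):                 # Riccati-Bessel psi_1(x) = x j_1(x)
    return np.sin(x)/x - np.cos(x)

def xi1(x):                  # Riccati-Bessel xi_1(x) = x h_1^(1)(x)
    return -np.exp(1j*x)*(1.0 + 1j/x)

def d_psi1(x):               # psi_1'(x) = psi_0(x) - psi_1(x)/x, psi_0(x) = sin(x)
    return np.sin(x) - psi1(x)/x

def d_xi1(x):                # xi_1'(x) = xi_0(x) - xi_1(x)/x, xi_0(x) = -i e^{ix}
    return -1j*np.exp(1j*x) - xi1(x)/x

def eps_effective(eps_m, eps_np, r, f, lam):
    """Maxwell-Garnett-Mie effective permittivity, Eqs. (10)-(11)."""
    w = 2*np.pi*C0/lam
    x_m, x_np = w*r*np.sqrt(eps_m)/C0, w*r*np.sqrt(eps_np)/C0
    a1 = ((np.sqrt(eps_np)*psi1(x_np)*d_psi1(x_m) - np.sqrt(eps_m)*psi1(x_m)*d_psi1(x_np)) /
          (np.sqrt(eps_np)*psi1(x_np)*d_xi1(x_m) - np.sqrt(eps_m)*xi1(x_m)*d_psi1(x_np)))
    alpha_r = 3j*C0**3*a1/(2*w**3*eps_m**1.5)      # electric dipole polarizability
    return eps_m*(r**3 + 2*alpha_r*f)/(r**3 - alpha_r*f)

# illustrative values only (5 nm metallic inclusions in an alumina-like host)
eps = eps_effective(eps_m=3.0+0j, eps_np=4.0+18j, r=5e-9, f=0.25, lam=600e-9)
print(eps, np.sqrt(eps))     # effective permittivity and refractive index

For inclusions much smaller than the wavelength the result reduces to the classical Maxwell Garnett mixing formula, which provides a quick sanity check on such an implementation.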
The melting temperatures of Al$_2$O$_3$, W and Si$_3$N$_4$ are 2027 $^{\circ }$C, 3422 $^{\circ }$C, and 1900 $^{\circ }$C, and their thermal expansion coefficients are 5.4 $\times$ 10$^{-6}$ m/(m$\cdot$K), 4.2 $\times$ 10$^{-6}$ m/(m$\cdot$K), and 3.3 $\times$ 10$^{-6}$ m/(m$\cdot$K), respectively. The melting temperatures of the selected materials are higher than the operating temperature of high-temperature solar thermal engineering applications (800 – 1000 $^{\circ }$C), and their thermal expansion coefficients are all of the same order of magnitude, so the structure tolerates elevated operating temperatures and avoids internal thermal stress between the different materials; room-temperature values of the dielectric functions are used for Al$_2$O$_3$, W and Si$_3$N$_4$. W nanoparticles are absorptive in the visible and near-infrared region (from 0.3 $\mu$m to 1.5 $\mu$m), and the inclusion of W nanoparticles enhances the spectral emissivity of interest, which has been investigated in the authors' previous work [50,51]. Other metal nanoparticles, such as nickel, platinum, rhenium, and tantalum, and dielectric materials like silica can also be selected as alternatives for Mie-metamaterials. The top layer of the solar absorbers considered in our design is either a 1-D or 2-D grating structure with nanoparticles embedded inside and can be approximated as a homogeneous layer using an effective dielectric property. 2.3 Thermal performance calculations of solar absorbers for a one-day cycle In order to demonstrate the thermal performance of the designed solar absorber under direct solar irradiation, we solve the thermal balance equation below to obtain its temperature variation over a one-day cycle [52,53]: (13)$$Q_{total} (T_{abs},T_{amb}) = Q_{sun}(T_{abs})+Q_{amb} (T_{amb})-Q_{re-emit}(T_{abs})$$ It is supposed that the backside of the solar absorber is thermally insulated (i.e., no thermal load is connected to the absorber), so we only consider the heat transfer between the solar absorber and the air on the upper hemisphere. Here, Q$_{sun}$ is the heat flux of solar radiation incident on the solar absorber, Q$_{amb}$ is the incident thermal radiation from the ambient, and Q$_{re-emit}$ stands for the heat flux re-emitted from the upper side of the solar absorber to the ambient. Q$_{total}$ is the net heating power of the solar absorber. The solar radiation absorbed by the absorber, $Q_{sun}(T_{abs})$, is given as follows: (14)$$Q_{sun}(T_{abs}) = A\cdot{CF}\int_0^\infty \textrm{d}\lambda I_{AM 1.5} (\lambda) \alpha (\lambda, \theta_{sun}, T_{abs})$$ Here, $A$ is the area of the solar absorber. $\alpha (\lambda , \theta _{sun}, T_{abs})$ is the temperature-, wavelength- and angle-dependent absorptance of the solar absorber; however, the absorptance of the designed solar absorber is angularly independent up to 80$^{\circ }$ and the materials of the proposed solar absorbers are refractory with low thermal expansion coefficients. Hence, it is reasonable to use the absorptance obtained with the room-temperature dielectric functions.
The absorbed power of incident thermal radiation from the atmosphere, $Q_{amb} (T_{amb})$, can be expressed as follows: (15)$$Q_{amb}(T_{amb}) = A\int_0^\infty \textrm{d}\lambda I_{BB} (T_{amb},\lambda) \alpha (\lambda, \theta, \phi, T_{abs}) \epsilon (\lambda, \theta, \phi)$$ $I_{BB} (T_{amb}, \lambda ) = 2hc^{2}\lambda^{-5}\left[\exp\left(hc/\lambda k_{B}T_{amb}\right)-1\right]^{-1}$ defines the spectral radiance of a blackbody at a certain temperature, where $h$ is Planck's constant and $k_{B}$ is the Boltzmann constant. $\alpha (\lambda , \theta , \phi , T_{abs})$ = $\frac {1}{\pi }\int _0^{2\pi }\textrm {d}\phi \int _0^{\pi /2} \varepsilon _{\lambda }\cos \theta \sin \theta \textrm {d}\theta$ is the temperature-dependent absorptance of the solar absorber [9]. $t (\lambda , \theta , \phi )$ is the transmittance of the atmosphere obtained from MODTRAN 4 [54]. The heat flux of thermal re-emission from the solar absorber upper surface is defined as follows: (16)$$Q_{re-emit}(T_{abs}) = A\int_0^\infty \textrm{d}\lambda I_{BB} (T_{abs},\lambda) \epsilon (\lambda, \theta, \phi, T_{abs})$$ where $\epsilon (\lambda , \theta , \phi , T_{abs})$ = $\alpha (\lambda , \theta , \phi , T_{abs})$ is the emissivity of the solar absorber according to Kirchhoff's law of thermal radiation [55]. The time-dependent temperature variation of the solar absorber can be obtained by solving the following equation: (17)$$C_{abs} \frac{dT}{dt} = Q_{total}(T_{abs}, T_{amb})$$ Since the multilayer structure of the sputtered solar absorber is only 350 nm thick, it is rational to neglect its thermal resistance. Therefore, the heat capacitance of the absorber, C$_{abs}$, considered here is taken equal to the heat capacitance of a 300 $\mu$m silicon wafer. The transient temperature of the solar absorber under 5 suns (CF = 5) is simulated by integrating Eq. 17 to obtain the temperature evolution of the solar absorber as a function of time. For each simulation, the initial temperature of the solar absorber is assumed to be the same as the ambient temperature. 3. Results and discussions 3.1 Design structure and effect of geometric parameters Schematics of the 1-D and 2-D grating-Mie-metamaterials are depicted in Fig. 2. They comprise 1-D (Fig. 2A) and 2-D (Fig. 2B) W-nanoparticle-embedded Al$_2$O$_3$ gratings on top of metal-dielectric-metal stacks. In Fig. 2A, the top surface grating layer is a 150 nm thick W-nanoparticle-embedded Al$_2$O$_3$ grating with a period $\Lambda$ = 100 nm on top of a W-Si$_3$N$_4$-W stack with thicknesses of $t_1$ = 12 nm, $t_2$ = 35 nm, and $t_3$ = 500 nm, respectively. For the 2-D surface grating structure, the period is $\Lambda$ = 200 nm in both the $x$ and $y$ directions, and the thickness of the top 2-D surface grating layer is 200 nm. The 2-D grating layer is Al$_2$O$_3$ encapsulated with W nanoparticles. The metal-dielectric-metal stack is W ($t_1$ = 10 nm), Al$_2$O$_3$ ($t_2$ = 40 nm), and W ($t_3$ = 500 nm). The volume fraction of W nanoparticles is set to 25%, with a radius of 5 nm, for both the 1-D and 2-D surface grating structures to achieve the best result. The bottom W layer in both cases is set to 500 nm to block all the incident light, so the proposed structures can be considered opaque and the reflectivity, $R$, of the designed solar absorbers equals 1 – $\epsilon$. From Eq. 7 to Eq. 12, the reflectivity spectra of the proposed structures can be obtained, as shown in Fig. 3A.
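Before turning to the computed spectra in Fig. 3, the one-day-cycle energy balance of Section 2.3 (Eqs. (13)–(17)) can be summarized in a short numerical sketch. It is a simplified illustration only, assuming a gray-band absorber described by the total absorptance and emittance of Table 1 rather than the full spectral integrals, an atmosphere treated as a blackbody at the ambient temperature (the MODTRAN transmittance weighting is omitted), and a lumped heat capacity equal to that of a 300 $\mu$m silicon wafer; it is not the simulation code used for the results below.

import numpy as np

SIGMA = 5.670e-8

def q_total(T_abs, T_amb, alpha_s, eps_t, CF, q_sun=1000.0):
    # simplified Eq. (13): absorbed solar + absorbed sky - re-emission, per m^2
    q_in   = CF*q_sun*alpha_s          # gray version of Eq. (14)
    q_sky  = eps_t*SIGMA*T_amb**4      # gray version of Eq. (15)
    q_emit = eps_t*SIGMA*T_abs**4      # gray version of Eq. (16)
    return q_in + q_sky - q_emit

def evolve(T0, T_amb, alpha_s, eps_t, CF, c_area, hours=10.0, dt=1.0):
    # explicit Euler integration of Eq. (17); c_area in J/(m^2 K), dt in s
    T, history = T0, []
    for _ in range(int(hours*3600/dt)):
        T += dt*q_total(T, T_amb, alpha_s, eps_t, CF)/c_area
        history.append(T)
    return np.array(history)

# lumped heat capacity of a 300 um silicon wafer (rho ~ 2330 kg/m^3, c ~ 700 J/kg.K)
c_area = 2330.0*700.0*300e-6
# total absorptance/emittance of the 2-D design (Table 1), gray approximation
T_hist = evolve(T0=293.0, T_amb=293.0, alpha_s=0.94, eps_t=0.032, CF=5, c_area=c_area)
print(f"equilibrium temperature of this toy model: {T_hist[-1]-273.15:.0f} C")

Setting Q_total = 0 instead of time-stepping gives the stagnation temperature of the same toy model directly.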
The normalized solar spectral intensity and the normalized thermal radiation of a 500 $^{\circ }C$ blackbody are also shown in Fig. 3A. It is observed that both the 1-D and 2-D grating structures have very low reflectivity (< 0.1) within the solar radiation regime (from 0.3 $\mu$m to 1.2 $\mu$m), while showing high reflectivity (> 0.95) within the mid-infrared region (from 4.0 $\mu$m to 20 $\mu$m). Table 1 lists the total absorptance and emittance of the proposed 1-D and 2-D grating based solar absorbers. The integration interval for the total absorptance is from 0.3 $\mu$m to 4.0 $\mu$m, which covers 99.9% of the solar radiation energy at AM 1.5, while the total emittance calculation is taken from 0.3 $\mu$m to 20 $\mu$m, over which 98.5% of the thermal radiation of a 500 $^{\circ }C$ blackbody and 93.2% of that of a 200 $^{\circ }C$ blackbody are spread. Since the designed absorber is highly diffuse, as will be shown in section 3.2, it is reasonable to take the total normal absorptance and emittance as approximations of the total hemispherical absorptance and emittance. The results show that the designed structure has a high absorptance (> 0.9) in the regime where solar radiation is distributed and a quite low emittance (< 0.06) in the mid-infrared wavelength region, which demonstrates the high selectivity of the proposed grating-Mie-metamaterials. Fig. 3. (A) Normalized spectral distribution of solar heat flux (AM 1.5) and normalized thermal radiation of a 500 $^{\circ }C$ blackbody, as well as the calculated reflectivity spectra of the proposed 1-D and 2-D surface grating-Mie-metamaterials. Normal reflectivity spectra as a function of the thickness of the 1-D (B) and 2-D (C) surface grating layer; $h_1$ and $h_2$ are the heights of the 1-D and 2-D triangular surface gratings, respectively. Table 1. Total normal absorptance and emittance of the designed 1-D and 2-D grating-Mie-metamaterials at $T_{abs}$ = 200 $^{\circ }C$ and 500 $^{\circ }C$. In order to obtain optimized geometric parameters for the final design, we investigate the effects of geometry on the proposed grating-Mie-metamaterial based solar absorbers, namely the thickness of the top grating layer, $h$, the volume fraction, $f$, and the size of the W nanoparticles, $r$. The thickness of each layer in the metal-dielectric-metal stack ($t_1$, $t_2$, and $t_3$) and the period, $\Lambda$, are kept fixed, as shown in Fig. 2. The other geometric parameters are fixed as in Fig. 2 while one of the investigated parameters is varied in our calculations. The incident angle is fixed at 0$^{\circ }$. Figures 3B and 3C show how the reflectivity spectra vary with the thickness of the top grating layer in the wavelength region from 0.3 $\mu$m to 3.0 $\mu$m, where 98.9% of the solar radiation energy is distributed. As shown in Fig. 3, the normal reflectivity over the visible and near-infrared regime drops and its coverage of the solar radiation region becomes larger as the thickness of the top grating layer increases. Similar trends are observed for both the 1-D and 2-D surface grating structures. As a result, a lower and broader reflectivity band from 0.3 $\mu$m to 1.6 $\mu$m can be achieved with a thickness of 150 nm and 200 nm for the 1-D and 2-D surface grating structures, respectively. This suggests that if a solar absorber operates at a low temperature and high solar concentration factors, the thickness of the top grating layer can be increased to obtain a higher photon-to-heat conversion efficiency.
Meanwhile, the reflectivity exceeds 0.95 when the wavelength is larger than 3 $\mu$m, which keeps thermal leakage through blackbody radiation to a minimum. Figures 4A and 4B illustrate the effects of the volume fraction, $f$, on the reflectivity spectra of the solar absorbers at normal incidence. For the 1-D surface grating structure (Fig. 4A), two peaks (marked by solid red circles) appear at around 0.5 $\mu$m and 1.2 $\mu$m. When the volume fraction of W nanoparticles increases from 10% to 30%, both peaks shift to longer wavelengths, while the reflectivity band becomes broader. Since W supports surface plasmon polaritons (SPPs), the red-shifts of these two peaks can be attributed to the excitation of SPPs. Ghanekar et al. [7,45] investigated explicitly the behavior of SPPs of metal nanoparticles embedded in a dielectric thin film. For the 2-D surface grating structure, there is no evident shift of the reflectivity peaks, which makes it more suitable for high absorptance within the solar radiation regime. Fig. 4. Normal reflectivity spectra as a function of W nanoparticle volume fraction, $f$ = 10%, 20% or 30%, for 1-D (A) and 2-D (B) surface grating-Mie-metamaterials; $f_1$ and $f_2$ denote the volume fractions of W nanoparticles embedded in the 1-D and 2-D Al$_2$O$_3$ host, respectively. 1-D (C) and 2-D (D) reflectivity spectra as the size of the W nanoparticles increases ($r$ = 1, 3, and 5 nm); $r_1$ and $r_2$ denote the size of the W nanoparticles in the 1-D and 2-D triangular surface grating structures, respectively. Refractive indices of W, SiO$_2$ and SiO$_2$ doped with W nanoparticles of volume fraction 20% and 10 nm radius. (E) Real part of refractive index. (F) Imaginary part of refractive index. Figures 4C and 4D elucidate how the size, $r$, of the W nanoparticles affects the radiative properties of both the 1-D and 2-D grating structures. The reflectivity spectra of both structures barely change when the radius of the W nanoparticles changes from 1 nm to 5 nm, because the maximum diameter of the W nanoparticles is 10 nm, which is much smaller than the minimum wavelength investigated here (300 nm). The two reflectivity peaks of the 1-D grating structure show a small red-shift. For the 2-D surface grating structure, no apparent shift of the reflectivity peaks is observed. Consequently, the size of the W nanoparticles has little effect on the radiative properties when the radius is less than 5 nm, and the desired conversion efficiency can be obtained by optimizing the geometric parameters of the 1-D and 2-D grating-Mie-metamaterials. Figures 4E and 4F illustrate the effect of W nanoparticle inclusions on the refractive indices of the Al$_2$O$_3$ host. Pure Al$_2$O$_3$ has a nearly constant refractive index (n) $\sim$ 1.73 and a negligible extinction coefficient ($\kappa$). Nano-sized W particles ($d$ $\leq$ 10 nm), much smaller than the operating wavelength, are introduced into the Al$_2$O$_3$ matrix, which increases the refractive index and extinction coefficient as a result of the addition of new materials as well as the Mie scattering of electromagnetic waves by the spherical nanoparticles [7,50]. 3.2 Diffuse and polarization-independent behaviors of solar absorbers In traditional CSP solar thermal systems, a solar tracker is incorporated to keep the solar absorber facing the sun at all times, since the sunlight is randomly distributed and not always normal to the surface of the solar absorber.
Nevertheless, the solar tracker consumes extra electricity and reduces the system efficiency, so it is important to make the absorptance of the proposed solar absorber angle-independent. In addition, polarization insensitivity is also highly desirable to maximize the solar absorption, since sunlight is unpolarized. Figure 5 shows contour plots of the reflectivity spectra of the designed solar absorbers as functions of incident angle, $\theta$, and wavelength, $\lambda$, for TE and TM waves, where redder colors represent higher reflectivity and bluer colors indicate lower reflectivity. The reflectivity of both the 1-D and 2-D grating structures remains low within the visible and near-infrared regions (0.3 $\mu$m to 1.7 $\mu$m) in Figs. 5A and 5C, while remaining high in the mid-infrared region (3 $\mu$m to 20 $\mu$m). The reflectivity of these two structures at 0.55 $\mu$m, where the irradiance peak of solar radiation lies, is listed in Table 2 for various incident angles. For both TE and TM polarized waves, the reflectivity of the designed solar absorbers is very low from an incidence angle of 0$^{\circ }$ to 75$^{\circ }$. The dips for TM waves that appear at around 12 $\mu$m are due to the surface plasmon resonance of the top Al$_2$O$_3$ gratings. However, the thermal radiation of a blackbody at 500 $^{\circ }C$ and 200 $^{\circ }C$ is mainly distributed over the intervals of 2 $\mu$m to 8 $\mu$m and 4 $\mu$m to 11 $\mu$m, respectively, which exhibit little overlap with the two dips. Consequently, these two dips in the reflectivity spectra only slightly affect the selectivity performance of the proposed solar absorbers in low- and mid-temperature solar thermal systems. For high-temperature solar thermal engineering (e.g. 800 – 1000 $^{\circ }C$), the corresponding blackbody thermal radiation overlaps with these two dips even less. Above all, the proposed solar absorbers are direction-insensitive and polarization-independent. Fig. 5. Angle-dependent reflectivity of TE polarization and TM polarization for the proposed 1-D (A) and 2-D (B) selective solar absorbers, contour plotted against wavelength, $\lambda$, and angle of incidence, $\theta$. Table 2. Reflectivity of the designed 1-D and 2-D grating-Mie-metamaterials at the solar irradiance peak of 0.55 $\mu$m, for various incident angles and for both TE and TM polarizations. 3.3 Photon-to-heat conversion efficiency analysis of spectrally selective solar absorber Photon-to-heat conversion efficiency is the ultimate target of any solar thermal engineering, so the thermal performance of the designed solar absorbers has been quantitatively investigated by analysing Eq. 1. As noted above, the melting points of the selected materials (1900 $^{\circ }C$ or higher) exceed the operational temperature of solar thermal systems (the highest being around 1200 $^{\circ }C$) and the thermal expansion coefficients of these materials are very low, so the optical properties of the designed grating-Mie-metamaterials are considered to be temperature-independent and we are able to employ the reflectivity data evaluated with the optical properties at room temperature. Additionally, since the designed solar absorber structures are demonstrated to be angle-insensitive as well as polarization-independent, the normal reflectivity calculated in section 3.1 is used to determine the photon-to-heat conversion efficiency.
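Since the efficiency analysis that follows is built entirely on Eq. (1), a compact sketch of that calculation is given below. It assumes that Q_abs is the incident solar flux (roughly 1000 W m^-2 for AM 1.5) and uses illustrative absorptance and emittance values rather than the paper's tabulated ones.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def conversion_efficiency(alpha_abs, eps_abs, T_abs_K, T_amb_K,
                          concentration_factor=1.0, q_abs_W_m2=1000.0):
    # Eq. (1): eta = alpha_abs - eps_abs * sigma * (T_abs^4 - T_amb^4) / (CF * Q_abs)
    radiative_loss = eps_abs * SIGMA * (T_abs_K**4 - T_amb_K**4)
    return alpha_abs - radiative_loss / (concentration_factor * q_abs_W_m2)

# Illustrative selective surface (alpha ~ 0.92, eps ~ 0.06) at 100 C under one sun:
# conversion_efficiency(0.92, 0.06, 373.15, 298.15)   # roughly 0.88 for these inputs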
The spectral integration for $\alpha _{abs}$ and $\epsilon _{abs}$ is carried out over wavelengths from 0.3 $\mu$m to 20 $\mu$m, which covers most of the solar radiation and of the blackbody radiation (above 200 $^{\circ }C$). Figure 6A shows the photon-to-heat conversion efficiency, $\eta _{abs}$, as a function of operational temperature, $T_{abs}$, under one sun (no concentrating optical system included), from 100 $^{\circ }C$ to 500 $^{\circ }C$, for an ideal absorber surface, the 1-D and 2-D grating-Mie-metamaterial based solar absorbers, and a black surface. Note that the ambient temperature is fixed at 25 $^{\circ }C$ and the cut-off wavelength, $\lambda _{cut-off}$, is adjusted to the operational temperature to keep the performance of the ideal selective solar absorber optimal. The black surface absorbs over all wavelengths investigated (unit absorptance and emittance) and shows no spectral selectivity (the black dot-dashed line in Fig. 1B). It is clearly illustrated that the energy conversion efficiencies of the solar absorbers are 88.74% and 92.02% for the 1-D and 2-D surface grating structures at $T_{abs}$ = 100 $^{\circ }C$, respectively. These two curves intersect at 338.1 $^{\circ }C$. The stagnation temperatures, at which $\eta _{abs}$ = 0 (i.e., no solar energy is harvested), of the 1-D and 2-D surface grating structures are 483.2 $^{\circ }C$ and 469.5 $^{\circ }C$, respectively. This difference results from the fact that the 1-D grating structure has a lower absorptance in the visible and near-infrared spectral region but also a lower emittance in the mid-infrared regime, as shown in Fig. 3A. The black surface, considered as a reference, converts only 34.82% of the solar energy to heat at $T_{abs}$ = 100 $^{\circ }C$, and its efficiency quickly drops to zero at 126 $^{\circ }C$, which further demonstrates the significance of wavelength selectivity in enhancing the solar-to-heat energy conversion efficiency. The mismatch between the proposed solar absorbers and the ideal solar absorber becomes even larger when the operational temperature goes up from 100 $^{\circ }C$ to 500 $^{\circ }C$. This can be explained by two points: (1) the thermal emittance of both the 1-D and 2-D grating structures (e.g. 5.2% and 6.0% at 500 $^{\circ }C$, listed in Tab. 1) is higher than that of the ideal one; and (2) the reflectivity spectrum of the ideal surface changes more sharply at the cut-off point than those of the proposed structures. Therefore, the geometric parameters still have room to be optimized toward the ideal case. Fig. 6. (A) Calculated photon-to-heat conversion efficiency of an ideal selective absorber, the multilayer solar absorber with measured/simulated radiative properties, and a black surface as a function of absorber operational temperature, T$_{abs}$, under unconcentrated solar light; (B) Photon-to-heat conversion efficiency for the abovementioned four absorber surfaces as a function of concentration factor, CF, at an absorber operational temperature of T$_{abs}$ = 500 $^{\circ }C$. Fig. 7. (A) Thermal performance of the 1-D (orange solid curve) and 2-D (yellow solid curve) grating-Mie-metamaterials and the black surface (purple solid curve) over a one-day cycle from sunrise (5:00 a.m.) to one hour after sunset (8:00 p.m.) at a varying ambient temperature (blue solid curve) under 10 suns (CF = 10). (B) The stagnation temperature of the 1-D and 2-D surface grating selective absorbers and the black surface at various concentration factors from 1 sun to 500 suns.
As shown in Fig. 6B, the conversion efficiency increases as the concentration factor, CF, changes from 1 to 1000 at an operational temperature, $T_{abs}$, of 500 $^{\circ }C$. At low concentration factors, the conversion efficiency of the 1-D surface grating structure is higher than that of the 2-D one, since the operating temperature is above the intersection temperature of 338.1 $^{\circ }C$ shown in Fig. 6A. The conversion efficiency of the 2-D grating structure overtakes the 1-D one at CF = 5, because at higher concentration factors the solar radiation becomes much larger than the thermal radiation of a 500 $^{\circ }C$ blackbody. The conversion efficiency of the black surface exceeds that of the 1-D and 2-D surface gratings at CF = 201 and 311, respectively. This is reasonable, since the quantity of absorbed solar radiation then dominates the photon-to-heat conversion efficiency. Consequently, the advantage of the selective solar absorber is lost when the concentration factor becomes higher than 311 in this work. 3.4 Thermal performance investigation of solar absorbers in a one-day cycle In order to evaluate the thermal performance of the proposed solar absorbers, we set the concentration factor to 10 over a one-day sunlight cycle on July 10, 2018 in Boston, Massachusetts [56]. Using the ambient temperature [56] and the solar illumination [57] as input data for Eq. 17, the temperature variations of the 1-D (solid orange curve) and 2-D (solid yellow curve) surface grating structures and the black surface (solid purple curve) are simulated from sunrise (5:00 a.m.) to one hour after sunset (8:00 p.m.). The highest stagnation temperatures of the proposed structures are 667 $^{\circ }C$ (1-D gratings) and 684 $^{\circ }C$ (2-D gratings) under 10 suns, while for the black surface the stagnation temperature reaches a maximum of 266 $^{\circ }C$. It is clearly shown that the heating rate of the selective solar absorbers is much higher than that of the black surface, which is significant for solar thermal power plants when a quick response is required according to the electricity load (Fig. 7). The thermal performance of the proposed selective solar absorbers is thus better than that of the black surface in terms of both the highest stagnation temperature and the heating rate, which reveals the significance of selective solar absorbers in CSP systems. Furthermore, we simulate the stagnation temperature of the 1-D and 2-D surface grating structures and the black surface as a function of the incident concentration factor. The stagnation temperature of the 1-D and 2-D grating structures increases from 362.8 $^{\circ }C$ to 1460 $^{\circ }C$ (1-D) and from 373.3 $^{\circ }C$ to 1481 $^{\circ }C$ (2-D) when the concentration factor increases from 1 to 500. The stagnation temperature of the proposed selective absorbers is always higher than that of the black surface, which further demonstrates the importance of such absorbers for CSP applications. Moreover, the fabrication of the proposed grating-Mie-metamaterial based absorbers is feasible. The metal-dielectric-metal resonators of both the 1-D and 2-D proposed nanostructures involve Al$_2$O$_3$, W, and Si$_3$N$_4$, which are common materials in nanofabrication. W can be deposited using the DC magnetron sputtering method [8,58,59]. RF, DC, and pulsed reactive magnetron sputtering are well established approaches to deposit Al$_2$O$_3$ thin films [60,61]. LPCVD and PECVD are both feasible candidates for depositing Si$_3$N$_4$ thin films [62–65].
The W-nanoparticle-embedded thin film can be manufactured through the co-sputtering method described in the literature [8,59,66]. Following the fabrication of 1-D [67–69] and 2-D [10] triangular grating nanostructures described in these articles, the 1-D and 2-D surface grating structures can be fabricated. 4. Conclusions In this work, we theoretically propose grating-Mie-metamaterial based selective solar absorbers consisting of 1-D and 2-D W-nanoparticle-embedded Al$_2$O$_3$ gratings on top of W-Si$_3$N$_4$/Al$_2$O$_3$-W stacks. High absorptance in the visible and near-infrared region and low emittance in the mid-infrared region are achieved at normal incidence in both the 1-D and 2-D surface grating structures. The physical mechanisms responsible for the high absorption include the excitation of SPPs, Mie resonance, and metal-dielectric-metal resonance. The effects of all the key geometric parameters, such as the grating height, the volume fraction and the size of the W nanoparticles, have been investigated. The 2-D surface grating has a higher absorptance than the 1-D one in the visible and near-infrared regime, while the 1-D grating structure has a lower spectral emittance than the 2-D one. The total absorptance of both designed solar absorbers is higher than 0.9 at normal incidence, while the total normal emittance is lower than 0.06 even at a high operating temperature of 500 $^{\circ }C$, which promises excellent thermal performance of spectrally selective solar absorbers with a high photon-to-heat conversion efficiency. Furthermore, the effects of incidence angle and light polarization are investigated to demonstrate angular insensitivity and polarization independence. The thermal performance over a one-day cycle is also simulated to illustrate the advantage over a black surface, which further demonstrates the significance of incorporating spectral selectivity into solar thermal engineering. Funding. National Aeronautics and Space Administration (NNX15AK52A); National Science Foundation (1836967). This project was supported in part by the National Science Foundation through grant number CBET-1836967 and by the National Aeronautics and Space Administration through grant number NNX15AK52A. References 1. X. Lim, "How heat from the sun can keep us all cool," Nat. News 542(7639), 23–24 (2017). [CrossRef] 2. D. Kraemer, B. Poudel, H.-P. Feng, J. C. Caylor, B. Yu, X. Yan, Y. Ma, X. Wang, D. Wang, A. Muto, K. McEnaney, M. Chiesa, Z. Ren, and G. Chen, "High-performance flat-panel solar thermoelectric generators with high thermal concentration," Nat. Mater. 10(7), 532–538 (2011). [CrossRef] 3. D. Kraemer, Q. Jie, K. McEnaney, F. Cao, W. Liu, L. A. Weinstein, J. Loomis, Z. Ren, and G. Chen, "Concentrating solar thermoelectric generators with a peak efficiency of 7.4%," Nat. Energy 1(11), 16153 (2016). [CrossRef] 4. H. M. Qiblawey and F. Banat, "Solar thermal desalination technologies," Desalination 220(1-3), 633–644 (2008). [CrossRef] 5. D. Mills, "Advances in solar thermal electricity technology," Sol. Energy 76(1-3), 19–31 (2004). [CrossRef] 6. N. Wang, L. Han, H. He, N.-H. Park, and K. Koumoto, "A novel high-performance photovoltaic–thermoelectric hybrid device," Energy Environ. Sci. 4(9), 3676–3679 (2011). [CrossRef] 7. Y. Tian, A. Ghanekar, X. Liu, J. Sheng, and Y. Zheng, "Tunable wavelength selectivity of photonic metamaterials-based thermal devices," J. Photonics Energy 9(03), 1 (2018). [CrossRef] 8. Y. Tian, A. Ghanekar, L. Qian, M. Ricci, X. Liu, G. Xiao, O.
Gregory, and Y. Zheng, "Near-infrared optics of nanoparticles embedded silica thin films," Opt. Express 27(4), A148–A157 (2019). [CrossRef] 9. Z. Zhang, Nano/Microscale Heat Transfer (McGraw-Hill, New York, 2007). 10. H. Wang, V. P. Sivan, A. Mitchell, G. Rosengarten, P. Phelan, and L. Wang, "Highly efficient selective metamaterial absorber for high-temperature solar thermal energy harvesting," Sol. Energy Mater. Sol. Cells 137, 235–242 (2015). [CrossRef] 11. R. Siegel, Thermal radiation heat transfer, vol. 1 (CRC press, 2001). 12. R. Pettit, R. Sowell, and I. Hall, "Black chrome solar selective coatings optimized for high temperature applications," Sol. Energy Mater. 7(2), 153–170 (1982). [CrossRef] 13. G. E. McDonald, "Spectral reflectance properties of black chrome for use as a solar selective coating," Sol. Energy 17(2), 119–122 (1975). [CrossRef] 14. J. Sweet, R. Pettit, and M. Chamberlain, "Optical modeling and aging characteristics of thermally stable black chrome solar selective coatings," Sol. Energy Mater. 10(3-4), 251–286 (1984). [CrossRef] 15. C. K. Ho, A. R. Mahoney, A. Ambrosini, M. Bencomo, A. Hall, and T. N. Lambert, "Characterization of pyromark 2500 paint for high-temperature solar receivers," J. Sol. Energy Eng. 136(1), 014502 (2014). [CrossRef] 16. C. M. Watts, X. Liu, and W. J. Padilla, "Metamaterial electromagnetic wave absorbers (adv. mater. 23/2012)," Adv. Mater. 24(23), OP181 (2012). [CrossRef] 17. A. Isenstadt and J. Xu, "Subwavelength metal optics and antireflection," Electron. Mater. Lett. 9(2), 125–132 (2013). [CrossRef] 18. A. Ghanekar, M. Sun, Z. Zhang, and Y. Zheng, "Optimal design of wavelength selective thermal emitter for thermophotovoltaic applications," J. Therm. Sci. Eng. Appl. 10(1), 011004 (2018). [CrossRef] 19. H. Wang and L. Wang, "Perfect selective metamaterial solar absorbers," Opt. Express 21(S6), A1078–A1093 (2013). [CrossRef] 20. B. J. Lee, Y.-B. Chen, S. Han, F.-C. Chiu, and H. J. Lee, "Wavelength-selective solar thermal absorber with two-dimensional nickel gratings," J. Heat Transfer 136(7), 072702 (2014). [CrossRef] 21. S. Han, J.-H. Shin, P.-H. Jung, H. Lee, and B. J. Lee, "Broadband solar thermal absorber based on optical metamaterials for high-temperature applications," Adv. Opt. Mater. 4(8), 1265–1273 (2016). [CrossRef] 22. R. Marques, J. Martel, F. Mesa, and F. Medina, "Left-handed-media simulation and transmission of em waves in subwavelength split-ring-resonator-loaded metallic waveguides," Phys. Rev. Lett. 89(18), 183901 (2002). [CrossRef] 23. K. Aydin and E. Ozbay, "Capacitor-loaded split ring resonators as tunable metamaterial components," J. Appl. Phys. 101(2), 024911 (2007). [CrossRef] 24. D. Y. Shchegolkov, A. Azad, J. O'Hara, and E. Simakov, "Perfect subwavelength fishnetlike metamaterial-based film terahertz absorbers," Phys. Rev. B 82(20), 205117 (2010). [CrossRef] 25. N. Liu, H. Guo, L. Fu, S. Kaiser, H. Schweizer, and H. Giessen, "Plasmon hybridization in stacked cut-wire metamaterials," Adv. Mater. 19(21), 3628–3632 (2007). [CrossRef] 26. H. Tao, N. I. Landy, C. M. Bingham, X. Zhang, R. D. Averitt, and W. J. Padilla, "A metamaterial absorber for the terahertz regime: design, fabrication and characterization," Opt. Express 16(10), 7181–7188 (2008). [CrossRef] 27. N. Liu, M. Mesch, T. Weiss, M. Hentschel, and H. Giessen, "Infrared perfect absorber and its application as plasmonic sensor," Nano Lett. 10(7), 2342–2348 (2010). [CrossRef] 28. J. Hao, J. Wang, X. Liu, W. J. Padilla, L. Zhou, and M. 
Qiu, "High performance optical absorber based on a plasmonic metamaterial," Appl. Phys. Lett. 96(25), 251104 (2010). [CrossRef] 29. K. Aydin, V. E. Ferry, R. M. Briggs, and H. A. Atwater, "Broadband polarization-independent resonant light absorption using ultrathin plasmonic super absorbers," Nat. Commun. 2(1), 517 (2011). [CrossRef] 30. A. Moreau, C. Ciraci, J. J. Mock, R. T. Hill, Q. Wang, B. J. Wiley, A. Chilkoti, and D. R. Smith, "Controlled-reflectance surfaces with film-coupled colloidal nanoantennas," Nature 492(7427), 86–89 (2012). [CrossRef] 31. M. G. Nielsen, A. Pors, O. Albrektsen, and S. I. Bozhevolnyi, "Efficient absorption of visible radiation by gap plasmon resonators," Opt. Express 20(12), 13311–13319 (2012). [CrossRef] 32. A. Ghanekar, Y. Tian, S. Zhang, Y. Cui, and Y. Zheng, "Mie-metamaterials-based thermal emitter for near-field thermophotovoltaic systems," Materials 10(8), 885 (2017). [CrossRef] 33. A. Ghanekar, L. Lin, and Y. Zheng, "Novel and efficient mie-metamaterial thermal emitter for thermophotovoltaic systems," Opt. Express 24(10), A868–A877 (2016). [CrossRef] 34. J. Dai, F. Ye, Y. Chen, M. Muhammed, M. Qiu, and M. Yan, "Light absorber based on nano-spheres on a substrate reflector," Opt. Express 21(6), 6697–6706 (2013). [CrossRef] 35. A. S. (ASTM), "Calculation of Solar Insolation at AM1.5," https://www.nrel.gov/grid/solar-resource/spectra.html. 36. D. H. Raguin and G. M. Morris, "Antireflection structured surfaces for the infrared spectral region," Appl. Opt. 32(7), 1154–1167 (1993). [CrossRef] 37. P. Lalanne and D. Lemercier-Lalanne, "On the effective medium theory of subwavelength periodic structures," J. Mod. Opt. 43(10), 2063–2085 (1996). [CrossRef] 38. Y.-B. Chen, Z. Zhang, and P. Timans, "Radiative properties of patterned wafers with nanoscale linewidth," J. Heat Transfer 129(1), 79–90 (2007). [CrossRef] 39. E. Glytsis and T. K. Gaylord, "High-spatial-frequency binary and multilevel stairstep gratings: polarization-selective mirrors and broadband antireflection surfaces," Appl. Opt. 31(22), 4459–4470 (1992). [CrossRef] 40. R. Bräuer and O. Bryngdahl, "Design of antireflection gratings with approximate and rigorous methods," Appl. Opt. 33(34), 7875–7882 (1994). [CrossRef] 41. V. Myroshnychenko, J. Rodríguez-Fernández, I. Pastoriza-Santos, A. M. Funston, C. Novo, P. Mulvaney, L. M. Liz-Marzán, and F. J. G. de Abajo, "Modelling the optical response of gold nanoparticles," Chem. Soc. Rev. 37(9), 1792–1805 (2008). [CrossRef] 42. U. Kreibig and M. Vollmer, Optical properties of metal clusters, vol. 25 (Springer Berlin, 1995). 43. W. T. Doyle, "Optical properties of a suspension of metal spheres," Phys. Rev. B 39(14), 9852–9858 (1989). [CrossRef] 44. M. S. Wheeler, "A scattering-based approach to the design, analysis, and experimental verification of magnetic metamaterials made from dielectrics," Ph.D. thesis (2010). 45. A. Ghanekar, L. Lin, J. Su, H. Sun, and Y. Zheng, "Role of nanoparticles in wavelength selectivity of multilayered structures in the far-field and near-field regimes," Opt. Express 23(19), A1129–A1139 (2015). [CrossRef] 46. W. C. Chew and W. C. Chew, Waves and fields in inhomogeneous media, vol. 522 (IEEE press New York, 1995). 47. M. R. Querry, "Optical constants," Tech. rep., MISSOURI UNIV-KANSAS CITY (1985). 48. L. Gao, F. Lemarchand, and M. Lequime, "Refractive index determination of sio2 layer in the uv/vis/nir range: spectrophotometric reverse engineering on single and bi-layer designs," J. Eur. Opt. Soc. publications 8, 13010 (2013). 
[CrossRef] 49. S. Ferré, A. Peinado, E. Garcia-Caurel, V. Trinité, M. Carras, and R. Ferreira, "Comparative study of sio 2, si 3 n 4 and tio 2 thin films as passivation layers for quantum cascade lasers," Opt. Express 24(21), 24032–24044 (2016). [CrossRef] 52. M. Ono, K. Chen, W. Li, and S. Fan, "Self-adaptive radiative cooling based on phase change materials," Opt. Express 26(18), A777–A787 (2018). [CrossRef] 53. P. Yang, C. Chen, and Z. M. Zhang, "A dual-layer structure with record-high solar reflectance for daytime radiative cooling," Sol. Energy 169, 316–324 (2018). [CrossRef] 54. A. Berk, G. P. Anderson, L. S. Bernstein, P. K. Acharya, H. Dothe, M. W. Matthew, S. M. Adler-Golden, J. H. Chetwynd, S. C. Richtsmeier, B. Pukall, C. L. Allred, L. S. Jeong, and M. L. Hoke, Modtran4 radiative transfer modeling for atmospheric correction, in Optical spectroscopic techniques and instrumentation for atmospheric and space research III, vol. 3756 (International Society for Optics and Photonics, 1999), pp. 348–354. 55. P.-M. Robitaille, "Kirchhoff's law of thermal emission: 150 years," in Progress in Physics, Vol. 4 (Progress in Physics, 2009) pp. 3–13. 56. Weatherground, "Weather conditions," https://www.wunderground.com/history/daily/us/ma/boston/KBOS/date/2018-7-10 (2018). [Online; accessed 10-July-2018]. 57. PVEDUCATION.ORG, "Calculation of Solar Insolation," https://www.pveducation.org/pvcdrom/properties-of-sunlight/calculation-of-solar-insolation (2018). 58. T. Vink, W. Walrave, J. Daams, A. Dirks, M. Somers, and K. Van den Aker, "Stress, strain, and microstructure in thin tungsten films deposited by dc magnetron sputtering," J. Appl. Phys. 74(2), 988–995 (1993). [CrossRef] 59. F. Cao, D. Kraemer, L. Tang, Y. Li, A. P. Litvinchuk, J. Bao, G. Chen, and Z. Ren, "A high-performance spectrally-selective solar absorber based on a yttria-stabilized zirconia cermet with high-temperature stability," Energy Environ. Sci. 8(10), 3040–3048 (2015). [CrossRef] 60. Y. Zhao, Y. Qian, W. Yu, and Z. Chen, "Surface roughness of alumina films deposited by reactive rf sputtering," Thin Solid Films 286(1-2), 45–48 (1996). [CrossRef] 61. R. Cremer, M. Witthaut, D. Neuschütz, G. Erkens, T. Leyendecker, and M. Feldhege, "Comparative characterization of alumina coatings deposited by rf, dc and pulsed reactive magnetron sputtering," Surf. Coat. Technol. 120-121, 213–218 (1999). [CrossRef] 62. A. Gorin, A. Jaouad, E. Grondin, V. Aimez, and P. Charette, "Fabrication of silicon nitride waveguides for visible-light using pecvd: a study of the effect of plasma frequency on optical properties," Opt. Express 16(18), 13509–13516 (2008). [CrossRef] 63. H. Kato, K. Orito, H. Kikuchi, and S. Maku, "Method of cvd for forming silicon nitride film on substrate," (2006). US Patent 7,094,708. 64. H. Kanoh, O. Sugiura, P. Breddels, and M. Matsumura, "Amorphous-silicon/silicon-nitride thin-film transistors fabricated by plasma-free (chemical vapor deposition) method," IEEE Electron Device Lett. 11(6), 258–260 (1990). [CrossRef] 65. M. Ino, N. Inoue, and M. Yoshimaru, "Silicon nitride thin-film deposition by lpcvd with in situ hf vapor cleaning and its application to stacked dram capacitor fabrication," IEEE Trans. Electron Devices 41(5), 703–708 (1994). [CrossRef] 66. F. Cao, L. Tang, Y. Li, A. P. Litvinchuk, J. Bao, and Z. Ren, "A high-temperature stable spectrally-selective solar absorber based on cermet of titanium nitride in sio 2 deposited on lanthanum aluminate," Sol. Energy Mater. Sol. Cells 160, 12–17 (2017). [CrossRef] 67. L. 
Wang and Z. Zhang, "Wavelength-selective and diffuse emitter enhanced by magnetic polaritons for thermophotovoltaics," Appl. Phys. Lett. 100(6), 063902 (2012). [CrossRef] 68. J. Wu, C. Zhou, H. Cao, and A. Hu, "Polarization-dependent and-independent spectrum selective absorption based on a metallic grating structure," Opt. Commun. 309, 57–63 (2013). [CrossRef] 69. L. Wang and Z. Zhang, "Measurement of coherent thermal emission due to magnetic polaritons in subwavelength microstructures," J. Heat Transfer 135(9), 091505 (2013). [CrossRef] X. Lim, "How heat from the sun can keep us all cool," Nat. News 542(7639), 23–24 (2017). D. Kraemer, B. Poudel, H.-P. Feng, J. C. Caylor, B. Yu, X. Yan, Y. Ma, X. Wang, D. Wang, A. Muto, K. McEnaney, M. Chiesa, Z. Ren, and G. Chen, "High-performance flat-panel solar thermoelectric generators with high thermal concentration," Nat. Mater. 10(7), 532–538 (2011). D. Kraemer, Q. Jie, K. McEnaney, F. Cao, W. Liu, L. A. Weinstein, J. Loomis, Z. Ren, and G. Chen, "Concentrating solar thermoelectric generators with a peak efficiency of 7.4%," Nat. Energy 1(11), 16153 (2016). H. M. Qiblawey and F. Banat, "Solar thermal desalination technologies," Desalination 220(1-3), 633–644 (2008). D. Mills, "Advances in solar thermal electricity technology," Sol. Energy 76(1-3), 19–31 (2004). N. Wang, L. Han, H. He, N.-H. Park, and K. Koumoto, "A novel high-performance photovoltaic–thermoelectric hybrid device," Energy Environ. Sci. 4(9), 3676–3679 (2011). Y. Tian, A. Ghanekar, X. Liu, J. Sheng, and Y. Zheng, "Tunable wavelength selectivity of photonic metamaterials-based thermal devices," J. Photonics Energy 9(03), 1 (2018). Y. Tian, A. Ghanekar, L. Qian, M. Ricci, X. Liu, G. Xiao, O. Gregory, and Y. Zheng, "Near-infrared optics of nanoparticles embedded silica thin films," Opt. Express 27(4), A148–A157 (2019). Z. Zhang, Nano/Microscale Heat Transfer (McGraw-Hill, New York, 2007). H. Wang, V. P. Sivan, A. Mitchell, G. Rosengarten, P. Phelan, and L. Wang, "Highly efficient selective metamaterial absorber for high-temperature solar thermal energy harvesting," Sol. Energy Mater. Sol. Cells 137, 235–242 (2015). R. Siegel, Thermal radiation heat transfer, vol. 1 (CRC press, 2001). R. Pettit, R. Sowell, and I. Hall, "Black chrome solar selective coatings optimized for high temperature applications," Sol. Energy Mater. 7(2), 153–170 (1982). G. E. McDonald, "Spectral reflectance properties of black chrome for use as a solar selective coating," Sol. Energy 17(2), 119–122 (1975). J. Sweet, R. Pettit, and M. Chamberlain, "Optical modeling and aging characteristics of thermally stable black chrome solar selective coatings," Sol. Energy Mater. 10(3-4), 251–286 (1984). C. K. Ho, A. R. Mahoney, A. Ambrosini, M. Bencomo, A. Hall, and T. N. Lambert, "Characterization of pyromark 2500 paint for high-temperature solar receivers," J. Sol. Energy Eng. 136(1), 014502 (2014). C. M. Watts, X. Liu, and W. J. Padilla, "Metamaterial electromagnetic wave absorbers (adv. mater. 23/2012)," Adv. Mater. 24(23), OP181 (2012). A. Isenstadt and J. Xu, "Subwavelength metal optics and antireflection," Electron. Mater. Lett. 9(2), 125–132 (2013). A. Ghanekar, M. Sun, Z. Zhang, and Y. Zheng, "Optimal design of wavelength selective thermal emitter for thermophotovoltaic applications," J. Therm. Sci. Eng. Appl. 10(1), 011004 (2018). H. Wang and L. Wang, "Perfect selective metamaterial solar absorbers," Opt. Express 21(S6), A1078–A1093 (2013). B. J. Lee, Y.-B. Chen, S. Han, F.-C. Chiu, and H. J. 
Lee, "Wavelength-selective solar thermal absorber with two-dimensional nickel gratings," J. Heat Transfer 136(7), 072702 (2014). S. Han, J.-H. Shin, P.-H. Jung, H. Lee, and B. J. Lee, "Broadband solar thermal absorber based on optical metamaterials for high-temperature applications," Adv. Opt. Mater. 4(8), 1265–1273 (2016). R. Marques, J. Martel, F. Mesa, and F. Medina, "Left-handed-media simulation and transmission of em waves in subwavelength split-ring-resonator-loaded metallic waveguides," Phys. Rev. Lett. 89(18), 183901 (2002). K. Aydin and E. Ozbay, "Capacitor-loaded split ring resonators as tunable metamaterial components," J. Appl. Phys. 101(2), 024911 (2007). D. Y. Shchegolkov, A. Azad, J. O'Hara, and E. Simakov, "Perfect subwavelength fishnetlike metamaterial-based film terahertz absorbers," Phys. Rev. B 82(20), 205117 (2010). N. Liu, H. Guo, L. Fu, S. Kaiser, H. Schweizer, and H. Giessen, "Plasmon hybridization in stacked cut-wire metamaterials," Adv. Mater. 19(21), 3628–3632 (2007). H. Tao, N. I. Landy, C. M. Bingham, X. Zhang, R. D. Averitt, and W. J. Padilla, "A metamaterial absorber for the terahertz regime: design, fabrication and characterization," Opt. Express 16(10), 7181–7188 (2008). N. Liu, M. Mesch, T. Weiss, M. Hentschel, and H. Giessen, "Infrared perfect absorber and its application as plasmonic sensor," Nano Lett. 10(7), 2342–2348 (2010). J. Hao, J. Wang, X. Liu, W. J. Padilla, L. Zhou, and M. Qiu, "High performance optical absorber based on a plasmonic metamaterial," Appl. Phys. Lett. 96(25), 251104 (2010). K. Aydin, V. E. Ferry, R. M. Briggs, and H. A. Atwater, "Broadband polarization-independent resonant light absorption using ultrathin plasmonic super absorbers," Nat. Commun. 2(1), 517 (2011). A. Moreau, C. Ciraci, J. J. Mock, R. T. Hill, Q. Wang, B. J. Wiley, A. Chilkoti, and D. R. Smith, "Controlled-reflectance surfaces with film-coupled colloidal nanoantennas," Nature 492(7427), 86–89 (2012). M. G. Nielsen, A. Pors, O. Albrektsen, and S. I. Bozhevolnyi, "Efficient absorption of visible radiation by gap plasmon resonators," Opt. Express 20(12), 13311–13319 (2012). A. Ghanekar, Y. Tian, S. Zhang, Y. Cui, and Y. Zheng, "Mie-metamaterials-based thermal emitter for near-field thermophotovoltaic systems," Materials 10(8), 885 (2017). A. Ghanekar, L. Lin, and Y. Zheng, "Novel and efficient mie-metamaterial thermal emitter for thermophotovoltaic systems," Opt. Express 24(10), A868–A877 (2016). J. Dai, F. Ye, Y. Chen, M. Muhammed, M. Qiu, and M. Yan, "Light absorber based on nano-spheres on a substrate reflector," Opt. Express 21(6), 6697–6706 (2013). A. S. (ASTM), "Calculation of Solar Insolation at AM1.5," https://www.nrel.gov/grid/solar-resource/spectra.html . D. H. Raguin and G. M. Morris, "Antireflection structured surfaces for the infrared spectral region," Appl. Opt. 32(7), 1154–1167 (1993). P. Lalanne and D. Lemercier-Lalanne, "On the effective medium theory of subwavelength periodic structures," J. Mod. Opt. 43(10), 2063–2085 (1996). Y.-B. Chen, Z. Zhang, and P. Timans, "Radiative properties of patterned wafers with nanoscale linewidth," J. Heat Transfer 129(1), 79–90 (2007). E. Glytsis and T. K. Gaylord, "High-spatial-frequency binary and multilevel stairstep gratings: polarization-selective mirrors and broadband antireflection surfaces," Appl. Opt. 31(22), 4459–4470 (1992). R. Bräuer and O. Bryngdahl, "Design of antireflection gratings with approximate and rigorous methods," Appl. Opt. 33(34), 7875–7882 (1994). V. Myroshnychenko, J. Rodríguez-Fernández, I. 
Pastoriza-Santos, A. M. Funston, C. Novo, P. Mulvaney, L. M. Liz-Marzán, and F. J. G. de Abajo, "Modelling the optical response of gold nanoparticles," Chem. Soc. Rev. 37(9), 1792–1805 (2008). U. Kreibig and M. Vollmer, Optical properties of metal clusters, vol. 25 (Springer Berlin, 1995). W. T. Doyle, "Optical properties of a suspension of metal spheres," Phys. Rev. B 39(14), 9852–9858 (1989). M. S. Wheeler, "A scattering-based approach to the design, analysis, and experimental verification of magnetic metamaterials made from dielectrics," Ph.D. thesis (2010). A. Ghanekar, L. Lin, J. Su, H. Sun, and Y. Zheng, "Role of nanoparticles in wavelength selectivity of multilayered structures in the far-field and near-field regimes," Opt. Express 23(19), A1129–A1139 (2015). W. C. Chew and W. C. Chew, Waves and fields in inhomogeneous media, vol. 522 (IEEE press New York, 1995). M. R. Querry, "Optical constants," Tech. rep., MISSOURI UNIV-KANSAS CITY (1985). L. Gao, F. Lemarchand, and M. Lequime, "Refractive index determination of sio2 layer in the uv/vis/nir range: spectrophotometric reverse engineering on single and bi-layer designs," J. Eur. Opt. Soc. publications 8, 13010 (2013). S. Ferré, A. Peinado, E. Garcia-Caurel, V. Trinité, M. Carras, and R. Ferreira, "Comparative study of sio 2, si 3 n 4 and tio 2 thin films as passivation layers for quantum cascade lasers," Opt. Express 24(21), 24032–24044 (2016). M. Ono, K. Chen, W. Li, and S. Fan, "Self-adaptive radiative cooling based on phase change materials," Opt. Express 26(18), A777–A787 (2018). P. Yang, C. Chen, and Z. M. Zhang, "A dual-layer structure with record-high solar reflectance for daytime radiative cooling," Sol. Energy 169, 316–324 (2018). A. Berk, G. P. Anderson, L. S. Bernstein, P. K. Acharya, H. Dothe, M. W. Matthew, S. M. Adler-Golden, J. H. Chetwynd, S. C. Richtsmeier, B. Pukall, C. L. Allred, L. S. Jeong, and M. L. Hoke, Modtran4 radiative transfer modeling for atmospheric correction, in Optical spectroscopic techniques and instrumentation for atmospheric and space research III, vol. 3756 (International Society for Optics and Photonics, 1999), pp. 348–354. P.-M. Robitaille, "Kirchhoff's law of thermal emission: 150 years," in Progress in Physics, Vol. 4 (Progress in Physics, 2009) pp. 3–13. Weatherground, "Weather conditions," https://www.wunderground.com/history/daily/us/ma/boston/KBOS/date/2018-7-10 (2018). [Online; accessed 10-July-2018]. PVEDUCATION.ORG, "Calculation of Solar Insolation," https://www.pveducation.org/pvcdrom/properties-of-sunlight/calculation-of-solar-insolation (2018). T. Vink, W. Walrave, J. Daams, A. Dirks, M. Somers, and K. Van den Aker, "Stress, strain, and microstructure in thin tungsten films deposited by dc magnetron sputtering," J. Appl. Phys. 74(2), 988–995 (1993). F. Cao, D. Kraemer, L. Tang, Y. Li, A. P. Litvinchuk, J. Bao, G. Chen, and Z. Ren, "A high-performance spectrally-selective solar absorber based on a yttria-stabilized zirconia cermet with high-temperature stability," Energy Environ. Sci. 8(10), 3040–3048 (2015). Y. Zhao, Y. Qian, W. Yu, and Z. Chen, "Surface roughness of alumina films deposited by reactive rf sputtering," Thin Solid Films 286(1-2), 45–48 (1996). R. Cremer, M. Witthaut, D. Neuschütz, G. Erkens, T. Leyendecker, and M. Feldhege, "Comparative characterization of alumina coatings deposited by rf, dc and pulsed reactive magnetron sputtering," Surf. Coat. Technol. 120-121, 213–218 (1999). A. Gorin, A. Jaouad, E. Grondin, V. Aimez, and P. 
Charette, "Fabrication of silicon nitride waveguides for visible-light using pecvd: a study of the effect of plasma frequency on optical properties," Opt. Express 16(18), 13509–13516 (2008). H. Kato, K. Orito, H. Kikuchi, and S. Maku, "Method of cvd for forming silicon nitride film on substrate," (2006). US Patent 7,094,708. H. Kanoh, O. Sugiura, P. Breddels, and M. Matsumura, "Amorphous-silicon/silicon-nitride thin-film transistors fabricated by plasma-free (chemical vapor deposition) method," IEEE Electron Device Lett. 11(6), 258–260 (1990). M. Ino, N. Inoue, and M. Yoshimaru, "Silicon nitride thin-film deposition by lpcvd with in situ hf vapor cleaning and its application to stacked dram capacitor fabrication," IEEE Trans. Electron Devices 41(5), 703–708 (1994). F. Cao, L. Tang, Y. Li, A. P. Litvinchuk, J. Bao, and Z. Ren, "A high-temperature stable spectrally-selective solar absorber based on cermet of titanium nitride in sio 2 deposited on lanthanum aluminate," Sol. Energy Mater. Sol. Cells 160, 12–17 (2017). L. Wang and Z. Zhang, "Wavelength-selective and diffuse emitter enhanced by magnetic polaritons for thermophotovoltaics," Appl. Phys. Lett. 100(6), 063902 (2012). J. Wu, C. Zhou, H. Cao, and A. Hu, "Polarization-dependent and-independent spectrum selective absorption based on a metallic grating structure," Opt. Commun. 309, 57–63 (2013). L. Wang and Z. Zhang, "Measurement of coherent thermal emission due to magnetic polaritons in subwavelength microstructures," J. Heat Transfer 135(9), 091505 (2013). Acharya, P. K. Adler-Golden, S. M. Aimez, V. Albrektsen, O. Allred, C. L. Ambrosini, A. Anderson, G. P. Atwater, H. A. Averitt, R. D. Aydin, K. Azad, A. Banat, F. Bao, J. Bencomo, M. Berk, A. Bernstein, L. S. Bingham, C. M. Bozhevolnyi, S. I. Bräuer, R. Breddels, P. Briggs, R. M. Bryngdahl, O. Cao, F. Cao, H. Carras, M. Caylor, J. C. Chamberlain, M. Charette, P. Chen, G. Chen, K. Chen, Y.-B. Chen, Z. Chetwynd, J. H. Chew, W. C. Chiesa, M. Chilkoti, A. Chiu, F.-C. Ciraci, C. Cremer, R. Cui, Y. Daams, J. Dai, J. de Abajo, F. J. G. Dirks, A. Dothe, H. Doyle, W. T. Erkens, G. Fan, S. Feldhege, M. Feng, H.-P. Ferré, S. Ferreira, R. Ferry, V. E. Fu, L. Funston, A. M. Gao, L. Garcia-Caurel, E. Gaylord, T. K. Ghanekar, A. Giessen, H. Glytsis, E. Gorin, A. Gregory, O. Grondin, E. Guo, H. Hall, A. Hall, I. Han, L. Han, S. Hao, J. He, H. Hentschel, M. Hill, R. T. Ho, C. K. Hoke, M. L. Hu, A. Ino, M. Inoue, N. Isenstadt, A. Jaouad, A. Jeong, L. S. Jie, Q. Jung, P.-H. Kaiser, S. Kanoh, H. Kato, H. Kikuchi, H. Koumoto, K. Kraemer, D. Kreibig, U. Lalanne, P. Lambert, T. N. Landy, N. I. Lee, B. J. Lee, H. J. Lemarchand, F. Lemercier-Lalanne, D. Lequime, M. Leyendecker, T. Li, W. Li, Y. Lim, X. Lin, L. Litvinchuk, A. P. Liu, N. Liu, W. Liu, X. Liz-Marzán, L. M. Loomis, J. Ma, Y. Mahoney, A. R. Maku, S. Marques, R. Martel, J. Matsumura, M. Matthew, M. W. McDonald, G. E. McEnaney, K. Medina, F. Mesa, F. Mesch, M. Mills, D. Mitchell, A. Mock, J. J. Moreau, A. Morris, G. M. Muhammed, M. Mulvaney, P. Muto, A. Myroshnychenko, V. Neuschütz, D. Nielsen, M. G. Novo, C. O'Hara, J. Ono, M. Orito, K. Ozbay, E. Padilla, W. J. Park, N.-H. Pastoriza-Santos, I. Peinado, A. Pettit, R. Phelan, P. Pors, A. Poudel, B. Pukall, B. Qian, L. Qian, Y. Qiblawey, H. M. Qiu, M. Querry, M. R. Raguin, D. H. Ren, Z. Ricci, M. Richtsmeier, S. C. Robitaille, P.-M. Rodríguez-Fernández, J. Rosengarten, G. Schweizer, H. Shchegolkov, D. Y. Sheng, J. Shin, J.-H. Siegel, R. Simakov, E. Sivan, V. P. Smith, D. R. Somers, M. Sowell, R. 
Fig. 1. (A) A typical solar thermal energy conversion system. (B) Solar spectral irradiance (AM1.5, global tilt), radiative heat flux of blackbody thermal radiation at 200 $^{\circ }C$ and 500 $^{\circ }C$, and reflectivity spectrum of an ideal selective solar absorber and a black surface.
Fig. 2. Schematics of 1-D and 2-D grating-Mie-metamaterial based solar absorbers. (A) 1-D triangular Al$_2$O$_3$ surface gratings of height $h$ = 150 nm and period $\Lambda$ = 100 nm on top of W-Si$_3$N$_4$-W stacks with thicknesses of $t_1$ = 12 nm, $t_2$ = 35 nm, and $t_3$ = 500 nm, respectively. The Al$_2$O$_3$ triangular grating is doped with W nanoparticles of 5 nm radius at a volume fraction, $f$, of 25%. (B) 2-D pyramids embedded with W nanoparticles ($r$ = 5 nm radius, volume fraction $f$ = 25%) sit on a W-Al$_2$O$_3$-W stack. The thicknesses of W, Al$_2$O$_3$, and W are 10 nm, 40 nm, and 500 nm, respectively. The height of the surface grating layer is 200 nm and the period $\Lambda$ = 200 nm in both the $x$ and $y$ directions.
Adoption of improved citrus orchard management practices: a micro study from Drujegang growers, Dagana, Bhutan Kinley Dorji1, Lakey Lakey2, Sonam Chophel3, Sonam Dechen Dorji4 & Birkha Tamang1 Citrus ranks top among the agricultural export commodities of Bhutan both in terms of volume and value. However, citrus cultivation practices still remain traditional, with very low yield and inferior fruit quality. This study adopted a community approach to identify the basic components of citrus orchard management. Citrus growers of Drujegang were trained in citrus orchard management, and the impact of the training on the adoption of management technology and its subsequent effect on yield and household (HH) income were assessed for 40 randomly selected individuals. Statistical results showed significant differences both in terms of adoption of improved orchard management practices (p = 0.04) and HH income generation (p = 0.01). Adoption of improved management practices increased from 4.54 % (in 2012) to over 16 % (in 2014), with a mean yield increase of 27.5 % (212 kg acre−1) over the previous year. Similarly, mean production increased from 5376 (2012) to 11,993 (2014) kg HH−1. Thus, average annual HH income from the sale of citrus increased from Nu. 82,641 (in 2012) to Nu. 164,307 (in 2014). Hands-on training on basic orchard management increased the rate of adoption and resulted in increased yield and production. Huge potential exists for enhancing the livelihood of citrus growers by taking the available orchard management technology forward to growers through appropriate research and extension interventions. Therefore, replication of a similar participatory approach at the community level is recommended in other parts of the country. "Citrus" is a generic term that refers to a wide range of plants in the Rutaceae family. In Bhutan, citrus refers exclusively to mandarin (Citrus reticulata Blanco), which constitutes more than 95 % of total citrus production in the country [1]. Citrus is a major horticultural crop of Bhutan, cultivated on 5048.6 hectares in 17 of the 20 districts. Currently, citrus ranks first both in terms of export volume and value [2]. It is also the main agricultural commodity that earns foreign exchange, and it provides a livelihood to 60 % of the rural population [1]. However, the national yield (3.9 tons acre−1) is far below the average yield of Thailand and Taiwan (6 tons acre−1) (http://www.agnet.org/index.php), mainly because of poor technology adoption and a traditional system of management [3]. In fact, citrus orchard management remains almost primitive [4], even though the market demand for Bhutanese mandarin across the border has been almost consistent over the years [5]. Almost all the existing citrus trees are raised from seedlings, which are mostly grown in the growers' own home yards. Citrus trees in the field remain under water stress for almost 8 months in a year, besides receiving poor nutrient management [4]. Technology is an important force for increasing yield and production in agriculture. The adoption of technology depends on several factors: economic, social, institutional, and policy [6, 7]. The adoption of new technology also depends on farmers' needs, and any new technology must fit into the complex, dynamic pattern of agriculture in which all participate [8]. Assessment of technology and its adoption has become an essential component of research and extension interventions to justify the investments in technology generation and adoption to the funding agencies.
Even more attention is currently paid to the assessment of research-extension technology and its transfer process to enhance the transparency, accountability, and effectiveness of the project [9–12]. Several international organizations and researchers adopt different areas of focus for assessing research technology and its adoption. For example, the CIMMYT method of assessment focuses on the study of agricultural change, with little attention paid to the process of technology development [12]. In contrast, the ICARDA approach emphasizes the process of technology generation as well as adoption, where the long-term impact depends on the nature of the technology [13]. Both methods mentioned above are based on the results of sound socio-economic analysis (adaptability, adoptability, and potential impact analysis). Further, the evaluation process depends on the stage of implementation of the technology adoption study (ex-post or ex-ante evaluation). The adoption of technology also depends on its perceived characteristics (i.e., subjective preferences toward the technology) and on relevant past information, which provide a better idea of the speed and rate of adoption [14]. In Bhutan, adaptive research on agriculture started almost six decades back (1962) with the establishment of the Center for Agriculture Research and Development in the west-central region of Bhutan. Unlike in the past, there is now a fair amount of technology and information on citrus orchard management being generated by research and other development agencies, based on field problems and opportunities [15, 16]. However, when compared with other cash crops, farmers' practice in citrus orchard management lags behind [4], probably because not much attention is paid to the need for and appropriateness of technology and its subsequent transfer. On the other hand, due to poor linkages between research and extension, inappropriate extension approaches have resulted in low adoption of technology. The conventional technology transfer model is a one-way (top-down) approach in which growers remain simply passive recipients of the technology. The farming system research/extension (FRS/E) approach has proved advantageous, as the process involves growers/end users in the whole process of technology generation and transfer. FRS/E is described as an approach that generates technologies by studying existing farming systems and involving technology users. Farmers, especially small growers, are actively involved in the planning and evaluation process (http://www.fao.org). Citrus orchard management requires a sound understanding of physiology and of crop phenological stages, which differ with environmental conditions [17, 18]. Appropriate and timely implementation of management activities enhances plant physiological functions, with the final outcome of economic efficiency in terms of resource use. Citrus in Bhutan is grown from as low as 300 m above sea level to about 1800 m in diverse agro-ecological conditions, resulting in a huge variation in phenological stages even across a small area. Poor orchard management (especially of pests and diseases) is one of the greatest concerns in the Bhutanese citrus industry [19]. The Chinese fruit fly (Bactrocera minax Enderlein) alone causes fruit drop ranging from 35 to 70 %, followed by the shield bug (Rhynochocoris poseidon Kirkaldy) [19]. Other pests such as the trunk borer (Anoplophora versteegi) and the citrus leaf miner (Phyllocnistis citrella) are also a problem in some areas (Chukha and Dagana).
Currently, most orchards are believed to have declined due to citrus greening disease, officially known as huanglongbing (HLB) [20]. In Bhutan, citrus HLB disease was first reported in 2003 [21]. The disease is caused by Candidatus Liberibacter asiaticus, which is vectored by the Asian citrus psyllid (Diaphorina citri Kuwayama) [22]. Currently, HLB is presumed to be one of the major causes of citrus decline, especially in low-lying areas (<1000 masl) [23]. While HLB's role in declining orchards cannot be denied, poor orchard management has made it worse by reducing tree vigor and the productive bearing period. Therefore, the supply of high-health-status seedlings is a major focus for both policy makers and researchers. Appropriate nutrient management is crucial to optimize yield and production. Nutrition programming requires an in-depth understanding of plant physiology and phenology [24]. Sound fertilizer recommendation follows scientific studies on the form of fertilizers, the fertilization rate, the nutrient content, and the timing of the application and its placement [25]. Different methods of nutrient application (soil application, foliar application, fertigation) are practiced only in research fields. High-yield, better-quality fruits are obtained only through the correct application of appropriate fertilizers in the right form and at the right time [26]. Moreover, the fertilizer rate depends on soil type [26] and other climatic conditions [27]. Currently, integrated nutrient management [the use of farm yard manure (FYM) and fertilizers to optimize yield and sustain soil health] is recommended in many developing countries, including Bhutan [28, 29]. Therefore, this paper evaluates the effectiveness of the research and extension interventions in transferring technology, and assesses the gains in yield and household (HH) income, using a farming system extension approach at the community level. A total of 320 citrus growers from Drujegang geog (26°58′57″ N to 26°59′23″ N and 90°01′53″ E to 90°02′42″ E), comprising three Chiwogs (Pangna, Thangna and Pangserpo; see Footnote 1), were selected, as the area represents one of the major citrus-growing areas with minimal orchard management technology being adopted (Fig. 1). Most of the citrus orchards are located within the altitude range of 750–1200 masl. Fig. 1: Map of Bhutan showing the study site (Drujegang geog) and the three Chiwogs (Thangna, Pangna and Pangserpo) under Dagana district. The farming system research and extension approach was recommended during the annual regional review and planning workshop (2011) held at the Research and Development Center, Bajo. Approval was obtained from the Council for Renewable Natural Resource Research of Bhutan (CoRRB) under the Ministry of Agriculture and Forests. Informed consent was obtained from all individual participants, and a letter of undertaking was obtained from the Drujegang geog (sub-district) administration. An assessment of farmers' level of knowledge and of the yield was conducted initially through semi-structured interviews and focus group discussions. Key management components were identified and appropriate extension interventions were formulated. Growers were given hands-on training on key management components: canopy management, basin preparation, mulching, integrated nutrient management (FYM and fertilizer application), integrated pest management (plant protection chemical application, mechanical control, biological control, etc.), and irrigation and water management.
Fertilizer application was based on the soil analysis report published by the Soil Fertility Unit of the National Soil Service Center [26], while the fertilizer rate was based on its guide to fertilizer recommendation for citrus: 110–220 g Urea, 126–315 g SSP and 170–225 g MoP tree⁻¹ year⁻¹ for non-bearing trees, and 330–550 g Urea, 315–630 g SSP and 425–595 g MoP tree⁻¹ year⁻¹ for bearing trees [28]. Similarly, the use of a biopesticide (Azadirachtin 0.15 % w/w, at 0.15 ml L⁻¹ of water) and of chemical pesticides (Dimethoate 30 EC at 2 ml L⁻¹ of water and cypermethrin 10 EC at 0.5 ml L⁻¹ of water) was recommended as per the citrus production manual (Department of Agriculture, Bhutan). A total of 40 households were randomly sampled and interviewed using a semi-structured questionnaire. Each respondent represented a HH, and respondents were segregated into three typologies (small, medium, and large) based on the number of trees in their orchard. The characteristics of the respondents as obtained from the interview data are reported in Table 1. Table 1: Respondents (citrus growers) with their categories and general description. Each component of the orchard management practices was initially assigned an appropriate score. Data on the level of adoption before (2012) and after (2014) the intervention were collected for each component in the month of August. The corresponding operating cost for each component was determined, and the cost of production was calculated in ngultrum (Nu; see Footnote 2). Similarly, data on mean yield and annual income were also collected for three consecutive years (2012–2014) during the month of December. The adoption data before (2012) and after (2014) were compared and presented using descriptive statistics. The effect of the management training on technology adoption was calculated as an adoption quotient (AQ) as per the following formula and statistically analyzed using Student's t tests assuming equal variance: $${\text{AQ}} = \frac{\text{Sum of the adoption scores obtained}}{\text{Maximum possible adoption score}} \times 100.$$ The effect of technology adoption on mean yield (kg acre⁻¹) per HH was determined by a gap analysis approach [30]: $${\text{GY}} = \sum_{i = 1}^{n} \left( \frac{Y_{2i} - Y_{1i}}{n} \right),$$ where $Y_{2i}$ is the average yield with the new technology, $Y_{1i}$ is the yield under the farmers' practice on the $i$th farm, and $n$ is the number of farms. Similarly, the change in HH income (GI) accrued from the increased yield was assessed using the equation $${\text{GI}} = \sum_{i = 1}^{n} \left( \frac{I_{2i} - I_{1i}}{n} \right),$$ where $I_{2i}$ is the average income with the new technology, $I_{1i}$ is the income under the farmers' practice on the $i$th farm, and $n$ is the number of farms. The interview data were validated through independent field visits and random crop cuts (yield assessment). Similarly, differences in income before (2012) and after (2014) the interventions were tested for statistical significance using a repeated-measures t test (two-sample, assuming equal variances) in R [31]. The farming system extension approach at the community level, as a research and extension intervention on citrus orchard management, is the first of its kind adopted in Drujegang. After the intervention, the adoption of improved management practices increased from 4.54 % (in 2012) to 16.56 % (2014). The total adoption score increased from 296 (in 2012) to 737 (in 2014), and the rate of adoption increased almost two-and-a-half-fold (2.49), amounting to a rise of more than 12 percentage points over the base year (2012).
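To make the quantities defined above concrete, the following short Python sketch mirrors the adoption quotient, the gap-analysis means (GY, GI) and the two-sample t test described in the methods. It is illustrative only: the per-household numbers are invented, the maximum possible adoption score is assumed, and the function names and the use of numpy/scipy are assumptions of this sketch rather than the study's actual analysis code.

# Illustrative sketch only: hypothetical data, not the study's records.
import numpy as np
from scipy import stats

def adoption_quotient(score_sum, max_score):
    # AQ = (sum of adoption scores obtained / maximum possible adoption score) * 100
    return 100.0 * score_sum / max_score

def mean_gap(after, before):
    # Gap analysis: mean per-farm difference between the new technology (after)
    # and the farmers' practice (before); used for GY (yield) and GI (income).
    return float(np.mean(np.asarray(after) - np.asarray(before)))

# Hypothetical per-household yields (kg per acre) before (2012) and after (2014)
yield_2012 = np.array([650.0, 700.0, 820.0, 540.0, 760.0])
yield_2014 = np.array([880.0, 905.0, 1010.0, 700.0, 930.0])

# 737 is the 2014 total score quoted in the text; the maximum possible score is a guess.
print("AQ 2014 (%):", adoption_quotient(score_sum=737.0, max_score=4450.0))
print("GY (kg/acre):", mean_gap(yield_2014, yield_2012))

# Two-sample t test assuming equal variances, as in the methods
t_stat, p_val = stats.ttest_ind(yield_2014, yield_2012, equal_var=True)
print("t = %.2f, p = %.3f" % (t_stat, p_val))

Run on the real per-household records, such a script would reproduce the AQ, GY, GI and significance figures reported in the following paragraphs.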
This increased rate of adoption has increased the yield and HH income. The adoption rates for 2012 and 2014 differed significantly, with t(14) = 1.7, p = 0.04. The difference in AQ before (2012) and after (2014) the research-extension intervention for the different management components is shown in Table 2. Table 2: Comparison of the adoption quotient (AQ) before (2012) and after (2014) the research-extension intervention for different management components. The different management components received varying levels of adoption among the groups. Out of the seven improved management practices imparted to the groups, the majority of the growers adopted basin making (total score = 294), and its rate of adoption increased by 38 %, followed by FYM application (17 %) and fertilizer application (12 %). On the other hand, the application of plant protection chemicals received very little attention, with only a 3 % increase, as did tree mulching (3.07 %) and irrigation (7.7 %). The mean yield increased by almost 27.5 %, amounting to almost 212 kg acre⁻¹. The highest increase in yield (429.51 kg acre⁻¹) was observed for the medium grower category, while the large and small grower categories realized similar increases in yield of 100.79 and 104.9 kg acre⁻¹, respectively. Similarly, the mean HH income more than doubled, from Nu. 110.7 thousand in 2012 to Nu. 237.15 thousand in 2014. The increase in HH income showed a statistically significant difference (t(73) = 1.66, p = 0.03). Almost all the growers, irrespective of their categories, saw their HH income increase by a little over 114 % after the research and extension interventions (Table 3). Mean HH production also increased as a result of the increased technology adoption (Fig. 2). Table 3: Effect of technology adoption on yield and household income. Fig. 2: Production in three consecutive years by grower category. Agricultural research and extension interventions can be crucial in increasing yield and production of mandarin in Bhutan. Training of citrus growers had a positive impact on the adoption of improved orchard management practices, which in turn helped to increase yield and HH income. Different management components received different levels of attention and subsequent adoption. In addition, other factors such as credit facilities and established irrigation infrastructure were found to be precursors that help increase the adoption of technology [32]. Basin preparation is one of the most laborious and daunting tasks among the seven management components identified and implemented. Basin preparation had the highest adoption rate after the training program. This was evident from our independent random field visits. However, the quality of the basins prepared differed from orchard to orchard. This was clearly due to differences in farm gradient. On slopes, the most common basins were prepared by raising stone walls on which the soil was leveled. This basin preparation has not only loosened the surface soils but also provided a better platform for fertilizer application and irrigation. The hands-on training provided on key orchard management components might have had a positive impact on the growers' knowledge base, while community mobilization helped resolve the issue of labor constraints. FYM application is one of the traditional management practices used in other crops from very early days. However, only about 17 % of the growers applied FYM in citrus orchards. Tethering of cattle to the trees is widely practiced to supplement nutrients such as FYM [4].
The incidence of tethering cattle is said to have declined, while the application of FYM has increased. This is because tethering cattle around the trees was found to damage the basins and compact the soil. Mulching is another management component adopted by the growers. Prior to the training, mulching was simply the default placement of plant debris during weeding. Proper mulching began only after our intervention. Considering the limited irrigation available in citrus orchards in Bhutan, mulching is perceived as one of the important components to be implemented to conserve soil moisture. Timely removal of the mulch (early May), before the onset of the monsoon, was found to be crucial in preventing pests such as the trunk borer. Canopy management is quite new to citrus management in Bhutan, although it is being promoted sporadically in other districts (Tsirang and Sarpang). The baseline information collected showed that not a single respondent managed the canopy. After the program, 3.6 % of the growers implemented canopy management practices. Further, canopy management practices are complicated by the alternate-bearing nature of the crop, as orchards consist of heterogeneous populations of trees expressing irregular bearing habits, which adversely affects the crop yield. In addition, not many growers are confident with this technology, as it initially requires identification of the individual trees expressing an alternate-bearing habit. Other components, such as spraying of plant protection chemicals and application of fertilizers, have been on the rise following the training in 2012, although chemical application is affected by religious sentiments. Only a few sections of the growers took up the use of chemicals. The application of chemical fertilizers and fungicides faced resistance, as farmers opined that it kills insects besides deteriorating soil health. Irrigation is an issue for many citrus growers. Adoption would be difficult in the absence of an irrigation water source and infrastructure. Moreover, most of the orchards are not only located on slopes of varying gradient but also far from water sources. Water stress in citrus reduces the yield considerably (30 %) [33]. Under Drujegang conditions, the driest months of the year (January, February and March) coincide with the flowering stage, while a delayed onset of the monsoon (dry period, April through May) and rising temperatures (28–36 °C) coincide with the fruit cell division stage. Physiological fruit drop (June drop) is severe in citrus orchards in the Drujegang area, especially when untimely rainfall occurs. The growers were given training on the importance of water in tree physiological processes and its critical requirement during certain phenological stages. Many small- and medium-category growers use drinking water for hand-irrigating their orchards during the critical stages (flower initiation, fruit cell division, and cell development), as a water deficit adversely affects yield and quality to a great extent. Mulching is another technology that farmers adopted to retain soil moisture in situ. The majority of large growers also used hand watering due to the lack of proper irrigation infrastructure. The training conducted on orchard management had a positive impact on the overall rate of adoption, increasing yield and HH income, although the rate of adoption was influenced by many independent factors (such as the nature of the technology, beliefs, and the availability of infrastructure). The average annual HH income increased almost two-fold (from Nu. 82,641 in 2012 to Nu. 164,308 in 2014). In 2013, the average HH income increased by only 12 %.
This is because of canopy management and the alternate-bearing habit of the mandarin, which usually reduce the yield and production in the corresponding year. Further, that year also coincided with a lean bearing year. Alternate bearing is one of the constraints with the local and many other commercial citrus varieties [34]. Studies have shown that management operations (canopy, fertilizer, and irrigation) can address this problem to a great extent, although the presence of fruit alters gene expression (floral promoters) that affects flushing and flowering [35]. Still, there is an opportunity for Bhutanese citrus growers to stabilize yield through better management practices. Although the categories of growers were based on the number of trees, some growers in the small category received a higher income than the medium group because they had a greater number of bearing trees. The medium-category growers who received a lower income had younger trees (10–15 years old) with low yield, or trees that had only just begun to fruit. A similar observation was made among large-category growers, some of whom received a lower income than medium-category growers; the replacement of declined trees with un-grafted, poor-quality seedlings with a long juvenile period is one of the main reasons. A few individuals in the medium and large categories who received a lower income had their orchards in decline. Most of the orchards in severe decline that were below altitudes of 1000 masl are suspected of HLB infection, as trees at Pangserpo showed characteristic HLB symptoms, while the presence of the Asian citrus psyllid (Diaphorina citri) was reported in the adjoining district of Sarpang. Nevertheless, almost all the declining orchards were poorly managed, being heavily infested with trunk borer and parasitic weeds (Loranthus sp.). Hands-on training on orchard management at Drujegang had a positive impact on the adoption of management practices, yield, and HH income. The majority of growers, irrespective of their categories, received higher yield and income after they started orchard management practices. One of the constraints in citrus orchard management was the lack of know-how among growers, besides the shortage of farm labor. Lack of irrigation and erratic rainfall affect yield and production. Religious sentiments also limit the spraying of plant protection chemicals and fertilizer application, except for a small section of the community. Although the increase in adoption of management practices and the impact on HH income cannot be denied, constant monitoring and follow-up by research and extension personnel may be necessary for a few more years. There is a huge potential to increase yield and production in Bhutan by improving a few components of orchard management. Therefore, replication of a similar community-level, participatory approach of research-extension intervention (the farming system extension approach) may be beneficial to take the technology on the shelf to farmers' fields and to enhance the livelihood of rural people. Footnote 1: Administrative unit under a geog; usually comprises a few villages. Footnote 2: Bhutanese currency, roughly equivalent to 0.016 US dollars. Abbreviations: AQ: adoption quotient; FYM: farm yard manure; GI: gap in income; GY: gap in yield; HLB: huanglongbing; I1: initial income (before intervention); I2: final income (after intervention); masl: meters above sea level; MoP: muriate of potash; Nu: ngultrum; SSP: single superphosphate; Y1: initial yield (before intervention); Y2: final yield (after intervention). References: 1. Joshi SR, Gurung BR. Citrus in Bhutan: value chain analysis. 2009. 2. National Statistical Bureau. Statistical Yearbook of Bhutan 2013. 2014.
http://www.nsb.gov.bt/publication/files/pub9ot4338yv.pdf. (Accessed 20 Feb 2015.) 3. Drukpa K. Bhutan RNR statistics 2012. Thimphu: Ministry of Agriculture and Forests, RGoB; 2013. 4. Connellan J, Hardy S, Harris A, Wangdi P, Dorjee D, Spohr L, Sanderson G. Results of citrus farmers survey in Bhutan. Thimphu: Department of Agriculture, Ministry of Agriculture; 2008. 5. Dorjee D, Bockel L, Punjabi P, Chheteri GB. Commodity chain analysis: citrus. Thimphu: Royal Government of Bhutan; 2007. 6. Baumgart-Getz A, Prokopy LS, Floress K. Why farmers adopt best management practice in the United States: a meta-analysis of the adoption literature. J Environ Manag. 2012;96:17–25. 7. Bonabana-Wabbi J. Assessing factors affecting adoption of agricultural technologies: the case of integrated pest management (IPM) in Kumi district, Eastern Uganda. Virginia Polytechnic Institute and State University; 2002. 8. Kinyangi AA. Factors influencing the adoption of agricultural technology among smallholder farmers in Kakamega north sub-county, Kenya. University of Nairobi; 2014. 9. Maertens A, Barrett CB. Measuring social networks' effects on agricultural technology adoption. Am J Agric Econ. 95:353–359. 10. Diiro G. Impact of off-farm income on agricultural technology adoption intensity and productivity. International Food Policy Research Institute; 2009. 11. Doss CR, Doss C. Analyzing technology adoption, challenges and limitations of micro studies. Agric Econ. 2006;34:207–19. 12. Doss CR. Understanding farm-level technology adoption: lessons learned from CIMMYT's micro surveys in eastern Africa. CIMMYT; 2003. 13. Mazid A, Amegbeto K, Shideed K, Malhotra R. Impact of crop improvement and management: winter-sown chickpea in Syria. Aleppo: ICARDA; 2009. 14. Batz FJ, Janssen W, Peters KJ. Predicting technology adoption to improve research priority setting. Agric Econ. 2003;28:151–64. 15. Ghimirey M, Dorji K, Tshewang S, Gyamtsho T, Yeshey, Wangchen T, Phub S. Annual Report 2011–2012. Bajo, Wangduephodrang: Renewable Natural Resource Research and Development Center; 2013. 16. Citrus production manual. Thimphu: Horticulture Division, Ministry of Agriculture and Forests; 2008. 17. Garcia-Tejero I, Romero-Vicente R, Jimenez-Bocanegra J, Martinez-Garcia G, Duran-Zuazo V, Muriel-Fernandez J. Response of citrus trees to deficit irrigation during different phenological periods in relation to yield, fruit quality, and water productivity. Agric Water Manag. 2010;97(5):689–99. 18. Sanz-Cortes F, Martinez-Calvo J, Badenes M, Bleiholder H, Hack H, Llacer G, Meier U. Phenological growth stages of olive trees (Olea europaea). Ann Appl Biol. 2002;140(2):151–7. 19. Schoubroeck FV. Learning to fight a fly: developing citrus IPM in Bhutan. Wageningen Universiteit; 1999. 20. Payout JA. Control of citrus huanglongbing (ex-greening) and citrus tristeza virus. Report to the Government of Bhutan of the entomology mission (Sept. 14–27, 2007) in Bhutan. 2007. ftp://fao.org/docrep/fao/010/ah928e/ah928e00.pdf. Accessed Dec 2012. 21. Doe D, Om N, Dorji C, Dorji T, Garnier M, Jagoueix-Eveillard S, Bové J. First report of 'Candidatus Liberibacter asiaticus', the agent of citrus huanglongbing (ex-greening), in Bhutan. Plant Dis. 2003;87:448. 22. Ahlawat YS, Baranwal VK, Thinlay DD, Majumder S. First report of citrus greening disease and associated bacterium Candidatus Liberibacter asiaticus from Bhutan. Plant Dis. 2003;87:448. 23. Tshetrim T, Chhetri R. Factors contributing to citrus mandarin yield decline in Dewathang Geog under SamdrupJongkhar Dzongkhag. Bhutan J Nat Resour Dev. 2014;1:18–23. 24. Hochmuth G, Hanlon E.
Principles of sound fertilizer recommendations. In: IFAS Extension. Florida: University of Florida; 2009. 25. Morgan K, Hanlon E. Improving citrus nitrogen uptake efficiency: understanding citrus nitrogen requirements. Florida, United States: The Institute of Food and Agricultural Sciences (IFAS), University of Florida; 2006. 26. Obreza TA, Alva AK, Calvert DV. Citrus fertilizer management on calcareous soils. Cooperative Extension Service. Florida: University of Florida, Institute of Food and Agricultural Sciences; 1993. 27. Hammami A, Ben Mimoun M, Rezgui S, Hellali R. A new nitrogen and potassium fertilization management program for clementine mandarin under Mediterranean climate. In: Proceedings of the International Plant Nutrition Colloquium XVI; 2009. UC Davis; 2009. 28. National Soil Service Center. A guide to fertilizer recommendation for major crops. Semtokha, Thimphu: Ministry of Agriculture & Forests; 2012. 29. National Soil Service Center. Citrus soil report of Dagana Dzongkhag. Semtokha: Soil Fertility Unit, NSSC, Department of Agriculture; 2012. 30. Van Ittersum M, Cassman K. Yield gap analysis: rationale, methods and applications. Introduction to the Special Issue. Field Crops Res. 2013;143:1–3. 31. R Core Team. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2014. 32. De Souza G, Cyphers D, Phipps T. Factors affecting the adoption of sustainable agricultural practices. Agr Res Econ Rev. 1993;22:159–65. 33. Garcia-Tejero I, Durain-Zuazo VH, Arriaga-Sevilla J, Muriel-Fernandez JL. Impact of water stress on citrus yield. Agr Sust Dev. 2012;32:651–9. 34. Mazhar MS, Anwar R, Maqbool M. A review of alternate bearing in citrus. In: Proceedings of the International Symposium on Prospects of the Horticultural Industry in Pakistan; 2007. 35. Muñoz-Fambuena N, Mesejo C, González-Mas MC, Primo-Millo E, Agustí M, Iglesias DJ. Fruit load modulates flowering-related gene expression in buds of alternate-bearing 'Moncada' mandarin. Ann Bot. 2012;110:1109–18. Author contributions: KD conceptualized, analyzed, and interpreted the data and drafted the manuscript. SC, SDD, and BBT were involved in conducting the field work, training of growers, data collection, supervision, field verification and constant monitoring. LL revised the manuscript critically for intellectual content. All the authors agree to be accountable for the accuracy and content of the work. All the authors read and approved the final manuscript. Acknowledgements: We thank Gyambo Tshering (former Program Director), Ngawang (Program Director, Research Center, Bajo), Yeshey (Research Officer), and Yeshi Zangpo (Research Assistant) for their support. We are grateful to the Australian Centre for International Agricultural Research (ACIAR) project officials (Graeme Sanderson, Nerida Donovan and Michael Treeby) for their invaluable inputs and comments during implementation. The authors declare that they have no competing interests (financial or non-financial). This study was supported by the Royal Government of Bhutan and partly by the ACIAR project "Adapting integrated crop management technologies to commercial citrus enterprises in Bhutan and Australia" (HORT/2010/089).
Author affiliations: Research and Development Sub-Center (RDSC), Tsirang, Bhutan (Kinley Dorji & Birkha Tamang); National Citrus Program, Horticulture Division, Thimphu, Bhutan (Lakey Lakey); Research and Development Center, Bajo, Wangduephodrang, Bhutan (Sonam Chophel); Renewable Natural Resource Extension Center (Agriculture), Drujegang, Bhutan (Sonam Dechen Dorji). Correspondence to Kinley Dorji. Cite this article: Dorji, K., Lakey, L., Chophel, S. et al. Adoption of improved citrus orchard management practices: a micro study from Drujegang growers, Dagana, Bhutan. Agric & Food Secur 5, 3 (2016). DOI: https://doi.org/10.1186/s40066-016-0050-z. Keywords: Orchard management; Drujegang.
Why is the harmonic oscillator so important?

I've been wondering what makes the harmonic oscillator such an important model. What I came up with: It is a (relatively) simple system, making it a perfect example for physics students to learn principles of classical and quantum mechanics. The harmonic oscillator potential can be used as a model to approximate many physical phenomena quite well. The first point is sort of meaningless though; I think the real reason is my second point. I'm looking for some materials to read about the different applications of the HO in different areas of physics. soft-question harmonic-oscillator Spine Feast

The second point is really important. Almost any system near equilibrium is at least approximately harmonic because you can expand the potential energy in a Taylor series and the linear term is zero by construction. This applies to everything from atoms in a crystal to quantum fields. – DanielSank Jan 12 '15 at 17:20 | The first reason is not meaningless. It provides a starting point for modeling more complex dynamic systems. For example the harmonic oscillator assumes linear damping, but Duffing extended the simple linear oscillator to one where the damping is nonlinear. And this extends the coverage of the modeling to other physical systems not so well modeled by the linear oscillator. You need to walk before you can run. – docscience Jan 12 '15 at 17:23 | Remarkably, this question does not appear to have been asked yet (correct me if I'm wrong!). If so, this has the potential to become a really great, canonical question for this site; I'm looking forward to reading some good answers. – Mark Mitchison Jan 12 '15 at 17:41 | @DepeHb for the study of the quantum harmonic oscillator a formalism is developed (you will probably learn it later), with raising and lowering operators. This formalism will accompany you in whatever theory uses second quantization, i.e. in which the number of particles of a certain type is not constant. – Sofia Jan 12 '15 at 22:07 | "The career of a young theoretical physicist consists of treating the harmonic oscillator in ever-increasing levels of abstraction." Sidney Coleman. – Davidmh Jan 13 '15 at 19:18

To begin, note that there is more than one incarnation of "the" harmonic oscillator in physics, so before investigating its significance, it's probably beneficial to clarify what it is. What is the harmonic oscillator? There are at least two fundamental incarnations of "the" harmonic oscillator in physics: the classical harmonic oscillator and the quantum harmonic oscillator. Each of these is a mathematical thing that can be used to model part or all of certain physical systems in either an exact or approximate sense depending on the context. The classical version is encapsulated in the following ordinary differential equation (ODE) for an unknown real-valued function $f$ of a real variable: \begin{align} f'' = -\omega^2 f \end{align} where primes here denote derivatives, and $\omega$ is a real number. The quantum version is encapsulated by the following commutation relation between an operator $a$ on a Hilbert space and its adjoint $a^\dagger$: \begin{align} [a, a^\dagger] = I.
\end{align} It may not be obvious that these have anything to do with one another at this point, but they do, and instead of spoiling your fun, I invite you to investigate further if you are unfamiliar with the quantum harmonic oscillator. Often, as mentioned in the comments, $a$ and $a^\dagger$ are called ladder operators for reasons which we don't address here. Every incarnation of harmonic oscillation that I can think of in physics boils down to understanding how one of these two mathematical things is relevant to a particular physical system, whether in an exact or approximate sense. Why are these mathematical models important? In short, the significance of both the classical and quantum harmonic oscillator comes from their ubiquity -- they are absolutely everywhere in physics. We could spend an enormous amount of time trying to understand why this is so, but I think it's more productive to just see the pervasiveness of these models with some examples. I'd like to remark that although it's certainly true that the harmonic oscillator is a simple and elegant model, I think that answering your question by saying that it's important because of this fact is kind of begging the question. Simplicity is not a sufficient condition for usefulness, but in this case, we're fortunate that the universe seems to really "like" this system. Where do we find the classical harmonic oscillator? (this is by no means an exhaustive list, and suggestions for additions are more than welcome!) Mass on a Hooke's Law spring (the classic!). In this case, the classical harmonic oscillator equation describes the exact equation of motion of the system. Many (but not all) classical situations in which a particle is moving near a local minimum of a potential (as rob writes in his answer). In these cases, the classical harmonic oscillator equation describes the approximate dynamics of the system provided its motion doesn't appreciably deviate from the local minimum of the potential. Classical systems of coupled oscillators. In this case, if the couplings are linear (like when a bunch of masses are connected by Hooke's Law springs) one can use linear algebra magic (eigenvalues and eigenvectors) to determine normal modes of the system, each of which acts like a single classical harmonic oscillator. These normal modes can then be used to solve the general dynamics of the system. If the couplings are non-linear, then the harmonic oscillator becomes an approximation for small deviations from equilibrium. Fourier analysis and PDEs. Recall that Fourier series, which represent either periodic functions on the entire real line or functions on a finite interval, and Fourier transforms are constructed using sines and cosines, and the set $\{\sin, \cos\}$ forms a basis for the solution space of the classical harmonic oscillator equation. In this sense, any time you are using Fourier analysis for signal processing or to solve a PDE, you are just using the classical harmonic oscillator on massively powerful steroids. Classical electrodynamics. This actually falls under the last point, since electromagnetic waves come from solving Maxwell's equations, which in certain cases yields the wave equation, which can be solved using Fourier analysis. Where do we find the quantum harmonic oscillator? Take any of the physical systems above, consider a quantum mechanical version of that system, and the resulting system will be governed by the quantum harmonic oscillator.
For example, imagine a small system in which a particle is trapped in a quadratic potential. If the system is sufficiently small, then quantum effects will dominate, and the quantum harmonic oscillator will be needed to accurately describe its dynamics. Lattice vibrations and phonons (an example of what I assert in point 1 when applied to large systems of coupled oscillators). Quantum fields. This is perhaps the most fundamental and important item on either of these two lists. It turns out that the most fundamental physical model we currently have, namely the Standard Model of particle physics, is ultimately based on quantizing classical fields (like electromagnetic fields) and realizing that particles basically just emerge from excitations of these fields, and these excitations are mathematically modeled as an infinite system of coupled, quantum harmonic oscillators. – joshphysics

The harmonic oscillator is important because it's an approximate solution to nearly every system with a minimum of potential energy. The reasoning comes from Taylor expansion. Consider a system with potential energy given by $U(x)$. You can approximate $U$ at $x=x_0$ by $$ U(x) = U(x_0) + (x-x_0) \left.\frac{dU}{dx}\right|_{x_0} + \frac{(x-x_0)^2}{2!} \left.\frac{d^2U}{dx^2}\right|_{x_0} + \cdots $$ The system will tend to settle into the configuration where $U(x)$ has a minimum, but, by definition, that's where the first derivative $dU/dx$ vanishes. Also, a constant offset to a potential energy usually does not affect the physics. That leaves us with $$ U(x) = \frac{(x-x_0)^2}{2!} \left.\frac{d^2U}{dx^2}\right|_{x_0} + \mathcal O\big((x-x_0)^3\big) \approx \frac12 k (x-x_0)^2 $$ which is the harmonic oscillator potential for small oscillations around $x_0$. – rob♦

I wonder whether there is any relevant example of an oscillation around x₀ with d²U/dx²=0 at x₀. – Walter Tross Jan 13 '15 at 23:11 | I remember talking about a quartic oscillator in some class, but I don't remember if there was a physical context or if it was just a made-up potential. That would make a good question. – rob♦ Jan 14 '15 at 5:05 | The quartic oscillator is not the most important counterexample to "everything can be approximated harmonically". More relevantly, if a particle's ground state in some quantum well is unsharp over a range where the higher-order expansion terms of the potential can't be neglected, then the harmonic oscillator is utterly useless for describing the system. As an extreme case, the hydrogen atom is totally not harmonic; you can't even Taylor-expand the potential at all! – leftaroundabout Jan 14 '15 at 11:12 | @leftaroundabout but can you use other transforms to deal with it, like Fourier? – Ooker Sep 8 '17 at 11:37 | @Ooker sure, but those only converge in an $L^2$ sense, not in a locally-pointwise sense. – leftaroundabout Sep 8 '17 at 12:26

The harmonic oscillator is common. It appears in many everyday examples: pendulums, springs, electronics (such as the RLC circuit), standing waves on a string, etc. It's trivial to set up demonstrations of these phenomena, and we see them constantly. The harmonic oscillator is intuitive. We can picture the forces on systems such as a pendulum or a plucked string. This makes it simple to study in the classroom.
In contrast, there are many "everyday" examples that are not intuitive, such as the infamous Bernoulli effect lifting a disk by blowing air downwards. These paradoxes are great puzzles, but they would confuse (most) beginner students. The harmonic oscillator is mathematically simple. Math is part of physics. In studying simple harmonic motion, students can immediately use the formulas that describe its motion. These formulas are understandable: for example, the equation for frequency shows the intuitive result that increasing spring stiffness increases frequency. At a more advanced level, students can derive the equations from first principles. The ability to solve a real-life problem so easily is a clear demonstration of how physics uses math. Engineering benefits greatly as well. Many systems, even very complex ones, are linear. Complicated linear systems act as multiple harmonic oscillators. For example, a pinned string naturally vibrates at frequencies that are multiples of its fundamental. Any motion of the string can be represented as a sum of each component vibration, with each component independent of the other components. This superposition allows us to model things like plucking the string. Circular plates, guitar chambers, skyscrapers, radio antennas, and even molecules are more complex. However, superposition and other tools from linear systems theory still allow us to take massive shortcuts on the computation and trust the results. These computation methods also make good teaching tools for topics in linear algebra and differential equations. Because the harmonic oscillator is a familiar system that is so tightly connected with fundamental topics in math, science, and engineering, it is one of the most widely studied and understood systems. – Kevin Kostlan

The other answers already cover many of the most important aspects. One interesting application is in finding out how the form of the harmonic oscillator is connected to the Gaussian (normal) distribution, another often used mathematical construct. I might've suggested this for joshphysics' list, but as it requires some details to appreciate, I decided to make it a standalone answer (but it really is more of a drawn out comment). Take $N$ independent random variables $X_i$, each with variance $\sigma^2$ and, for simplicity, mean $0$. Now the characteristic function for an arbitrary probability distribution $P_X$ is $G_X(k) = \langle e^{ikX} \rangle = \int e^{ikx}P_X(x) \mathrm{d}x$. Writing out the exponential in a Taylor series (where we cut off all the terms past the quadratic) $e^{ikx} \approx 1 + ixk - \frac{1}{2}x^2k^2$, we have $G_X(k) \approx 1 - \frac{1}{2}\sigma^2k^2$. Now define a new random variable $Z = \frac{\sum_{i=1}^N X_i}{\sqrt{N}}$, so $G_Z(k) = \left(G_X\left(\frac{k}{\sqrt{N}}\right)\right)^N \approx \left(1 - \frac{\sigma^2k^2}{2N}\right)^N$ and as $N\to\infty$ (all the higher order terms in the sum drop) we have, by definition, $G_Z(k) = e^{-\frac{1}{2}\sigma^2k^2}$, which then gives the Gaussian distribution $$P_Z(z) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{z^2}{2\sigma^2}}$$ This is a simplistic derivation of the central limit theorem, which is of huge importance in several areas of science and probably among the most fundamental results of statistics. Note that in the derivation all the higher order terms dropped out (as $N\to\infty$), and the only remaining one was the quadratic, harmonic, term.
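As a quick numerical check of the limit derived above (a sketch only: numpy is assumed, and the choice of a uniform distribution and of the sample sizes is arbitrary):

# Illustrate Z = sum(X_i)/sqrt(N) approaching a Gaussian of variance sigma^2 as N grows.
import numpy as np

rng = np.random.default_rng(0)
N, trials = 500, 20_000

# X_i uniform on [-1, 1]: mean 0, variance sigma^2 = 1/3
x = rng.uniform(-1.0, 1.0, size=(trials, N))
z = x.sum(axis=1) / np.sqrt(N)

print("sample variance of Z:", z.var())  # should be close to sigma^2 = 1/3
excess_kurtosis = ((z - z.mean()) ** 4).mean() / z.var() ** 2 - 3.0
print("sample excess kurtosis:", excess_kurtosis)  # tends to 0 for a Gaussian

The vanishing excess kurtosis is one simple way to see the higher-order (anharmonic) terms dropping out, exactly as in the characteristic-function argument above.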
This happens regularly in applications across different domains, yet I can't really name a fundamental reason why it should be so. – alarge

I think Rob's answer is pretty much inclusive and true. I just want to add something. If you expand the potential in a Taylor series, the first-derivative term, which corresponds to the tangent of the curve at the point $x_0$, vanishes because $x_0$ is the minimum point of the curve. So we have a potential of the form $\frac{1}{2}k(x-x_0)^2$, where we have shifted the origin of $x$ to the location of $x_0$. So we approximate the curve of the potential by a parabola. That makes the harmonic oscillator important for physics. – aQuestion

Sorry, I do not agree with you; I edited my answer. Approximating any curve, like the potential energy, makes our life easier for calculating anything around the shifted point. – aQuestion Jan 13 '15 at 15:17
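A short numerical illustration of the small-oscillation point made throughout this thread: integrate the exact pendulum equation and compare it with the harmonic approximation obtained from the Taylor expansion of the potential. This is only a sketch; the parameters and the use of numpy/scipy are assumptions of this illustration, not anything taken from the answers above.

# Compare the exact pendulum, theta'' = -(g/L) sin(theta), with the linearized
# (harmonic) equation theta'' = -(g/L) theta for a small initial amplitude.
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0
omega = np.sqrt(g / L)
theta0 = 0.1  # small initial angle in radians, released from rest

def pendulum(t, y):
    theta, theta_dot = y
    return [theta_dot, -(g / L) * np.sin(theta)]

t_eval = np.linspace(0.0, 10.0, 1000)
sol = solve_ivp(pendulum, (0.0, 10.0), [theta0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

harmonic = theta0 * np.cos(omega * t_eval)  # analytic solution of the linearized equation
print("max |exact - harmonic| over 10 s:", np.max(np.abs(sol.y[0] - harmonic)))

For an amplitude of 0.1 rad the two solutions stay close over many periods; increasing theta0 makes the anharmonic corrections visible, which is precisely the regime where the quadratic truncation of the potential stops being a good description.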
VCI2019 - The 15th Vienna Conference on Instrumentation (Europe/Vienna timezone). 932. Opening Manfred Krammer (Chairman OC VCI) 933. Welcome Georg Brasseur (OeAW - President of the Division of Mathematics and Natural Sciences) Jochen Schieck (TU and HEPHY - Director of HEPHY) Danas Ridikas (IAEA - Head of Physics Section) 929. Information from the Organizers Thomas Bergauer (Austrian Academy of Sciences (AT)) 918. Searches for Gravitational Waves by LIGO and Virgo: Recent Results and Future Plans David Reitze Invited Talk Plenary 1 The first discoveries by LIGO and Virgo have established gravitational wave detectors as a powerful new tool for probing the highest energy astrophysical events in the universe. In this talk, I'll give an overview of the detectors and present the most recent results on the searches for binary black hole/binary neutron star mergers as well as searches for other classes of gravitational wave... 919. From particle physics technologies to society Benjamin Frisch (CERN) Particle physics has revolutionized our understanding of the Universe, and it is the epitome of basic research: seeking answers to fundamental questions. In its pursuit of knowledge, particle physics has also played a role in developing innovative technologies: frontier instruments like the Tevatron at Fermilab or the Large Hadron Collider (LHC) at CERN, and their detectors, require frontier... 678. New ALICE detectors for Run 3 and 4 at the CERN LHC Wladyslaw Henryk Trzaska (University of Jyvaskyla (FI)) Semiconductor Detectors During Run 3 and 4 ALICE (A Large Ion Collider Experiment) will gain two orders of magnitude in statistics over the combined data collected during Run 1 and Run 2 at the LHC. ALICE will also conduct high-precision measurements of rare probes over a broad range of transverse momenta, with particular focus on low signal-to-background probes at low pT values. To achieve that goal a sustained... 527. Pixel-detector R&D for CLIC Dominik Dannheim (CERN) The physics aims at the proposed future CLIC high-energy linear e+e- collider pose challenging demands on the performance of the detector system. In particular the vertex and tracking detectors have to combine precision measurements with robustness against the expected high rates of beam-induced backgrounds. A spatial resolution of a few microns and a material budget down to 0.2% of a... 670. Development of the CMS MIP Timing Detector Marco Toliman Lucchini (Princeton University (US)) The Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC) is undergoing an extensive Phase II upgrade program to prepare for the challenging conditions of the High-Luminosity LHC (HL-LHC). In particular, a new timing layer will measure minimum ionizing particles (MIPs) with a time resolution of ~30ps and hermetic coverage up to a pseudo-rapidity of |η|=3. This MIP Timing... 800. The LHCb Upgrade Programme and the VELO Paula Collins (CERN) The LHCb Upgrade I, currently under construction and scheduled to start data taking in Run 3, will transform the experiment to a triggerless system reading out the full detector at 40 MHz event rate. The increased luminosity and trigger efficiency anticipated at the upgrade will allow a huge increase in precision, in many cases to the theoretical limit, and the ability to perform studies... 920.
The Silicon Photomultiplier: Status and Perspectives Gianmaria Collazuol (University of Padova and INFN Padova) The Silicon Photomultiplier (SiPM) is a solid-state device capable of sensing, timing and quantifying with high accuracy light signals down to the single-photon level. Featuring large internal gain with negligible fluctuations, high intrinsic timing resolution, low-voltage operation, insensitivity to magnetic fields, high degree of radio-purity, mechanical robustness and excellent... 734. Experimental advances of photon detection time resolution limits in SiPMs and scintillator based detectors Stefan Gundacker (CERN) Scintillator based radiation detectors read out by SiPMs successively break records in their reached time resolution. Nevertheless, new challenges in time of flight positron emission tomography (TOF-PET) and high energy physics are setting unmatched goals in the 10ps range. Recently we have shown that high frequency (HF) readout of SiPMs significantly improves the measured single photon time... 607. Full System of Positron Timing Counter Having Time Resolution under 40 psec with Fast Plastic Scintillator Readout by SiPMs Dr Miki Nishimura (KEK) A positron timing counter (TC), required by the MEG II experiment to reach a 30-40 ps time resolution for ~50 MeV/c positrons, has been developed. We employed a highly segmented design with 512 scintillator plates ($120 \times 40 \times 5$ mm$^3$ and $120 \times 50 \times 5$ mm$^3$) with a 6-SiPM array attached at both ends. Pile-up is reduced by the segmented design, and multi-counter measurement improves the... 930. Art and History of Vienna Markus Friedl (Austrian Academy of Sciences (AT)) The city of Vienna was essentially founded by the ancient Romans. In the late middle ages, it became the capital of the Habsburg Empire, and consequently grew in size and importance. Even though there are some Roman excavations, most of the architectural heritage originates from the monarchy. In particular, the turn of the 19th to 20th centuries was undoubtedly a peak in many aspects of arts... 922. Quantum Sensors in High-Energy Physics Juan Estrada (Fermilab) I will discuss recent efforts in applying quantum information science (QIS) technology to High Energy Physics experiments, in particular efforts using quantum sensors in the search for low mass dark matter and axion-like particles. I will also discuss the possible applications in QIS for technologies developed for HEP experiments. 784. The CMS Outer Tracker for the High Luminosity LHC Erik Butz (KIT - Karlsruhe Institute of Technology (DE)) The era of the High Luminosity Large Hadron Collider will pose unprecedented challenges for detector design and operation. The planned luminosity of the upgraded machine is $5-7.5\times10^{34} \mathrm{cm}^{-2}\mathrm{s}^{-1}$, reaching an integrated luminosity of 3000-4000 fb$^{-1}$ by the end of 2039. The CMS Tracker detector will have to be replaced in order to fully exploit the delivered luminosity... 621. Module and System test Development for the Phase-2 ATLAS ITk Pixel Upgrade Dr Tobias Flick (Bergische Universitaet Wuppertal (DE)) In the high-luminosity era of the Large Hadron Collider, the instantaneous luminosity is expected to reach unprecedented values, resulting in about 200 proton-proton interactions in a typical bunch crossing. To cope with the resulting increase in occupancy, bandwidth and radiation damage, the ATLAS Inner Detector will be replaced by an all-silicon system, the Inner Tracker (ITk). The innermost... 600.
Strategies for reducing the greenhouse gas emissions from particle detectors operation at the CERN LHC experiments Roberto Guida (CERN) A wide range of gas mixtures is used for the operation of different gaseous detectors at the CERN LHC experiments. Some gases, such as C2H2F4, CF4, C4F10 and SF6, are greenhouse gases (GHG) with high global warming potential and are therefore subject to a phase-down policy affecting the market with price increases and reduced availability. The reduction of GHG emissions is an objective of paramount... 708. Upgrade of the ALICE Time Projection Chamber Robert Helmut Munzer (Johann-Wolfgang-Goethe Univ. (DE)) The Time Projection Chamber (TPC) of the ALICE experiment is being upgraded with new readout chambers based on Gas Electron Multiplier (GEM) technology during the second long shutdown of the CERN Large Hadron Collider. The upgraded detector will operate continuously and trigger-less without the use of a gating grid. It will thus be able to read out all minimum bias Pb-Pb events that the LHC... 505. High space resolution μ-RWELL for high rate applications Giovanni Bencivenni (Istituto Nazionale Fisica Nucleare Frascati (IT)) Gaseous Detectors The micro-Resistive-WELL (μ-RWELL) is a compact, simple and robust Micro-Pattern Gaseous Detector (MPGD) developed for large area HEP applications requiring operation in harsh environments. The detector amplification stage, similar to a GEM foil, is realized with a polyimide structure micro-patterned with a blind-hole matrix, embedded through a thin Diamond Like Carbon (DLC) resistive layer... 849. CUPID-0: a double-readout cryogenic detector for Double Beta Decay search Prof. Chiara Brofferio (University of Milano - Bicocca and INFN) Dark matter and other low-background experiments CUPID-0 is the first large mass neutrinoless double beta decay (0νDBD) experiment based on cryogenic calorimeters with dual read-out of light and heat for background rejection. The detector, consisting of 26 ZnSe crystals, 2 natural and 24 enriched at 95% in Se82, coupled with bolometric light detectors, has been constructed respecting very strict protocols and procedures, from the material... 629. EDET DH80k - Characterization of a DePFET-based sensor for TEM Direct Electron Imaging Mitja Predikaka (Semiconductor Laboratory of the Max Planck Society) The EDET DH80k is a 1 MPixel camera system, optimized for the direct detection of 300 keV electrons from a TEM equipped with a pulsed, high intensity electron source. It was designed to record stroboscopic movies of dynamic processes with unprecedented temporal and spatial resolution. The camera consists of four identical modules with the complete set of frontend and peripheral electronics... 865. The CMS High Granularity Calorimeter for the High Luminosity LHC Rachel Yohay (Florida State University (US)) The CMS experiment at CERN will undergo significant improvements during the so-called Phase-II Upgrade to cope with a 10-fold increase in luminosity of the High Luminosity LHC (HL-LHC) era. Especially the forward calorimetry will then suffer from very high radiation levels and intensified pile-up in the detectors. Thus, the CMS collaboration is designing a High Granularity Calorimeter (HGCal)... 608. Deep Diffused Avalanche Photodiodes for Charged Particle Timing Matteo Centis Vignali (CERN) The upgrades of ATLAS and CMS for the High Luminosity LHC (HL-LHC) highlighted physics object timing as a tool to resolve primary interactions within a bunch crossing.
Since the expected pile-up is around 200, with an rms time spread of 170 ps, a time resolution of about 30 ps is needed. The timing detectors will experience a 1-MeV neutron equivalent fluence of $\Phi_{eq}=10^{14}$ and... 664. FoCal: a highly granular digital calorimeter Naomi Van Der Kolk (Nikhef National institute for subatomic physics (NL)) In light of the upgrade program of the ALICE detector a calorimeter at forward rapidities (FoCal) is being considered. This detector would measure photons, electrons, positrons and jets for rapidities eta > 3, offering a wealth of physics possibilities. Its main focus is on measurements related to the structure of nucleons and nuclei at very low Bjorken-x and possible effects of gluon... 852. PandaX-III high pressure xenon TPC for neutrinoless double beta decay search Dr SHAOBO WANG (Shanghai Jiao Tong University) The PandaX-III experiment uses high pressure Time Projection Chambers (TPCs) to search for neutrinoless double-beta decay of Xe-136 with high energy resolution and sensitivity at the China Jin-Ping underground Laboratory II (CJPL-II). Fine-pitch Microbulk Micromegas will be used for charge amplification and readout in order to reconstruct both the energy and track of the... 511. A SiPM-based dual-readout calorimeter for future leptonic colliders Massimiliano Antonello (Università degli Studi e INFN Milano (IT)) Calorimeters for future leptonic collider experiments have to provide extreme precision in reconstructing the energies of both isolated particles and jets springing off the colliding beams. Thanks to the expected energy resolution and the excellent particle ID capability, the dual-readout fibre calorimeter could be a possible solution. This calorimetric technique reconstructs the electromagnetic... 911. DANAE – A new effort to directly search for Dark Matter with DEPFET-RNDR detectors Dr Holger Kluck (HEPHY) The direct search for dark matter (DM) at the sub-GeV/c² mass scale gained special interest during the last years, mainly motivated by various theoretical models. To search for individual DM-electron interactions in Si-semiconductor devices a readout noise level of less than 1 e- RMS is required. One possible technique which promises a sub-electron noise level is the Depleted P-channel Field... 659. New test beam results of 3D detectors constructed with poly-crystalline CVD diamond Michael Philipp Reichmann (ETH Zurich (CH)) The latest test beam results of 3D detectors fabricated with poly-crystalline chemical vapor deposition (CVD) diamonds will be shown. The devices have 50$\mu$m $\times$ 50$\mu$m cells with columns 2.6$\mu$m in diameter. In one of the devices the cells were ganged in a 3$\times$2 cell pattern and in the other the cells were ganged in a 5$\times$1 cell pattern to match the layouts of the pixel... 477. Design and status of the Mu2e CsI + SiPMs calorimeter Mrs Raffaella Donghia (LNF - INFN and Roma Tre University) The Mu2e experiment at Fermilab will search for the charged-lepton flavour violating neutrino-less conversion of a negative muon into an electron in the field of an aluminum nucleus. The Mu2e detector is composed of a tracker, an electromagnetic calorimeter and an external veto for cosmic rays. The calorimeter plays an important role in providing excellent particle identification... 785.
In-depth study of Inverse-Low Gain Avalanche Detectors (ILGAD) for 4-dimensional tracking and radiation tolerance assessment of thin LGAD Esteban Curras Rivera (Universidad de Cantabria (ES)) For the high-luminosity LHC upgrade, the ATLAS and CMS experiments are planning to include dedicated detector systems to measure the arrival time of Minimum Ionising Particles (MIPs). Such systems should provide a timing resolution of 30 ps per MIP. State-of-the-art timing technologies integrating Silicon photo-multipliers and plastic scintillators do not tolerate the hadron fluences expected... 850. Quantum Dots for Rare Decays: the ESQUIRE Project Dr Luca Gironi (Universita` e INFN di Milano Bicocca) The future Neutrinoless Double Beta Decay (0νDBD) experiments will require a particle detector easily scalable in mass and able to reach a good energy resolution (around 2% or better) in the region of interest for the study of these rare decays, at about 3 MeV. The ESQUIRE (Experiment with Scintillating QUantum dots for Ionizing Radiation Events) project aims at the development of a new category... 706. Belle II electromagnetic calorimeter Alexander Kuzmin (Budker Institute of Nuclear Physics/Novosibirsk State University) The electromagnetic calorimeter of the Belle II detector and its performance in the first KEKB run during 2018 are described. It is a high-granularity homogeneous calorimeter based on 8736 CsI(Tl) scintillating crystals. The scintillation light is detected by two PIN photodiodes. The electronics of the calorimeter provides signal readout with 2 MHz digitization followed by waveform analysis... 640. Operational Experience and Performance with the ATLAS Pixel detector at the Large Hadron Collider Kerstin Lantzsch (University of Bonn (DE)) The tracking performance of the ATLAS detector relies critically on its 4-layer Pixel Detector, which has undergone significant hardware and readout upgrades to meet the challenges imposed by the higher collision energy, pileup and luminosity being delivered by the Large Hadron Collider (LHC), with record-breaking instantaneous luminosities of 2 x 10^34 cm^-2 s^-1 recently surpassed. 681. Status of the NEXT project Dr Lior Arazi (Ben-Gurion University of the Negev (IL)) The NEXT program is developing the technology of high-pressure Xe gas TPCs with electroluminescent amplification (HPXe-EL) for neutrinoless double beta decay searches. The first phase of the program included the operation of two small prototypes, NEXT-DEMO and NEXT-DBDM, which demonstrated the robustness of the technology, its excellent energy resolution and its... 613. Development of new large calorimeter prototypes based on Lanthanum Bromide and LYSO crystals coupled to silicon photomultipliers: A direct comparison Dr Angela Papa (UniPi&INFN, PSI) The challenge for new calorimetry for incoming experiments at intensity frontiers is to provide detectors with ultra-precise time resolution and supreme energy resolution. Two very promising materials on the market are BrilLanCe (Cerium doped Lanthanum Bromide, LaBr3 (Ce)) and LYSO (Lutetium Yttrium OxyorthoSilicate, Lu2(1-x) Y2x SiO5 (Ce)), supported by recent developments aiming at... 577. Operational Experience of the Phase-1 CMS Pixel Detector Benedikt Vormwald (Hamburg University (DE)) In 2017, CMS installed a new pixel detector with 124M channels that features full 4-hit coverage in the tracking volume and is capable of withstanding instantaneous luminosities of $2 \times 10^{34} cm^{-2} s^{-1}$ and beyond.
Many of the key technologies of modern particle detectors are applied in this detector, like efficient DCDC low-voltage powering, high-bandwidth $\mu$TCA backend... 724. The Large Enriched Germanium Experiment for Neutrinoless $\beta\beta$ Decay (LEGEND) Dr Michael Willers (Lawrence Berkeley National Laboratory) The use of high-purity germanium (HPGe) detectors enriched in the isotope $^{76}$Ge is one of the most promising techniques to search for neutrinoless double-beta decay, a process forbidden in the Standard Model of particle physics. A discovery of this lepton number violating process might answer the question of why the universe consists of matter (but not antimatter) and consequently, why... 765. Searching for neutrinoless double-beta decay with GERDA Natalia Di Marco (LNGS - INFN) The GERDA experiment searches for the lepton number violating neutrinoless double-beta decay of 76Ge operating bare, enriched Ge diodes in liquid argon. The BEGe detectors feature an excellent background discrimination from the analysis of the time profile of the detector signals, while the instrumentation of the cryogenic liquid volume surrounding the germanium detectors acts as an active... 812. The CMS Pixel Detector for the High Luminosity LHC Giacomo Sguazzoni (INFN (IT)) The High Luminosity Large Hadron Collider (HL-LHC) at CERN is expected to collide protons at a centre-of-mass energy of 14 TeV and to reach the unprecedented peak instantaneous luminosity of $5-7.5x10^{34} cm^{-2}s^{-1}$ with an average number of pileup events of 140-200. This will allow the ATLAS and CMS experiments to collect integrated luminosities up to 3000-4500 fb$^{-1}$ during the... 539. The PreProcessor modules for the ATLAS Tile Calorimeter at the HL-LHC Fernando Carrio Argos (Univ. of Valencia and CSIC (ES)) The Tile Calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC). It is a sampling calorimeter made of steel plates and plastic scintillators, read out by approximately 10,000 photomultipliers. In 2024, the LHC will be upgraded to the High Luminosity LHC (HL-LHC) allowing it to deliver up to 7 times the nominal instantaneous design... 738. Belle II Pixel Detector – Performance of final DEPFET Modules Mr Botho Paschen (University of Bonn) In spring 2018 the SuperKEKB accelerator in Tsukuba, Japan, provided first e+e- -collisions to the upgraded Belle II experiment. During this commissioning phase the volume of the innermost vertex detector was equipped with dedicated detectors for measuring the radiation environment as well as downsized versions of the final Belle II silicon strip (SVD) and pixel (PXD) detectors. The PXD is the... 790. NU-CLEUS: Exploring coherent neutrino-nucleus scattering with cryogenic detectors Mr Alexander Langenkämper (Physikdepartement E15, Technische Universität München, 85748 Garching, Germany) The detection of coherent-neutrino nucleus scattering (CEνNS) opens the door for new physics within and beyond the Standard Model of Particle Physics. NU-CLEUS is a novel neutrino experiment at a nuclear power reactor which allows for precision measurements with a novel cryogenic gram-scale detector design based on CRESST technology. A recent prototype detector has achieved an ultra-low energy... 553. 
The CMS ECAL Phase-2 Upgrade for High Precision Timing and Energy Measurements Federico Ferri (Université Paris-Saclay (FR)) The CMS electromagnetic calorimeter (ECAL) is a homogeneous calorimeter made of about 75000 lead tungstate scintillating crystals. In view of the high-luminosity phase of the LHC, the ECAL electronics must be upgraded to cope with the more stringent requirements in terms of trigger latency and rate. The new electronics will transmit the data in streaming mode from the front-end electronics to... 561. Development of resistive Micromegas TPCs for the T2K experiment Alain Delbart (CEA/IRFU,Centre d'etude de Saclay Gif-sur-Yvette (FR)) The long baseline neutrino experiment T2K has launched the upgrade project of its near detector ND280, crucial to reduce the systematic uncertainty to less than 4%. An essential component of this upgrade consists of the resistive Micromegas TPCs, for 3D track reconstruction, momentum measurement and particle identification. These TPC, with overall dimensions of 2x2x0.8 m3, will be equipped... 546. The EUSO-SPB2 mission Valentina Scotti Astroparticle Detectors EUSO-SPB2 is a second generation Extreme Universe Space Observatory (EUSO) on a Super-Pressure Balloon (SPB). The mission broadens the scientific objectives of the EUSO program and constitutes the first step towards the study of neutrino signals from the high atmosphere and space. The EUSO-SPB2 science payload will be equipped with three detectors designed for a long duration mission. One is... 480. Upgrade of the KamLAND-Zen Mini-balloon and Future Prospects Mr Hideyoshi Ozaki (Tohoku University) The observation of a neutrino-less double-beta (0$\nu\beta\beta$) decay would be evidence of neutrino's Majorana nature, and it might be a clue to explain the baryon asymmetry and the extremely light neutrino masses. The half-life of 0$\nu\beta\beta$ decay is more than 10$^{26}$ year in case of $^{136}$Xe, thus it is important to make radiopure detector to find the very rare... 828. APiX: a two-tier avalanche pixel sensor for charged particle detection and timing. Dr Paolo Brogi (Univ. of Siena and INFN Pisa, IT) A novel pixelated charged particle detector with fast timing capabilities is under development. It addresses two important requirements for the next generation of position sensitive detectors: minimization of material budget and power consumption, while providing high granularity and excellent timing. It is a "thin" (tens of micron), window-less, vertically integrated, CMOS detector. Internal... 630. First Production Modules of the ATLAS Micromegas and Performance Studies Aimilianos Koulouris (National Technical Univ. of Athens (GR)) The ATLAS collaboration at LHC has endorsed the resistive Micromegas technology, along with the small-strip Thin Gap Chambers (sTGC), for the high luminosity upgrade of the first muon station in the high-rapidity region, the so called New Small Wheel (NSW) project. After the R&D, design and prototyping phase, the first series production Micromegas quadruplets have been constructed at all... 486. Neutral bremsstrahlung in two-phase argon electroluminescence: first results and possible applications Ekaterina Shemyakina (Budker Institute of Nuclear Physics SB RAS) A new mechanism of proportional electroluminescence (EL) in two-phase Ar has been revealed, namely that of neutral bremsstrahlung (NBrS), that quantitatively describes the photon emission below the Ar excitation threshold and non-VUV component above the threshold. 
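The 10^26-year half-life scale quoted in the KamLAND-Zen abstract above sets the counting challenge common to all the 0νββ searches in this listing. A quick order-of-magnitude sketch of the implied signal rate, assuming for illustration 100 kg of 136Xe and a half-life of exactly 10^26 yr (both purely illustrative assumptions):

    import math

    # Expected decays: N = ln(2) * N_atoms * t / T_half, valid for t << T_half.
    # The 100 kg of 136Xe and T_half = 1e26 yr are illustrative assumptions only.
    N_A = 6.022e23            # Avogadro's number, 1/mol
    mass_g = 100e3            # 100 kg of 136Xe (assumed)
    molar_mass = 136.0        # g/mol
    T_half_yr = 1e26          # assumed half-life at the quoted limit

    n_atoms = mass_g / molar_mass * N_A
    decays_per_year = math.log(2) * n_atoms / T_half_yr
    print(decays_per_year)    # ~3 decays per year

A signal of a few counts per year in hundreds of kilograms of isotope is why radiopurity and background rejection, rather than raw detector mass alone, dominate the design choices described in these abstracts.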
This paves the way for direct readout of electroluminescence (S2) signals in two-phase TPCs, using PMT and SiPM matrices, in... 547. Detectors for direct Dark Matter search at KamLAND Dr Alexandre Kozlov (The University of Tokyo) Nature and properties of the Dark Matter (DM) in the Universe are among the most fundamental questions of the modern particle physics and astrophysics. So far, the only experiment that claimed detection of a signal from the DM is the DAMA/LIBRA NaI(Tl) experiment located at the Gran Sasso underground laboratory in Italy. Until the recent time, the main obstacle in repeating the DAMA/LIBRA... 898. Microfabricated silicon substrates for pixel detectors assembly and thermal management Alessandro Mapelli (CERN) At CERN, the Detector Technologies (DT) group of the Experimental Physics (EP) department is actively investigating a number of innovative solutions for heat management and detector module assembly in HEP experiments. Among these, recent research carried out at EP-DT has focused on the development of microfluidic devices to cool silicon pixel detectors. In this respect, continuous advances in... 690. Progress on the PICOSEC-Micromegas Detector Development: towards a precise timing, radiation hard, large-scale particle detector with segmented readout Kostas Kordas (Aristotle University of Thessaloniki (GR)) Detectors with a time resolution of a few 10ps and robustness in high particle fluxes are necessary for precise 4D track reconstruction in future, high luminosity HEP experiments. In the context of the RD51 collaboration, the PICOSEC detector concept has been developed, which is a two-stage Micromegas detector with a photocathode coupled to a Cherenkov radiator. Single channel PICOSEC... 643. Commissioning and beam test a high pressure time projection chamber Alexander Deisting (Ruprecht-Karls-Universitaet Heidelberg (DE)) Due to their large active volume and low energy threshold for particle detection Time Projection Chambers (TPCs) are promising candidates to characterise neutrino beams at the next generation long baseline neutrino oscillation experiments such as DUNE and Hyper-K, the successor of the T2K experiment. The higher target density for the incoming neutrino beam of a TPC filled with gas at High... 566. Development of a 3D highly granular scintillator neutrino detector for the T2K experiment Saba Parsa (Universite de Geneve (CH)) The long baseline neutrino experiment T2K has launched the upgrade project of its near detector ND280, crucial to reduce the systematic uncertainty in the prediction of number of events at the far detector to less than 4%. An essential component of this upgrade is a highly segmented scintillator detector, acting as a fully active target for the neutrino interactions. The baseline concept for... 891. RD53A: a large-scale prototype chip for the phase II upgrade in the serially powered HL-LHC pixel detectors Aleksandra Dimitrievska (Lawrence Berkeley National Lab. (US)) The phase II upgrade of the HL-LHC experiments within the LHC intends to deepen the studies of the Higgs boson and to allow the discovery of further particles by adding an integrated luminosity of about $4000 fb^{-1}$ over 10 years of operation. This upgrade would overwhelm the installed pixel detector readout chips with higher hit rates and radiation levels than ever before. To match these... 739. 
A multi-PMT photodetector system for the Hyper-Kamiokande experiment Gianfranca De Rosa (INFN) Photon Detectors Hyper-Kamiokande (Hyper-K) is the next upgrade of the currently operating Super-Kamiokande experiment. Hyper-K is a large water Cherenkov detector with a fiducial volume which will be approximately 10 times larger than its precursor. Its broad physics program includes neutrinos from astronomical sources, nucleon decay, with the main focus being the determination of leptonic CP violation. To detect... 867. Belle II aerogel RICH detector Leonid Burmistrov (Centre National de la Recherche Scientifique (FR)) Cherenkov Detectors The Aerogel Ring Imaging CHerenkov counter (ARICH) is the particle identification device installed in the forward region of the Belle II detector at the SuperKEKB accelerator facility in Japan. The first electron-positron collisions at SuperKEKB took place on 26 April 2018 during the so-called phase 2 run. The measured performance of the ARICH detector based on recorded Bhabha events during phase 2 is... 455. The ultra light Drift Chamber of the MEGII experiment Dr Malte Hildebrandt (Paul Scherrer Institut) The MEG experiment, at the PSI, aims at searching for the charged lepton flavor violating decay $\mu^{+}\rightarrow e^{+}\gamma$. MEG has already determined the world's best upper limit on the branching ratio: BR < 4.2$\times10^{-13}$ at 90% CL. The new positron tracker is a high transparency single volume, full stereo cylindrical Drift Chamber (DC), immersed in a non uniform longitudinal B-field, with... 804. Dual-readout calorimetry, an integrated high-resolution solution for energy measurements at future electron-positron colliders Lorenzo Pezzotti (Universita and INFN (IT)) Traditional energy measurements in hadron detection have always been spoiled by the non-compensation problem. Hadronic showers develop an electromagnetic component, from neutral mesons' decays, superimposed on the non-electromagnetic component. As the two are typically sampled with very different responses, fluctuations between them directly spoil the hadronic energy resolution. Dual-readout... 702. The RICH detector of the NA62 experiment at CERN Monica Pepe (INFN Perugia (IT)) NA62 is the last generation kaon experiment at the CERN SPS aiming to study the ultra-rare $K^+ \rightarrow \pi^+ \nu \overline{\nu}$ decay. The main goal of the NA62 experiment is the measurement of this BR with 10% accuracy. This is achieved by collecting about 100 $K^+ \rightarrow \pi^+ \nu \overline{\nu}$ events. The challenging aspect of NA62 is the suppression of background decay... 507. Upgrade of the CMS Muon System with GEM Detectors: recent progress on construction, certification, Slice Test, and Long-term Operation Francesco Fallavollita (Università e INFN Pavia) The CMS Muon Spectrometer is being upgraded (the GE1/1 project) during the LS2 shutdown (2019-2020) using large-area, trapezoidal-shaped triple-GEM detectors in the forward region, 1.6 < eta < 2.2. We present the chamber assembly and qualification procedure, as well as an overview of the results obtained during detector qualification. We report preliminary results on system integration and... 529. AXEL: High-pressure Xe gas TPC for BG-free 0v2b search Dr Shuhei Obara (Kyoto University) Observation of the neutrinoless double-beta decay (0v2b) is key to establishing the absolute neutrino mass scale and the Majorana nature of the neutrino.
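The NA62 abstract above ties its 10% branching-ratio goal to a sample of about 100 signal events; this is simply Poisson counting statistics, where the relative uncertainty scales as 1/sqrt(N). A short sketch (backgrounds and systematic uncertainties are neglected here):

    import math

    # Relative statistical uncertainty of a counting measurement: sigma_N / N = 1 / sqrt(N)
    for n_events in (25, 100, 400):
        print(n_events, 1.0 / math.sqrt(n_events))    # 20%, 10%, 5%

Collecting roughly 100 K+ -> pi+ nu nubar events therefore fixes the statistical floor at the quoted 10%, and the background suppression mentioned in the abstract determines how closely that floor can be approached.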
Recent 0ν2b search experiments will test the neutrino mass region allowed in case of the inverted mass ordering, but oscillation experiments favor the normal ordering. For 0v2b search in the normal ordering region, a background-free search with a 1-ton scale large... 689. Beam tests of a large-scale TORCH time-of-flight demonstrator Thomas Henry Hancock (University of Oxford (GB)) The TORCH time-of-flight detector is designed to provide particle identification over the momentum range 2–10 GeV/c over large areas. The detector exploits prompt Cherenkov light produced by charge particles traversing a 10 mm thick quartz plate. The photons propagate via total-internal reflection and are focussed onto a detector plane comprising position-sensitive micro-channel plate (MCP)... 605. Micro Pattern Gas Detector Optical Readout for Directional Dark Matter Searches Gianluca Cavoto (Sapienza Universita e INFN, Roma I (IT)) The Time Projection method is ideal to track low kinetic energy charged particles. Large volumes can be readout with a moderate number of channels providing a complete 3D reconstruction of the tracks within the sensitive volume. The total released energy and the energy density along the tracks can be both measured allowing for particle identification and to solve the head-tail ambiguity of... 619. The ATLAS ITk Strip Detector System for the Phase-II LHC Upgrade John Stakely Keller (Carleton University (CA)) The ATLAS experiment at the Large Hadron Collider is currently preparing for a major upgrade of the Inner Tracking for the Phase-II LHC operation, scheduled to start in 2026. The radiation damage at the maximum integrated luminosity of 4000/fb implies integrated hadron fluencies over 2x10^16 neq/cm2 requiring a completed replacement of the existing Inner Detector. An all-silicon Inner Tracker... 626. The Cylindrical-GEM Inner Tracker Detector of the KLOE-2 Experiment Alessandro Di Cicco (Dipartimento di Matematica e Fisica, Università Roma Tre, Via della Vasca Navale 84, Rome, Italy and INFN Sezione di Roma Tre, Via della Vasca Navale 84, Rome, Italy) The KLOE-2 experiment represents the continuation of KLOE and acquired 5.5 fb$^{-1}$ data from November 2014 to March 2018 with the aim of collecting the largest sample of $\phi$ mesons at the DA$\Phi$NE $e^+e^-$ collider at the Frascati National Laboratory of INFN. A new tracking device, the Inner Tracker, was installed at the interaction region of KLOE-2 and it was operated together with... 657. A new Transition Radiation detector based on GEM technology Sergey Furletov (Jefferson Lab) Transition Radiation Detectors (TRD) has the attractive features of being able to separate particles by their gamma factor. The classical TRDs are based on Multi-Wire Proportional Chambers (MWPC) or straw tubes, filled with Xenon based gas mixture to efficiently absorb transition radiation photons. While it works for experiments with relatively low particle multiplicity, the performance of... 729. Commissioning of the Belle II Silicon Vertex Detector Giulia Casarosa (INFN - National Institute for Nuclear Physics) The Belle II experiment at the SuperKEKB collider of KEK (Japan) will accumulate $e^+e^-$ collision data at an unprecedented instantaneous luminosity of $8\times 10^{35}$ cm$^{-2}$s$^{-1}$, about 40 times larger than its predecessor experiment. The Belle II vertex detector consists of two layers of DEPFET based pixels (PXD) and four layers of double sided silicon strip detectors (SVD). The SVD... 667. 
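The 2-10 GeV/c momentum range quoted for the TORCH detector above can be turned into a required timing performance with the standard time-of-flight relation t = (L/c)·sqrt(1 + (m/p)^2). A small sketch, assuming a 10 m flight path (an illustrative value, not taken from the abstract):

    import math

    # Pion-kaon time-of-flight difference over an assumed 10 m flight path.
    C = 299792458.0                  # speed of light, m/s
    M_PI, M_K = 0.13957, 0.49368     # masses in GeV/c^2

    def tof(mass_gev, p_gev, path_m=10.0):
        return path_m / C * math.sqrt(1.0 + (mass_gev / p_gev) ** 2)

    for p in (2.0, 10.0):
        dt_ps = (tof(M_K, p) - tof(M_PI, p)) * 1e12
        print(p, round(dt_ps, 1))    # ~920 ps at 2 GeV/c, ~37 ps at 10 GeV/c

The separation shrinks roughly as 1/p^2, so a per-track resolution of a few tens of picoseconds or better is what makes pion-kaon identification possible at the top of the quoted momentum range.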
Lumped element kinetic inductance detectors on CaF2 for neutrino-less double-beta decay and spin-dependent dark matter search Koji Ishidoshiro (Tohoku University) Superconducting detectors (SCDs) are widely used in astroparticle physics experiments such as dark matter search and cosmic microwave background experiments. The Kinetic Inductance Detector (KID) is one of the promising SCDs since KID has several technical advantages: very low fundamental noise, easy fabrication, and high scalability with frequency domain multiplexing. KID consists of microwave... 648. A High-Granularity Timing Detector for the Phase-II upgrade of the ATLAS Calorimeter system: detector concept, description and R&D and first beam test results Bengt Lund-Jensen (KTH Royal Institute of Technology (SE)) The increase of the particle flux (pile-up) at the HL-LHC with luminosities of L ≃ 7.5 × 10^34 cm^-2 s^-1 will have a severe impact on the ATLAS detector reconstruction and trigger performance. The end-cap and forward region, where the liquid Argon calorimeter has coarser granularity and the inner tracker has poorer momentum resolution, will be particularly affected. A High Granularity Timing... 773. Direction-Sensitive Dark Matter Search Using Tungstate Scintillator Shunsuke Kurosawa (Tohoku University) One of the candidates for dark matter is the weakly interacting massive particle (WIMP), and the Earth is expected to experience a 'WIMP wind' coming from the direction of its motion through the galaxy, which points towards Cygnus. In this study, we propose a new type of dark matter detector using single crystals in order to have higher detection efficiency than gaseous ones. Some team... 632. Upgrade of the ATLAS Muon Spectrometer Thin Gap Chambers and their electronics for the HL-LHC phase Chav Chhiv Chau (Carleton University (CA)) The instantaneous luminosity of the LHC will be increased by almost an order of magnitude with respect to the design value through an extensive upgrade program for the High-Luminosity LHC (HL-LHC). Many upgrades are foreseen for the thin gap chambers (TGC) of the ATLAS Muon System. A Phase-I upgrade project is the replacement of the present first station in the forward regions with the... 818. Novel Resistive-Plate WELL sampling elements for (S)DHCAL Dr Shikma Bressler (Weizmann Institute of Science (IL)) Digital and Semi-Digital Hadronic Calorimeters (S)DHCAL were suggested for future Colliders as part of the particle-flow concept. Though studied mostly with RPC-based techniques, studies have shown that MPGD-based sampling elements could outperform them. An attractive, industry-produced, robust, particle-tracking detector for large-area coverage, e.g. in (S)DHCAL, could be the novel single-stage... 782. The Gigatracker, the silicon beam tracker for the NA62 experiment at CERN. Luca Federici (CERN) The Gigatracker is the NA62 beam tracker. It is made of three $63.1 mm \times 29.3 mm$ stations of $300 \mu m \times 300 \mu m$ hybrid silicon pixel detectors installed in vacuum ($\sim10^{-6} mbar$). The beam particles, flowing at 750 MHz, are traced in 4 dimensions by means of time-stamping pixels with a design resolution of $200 ps$. This performance has to be maintained despite the beam... 838.
The SuperNEMO Demonstrator double beta experiment Andrea Jeremie (Laboratoire d'Annecy-le-Vieux de Physique des Particules (LAPP)) The SuperNEMO experiment will study decays of 82Se in order to look for neutrinoless double beta decays (0νββ), interactions that, if observed, would prove the Majorana nature of neutrinos. SuperNEMO inherits the tracking-calorimetry technology of NEMO-3, which allows for a clear determination of event kinematics, while aiming for an improved background suppression and 0νββ sensitivity. A... 569. Performance studies of RPC detectors with new environmentally friendly gas mixtures in presence of LHC-like radiation background Beatrice Mandelli (CERN) Resistive Plate Chamber (RPC) detectors are widely used at the CERN LHC experiments as muon trigger thanks to their excellent time resolution. They are operated with a Freon-based gas mixture containing C2H2F4 and SF6, both greenhouse gases (GHG) with a very high global warming potential (GWP). The search of new environmentally friendly gas mixtures is necessary to reduce GHG emissions and... 490. PETALO: Time-of-Flight PET with liquid xenon Ms Carmen Romo Luque Liquid xenon has several attractive features, which make it suitable for applications to nuclear medicine, such as high scintillation yield and fast scintillation decay time. Moreover, being a continuous medium with a uniform response, liquid xenon allows one to avoid most of the geometrical distortions of conventional detectors based on scintillating crystals. In this paper, we describe how... 575. SciFi – Upgrading LHCb with a Scintillating Fibre Tracker Lukas Gruber (CERN) LHCb will undergo a major upgrade during the LHC long shutdown in 2019/2020 to cope with increased instantaneous luminosities and to implement a trigger-less 40 MHz readout. The current inner and outer tracking detectors will be replaced by a single homogeneous detector based on plastic scintillating fibres (SciFi). The SciFi tracker covers an area of 340 m2 by using more than 10,000 km of... 493. Innovative $\gamma$ detector filled with high-density liquid for brain PET imaging Morgane Farradèche (IRFU / CEA Saclay, France) CaLIPSO is an innovative $\gamma$ detector designed for high precision cerebral PET imaging. For the first time, liquid trimethylbismuth is used as sensitive medium. The detector operates as a time-projection chamber and detects both Cherenkov light and charge signal. Indeed, each 511-keV photon releases a single primary electron that triggers a Cherenkov radiation and ionizes the medium. As... 726. Optical readout of gaseous detectors: new developments and perspectives Dr Florian Maximilian Brunbauer (CERN, Vienna University of Technology (AT)) Scintillation light detection by imaging sensors presents a versatile and intuitive readout modality for gaseous radiation detectors. Based on visible scintillation light emission from gas mixtures such as Ar/CF4, optical readout provides images with high spatial resolution. We present novel readout approaches including ultra-fast imaging for beam monitoring in addition to studies of optically... 633. The Mu3e Scintillating Fiber Timing Detector Prof. Alessandro Bravar (University of Geneva) The Mu3e experiment will search for the rare neutrinoless lepton flavor violating mu+ -> e+e+e- decay and it aims at reaching an ultimate sensitivity of 10^-16 on this branching ratio. The experiment will be performed at PSI using the most intense continuous surface muon beam in the world (presently ~1x10^8 mu/s). 
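The Mu3e numbers quoted above (a 10^-16 single-event sensitivity and a ~1×10^8 mu/s beam) fix the required running time by simple counting, as this sketch illustrates (detection efficiency and duty cycle are ignored):

    # Naive exposure estimate for a branching-ratio sensitivity of 1e-16:
    # one needs of order 1/BR observed muon decays; efficiency and duty cycle are ignored.
    target_br = 1e-16
    muon_rate = 1e8                       # muons per second, as quoted in the abstract
    seconds_needed = (1.0 / target_br) / muon_rate
    print(seconds_needed / 3.15e7)        # ~3 years of continuous beam

Reaching 10^-16 therefore requires observing of order 10^16 muon decays, i.e. several years of data taking even at the world's most intense surface muon beam, which is why all backgrounds must be rejected well below that level, as the abstract goes on to note.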
In order to reach this sensitivity all backgrounds must be rejected below this... 803. Construction, operation and performance of the novel MPGD-based photon detectors of COMPASS RICH-1 Yuxiang Zhao (Universita e INFN Trieste (IT)) The RICH-1 Detector of the COMPASS Experiment at CERN SPS has been upgraded in 2016: four new Photon Detectors, based on MPGD technology and covering an active area of 1.4 square meters replace the previously used photon detectors (MWPCs with CsI photocathodes). The new detector architecture consists in a hybrid MPGD combination: two layers of THGEMs, the first of which also acts as a... 504. Novel charged particles monitor of light ions PT treatments: results of preliminary tests using a RANDO® phantom Giacomo Traini (Sapienza Universita' di Roma) In Particle Therapy, the use of C, He and O ions as beam particles is being pursued to fully profit from their interaction with matter resulting into an improved efficacy in killing the cancer cells. An accurate on-line control of the dose release spatial distribution, currently missing in clinical practice, is required to ensure that the healthy tissues surrounding the tumor are spared,... 732. The gaseous QUAD pixel detector Dr Peter Kluit (Nikhef), Peter Kluit (Nikhef National institute for subatomic physics (NL)), Mr Ruud Kluit (Nikhef (NL)) We have developed a gaseous pixel detector based on four Timepix3 chips that can serve as a building block for a large detector plane. To provide the required gas amplification a fine grid has been deposited on the chip surface by wafer postprocessing (GridPix technology). The precisely aligned grid holes and chip pixels having a pitch of 55 µm and the high time resolution of 1.56 ns of the... 519. Evaluation of a novel photon-counting CT system using a 16-channel MPPC array for multicolor 3-D imaging Mr Takuya Maruhashi (Waseda University) X-ray computed tomography (CT) is widely used in diagnostic imaging of the interior of the human body; however, the radiation dose of conventional CT typically amounts to 10 mSv. Under such environments, X-ray photons are severely piled-up; therefore, the CT images are monochromatic and various artifacts are present due to beam hardening effects. In contrast, photon counting CT (PC-CT) offers... 822. Silicon Detectors for the LHC Phase-II Upgrade and Beyond – RD50 Status Report Tomasz Szumlak (AGH University of Science and Technology (PL)), on behalf of the RD50 Collaboration The inner tracking layers of all LHC experiments were designed and developed to cope with the environment of the present Large Hadron Collider (LHC). At the LHC Phase-II Upgrade foreseen for 2026, the particle densities and radiation levels will increase by roughly an order of magnitude compared to the present LHC conditions. Therefore, the inner tracking layers will need to be replaced. The... 474. The MEV project: an innovative high-resolution telescope for Muography of Etna Volcano Dr Giuseppe Gallo (University of Catania - Department of Physics and Astronomy - INFN, LNS Catania) The MEV project started in 2016 the construction of an innovative muon tracking telescope expressly designed for the muography of the Etna Volcano, in particular one of the active craters in its summit area. The telescope is a tracker based on extruded scintillating bars with WLS fibers and featuring an innovative read-out architecture. It is composed of 3×1 m2 XY planes; the angular... 829. 
GAPS: a balloon-borne cosmic-ray antimatter experiment Giuseppe Osteria (INFN - National Institute for Nuclear Physics) Novel theories beyond the Standard Model predict dark matter candidates that could provide a significant enhancement of the antideuteron and antiproton flux, in particular at low energies. The General Antiparticle Spectrometer (GAPS) experiment is the first antimatter search experiment designed specifically for low-energy cosmic ray antideuterons and antiprotons. GAPS identifies antideuterons... 778. High resolution three dimensional characterization of irradiated silicon detectors using a Two Photon Absorption-TCT Marcos Fernandez Garcia (Universidad de Cantabria (ES)) The Transient Current Technique (TCT) has been instrumental in the characterization of silicon radiation detectors over the last 20 years. Using visible or infrared lasers, excess carriers can be produced continuously along the beam propagation direction, the penetration depth of the light determining the length of the trail of charges. No spatial resolution is therefore obtained along this... 614. Single layer Compton detectors for measurement of polarization correlations of annihilation quanta Prof. Mihael Makek (Department of Physics, Faculty of Science, University of Zagreb (HR)) Measurement of gamma ray polarization can provide valuable insight into different areas of physics research: nuclear, particle and astrophysics. Also, since the polarizations of gamma quanta from positron annihilation are perpendicular, there have been studies to use these polarization correlations in Positron Emission Tomography (PET). The polarization of gammas can be determined from Compton... 591. First fragmentation measurements with the ΔE-TOF detector of the FOOT experiment Aafke Kraan The FOOT experiment was designed to identify the fragments produced in the human body during hadrontherapy and to measure their production cross-section. The ΔE-TOF detector of the FOOT apparatus estimates the atomic number Z and velocity β of the fragments by measuring the energy deposited (ΔE) in two layers of orthogonal plastic scintillator bars and the time-of-flight (TOF) with respect to... 606. GRAPES-3 Detector System Mr Atul Jain (GRAPES-3) The Gamma Ray Astronomy at PeV Energies phase-3 (GRAPES-3) experiment, located on the slopes of the Nilgiri hills at Ooty, India, consists of a world-class, indigenously developed detector system. The core elements of the experiment are plastic scintillator (Sc) detectors and proportional counters (PRC). A large array of 400 Sc detectors, each having a sensitive area of 1 m2, is spread in the field with... 495. Measurements and Simulations of Surface Radiation Damage Effects on IFX and HPK Test Structures Francesco Moscatelli (Universita e INFN, Perugia (IT)) Radiation damage effects at High Luminosity LHC expected fluences (2×10^16 n/cm2, 1 MeV equivalent) and total ionising doses (TID) of 1 Grad will impose very stringent constraints in terms of radiation resistance of solid-state detectors. The complex physical phenomena related to radiation damage effects can be addressed by means of TCAD tools aiming at evaluating the most suitable... 916. The LHAASO Experiment Huihai He (Institute of High Energy Physics, CAS) The Large High Altitude Air Shower Observatory (LHAASO) plans to build a hybrid extensive air shower (EAS) array with an area of about 1 km2 at an altitude of 4410 m a.s.l. in Sichuan province, China, aiming for very high energy gamma ray astronomy and cosmic ray physics around the spectrum knees.
With an extensive air shower array covering an area of 1.3 km2 equipped with >40,000 m2 muon... 542. Bulk engineering for enhanced lateral drift sensors Hendrik Jansen (Deutsches Elektronen-Synchrotron (DE)) Future experiments in particle physics foresee few-micrometer single-point position resolution in their vertex detectors, motivated by e.g. b/light-quark-tagging capabilities. Silicon is today's material of choice for high-precision detectors and offers a high degree of engineering possibilities. Instead of scaling down pitch sizes, which comes at a high price for an increased number of... 593. Verification of Monolithic CMOS Pixel Sensor Chip with Ion Beams for Application in proton Computed Tomography Dr Ganesh Tambave (University of Bergen (NO)) Proton Computed Tomography (pCT) is an emerging imaging modality useful in the treatment of cancer using protons and heavy ions. The pCT collaboration in Bergen is building a prototype Digital Tracking Calorimeter (DTC) for proton therapy applications. The DTC is a 41-layer Si-Al sandwich structure where CMOS pixel sensors are used as the active element and aluminum is the absorbing material.... 610. Development of a novel neutron tracker for the characterisation of secondary neutrons emitted in Particle Therapy. Eliana Gioscio (Centro Fermi, Museo Storico della Fisica e Centro Studi e Ricerche "E. Fermi", Roma, Italy) The MONDO (MOnitor for Neutron Dose in hadrOntherapy) project addresses the technical challenges posed by a neutron tracker detector: high detection efficiency and good backtracking precision. The project's main goal is to develop a tracking device capable of fully reconstructing the four-momentum of the ultra-fast secondary neutrons produced in Particle Therapy treatments via double elastic... 592. Measurement results of the MALTA monolithic pixel detector Enrico Junior Schioppa (CERN) MALTA is a full scale monolithic pixel detector implemented in TowerJazz 180nm CMOS technology. The small pixel electrode allowed for the implementation of a fast, low noise and low power front-end, which is sensitive to the charge released by ionizing radiation in a 20-25 um deep depleted region. The novel asynchronous matrix architecture is designed to ensure low power consumption and high... 468. SiPM single photon time resolution measured via bi-luminescence Dr Christopher Betancourt (University of Zurich) SiPM We present results on measurements of the single photon time resolution on silicon photomultipliers using bi-luminescence. When a silicon photomultiplier is biased past breakdown, each avalanche produces a number of photons as electron-hole pairs recombine. If these photons enter a neighboring cell and trigger an additional avalanche, the process is referred to as optical cross-talk. We refer... 556. Analysis methods for highly radiation-damaged SiPMs Prof. Robert Klanner (University of Hamburg) Measurements and analysis methods are presented with the aim of determining the SiPM performance after irradiation by neutrons to fluences between 10^9 and 5x10^14 neq/cm^2. SiPMs with 4384 pixels of 15x15 µm2 produced by KETEK are used. The following measurements and analyses will be presented to determine the fluence dependence of the SiPM parameters given in the list. 1. Y–f from which the pixel... 616. Improving the CTR of a PET module using the DOI Andrea Polesel (Università degli Studi e INFN Milano (IT)) In a PET scanner, the probability of early stage detection of cancer is increased by high spatial resolution and sensitivity.
Depth Of Interaction (DOI) is an important quantity both in small PET scanners and also in whole-body PET machines. The module we developed is a pixellated scintillator of LYSO crystals with single side readout and allows light recirculation thanks to a light and a... 768. Radiation characterization of two large and fully depleted CMOS pixel matrices fabricated in 150 nm and 180 nm technologies Toko Hirono (University of Bonn (DE)) Two different design concepts of the depleted monolithic CMOS active sensor (DMAPS) are realized in the large scale pixel matrixes, named LF-Monopix and TJ-Monopix. They are realized in so-called large and small electrode design in a pixel. In the large electrode DMAPS, a high bias voltage of 300 V is applied to the highly resistive wafer without damaging the readout electronics. Full... 866. Imaging with ion beams at MedAustron Mr Alexander Burker (Atominstitut, TU Wien) MedAustron is an Austrian cancer treatment center for proton and carbon therapy. For clinical use protons are accelerated up to 250 MeV, whereas carbon ions will be available up to 400 MeV/u. The facility also features a unique beam line exclusively for non-clinical research. This research beam line will be commissioned for even higher proton energies of up to 800 MeV. In this... 721. Towards wafer-scale monolithic CMOS integrated pixel detectors for X-ray photon counting Jorge Neves (G-ray Medical) A new semiconductor process is being developed for manufacturing monolithic CMOS pixel detectors. The technology is based on direct bonding of 200 mm CMOS wafers to an absorber in a low-temperature, oxide-free, covalent wafer bonding process. It is applicable to any material such as Si, GaAs and epitaxial SiGe. The latter are realized by means of space-filling arrays of SiGe crystals which can... 705. Using Quantum Entangled Photons to Measure the Absolute PDE of a Multi-Pixel SiPM Array Dr Jamie Williams (University of Leicester) Spontaneous parametric down-conversion (SPDC) of a visible pump photon is the generation of two less energetic, quantum entangled photons (QEPs), often in the near infrared (NIR), using a non-linear crystal such as beta barium borate (BBO). Since the detection of one QEP predicates the existence of its entangled twin, QEPs have previously been used to measure the absolute photon detection... 904. Development of the thin TOF-PET scanner based on fast monolithic silicon pixel sensors Daiki Hayakawa (Universite de Geneve (CH)) The Thin-TOF PET (TT-PET) project aims at the construction of a small-animal PET scanner based on silicon monolithic pixel sensors with 30 ps time resolution for 511 keV photons, equivalent to 100 ps time resolution for minimum ionizing particles. The high time resolution of the pixel sensor allows for precise time of flight measurement of the two photons and a significant improvement in the... 831. Radiation hard active pixel sensor with 25µm x 50µm pixel size designed for capacitive readout with RD53 ASIC Hui Zhang (Karlsruhe Institute of Technology (KIT)) We will present a sensor chip for a capacitively coupled particle detector (CCPD). CCPDs have been proposed for several experiments and it has been demonstrated that the signals from the sensor to the readout chip can be transmitted when the chips are glued. However, it is still not proven whether gluing can be done fast on a large number of devices. Therefore, we are investigating a new... 808. 
The 2 inches VSiPMT industrial prototype Felicia Carla Tiziana Barbato (INFN - National Institute for Nuclear Physics) Photon detection is a key factor to study many physical processes in several areas of fundamental physics research. Focusing the attention on photodetectors for particle astrophysics, we understand that we are very close to new discoveries and new results. In order to push the progress in the study of very high-energy or extremely rare phenomena (e.g. dark matter, proton decay, neutrinos from... 921. CMOS Active Pixel Sensors for High Energy Physics Luciano Musa (CERN) CMOS technology, which fueled the rapid growth of the information technology industry in the past 50 years, has also played and continues to play a crucial role in the remarkable development of detectors for High-Energy Physics (HEP) experiments. The amazing evolution of CMOS transistors in terms of speed, integration and cost decrease, allowed a continuous increase of density, complexity and... 644. ATLAS LAr Calorimeter Performance in LHC Run-2 and Electronics Upgrades for next Runs Maddie McKay (Southern Methodist University (US)) Liquid argon (LAr) sampling calorimeters are employed by ATLAS for all electromagnetic calorimetry in the pseudo-rapidity region |η| < 3.2, and for hadronic and forward calorimetry in the region from |η| = 1.5 to |η| = 4.9. In the LHC Run-2 about 150fb-1 of data at a center- of-mass energy of 13 TeV have been recorded. The well calibrated and highly granular LAr Calorimeter reached its design... 725. Performance of the Belle II imaging Time-Of-Propagation (iTOP) detector in first collisions Martin Bessner (Deutsches Elektronen-Synchrotron (DE)) The iTOP detector is a novel Cherenkov detector developed for particle identification at Belle II, an upgrade of the previous Belle experiment at KEK. The SuperKEKB accelerator, an upgrade of KEKB, collides electrons and positrons with a design luminosity of 8*10^(35)/(cm^2 s). In order to exploit the high collision rate Belle II has a trigger rate of up to 30 kHz. The iTOP detector uses... 473. CUORE: the first bolometric experiment at the ton scale for the search for neutrino-less double beta decay Bradford Welliver (Lawrence Berkeley National Laboratory) The Cryogenic Underground Observatory for Rare Events (CUORE) is the most massive bolometric experiment searching for neutrino-less double beta (0νββ) decay. The detector consists of an array of 988 TeO$_{2}$ crystals (742 kg active mass) arranged in a compact cylindrical structure of 19 towers. The construction of the experiment and, in particular, the installation of the towers in the... 503. Science and technology of the DARWIN observatory Prof. Guido Drexlin (Karlsruhe Institute of Technology) DARWIN is a next-generation dark matter and neutrino observatory based on 50 tons of xenon. Its central TPC of 2.6 m diameter and height is operated as dual-phase detector with optimized light and charge read-out. It will allow to search for WIMPs at the GeV-TeV mass scale down to the "neutrino floor" where coherent interactions of astrophysical neutrinos start to dominate the interaction... 837. The Jiangmen Underground Neutrino Observatory (JUNO) Cedric Cerna (CENBG/CNRS) The Jiangmen Underground Neutrino Observatory (JUNO) is an experiment under construction in China with the primary goal of determining the neutrino mass hierarchy (MH) with reactor anti-neutrinos. The JUNO detector system consists of a central detector, an active veto system and a calibration system. 
The central detector is a 35 meter diameter transparent acrylic sphere containing a 20 kton... 923. Large Liquid Argon TPCs and the search for CP Violation in the lepton sector with long baseline experiments Christos Touramanis (University of Liverpool (GB)) With three-neutrino-families mixing firmly established in recent years, and the relatively large value of theta_13 observed, the race is on to discover CP Violation in neutrino mixing in accelerator-driven long baseline neutrino oscillation experiments. NOvA and T2K will continue to provide increasingly precise measurements of the PMNS mixing matrix parameters into the next decade. DUNE will... 612. Recent results of the technological prototypes of the CALICE highly granular calorimeters Roman Poeschl (Laboratoire de l'Accelerateur Lineaire (FR)) The CALICE Collaboration has been conducting R&D for highly granular calorimeters since more than 15 years with an emphasis on detectors for Linear Colliders. This contribution will describe the commissioning, including beam tests, of large scale technological prototypes of a silicon tungsten electromagnetic calorimeter and hadron calorimeters featuring either a gaseous medium or scintillator... 462. Performance of Large Area Picosecond Photo-Detectors – LAPPD Dr Alexey Lyashenko (Incom Inc.) The Large Area Picosecond Photo-Detector (LAPPD™) is a microchannel plate (MCP) based planar geometry photodetector featuring single-photon sensitivity, semitransparent bi-alkali photocathode, millimeter spatial and picosecond temporal resolutions and an active area of to 350 square centimeters. The "baseline" LAPPD™ employs a borosilicate float glass hermetic package. Photoelectrons are... 555. EIGER: High frame rate pixel detector for synchrotron and electron microscopy applications Erik Fröjdh (Paul Scherrer Institut) The hybrid pixel detector EIGER, featuring 75$\times$75 $\mu$m$^2$ pixel size, is a photon counter designed for use at synchrotrons. The chip and the complete readout system were designed at the Paul Scherrer Institut, Switzerland. A single chip consists of 256$\times$256 pixels and can acquire data at 22000~frame/s with 4-bit counter depth. In a full module, 4$\times$2 chips are bonded to a... 925. Award Ceremony 924. Instrumentation -- state of the art and a look into the future Christian Joram (CERN) Progress in experimental physics relies often on advances and breakthroughs in instrumentation, leading to substantial gains in measurement accuracy, efficiency and speed, or even opening completely new approaches and methods. At a time when the R&D for the upgrade of the large LHC experiments is still in full swing, the Experimental Physics Department of CERN has proposed a new technological... 926. Closing Manfred Krammer (CERN) 750. 3D silicon sensor optimisation for high resolution time measurements Angelo Loi (Universita e INFN, Cagliari (IT)) Poster Session A Looking forward to future High Luminosity LHC experiments, efforts to develop new tracking detectors are increasing. A common approach to improve track reconstruction efficiency in high pile-up conditions is to add time measurement per pixel with resolution smaller than 50 ps. Different sensor technologies are under development in order to achieve those performances, like low gain avalanche... 456. A First Look At the Timepix2 in Heavy Ion Beams Prof. 
Lawrence Pinsky (University of Houston) Poster Session B The long-awaited Timepix2 from the Medipix2 Collaboration is due to be available this fall (2018), and plans are in place to expose it to the Heavy Ion beams at the HIMAC facility in Japan this December (2018). The initial goal is to evaluate the extended dynamic range of its novel pre-amplifier design, and to exercise its overall performance in a wide range of heavy ion beams. The... 603. A long slab prototype for ILD SiW-Ecal Frederic Bruno Magniette (Université Paris-Saclay (FR)) The long slab is a new prototype for the SiW-Ecal, a silicon tungsten electromagnetic calorimeter for the ILD detector of the future International Linear Collider. This new prototype has been designed to demonstrate the ability to build a full-length detecting layer (1.60 m for the ILD barrel). Indeed, this length induces difficulties for clock and signal propagation and data integrity. The... 647. A novel 4D fast track finding system on FPGA Marco Petruzzo (Università degli Studi e INFN Milano (IT)) We present a novel 4D fast track finding system capable of reconstructing four dimensional particle trajectories in real time using precise space and time information of the hits. The fast track finding device that we are proposing is designed for the high-luminosity phase of the LHC and it is based on a massively parallel algorithm to be implemented in commercial field-programmable gate array... 763. A particle detector system that exploits liquid argon scintillation light Marta Babicz (CERN, Geneva, Switzerland and Institute of Nuclear Physics PAN, Cracow, Poland) This paper describes a particle detection system that exploits the prompt signals from the scintillation light produced by ionizing particles in liquid argon. The system includes 10 R5912 Hamamatsu photomultipliers (PMTs) coated with TPB for the detection of the VUV scintillation light. A laser calibration system is used to set the gains and determine the relative timing of the PMTs. The setup... 469. Active doping profile of silicon detectors using innovative TLM scan method Dr Abdenour Lounis (Laboratoire de l'Accélérateur Linéaire, Orsay) Improvements of silicon detector technology for high energy physics applications demand the introduction of doping carriers into the sensor material to optimize the charge collection efficiency of the detecting devices. Total doping profile of any silicon sensor device can be measured with very high precision using secondary ions mass spectrometry (SIMS). In this work new 3D SIMS scanning... 897. ArgonCube: A Modular LArTPC with Pixelated Charge Readout Thomas Josua Mettler (Universitaet Bern (CH)) ArgonCube is a novel, modular approach to Liquid Argon Time Projection Chambers (LArTPCs). ArgonCube segments the total detector volume into a number of electrically and optically isolated TPCs sharing a common cryostat, providing improved performance while also mitigating technical risks with LAr purity and electric field. The field shaping uses a continuous resistive plane, field-shell,... 740. Beam test results of two shashlyk ECAL modules for NICA-MPD Mr Yulei Li (Tsinghua University) The electromagnetic calorimeter (ECal) is an important detector of the Multi Purpose Detector (MPD) at the NICA collider. A shashlyk-type electromagnetic calorimeter is selected as MPD ECal. The particular goals of the MPD ECal are to measure the spatial positions and energies of photons and electrons.
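The EIGER abstract a few entries above quotes 256 × 256 pixels, 4-bit counters and 22000 frames per second per chip; multiplying these numbers out gives the raw data rate the readout system has to sustain:

    # Raw data rate implied by the EIGER figures quoted above:
    # 256 x 256 pixels, 4-bit counters, 22000 frames/s per chip, 4 x 2 chips per module.
    pixels = 256 * 256
    bits_per_frame = pixels * 4
    rate_chip_bps = bits_per_frame * 22000
    rate_module_bps = rate_chip_bps * 8
    print(rate_chip_bps / 1e9, rate_module_bps / 1e9)   # ~5.8 and ~46 Gbit/s, before any compression

These are the kinds of bandwidth figures that drive the dedicated readout systems mentioned throughout this listing for high-frame-rate imaging detectors.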
The whole ECal consists of 43008 shashlyk tower and each tower consists of 220 layers of 1.5mm... 459. Beam-Loss Damage Experiment on ATLAS-like Silicon Strip Modules Using an Intense Proton Beam Xavi Fernandez-Tejero (CNM-Barcelona (ES)) The ATLAS silicon tracker detectors are designed to sustain high dose integrated over several years of operation. This very substantial radiation hardness should also favour the survival of the detector in case of accidental beam losses. An experiment performed in 2006 showed that ATLAS Pixel detector modules (silicon planar hybridly coupled with FE-I3 electronics) could survive to beam... 875. Calibration and Performance of the Compact High Energy Camera SiPM Prototype Front-End Electronics proposed for the Cherenkov Telescope Array Mr Connor Duffy (University of Leicester) The Compact High Energy Camera (CHEC) is a full-waveform camera, designed and proposed for the dual mirror Schwarzschild-Couder small sized telescope of the Cherenkov Telescope Array. CHEC-S is the second prototype and is based upon silicon photomultiplier (SiPM) photodetectors optimised for single photon counting and nanosecond timing. The camera water-cooled focal plane plate comprises a... 488. Calibration of a polycrystalline 3D diamond detector fabricated for small field dosimetry Keida Kanxheri (INFN - National Institute for Nuclear Physics) In medical radiation dosimetry, the use of small photon fields is almost a prerequisite for high precision localized dose delivery to delineated target volume. The accurate measurement of standard dosimetric quantities in such situations depends on the size of the detector with respect to the field dimensions. Thanks to a new technology, polycrystalline diamond devices with 3-dimensional... 824. CEvNS detection with CONUS Tobias Schierhuber (Max-Planck-Institut für Kernphysik) Coherent elastic neutrino nucleus scattering (CEvNS) has been predicted since 1973, but eluded detection for more than 4 decades mainly due to a lack of technology able to detect small nuclear recoils. The process was first observed in August 2017 using a spallation neutron source [1]. Complementary to it, new projects like CONUS try to detect CEvNS using reactor anti-neutrinos. CONUS is based... 890. CMOS based SPAD Arrays for the Detection of Rare Photon Events at Cold Temperatures Michael Keller (Heidelberg University) We have operated a 2D array of $88 \times 88$ Single Avalanche Photo Diodes fabricated in a CMOS technology in liquid nitrogen to evaluate its dark count rate at low temperatures. We found a rate of <20 dark counts per second and per $mm^2$ equivalent active area and observed an additional background at the edge of the array, which we attribute to photons emitted in the peripheral circuitry.... 703. Coincident Detection of Cherenkov Photons from Compton Scattered Electrons for Medical Applications Mr Reimund Bayerlein (PhD Student) Throughout the last decade there has been an increasing interest in an efficient gamma ray detector for medical applications. Especially proton beam therapy and nuclear medicine could benefit from the ability to detect higher energetic gamma-radiation above 1 MeV. One possible detector would be a dual-plane Compton Camera. Coincident detection of energy and position of both the electron and... 510. Compact segmented hadron calorimeter for detection of low energy spectators at MPD/NICA facility. 
Alesandr Ivashkin (Russian Academy of Sciences (RU)) The forward hadron calorimeter (FHCal) for the detection of the protons and neutrons in energy range of 1-5 GeV is discussed. Since the calorimeter will operate inside the superconductive magnet with limited available space its length is about one meter only. A single FHCal module consists of 42 lead/scintillator sandwiches arranging in overall 4 interaction lengths. However, it works well... 578. Comparison of TCAD simulations of irradiated Si-sensors with beam-test measurements Dr Joern Schwandt (University of Hamburg) The aim of the work is to develop a model which allows reliably predicting the effects of radiation damage by hadrons in segmented silicon sensors up to 1 MeV equivalent neutron fluences of $2\cdot 10^{16} \text{n}/\text{cm}^2$, which are expected at the High-Luminosity LHC for an integrated luminosity of $3000~\text{fb}^{-1}$. Recently we presented a model with five effective traps (Hamburg... 623. Comparison of transition radiation measurements with a Si and GaAs pixel sensor on a TimePix3 chip Florian Dachs (Vienna University of Technology (AT)) Growing energies of particles at modern or planned particle accelerator experiments as well as various cosmic ray experiments require particle identification at gamma factors of up to $\approx 10^5$. At present there are no detectors capable of identifying single charged particles with reliable efficiency in this range of gamma-factors. New developments in pixel detectors allow to perform... 587. Construction of Vacuum-compatible Straw Tracker for COMET Phase-I Prof. Hajime NISHIGUCHI (KEK) The COMET experiment at J-PARC aims to search for a lepton-flavour violating process of muon to electron conversion in a muonic atom, $\mu$-e conversion, with a branching-ratio sensitivity of <$10^{−16}$, 4 orders of magnitude better than the present limit, in order to explore the parameter region predicted by most of well-motivated theoretical models beyond the Standard Model. The need for... 815. Deep Machine Learning on FPGAs for L1 trigger and Data Acquisition Sioni Paris Summers (Imperial College Sci., Tech. & Med. (GB)) Machine learning is becoming ubiquitous across HEP. There is great potential to improve trigger and DAQ performances with it. However, the exploration of such techniques within the field in low latency/power FPGAs has just begun. We present HLS4ML, a user-friendly software, based on High-Level Synthesis (HLS), designed to deploy network architectures on FPGAs. As a case study, we use HLS4ML... 551. DEPFET Detector development for the Wide Field Imager of ATHENA Dr Wolfgang Treberspurg (Max-Planck-Institut für extraterrestrische Physik) The ATHENA X-ray observatory was selected as ESA's second large-class mission, scheduled to launch in the early 2030s. To enable detailed explorations of the hot and energetic universe, two complementary focal-plane instruments are coupled to a high-performance X-ray telescope. As one of these, the WFI (Wide Field Imager) features an unprecedented survey power by combining an excellent count... 730. Design of large area MCP-PMT and a novel bowl-shape MCP Ms Ping Chen (Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences) The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino experiment designed to determine neutrino mass hierarchy and precisely measure oscillation parameters. The R&D of large area microchannel plate photomultiplier tube (MCP-PMT) for JUNO started in 2011. 
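The HLS4ML abstract above describes turning trained neural networks into FPGA firmware through High-Level Synthesis. A minimal usage sketch following the publicly documented hls4ml Python interface; the tiny Keras model, the output directory and the FPGA part string are illustrative assumptions, and the exact API should be checked against the hls4ml documentation:

    import hls4ml
    from tensorflow import keras

    # A tiny dense network standing in for a trigger-level classifier (illustrative only).
    model = keras.Sequential([
        keras.layers.Dense(32, activation="relu", input_shape=(16,)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])

    # Derive an HLS configuration (precision, reuse factor, ...) from the Keras model,
    # then convert it into an HLS project for the vendor synthesis tools.
    config = hls4ml.utils.config_from_keras_model(model, granularity="model")
    hls_model = hls4ml.converters.convert_from_keras_model(
        model, hls_config=config, output_dir="hls4ml_prj", part="xcku115-flvb2104-2-i"
    )
    hls_model.compile()            # builds a C++ emulation library for bit-accurate checks
    # hls_model.build(synth=True)  # would launch the actual HLS synthesis step

The point of the tool, as the abstract stresses, is that the network is specified once in a familiar Python framework and the latency- and resource-critical firmware is generated rather than hand-written.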
In the last 3 years, much progress has been achieved. The high performance 8-inch and 20-inch prototypes were... 625. Design, Construction and Test of Small-Diameter Drift Tube Chambers for the Phase-1 Upgrade of the ATLAS Muon Spectrometer Patrick Rieck (Max-Planck-Institut fur Physik (DE)) The ATLAS muon spectrometer comprises an efficient muon trigger system and high muon momentum resolution up to the TeV scale. In the regions at both ends of the inner barrel layer of the muon spectrometer the trigger coverage in combination with the endcap muon spectrometer is limited. In order to improve the muon trigger capabilities at higher luminosities, additional resistive plate chambers... 772. Detection of epithermal and fast neutrons with Ce doped GAGG and LYSO scintillation materials. New advantages for TOF techniques. Prof. Mikhail Korzhik (Research Institute for Nuclear Problems, Minsk, Belarus ;NRC "Kurchatov Institute", Moscow, Russia ), Dr Hans-Georg Zaunick (Justus-Liebig-Universität ) Recently, we demonstrated that epithermal and fast neutrons produce distinct γ-quanta in the energy range bellow 1 MeV in Gd containing media. These soft quanta can be detected in the scintillation material, containing Gd ions. One of the promising candidates for this purpose is gadolinium-gallium-aluminum garnet Gd3Al2Ga3O12 (GAGG) doped with Ce, having high light yield and excellent energy... 497. Detector developments for high performance muography applications Dr Dezső Varga (Hungarian Academy of Sciences (HU)) Imaging with cosmic muons dates back by decades, initiated by searching for hidden structures in the Chephren pyramid by Alvarez. Since then, the term "muography" was coined for this possibility offered by nature's highly penetrating particles, and can be applied for imaging various large scale objects. As the observation point needs to be below the object of interest, either the detector is... 737. Development of a large active area beam telescope based on the SiD micro-strip sensor Mengqing Wu (Deutsches Elektronen-Synchrotron (DE)) A new beam telescope, Lycoris, is currently being installed as an improvement for the DESY test beam infrastructure within the EU Horizon2020 AIDA-2020 project. Lycoris telescope is designed to cover a large area for providing a reference momentum measurement to beam users in an 1 T solenoid magnet. It consists of six layers of the 10×10 cm2 surface, 25 μm(50 μm) sensor(readout) pitch,... 661. Development of a prototype of intraoperative PET-laparoscope system for surgical navigation in cancer surgery Ms Madhushanka Rukshani Liyanaarachchi (The University of Tokyo) PET (positron emission tomography) is used to preoperatively identify lymph node metastasis. However it is difficult to locate those lymph node metastasis during surgery. Intraoperative PET-laparoscope system consisting of an external fixed detector array and a movable detector which can be inserted into a patient's stomach has been proposed to identify lymph node metastasis during gastric... 821. Development of a Resistive Plate Device with micro-pattern technique. Paolo Iengo (CERN) We present an RPC built with techniques developed for micro-pattern gaseous detectors. It consists in two equal electrode plates made of FR4 substrate with 250 Cu readout strips. A 50 um insulating foil, carrying resistive lines, is glued on top of the substrate. Both the Cu and the resistive strips have a pitch of 400 um and width of 300 um. The plates are spaced by a 2 mm gap and rotated by... 873. 
Development of a Time Projection Chamber for Ion Transmission Imaging Jona Bortfeldt (Ludwig-Maximilians-Univ. Muenchen (DE)) At the LMU Department of Medical Physics a portable platform for proton irradiation of small animals is under development for pre-clinical research with tumor bearing mouse models. The platform intends to use beams available at clinical facilities. It consists of a custom beamline to produce particle beams of the needed energy range and focus and several beam monitoring and imaging systems,... 653. Development of Hafnium STJ for cosmic neutrino background search Dr Takashi Iida (University of Tsukuba) The COBAND experiment searches for far infra red photons from decays of cosmic neutrino background. In order to achieve sensitivity for neutrino lifetime predicted by some theoretical model, 2% energy resolution is required for the detector. We are developing a superconducting tunnel junction detector (STJ) using Hafnium superconductor (Hf) that have very small bandgap. Bandgap of Hf is 0.021... 654. Development of Spherical Proportional Counter for Light WIMP search ali Dastgheibi Fard (CNRS/LSM) The Spherical gaseous detector (or Spherical Proportional Counter, SPC) is a novel type of particle detector, with a broad range of applications. Its main features include a very low capacitance, a potential low energy threshold independent of the volume, a good energy resolution, robustness and a single detection readout channel. Applications range are from radon emanation gas monitoring,... 549. Discharge behavior of resistive Micromegas Mariagrazia Alviggi (Universita e sezione INFN di Napoli (IT)) We performed detailed studies to measure the effect of the mesh geometry and gas mixtures on the discharge behavior of resistive Micromegas. A Micromegas detector has been built at CERN with a special design allowing to easily replacing the mesh. It has 1028 readout strips with a pitch of 400 μm, an active area of 40x50 cm2 and 128 μm high pillar spacers. The resistive strips, screen-printed... 868. Energy deposition of protons in silicon sensors at MedAustron Mr Peter Paulitsch (Austrian Academy of Sciences (AT)) MedAustron is a hadron synchrotron primarily designed and built for cancer tumor treatment. Besides its clinical purpose, it is equipped with a dedicated beam line for non-clinical research. This beam line can be used for beam tests utilizing protons with an energy of up to 252 MeV at the moment, but 800 MeV will be available through 2019 as well as Carbon ions. In order to understand the... 685. Fabrication and Testing of a 1024-pixel SiPM Camera Dr Jik Lee (The Center for High Energy Physics, Kyungpook National University) Abstract: We have fabricated the 1024-pixel SiPM sensor and the associated electronics. We integrated the SiPM sensor and the electronics to build a pinhole camera. In this paper, we present the fabrication and assembly procedure of the SiPM sensor and the readout electronics, and the preliminary result of testing the pinhole camera. This camera can be readily used as an X-ray detector with an... 907. Fast Beam-Beam Collisions Monitor for experiments at NICA Dr Grigory Feofilov (Saint-Petersburg State University) Two interaction points are foreseen for beam intersections of NICA collider at JINR. The event-by-event monitoring of collisions is required both for the beam tuning and for event selection using the precise timing (T0) of the events for MPD and SPD experiments at NICA. 
Data on the reaction plane and on the event centrality of nucleus-nucleus collisions should be also obtained for physics... 548. Feasibility study of the use of CMOS image sensors in radioguided surgery with $\beta^-$ emitters Luisa Alunni Solestizi (Universita e INFN, Perugia (IT)) A feasibility study about the employment of widely and commercially avaliable CMOS imager sensors in a radioguided surgery probe for $\beta^-$ detection is presented. The radioguided surgery is a medical technique, which involves the use of a manageable probe for the intraoperative detection of the radiation emission of radiopharmaceuticals. The probe support the visual inspection of the... 870. Field effect transistor test structures for studies of inter-strip isolation in silicon strip detectors Viktoria Hinger (Austrian Academy of Sciences (AT)) Because of its radiation resilience, p-type silicon has been established as baseline material for tracking detectors in upcoming high-luminosity physics experiments. When deciding on the quality of p-type silicon strip sensors, strip isolation is crucial. Regions of highly doped p+ implant (p-stop) are introduced between n+ strips to interrupt the electron accumulation layer that forms at the... 520. First demonstration of portable Compton camera to visualize 223-Ra concentration for radionuclide therapy Mr Kazuya Fujieda (Waseda University) Radionuclide therapy (RNT) is an internal radiation therapy that can selectively damage cancer cells. Recently, the use of alpha-emitting radionuclides was initiated in RNT owing to its dose concentration and short range. In particular, 223-Ra is widely used for bone metastasis cancer. Despite its potential for clinical applications, it is difficult to know whether the drug has been properly... 899. First results on 3D pixel sensors interconnected to RD53A readout chip after high energy proton irradiation Jordi Duarte Campderros (Universidad de Cantabria (ES)) In this presentation results obtained in beam test experiments with 3D columnar pixel sensors interconnected with the RD53A readout chip are reported. RD53A is the first prototype in 65nm technology issued from RD53 collaboration for the future readout chip to be used in the upgraded pixel detectors. The interconnected modules have been tested on hadron beam at CERN before and after... 787. First test beam results obtained with IDEA, a detector concept designed for future lepton colliders Lisa Borgonovi (Universita e INFN, Bologna (IT)) IDEA (International Detector for Electron-positron Accelerators) is a detector concept designed for a future leptonic collider operating as a Higgs factory. It is based on innovative detector technologies developed over years of R&D. In September 2018, prototypes of the proposed subdetectors have been tested for the first time on a beam line at CERN, setting a milestone for the detector... 881. First tests of a reconfigurable depleted MAPS sensor for Digital Electromagnetic Calorimetry Dr Ioannis Kopsalis (University of Birmingham (GB)) Digital calorimetry relies on a highly granular detector where the cell size is sufficiently small so that only a single particle in a shower enters each cell within a single readout cycle. The DECAL sensor, a depleted monolithic active pixel sensor (DMAPS), has been proposed as a possible technology for future digital calorimeters. A DECAL sensor prototype has been designed and fabricated in... 574. 
Flavour Physics at the High Luminosity LHC: LHCb Upgrade II Vadym Denysenko (Universitaet Zuerich (CH)) The LHCb Collaboration is planning an Upgrade II, a flavour physics experiment for the high luminosity era. This will be installed in LS4 (2030) and targets an instantaneous luminosity of 1 to 2x10 34 cm-2 s-1, and an integrated luminosity of at least 300fb-1. Modest consolidation of the current experiment will also be introduced in LS3 (2025). This talk will present an overview of the LHCb... 658. HEPS-BPIX2: the Hybrid Pixel Detector with TSV Processing for High Energy Photon Source in China Dr Jie Zhang (Institute of High Energy Physics, Chinese Academy of Sciences) HEPS-BPIX2 is the second prototype of single-photon counting pixel detector with 1 million pixels developed for applications of synchrotron light sources. It follows the first prototype, HEPS-BPIX, with a pixel size of 150 µm x 150 µm and frame rate up to 1.2 kHz at 20-bit dynamic range. This paper contains a detailed description of HEPS-BPIX2 upgrade with a recently launched Through Silicon... 674. High light yield calcium iodide (CaI2) scintillator for astroparticle physics Dr Masao Yoshino (Tohoku University) Large light yield of scintillator can be a key to develop a good detector for astroprticle physics. Calcium Iodide (CaI2) crystal is discovered by Hoftadter et al. in 1960s and known to have large light yield. University of Tsukuba and IMR, Tohoku University are jointly developing CaI2 crystal from 2016 using updated facilities and leading-edge techniques. At first, vaporization of CaI2... 597. High-purity scintillating $\mathrm{CaWO_4}$ crystals for the direct dark matter search experiment CRESST Valentyna Mokina for the CRESST Collaboration (HEPHY) The direct dark matter search experiment CRESST (Cryogenic Rare Event Search with Superconducting Thermometers) uses scintillating $\mathrm{CaWO_4}$ single crystals as targets for possible nuclear recoils induced by Dark Matter particles. An intrinsic radioactive contamination of the crystals as low as possible is crucial for the sensitivity of the detectors. In the past $\mathrm{CaWO_4}$... 500. Instrumentation and optimization studies for a Beam Dump Experiment (BDX) at MESA Mr Mirco Christmann (Institute for Nuclear Physics, Mainz) At the Institute for Nuclear Physics in Mainz the new electron accelerator MESA will go into operation within the next years. In the extracted beam operation (155 MeV, 150 μA) the P2 experiment will measure the weak mixing angle in electron-proton scattering in 10,000 hours operation time. Therefore the high-power beam dump of this experiment is ideally suited for a parasitic dark sector... 554. Instrumentation concepts for Neganov-Luke assisted mK-temperature Germanium detectors in dark matter search Dr Bernhard Siebenborn (Karlsruhe Institute of Technology (KIT)) In direct searches for dark matter, the signature is a recoiling nucleus being hit by a massive dark matter particle, a so-called WIMP. A viable technology to search for such recoil signatures are detector arrays of Ge mono-crystals operated at a few mK temperature and equipped with electrodes and thermal sensors. Applying a small (few V/cm) external field, a simultaneous measurement of... 727. Investigating Microchannel Plate PMTs with TOFPET2 multichannel picosecond timing electronics Prof. 
Jon Lapington (University of Leicester) TOFPET2 is the second-generation design of a high-performance multichannel picosecond timing readout electronics ASIC produced by PETsys Electronics SA, Portugal. Originally developed for time-of-flight positron emission tomography using silicon photomultipliers, in this work we describe an experimental programme to evaluate the performance of TOFPET2 with pixelated microchannel plate... 805. Investigations on the radiation damage of the LHCb VELO: a full review Pawel Kopciewicz (AGH University of Science and Technology (PL)) The LHCb Vertex Locator (VELO) is a silicon micro-strip detector operating extremely close to the LHC proton beams. During nominal data-taking the innermost active strips are as close as ~8 mm to the beams. This proximity makes the LHCb VELO an ideal laboratory to study radiation damage effects in silicon detectors. The analysis of charge collection efficiency (CCE) data showed that there is... 735. Large acceptance high rate GEM detectors for muon tracking in heavy ion collisions of CBM experiment at FAIR Anand Kumar Dubey (Department of Atomic Energy (IN)) The Compressed Baryonic Matter(CBM) experiment at the upcoming FAIR facility will explore the phase diagram of hadronic matter in the region of highest baryon densities with various rare probes including light vector mesons and charmonium decaying into di-muon pairs. Unprecedented interaction rates of 10 MHz Au+Au collisions in an energy range (upto 11 AgeV) is a unique feature in CBM. The... 532. Latest Improvements of Microchannel-Plate PMTs Dr Albert Lehmann (Universität Erlangen-Nürnberg) Microchannel-plate (MCP) PMTs were identified as the only suitable photon sensors for the DIRC detectors of the PANDA experiment at FAIR. PANDA is a hadron physics experiment which employs a high intensity antiproton beam of up to 15 GeV/c to perform high precision measurements of, among others, objectives like charmonium spectroscopy and search for gluonic excitations. As the long-standing... 766. Level-1 track finding with an all-FPGA system at CMS for the HL-LHC Kristian Hahn (Northwestern University (US)) The CMS experiment at the LHC is designed to study a wide range of high energy physics phenomena. It employs a large all-silicon tracker within a 3.8T magnetic solenoid, which allows precise measurements of transverse momentum (pT) and vertex position. This tracking detector will be upgraded to coincide with the installation of the High-Luminosity LHC, which will provide luminosities of up... 716. Long-term and efficient operation of the MWPC muon detector at LHCb Sofia Kotriakhova (Petersburg Nuclear Physics Institut (RU)) With its ~1650 mˆ2 of MWPCs, the muon detector of LHCb is one of the largest instrument of this kind worldwide, and one of the most irradiated. Currently we run at the relatively low instantaneous luminosity of 4x10ˆ32 cm-2s-1, nevertheless the most irradiated MWPCs already integrated ~0.7 C/cm of accumulated charge per wire. The statistics of gas gaps affected by high voltage trips in the... 582. Longevity studies and Phase 2 electronics upgrade for CMS Cathode Strip Chambers in preparation of HL-LHC Bingran Wang (Northeastern University (US)) The muon system of the CMS experiment includes 540 Cathode Strip Chambers (CSCs) that serve as the primary source for muon detection and triggering in the end cap region. The CSCs are intended to operate throughout the life of the CMS experiment, including the challenging environment of the HL-LHC era. 
To assess the longevity of CSCs over the HL-LHC lifespan, a new campaign of accelerated... 487. Low-pressure TPC with THGEM readout for ion identification in Accelerator Mass Spectrometry Tamara Shakirova (Budker Institute of Nuclear Physics) A new technique for ion identification in Accelerator Mass Spectrometry (AMS) has been proposed based on measuring the ion track ranges using a low-pressure time projection chamber (TPC). As a proof of principle, a low-pressure TPC with charge readout using a THGEM multiplier was developed. The tracks of alpha-particles from various radioactive sources were successfully recorded in the TPC.... 514. Luminometers for Future Linear Collider Experiments Dr Veta Ghenescu (Institute of Space Science (RO)) The FCAL collaboration develops fast, compact calorimeters to measure the luminosity of electron-positron collisions with high precision using small angle Bhabha scattering, and bunch-by-bunch using beamstrahlung pairs. Searches for new physics also require the detection of high energy electrons at low angles. Several sensor options, such as GaAs or single crystal sapphire, are under... 557. Lynkeos MIS: A Muon Imaging System for Nuclear Waste Containers Prof. Ralf Kaiser (University of Glasgow & Lynkeos Technology Ltd.) The Lynkeos Muon Imaging System (MIS) uses cosmic-ray muons for the 3D-imaging of the contents of shielded nuclear waste containers. The detector system consists of four scintillating fibre tracker modules using 64 channel MPAMTs as readout, two above and two below the object to track the muons. Complex imaging algorithms then reconstruct a 3D image of the object and its contents. 886. Measurements of Beam Backgrounds at SuperKEKB Luka Santelj (Jozef Stefan Institute) The Belle II experiment at the SuperKEKB asymmetric energy $e^+e^-$ collider in Tsukuba, Japan, finished its commissioning phase in 2018. In the years to follow, SuperKEKB will deliver an instantaneous luminosity of $8·10^{35}$cm$^{−2}$s$^{−1}$, which is 40 times higher than the record luminosity of its predecessor, KEKB. In order to exploit the physics potential of this new generation... 564. MIMOSIS, a CMOS sensor for the CBM Micro Vertex Detector Dr Michael Deveaux (Goethe University Frankfurt) The Compressed Baryonic Matter Experiment (CBM) is one of the core experiments of the future FAIR facility at Darmstadt, Germany. This experiment will explore the phase diagram of strongly interacting matter in the regime of high net baryon densities with numerous rare probes. The Micro Vertex Detector (MVD) will determine the secondary decay vertex of open charm particles with $\sim 50~\rm \mu... 484. MPGD hole-by-hole gain scanning by UV excited single photoelectron detection Gábor Nyitrai (Wigner Research Centre for Physics, Budapest (HU)) The high-resolution scanner developed with focused UV light makes it possible to study the single-photoelectron response of MPGDs on the sub-millimeter scale. This technology reveals the microstructure of photo-efficiency and local gain to quantitatively compare different GEM geometries and thus provides a powerful tool for GEM quality assurance. The readout detector uses a single GEM with... 883. Neutron detector based on a layer of LiI:Eu Ms Tamara Holub (ISMA NAS of Ukraine) Neutron detectors based on gas-discharge tubes filled with 3He have a high thermal neutron registration efficiency and are not sensitive to other ionizing radiation. But the main disadvantage of such detectors is the high cost of the very rare isotope 3He.
One of the effective elements for the registration of thermal neutrons is the isotope of lithium 6Li, which nature contamination is much... 843. Neutron Gas Scintillation Imager with Glass Capillary Plate Mr Toru Moriya (Department of Physics, Yamagata University) A glass capillary plate (CP) is a thin glass plate of 300 $\mu$m thickness with a large number of through holes (50 $\mu$m diameter with 64 $\mu$m pitch). The CP is one of a device for a hole-type micropattern gaseous detectors (MPGD) as represented by gas electron multiplier (GEM). We have been developing a neutron gas scintillation imager (n-GSI) consisting of a thin layer of $^{10}$B, a CP... 572. New insights on boiling carbon dioxide flow in mini- and micro-channels for optimal silicon detector cooling Desiree Hellenschmidt (Universität Bonn (DE)) Whilst the thermal management needs of future silicon detectors are increasing, the mass and volume minimization of all ancillaries gets more demanding. This requires highly effective active cooling in very small channels. Due to its favourable thermo-physical properties, evaporative CO2 is used as refrigerant for the future generations of silicon detectors at LHC. However, available data on... 748. New Large Aperture Photodetectors for a Water Cherenkov Detector Yasuhiro NISHIMURA (The University of Tokyo) Three types of 50 cm-diameter photo-detectors were newly developed for a future large water Cherenkov detector, Hyper-Kamiokande. These detection performance was largely improved by adopting different amplification systems from a conventional 50-cm photomultiplier tube (PMT) in Super-Kamiokande. A new PMT with a box-and-line dynode was completed by optimizing the surface curvature, alignment... 533. NONINVASIVE METHOD OF RECOVERY OF GAS PARTICLE DETECTORS UNDER OPERATION IN HIGH-INTENSITY FIELDS OF RADIATION Prof. Anatoly Krivshich Aging effects result in a surface degradation of both the anode and cathode electrodes, which occur in different forms. Anode type of aging is associated with silicon deposits formed on the surface of the anode wires. This effect is manifested even with small accumulated charges in the range of (0.1–1.0) Coulomb per cm of the anode wire length. If there would be no silicon contamination in the... 906. Novel monolithic array of Silicon Drift Detector systems designed for X-ray absorption fluorescence and low energy X-ray fluorescence Daniela Cirrincione (INFN-Ts) In recent decades, new and better detectors for X-ray spectroscopy have been developed, and, among these, many are based on Silicon Drift Detectors (SDD). We present a further improvement resulting from the dedicated optimization of the whole detector system: SDD detector design and production technology, ultra-low noise front-end electronics, dedicated acquisition system and digital... 764. Operation and performance of the PADME active target Mrs Federica Oliva (INFN Lecce and Dip. di Matematica e Fisica, Università del Salento) Large size and thin high-quality polycrystalline diamond were used to build the full carbon active target of the PADME experiment, at the Beam Test Facility (BTF) of the Laboratori Nazionali di Frascati, searching for a dark photon of mass up to about 23.7 MeV. The diamond sensors were ordered from a US commercial firm and graphitic electrodes on the surfaces were produced by a UV excimer... 518. Operation of a silicon microstrip detector prototype for ultra-fast imaging at a synchrotron radiation beam. 
Mr Lev Shekhtman (Budker Institute of Nuclear Physics (RU)) A method for imaging ultra-fast processes, such as explosions or fast combustion, at a synchrotron radiation beam is being developed at the Siberian Synchrotron and Terahertz Radiation Center (SSTRC). Two stations are operating at beam line 0 at the VEPP-3 storage ring and at beam line 8 at the VEPP-4M storage ring. Both stations are equipped with the detector for imaging of explosions, DIMEX,... 531. Particle detectors in Rare-gas crystals Dr Marco Guarise (University of Ferrara and INFN) Low energy threshold detectors are necessary in many frontier fields of experimental physics. In particular, they are extremely important for probing possible Dark Matter (DM) candidates. We present the activity of the AXIOMA matrix R&D project, a novel detection approach that exploits rare-gas crystals, both undoped and doped, maintained at low temperature. In the undoped matrices, the... 588. Particle Identification with DIRCs at PANDA Prof. Michael Johannes Dueren (Justus-Liebig-Universitaet Giessen (DE)) The DIRC technology (Detection of Internally Reflected Cherenkov light) offers an excellent possibility to minimize the form factor of Cherenkov detectors in hermetic high energy detectors. The PANDA experiment at FAIR in Germany will combine a barrel-shaped DIRC with a disc-shaped DIRC to cover an angular range of 5 to 140 degrees. Particle identification for pions and kaons with a separation... 602. Performance of four CVD diamond radiation sensors at high-temperature Dr David Smith (Brunel University London) Ionising radiation detectors based on wide band-gap materials have the potential to operate at temperatures greater than 200 °C. Such detectors are important in applications such as monitoring near reactors and in deep oil and gas well bore-hole logging. We discuss the development of alpha particle detectors, based on CVD diamond, which operate with good charge collection efficiency and energy... 792. Performance of the CMS RPC upgrade using 2D fast timing readout system Konstantin Shchablo (Centre National de la Recherche Scientifique (FR)) A new generation of RPC chambers capable of withstanding high particle fluxes (up to 2000 Hz/cm2) and instrumented with precise timing readout electronics is proposed to equip two of the four high eta stations of the CMS muon system. Doublet RPC detectors, each made of two 1.4 mm HPL electrodes separated by a gas gap of the same thickness, are proposed. The new scheme reduces the amount of... 863. PID system for Super C-$\tau$ Factory at Novosibirsk Alexander Barniakov (Novosibirsk State University (RU) & Budker INP (RU)) The Super C-$\tau$ Factory at Novosibirsk is a new experiment at an e$^+$e$^-$ collider with energy W=2$\div$6 GeV and luminosity up to 10$^{35}$ cm$^{-2}$s$^{-1}$ (100 times higher than in experiments currently operating in this energy region). For successful execution of the broad experimental program, the development of a universal detector with excellent parameters is needed. R&D activities on all... 609. Positron tracking detector for J-PARC muon g-2/EDM experiment Takashi Yamanaka (Kyushu University) The muon anomalous magnetic moment (g-2) and the electric dipole moment (EDM) are calculated with high precision in the Standard Model (SM) and are suitable to search for new physics beyond the SM.
J-PARC muon g−2/EDM (E34) experiment aims to measure g−2 with a precision of 0.1 ppm and search for EDM with a sensitivity of $10^{−21}$ e$\cdot$cm with a different method from the muon g−2/ EDM... 797. Processing of AC-coupled n-in-p pixel detectors on MCz silicon using atomic layer deposition (ALD) grown aluminium oxide Jennifer Ott (Helsinki Institute of Physics (FI)) We report on the fabrication of capacitively-coupled (AC) n+-in-p pixel detectors on magnetic Czochralski silicon substrates. In our devices, we employ a layer of aluminium oxide (Al$_2$O$_3$) grown by atomic layer deposition (ALD) as dielectric and field insulator, instead of the commonly used SiO$_2$. As shown in earlier research, Al$_2$O$_3$ thin films exhibit high negative oxide charge,... 656. Production and performance study of Diamond-Like Carbon for the resistive electrode in MPGD application Dr You Lyu (University of Science and Technology of China (CN)) Diamond-like Carbon (DLC), a newly recognized resistive material, is a kind of metastable amorphous carbon material. DLC has recently received considerable attention and is increasingly exploited in resistive electrodes to suppress discharges in Micro-Pattern Gaseous Detector (MPGD). DLC coating provided a new method to produce high-quality resistive electrodes for MGPDs owing to it's low... 691. Prospects for Silicon, Diamond and Silicon Carbide detectors as fast neutron spectrometers Dr Marica Rebai (Istituto di Fisica del Plasma "P. Caldirola", Consiglio Nazionale delle Ricerche) The range of application of high band-gap solid state detectors is expanding in those environments where the high neutron flux is an issue, such as the high-flux spallation neutron sources and the thermonuclear fusion experiments. In particular, Diamond and Silicon Carbide are considered an interesting alternative to Silicon thanks to their high resistance to neutron damage. In this work we... 693. Proton Irradiation Hardness Investigations of 60 GHz Transceiver Chips for High Energy Physics Experimentations Mr Imran Aziz (Uppsala University) The replacement of wired readout systems with the broadband wireless links will significantly reduce the number of cables and their connectors at the LHC. These cables notably contribute in the active detector volume and cause multiple scattering. The availability of 60 GHz license free band (57-66 GHz) provides the opportunity to achieve 10's of Gbps wireless data rate for a single link. This... 876. RADEM, a Radiation Hard Electron Monitor for the JUICE mission Marco Pinto (LIP) The ESA next class-L mission to the Jovian system, JUICE, the Jupiter Icy Moons Explorer, will collect valuable data while orbiting Jupiter and three its moons for a period of three and a half years. RADEM, the Radiation Hard Electron Monitor is being developed to provide housekeeping information. It will also provide valuable scientific data on the energetic radiation environment of the... 819. Radiation damage in p-type EPI silicon pad diodes irradiated with different particle types and fluences Yana Gurimskaya (CERN) In view of the HL-LHC upgrade, radiation tolerant silicon sensors that contain low-resistivity p-type implants or substrates, like LGAD or HVCMOS devices, are being developed in the framework of ATLAS, CMS, RD50 and other sensor R&D projects. The devices are facing a particular problem - the apparent deactivation of the doping due to the irradiation, the so-called acceptor removal effect. 583. 
Real-Time Dose-Verification in Particle Therapy Using an Electron-Tracking Compton Camera Dr Kei Kamada (Tohoku University) Dose verification in situ is highly required in proton therapy. We have developed an electron-tracking Compton camera (ETCC) which consists of gaseous time projection chamber (TPC) and a position-sensitive scintillator. Since the TPC performs the electron-tracking, the ETCC is able to reconstruct Compton scattering event-by-event, and to reject the back ground strongly. In this presentation,... 704. Run and Slow Control System of the Belle II Silicon Vertex Detector Christian Irmler (Austrian Academy of Sciences (AT)), Mr Hao Yin (HEPHY Vienna) The Belle II Silicon Vertex Detector (SVD) is currently being finalized and commissioned at the SuperKEKB factory, Tsukuba, Japan. For a reliable operation and data taking of the SVD a sophisticated and robust run and slow control system has been implemented, which utilizes the Experimental Physics and Industrial Control System (EPICS) framework. EPICS uses client/server and publish/subscribe... 713. Series Production Testing, Commissioning and Initial Operation of the Belle II Silicon Vertex Detector Readout System Richard Thalmeier The Silicon Vertex Detector (SVD) of the Belle II experiment at the High Energy Accelerator Research Organization (KEK) in Tsukuba, Japan, consists of 172 double-sided microstrip technology silicon sensors arranged cylindrically in four layers around the interaction point. A total of 1748 readout chips (APV25) process and send the analog signals over 2.5 meter long copper cables to 48 Junction... 585. Silicon on Insulator Silicon Photomultiplier (SOI-SiPM) Array for Use in Photon counting CT Mr Akihiro Koyama (The University of Tokyo) Photon counting computed tomography (PCCT) based on indirect conversion detectors took great interests from its low fabrication cost and easy handling. In order to satisfy both count rate requirements of over 2 Mcps/mm2 and spatial resolution requirements in PCCT, sub-mm pitch silicon photomultiplier array using silicon on insulator technology (SOI-SiPM) was fabricated and evaluated... Mihael Makek (Faculty of Science, University of Zagreb) 779. Small-Pads Resistive Micromegas Prototype Camilla Di Donato (Universita e sezione INFN di Napoli (IT)) Detectors at future accelerators will require operation at rates up to three orders of magnitude higher than 15 kHz/cm$^2$, the hit rates expected in the current upgrades forward muon detectors of LHC experiments. A resistive Micromegas detectors with modified readout system can achieve rate capability up to few MHz/cm$^2$ and low occupancy, thanks to few mm$^2$ readout pads. We present the... 573. Status and Performance of CBM-TOF systems Ingo-Martin Deppner (Physikalisches Institut der Universität Heidelberg) The Compressed Baryonic Matter (CBM) experiment aims at exploring the QCD phase diagram at large baryon densities with heavy ion beams in the beam energy range from 2 A GeV to 11 A GeV at the SIS100 accelerator of FAIR/GSI. For charged particle identification that is required by many observables that are sensitive to the phase structure like collective flow, phase space population of rare... 502. Study of metal contacts on Cd1-xZnxTe and Cd1-xMnxTe crystals for radiation detector applications Mr Artem Brovko (Tel Aviv University, School of EE) This work describes a comprehensive study of metal contacts on single crystals of II-VI group semiconductors, Cd1-xZnxTe and Cd1-xMnxTe. 
Both materials are candidates for numerous detector applications, while the former is more established, the latter has potential advantages. In this work we formed metal contacts on high resistivity Bridgman grown Cd1-xZnxTe and Cd1-xMnxTe crystals by thermal... 793. Study on garnet scintillators for TOF-PET detectors and HEP exeriments Nicolaus Kratochwil (CERN) Scintillation crystals are widely used in detectors for high energy physics and medical physics. For positron emission tomography (PET), a major approach to improve the signal to noise ratio and image quality is the time-of-flight technique, where fast scintillators with high stopping power are needed. Particle detectors in future high energy physics experiments will operate at high collision... 461. Sub-nanosecond synchronization node for high-energy astrophysics: The KM3NeT White Rabbit Node David Calvo (IFIC) The first Detection Units of the KM3NeT infrastructure, whose main goal is the detection of cosmic neutrinos, are currently being deployed on the Mediterranean sea bed. The collaboration has chosen White Rabbit technology for providing sub-nanosecond synchronization between the Digital Optical Modules, the functional detection units of the detector. White Rabbit functionality is provided by... 515. TAIGA - a hybrid array for high energy gamma astronomy and cosmic ray physics Prof. Nikolay Budnev (Irkutsk State University) The TAIGA (Tunka Advanced Instrument for cosmic ray physics and Gamma Astronomy) facilities aims at gamma-ray astronomy at energies from a few TeV to several PeV, as well as cosmic ray physics from 100 TeV to several EeV. Combination of the wide angle Cherenkov timing detector TAIGA-HiSCORE with the 4-m class Imaging Atmospheric Cherenkov Telescopes (TAIGA-IACT) of FoV of 10x10 degrees... 683. Test Measurements with the Technical Prototype for the Mu3e Tile Detector Hannah Klingenmeyer (Kirchhoff Institute for Physics, Heidelberg University) The Tile Detector is a dedicated timing detector system developed for the Mu3e experiment, which is designed to search for the lepton-flavour violating (LFV) decay $\mu \rightarrow eee$ with a target sensitivity of $10^{-16}$. In order to determine the vertex of the three decay electrons, precise spatial and timing measurements are necessary, resulting in the requirement of a time resolution... 570. The ENUBET ERC project for an instrumented decay tunnel for future neutrino beams Elisabetta Parozzi (Milano Bicocca University) The ENUBET ERC project (2016-2021) is studying a narrow band neutrino beam where lepton production could be monitored at single particle level in an instrumented decay tunnel. For this purpose we have developed a specialized shashlik calorimeter with a compact readout. The modules are composed of 1.5 cm thick steel absorbers coupled to 5 mm thick plastic scintillators. A matrix of 3 x 3 fibers... 743. The NA64 experiment for searches of rare events at CERN Mr Michael Hösgen (HISKP), Michael Hosgen (University of Bonn (DE)) We report on the recent activity of the NA64 experiment at the SPS of CERN. The NA64 experiment uses a beam dump setup to conduct missing energy searches with a high intensity electron beam. In 2016$\,$-$\,$2018 separate dedicated searches for two mediators between standard model and dark sector, a new light vector boson A' and a new short-lived neutral boson X, were performed. The A' was... 567. 
The performances of photomultiplier tube of WCDA++ in LHAASO experiment Ms Hengying Zhang (Shandong University) In order to extend the dynamic range of the Water Cherenkov Detector Array (WCDA) in the Large High Altitude Air Shower Observatory (LHAASO), an additional 1.5-inch photomultiplier tube (PMT) is placed beside the 8-inch PMT in each cell of WCDA. All these 1.5-inch PMTs, 900 in total, constitute the WCDA++ system. The performance of these 1.5-inch PMTs, with a specially designed dual-output voltage divider, is test... 798. The Transition Radiation Detector in the CBM Experiment at FAIR Philipp Kähler (Westfaelische Wilhelms-Universitaet Muenster (DE)) The Compressed Baryonic Matter (CBM) experiment will be one of the research pillars of FAIR (Darmstadt, Germany), which is currently under construction. High-intensity heavy-ion beams delivered by the SIS100 accelerator (FAIR Phase 1) will be used to explore the QCD phase diagram at high baryon densities. Interaction rates of up to 10 MHz on a fixed target will enable measurements at an... 789. Thick, silicon CCDs to search for dark matter within the DAMIC-M Experiment Nuria Castello-Mor (Universidad de Cantabria, CSIC, Instituto de Fisica de Cantabria IFCA, (ES)) The DAMIC (Dark Matter in CCDs) Experiment employs the active silicon of low-noise charge-coupled devices (CCDs) as a target to search for a variety of dark matter candidates with masses below 10 GeV. An array of seven 675-$\mu$m thick CCDs with a target mass of ~40 grams has been collecting data at SNOLAB since early 2017, and the next stage of the experiment, DAMIC-M, will be an array of CCDs... 882. Thin, Double-Sided Radiation Detectors Using Alternative Implantation Techniques Tobias Wittig (CIS Institut fuer Mikrosensorik GmbH (DE)) The CiS Forschungsinstitut fuer Mikrosensorik is engaged in the development of radiation detector technologies in several different fields. Current projects are dealing with large area thinned sensors, active edge sensors and 3D-sensors. The challenge of producing cost-efficient, thin and large-sized sensors for High Energy Physics experiments is approached by a wet etching technology. Cavities... 550. Time-projection chamber development for Multi-Purpose Detector of NICA project Stepan Vereschagin (Joint Institute for Nuclear Research) Under the JINR scientific program on the study of hot and dense baryonic matter, a new accelerator complex, the Nuclotron-based Ion Collider fAcility (NICA), is under construction. The Multi-Purpose Detector (MPD) will operate at one of the collider interaction points and is optimized for investigations of heavy-ion collisions in the energy range from 4 to 11A GeV. A TPC is proposed as the central part of... 596. Timing and Synchronization of the DUNE Neutrino Detector David Cussans (University of Bristol (GB)) Synchronizing the different parts of a Particle Physics detector is an essential part of its operation. We describe the system being planned for the DUNE liquid argon neutrino detector and experience from the single phase protoDUNE detector currently operating at CERN. DUNE will have four caverns, each of which can house a detector with 10 kt fiducial mass. The first cavern is planned to... 508.
Transverse and longitudinal segmented forward hadron calorimeters with SiPMs light readout for future fixed target heavy ion experiments Fedor Guber (Russian Academy of Sciences (RU)) Forward hadron calorimeters with transverse and longitudinal segmentation are developed for upgraded heavy ion NA61 and BM@N experiments and future CBM experiment at FAIR. The main purpose of these calorimeters is to provide an experimental event-by-event measurements of centrality and orientation of reaction plane in heavy-ion collisions at high beam rates. One of the features of these... 893. Vacuum ultra-violet SiPM development for nEXO Fabrice Retiere (TRIUMF) nEXO is an experiment being designed to search for neutrino-less double beta decay in liquid Xenon. Excellent energy resolution is required for background rejection which in turns require excellent efficiency for the detection of scintillation photons. At the same time, the intrinsic radioactive background of component must be minimize which rules out using photo-multiplier tube (PMTs)....
Esperanza Rising Chapter 12 (StarryTeacher)
menacing: threatening; intimidating; frightening
a box or crate used for transporting fruit or vegetables
cringed: to show disgust or embarrassment at something
huddled: to crowd together; to curl one's body into a small space
clenched: to grasp firmly; grip
despondent: very sad and without hope
reassembled: to gather or come together again
anguish: extreme suffering, grief, or pain
misjudged: to make a wrong or unfair opinion or judgment about someone
deserted; empty; unoccupied
Verified questions
At college, a(n) _____ might stand in front of a classroom and speak to students.
A short story that often features talking animals and a moral is called a(n) _____.
From the list below, supply the words needed to complete the paragraph. Some words will not be used: conduit, ancillary, semblance, behest, dossier, martyr, philatelist.
At the FBI Director's _____, Special Agent Ford compiled a(n) _____ on Caroline Polk, including a list of charges, previous warrants, and a psychological profile. No, it was not every day that a stamp thief made it to the most-wanted list, but Polk had simply gone too far when she burglarized the stamp collection of Terry Moore, a well-known _____ and, more important, a United States senator. Identifying the suspect had taken only hours; thanks to some _____ guidance from the local police department's homicide unit, investigators found Polk's fingerprints all over the heating _____ that she used to enter the senator's house. Polk's fingerprints were on record, largely because she was the only person in the country currently wanted for the grand theft of precious stamps. The Bureau had declined to arrest Polk in the past, for she was known to be armed, and few agents were willing to become _____ to the cause of stamp collecting.
From the list below, supply the words needed to complete the paragraph. Some words will not be used: genesis, gait, emissary, queue, carp, patrician.
After six hours of driving, Charlie parked his car and walked across the parking lot with an odd _______. He was happy to stretch his legs, but he ______ about summer crowds when he saw that the ______ for the restroom extended around the corner of the rest stop. He should have expected as much, he reasoned; Memorial Day weekend was the traditional _______ of the summer season, and the crowds had arrived.
From the list below, supply the words needed to complete the paragraph. Some words will not be used: filibuster, xenophobia, torpid, prodigal, subdued, invective.
Senator Melita Darnell knew that she would have to _____ to prevent a vote on the new McDermid Bill. To her, the bill would pave the way for the same _____ government spending that she had vowed to eliminate. Fortunately, the senator was able to identify others who held _____ opinions on the bill, and thus could be convinced to abstain from voting.
Frustrated tunneling ionization in the elliptically polarized strong laser fields
Yong Zhao,1 Yueming Zhou,1,4 Jintai Liang,1 Zhexuan Zeng,1 Qinghua Ke,1 Yali Liu,1 Min Li,1 and Peixiang Lu1,2,3
1 Wuhan National Laboratory for Optoelectronics and School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China
2 Hubei Key Laboratory of Optical Information and Pattern Recognition, Wuhan Institute of Technology, Wuhan 430205, China
Yong Zhao, Yueming Zhou, Jintai Liang, Zhexuan Zeng, Qinghua Ke, Yali Liu, Min Li, and Peixiang Lu, "Frustrated tunneling ionization in the elliptically polarized strong laser fields," Opt. Express 27(15), 21689-21700 (2019)
Topics: Atomic and Molecular Physics; Attosecond pulses; High harmonic generation; Multiphoton ionization; Rydberg states; Strong field physics
Original Manuscript: May 30, 2019; Revised Manuscript: July 1, 2019; Manuscript Accepted: July 2, 2019
We theoretically investigated frustrated tunneling ionization (FTI) in the interaction of atoms with elliptically polarized laser pulses using a semiclassical ensemble model. Our results show that the yield of frustrated tunneling ionization events exhibits an anomalous behavior which maximizes at a nonzero ellipticity. By tracing back the initial tunneling coordinates, we show that this anomalous behavior is due to the fact that the initial transverse velocity at tunneling of the FTI events is nonzero in linearly polarized laser pulses and moves across zero as the ellipticity increases. The FTI yield maximizes at the ellipticity for which the initial transverse momentum required for trapping is zero. Moreover, the angular momentum distribution of the FTI events and its ellipticity dependence are also explored. The anomalous behavior revealed in our work is very similar to the previously observed ellipticity dependence of the near- and below-threshold harmonics, and thus our work may uncover the mechanism of the below-threshold harmonics, which is still a controversial issue.
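The abstract above refers to a semiclassical ensemble model in which classical electron trajectories are launched at the tunnel exit and FTI events are identified as trajectories that remain bound after the pulse. As an illustration only, and not the authors' implementation, the following Python sketch shows one common way to set up such a check. It assumes atomic units, a hydrogen-like ion with I_p = 0.5 a.u., a soft-core Coulomb potential, a quasistatic tunnel-exit distance of I_p/|F|, an assumed pulse length of N = 20 cycles, and simple velocity-Verlet integration; the paper's actual initial conditions and potential may differ.

# Minimal sketch (not the authors' code) of a semiclassical-ensemble FTI check.
# Atomic units throughout; field amplitude and frequency correspond roughly to
# 8e13 W/cm^2 and 800 nm; ellipticity, cycle number and soft-core parameter are
# illustrative assumptions.
import numpy as np

E0, omega, xi, N = 0.0477, 0.05695, 0.2, 20   # peak field, frequency, ellipticity, cycles
Ip, a2 = 0.5, 0.01                            # ionization potential, soft-core parameter

def field(t):
    """Elliptically polarized pulse with a cos^2 envelope (cf. Eq. (3))."""
    env = np.cos(omega * t / (2 * N))**2
    return env * E0 / np.sqrt(xi**2 + 1) * np.array([np.cos(omega * t),
                                                     xi * np.sin(omega * t)])

def accel(r, t):
    """Newton's equation: -F(t) - grad V(r), with V = -1/sqrt(r^2 + a^2) (cf. Eq. (2))."""
    return -field(t) - r / np.sqrt(r @ r + a2)**3

def propagate(t0, p_perp, dt=0.05):
    """Launch one trajectory at the tunnel exit and integrate to the end of the pulse."""
    F0 = field(t0)                                   # instantaneous field (nonzero for t0 = 0)
    f_hat = F0 / np.linalg.norm(F0)
    r = -(Ip / np.linalg.norm(F0)) * f_hat           # simple tunnel-exit estimate, down-field side
    v = p_perp * np.array([-f_hat[1], f_hat[0]])     # purely transverse initial velocity
    t, t_end = t0, np.pi * N / omega                 # envelope closes at t = pi*N/omega
    a = accel(r, t)
    while t < t_end:                                 # velocity-Verlet integration
        r = r + v * dt + 0.5 * a * dt**2
        a_new = accel(r, t + dt)
        v = v + 0.5 * (a + a_new) * dt
        a, t = a_new, t + dt
    return r, v

def classify(r, v):
    """Negative energy after the pulse marks a frustrated-ionization (Rydberg) event."""
    E = 0.5 * (v @ v) - 1.0 / np.sqrt(r @ r + a2)
    if E >= 0:
        return None                                  # the electron escaped
    n = int(round(1.0 / np.sqrt(-2 * E)))            # E = -1/(2 n^2)
    L = abs(r[0] * v[1] - r[1] * v[0])               # |r x p| in the polarization plane
    return n, L

r, v = propagate(t0=0.0, p_perp=0.05)
print(classify(r, v))

Trajectories flagged this way are counted as FTI (Rydberg) events; binning the resulting n and L values over a weighted ensemble of launch times and transverse momenta is what produces distributions like those shown in Figs. 5 and 6 below.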
Fig. 1. The yields of excited atoms as a function of the ellipticity of the laser pulses. The wavelengths and laser intensities are specified in the legend. Fig. 2. Probability distributions of the FTI events in the coordinates of the tunneling ionization time $t_0$ and initial transverse momentum $P_\perp$ at different ellipticities (from 0 to 0.4) and otherwise identical laser parameters. The laser intensity ($I=8\times 10^{13}$ W/cm$^2$) and wavelength (800 nm) are the same as for curve a of Fig. 1.
The colorbar on the right represents the recapture probability. The increasing pulse ellipticity leads to the crescent-shaped Rydberg area shifting to the right gradually. The black crosses indicate the coordinate of ($t_0=0$, $P_\perp=0$). In Fig. 2(a), the dashed red line divides the Rydberg area into three regions, vertical regions AI, AII and horizontal region B. Fig. 3. Probability distributions of the FTI events in the initial tunneling coordinates for the linear laser pulses ($\xi=0$). The laser wavelengths and intensities are (a) 800 nm and $8\times 10^{13}$ W/cm$^2$, (b) 800 nm and $1.6\times 10^{14}$ W/cm$^2$, (c) 1600 nm and $8\times 10^{13}$ W/cm$^2$. The colorbar on the right represents the recapture probability. The dashed black lines are the level curves of the probability of FTI events. The dashed red line divides the Rydberg area into three regions, vertical regions AI, AII and horizontal region B. Fig. 4. (a, c) The distribution of FTI events in the initial tunneling coordinates and (b, d) the yield of FTI events for laser fields with shorter pulse duration. The laser intensity and wavelength are $8\times 10^{13}$ W/cm$^2$ and 800 nm. The pulse duration in (a, b) is N=10 and in (c, d) is N=4. Fig. 5. The angular momentum number distribution of Rydberg-state atoms at different ellipticities. The laser intensity ($I=8\times 10^{13}$ W/cm$^2$) and wavelength (800 nm) are the same as for curve a of Fig. 1. It exhibits a symmetric triple-hump structure for the linear laser field ($\xi=0$). Fig. 6. (a) The distribution of angular momentum number L in the coordinates of initial transverse momentum and tunneling time at $\xi=0$. (b) The distribution of principal quantum number n in the coordinates of initial transverse momentum and tunneling time at $\xi=0$. Different colors represent different numbers L and n. Fig. 7. (a) A sample classical trajectory of an FTI event (blue solid curve) and the corresponding guiding center trajectory (dashed cyan curve) at $\xi =0.25$. (b) The same trajectories at $\xi =0.30$. (a) and (b) have the same initial conditions, $t_0=0, P_\perp =0$. The black crosses show the initial position of the classical trajectory, and the black dot marks the origin of the coordinates. Fig. 8. The distribution of the final angular momentum ($L_f$) versus the initial transverse momentum at tunneling ($P_\perp$). The dashed magenta line shows the analytical formula (9). The black cross marks the origin of the coordinates. The colorbar on the right represents the recapture probability.
(1) $P(t_0, p_\perp) \propto \exp\!\left[-\frac{2(2I_p)^{3/2}}{3|F(t_0)|}\right]\exp\!\left[-\frac{p_\perp^2\sqrt{2I_p}}{|F(t_0)|}\right]$.
(2) $\ddot{\mathbf{r}}(t) = -\mathbf{F}(t) - \nabla V(\mathbf{r})$,
(3) $\mathbf{F}(t) = \cos^2\!\left(\frac{\omega t}{2N}\right)\frac{E_0}{\sqrt{\xi^2+1}}\left[\cos(\omega t)\,\hat{x} + \xi\sin(\omega t)\,\hat{y}\right]$.
(4) $H(\mathbf{r},\mathbf{p},t) = \frac{|\mathbf{p}|^2}{2} + V(\mathbf{r}) + \mathbf{r}\cdot\mathbf{E}(t)$,
(5) $\mathbf{r} = \mathbf{r}_g + \frac{\mathbf{E}(t)}{\omega^2}$,
(6) $\mathbf{p} = \mathbf{p}_g + \mathbf{A}(t)$,
(7) $\bar{H}(\mathbf{r}_g,\mathbf{p}_g) = \frac{|\mathbf{p}_g|^2}{2} + V_{\mathrm{eff}}(\mathbf{r})$.
(8) $\mathbf{r}_g = \left(\frac{I_p}{E_0} + \frac{E_0}{\omega^2}\right)\hat{x} = \frac{E_0}{\omega^2}\left(\frac{\gamma^2}{2}+1\right)\hat{x}, \quad \mathbf{p}_g = P_\perp\,\hat{y}$.
(9) $\mathbf{L}_f = \mathbf{r}_f\times\mathbf{P}_f = \mathbf{r}_g\times\mathbf{P}_g = \frac{E_0}{\omega^2}\left(\frac{\gamma^2}{2}+1\right)P_\perp\,\hat{z}$,
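To illustrate the scaling in Eqs. (8)-(9), a minimal numerical check in atomic units is sketched below. The hydrogen-like ionization potential $I_p = 0.5$ a.u. and the unit conversions are assumptions for the sketch, not values quoted above; the laser parameters match curve a of Fig. 1 (800 nm, $8\times10^{13}$ W/cm$^2$).

```python
import numpy as np

# Minimal check of the guiding-center angular momentum slope, Eq. (9),
# in atomic units. I_p = 0.5 a.u. (hydrogen-like) is an assumption.

def field_amplitude_au(intensity_W_cm2):
    # 3.51e16 W/cm^2 corresponds to a field of 1 atomic unit
    return np.sqrt(intensity_W_cm2 / 3.51e16)

def angular_frequency_au(wavelength_nm):
    # photon energy in a.u. is 45.563 / lambda[nm]
    return 45.563 / wavelength_nm

Ip = 0.5
E0 = field_amplitude_au(8e13)            # ~0.048 a.u.
omega = angular_frequency_au(800.0)      # ~0.057 a.u.
gamma = omega * np.sqrt(2.0 * Ip) / E0   # Keldysh parameter, ~1.2

slope = E0 / omega**2 * (gamma**2 / 2.0 + 1.0)   # L_f / P_perp from Eq. (9)
print(f"gamma = {gamma:.2f}, L_f/P_perp = {slope:.1f} a.u.")
```

With these numbers the final angular momentum grows roughly as $L_f \approx 25\,P_\perp$ (in atomic units), which is consistent with the linear trend traced by the dashed line in Fig. 8.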
Journal of The Korean Astronomical Society (천문학회지), published bimonthly by The Korean Astronomical Society (한국천문학회). The Journal of the Korean Astronomical Society, JKAS, is an international scientific journal publishing papers in all the fields of astronomy and astrophysics (theoretical, observational, and instrumental) with a view to the advancement of the frontier knowledge. Manuscripts are classified into original contributions, proceedings, and reviews. EUV AND SOFT X-RAY EMISSION IN CLUSTERS OF GALAXIES BOWYER STUART 295 https://doi.org/10.5303/JKAS.2004.37.5.295 Observations with EUVE, ROSAT, and BeppoSAX have shown that some clusters of galaxies produce intense EUV emission. These findings have produced considerable interest; over 100 papers have been published on this topic in the refereed literature. A notable suggestion as to the source of this radiation is that it is a 'warm' ($10^6$ K) intracluster medium which, if present, would constitute the major baryonic component of the universe. A more recent variation of this theme is that this material is 'warm-hot' intergalactic material condensing onto clusters. Alternatively, inverse Compton scattering of low energy cosmic rays against cosmic microwave background photons has been proposed as the source of this emission. Various origins of these particles have been posited, including an old ($\sim$Gigayear) population of cluster cosmic rays; particles associated with relativistic jets in the cluster; and cascading particles produced by shocks from sub-cluster merging. The observational situation has been quite uncertain, with many reports of detections which have been subsequently contradicted by analyses carried out by other groups. Evidence supporting a thermal and a non-thermal origin has been reported. The existing EUV, FUV, and optical data will be briefly reviewed and clarified. Direct observational evidence from a number of different satellites now rules out a thermal origin for this radiation. A new examination of subtle details of the EUV data suggests a new source mechanism: inverse Compton scattered emission from secondary electrons in the cluster. This suggestion will be discussed in the context of the data. IMAGING NON-THERMAL X-RAY EMISSION FROM GALAXY CLUSTERS: RESULTS AND IMPLICATIONS HENRIKSEN MARK;HUDSON DANNY 299 We find evidence of a hard X-ray excess above the thermal emission in two cool clusters (Abell 1750 and IC 1262) and a soft excess in two hot clusters (Abell 754 and Abell 2163). Our modeling shows that the excess components in Abell 1750, IC 1262, and Abell 2163 are best fit by a steep power law indicative of a significant non-thermal component. In the case of Abell 754, the excess emission is thermal, 1 keV emission. We analyze the dynamical state of each cluster and find evidence of an ongoing or recent merger in all four clusters. In the case of Abell 2163, the detected, steep-spectrum, non-thermal X-ray emission is shown to be associated with the weak merger shock seen in the temperature map. However, this shock is not able to produce the flatter-spectrum radio halo, which we attribute to post-shock turbulence. In Abell 1750 and IC 1262, the shocked gas appears to be spatially correlated with non-thermal emission, suggesting cosmic-ray acceleration at the shock front.
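As a rough illustration of the inverse-Compton interpretation mentioned in the EUV abstract above, the sketch below estimates the electron energy needed to upscatter CMB photons into the EUV band using the standard relation $E_{\rm out}\approx\tfrac{4}{3}\gamma^2 E_{\rm CMB}$; the 100 eV target energy is an illustrative assumption, not a number taken from the abstract.

```python
import numpy as np

# Electron Lorentz factor needed to inverse-Compton scatter CMB photons
# up to ~100 eV (EUV). The 100 eV target is an illustrative choice.
kT_cmb = 2.35e-4            # eV, CMB temperature of 2.725 K
E_cmb = 2.70 * kT_cmb       # eV, mean blackbody photon energy
E_euv = 100.0               # eV, representative EUV photon energy

gamma = np.sqrt(3.0 * E_euv / (4.0 * E_cmb))
print(f"gamma ~ {gamma:.0f}, electron energy ~ {gamma * 0.511:.0f} MeV")
```

Electrons of a few hundred MeV are indeed "low energy" by cosmic-ray standards, which is why such a population is hard to trace by other means.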
GALAXY CLUSTERS IN GAMMA-RAYS: AN ASSESSMENT FROM OBSERVATIONS REIMER OLAF 307 Clusters of galaxies are believed to constitute a population of astrophysical objects potentially able to emit electromagnetic radiation up to gamma-ray energies. Evidence of the existence of non-thermal radiation processes in galaxy clusters is indicated by observations of diffuse radio halos, hard X-ray and EUV excess emission. The presence of cosmic-ray acceleration processes and the confinement of cosmic rays on cosmological timescales nearly inevitably lead to the prediction of energetic gamma-ray emission, either directly deducible from a cluster's multifrequency emission characteristics or indirectly during large-scale cosmological structure formation processes. This theoretical reasoning suggests several scenarios to actually detect galaxy clusters at gamma-ray wavelengths: either resolved as individual sources of point-like or extended gamma-ray emission, by investigating spatial-statistical correlations with unidentified gamma-ray sources or, if unresolved, through their contribution to the extragalactic diffuse gamma-ray background. In the following I review the situation concerning the proposed relation between galaxy clusters and high-energy gamma-ray observations from an observational point of view. PROPERTIES AND SPECTRAL BEHAVIOUR OF CLUSTER RADIO HALOS FERETTI L.;BRUNETTI G.;GIOVANNINI G.;KASSIM N.;ORRU E.;SETTI G. 315 Several arguments have been presented in the literature to support the connection between radio halos and cluster mergers. The spectral index distributions of the halos in A665 and A2163 provide a new strong confirmation of this connection, i.e. of the fact that the cluster merger plays an important role in the energy supply to the radio halos. Features of the spectral index (flattening and patches) are an indication of a complex shape of the radiating electron spectrum, and are therefore in support of electron reacceleration models. Regions of flatter spectrum are found to be related to the recent merger. In the undisturbed cluster regions, instead, the spectrum steepens with the distance from the cluster center. The plot of the integrated spectral index of a sample of halos versus the cluster temperature indicates that clusters at higher temperature tend to host halos with flatter spectra. This correlation provides further evidence of the connection between radio emission and cluster mergers. RADIO RELICS IN CLUSTERS OF GALAXIES GIOVANNINI GABRIELE;FERETTI LUIGINA 323 In this paper we review the observational results on Relic radio sources in clusters of galaxies. We discuss their observational properties, structures and radio spectra. We will show that Relics can be divided according to their size, morphology, and location in the galaxy cluster. These differences could be related to physical properties of Relic sources. The comparison with cluster conditions suggests that Relics could be related to shock waves originating from cluster mergers. OBSERVING MAGNETIC FIELDS ON LARGE SCALES RUDNICK LAWRENCE 329 Observations of magnetic fields on scales up to several Mpc are important for understanding cluster and large-scale structure evolution. Our current census of such structures is heavily biased - towards fields of several $\mu$G, towards fields in deep potential wells, and towards high inferred field strengths in cooling flow and other clusters from improper analysis of rotation measure data. After reviewing these biases, I show some recent results on two relics that are powered in very different ways.
I describe new investigations that are now uncovering weak diffuse fields in the outskirts of clusters and other low density environments, and the good prospects for further progress. FARADAY ROTATION OBSERVATIONS OF MAGNETIC FIELDS IN GALAXY CLUSTERS CLARKE TRACY E. 337 The presence of magnetic fields in the intracluster medium in clusters of galaxies has been revealed through several different observational techniques. These fields may be dynamically important in clusters as they will provide additional pressure support to the intracluster medium as well as inhibit transport mechanisms such as thermal conduction. Here, we review the current observational state of Faraday rotation measure studies of the cluster fields. The fields are generally found to be a few to 10 $\mu$G in non-cooling core clusters and ordered on scales of 10 - 20 kpc. Studies of sources at large impact parameters show that the magnetic fields extend from cluster cores to radii of at least 500 kpc. In central regions of cooling core systems the field strengths are often somewhat higher (10 - 40 $\mu$G) and appear to be ordered on smaller scales of a few to 10 kpc. We also review some of the recent work on interpreting Faraday rotation measure observations through theory and numerical simulations. These techniques allow us to build up a much more detailed view of the strength and topology of the fields. NEW PROBES OF INTERGALACTIC MAGNETIC FIELDS BY RADIOMETRY AND FARADAY ROTATION KRONBERG PHILIPP P. 343 The energy injection of galactic black holes (BH) into the intergalactic medium via extragalactic radio source jets and lobes is sufficient to magnetize the IGM in the filaments and walls of Large Scale Structure at $\langle B \rangle \sim 0.1\,\mu$G or more. It appears that this process of galaxy-IGM feedback is the primary source of IGM cosmic rays (CR) and magnetic field energy. Large scale gravitational infall energy serves to re-heat the intergalactic magnetoplasma in localities of space and time, maintaining or amplifying the IGM magnetic field, but this can be thought of as a secondary process. I briefly review observations that confirm IGM fields around this level, describe further Faraday rotation measurements in progress, and also the observational evidence that magnetic fields in galaxy systems around z=2 were approximately as strong then, $\sim$10 Gyr ago, as now. A BAYESIAN VIEW ON FARADAY ROTATION MAPS - SEEING THE MAGNETIC POWER SPECTRUM IN CLUSTERS OF GALAXIES VOGT CORINA;ENSSLIN TORSTEN A. 349 Magnetic fields are an important ingredient of galaxy clusters and are indirectly observed on cluster scales as radio haloes and radio relics. One promising method to shed light on the properties of cluster-wide magnetic fields is the analysis of Faraday rotation maps of extended extragalactic radio sources. We developed a Fourier analysis for such Faraday rotation maps in order to determine the magnetic power spectra of cluster fields. In an advanced step, here we apply a Bayesian maximum likelihood method to the RM map of the north lobe of Hydra A on the basis of our Fourier analysis and derive the power spectrum of the cluster magnetic field. For Hydra A, we measure a spectral index of -5/3 over at least one order of magnitude implying Kolmogorov type turbulence. We find a dominant scale of about 3 kpc on which the magnetic power is concentrated, since the magnetic autocorrelation length is $\lambda_B = 3 \pm 0.5$ kpc.
Furthermore, we investigate the influences of the assumption about the sampling volume (described by a window function) on the magnetic power spectrum. The central magnetic field strength was determined to be $\sim 7 \pm 2\,\mu$G for the most likely geometries. LARGE SCALE MAGNETIC FIELDS IN LENS GALAXIES NARASIMHA D.;CHITRE S. M. 355 Differential Faraday rotation measurements between the images of the same background source in multiply-imaged gravitational lens systems can be effectively used to provide a valuable probe to establish the existence of large-scale ordered magnetic fields in lensing galaxies as well as galaxy clusters. Estimates of the magnetic field in lens galaxies, based on the radio polarization measurements, do not appear to show any clear evidence for evolution with redshift of the coherent large-scale magnetic field between a redshift of 0.9 and the present epoch. However, our method clearly establishes the presence of a coherent large-scale magnetic field in giant elliptical galaxies. X-RAY STUDIES OF THE INTRACLUSTER MEDIUM IN CLUSTERS OF GALAXIES - CHARACTERIZING GALAXY CLUSTERS AS GIANT LABORATORIES BOHRINGER HANS 361 Galaxy clusters as the densest and most prominent regions within the large-scale structure can be used as well characterizable laboratories to study astrophysical processes on the largest scales. X-ray observations currently provide the best way to determine the physical properties of galaxy clusters and the environmental parameters that describe them as laboratories. We illustrate this use of galaxy clusters and the precision of our understanding of them as laboratory environments with several examples. Their application to determine the matter composition of the Universe shows good agreement with results from other methods and is therefore a good test of our understanding. We test the reliability of mass measurements and illustrate the use of X-ray diagnostics to study the dynamical state of clusters. We discuss further studies on turbulence in the cluster ICM, the interaction of central AGN with the radiatively cooling plasma in cluster cooling cores and the lessons learned from the ICM enrichment by heavy elements. X-RAYING LARGE-SCALE STRUCTURE HENRY J. PATRICK 371 We review the observational evidence for the existence of a warm-hot intergalactic medium (WHIM). We expect that the morphology of this material is similar to that of cosmic rays and magnetic fields in large-scale structure, i.e., filaments connecting clusters of galaxies. Direct evidence for the WHIM, either in emission or absorption, is weak. X-RAY EMISSION FROM THE WARM-HOT INTERGALACTIC MEDIUM KAASTRA JELLE S. 375 In this paper I give an overview of the detection of emission from the warm-hot intergalactic medium (WHIM) in the outer parts of clusters of galaxies. The evidence for the presence of soft excess X-ray emission in 7 out of 21 clusters is summarized, and it is demonstrated that several of these clusters show the signatures of thermal emission in the outer parts. A strong signature is the presence of redshifted O VII emission at 0.57 keV. In the central parts, several clusters also show a soft excess, but in this case the observations cannot well discriminate between a thermal and a non-thermal origin of the soft X-ray excess.
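The field strengths and reversal scales quoted in the Faraday-rotation abstracts above can be connected to observed rotation measures with the usual cell-model estimate $\sigma_{RM}\approx 812\,n_e\,(B/\sqrt{3})\sqrt{L\,\lambda_B}$ rad m$^{-2}$ (with $n_e$ in cm$^{-3}$, $B$ in $\mu$G, and lengths in kpc). The sketch below evaluates it for illustrative values loosely guided by those abstracts; the specific numbers are assumptions, not quoted results.

```python
import numpy as np

# Cell-model estimate of the RM dispersion through a cluster.
# All input values below are illustrative assumptions.
n_e = 1e-3       # cm^-3, ICM electron density
B = 5.0          # microgauss, total field strength
lambda_B = 10.0  # kpc, field reversal (cell) scale
L = 500.0        # kpc, path length through the cluster

sigma_RM = 812.0 * n_e * (B / np.sqrt(3.0)) * np.sqrt(L * lambda_B)
print(f"sigma_RM ~ {sigma_RM:.0f} rad/m^2")
```

Values of order a hundred rad m$^{-2}$ are in the range typically reported for sources seen through non-cooling-core clusters.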
HEATED INTRACLUSTER GAS AND RADIO CONNECTIONS: THE SINGULAR CASE OF MKW 3S MAZZOTTA PASQUALE;BRUNETTI GIANFRANCO;GIACINTUCCI SIMONA;VENTURI TIZIANA;BARDELLI SANDRO 381 Similarly to other clusters of galaxies previously classified as cooling flow systems, the Chandra observation of MKW 3s reveals that this object has a complex X-ray structure hosting both an X-ray cavity and an X-ray filament. Unlike the other clusters, however, the temperature map of the core of MKW 3s shows the presence of extended regions of gas heated above the radially averaged gas temperature at any radius. As the cluster does not show evidence for ongoing major mergers, Mazzotta et al. suggest a connection between the heated gas and the activity of the central AGN. Nevertheless, due to the lack of high quality radio maps, this interpretation was controversial. In this paper we present the results of two new radio observations of MKW 3s at 1.28 GHz and 604 MHz obtained at the GMRT. Together with the Chandra observation and a separate VLA observation at 327 MHz from Young, we show unequivocal evidence for a close connection between the heated gas region and the AGN activity and we briefly summarize possible implications. TRACING BRIGHT AND DARK SIDES OF THE UNIVERSE WITH X-RAY OBSERVATIONS SUTO YASUSHI;YOSHIKAWA KOHJI;DOLAG KLAUS;SASAKI SHIN;YAMASAKI NORIKO Y.;OHASHI TAKAYA;MITSUDA KAZUHISA;TAWARA YUZURU;FUJIMOTO RYUICHI;FURUSHO TAE;FURUZAWA AKIHIRO;ISHIDA MANABU;ISHISAKI YOSHITAKA 387 X-ray observations of galaxy clusters have played an important role in cosmology, especially in determining the cosmological density parameter and the fluctuation amplitude. While they represent the bright side of the universe together with the other probes including the cosmic microwave background and the Type Ia supernovae, the resulting information clearly indicates that the universe is dominated by dark components. Even most of the cosmic baryons turn out to be dark. In order to elucidate the nature of dark baryons, we propose a dedicated soft-X-ray mission, DIOS (Diffuse Intergalactic Oxygen Surveyor). Recent numerical simulations suggest that approximately 30 to 50 percent of the total baryons at z = 0 take the form of the warm-hot intergalactic medium (WHIM) with $10^5\,\mathrm{K} < T < 10^7\,\mathrm{K}$, which has evaded direct detection so far. The unprecedented energy resolution ($\sim 2$ eV) of the XSA (X-ray Spectrometer Array) on-board DIOS enables us to identify WHIM with gas temperature $T = 10^6$-$10^7$ K and overdensity $\delta = 10$-$100$ located at z < 0.3 through emission lines of OVII and OVIII. In addition, WHIMs surrounding nearby clusters are detectable with a typical exposure time of a day, and thus constitute realistic and promising targets for DIOS. CURRENT STATUS OF SHOCK ACCELERATION THEORY DRURY LUKE O'C 393 This paper describes some recent developments in our understanding of particle acceleration by shocks. It is pointed out that while good agreement now exists as to steady nonlinear modifications to the shock structure, there is also growing evidence that the mesoscopic scales may not in fact be steady and that significant instabilities associated with magnetic field amplification may be a feature of strong collisionless plasma shocks. THE ACCELERATION AND TRANSPORT OF COSMIC RAYS WITH HELIOSPHERIC EXAMPLES JOKIPII J. R. 399 Cosmic rays are ubiquitous in space, and are apparently present wherever the matter density is small enough that they are not removed by collisions with ambient particles.
The essential similarity of their energy spectra in many different regions places significant general constraints on the mechanisms for their acceleration and confinement. Diffusive shock acceleration is at present the most successful acceleration mechanism proposed, and, together with transport in Kolmogorov turbulence, can account for the universal spectra. In comparison to shock acceleration, statistical acceleration, invoked in many situations, has significant disadvantages. The basic physics of acceleration and transport is discussed, and examples are shown where it apparently works very well. However, there are now well-established situations where diffusive shock acceleration cannot be the accelerator. This problem will be discussed and possible acceleration mechanisms evaluated. Statistical acceleration in these places is possible. In addition, a new mechanism, called diffusive compression acceleration, will be discussed and shown to be an attractive candidate. It has similarities with both statistical acceleration and shock acceleration. COSMIC RAY ACCELERATION AT COSMOLOGICAL SHOCKS KANG HYESUNG;JONES T. W. 405 Cosmological shocks form as an inevitable consequence of gravitational collapse during large-scale structure formation, and cosmic rays (CRs) are known to be accelerated at collisionless shocks via diffusive shock acceleration (DSA). We have calculated the evolution of CR modified shocks for a wide range of shock Mach numbers and shock speeds through numerical simulations of DSA in 1D quasi-parallel plane shocks. The simulations include thermal leakage injection of seed CRs, as well as pre-existing, upstream CR populations. Bohm-like diffusion is assumed. We show that CR modified shocks evolve to time-asymptotic states by the time injected particles are accelerated to moderately relativistic energies (p/mc $\ge$ 1), and that two shocks with the same Mach number, but with different shock speeds, evolve qualitatively similarly when the results are presented in terms of a characteristic diffusion length and diffusion time. We find that $10^{-4}-10^{-3}$ of the particles passing through the shock are accelerated to form the CR population, and the injection rate is higher for shocks with higher Mach number. The CR acceleration efficiency increases with shock Mach number, but it asymptotes to $\sim 50\%$ in high Mach number shocks, regardless of the injection rate and upstream CR pressure. On the other hand, in moderate strength shocks ($M_s \le 5$), the pre-existing CRs increase the overall CR energy. We conclude that the CR acceleration at cosmological shocks is efficient enough to lead to significant nonlinear modifications to the shock structures. ULTRA HIGH ENERGY COSMIC RAYS AND THE MAGNETIZED UNIVERSE OLINTO ANGELA V. 413 The current state and future prospects of ultra high energy cosmic ray physics are reviewed. These cosmic rays with energies well above $10^{18}$ eV are messengers of an unknown extremely high-energy universe. ULTRA HIGH ENERGY COSMIC RAYS AND CLUSTERS JONES T. W. 421 I briefly review the current theoretical status of the origins of ultrahigh energy cosmic rays with special emphasis on models associated with galaxy clusters. Some basic constraints on models are laid out, including those that apply both to so-called 'top-down' and 'bottom-up' models. The origins of these UHECRs remain an enigma; no model stands out as a clear favorite.
Large scale structure formation shocks, while very attractive conceptually in this context, are unlikely to be able to accelerate particles to energies much above $10^{18}$ eV. Terminal shocks in relativistic AGN jets seem to be more viable candidates physically, but suffer from their rarity in the local universe. Several other, representative, models are outlined for comparison. MAGNETIC FIELD IN THE LOCAL UNIVERSE AND THE PROPAGATION OF UHECRS DOLAG KLAUS;GRASSO DARIO;SPRINGEL VOLKER;TKACHEV IGOR 427 We use simulations of large-scale structure formation to study the build-up of magnetic fields (MFs) in the intergalactic medium. Our basic assumption is that cosmological MFs grow in a magnetohydrodynamical (MHD) amplification process driven by structure formation out of a magnetic seed field present at high redshift. This approach is motivated by previous simulations of the MFs in galaxy clusters which, under the same hypothesis that we adopt here, succeeded in reproducing Faraday rotation measurements (RMs) in clusters of galaxies. Our $\Lambda$CDM initial conditions for the dark matter density fluctuations have been statistically constrained by the observed large-scale density field within a sphere of 110 Mpc around the Milky Way, based on the IRAS 1.2-Jy all-sky redshift survey. As a result, the positions and masses of prominent galaxy clusters in our simulation coincide closely with their real counterparts in the Local Universe. We find excellent agreement between RMs of our simulated galaxy clusters and observational data. The improved numerical resolution of our simulations compared to previous work also allows us to study the MF in large-scale filaments, sheets and voids. By tracing the propagation of ultra high energy (UHE) protons in the simulated MF we construct full-sky maps of expected deflection angles of protons with arrival energies $E = 10^{20}$ eV and $4\times10^{19}$ eV, respectively. Accounting only for the structures within 110 Mpc, we find that strong deflections are only produced if UHE protons cross galaxy clusters. The total area on the sky covered by these structures is however very small. Over still larger distances, multiple crossings of sheets and filaments may give rise to noticeable deflections over a significant fraction of the sky; the exact amount and angular distribution depends on the model adopted for the magnetic seed field. Based on our results we argue that over a large fraction of the sky the deflections are likely to remain smaller than the present experimental angular sensitivity. Therefore, we conclude that forthcoming air shower experiments should be able to locate sources of UHE protons and shed more light on the nature of cosmological MFs. MERGERS, COSMIC RAYS, AND NONTHERMAL PROCESSES IN CLUSTERS OF GALAXIES SARAZIN CRAIG L. 433 Clusters of galaxies generally form by the gravitational merger of smaller clusters and groups. Major cluster mergers are the most energetic events in the Universe since the Big Bang. The basic properties of cluster mergers and their effects are discussed. Mergers drive shocks into the intracluster gas, and these shocks heat the intracluster gas. As a result of the impulsive heating and compression associated with mergers, there is a large transient increase in the X-ray luminosities and temperatures of merging clusters. These merger boosts can affect X-ray surveys of clusters and their cosmological interpretation. Similar boosts occur in the strong lensing cross-sections and Sunyaev-Zeldovich effect in merging clusters.
Merger shocks and turbulence associated with mergers should also (re)accelerate nonthermal relativistic particles. As a result of particle acceleration in shocks and turbulent acceleration following mergers, clusters of galaxies should contain very large populations of relativistic electrons and ions. Observations and models for the radio, extreme ultraviolet, hard X-ray, and gamma-ray emission from nonthermal particles accelerated in these shocks will also be described. Gamma-ray observations with GLAST seem particularly promising. EXTRAGALACTIC COSMIC RAYS AND MAGNETIC FIELDS: FACTS AND FICTION ENSSLIN TORSTEN 439 A critical discussion of our knowledge about extragalactic cosmic rays and magnetic fields is attempted. What do we know for sure? What are our prejudices? How do we confront our models with the observations? How can we assess the uncertainties in our modeling and in our observations? Unfortunately, perfect answers to these questions cannot be given. Instead, I describe efforts I am involved in to gain reliable information about relativistic particles and magnetic fields in extragalactic space. COSMIC RAYS AND GAMMA-RAYS IN LARGE-SCALE STRUCTURE INOUE SUSUMU;NAGASHIMA MASAHIRO;SUZUKI TAKERU K.;AOKI WAKO 447 During the hierarchical formation of large scale structure in the universe, the progressive collapse and merging of dark matter should inevitably drive shocks into the gas, with nonthermal particle acceleration as a natural consequence. Two topics in this regard are discussed, emphasizing what important things nonthermal phenomena may tell us about the structure formation (SF) process itself. 1. Inverse Compton gamma-rays from large scale SF shocks and non-gravitational effects, and the implications for probing the warm-hot intergalactic medium. We utilize a semi-analytic approach based on Monte Carlo merger trees that treats both merger and accretion shocks self-consistently. 2. Production of $^6$Li by cosmic rays from SF shocks in the early Galaxy, and the implications for probing Galaxy formation and uncertain physics on sub-Galactic scales. Our new observations of metal-poor halo stars with the Subaru High Dispersion Spectrograph are highlighted. THE QUEST FOR COSMIC RAY PROTONS IN GALAXY CLUSTERS PFROMMER C.;ENSSLIN T. A. 455 There have been many speculations about the presence of cosmic ray protons (CRps) in galaxy clusters over the past two decades. However, no direct evidence such as the characteristic $\gamma$-ray signature of decaying pions has been found so far. These pions would be a direct tracer of hadronic CRp interactions with the ambient thermal gas, also yielding observable synchrotron and inverse Compton emission by additionally produced secondary electrons. The obvious question concerns the type of galaxy clusters most likely to yield a signal: particularly suited sites should be cluster cooling cores due to their high gas and magnetic energy densities. We studied a nearby sample of clusters evincing cooling cores in order to place stringent limits on the cluster CRp population by using non-detections of EGRET. In this context, we examined the possibility of a hadronic origin of Coma-sized radio halos as well as radio mini-halos. Especially for mini-halos, strong clues are provided by the very plausible small amount of required CRp energy density and a matching radio profile. Introducing the hadronic minimum energy criterion, we show that the energetically favored CRp energy density is constrained to $2\% \pm 1\%$ of the thermal energy density in Perseus.
We also studied the CRp population within the cooling core region of Virgo using the TeV $\gamma$-ray detection of M 87 by HEGRA. Both the expected radial $\gamma$-ray profile and the required amount of CRp support this hadronic scenario. SECONDARY ELECTRONS IN CLUSTERS OF GALAXIES AND GALAXIES HWANG CHORNG-YUAN 461 We investigate the role of secondary electrons in galaxy clusters and in ultra-luminous infrared galaxies (ULIGs). The radio emission in galaxy clusters and ULIGs is believed to be produced by the synchrotron radiation of relativistic electrons. Nonetheless, the sources of these relativistic electrons are still unclear. Relativistic secondary electrons can be produced from the hadronic interactions of cosmic-ray nuclei with the intra-cluster media (ICM) of galaxy clusters and the dense molecular clouds of ULIGs. We estimate the contribution of the secondary electrons in galaxy clusters and ULIGs by comparing observational results with theoretical calculations for the radio emission in these sources. We find that the radio halos of galaxy clusters cannot be produced from the secondary electrons; on the other hand, at least for some ULIGs, the radio emission can be dominated by the synchrotron emission of the secondary electrons. NONTHERMAL COMPONENTS IN THE LARGE SCALE STRUCTURE MINIATI FRANCESCO 465 I address the issue of nonthermal processes in the large scale structure of the universe. After reviewing the properties of cosmic shocks and their role as particle accelerators, I discuss the main observational results, from radio to $\gamma$-ray, and describe the processes that are thought to be responsible for the observed nonthermal emissions. Finally, I emphasize the important role of $\gamma$-ray astronomy for the progress in the field. Non-detections at these photon energies have already allowed us to draw important conclusions. Future observations will tell us more about the physics of the intracluster medium, shock dissipation and CR acceleration. POSSIBLE MERGER SIGNATURE IN SZ MAPS KOCH PATRICK 471 We propose an analytical model to estimate the influence of a merger on the thermal SZ effect. Following observations we distinguish between subsonic and transonic mergers. Using analytical velocity fields and the Bernoulli equation we calculate the excess pressure around a moving subcluster for an incompressible subsonic gas. Positive excess around the stagnation point and negative excess on the side of the subcluster lead to characteristic signatures in the SZ map, of the order of $10\%$ compared to the unperturbed signal. For a transonic merger we calculate the change in the thermal spectral SZ function, resulting from bow shock accelerated electrons. The merger shock compression factor determines the power law tail of the new non-thermal electron population and is directly related to a shift in the crossover frequency. This shift is typically a few percent towards higher frequencies. COSMIC RAYS ACCELERATED AT SHOCK WAVES IN LARGE SCALE STRUCTURE RYU DONGSU;KANG HYESUNG 477 Shock waves form in the intergalactic space as a ubiquitous consequence of cosmic structure formation. Using N-body/hydrodynamic simulation data of a $\Lambda$CDM universe, we examined the properties of cosmological shock waves including their morphological distribution. Adopting a diffusive shock acceleration model, we then calculated the amount of cosmic ray energy as well as that of gas thermal energy dissipated at the shocks. Finally, the dynamical consequence of those cosmic rays on cluster properties is discussed.
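The diffusive-shock-acceleration model invoked in the last abstract (and in the Kang & Jones contribution above) can be made concrete with the textbook test-particle relations between sonic Mach number, compression ratio, and the slope of the accelerated particle spectrum. The sketch below evaluates these standard relations only; it is not the CR-modified, time-dependent calculation the authors describe.

```python
# Test-particle DSA: compression ratio r and momentum power-law index q
# (f(p) ~ p^-q) as a function of sonic Mach number for a gamma = 5/3 gas.
GAMMA = 5.0 / 3.0

def compression_ratio(M):
    return (GAMMA + 1.0) * M**2 / ((GAMMA - 1.0) * M**2 + 2.0)

def dsa_index(M):
    r = compression_ratio(M)
    return 3.0 * r / (r - 1.0)

for M in (2, 3, 5, 10, 100):
    print(f"M = {M:3d}: r = {compression_ratio(M):.2f}, q = {dsa_index(M):.2f}")
```

Weak shocks ($M \lesssim 3$) give steep spectra ($q \ge 4.5$), while strong shocks asymptote to $r \to 4$ and $q \to 4$, which is the qualitative reason the acceleration efficiency rises steeply with Mach number in the simulations summarized above.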
COSMIC RAY ACCELERATION DURING LARGE SCALE STRUCTURE FORMATION BLASI PASQUALE 483 Clusters of galaxies are storage rooms of cosmic rays. They confine the hadronic component of cosmic rays over cosmological time scales due to diffusion, and the electron component due to energy losses. Hadronic cosmic rays can be accelerated during the process of structure formation, because of the supersonic motion of gas in the potential wells created by dark matter. At the shock waves that result from this motion, charged particles can be energized through the first order Fermi process. After discussing the most important evidence for non-thermal phenomena in large scale structures, we describe in some detail the main issues related to the acceleration of particles at these shock waves, emphasizing the possible role of the dynamical backreaction of the accelerated particles on the plasmas involved. PARTICLE ACCELERATION AND NON-THERMAL EMISSION FROM GALAXY CLUSTERS BRUNETTI GIANFRANCO 493 The existence and extent of non-thermal phenomena in galaxy clusters is now well established. A key question in our understanding of these phenomena is the origin of the relativistic electrons, which may be constrained by the modelling of the fine radio properties of radio halos and of their statistics. In this paper we argue that present data favour a scenario in which the emitting electrons in the intracluster medium (ICM) are reaccelerated in situ on their way out. An overview of turbulent-particle acceleration models is given focussing on recent time-dependent calculations which include a full coupling between particles and MHD waves. BLACK HOLE-IGM FEEDBACK, AND LINKS TO IGM FIELDS AND CR'S KRONBERG PHILIPP P. 501 The uniquely large dimensions of Giant radio galaxies (GRGs) make it possible to probe for stringent limits on total energy content, Faraday rotation, Alfven speeds, particle transport and radiation loss times. All of these quantities are more stringently limited or specified for GRG's than in more 'normal' FRII radio sources. I discuss how both global and detailed analyses of GRG's lead to constraints on the CR electron acceleration mechanisms in GRG's and by extension in all FRII radio sources. The properties of GRG's appear to rule out large scale Fermi-type shock acceleration. The plasma parameters in these systems set up conditions that are favorable for magnetic reconnection, or some other very efficient process of conversion of magnetic to particle energy. We conclude that whatever mechanism operates in GRG's is probably the primary extragalactic CR acceleration mechanism in the Universe. SIMULATING NONTHERMAL RADIATION FROM CLUSTER RADIO GALAXIES TREGILLIS I. L.;JONES T. W.;RYU DONGSU 509 We present results from an extensive synthetic observation analysis of numerically-simulated radio galaxy (RG) jets. This analysis is based on the first three-dimensional simulations to treat cosmic ray acceleration and transport self-consistently within a magnetohydrodynamical calculation. We use standard observational techniques to calculate both minimum-energy and inverse-Compton field values for our simulated objects. The latter technique provides meaningful information about the field. Minimum-energy calculations retrieve reasonable field estimates in regions physically close to the minimum-energy partitioning, though the technique is highly susceptible to deviations from the underlying assumptions. We also study the reliability of published rotation measure analysis techniques.
We find that gradient alignment statistics accurately reflect the physical situation, and can uncover otherwise hidden information about the source. Furthermore, correlations between rotation measure (RM) and position angle (PA) can be significant even when the RM is completely dominated by an external cluster medium. LOW-LEVEL RADIO EMISSION FROM RADIO GALAXIES AND IMPLICATIONS FOR THE LARGE SCALE STRUCTURE KRISHNA GOPAL;WIITA PAUL J.;BARAI PARAMITA 517 We present an update on our proposal that during the 'quasar era' (1.5 $\le$ z $\le$ 3), powerful radio galaxies could have played a major role in the enhanced global star-formation, and in the widespread magnetization and metal pollution of the universe. A key ingredient of this proposal is our estimate that the true cosmological evolution of the radio galaxy population is likely to be even steeper than what has been inferred from flux-limited samples of radio sources with redshift data, when an allowance is made for the inverse Compton losses on the cosmic microwave background which were much greater at higher redshifts. We thus estimate that a large fraction of the clumps of proto-galactic material within the cosmic web of filaments was probably impacted by the expanding lobes of radio galaxies during the quasar era. Some recently published observational evidence and simulations which provide support for this picture are pointed out. We also show that the inverse Compton x-ray emission from the population of radio galaxies during the quasar era, which we inferred to be largely missing from the derived radio luminosity function, is still only a small fraction of the observed soft x-ray background (XRB) and hence the limit imposed on this scenario by the XRB is not violated. THE ORDERING OF MAGNETIC FIELDS IN THE COSMOS BIERMANN PETER L.;KRONBERG PHILIPP P. 527 It is argued that the key task in understanding magnetic fields in the cosmos is to comprehend the origin of their order or coherence over large length scales in galaxies. Obtaining magnetic fields can be done in stars, whose lifetime is usually $10^{10}$ rotations, while galactic disks have approximately 20 to 50 rotations in their lifetime since the last major merger, which established the present day gaseous disk. Disorder in the galactic magnetic fields is injected on the disk time scale of about 30 million years, about a tenth of the rotation period, so already after one half rotation it should become completely disordered. Therefore whatever mechanism Nature is using, it must compete with such a short time scale, to keep order in its house. This is the focal quest. GENERATION OF MAGNETIC FIELDS IN COSMOLOGICAL SHOCKS MEDVEDEV MIKHAIL V.;SILVA LUIS O.;FIORE MASSIMILIANO;FONSECA RICARDO A.;MORI WARREN B. 533 The origin of magnetic fields in the universe remains an outstanding problem in cosmology. We propose that these fields are produced by shocks during the large-scale structure formation. We discuss the mechanism of the field generation via the counter-streaming (Weibel) instability. We also show that these Weibel-generated fields are long-lived and weakly coupled to dissipation. Subsequent field amplification by the intra-cluster turbulence may also take place, thus maintaining the magnetic energy density close to equipartition.
A GRADIENT-T SZE HATTORI MAKOTO;OKABE NOBUHIRO 543 The inverse Compton scattering of the cosmic microwave background (CMB) radiation with electrons in the intracluster medium, which has a temperature gradient, was examined by the third-order perturbation theory of the Compton scattering. A new type of spectrum distortion of the CMB was found and named the gradient-T Sunyaev-Zel'dovich effect (gradT SZE). The spectrum has a universal shape. There is a zero distortion point, the crossover frequency, at 326 GHz. When the hotter region is located closer to the observer, the intensity becomes brighter than the CMB in the frequency region lower than the crossover frequency and fainter than the CMB in the frequency region higher than the crossover frequency. When the cooler region is located closer to the observer, the distorted part of the spectrum has the opposite sign to the above case. The amplitude of the spectrum distortion does not depend on the electron density and depends on the heat conductivity and the total temperature variation along a line of sight. Therefore, the gradT SZE provides a unique opportunity to measure the thermally non-equilibrium electron momentum distribution function in the ICM and, combined with the X-ray measurements of the electron temperature distribution, provides an opportunity for direct measurement of the heat conductivity in the ICM. GENERATION OF MAGNETIC FIELDS BY TEMPERATURE GRADIENTS OKABE NOBUHIRO;HATTORI MAKOTO 547 We showed that magnetic fields are generated in a plasma which has temperature inhomogeneities. The mechanism is the same as the Weibel instability because the velocity distribution functions are at non-equilibrium and anisotropic under the temperature gradients. The growth timescale is much shorter than the dynamical time of structure formation. The coherence length of the magnetic fields at the saturation time is much shorter than the kpc scale and then, in the nonlinear phase, becomes longer by an inverse-cascade process. We report the application of our results to clusters of galaxies, not including hydrodynamic effects. LARGE SCALE MAGNETOGENESIS THROUGH RADIATION PRESSURE LANGER MATHIEU;PUGET JEAN-LOUP;AGHANIM NABILA 553 We present a new model for the generation of magnetic fields on large scales occurring at the end of cosmological reionisation. The inhomogeneous radiation provided by luminous sources and the fluctuations in the matter density field are the major ingredients of the model. More specifically, differential radiation pressure acting on ions and electrons gives rise to electric currents which induce magnetic fields on large scales. We show that on protogalactic scales, this process is highly efficient, leading to magnetic field amplitudes of the order of $10^{-11}$ Gauss. While remaining of negligible dynamical impact, those amplitudes are a million times higher than those obtained in usual astrophysical magnetogenesis models. Finally, we derive the relation between the power spectrum of the generated field and the one of the matter density fluctuations. We show in particular that magnetic fields are preferentially created on large (galactic or cluster) scales. Small scale magnetic fields are strongly disfavoured, which further makes the process we propose an ideal candidate to explain the origin of magnetic fields in large scale structures. THERMAL CONDUCTION IN MAGNETIZED TURBULENT GAS CHO JUNGYEON;LAZARIAN A. 557 We discuss diffusion of particles in turbulent flows.
In hydrodynamic turbulence, it is well known that the distance between two particles embedded in a turbulent flow exhibits a random walk behavior. The corresponding diffusion coefficient is $\sim v_{inj}\,l_{turb}$, where $v_{inj}$ is the amplitude of the turbulent velocity and $l_{turb}$ is the scale of the turbulent motions. It is not clear whether or not we can use a similar expression for magnetohydrodynamic turbulence. However, numerical simulations show that mixing motions perpendicular to the local magnetic field are, to a high degree, hydrodynamical. This suggests that turbulent heat transport in a magnetized turbulent fluid should be similar to that in a non-magnetized one, which should have a diffusion coefficient $v_{inj}\,l_{turb}$. We review numerical simulations that support this conclusion. The application of this idea to thermal conductivity in clusters of galaxies shows that this mechanism may dominate the diffusion of heat and may be efficient enough to prevent cooling flow formation when turbulence is vigorous. TURBULENCE STATISTICS FROM SPECTRAL LINE OBSERVATIONS LAZARIAN A. 563 Turbulence is a crucial component of the dynamics of astrophysical fluids, including those of the ISM, clusters of galaxies and circumstellar regions. Doppler shifted spectral lines provide a unique source of information on turbulent velocities. We discuss Velocity-Channel Analysis (VCA) and its offspring Velocity Coordinate Spectrum (VCS), which are based on the analytical description of the spectral line statistics. Those techniques are well suited for studies of supersonic turbulence. We stress that a great advantage of VCS is that it does not necessarily require good spatial resolution. Addressing the studies of mildly supersonic and subsonic turbulence, we discuss the criterion that allows one to determine whether Velocity Centroids are dominated by density or velocity. We briefly discuss ways of going beyond power spectra by using higher order correlations as well as genus analysis. We outline the relation between Spectral Correlation Functions and the statistics available through VCA and VCS. TURBULENCE PRODUCED BY TSUNAMIS IN GALAXY CLUSTERS FUJITA YUTAKA;MATSUMOTO TOMOAKI;WADA KEIICHI 571 Clusters of galaxies are filled with X-ray emitting hot gas with temperatures of T $\sim$ 2-10 keV. Recent X-ray observations have been revealing unexpectedly that many cluster cores have complicated, peculiar X-ray structures, which imply dynamical motion of the hot gas. Moreover, X-ray spectra indicate that radiative cooling of the cool gas is suppressed by unknown heating mechanisms (the 'cooling flow problem'). Here we propose a novel mechanism reproducing both the inhomogeneous structures and dynamics of the hot gas in the cluster cores, based on state-of-the-art hydrodynamic simulations. We showed that acoustic-gravity waves, which are naturally expected during the process of hierarchical structure formation of the universe, surge in the X-ray hot gas, causing a serious impact on the core. This reminds us of tsunamis on the ocean surging into a distant island. We found that the waves create fully-developed, stable turbulence, which reproduces the complicated structures in the core. Moreover, if the wave amplitude is large enough, they can suppress the cooling of the core. The turbulence could be detected in near-future space X-ray missions such as ASTRO-E2.
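An order-of-magnitude feel for the turbulent heat-diffusion coefficient $D \sim v_{inj}\,l_{turb}$ discussed in the thermal-conduction abstract above: the sketch below estimates the diffusion time across a cluster core. The turbulent velocity, injection scale, and core radius are illustrative assumptions, not values from the abstracts.

```python
# Turbulent heat diffusion time across a cluster core, t ~ R^2 / (v_inj * l_turb).
# All numerical values here are illustrative assumptions.
kpc = 3.086e21          # cm
Gyr = 3.156e16          # s

v_inj = 3.0e7           # cm/s (300 km/s turbulent velocity)
l_turb = 30.0 * kpc     # injection scale
R = 100.0 * kpc         # core radius

D = v_inj * l_turb
t_diff = R**2 / D
print(f"t_diff ~ {t_diff / Gyr:.1f} Gyr")
```

A diffusion time of order a Gyr is shorter than or comparable to typical cooling times of cluster cores, which is the sense in which vigorous turbulence "may be efficient enough to prevent cooling flow formation".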
MHD SIMULATIONS OF A MOVING SUB CLUMP WITH HEAT CONDUCTION ASAI NAOKI;FUKUDA NAOYA;MATSUMOTO RYOJI 575 High resolution observations of clusters of galaxies by Chandra have revealed the existence of an X-ray emitting comet-like galaxy C153 in the core of the cluster of galaxies A2125. The galaxy C153, moving fast in the cluster core, has a distinct X-ray tail on one side, obviously due to ram pressure stripping, since the galaxy C153 crossed the central region of A2125. The X-ray emitting plasma in the tail is substantially cooler than the ambient plasma. We present results of two-dimensional magnetohydrodynamic simulations of the time evolution of a sub clump like C153 moving in magnetized intergalactic matter. Anisotropic heat conduction is included. We found that the magnetic fields are essential for the existence of the cool X-ray tail, because in non-magnetized plasma the cooler sub clump tail is heated up by isotropic heat conduction from the hot ambient plasma and does not form such a comet-like tail. DETECTION OF EMISSION FROM WARM-HOT GAS IN THE UNIVERSE WITH XMM? BOWYER STUART;VIKHLININ ALEXEY 579 Recently, claims have been made of the detection of 'warm-hot' gas in the intergalactic medium. Kaastra et al. (2003) claimed detection of $\sim 10^6$ K material in the Coma Cluster but studies by Arnaud et al. (2001), and our analysis of the Chandra observations of Coma (Vikhlinin et al. 2001), find no evidence for a $10^6$ K gas in the cluster. Finoguenov et al. (2003) claimed the detection of $3\times10^6$ K gas slightly off-center from the Coma Cluster. However, our analysis of ROSAT data from this region shows no excess in this region. We propose an alternative explanation which resolves all these conflicting reports. A number of studies (e.g. Robertson et al., 2001) have shown that the local interstellar medium undergoes charge exchange with the solar wind. The resulting recombination spectrum shows lines of O VII and O VIII (Wargelin et al. 2004). Robertson & Cravens (2003) have shown that as much as $25\%$ of the Galactic polar flux is heliospheric recombination radiation and that this component is highly variable. Sporadic heliospheric emission could account for all the claims of detections of 'warm-hot' gas and explain the conflicts cited above. CLUSTER MERGERS AND NON-THERMAL PHENOMENA: A STATISTICAL MAGNETO-TURBULENT MODEL CASSANO R.;BRUNETTI G. 583 With the aim of investigating the statistical properties and the connection between thermal and non-thermal properties of the ICM in galaxy clusters, we have developed a statistical magneto-turbulent model which describes, at the same time, the evolution of the thermal and non-thermal emission from galaxy clusters. In particular, starting from the cosmological evolution of clusters, we follow cluster mergers, calculate the spectrum of the magnetosonic waves generated in the ICM during these mergers, the evolution of relativistic electrons and the resulting synchrotron and Inverse Compton spectra. We show that the broad band (radio and hard X-ray) non-thermal spectral properties of galaxy clusters can be well accounted for by our model for viable values of the parameters (here we adopt an EdS cosmology). OCCURRENCE AND LUMINOSITY FUNCTIONS OF GIANT RADIO HALOS FROM MAGNETO-TURBULENT MODEL CASSANO R.;BRUNETTI G.;SETTI G. 589 We calculate the probability to form giant radio halos ($\sim$ 1 Mpc size) as a function of the mass of the host clusters by using a Statistical Magneto-Turbulent Model (Cassano & Brunetti, these proceedings).
We show that the expectations of this model are in good agreement with the observations for viable values of the parameters. In particular, the abrupt increase of the probability to find radio halos in the more massive galaxy clusters ($M \ge 2\times10^{15}\,M_{\odot}$) can be well reproduced. We calculate the evolution with redshift of such a probability and find that giant radio halos can be powered by particle acceleration due to MHD turbulence up to z $\sim$ 0.5 in a $\Lambda$CDM cosmology. Finally, we calculate the expected Luminosity Functions of radio halos (RHLFs). At variance with previous studies, the shape of our RHLFs is characterized by the presence of a cut-off at low synchrotron powers which reflects the inefficiency of particle acceleration in the case of less massive galaxy clusters. FINDING COSMIC SHOCKS: SYNTHETIC X-RAY ANALYSIS OF A COSMOLOGICAL SIMULATION HALLMAN ERIC J.;RYU DONGSU;KANG HYESUNG;JONES T. W. 593 We introduce a method of identifying evidence of shocks in the X-ray emitting gas in clusters of galaxies. Using information from synthetic observations of simulated clusters, we do a blind search of the synthetic image plane. The locations of likely shocks found using this method closely match those of shocks identified in the simulation hydrodynamic data. Though this method assumes nothing about the geometry of the shocks, the general distribution of shocks as a function of Mach number in the cluster hydrodynamic data can be extracted via this method. Characterization of the cluster shock distribution is critical to understanding production of cosmic rays in clusters and the use of shocks as dynamical tracers. THE CONTRIBUTION TO THE EXTRAGALACTIC γ-RAY BACKGROUND BY HADRONIC INTERACTIONS OF COSMIC RAYS PRODUCING EUV EMISSION IN CLUSTERS OF GALAXIES KUO PING-HUNG;BOWYER STUART;HWANG CHORNG-YUAN 597 A substantial number of processes have been suggested as possible contributors to the extragalactic $\gamma$-ray background (EGRB). Yet another contribution to this background will be emission produced in hadronic interactions of cosmic-ray protons with the cluster thermal gas; this class of cosmic rays (CRs) has been shown to be responsible for the EUV emission in the Coma Cluster of galaxies. In this paper we assume that the CRs in the Coma Cluster are prototypical of all clusters and derive the contribution to the EGRB from all clusters over time. We examine two different possibilities for the scaling of the CR flux with cluster size: the number density of the CRs scales with the number density of the thermal plasma, and alternatively, the energy density of the CRs scales with the energy density of the plasma. We find that in all scenarios the EGRB produced by this process is sufficiently low that it will not be observable in comparison with other mechanisms that are likely to produce an EGRB. LINEAR ANALYSIS OF PARKER-JEANS INSTABILITY WITH COSMIC-RAY KUWABARA TAKUHITO;KO CHUNG-MING 601 We present the results of the linear analysis for the Parker-Jeans instability in magnetized gas disks including the effect of cosmic-ray diffusion along the magnetic field lines. We adopted a uniformly rotating two-temperature layered disk with a horizontal magnetic field and solved the perturbed equations numerically. Fragmentation of gases takes place and filamentary structures are formed by the growth of the instability. Nagai et al.
(1998) showed that the direction of filaments being formed by the Parker-Jeans instability depends on the strength of pressure outside the unperturbed gas disk. We found that at some range of external pressures the direction of filaments is also governed by the value of the diffusion coefficient of CR along the magnetic field lines k. 3D SIMULATIONS OF RADIO GALAXY EVOLUTION IN CLUSTER MEDIA O'NEILL SEAN M.;SHEARER PAUL;TREGILLIS IAN L.;JONES THOMAS W.;RYU DONGSU 605 We present a set of high-resolution 3D MHD simulations exploring the evolution of light, supersonic jets in cluster environments. We model sets of high- and low-Mach jets entering both uniform surroundings and King-type atmospheres and propagating distances more than 100 times the initial jet radius. Through complimentary analyses of synthetic observations and energy flow, we explore the detailed interactions between these jets and their environments. We find that jet cocoon morphology is strongly influenced by the structure of the ambient medium. Jets moving into uniform atmospheres have more pronounced backflow than their non-uniform counterparts, and this difference is clearly reflected by morphological differences in the synthetic observations. Additionally, synthetic observations illustrate differences in the appearances of terminal hotspots and the x-ray and radio correlations between the high- and low-Mach runs. Exploration of energy flow in these systems illustrates the general conversion of kinetic to thermal and magnetic energy in all of our simulations. Specifically, we examine conversion of energy type and the spatial transport of energy to the ambient medium. Determination of the evolution of the energy distribution in these objects will enhance our understanding of the role of AGN feedback in cluster environments.
Reviewers' Comments Modeling epigenetic regulation of PRC1 protein accumulation in the cell cycle Marzena Dolbniak1, Marek Kimmel2, 1Email author and Jaroslaw Smieja1 © Dolbniak et al. 2015 Accepted: 2 September 2015 Epigenetic regulation contributes to many important processes in biological cells. Examples include developmental processes, differentiation and maturation of stem cells, evolution of malignancy and other. Cell cycle regulation has been subject of mathematical modeling by a number of authors that resulted in many interesting models and application of analytic techniques ranging from stochastic processes to partial differential equations and to integral, functional and operator equations. In this paper we address the question of how the regulation of protein contents influences the long-term dynamics of the population. To accomplish this, we follow the philosophy of a 1984 model by Kimmel et al., but adjust the details to fit the experimental data on protein PRC1 from a more recent paper. We built a model of cell cycle dynamics of the PRC1 and fitted it to the data made available by Cohen and his co-authors. We have run the model for a large number of cell generations, recording the PRC1 contents in all cells of the resulting pedigree, at constant time intervals. During cell division the PRC1 is unequally divided between daughter cells. The picture emerging from simulations of Data set 1 is that of a very well-tuned regulatory circuit that provides a stable distribution of PRC1 contents and interdivision times. Data set 2 seems qualitatively different, with more variation in cell cycle duration. The main question we address is whether the regulatory feedbacks deduced from single cell cycle data provide epigenetic regulation of cell characteristics in long run. PRC1 is a good candidate because of its role in setting timing of division. Findings of the current paper include tight regulation of the cell cycle (particularly the timing of the cell cycle) even that PRC1 is only one of the players in cell dynamics. Understanding that association, even close, does not necessarily imply causation, we consider this an interesting and important result. This article was reviewed by Ollivier Hyrien, Anna Marciniak-Czochra and Alberto d'Onofrio. Mathematical model PRC1 protein Stochastic fluctuations Asymmetric division Epigenetic regulation contributes to many important processes in biological cells. Examples include developmental processes, differentiation and maturation of stem cells, evolution of malignancy and other [1]. One of the processes, which have been studied for at least several decades, is regulation of cell size and cell cycle duration. More specifically, how the dynamics of protein production and the manner in which proteins are split between the two progeny cells leads to preservation of cell population age and size structure (homeostasis). A related question is under what circumstances these dynamics lead to phenomena such as bimodality and, as a consequence, separation of distinct cell subpopulations. Cell cycle regulation has been subject of mathematical modeling by a number of authors which resulted in many interesting models and application of analytic techniques ranging from stochastic processes to partial differential equations and to integral, functional and operator equations [2, 3]. Some of the models have been applied to data on bacterial and eukaryotic cells, also in the context of cancer modeling [4]. 
An example of an early model devised to capture cell cycle regulation and unequal division of mass among progeny cells is the model by Kimmel et al. [5] (Fig. 1), which considers the dynamics of the distribution of the total mass of cell RNA in a growing cell population. In that model, the birth-mass of a cell, represented by random variable (rv) X 0 , determines both the mass at division, X 2 , and the time, T, to division (cell-cycle duration). Schematics of the models of asymmetric division. a Model of Kimmel et al. (1984). The birth-mass of a cell, represented by random variable (rv) X 0, determines both the mass at division, X 2, and the time, T, to division (cell-cycle duration): X 2 = ϕ(X 0), T = ψ(X 0). At division, the parent-cell mass is randomly split between the two progeny cells, according to the expression X 0 ' = UX 2, X 0 ' ' = (1 − U)X 2 in which the random variable U is independent of rv X 2, and it is distributed symmetrically over the interval (0, 1), so that E(U) = 1/2. Uneven partition of mass among progeny cells is the only source of randomness in the basic model. b Model of Kimmel (1997). Large particles (biological cells) follow a binary fission process. Each of the large particles is born containing a number of small particles (genes, proteins, viruses, organelles), which multiply or decay during the large particle's lifetime. Small particles are then split between the two progeny of the large particle and the process continues in each of them. $$ {X}_2=\phi \left({X}_0\right),\quad T=\psi \left({X}_0\right) $$ At division, the parent-cell mass is randomly split between the two progeny cells, according to the expression $$ {X}_0'=U{X}_2,\quad {X}_0''=\left(1-U\right){X}_2 $$ in which the rv U is independent of rv X 2, and it is distributed symmetrically over the interval (0, 1), so that E(U) = 1/2. Uneven partition of mass among progeny cells is the only source of randomness in the basic model. A more general version of the model retains the deterministic mass growth, but includes stochastic time to division [6]. In other models, the growth of mass is deterministic or stochastic and division occurs when a randomly assigned mass threshold is reached [3]. The model leads to stable exponential growth, a process in which the numbers of cells in all possible subsets of (X 0, X 2, T) values grow exponentially at a rate defined by a Malthusian parameter λ. Please see the Conclusions Section for further remarks. Cell-to-cell differences are present in any cell population. The sources of variation in a population include extrinsic and intrinsic noise and are well characterized for many cell types (e.g., [7, 8]). Non-genetic intrinsic heterogeneity stems from the random (thermal) nature of the interactions of individual molecules, such as mRNA and proteins. Since some of these biomolecules are present in relatively small numbers in a cell, their stochastic fluctuations are, unlike in classical test-tube chemistry, not averaged out [1]. As stated above, an equally important source of heterogeneity is the unequal distribution of cellular mRNA and proteins between two daughter cells after cell division.
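To make the unequal-division recursion of Fig. 1a concrete, here is a minimal toy sketch (in Python) of following a single line of descent under the Kimmel et al. [5] scheme; the particular functional forms chosen for ϕ and ψ and the Beta split are illustrative placeholders, not the forms analysed in [5].

```python
import random

def phi(x0):
    # Placeholder division-mass map X2 = phi(X0): saturating within-cycle growth.
    return 1.0 + 2.0 * x0 / (1.0 + 0.1 * x0)

def psi(x0):
    # Placeholder cycle-duration map T = psi(X0): cells born larger divide sooner.
    return 10.0 / (1.0 + x0)

def follow_lineage(x0, generations, seed=0):
    """Track (birth mass X0, division mass X2, cycle time T) along one random lineage."""
    rng = random.Random(seed)
    history = []
    for _ in range(generations):
        x2, t = phi(x0), psi(x0)
        u = rng.betavariate(5, 5)   # symmetric split over (0, 1), E(U) = 1/2
        history.append((x0, x2, t))
        x0 = u * x2                 # the followed progeny receives the fraction U of X2
    return history

for gen, (x0, x2, t) in enumerate(follow_lineage(3.0, 8)):
    print(f"generation {gen}: X0 = {x0:.2f}, X2 = {x2:.2f}, T = {t:.2f}")
```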
In this paper we address the question of how the regulation of protein contents influences the long-term dynamics of the population. To accomplish this, we follow the philosophy of Kimmel et al. model [5], but adjust the details to fit the experimental data on protein PRC1 from the paper by Cohen et al. [9]. PRC1 protein is expressed at relatively high levels during S and G2/M phases of the cell cycle before dropping dramatically after mitotic exit and entrance into G1 phase. PRC1 is a substrate of several cyclin-dependent kinases (CDKs) and it has become a novel human protein of cytokinetic importance since its identification [10]. PRC1 takes part in midzone microtubule formation and is essential to the cytokinetic machinery of mammals, via collaboration with Kinesin-4 in setting up a controlled zone of overlapping, antiparallel microtubules at the spindle midzone [11]. Upon anaphase onset and removal of inhibitory CDK1 phosphorylation, PRC1 dimers form, which recruit Kinesin-4, a plus-end directed motor protein that inhibits microtubule dynamics, helps stabilize and regulate spindle microtubule assembly within cytokinesis. The PRC1-Kinesin-4 complex identifies and regulates the spindle midzone microtubules during cell division, which is crucial in order for cytokinesis to progress properly. Our model assumes that PRC1 dynamics contributes to determination of the duration of the cell cycle, with influence of other factors represented as noise. Further information concerning the role of PRC1 is found in the papers [12–16]: The role of PRC1 in cancer has been considered in references [12, 17] and the role in radiation resistance and stemness in reference [18]. As already mentioned, the model we use is patterned after the model by Kimmel et al. [5]. Dynamics of PRC1 during a single cell cycle are separated into two phases: degradation and accumulation. The following variables are used to describe the two phases: X 0 – the number of PRC1 molecules at the beginning of the cell cycle, X 1 – minimum number of molecules, at the end of the degradation phase and the beginning of the accumulation phase, X 2 – number of molecules at the end of the cell cycle, T 1 - duration of the degradation phase, T 2 – duration of the accumulation phase, a – protein degradation rate, and b – protein production rate. We assume that in the degradation phase the dynamics of the protein are described as exponential decay, while in the accumulation phase they are described as exponential growth $$ {X}_1={X}_0 \exp \left(-a{T}_1\right),\kern1em {X}_2={X}_1 \exp \left(b{T}_2\right) $$ We assume that following cell division, the protein is split unequally among the two progeny cells according to expressions (2). 
In the balanced exponential growth, X 0, X 2, and U are distributed identically in each cell and so we have X 0 = UX 2, which by independence of X 2 and U implies $$ V(X_0)=E(X_2^2)E(U^2)-E(X_2)^2E(U)^2=V(X_2)E(U^2)+E(X_2)^2V(U) $$ $$ \frac{V(X_0)}{E(X_0)^2}=\frac{V(X_2)E(U^2)}{E(X_0)^2}+\frac{E(X_2)^2V(U)}{E(X_0)^2} $$ Considering that because of symmetry E(U) = 1/2 and E(X 0) = E(X 2)/2, we obtain $$ \frac{V(X_0)}{E(X_0)^2}=4\frac{V(X_2)E(U^2)}{E(X_2)^2}+4V(U) $$ and passing to the coefficients of variation (\( cv_X=\sqrt{V(X)}/E(X) \) for rv X), we obtain $$ cv_{X_0}^2=cv_U^2\left(cv_{X_2}^2+1\right)+cv_{X_2}^2 $$ Solving the above for cv U 2 results in $$ cv_U^2=\frac{cv_{X_0}^2-cv_{X_2}^2}{cv_{X_2}^2+1} $$ Data-based model building We have at our disposal the four PRC1 data sets available from the Supplemental Data to reference [9]. Each of the data sets consists of individual-cell measurements of PRC1 contents collected at constant time intervals. The trajectories are depicted in Fig. 2, the legend of which contains the relevant details. Briefly, after division, the level of PRC1 decreases and then increases, to reach a maximum value immediately before division. Empirical and simulated trajectories of the PRC1 protein content for the 4 data sets from reference [9]. a Data set 1, published in the main body of [9]. The composite picture includes illustration of outlier removal, empirical trajectories excluding outliers, and simulated trajectories. b Data set 2. We use it for comparison with Data set 1. (c and d) Data sets 3 and 4. Detailed model description and estimation Based on data analysis, the model entails the following detailed principles: (1) ln(a) depends linearly on X 0 with additive Gaussian noise; (2) ln(T 1) (T 1 in the case of Data set 2) depends linearly on ln(a) with additive Gaussian noise; (3) $$ {X}_1={X}_0 \exp \left(-a{T}_1\right), $$ (4) ln(b) depends linearly on ln(a) and X 1 with additive Gaussian noise; (5) T 2 depends linearly on T 1 with additive Gaussian noise; (6) $$ {X}_2={X}_1 \exp \left(b{T}_2\right), $$ (7) X 0 ' = UX 2, X 0 ' ' = (1 − U)X 2, i.e., in multi-generation simulations, the next-generation starting protein contents are modeled using Eq. 2, where the distribution of the random variable U is assumed to belong to the Beta-family. Table 1 depicts the correlations computed from the data. They indicate a strong positive correlation between ln(a) and X 0, which supports item 1 above. Similarly, there exists a strong negative correlation between ln(T 1) and ln(a), which supports item 2. For Data set 2, the correlation is slightly better for T 1 and ln(a). Then, X 1 can be computed from the exponential decay expression as in item 3. Further, there exists a strong positive correlation between ln(b) and ln(a), and a strong negative correlation between ln(b) and X 1, which supports item 4. Finally, there exists a strong negative correlation between T 1 and T 2, which supports item 5. Then, X 2 can be computed from the exponential growth expression as in item 6.
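A minimal sketch of how a single cell cycle can be generated under principles (1)–(6) above (in Python): the regression coefficients and noise standard deviations below are rough placeholders standing in for the data-set-specific values of Table 2, chosen only so that the sketch produces a roughly doubling cycle, and should not be read as fitted values.

```python
import math
import random

rng = random.Random(1)

# Placeholder coefficients (stand-ins for Table 2; not fitted values).
B1, B10 = -1.6, 3.9e-6              # ln(a)  = B1 + B10*X0            + noise
B2, B21 = 0.5, -0.65                # ln(T1) = B2 + B21*ln(a)         + noise
B4, B41, B43 = 0.0, 0.8, -1.8e-5    # ln(b)  = B4 + B41*ln(a) + B43*X1 + noise
B5, B52 = 12.0, -0.6                # T2     = B5 + B52*T1            + noise

def one_cycle(x0):
    """Return (T1, T2, X1, X2) for a cell that starts its cycle with x0 PRC1 molecules."""
    ln_a = B1 + B10 * x0 + rng.gauss(0.0, 0.6)
    a = math.exp(ln_a)
    t1 = math.exp(B2 + B21 * ln_a + rng.gauss(0.0, 0.12))   # degradation phase (item 2)
    x1 = x0 * math.exp(-a * t1)                              # item 3
    ln_b = B4 + B41 * ln_a + B43 * x1 + rng.gauss(0.0, 0.45)
    b = math.exp(ln_b)
    t2 = max(B5 + B52 * t1 + rng.gauss(0.0, 0.5), 0.0)       # accumulation phase (item 5)
    x2 = x1 * math.exp(b * t2)                               # item 6
    return t1, t2, x1, x2

print(one_cycle(2.0e5))
```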
Data-derived correlations between pairs of variables characterizing the PRC1 trajectories in Data sets 1–4 Data set 1 T1 + T2 ln(a) ln(b) Detection and removal of outliers in the data The first step was finding and removing the outlier trajectories of PRC1 protein. In all 4 data sets the individual measurements differed with respect to the dynamic of this protein. We decided that the most important is how much the amount of PRC1 increases during cell cycle. When the X 2/X 0, ratio was calculated, in raw data in some cells the number of protein molecules at the end of the cell cycle was up to fifty times higher than at the beginning, which is biologically unlikely. We use a modification of Tukey test to eliminate the outliers [19]. Figure 2 shows the X 2/X 0 ratio and distributions before and after removal of outliers. We believe that outliers result from measurement errors. The beginning or the end of the cell cycle might have been identified incorrectly. Estimation of model parameters Following determination of the structure of the model, estimation of the coefficients of the linear relationships and the variances of noise has been accomplished using standard regression techniques [19]. Final relationships for Data set 1 are presented in Table 2. Table 3 presents the simulation-based counterparts of experimental correlations from Table 1 (Data set 1). Each cell has different values of all parameters. Regression-based coefficients for the 4 versions of the model based on Data sets 1–4, respectively −1.600 3.9 × 10−6 N(0, 0.62) ln(T1) N(0, 0.124) −1.78 × 10−5 −0.6564 7.82 × 10−5 N(0,0.45) N(0,0. 0.35) N(0,0.485) Simulation-based correlations between pairs of variables characterizing the PRC1 trajectories in Data sets 1–4 Comparison of the scatterplots of experimental vs. model-based relationships among model variables is depicted in Fig. 3 (Data sets 1–4). Comparison demonstrates a very good agreement of experiment and model-based data, especially for Data set 1, which was used to construct the model. Data-based vs. simulation-based scatterplots of relationships among principal model variables for Data sets 1–4 (a–d). Symbols: data-based, red diamonds; simulation-based, blue triangles We have not observed many cases where calculated parameter was negative. Nevertheless, when such a case happened we rejected the calculated parameter and draw another ε value. Effectively, this means we use Gaussian noise, conditional on nonnegativity. As for the unequal division, the coefficient of variation of the random variable U, characterizing the asymmetry of division, is estimated from Eq. (4). In simulations, the amount of proteins received by daughter cells was sampled from a symmetric beta distribution, which has the variance equal to V(U) = (8α + 4)− 1. Values of parameters of beta for all data sets are depicted in Table 4. Parameters of the beta distribution of the random variable U, characterizing the asymmetry of division Simulations of population dynamics We have run the model for a large number of cell generations, recording the PRC1 contents in all cells of the resulting pedigree, at constant time intervals. To generate the population of cells, simulation was started with a single ancestor cell. During cell division the PRC1 is unequally divided between daughter cells as described above. Results of long-term simulations of the model based on Data sets 1 and 2 are presented in Fig. 4. 
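The following is a minimal sketch of such a multi-generation simulation (in Python). The per-cycle step used here is a deliberately crude stand-in for the full stochastic cycle model sketched above, and the Beta shape parameter is illustrative rather than one of the fitted values of Table 4; only the bookkeeping of the growing pedigree and the unequal, Beta-distributed split of the PRC1 contents are meant to be taken literally.

```python
import math
import random

rng = random.Random(2)
BETA_SHAPE = 12.0   # symmetric Beta(alpha, alpha); V(U) = 1/(8*alpha + 4)

def end_of_cycle(x0):
    # Crude placeholder for the per-cycle step: end-of-cycle contents pulled
    # towards a set point, with a small multiplicative noise term.
    target = 4.0e5
    return target * (x0 / target) ** 0.3 * math.exp(rng.gauss(0.0, 0.05))

def divide(x2):
    """Split the end-of-cycle PRC1 contents unequally between the two daughters."""
    u = rng.betavariate(BETA_SHAPE, BETA_SHAPE)
    return u * x2, (1.0 - u) * x2

def grow_pedigree(x0_ancestor, n_generations):
    """Breadth-first simulation of the pedigree started from a single ancestor cell."""
    current = [x0_ancestor]
    history = [list(current)]
    for _ in range(n_generations):
        progeny = []
        for x0 in current:
            progeny.extend(divide(end_of_cycle(x0)))
        current = progeny
        history.append(list(current))
    return history

pedigree = grow_pedigree(2.0e5, 6)
print([len(generation) for generation in pedigree])   # 1, 2, 4, ..., 64 cells
print(sum(pedigree[-1]) / len(pedigree[-1]))          # mean birth contents, last generation
```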
The picture emerging from simulations of Data set 1 is that of a very well-tuned regulatory circuit that provides a stable distribution of PRC1 contents and interdivision times. Outliers, usually particularly high values of X 2, appear sporadically and are eliminated in the succeeding 1 or 2 generations. Data set 2 seems qualitatively different, with more variation in cell cycle duration. Results of long-term simulations of the model based on Data set 1 (a) and Data set 2 (b). Composite figures include: dynamics of the time series of the PRC1 contents in a randomly chosen lineage of descendants of the ancestor cell; and color-scale depiction of the simulated genealogies in linear scale (highlighting outliers) and semi-logarithmic scale. Simplified mathematical model explaining the cell cycle regulation The equations of the model can be written explicitly, including the noise terms, based on the detailed model description (items 1–6 from the list earlier on), with parameter values depending on the data set, as listed in Table 2. It is instructive to consider a model stripped of noise, which very clearly shows the straightforward nature of cell cycle regulation as estimated from the data. The model will be shown to be very robust to noise introduced by asymmetric division and, at least numerically, this can be extended to any source of noise. The equations of the model without noise are as follows: $$ \begin{array}{l} \ln (a)=b_1+b_{10}X_0\\ \ln (T_1)=b_2+b_{21}\ln (a) \quad (\text{Data sets } 1, 3, 4)\\ T_1=b_2+b_{21}\ln (a) \quad (\text{Data set } 2)\\ \ln (X_1)=\ln (X_0)-aT_1\\ \ln (b)=b_4+b_{41}\ln (a)+b_{43}X_1\\ T_2=b_5+b_{52}T_1\\ \ln (X_2)=\ln (X_1)+bT_2 \end{array} $$ Explicit expressions for all the variables can be found, although they are cumbersome. In particular, we obtain the functions X 2 = ϕ(X 0), T = ψ(X 0), with T = T 1 + T 2, which define the Kimmel et al. model [5]. Figure 5 depicts X 1, X 2, T 1, and T 2, as functions of X 0, for Data sets 1 and 2. The relationships are somewhat different in the two cases; Data set 1 exhibits a monotonically increasing dependence of X 2 on X 0, while for Data set 2, the X 2 graph attains a maximum and then decays to 0. Remarkably, T 1 and T 2 are practically constant as functions of X 0 in both cases. Relationships derived from the deterministic version of the model based on Data sets 1 (a) and 2 (b). Horizontal axis, X 0; graphs depict variables X 1, X 2, T 1 and T 2, as functions of X 0. In the case of the deterministic model (5), if in addition the divisions are symmetric, the equilibrium value of X 0 satisfies the equation $$ X_0=\phi (X_0)/2 $$ as shown in Fig. 6. Graphical depiction of the equilibria based on the model and Data sets 1 (a) and 2 (b). Horizontal axis, X 0; graph of X 2 as a function of X 0 (continuous line), intersected with the graph of 2X 0. However, with asymmetric division, the equilibrium is disrupted since $$ X_0'=U\phi (X_0) $$ where X 0 ' is the initial PRC1 contents in the randomly chosen progeny. This results in the values of X 0 oscillating from one generation to another, while the values of X 2 = ϕ(X 0) are much less affected (Fig. 7), which illustrates the efficiency of the regulatory mechanism.
This is exactly the case considered in [5] and in [20] and the equilibrium distribution can be computed using methods of these papers. Finally, if the full stochastic model is used, then both X 0 and X 2 oscillate, since the uncertainty embedded in the model counteracts the regulatory feedback. This latter case has not been studied analytically. Oscillations of the PRC1 levels before and after division, generated by models based on Data sets 1 (a) and 2 (b). For several generations of a randomly chosen lineage started by an ancestral cell, series of values of X 0 and X 2 (interpolated by smooth lines for optical convenience) are depicted. Continuous lines, X 0; dashed lines X 2. For the case of a deterministic model with asymmetry of divisions being the only source of randomness, the continuous line is superimposed on the reference deterministic equilibrium Parent-progeny and sib-sib correlations These correlations were computed in [9] for Data set 1. We present model-based correlations in Fig. 8. They are somewhat different from those in the original paper, which however may be the question of scaling and color-coding. Simulation-based correlations of PRC1 protein for each time point of the cell cycle (percentage) on (a) Data set 1, and (b) Data set 2 It is interesting and important to understand the mechanisms of epigenetic regulation in proliferating eukaryotic cells. There exist a number of models, with very strong experimental background, which explain the interplay of signaling pathways underlying the timing of cell division including stochastic effects [21]. In addition to this, there exists a very large body of literature addressing experimental relationships among cell size at birth, duration of the cell cycle and asymmetry of division. Idiosyncratically, we may mention models of Kimmel et al. [5], Dyson et al. [3], and Di Talia et al. [22]. One of these models [23] based on observations on embryonic cells led to a bimodal distribution of cell sizes in the population. Asymmetry of division has been deemed to play a major role in generation of variability in cell populations. Various molecular mechanisms may underlie asymmetric cell division. For example, in our previous works, we use a stochastic model based on branching processes, which qualitatively describe new wave of single cell-based observation. The model, originally devised in [24] to model evolution of unstable gene amplification and then analyzed mathematically by other, is presented in Fig. 1b. We consider a set of large particles (biological cells), following a binary fission process. Each of the large particles is born containing a number of small particles (genes, proteins, viruses, organelles), which multiply or decay during the large particle's lifetime. The arising population of small particles is then split between the two progeny of the large particle and the process continues in each of the progeny. This "division-within-division" or "branching-within-branching" occurs in various settings in cell and molecular biology. Examples include tightly regulated phenomena such as replication of chromosomal DNA, but also processes in which the number of objects produced in each biological cell is a random variable. Recent progress in single-cell measurement techniques enabled a much more precise look at cell cycle kinetic in individual cells. We based our modeling up-to-date on the publicly available data from Alon's laboratory. 
They tracked levels of a number of cell-cycle related and other proteins, some of them over a number of cell cycles, and presented synthetic statistics for some of them [9]. Methodologically, we developed a relatively simple model allowing us to peel off layers of stochasticity related to intermediate stages in PRC1 regulation. The initial variability of PRC1 is partly cancelled by resetting it to a low level and then increasing its contents until division. We do not know how the timing of the minimum of PRC1 is related to the cell-cycle phases; this is an interesting question in itself. The model, when stripped of stochasticity except for asymmetry of division, reduces to the old model of Kimmel et al. [5], which has been completely characterized mathematically [20] using tools of the operator semigroup theory. It may be mentioned that in another paper, Arino and Kimmel [25] analyzed a model which included more stochastic elements than the original model in ref. [5]. In that model, in addition to the asymmetric division, the time the progeny cell spends in the cycle is a random variable with conditional distribution density given its birth size. It has been demonstrated that this approach, originating in the theory of branching processes, is essentially equivalent to the more typical (at the time) formulation in the form of a transport partial differential equation with a nonlocal feedback through the boundary condition. For further discussion, see ref. [25]. In the current paper, we used a mathematical model to reproduce experimental trajectories of the PRC1 protein published in [9] and extend the results to model long-range dynamics of the cell population. The main question we address is whether the regulatory feedbacks deduced from single cell cycle data provide epigenetic regulation of cell characteristics in the long run. PRC1 protein is regulated by the cell cycle. This protein is absolutely required in cytokinesis; without it the cell cannot divide to form two daughter cells [26]. PRC1 is a good candidate because of its role in setting the timing of division. Findings of the current paper include tight regulation of the cell cycle (particularly the timing of the cell cycle) even though PRC1 is only one of the players in cell dynamics. Understanding that association, even a close one, does not necessarily imply causation, we consider this an interesting and important result. In recent publications, authors analyzed single-cell data. Authors of the first paper [27] used the Fucci system (the first marker indicates G0/G1 phases, and the second one the S/G2/M phases) to calculate the length of the cell cycle and the cell cycle phases. They calculated correlations between parent-progeny (no correlation), siblings and cousins (high correlations) cell cycle lengths. The results obtained can be explained by circadian clock control over the mammalian cell cycle. The dynamics of the two Fucci markers were not analyzed, so it is difficult to compare ref. [28] with our work, which is mainly focused on protein dynamics. In another paper [28], the authors analyzed what impact intrinsic and extrinsic noise have on the cell signal response and how cells can eliminate variability caused by extrinsic noise. They performed single-cell measurements of three key signal pathways: extracellular signal-regulated kinase, calcium and nuclear factor kappa-B. Again, this paper has a different focus. Also recently, single-cell expression of cell cycle regulators was analyzed in ref.
[29], but the authors explained variability in cell cycle length in the terms of a mammalian clock control. To confirm their theory they proposed a simple linear mathematical model. We used a more parsimonious paradigm of correlation and regression methods to predict what directed influences on a cell cycle are caused by number of protein. The novelty of the present study is the combination of single cell experimental data, correlation analysis and mathematical modeling of individual cell dynamics. First of all we would like to thank the referees for their comments and suggestions that were addressed as follows: Reviewer's report 1: Prof. Ollivier Hyrien, University of Rochester This manuscript deals with studying the contribution of PRC1 to the regulation of the cell cycle. A mathematical framework is proposed that describes (1) the dynamics of PRC1 during the cell cycle, (2) the random allocation of the protein at division, and (3) cell kinetics. The model is interesting and developed based on an earlier stochastic model proposed by Kimmel and colleagues (1984). An application of this model is presented in which the authors analyze data on the protein dynamics in H1299 non-small cell lung cancer cell lines published by Cohen et al. (2009). Simulations indicate that the model achieves a good description of experimental data. Authors' response: Thank you for a positive overview. Page 5, Eq. 3. Are the parameters a and b constrained to be positive? Are they random (i.e., cell-specific) or are they identical across cells? Authors' response: Parameters a and b are always positive. Every cell has different values of these parameters. Based on initial number of molecules (X 0 ) we use linear regression to calculate log(a) $$ log(a) = {b}_1 + {b}_{10}{X}_0 + \varepsilon $$ b 1 , b 10 and ε are as described in Table 2 (similar principles are used to estimate log(b)). They differ among data sets. As you can see, we do not have to constrain a and b to be positive. We add additional information on page 5. Page 6: "T2 depends linearly on T1 with additive Gaussian noise". Since T2 is a duration, perhaps what is meant here is simply that T2 is linearly associated with T1, without making any distributional assumption about the noise. This would preserve the positivity of T2. Authors' response: The noise term is needed to obtain agreement with the data. We have not observed many cases where calculated T 2 was a negative value. Nevertheless, when such case happened we rejected calculated T 2 and drew another ε value. Effectively, this means we use Gaussian noise, conditional on nonnegative T 2 . Page 7: "… at the end of the cell cycle was up to 50 times higher than at the beginning, which is biologically unlikely". Is it the beginning of the cell cycle of the beginning of the accumulation phase. Also, could the authors comment briefly on possible explanations for why the ratio X 2/X1 was so high in some cells? For example, could this be due to the nonlinearity of the relationship between fluorescence intensity and number of molecules, or to measurement errors? Authors' response: Apologies for misprint. We calculate the X 2 /X 0 ratio, so the ratio between number of molecules at the beginning and at the end of the cell cycle. We believe that outliers result from measurement errors. The beginning or the end of the cell cycle might have been identified wrongly. The measurements were performed before 2009, when cell tracking was less well developed. 
Page 7: Was parameter estimation performed on the trajectory of protein concentration for each cell individually? Pages 8–9: In running model simulations, were the random times T1 and T2 assumed to follow specific distributions? Authors' response: Parameters in equations (5) were estimated using data from all cells. In this paper, we have not made any distributional assumptions, except for Gaussian distributions of noise terms. Page 9: The dependence structure induced by the assumed mechanism of protein dynamics is an interesting feature of the model. Could the authors elaborate on the comparison between model-based correlations and those obtained from experimental data? Authors' response: Model-based (Table 3 ) and data-based correlations (Table 1 ) are in good agreement, particularly when their absolute values are high. The agreement is best for Data set 1, which was used as a reference to create the model. Reviewer's report 2: Prof. Anna Marciniak-Czochra, University of Heidelberg Authors consider a mathematical model of epigenetic control of the cell cycle, taking into account stochastic effects and resulting heterogeneity of cell population. Such models have been conceived in the past (see Kimmel's own model published in 1984), but have been largely abandoned for the lack of precise measurements of biomolecules at a single-cell scale. The topic has become important in part because of progress in quantification and in part because of recent emphasis on epigenetic controls of the cell cycle. The model employs publicly available data from Alon's laboratory, in particular, single-cell trajectories of the PRC1 protein involved in cell cycle controls. Using a multistep estimation procedure, authors successfully build a model that reconstructs the stochastic dynamics at the single-cell level and marginal and joint distributions of most of the meaningful parameters. Authors also demonstrate that if stripped of various layers of dynamics, the model can be reduced to Kimmel et al. 1984 model of cell cycle regulation. Also, it leads to cell population homeostasis when run for extended times. There are some interesting points that the authors should address before the paper becomes suitable for Biology Direct: The model in its mathematical framework considers a factor (protein) that may be an active regulator of the cell cycle. It might be worthwhile to discuss if PRC1 qualifies as such factor. Authors' response: PRC1 protein is not an active regulator of the cell cycle per se, but it is regulated by the cell cycle. This protein is absolutely required in cytokinesis, without it cell cannot divide to form two daughter cells. More precisely, the central spindle bundle is not formed and this prevents the final abscission event [26]. We think that because of a strong correlation to cell-division events, PRC1 protein qualifies to be used in our model. Authors provide simulations of the long-term dynamics of the model under variable levels of stochasticity. What is missing is a discussion of mathematical results that might be relevant for establishing long-term homeostasis of the model. Authors' response: Recently, single-cell expression of cell cycle regulators was analyzed in ref. [29], but the authors explained variability in cell cycle length in the terms of a mammalian clock control. To confirm their theory they proposed a simple linear mathematical model. 
We used a more parsimonious paradigm of correlation and regression methods to predict what directed influences on the cell cycle are caused by the number of protein molecules. The novelty of the present study is the combination of single cell experimental data, correlation analysis and mathematical modeling of individual cell dynamics. Finally, recently, there have been a number of new papers published, which either involve similar models, or show new techniques for obtaining data at the single-cell level (see Nature 2015, 519, 468–471, or Science 2014, 346, 1370–1373). Enhanced discussion of these models, compared to the model in the present manuscript, is desirable. Authors' response: In these two publications the authors analyzed single-cell data. Authors of the first paper [27] used the Fucci system (the first marker indicates G0/G1 phases, and the second one the S/G2/M phases) to calculate the length of the cell cycle and the cell cycle phases. They calculated correlations between parent-progeny (no correlation), siblings and cousins (high correlations) cell cycle lengths. The results obtained can be explained by circadian clock control over the mammalian cell cycle. The dynamics of the two Fucci markers were not analyzed, so it is hard to compare that with our work, which is mainly focused on protein dynamics. In the second paper [28] the authors analyzed what impact intrinsic and extrinsic noise have on the cell signal response and how cells can eliminate variability caused by extrinsic noise. They performed single-cell measurements of three key signal pathways: extracellular signal-regulated kinase, calcium and nuclear factor kappa-B. Reviewer's report 3: Prof. Alberto d'Onofrio, International Prevention Research Institute In this computational epigenetics work the authors investigate how the regulation of the PRC1 protein content influences the long term behaviour of a cellular population. The general topic of how epigenetic changes impact on a population is one of the most important of molecular biology, and it is at the interface between systems biology and population dynamics. I think that the idea of the manuscript is very good, the work is well written (apart from an important minor detail) and the results are of interest. Authors' response: Thank you for a very positive comment. I recommend its acceptance provided a minor but important change is implemented. The change concerns the fact that in this work the authors do not provide enough mathematical details of the original model of unequal divisions by Kimmel et al. (1984), which makes the paper more difficult to read for those who, unlike myself, did not read it. Authors' response: We included additional information about the model of Kimmel et al. 1984. I suggest inserting the supplemental figure S1 in the full text. Authors' response: We included Figure S1 in the main body of the paper (currently Fig. 3). There are some typos. For example in the abstract "we follow the philosophy of a 1984 model by Kimmel model" Authors' response: Apologies for the typos. We corrected all of them. PRC1: Protein Regulator of cytokinesis 1 CDKs: Cyclin-dependent kinases H1299: Human non-small cell lung carcinoma cell line The authors were supported by the National Science Center (Poland) grant nr DEC-2012/04/A/ST7/00353 (MD, MK, JS) to Marek Kimmel, and by grants from the Division of Mathematical Sciences of the National Science Foundation DMS-1361411 to Marek Kimmel. Additionally, MD is a holder of a scholarship from the DoktoRiS Scholarship Program for Innovative Silesia.
MD took part in model building, carried out parameter estimation and simulation, prepared the manuscript. MK designed the model, supervised parameter estimation and simulation, prepared the manuscript. JS initiated the project, researched literature, supervised simulations, prepared the manuscript. All authors read and approved the manuscript. Systems Engineering Group, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland Departments of Statistics and Bioengineering, Rice University, MS 138, 6100 Main, Houston, TX 77005, USA Huang S. Non-genetic heterogeneity of cells in development: more than just noise. Development. 2009. doi:10.1242/dev.035139.Google Scholar Patsy H, Jagers P, Vatutin VA. Branching processes: variation, growth, and extinction of populations, Vol. 5. Cambridge: Cambridge University Press; 2005.Google Scholar Janet D, Villella-Bressan R, Webb G. A nonlinear age and maturity structured model of population dynamics: I Basic theory. J Math Anal Appl. 2000;242(1):93–104.View ArticleGoogle Scholar Kimmel M, Axelrod D. Branching processes in biology. 2nd ed. New York: Springer; 2015.Google Scholar Kimmel M, Darzynkiewicz Z, Arino O, Traganos F. Analysis of a cell cycle model based on unequal division of metabolic constituents to daughter cells during cytokinesis. J Theor Biol. 1984;110(4):637–64.View ArticlePubMedGoogle Scholar Arino O, Kimmel M, Zerner M. Analysis of a cell population model with unequal division and random transition. In: Arino O, Axelrod DE, Kimmel M, editors. Mathematical population dynamics. New York: Marcel Dekker; 1991. p. 3–12.Google Scholar Elowitz MB, Levine AJ, Siggia ED, Swain PS. Stochastic gene expression in a single cell. Science. 2002;297:1183–6.View ArticlePubMedGoogle Scholar Altschuler SJ, Wu LF. Cellular heterogeneity: Do differences make a difference? Cell. 2010;141:559–63.PubMed CentralView ArticlePubMedGoogle Scholar Cohen AA, Kalisky T, Mayo A, Geva-Zatorsky N, Danon T. Protein dynamics in individual human cells: experiment and theory. PLoS ONE. 2009. doi:10.1371/journal.pone.0004901.Google Scholar Jiang W, Jimenez G, Wells NJ, Hope TJ, Wahl GM, Hunter T, et al. PRC1: a human mitotic spindleassociated CDK substrate protein required for cytokinesis. Mol Cell. 1998. doi:10.1016/S1097-2765(00)80302-0.PubMedGoogle Scholar Bechstedt S, Brouhard GJ. Motors and MAPs collaborate to size Up microtubules. Dev Cell. 2013;26(2):118–20. doi:10.1016/j.devcel.2013.07.010.View ArticlePubMedGoogle Scholar Piunti A, Rossi A, Cerutti A, Albert M, Jammula S, Scelfo A, et al. Polycomb proteins control proliferation and transformation independently of cell cycle checkpoints by regulating DNA replication. Nat Commun. 2014. doi:10.1038/ncomms4649.PubMed CentralPubMedGoogle Scholar Hu B, Li S, Zhang X, Zheng X. HSCARG, a novel regulator of H2A ubiquitination by downregulating PRC1 ubiquitin E3 ligase activity, is essential for cell proliferation. Nucleic Acids Res. 2014. doi:10.1093/nar/gku230.Google Scholar Subramanian R, Ti SC, Tan L, Darst SA, Kapoor TM. Marking and measuring single microtubules by PRC1 and kinesin-4. Cell. 2013. doi:10.1016/j.cell.2013.06.021.Google Scholar van den Boom V, Rozenveld-Geugien M, Bonardi F, Malanga D, van Gosliga D, Heijink A, et al. Nonredundant and locus-specific gene repression functions of PRC1 paralog family members in human hematopoietic stem/progenitor cells. Blood. 2013. doi:10.1182/blood-2012-08-451666.PubMedGoogle Scholar Rayess H, Wang MB, Srivatsan ES. 
Cellular senescence and tumor suppressor gene p16. Int J Cancer. 2012. doi:10.1002/ijc.27316.PubMed CentralPubMedGoogle Scholar Boukarabila H, Saurin AJ, Batsche E, Mossadegh N, van Lohuizen M, Otte AP, et al. The PRC1 Polycomb group complex interacts with PLZF/RARA to mediate leukemic transformation. Gene Dev. 2009. doi:10.1101/gad.512009.PubMed CentralPubMedGoogle Scholar Gieni RS, Ismail IH, Campbell S, Hendzel MJ. Polycomb group proteins in the DNA damage response: a link between radiation resistance and "stemness". Cell Cycle. 2011;10(6):883–94.View ArticlePubMedGoogle Scholar van de Geer SA. Least squares estimation, Encyclopedia of statistics in behavioral science. 2005. p. 1041–5.Google Scholar Arino O, Kimmel M. Asymptotic analysis of a cell cycle model based on unequal division. SIAM J Appl Math. 1987;47(1):128–45.View ArticleGoogle Scholar Tyson JJ, Novak B. Control of cell growth, division and death: information processing in living cells. Interface Focus. 2014. doi:10.1098/rsfs.2013.0070.PubMed CentralPubMedGoogle Scholar Di Talia S, Skotheim JM, Bean JM, Siggia ED, Cross FR. The effects of molecular noise and size control on variability in the budding yeast cell cycle. Nature. 2007. doi:10.1038/nature06072.PubMedGoogle Scholar Kimmel M, Arino O. Cell cycle kinetics with supramitotic control, two cell types, and unequal division: a model of transformed embryonic cells. Math Biosci. 1991;105(1):47–79.View ArticlePubMedGoogle Scholar Kimmel M. Quasistationarity in a branching model of division-within-division, Classical and modern branching processes. New York: Springer; 1997. p. 157–64.Google Scholar Arino O, Kimmel M. Comparison of approaches to modeling of cell population dynamics. SIAM J Appl Math. 1993;53(5):1480–504.View ArticleGoogle Scholar Mollinori C, Kleman JP, Saoudi Y, Jablonski SA, Perard J, Yen TJ, et al. Ablation of PRC1 by small interfering RNA demonstrates that cytokinetic abscission requires a central spindle bundle in mammalian cells, whereas completion of furrowing does Not. Mol Biol Cell. 2005;16:1043–55.View ArticleGoogle Scholar Sandler O, Mizrahi SP, Weiss N, Agam O, Simon I, Balaban NQ. Lineage correlations of single cell division time as a probe of cell-cycle dynamics. Nature. 2015;519:468–47.View ArticlePubMedGoogle Scholar Selimkhanov J, Taylor B, Yao J, Pilko A, Albeck J, Hoffmann A, et al. Accurate information transmission through dynamic biochemical signaling networks. Science. 2014;346(6215):1370–3.View ArticlePubMedGoogle Scholar Feillet C, Krusche P, Tamanini F, Janssens RC, Downey MJ, Martin P, et al. Phase locking and multiple oscillating attractors for the coupled mammalian clock and cell cycle. Proc Natl Acad Sci U S A. 2014;111(27):9828–33.PubMed CentralView ArticlePubMedGoogle Scholar
Centripetal issue when considering gravity Forgive me if my question seems silly, but I am quite baffled. Suppose you have a satellite orbiting a horizontal swing planted into the ground and we want to find the velocity with which the satellite must be moving in order to succeed in taking a fully circular path around the swing (no failure followed by parabolic fall). Indeed if we solve for $F = m(v^2)/R$, where $F$ can be equated to $g$, we can also solve for the velocity; all nice and good, and the nature of this method makes some intuitive sense, but there is a problem (at least for me): the force that the satellite experiences is an inward force towards the center, and therefore the force described in the aforementioned equation is the centripetal force. However, this conflicts with the method of solving the question, since at the top of the path, the centripetal force and the gravitational force will both be pointing downwards. Therefore shouldn't the satellite actually be "doubly" compelled to accelerate to the ground? But clearly when we solve for $v$ with $g$, we assume that they cancel each other out - hence they point in opposite directions. The idea that they cancel also "explains" the intuition that at the top of the circular path, the satellite will be feeling the most "weightless" it will feel throughout its revolution. Additionally, from a different reference frame / non-centrifugal one, it makes sense that at the top, the most force will be on the satellite to turn its path downward, as it is after that instant that the satellite sees its downward acceleration. I am, essentially, confused about how to handle centripetal/centrifugal "fictitious" forces in this worked example. gravity forces centrifugal-force centripetal-force Just_a_fool I believe your confusion comes from a misunderstanding of the designation of a force as "centripetal". Any calculation of centripetal force is telling you how much force is needed to make a circular motion take place. This doesn't create the force. There is no guarantee that a force of the calculated size and direction actually exists! You need to go out and look around to see if the force is available from any combination of existing forces. Suppose you are driving a $1000$ kg car along a flat road, entering a $200$ m radius circle at $20$ m/s. The centripetal force equation says that the centripetal force needed is: $$F_C=\frac{mv^2}{r}=\frac{1000\times20^2}{200}=2000\text{ Newtons}$$ Normally this centripetal force would come from friction between your slightly turned front wheels and the road. Fine; but your tires are bald, the road is smooth, and someone has dumped litres of motor oil on the ground. The needed centripetal force is still $2000\text{ Newtons}$, but you don't have it, and you won't follow the curve. You'll go straight ahead and wind up in the outside ditch. So you mount some rockets on the car, thrusting sideways with $1200\text{ Newtons}$ in towards the centre of the circle. This is the only central force present. Too bad; this force is enough to drive your car at that speed around a circle with a radius given by: $$r=\frac{m v^2}{F}=\frac{1000\times 20^2}{1200}=333.33\text{ m}$$ You'll still go off the road. So you decide to keep the rockets and slow down to $15.5$ m/s. At that speed, the centripetal force needed is: $$F_C=\frac{mv^2}{r}=\frac{1000\times15.5^2}{200}=1201.25\text{ Newtons}$$ So you'll drift out a little, but make it around the curve. DJohnM
Pacific Journal of Mathematics for Industry Multiplicative modelling of four-phase microbial growth María Jesús Munoz-Lopez1,2, Maureen P. Edwards3, Ulrike Schumann4,5 & Robert S. Anderssen6 Pacific Journal of Mathematics for Industry volume 7, Article number: 7 (2015) Cite this article Microbial growth curves, recording the four-phases (lag, growth, stationary, decay) of the dynamics of the surviving microbes, are regularly used to support decision-making in a wide variety of health related activities including food safety and pharmaceutical manufacture. Often, the decision-making reduces to a simple comparison of some particular feature of the four-phases, such as the time at which the number of surviving microbes reaches a maximum. Consequently, in order to obtain accurate estimates of such features, the first step is the determination, from experimental measurements, of a quantitative characterization (model) of the four-phases of the growth-decay dynamics involved, which is then used to determine the values of the features. The multiplicative model proposed by Peleg and colleagues is ideal for such purposes as it only involves four parameters which can be interpreted biologically. For the determination of the four parameters in this multiplicative model from observational data, an iterative two-stage linear least squares algorithm is proposed in this paper. Its robustness, which is essential to support successful comparative assessment, is assessed using synthetic data and validated using experimental data. In addition, for the multiplicative model, an analytic formula is derived for estimating the average lifetimes of the surviving microbes. For microbial growth considerations in areas as diverse as food contamination and pharmaceutical manufacture, the key data are the four-phases (lag, growth, stationary, decay) of the growth dynamics of the surviving microbes [7, 8]. For the utilization of such data for comparative assessment, monitoring and predictive purposes, an appropriate model is required which accurately tracks the four phases [8]. Depending on the situation under consideration, such a model can be utilized in various ways. In food contamination situations, it can be used to compare different inactivation (heating) strategies in food processing or to comparatively assess the survival characteristics of different classes of microbes. In pharmaceutical situations, it can be used to predict the optimal time to harvest the surviving microbes, since it is only the surviving microbes that can be used to make the pharmaceutical. In the study of soil microbes, comparative assessment has been used to compare the chemical and physical factors which influence the relative levels of microbial carbon and nitrogen biomasses [10]. For modelling and tracking the changing features of the four-phases, Peleg and colleagues [8, 9] have proposed and analysed a multiplicative model consisting of the product of two Kohlrausch (stretched exponential) functions [2, 7] with positive and negative exponential growth $$ N(t)=N_{0}\exp\left[\left(\frac{t}{t_{cg}}\right)^{m_{1}}\right]\exp\left[-\left(\frac{t}{t_{cd}}\right)^{m_{2}}\right], $$ where the parameters t cg and t cd represent the characteristic times for the growth and the decay, respectively, (had they been unimpeded) and the exponents m 1 and m 2 characterize the nature of the exponential growth and decay. 
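To make the four-phase shape produced by this product of stretched exponentials concrete, the short sketch below (in Python; the parameter values are purely illustrative assumptions, not values fitted to any data set) tabulates N(t) for a case in which growth dominates at early times and decay takes over later.

```python
import math

def multiplicative_model(t, n0, t_cg, m1, t_cd, m2):
    """Eq. (1): N(t) = N0 * exp((t/t_cg)**m1) * exp(-(t/t_cd)**m2)."""
    return n0 * math.exp((t / t_cg) ** m1) * math.exp(-((t / t_cd) ** m2))

# Illustrative parameters: t_cg < t_cd so growth dominates early, m2 > m1 so decay wins later.
N0, T_CG, M1, T_CD, M2 = 100.0, 2.0, 0.8, 6.0, 2.5

for t in range(0, 21, 2):
    n = multiplicative_model(float(t), N0, T_CG, M1, T_CD, M2)
    print(f"t = {t:2d}   N(t) = {n:10.1f}")
```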
Such a model, after the initial growth from a starting population of N 0, allows for a subsequent decrease in the size of the population, as occurs for the survivors in a closed environment [7, 8]. On setting m 1=β, m 2=b, \(\alpha =\left (1/t_{cg}\right)^{m_{1}}\) and \(a=\left (1/t_{cd}\right)^{m_{2}}\), Eq. (1) takes the more compact form $$ N(t)=N_{0}\exp\left(\alpha t^{\beta}\right)\exp\left(-at^{b}\right), $$ which models an initial growth (by having α>a) which is eventually dominated by the decay (by having b>β). It is easier to describe the algorithm using this equation. Once the parameters a, b, α and β have been determined, one can then use the above relationships to determine m 1, m 2, t cg and t cd . These relationships are discussed from a biological interpretative perspective in subsections 2.3 and 2.4. As explained in Edwards et al. [7], the importance of this model is that it is the solution of a non-autonomous ordinary differential equation which is able to track the four-phases. It therefore circumvents the shortcomings associated with models which are the solutions of autonomous ordinary differential equation, such as the Verhulst, since their solutions can only model the first, second and third phases, but not the fourth. To determine the parameters in fitting the multiplicative model to observational survival data, Peleg and Corradini [8] suggest the use of mathematical software such as Mathematica. The challenge here is the need to find starting values for the parameters which are representative of the situation under consideration and to ensure that the solver used is stable with respect to measurement noise and limited data. Here, it is shown how the special structure of the multiplicative model can be exploited to derive an iterative two-step procedure for the determination of the parameters. An assessment of its robustness, using synthetic data, is given. Validation is performed using microbial (fungal) survival measurements. The paper has been organized in the following manner. The multiplicative model is discussed in Section 2 and an analytic average lifetime formula for it is derived. The algorithm is proposed in Section 3 and tested on synthetic data in Section 4. The application of the algorithm to real microbial survival data is the subject of Section 5 along with conclusions. The multiplicative model proposed by Peleg et al. (2009) [9] can be derived in various ways. 2.1 From first principles For an initial population N 0>0, unrestrained growth can be modelled as a positive exponent stretched exponential (Kohlrausch) function $$ N_{g}(t)=N_{0}\exp\left[\left(\frac{t}{t_{cg}}\right)^{m_{1}}\right], $$ where t cg denotes the characteristic growth time and m 1 characterizes the rate of growth. The decay can be modelled in a similar manner as a negative exponent stretched exponential (Kohlrausch) function $$ f_{d}(t)=\exp\left[-\left(\frac{t}{t_{cd}}\right)^{m_{2}}\right], \quad 0\leq f_{d}(t)\leq 1, m_{2}>m_{1}, $$ where t cd denotes the characteristic decay time and m 2 characterizes the rate of decay. If it is assumed that the decay modifies the growth multiplicatively as a function of the time, then Eqs. (3) and (4) combine to give $$N(t)=N_{g}(t)f_{d}(t) $$ or, equivalently, $$N(t)=N_{0}\exp\left[\left(\frac{t}{t_{cg}}\right)^{m_{1}}\right]\exp\left[-\left(\frac{t}{t_{cd}}\right)^{m_{2}}\right]. $$ Justification for this being a realistic model of a four-phase growth-decay process is given in Edwards et al. 
[7], where it is shown that such a structure corresponds to the solution of a non-autonomous ordinary differential equation model of a quite general growth-decay process. 2.2 Solution of the non-autonomous von Bertalanffy equation As noted by Edwards et al. [7], a key property of the multiplicative model (2) is that it is a solution of the non-autonomous von Bertalanffy equation $${} \frac{dN}{dt}=\bar{\alpha}(t) N^{\bar{\beta}}-\bar{a}(t)N^{\bar{b}}+\psi(t), N(0)=N_{0}, \bar{\beta}>0, \bar{b}>0, $$ when \(\bar {\alpha }(t)=\alpha \beta t^{\beta -1}\), \(\bar {\beta }=1\), \(\bar {a}(t)=abt^{b-1}\), \(\bar {b}=1\) and ψ(t)=0. On substituting these values into (5) and setting ψ(t)=0, the last equation becomes $$ \frac{dN}{dt}=\theta(t)N, \quad \theta(t)=\left[\alpha\beta t^{\beta-1}-abt^{b-1}\right], $$ which becomes, when θ(t) is a constant because β=b=1, the standard exponential growth-decay equation. For the von Bertalanffy Eq. (5), Edwards and Anderssen [6] have performed a Lie point symmetry analysis to identify the regularity that \(\bar {\alpha }\), \(\bar {\beta }\), \(\bar {a}\), \(\bar {b}\) and ψ(t) must satisfy in order for (5) to have interesting classes of analytic solutions (often referred to technically as non-trivial symmetries). Such symmetries can then be utilized to explore for new closed form solutions. 2.3 Biological interpretation of the parameters α, β, a and b The relevance of the above two derivations for Eqs. (1) and (2) is that they shed light on how to interpret the parameters α, β, a and b biologically and study their interactive interdependence. The starting point is Eq. (2) rewritten in its equivalent form (6). For the standard decay Eq. (6), when θ(t) is a constant θ 0 and the population corresponds to a discrete ensemble of members, as holds for microbial growth-decay, the characteristic time of the exponential decay 1/θ 0 corresponds algebraically to the "mean lifetime", ℓ 0, of the members in the ensemble. (The algebraic details are contained in the Appendix.) In addition, if all the individual lifetimes are measured with respect to the same initial reference state, then 1/θ 0 corresponds to the arithmetic mean of these individual times. As highlighted in the Appendix, the mean lifetime concept can be extended to any four-phase microbial growth-decay situation which decays to zero. This generalized mean lifetime will be denoted by ℓ θ . Its importance relates to the fact that it measures a key biologically relevant feature, the average life time of the microbes in a four-phase growth-decay situation. From a food safety perspective, ℓ θ can be used to identify strategies that allow inactivation to be performed effectively, whereas, from a pharmaceutical perspective, an understanding of the value of ℓ θ is required to guarantee the time optimal harvesting of live microbes. The corresponding formula for the θ(t) of Eq. (6) thereby takes the form $$ \begin{aligned} \ell_{\theta}=\ell_{\alpha,\beta,a,b}=&\int_{0}^{\infty} \exp\left(\alpha \tau^{\beta}-a\tau^{b}\right)\tau d\tau{/}\\ &\int_{0}^{\infty} \exp\left(\alpha \tau^{\beta}-a\tau^{b}\right)d\tau. \end{aligned} $$ In particular, with respect to given growth-decay data, the algorithm is used to determine the values of the parameters α, β, a and b which are then substituted into Eq. (7) which can be evaluated using Matlab. As is clear from the form of the right hand side of Eq. (7), the value of ℓ α,β,a,b can be used to compare different scenarios for the parameters α, β, a and b. 
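One way to carry out such comparisons is to evaluate Eq. (7) by direct numerical quadrature. A minimal sketch (in Python with SciPy, as an alternative to the Matlab route mentioned above; the parameter values and the point at which the integrals are truncated are illustrative assumptions) is:

```python
import numpy as np
from scipy.integrate import quad

def mean_lifetime(alpha, beta, a, b, t_max=50.0):
    """Eq. (7): ratio of the first to the zeroth moment of exp(alpha*t**beta - a*t**b).

    The upper limit t_max truncates the integrals once the integrand is negligible.
    """
    weight = lambda t: np.exp(alpha * t**beta - a * t**b)
    numerator, _ = quad(lambda t: t * weight(t), 0.0, t_max)
    denominator, _ = quad(weight, 0.0, t_max)
    return numerator / denominator

# Illustrative parameters with alpha > a (initial growth) and b > beta (eventual decay).
print(mean_lifetime(alpha=1.2, beta=0.8, a=0.05, b=2.0))
```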
For example, since together α and β identify how a particular microbial population grows, the values of α and β could be fixed and the values of a and b varied to find the minimum value of the average lifetimes ℓ α,β,a,b as a characterization for an optimal strategy for performing inactivation. Comparative values for ℓ α,β,a,b for various growth-decay dynamics are discussed in subsection 5. 2.4 Biological interpretation of the parameters t cg , t cd , m 1 and m 2 If m 1=m 2=1, the parameters t cg and t cd correspond, respectively, to the characteristic times of the growth and decay. In particular, they characterize how quickly the growth and decay of the microbes within a population occur, with the rate of growth (decay) being inversely proportional to the value of t cg (t cd ). Consequently, the values of t cg and t cd give an immediate indicative illustration of the relative strengths of the growth and decay dynamics. However, the interpretation of the contributions of t cg and t cd to the growth-decay dynamics must be modified by the values of m 1 and m 2. Since m 1=β and m 2=b, it follows, on equating coefficients in Eqs. (1) and (2), that $$ t_{cg}=\alpha^{-1/\beta}=\frac{1}{\alpha^{1/\beta}} \qquad \text{and} \qquad t_{cd}=a^{-1/b}=\frac{1}{a^{1/b}}. $$ Consequently, multiple choice of α and β (a and b) will generate the same value for t cg (t cd ). Such ambiguities are resolved by determining α and β (a and b) from the experimental data of the growth-decay dynamics under consideration. The linear least squares procedures, as outlined in section 3.1, achieve this by first estimating the values of α and β, and then the values of a and b, separately in an iterative manner. 2.5 Properties of the multiplicative model Sufficient conditions, in terms of the parameters in the more compact form (2) for the multiplicative model, which guarantee a four-phase structure, are given by α>a (which guarantees initial growth) and b>β (which guarantees subsequent decay). Alternatively, in terms of the general form of Eq. (6) with an arbitrary θ, four-phase dynamics is guaranteed if θ(t) is initially positive, which guarantees initial growth, and \(\int _{0}^{\infty } \theta (\tau)d\tau =-\infty \), which guarantees that subsequent decay occurs and goes to zero. Taking logarithms of the more compact form (2) yields the additive relationship $$\ln N(t)=\ln N_{0}+\alpha t^{\beta}-at^{b}, $$ $$ \ln N(t)-\ln N_{0}=\alpha t^{\beta}-at^{b}, $$ which will play a key role in the formulation of the algorithm. Consequently, the logarithmic growth-decay dynamics, at a given time t ∗, corresponding to the number of surviving microbes at that time, thereby becomes $$ \left. \frac{d\ln N(t)}{dt}\right]_{t^{*}}=\alpha\beta t_{*}^{\beta-1}-abt_{*}^{b-1}, $$ which yields a connection back to the multiplicative model being the solution of a particular form of the von Bertalanffy equation. The essence of the current situation is the fitting of the multiplicative model (2) to given experimental data, which reduces to the determination of estimates for the four parameters α, β, a, b. However, the multiplicative model is non-linear and the amount of experimental data available is usually quite limited. The standard procedure proposed by various authors is to use some non-linear regression software package such as is available in Matlab. The limitation here is the need to find starting values for the parameters which are representative of the situation under consideration. 
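Before turning to the algorithm, it is worth noting that the equivalence of the two parameterizations is easy to verify numerically. The short Python sketch below (not part of the original paper) applies Eq. (8) to convert (α, β, a, b) into (t cg , m 1, t cd , m 2) and checks that Eqs. (1) and (2) then give identical values of N(t); the parameter values are again the synthetic ones used in Section 4.

import numpy as np

def N_compact(t, N0, alpha, beta, a, b):
    # Eq. (2): N(t) = N0 * exp(alpha*t**beta) * exp(-a*t**b)
    return N0 * np.exp(alpha * t**beta) * np.exp(-a * t**b)

def N_stretched(t, N0, t_cg, m1, t_cd, m2):
    # Eq. (1): product of positive- and negative-exponent Kohlrausch functions
    return N0 * np.exp((t / t_cg)**m1) * np.exp(-(t / t_cd)**m2)

alpha, beta, a, b, N0 = 6.0, 1.5, 4.0, 2.0, 100.0
t_cg, m1 = alpha**(-1.0 / beta), beta        # Eq. (8)
t_cd, m2 = a**(-1.0 / b), b                  # Eq. (8)

t = np.linspace(0.01, 3.0, 50)
print(np.allclose(N_compact(t, N0, alpha, beta, a, b),
                  N_stretched(t, N0, t_cg, m1, t_cd, m2)))   # True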
In comparative assessment situations, it is important that the estimates of the values of the parameters α, β, a, b correctly characterize the situations being compared. For example, if the value of the parameter b was used to assess the effectiveness of different inactivation strategies, then the estimates of b, utilized for the comparative assessment, must correctly represent the actual decay occurring so that no incorrect action or advice was implemented. As explained below, because of the way in which the estimation is performed, the determination of the parameters α, β, a, b is essentially unique in that the estimation is performed, iteratively, as two separate steps involving first the growth phase, to determine α and β, and then the decay phase, to determine a and b. In a sense, compared with non-linear least squares methods, the proposed algorithm is an example of "let the data decide". The rationale is that if one just uses a non-linear solver to do the parameter identification, then no specific structure is exploited within the data which relates to subsets of the parameters. In the algorithm proposed here, this is possible as, in the multiplicative model, the model is separable into a growth component, involving only α and β, and a decay component, involving only a and b. 3.1 Estimating the parameters In the past, different methods have been proposed and used to model microbial growth and decay dynamics. For instance, in order to assess the nature of the initial lag-phase of growth-decay dynamics, Baranyi et al. [3] proposed the use of detection times. However, this requires that the detection times be limited to the initial exponential growth in order to avoid underestimating the rate of growth of the lag-phase. A different suggestion, proposed by Baranyi and colleagues [4, 5], was to solve a time separable non-autonomous model of the form $$ \frac{dN(t)}{dt}=\phi(t)\mu(N)N, \qquad N(0)=N_{0}, $$ where the separable time function ϕ(t) performs the transformation of the autonomous equation $$ \frac{dN(t)}{dt}=\mu(N)N, \qquad N(0)=N_{0}, $$ into the non-autonomous Eq. (11). Various choices for ϕ(t) have been proposed and analysed by Baranyi and colleagues. However, they have not chosen a form for ϕ(t) that corresponds to that for the non-autonomous equation which generates the multiplicative model (2). In particular, their emphasis is on modelling the growth of the total population. Peleg and Corradini [8], for determination of the parameters in the multiplicative model (1), suggest non-linear least squares. The difficulty is that representative starting values for the parameters must be chosen for the implementation of such methods, which the proposed algorithm avoids. The algorithm proposed and implemented here, which explicitly exploits the properties of logarithms, is based on the iterative use of two linear least squares approximations applied to different phases of a growth curve. Its advantage is that it can be iterated to obtain successively better approximations for the parameters α, β, a, b. This type of algorithm does not appear to have been published in the microbial growth modelling literature, though it has been used to determine the parameters of the stretched exponential (Kohlrausch) function in rheological and biological applications [1]. Consider the model in the form $$N(t)=N_{0}\exp\left(\alpha t^{\beta}\right)\exp\left(-at^{b}\right). $$ On taking the logarithm and reorganizing, the last equation becomes $$ \ln N(t)-\ln N_{0}=\alpha t^{\beta}-at^{b}. 
$$ For the initial growth data, the decay term exp(−a t b) can be neglected, since it is the behaviour of exp(α t β) that dominates at this stage. Consequently, the first step in the implementation of the algorithm is the determination of initial estimates α 1 and β 1 for α and β using the model $$ \ln\{\ln N(t) - \ln N_{0}\}=\ln \alpha +\beta\ln t. $$ With respect to a representative sample \(d_{i}^{*}=N(t_{i}),~i=1,~2,~\cdots,~I,~I>>2,\) of the first two of the four-phases, a linear least squares estimate can be derived for lnα 1, and hence α 1, and β 1 using the overdetermined system of equations $$ \ln\left[\ln d_{i}^{*}-\ln N_{0}\right]=\ln \alpha+\beta\ln t_{i}, \quad i=1,~2,~\cdots,~I. $$ The second step in the implementation of the algorithm is the determination of initial estimates a 1 and b 1 for a and b using the model $$ \ln\left\{-\ln N(t) +\ln N_{0}+\alpha_{1} t^{\beta_{1}}\right\}=\ln a +b\ln t. $$ With respect to a representative sample d j #=N(t j ), j=1, 2, ⋯, J, J>>2, of the last two of the four-phases, a linear least squares estimate can be derived for lna 1, and hence a 1, and b 1 using the overdetermined system of equations $$ \begin{aligned} &\ln\left[-\ln d_{j}^\#+\ln N_{0}+\alpha_{1}t_{j}^{\beta_{1}}\right]\\ &\quad=\left[\ln a_{1}+b_{1}\ln t_{j}\right],~j=1,~2,~\cdots,~J. \end{aligned} $$ The third step in the implementation of the algorithm is the determination of estimates α 2 and β 2 for α and β using the model $$ \ln\left\{\ln N(t) - \ln N_{0}+a_{1}t^{b_{1}}\right\}=\ln \alpha +\beta\ln t. $$ With respect to a representative sample \(d_{\ell }^{*}=N(t_{\ell }),~\ell =1,~2,~\cdots,~L,~L>>2\), of the lag and growth phases, a linear least squares estimate can be derived for lnα 2, and hence α 2, and β 2 using the overdetermined system of equations $$ \ln\left[\ln d_{\ell}^{*}-\ln N_{0}+a_{1}t_{\ell}^{b_{1}}\right]=\ln \alpha_{2}+\beta_{2}\ln t_{\ell}. $$ The fourth, fifth, ⋯ steps in the implementation now iterate, respectively, between the second and third steps. 3.2 Algorithm implementation Because the implementation of the algorithm involves the evaluation of logarithms, the choice of the scale for the times becomes an important issue. In the situations studied here, the basic time scale is days. However, for measurements made at fractions of a day, the logarithms will be negative. Consequently, to avoid this potential difficulty, it is best to work with a time scale (hours, minutes or seconds) such that all the times, at which measurements were made, are greater than one. The validation of the algorithm using synthetic data The numerical performance of the algorithm was initially assessed using the following uniform grid discrete synthetic data $$ \begin{aligned} \{N(t_{i})\}&=\{N(t_{i})=N_{0}\exp\left(\alpha t_{i}^{\beta}\right)\\ &\quad\times\exp\left(-a{t_{i}^{b}}\right)|~i=1,~2,~\cdots,~I\}, \end{aligned} $$ where the values of N 0 and the parameters α, β, a, b are specified. The discrete values {N(t i )} were used to simulate exact and non-exact measurements scenarios of the four-phase growth-decay dynamics with the goal of testing the performance of the algorithm with respect to the accuracy of the recovery of the parameters, and the quality of the reconstructions of the four-phase growth-decay dynamics curves compared with the actual N(t). 
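Because each of the fitting steps (cf. Eqs. (15), (17) and (19)) is an ordinary linear least squares fit in log-log coordinates, the whole procedure can be sketched in a few lines. The Python fragment below illustrates the idea on the exact synthetic data of Eq. (20); the way the samples are split into growth-dominated and decay-dominated subsets, the positivity guard, and the fixed number of iterations are choices made here for demonstration rather than prescriptions from the paper.

import numpy as np

def model(t, N0, alpha, beta, a, b):
    # Eq. (2) / Eq. (20): N(t) = N0 * exp(alpha*t**beta - a*t**b)
    return N0 * np.exp(alpha * t**beta - a * t**b)

def log_line(x, y):
    # Ordinary linear least squares for y = intercept + slope * x.
    slope, intercept = np.polyfit(x, y, 1)
    return intercept, slope

def two_step_fit(t, N, N0, n_iter=10):
    y = np.log(N) - np.log(N0)            # log form of Eq. (2): y(t) = alpha*t**beta - a*t**b
    peak = int(np.argmax(y))
    grow = slice(0, max(3, peak // 2))    # early, growth-dominated samples (a choice, not prescribed)
    decay = slice(peak, len(t))           # post-peak, decay-dominated samples (a choice, not prescribed)
    # Step 1: growth fit neglecting the decay term (cf. Eq. (15)).
    c, beta = log_line(np.log(t[grow]), np.log(y[grow]))
    alpha, a, b = np.exp(c), 0.0, 1.0
    for _ in range(n_iter):
        # Step 2: decay fit using the current growth estimate (cf. Eq. (17)).
        d = alpha * t[decay]**beta - y[decay]
        ok = d > 0                        # guard against taking the log of non-positive values
        c, b = log_line(np.log(t[decay][ok]), np.log(d[ok]))
        a = np.exp(c)
        # Step 3: refined growth fit using the current decay estimate (cf. Eq. (19)).
        c, beta = log_line(np.log(t[grow]), np.log(y[grow] + a * t[grow]**b))
        alpha = np.exp(c)
    return alpha, beta, a, b

t = np.linspace(0.05, 3.0, 100)
N = model(t, 100.0, 6.0, 1.5, 4.0, 2.0)   # exact synthetic data of Eq. (20)
print(two_step_fit(t, N, N0=100.0))       # should be close to (6, 1.5, 4, 2)

Perturbing the same data with the Gaussian errors Kε i described in Section 4.1.2 and repeating the fit over many realizations provides a simple way to repeat the kind of robustness study summarized below.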
Though the comparison of the reconstructions of the growth-decay dynamics is indicatively important, the key issue is the robustness, accuracy and reliability of the recovery of the parameters, as it is those that will be used for subsequent decision-making and comparative assessments. 4.1 The synthetic data analysis The exact synthetic data used to test the algorithm was generated using the discrete multiplicative model data {N(t i )} of Eq. (20) with the parameter values α=6, β=1.5, a=4, b=2 and N 0=100. 4.1.1 Exact synthetic data inversion For the synthetic data situation without noise, only 7 data points are needed to perform the parameter estimation using the algorithm, which returns the correct values α=6, β=1.5, a=4, b=2. The result for the exact data situation is illustrated in Fig. 1, where the exact and estimated curves are recovered exactly. Reconstruction for exact synthetic data 4.1.2 Simulation studies of non-exact synthetic data inversion It is known that, in carefully performed measurements of microbial growth-decay dynamics, the measurement error does not depend on the size of the population as it evolves. Consequently, it was only necessary to test the robustness of the algorithm with respect to Gaussian error perturbations. For this, the exact discrete values {N(t i )} were perturbed in the following manner to generate the simulated measurement data $${} d_{i}^{(G)}=N(t_{i})+K\epsilon_{i},\ i=1,~2,~\cdots,~100,\ K\sim \text{constant}, $$ with the {ε i } being Gaussian errors with mean zero and variance 1. In order to comprehensively test the performance of the algorithm, the inversion was performed on 500 realizations of the simulated measurement data \(d_{i}^{(G)}\) with the corresponding values of α, β, a, b, thereby generated, summarized as histograms as in Fig. 2. Histograms of the parameter value α, β, a, b for different levels of the added Gaussian errors for 500 realizations As is clear from Fig. 2, the level of uncertainty in the determination of the parameters α, β, a, b increases as the level of the added Gaussian errors increases. As is clear from Fig. 1, the range of the values of N(t) is approximately ∼[ 0, 800]. Consequently, it is only when the value of K, relative to N(t), becomes suitably large that a spread in the values of α, β, a, b becomes graphically significant. In addition, it shows that the values of the exponents β and b are more accurately recovered than the multipliers α and a. This difference in the recovery of β and b, compared with that for α and a, is confirmed in terms of the statistics of the values of the parameters α, β, a, b tabulated in Fig. 3. The statistics of the parameters α, β, a, b for different levels of the added Gaussian errors for 500 realizations. The standard deviations for β and b are considerably less than those for α and a This difference represents a direct illustration of how fundamental β and b are to determining the growth and decay, respectively, in order to accurately recover a four-phase structure. It implies that a good fit to a four-phase structure cannot be achieved by simply varying α and a unless good estimates of β and b have been determined. This interpretation is implicit in the proposed algorithm, as illustrated in Eqs. (19) and (17), which highlight that β and b are the slopes of the straight lines that are fitted to the logarithmic data. This relates to the fact that, in terms of the linearity of the algebra of Eqs. 
(19) and (17), the constants ln(α) and ln(a) do not influence the actual slopes β and b of the straight line fits to the logarithmic data. Furthermore, the algorithm estimates β and b separately using, respectively, a growth component and a decay component of the four-phase structure. Consequently, this illustrates the uniqueness in the determination of the parameters α, β, a, b and, hence, the values of t cg and t cd of Eq. (8). The importance of the number of data points used in the recovery of the parameters is illustrated in Figs. 4 and 5. They show that something like ∼50 data points are required to guarantee reliable results. This highlights the difficulty of the commonly occurring practical situation of only having a small number of measurements (such as 10) of the growth-decay dynamics. The errors in the estimated values of the parameters α, β, a, b as a function of the number of data points (25, 50, 100) for different levels of the added Gaussian errors for 500 realizations The standard deviations of the errors in the estimated values of the parameters α, β, a, b as a function of the number of data points (25, 50, 100) for different levels of the added Gaussian errors for 500 realizations Application of the algorithm to microbial survival data and conclusions 5.1 Recovery of the parameters α, β, a, b In order to illustrate the practicality of the algorithm for real data, it was applied to the measurements from a study of the growth-decay dynamics for the filamentous fungus Fusarium oxysporum. Fusarium oxysporum is a plant pathogenic fungus with a wide host range causing a variety of diseases that contribute to crop losses all over the globe. To obtain microbial growth data in a closed environment, we monitored the growth of the fungus Fusarium oxysporum in minimal media. A primary potato dextrose broth culture was inoculated with conidiospores from a −80 °C frozen stock and grown at 28 °C, shaking at 200 rpm for 2 days. Cells were collected by centrifugation, suspended in water, the optical density at 260 nm was measured and the cell concentration determined by comparison with a standard curve. A fresh secondary minimal medium culture was inoculated with 1.0E6 cells/ml and grown as above. At regular time points, 1000 μl samples were removed from the culture, the cells collected by centrifugation and suspended in water (between 100 μl and 500 μl), adjusting the suspension volume as the culture became denser. Care was taken that cells were well suspended at all times by vigorous vortexing. Cells were then stained with propidium iodide for 5 minutes. Microscopic images were taken using three independent 5 μl subsamples, imaging at least 7 independent regions of each sample. Bright field and fluorescence images were taken and the total number of cells counted using the bright field image. Dead cell counts were obtained from fluorescence images, as propidium iodide permeates the membranes of dead cells, staining these red. The average number of total and dead cells was determined and, as the cell suspension was more concentrated than the culture, the suspension volume was taken into account to determine the proportional number of total and dead cells in the culture. The measurements represent a situation where the data are sparse and cover only part of the decay phase. Nevertheless, they contain sufficient information to allow the algorithm to recover useful estimates of the parameters α, β, a, b, which can be used to evaluate ℓ θ of Eq. (7).
5.2 Evaluation of average lifetimes ℓ α,β,a,b The generalized mean lifetime ℓ α,β,a,b of Eq. (7) was evaluated for the exact synthetic data of Fig. 1 and the fungus data of Fig. 6. The resulting values were 1.223757 and 14.14 days, respectively. For the 500 simulations discussed above in relation to Figs. 1 and 2, the corresponding histograms of the resulting ℓ α,β,a,b values are plotted in Fig. 7. The means of the histograms in Fig. 7 are all the same for the three levels of noise considered, which provides indirect evidence of the stability of ℓ α,β,a,b . Its accuracy and reliability are reflected in the fact that these histogram means correspond to the rounding of the exact value of 1.223757. The growth-decay dynamics for the fungus Fusarium oxysporum f.sp. conglutinans Histograms for the values of ℓ α,β,a,b for different levels of the added Gaussian errors for 500 realizations using the synthetic data of Eq. (21) For the determination of the four parameters α,β,a,b in the multiplicative model (2), a simple, easily implementable, iterative two-stage linear least squares algorithm has been proposed. Its robustness has been confirmed by testing it on synthetic data. Its practicality has been demonstrated by applying it to measured growth-decay data for the fungus Fusarium oxysporum. In addition, for the multiplicative model, an analytic formula has been derived for estimating the average lifetimes ℓ α,β,a,b of the surviving microbes, which has been applied to the synthetic and measured data. Overall, it appears that the numerical performance of the algorithm and the average lifetime estimate will be useful in the support of decision-making related to health issues such as food safety and pharmaceutical manufacture. Mean lifetime for microbial growth-decay for the multiplicative model The standard decay model $$ \hspace{.6cm}\frac{dN}{dt}=-\lambda N, \quad N=N(t), \quad N(0)=N_{0}, \quad \lambda>0 $$ ⇓ Solve: $$ \hspace{1.7cm}N(t)=N_{0}\exp(-\lambda t), \quad N(\infty)=0 $$ ⇓ Transform N(t) to an exponential probability distribution: $$ \hspace{1.7cm}{\mathcal P}(N(t))=\frac{\lambda}{N_{0}}N(t)=\lambda\exp(-\lambda t) $$ ⇓ The mean of the exponential distribution is 1/λ: $$\hspace{1cm} \frac{1}{\lambda}=\tau=\text{~relaxation~time~}=\text{~mean~lifetime} $$ The generalized decay model $${} \frac{dN_{\theta}}{dt}=\theta(t) N_{\theta}, \quad N_{\theta}(0)=N_{0}, \quad \theta(t)=\alpha\beta t^{\beta-1}-ab t^{b-1} $$ $$ \hspace{1.2cm}N_{\theta}(t)=N_{0}\exp\left({\int_{0}^{t}} \theta(\tau)d\tau\right), \quad N_{\theta}(\infty)=0 $$ ⇓ Regularity: θ(0)>0 and \(\int _{0}^{\infty } \theta (\tau)d\tau =-\infty \) ⇓ Transform N θ (t) to a probability distribution: $$ \hspace{.5cm}{\mathcal P}(N_{\theta}(t))=\frac{N_{\theta}(t)}{{\mathcal A}(N_{\theta}(t))}, \quad {\mathcal A}(N_{\theta}(t))=\int_{0}^{\infty} N_{\theta}(\tau)d\tau $$ ⇓ Compute the mean of \({\mathcal P}(N_{\theta }(t))\): $$\hspace{1cm}{\mathcal M}({\mathcal P}(N_{\theta}(t)))=\int_{0}^{\infty} {\mathcal P}(N_{\theta}(\tau))\tau d\tau $$ Anderssen, R.S., Helliwell, C.A.: Information recovery in molecular biology: causal modelling of regulated promoter switching experiments. J. Math. Biol. 67, 105–122 (2013). Anderssen, R.S., Husain, S., Loy, R.J.: The Kohlrausch function: properties and applications. ANZIAM J. (E) 45, C800–C816 (2004). Baranyi, J., Pin, C.: Estimating bacterial growth parameters by means of detection times. Appl. Environ. Microbiol. 65(2), 732–736 (1999).
Baranyi, J., Roberts, T.A., McClure, P.: A non-autonomous differential equation to model bacterial growth. Food Microbiol. 10(1), 43–59 (1993). Baranyi, J., Roberts, T.A., McClure, P.: Some properties of a nonautonomous deterministic growth model describing the adjustment of the bacterial population to a new environment. IMA J. Math. Appl. Med. Biol. 10(4), 293–299 (1993). Edwards, M.P., Anderssen, R.S.: Symmetries and solutions of the non-autonomous von Bertalanffy equation. Commun. Nonlinear Sci. Numer. Simulat. 22, 1062–1067 (2015). Edwards, M.P., Schumann, U., Anderssen, R.S.: Modelling microbial growth in a closed environment. J. Math-for-Industry 5, 33–40 (2013). Peleg, M., Corradini, M.G.: Microbial growth curves: what the models tell us and what they cannot. Crit. Rev. Food Sci. Nutr. 51(10), 917–945 (2011). Peleg, M., Corradini, M.G., Normand, M.D.: Isothermal and non-isothermal kinetic models of chemical processes in foods governed by competing mechanisms. J. Agric. Food Chem. 57(16), 7377–7386 (2009). Wardle, D.A.: A comparative assessment of factors which influence microbial biomass carbon and nitrogen levels in soil. Biol. Rev. Camb. Philos. Soc. 67, 321–358 (1992). The authors thank the reviewer whose comments helped improve the clarity of the paper. The third author wishes to thank Prof. Thomas Preiss (Department of Genome Science, School of Medical Sciences (JCSMR), Australian National University, Garran Road, ACT 2601, Australia) for his support in carrying out this research. Mathematical Sciences Institute, Australian National University, Canberra, 2601, ACT, Australia María Jesús Munoz-Lopez Present address: School of Mathematics, Trinity College Dublin, Dublin, 2, Ireland School of Mathematics and Applied Statistics, University of Wollongong, Wollongong, 2522, NSW, Australia Maureen P. Edwards CSIRO Plant Industry, GPO Box 1600, Canberra, 2601, ACT, Australia Ulrike Schumann Present address: Department of Genome Science, School of Medical Sciences (JCSMR), Australian National University, Garran Road, Canberra, 2601, ACT, Australia CSIRO Digital Productivity, GPO Box 664, Canberra, 2601, ACT, Australia Robert S. Anderssen Correspondence to María Jesús Munoz-Lopez. Munoz-Lopez, M.J., Edwards, M.P., Schumann, U. et al.: Multiplicative modelling of four-phase microbial growth. Pac. J. Math. Ind. 7, 7 (2015). doi:10.1186/s40736-015-0018-0 Keywords: synthetic data; comparative assessment; average lifetime; multiplicative model; overdetermined system
Charged Meson Form Factors at Jefferson Lab Hall C QCD and Hadrons Nathan Heinrich (University of Regina) Quantum Chromodynamics (QCD) is the accepted theory of the strong force between quarks and gluons and in recent years many successful predictions have come out of perturbative QCD (pQCD). However, pQCD is restricted by the running coupling constant $\alpha_s$, so at lower energies a problem arises where the predictions of pQCD no longer apply. While QCD-based models attempt to understand this region, they must be guided by experiment. Thus, many open questions remain: How does QCD transition between the perturbative (weak) and non-perturbative (strong) regimes? What predictions does QCD make for hadronic structure? How do other properties of hadrons, such as mass and spin, arise from QCD? In order to help answer these questions, the form factors of charged mesons, specifically the $\pi^+$ and $K^+$, are ideal candidates as they are relatively simple systems for theory to predict and are accessible experimentally. As the Goldstone bosons of the strong interaction, they are also seen as key to understanding some properties of QCD, such as Dynamic Chiral Symmetry Breaking (DCSB), which is the mechanism believed to generate >98% of the visible mass in the universe. This talk will give an overview of the effort to study the $\pi^+$ and $K^+$ form factors at Jefferson Lab, as well as a quick overview of the facilities at Jefferson Lab and Hall C. Mr Ali Usman (University of Regina) Dr Dave Gaskell (Jefferson Lab) Prof. Garth Huber (University of Regina) Prof. Peter Markovitz (Florida International University) Mr Richard Trotta (Catholic University of America) Dr Stephen Kay (University of Regina) Prof. Tanja Horn (Catholic University of America) Mr Vijay Kumar (University of Regina) Dr Vladimir Berdinkov (Catholic University of America)
Improved pharmacodynamics of epidermal growth factor via microneedles-based self-powered transcutaneous electrical stimulation Yuan Yang1,2,3 na1, Ruizeng Luo1,3 na1, Shengyu Chao1,3, Jiangtao Xue4, Dongjie Jiang1,3, Yun Hao Feng5, Xin Dong Guo5, Dan Luo1,3,6, Jiaping Zhang2, Zhou Li ORCID: orcid.org/0000-0002-9952-72961,3,6,7 & Zhong Lin Wang ORCID: orcid.org/0000-0002-5530-03801,8 Nature Communications volume 13, Article number: 6908 (2022) Epidermal growth factor is an excellent drug for promoting wound healing; however, its conventional administration strategies are associated with pharmacodynamic challenges, such as low transdermal permeability, reduction, and receptor desensitization. Here, we develop a microneedle-based self-powered transcutaneous electrical stimulation system (mn-STESS) by integrating a sliding free-standing triboelectric nanogenerator with a microneedle patch to achieve improved epidermal growth factor pharmacodynamics. We show that the mn-STESS facilitates drug penetration and utilization by using microneedles to pierce the stratum corneum. More importantly, we find that it converts the mechanical energy of finger sliding into electricity and mediates transcutaneous electrical stimulation through microneedles. We demonstrate that the electrical stimulation applied by mn-STESS acts as an "adjuvant" that suppresses the reduction of epidermal growth factor by glutathione and upregulates its receptor expression in keratinocyte cells, successfully compensating for receptor desensitization. Collectively, this work highlights the promise of self-powered electrical adjuvants in improving drug pharmacodynamics, creating combinatorial therapeutic strategies for traditional drugs. Epidermal growth factor (EGF) is a small polypeptide consisting of 53 amino acid residues and three disulfide bonds, with the latter determining biological activity. EGF plays a significant role in regulating cell growth, survival, migration, apoptosis, proliferation, and differentiation1,2.
The biological effects of EGF are exerted by binding to the EGF receptor (EGFR), which activates the Ras/mitogen-activated protein kinase (Ras/MAPK), phosphatidylinositol 3-kinase/AKT (PI3K/AKT) and phospholipase C-γ/protein kinase C (PLCγ/PKC) pathways after autophosphorylation of the receptor tyrosine kinase (RTK)3,4. EGFR is distributed on the surface of fibroblasts, endothelial cells, smooth muscle cells, and epidermal cells5. After binding EGF, EGFR signaling promotes cell chemotaxis and remodeling, triggering the formation of granulation tissue and epidermis. Due to its excellent performance in accelerating epidermal regeneration, EGF is commonly used in the treatment of surgical wounds, burns, and diabetic ulcers6,7,8. EGF administration is associated with several pharmacodynamic challenges. Firstly, regarding topical administration in the form of creams commonly used in clinics, the high molecular weight of EGF (Mw≈6 kDa) limits its penetration of the stratum corneum9,10. Thus, only trace amounts of EGF can pass through the hair follicle to the basal layer and influence surviving keratinocytes11. Although EGF can be delivered transdermally by injection, this poses a risk of bacterial infection for wound patients, and the pain caused by the injection reduces patient compliance, which is undesirable for wound treatment12,13. Secondly, EGF has low stability in vivo; this is because glutathione (GSH) breaks the disulfide bonds that stabilize the EGF structure, resulting in the reduction and inactivation of EGF, which greatly reduces its efficacy14,15. Finally, EGFR has high affinity for EGF, and their specific binding promotes cell proliferation and migration by activating the downstream factor PI3K via the EGF/EGFR pathway16. However, EGF can cause rapid endocytosis of EGFR into endosomes and eventual degradation in lysosomes17. Thus, long-term EGF treatment leads to desensitization and attenuation of EGFR, terminating the signaling pathway18,19. Therefore, improving the pharmacodynamics of EGF in wound healing can be approached from the following aspects: (i) increase the penetration rate in a minimally invasive manner; (ii) maintain chemical stability of EGF and prevent its reduction by GSH; (iii) upregulate EGFR expression to compensate for receptor desensitization. It has been shown that physiological electric fields can upregulate the expression of growth factor receptors in cardiomyocytes and corneal epithelial cells20,21,22. This inspired us to develop a device with transcutaneous electrical stimulation (ES) and transdermal drug delivery capabilities to improve drug permeability while compensating for receptor desensitization. Advanced microneedles are an ideal therapeutic medium. As minimally invasive transdermal devices, microneedles (MNs; length <1 mm) can penetrate the stratum corneum without bleeding or pain23,24,25. MNs can be fabricated from different materials, such as silicon, glass, polymers, or even metals, endowing them with good mechanical properties, solubility, adhesion, and electrical conductivity26,27,28. Both small-molecule and macromolecular drugs can be encapsulated in dissolvable MNs and diffuse directly into the skin as the MNs degrade29,30. It is worth mentioning that conductive MNs could be used as electrodes to reach the low-resistance dermis (~10 kΩ) and bypass the high-resistance stratum corneum (~10 MΩ), thus enabling transcutaneous ES to improve the pharmacodynamics of EGF.
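As a rough illustration of the last point, one can compare the two conduction paths with an idealized series-resistance picture built from the order-of-magnitude values quoted above (~10 MΩ for the stratum corneum, ~10 kΩ for the dermis). The Python sketch below is a deliberately crude, static approximation introduced here for illustration only; it ignores the internal impedance and pulsed output of a triboelectric source and the distributed nature of skin impedance.

# Idealized, static series-resistance comparison of the two stimulation paths.
R_SC = 10e6       # stratum corneum, ~10 MOhm (order of magnitude quoted in the text)
R_DERMIS = 10e3   # dermis, ~10 kOhm

def fraction_across_dermis(bypass_stratum_corneum):
    # Fraction of an applied potential that drops across the dermis when the two
    # electrode sites are, or are not, connected through the stratum corneum.
    r_total = R_DERMIS if bypass_stratum_corneum else R_DERMIS + 2 * R_SC
    return R_DERMIS / r_total

print(fraction_across_dermis(False))   # ~5e-4: nearly all of the voltage is lost in the stratum corneum
print(fraction_across_dermis(True))    # 1.0: microneedle electrodes place the field across the dermis

Even in this toy picture, only a fraction of order 10^-3 of an applied potential reaches the dermis when the stratum corneum remains in the current path, which motivates the use of conductive microneedles as stimulation electrodes.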
In this study, we designed a microneedle-based self-powered transcutaneous electrical stimulation system (mn-STESS) to improve the pharmacodynamics of EGF in terms of wound healing. The integrated mn-STESS consisted of a sliding free-standing triboelectric nanogenerator (sf-TENG) and two-stage gold coated polylactic acid/cross-linked gelatin–cross-linked hyaluronic acid (PLA-Au/cGel-cHA) composite microneedle patches (CMNPs). The device was wireless, passive, and easily attached to the skin. The sf-TENG converted the biomechanical energy generated by finger sliding into biosafe microcurrent without causing skin damage or drug inactivation. CMNP penetrated the stratum corneum and continuously released EGF into the skin for 24 h. Meanwhile, CMNP utilized the current generated by the sf-TENG for transcutaneous ES. As an electrical adjuvant, self-powered ES suppressed the GSH-mediated reduction of EGF by regulating the motility of both molecules, thus maintaining the stability of exogenous EGF. Long-term cell and animal experiments also showed that ES simultaneously upregulated EGFR expression to compensate for receptor desensitization. The constructed mn-STESS could effectively improve the pharmacodynamics of EGF to aid wound healing. Result and discussion Design and integration of the microneedles-based self-powered transcutaneous electrical stimulation system The mn-STESS was designed based on sf-TENG and two-stage CMNPs to improve the pharmacodynamics of EGF in wound healing and could adhere to the skin (Fig. 1a). The sf-TENG was composed of triboelectric, dielectric and electrode layers. Polyimide (PI) film was used as the triboelectric layer and polytetrafluoroethylene (PTFE) thin films covered with Kapton tape served as the dielectric layers (Fig. 1b). Polylactic acid-coated gold microelectrode array patches (PLA-Au MNP) were utilized as the electrodes. The drug-loaded cross-linked gelatin and cross-linked hyaluronic acid microneedles (cGel-cHA MNs) were covered on the PLA-Au MNP to create a two-stage CMNP. After applying CMNP to the skin, the drug loaded in the cGel-cHA MNs and current generated by sf-TENG were simultaneously introduced into the skin31,32,33. A chitosan dressing was placed in the middle of mn-STESS to absorb the wound exudate and cushion the vertical force exerted by finger sliding34. Fig. 1: Overview of the mn-STESS. a The mn-STESS consisted of sf-TENG, CMNP, and dressing. The CMNP composed of cGel-cHA MNs loading drug and PLA-Au MNs as electrode. b Three-dimensional structure of the mn-STESS. c–e Brightfield micrographs of PLA MNP, PLA-Au MNP, and CMNP. f cHA microparticles encapsulated in CMNP. g Typical force-displacement curve of the compression force of MNs; (inset) schematic of the experimental setup. h Young's moduli of MNs. (n = 3 independent samples. Data are presented as mean ± SEM). i Parafilm penetration depths of MNs; (inset) schematic of the experimental setup. (n = 5 independent samples. Data are presented as center line, limits and whiskers, 25%–75%). Source data are provided as a Source Data file. Fabrication and characterization of two stage CMNP The two-stage CMNP was fabricated as follows (Supplementary Fig. 1). Due to the excellent mechanical properties, good biocompatibility and biodegradability of polylactic acid (PLA)35,36, PLA microneedle patches (PLA MNPs) were first fabricated by a thermoforming process (5 × 5 MN array on a 7 × 7 mm patch). 
Since the shape, height, base diameter and array density of the microneedles all affect their puncture performance, the microneedles in this work were designed to be conical with a height of 550 μm, a base width of 300 μm and an inter-needle spacing of 500 μm (Fig. 1c). A PLA-Au MNP was subsequently generated by sputtering a 50 nm-thick layer of gold on the surface of the PLA MNP (Fig. 1d). Finally, drug-loaded cGel-cHA MNs with a length of 550 μm were prepared under vacuum and fitted over the tips of the PLA-Au MNs to fabricate the two-stage PLA-Au/cGel-cHA CMNP. The total height of the two-stage needle body was 750 μm, with an overlapping needle length of ~350 μm (Fig. 1e and Supplementary Fig. 2). After penetrating into the dermis, the PLA-Au MNP could introduce the current generated by the sf-TENG to provide transcutaneous ES, while the cGel-cHA MNs delivered the drug directly into the skin. Because EGF is a water-soluble macromolecule, its absorption through the skin stratum corneum is problematic37. EGF could be delivered into the skin using CMNPs fabricated from biosafe cross-linked gelatin (cGel) and cross-linked hyaluronic acid (cHA) microparticles. In this study, Gel and HA were cross-linked by genipin and 1,4-butanediol diglycidyl ether (BDDE), respectively, to control the EGF release rate and mechanical properties of CMNP. Genipin was nucleophilically attacked by the primary amine group of the Gel molecule, resulting in opening of the dihydropyran ring to form a heterocyclic amine (Supplementary Fig. 3a). Gel molecules formed a network structure with genipin as a cross-linking bridge. As shown in the Fourier transform-infrared (FT-IR) spectra (Supplementary Fig. 3b), the characteristic peaks of Gel at 1245, 1540, and 1650 cm−1 were assigned to the C = O bonds of carbonyl groups, N–H bonds of the amino groups, and N–H bonds of amide III groups, respectively. In the cGel sample, the intensity of the N–H band decreased, and the peak shifted slightly in the low wavenumber direction (from 1245 to 1240 cm−1), indicating that genipin and the amino groups of Gel underwent a cross-linking reaction38,39. For cHA microparticles, there was a new peak at 1445 cm−1 in the FT-IR spectrum, attributed to the ether bonds (C–O–C) formed by the cross-linking of BDDE epoxy groups and HA hydroxyl groups (Supplementary Fig. 4a, b)40. The diameter of the cHA microparticles was about 15 μm (Supplementary Fig. 4c) and the diameter increased to ~20 μm after encapsulation of EGF (Fig. 1f), because the particles swelled by absorbing the EGF solution. In our study, EGF was directly encapsulated in cGel and cHA microparticles, and thus did not participate in the chemical cross-linking process of the hydrogel. HPLC and ELISA results further confirmed that the structure and activity of EGF did not change after encapsulation by cGel and cHA hydrogels (Supplementary Fig. 5a, b). The mechanical properties of MNs were crucial for their transdermal ability. Conical-shaped PLA MNs have suitable geometry (height, base width and array density) and sufficient mechanical strength to pierce the skin41,42,43. The ability of the two-stage MN to penetrate the skin mainly depended on the second-stage drug-loaded MN located at the tip. The force-displacement curves showed that the force applied to the MN gradually increased with displacement. The slope of the PLA-Au/cGel MNs was greater than that of PLA-Au/Gel MNs because the dense network structure formed by cross-linked molecular chains improved the mechanical properties of Gel.
The slope of PLA-Au/cGel-cHA MNs was shallower than that of PLA-Au/cGel MNs, possibly due to the incorporation of cHA microparticles in the MNs matrix (Fig. 1g and Supplementary Fig. 6a, b). The Young's modulus of the PLA-Au/cGel-cHA MNs was ~100 MPa, which was markedly higher than that of skin (~0.13 MPa)44, this indicated that CMNP (with conical-shaped PLA-Au/cGel-cHA MNs) with sufficient mechanical properties had the potential to penetrate the skin tissue (Fig. 1h). The penetration depth of MNs was evaluated by using them to pierce Parafilm that mimicked the skin. According to the holes formed in each membrane, the MN penetration depth was calculated from the number of layers pierced. The thickness of the human epidermis and dermis is ~200 μm and 2 mm, respectively. Compared with the one-stage Gel MNs, the two-stage MNs had a greater penetration depth (730 μm) (Fig. 1i), confirming that CMNP could well penetrate and be completely deposited in the dermal tissue, facilitating drug delivery into the skin. PLA-Au/cGel-cHA MNs were further applied to both the porcine skin and living mouse skin to further confirm their ability to penetrate the skin. As expected, application of microneedles created indelible puncture sites on the skin surface and in the penetration cavities. The histological section images of skin tissues with H&E staining showed that CMNP could easily pierce the epidermis and completely penetrate the drug-loaded needle into the dermis (Supplementary Fig. 7a–f). Working principle and output properties of sf-TENG The corona discharge method was used to increase the surface charge density of the PTFE dielectric layer (Fig. 2a), thereby enhancing the output of sf-TENG45. The working principle of sf-TENG was shown in Fig. 2b. The finger and outermost PI were used as the triboelectric layers. Due to electrostatic induction, the surface of the finger would carry positive charge when in contact with the outermost PI, and the charge distribution of the PLA-Au MNP (electrode layers) changed when the finger slid on the PI. As indicated by the COMSOL simulation, when the finger slid from left to right on the sf-TENG, the potential of the right PLA-Au MNP was higher than that of the left, driving electron flow from the left PLA-Au electrode to the right electrode through the skin as an external load (Fig. 2c, d). Similarly, when the finger moved to the left, electrons flowed in the opposite direction. The output performances of sf-TENG under different achievable finger sliding frequencies were investigated, where the sliding frequency of 2 Hz had the most suitable output parameters (Supplementary Fig. 8a). In our experiments, a pigskin-wrapped mechanical linear motor was used instead of finger stroking to drive the sf-TENG, and the resulting open-circuit voltage (VOC), short-circuit current (ISC), and short-circuit transferred charge (QSC) were about 20 V, 1 μA, and 11 nC, respectively (Fig. 2e–g), which were consistent with the output performance of sf-TENG driven by human finger sliding (Supplementary Fig. 8b–d and Supplementary Movie 1). Fig. 2: Electric performance of mn-STESS. a Sketch of a corona discharge system. b Schematic of the working principle of sf-TENG. c, d Photographs and COMSOL simulation schematics of finger sliding from left to right on sf-TENG. e–g VOC, ISC, and QSC of the sf-TENG. Source data are provided as a Source Data file. 
mn-STESS improved permeability and utilization of EGF As a drug carrier, the second-stage cGel-cHA MN had a tightly connected polymer network controlling drug release in a physicochemical manner46. To enable continuous daily treatment, the cross-linking degree of cGel and cHA particle content could be adjusted to control the sustained release time of CMNP to 24 h. The effect of cGel MNs cross-linking degree on EGF release kinetics was first explored in vitro. EGF was released from cGel MNs due to water absorption and swelling of cGel (Fig. 3a(i)). The release rate of cGel MNs mainly depended on the degree of cross-linking of cGel, which increased with the volume of genipin solution (Supplementary Fig. 9a). After cross-linking, the Gel solution changed from yellow to dark blue (Supplementary Fig. 9b). The EGF release rate decreased with increasing cGel MNs cross-linking (Fig. 3b and Supplementary Table 1) and the sustained release time also increased, which was attributed to the formation of a tighter network structure of cGel with a higher degree of cross-linking. Both 53% and 65% cross-linked cGel MNs could continuously release drug for a long period (90% release in 18 h) (Supplementary Fig. 10a). However, the poor flowability of cGel with 65% cross-linking hampered the preparation of MNs (Supplementary Fig. 9c). Therefore, cGel with a cross-linking degree of 53% was used to fabricate MNs for subsequent studies. Fig. 3: Drug delivery in vitro and in vivo. a Schematic of EGF release from MNP based on different drug-loaded materials: (i) cGel MNs; (ii) cGel-cHA MNs in CMNP and mn-STESS; (iii) motion of EGF and GSH molecules in the CMNP (NS) and mn-STESS (ES) groups. b EGF release efficiency from the cGel MNs with the different crosslinking degree. (n = 3 independent samples. Data are presented as mean ± SEM). c EGF release efficiency from cGel-cHA MNs with the different cHA microparticle contents. (n = 3 independent samples. Data are presented as mean ± SEM). d EGF release efficiency from Gel MNs, cGel MNs, cGel-cHA MNs, and mn-STESS. (n = 3 independent samples, mean ± SEM). e Fluorescence images of skin penetrated by mn-STESS at different times. Fluorescence intensities are indicated by a color scale (right). Blue to red presents the minimum to maximum fluorescence intensity. f Mass of EGF from CMNP and mn-STESS in GSH solution; (inset) motion behavior of EGF and GSH in the CMNP (NS, left) and mn-STESS (ES, right) groups; the yellow parts indicate the disulfide bond of EGF. (n = 5 independent samples. Data are presented as mean ± SEM). g Diffusion coefficients of EGF and GSH under NS and ES. Data are presented as mean ± SEM, and the errors are generated by the linear fit of the mean square displacement from simulation trajectory analysis. h Distance between EGF and GSH molecules under ES; CMNP was set as control group. Data are presented as mean ± SEM, and the errors are generated by the linear fit of the mean square displacement from simulation trajectory analysis. Source data are provided as a Source Data file. Sustained drug release by MNs could be further prolonged by introducing cHA particles into cGel. As the drug release process of cGel-cHA MNs, the outer layer of cGel swelled upon contact with solution, allowing the encapsulated drug and drug-loaded cHA particles to slowly exude (Fig. 3a(ii)). Then, the escaped cHA particles further swelled and slowly released the drug. 
Different cHA contents exhibited differentiated drug release kinetics: as the content of cHA microparticles in cGel MNs increased, more EGF was introduced into cHA microparticles to slow down the EGF release; however, when the cHA content exceeded 60%, MNs released drug more rapidly due to lack of cGel encapsulation (Fig. 3c and Supplementary Table 2). MNs with 40% and 60% cHA microparticle contents enabled sustained drug release over 24 h (Supplementary Fig. 10b). Moreover, the higher proportion of cHA microparticle also influenced the mechanical performance of MNs by causing discontinuities in the distribution of each component. At a cHA microparticle content of 60%, the parafilm penetration rate of cGel-cHA MNs decreased rapidly, indicating that the mechanical properties of cGel-cHA MNs deteriorated (Supplementary Fig. 10c). Taken together, considering the optimal mechanical and drug release kinetic properties of MNs, the optimized drug-loaded portion of the CMNP was composed of 53% cross-linked cGel and 40% cHA microparticles. The effect of the electric field generated by sf-TENG on the movement behavior of drug molecules was further investigated47. The in vitro drug release kinetics indicated no difference in drug release rate between mn-STESS and CMNP (Fig. 3d and Supplementary Table 3); moreover, currents at different sliding frequencies in the mn-STESS also did not affect the drug release rate (Supplementary Fig. 10d). The mn-STESS maintained sustained release of EGF for 24 h (Supplementary Fig. 10e), implying that the current generated by mn-STESS hardly modulated the physicochemical process of microneedle-based drug release. The penetration depth and release kinetics of the drug were visualized in vitro in skin-mimicking hydrogels prepared with 15% porcine gelatin. The results showed that the penetration depth of fluorescently labeled EGF delivered by the mn-STESS was ~990 μm, while the drugs administered by conventional dressings only stayed on the surface. Similarly, the current generated by mn-STESS under different sliding frequencies also did not affect the penetration depth of drug (Supplementary Fig. 11a, b). The release kinetics of mn-STESS was further studied in vivo. Material toxicity tests showed that all components of the CMNP were highly biocompatible (Supplementary Fig. 12a, b). Two CMNPs loaded with FITC-labeled EGF were applied to the back skin of mice and the drug release process was visualized using an in vivo imaging system (IVIS) (Fig. 3e and Supplementary Fig. 13a, b). The imaging results showed that the fluorescence area and intensity at the administration site gradually increased during the initial 12 h, indicating that the drug was continuously released from the CMNP and diffused in the skin. Subsequently, the fluorescence area and intensity decreased simultaneously, as the encapsulated drugs gradually diffused into the deep tissues and were absorbed into the systemic circulation48. Consistent with the drug release results in vitro, mn-STESS continuously released drug over 24 h in vivo. Therefore, mn-STESS could improve the pharmacodynamics by enhancing EGF penetration without changing the drug release rate. mn-STESS maintained the stability of EGF by suppressing GSH reduction Reduced GSH (Mw ≈ 307.33), a highly active antioxidant, is widely distributed in the skin tissue fluid with the concentration of 2~20 μM49,50. GSH could reduce EGF by opening the disulfide bond, rendering it inactive51,52. 
mn-STESS enhanced EGF activity by suppressing its reduction by GSH. ELISA confirmed that the ES produced by mn-STESS had little effect on the concentration of either EGF or GSH alone (Supplementary Fig. 14a, b). Interestingly, when both components were present, the EGF concentration was 4.8 times higher in the mn-STESS group than in the CMNP group (Fig. 3f), confirming the inhibitory effect of ES on GSH-mediated reduction. The underlying mechanism might be attributed to the regulation of molecular motion by the electric field (Fig. 3a(iii)). A molecular dynamics simulation showed that the ES generated by mn-STESS had little effect on the movement rate of EGF, consistent with the drug release results for the mn-STESS group described above. However, the diffusion coefficient of GSH in the mn-STESS group was 1.5 times that of the CMNP group, resulting in a significant increase in the intermolecular distance between EGF and GSH (Fig. 3g, h and Supplementary Fig. 14c), thereby reducing the collision probability between GSH and EGF and preventing destruction of the EGF disulfide bond by GSH. mn-STESS upregulated EGFR expression in HaCaT cells to compensate for receptor desensitization EGFR is a membrane surface receptor that specifically binds to EGF with high affinity37,53. However, continuous administration of EGF desensitizes cellular EGFR, resulting in a significant reduction in drug efficacy. During the natural wound healing process, increased EGFR expression in corneal epithelial cells is associated with bioelectric fields, suggesting that ES is a potential strategy to compensate for receptor desensitization. To investigate whether the ES produced by mn-STESS enhanced EGFR expression and improved EGF pharmacodynamics, a wound model was further established in vitro, and EGFR immunofluorescence staining was performed on epidermal cells under different treatment modes (Fig. 4a). The results showed that the expression level of EGFR in the CMNP group was the lowest, which was only 37% of that in the Blank group, confirming that severe receptor desensitization occurred under EGF administration. The EGFR expression was significantly increased under the intervention of ES. Despite the simultaneous presence of ES and EGF administration in the mn-STESS group, the receptor sensitization induced by self-powered ES overwhelmed the drug-induced downregulation of EGFR, and EGFR expression in the mn-STESS group was 4.7-fold that in the CMNP group (Fig. 4b, c). Therefore, mn-STESS significantly compensated for receptor desensitization, suggesting that it has potential for improving pharmacodynamics. Fig. 4: Effects of electrical stimulation in synergy with EGF on HaCaT cell behavior. a mn-STESS generated ES and released EGF to promote cell proliferation and migration. b Representative fluorescence images of F-actin (green) and EGFR (red) in HaCaT cells treated with CMNP (EGF), ES, and mn-STESS (EGF and ES); scale bar, 50 µm. c Mean fluorescence intensities of EGFR expression in HaCaT cells. (n = 3 independent samples. ***p < 0.001. All statistical analyses were performed by one-way ANOVA. Data are presented as mean ± SEM). d Relative growth rate of HaCaT cells treated with CMNP (EGF), ES, and mn-STESS (EGF and ES). (n = 3 independent samples. *p < 0.05. All statistical analyses were performed by one-way ANOVA. Data are presented as mean ± SEM). e Representative images of HaCaT cell migration; red area indicates the migrated cells; scale bar, 200 µm.
f Migration area of HaCaT cells treated with CMNP (EGF), ES, and mn-STESS (EGF and ES). (n = 3 independent samples. *p < 0.05 and ***p < 0.001. All statistical analyses were performed by one-way ANOVA. Data are presented as mean ± SEM). Source data are provided as a Source Data file. Activation of the EGF/EGFR signaling pathway could promote cell proliferation33. Cell viability tests exhibited that the monotherapy with ES (1 μA current) and EGF administration (10 ng/mL) had no effect on cell proliferation after 24 h. However, the combination of ES and EGF administration in the mn-STESS group accelerated cell proliferation at the same cell culture time (Fig. 4d). Specific binding of EGF and EGFR alters the actin cytoskeletal structure, thereby promoting cell migration54. Fluorescent staining confirmed that EGF and mn-STESS significantly promoted F-actin aggregation and altered cytoskeletal distribution in HaCaT cells (Fig. 4b). Moreover, cell scratching experiments mimicking wounds confirmed that mn-STESS further promoted cell migration by improving the pharmacodynamics of EGF. After 4 h of intervention, the percentage of wound healing area in the Blank, CMNP, ES, and mn-STESS groups was ~20.9%, 49.3%, 33.0%, and 63.2%, respectively (Fig. 4e, f). The scratch healing area ratio of the CMNP group was about 2.4-fold that of the Blank group, confirming the effect of EGF on promoting cell migration. The percentage healed area in the ES group was slightly but non-significantly larger than that in the Blank group, likely because the alternating current generated by sf-TENG in mn-STESS was far less effective than direct current in promoting cell migration55. The wound healing area in the mn-STESS group was even 28.2% larger than that in the CMNP group. Because the direct effect of alternating current ES on cell migration was weak, this remarkable healing effect was attributed to the improved EGF pharmacodynamics induced by mn-STESS. In addition, mn-STESS did not generate additional thermal perturbations to affect cell migration. As the current of the built-in sf-TENG was only 1 μA, its theoretical thermal energy was only 0.02 J, which hardly caused the temperature of the culture medium to increase (Supplementary Fig. 15). PI3K is a downstream molecule of the EGF/EGFR signaling pathway16. To confirm that cell proliferation and migration were related to activation of the EGF/EGFR signaling pathway, PI3K expression of HaCaT cells in different groups was assessed by immunofluorescence staining. PI3K expression in the ES group did not correspond to the EGFR content due to lower phosphorylation levels of EGFR at limited endogenous EGF concentrations. As shown by the staining results, there was no difference in PI3K positivity between the ES and Blank groups. The addition of exogenous EGF in the CMNP group significantly increased the rate of PI3K positivity. In the mn-STESS group, the PI3K content was further increased under the stimulation of self-powered ES (Supplementary Fig. 16a, b). These results demonstrated that mn-STESS promoted the expression of EGFR through self-powered ES to improve pharmacodynamics, thereby enhancing the binding of EGF to EGFR, promoting the phosphorylation of EGFR, and accelerating cell migration and proliferation by activating the EGFR signaling pathway. Improved pharmacodynamics of EGF via mn-STESS in wound healing A mouse full-thickness skin wound model (with a 7 × 7 mm defect) was further established to evaluate the therapeutic effect of mn-STESS. 
CMNPs penetrated the skin barrier and directly delivered the drug to the skin (Supplementary Fig. 17a). Hematoxylin and eosin (H&E) staining showed that CMNP penetrated the dermis to a depth of ~700 μm. Moreover, the long-term skin penetration of CMNPs did not trigger inflammation, and the skin returned to a normal state 20 min after CMNP removal (Supplementary Fig. 17b, c). Compared with traditional chitosan dressing loaded with EGF (CD-EGF), CMNPs could better promote wound healing because of high permeability and utilization of EGF (Supplementary Fig. 17d, e). To ensure that the mn-STESS could provide ES to wounds in animal models, the wound potentials and currents before and after mn-STESS intervention were monitored (Supplementary Fig. 18). The peak wound current after mn-STESS intervention was 1 μA, identical to the short-circuit current of mn-STESS. The peak wound potential after mn-STESS intervention was slightly higher than the endogenous wound electrical potential. The wound current and potential waveforms exhibited the same frequency (2 Hz) as sf-TENG. It was worth mentioning that the horizontal sliding of the finger when driving the mn-STESS produced little mechanical stimulation to the wound, which was attributed to the buffering of the stress by the chitosan dressing covering the wound surface. At the same time, animal experiments also confirmed that simple mechanical stimulation by finger moving had no significant effect on wound healing (Supplementary Fig. 19a, b). Furthermore, the PI layer on the surface of mn-STESS also shielded the thermal conduction of finger sliding, and the skin surface temperature remained almost unchanged after driving mn-STESS for 4 h (Supplementary Fig. 20). Translational medicine results showed mn-STESS significantly promoted wound healing. As shown in Fig. 5a, two CMNPs were attached to both sides of the wound to avoid secondary traumatization in the wound area. mn-STESS continuously released EGF into the skin and provided self-powered ES as an adjuvant for synergistic therapy. The surgical photos showed that the wound healing effect in all intervention groups was superior to that in the CD group. Especially in the mn-STESS group, the healing rate in the first 6 days was ~18% and ~36% faster than that in the CMNP and ES groups, respectively (Fig. 5b–d). H&E staining of tissue from the wound center showed that the newborn epithelium (NE) in the mn-STESS group was longest (Supplementary Fig. 21). H&E staining of wound healing tissue also showed that the mn-STESS group presented even better wound healing quality, with more new vessels (NV) and hair follicles (HF) compared to the other groups (Fig. 5e–g). The above results demonstrated that the electrical adjuvant produced by sf-TENG synergistically enhanced the wound-healing effect of EGF. Fig. 5: mn-STESS promoted wound healing in vivo. a Schematic showing how mn-STESS promoted wound healing. b Representative digital images of wound areas treated with CMNP (EGF) and mn-STESS (EGF and ES) on days 0, 3, 6, 9, and 12 (n = 6). c, d Quantitative analysis of wound area and relative healing rate for each group on days 0, 3, 6, 9, and 12. (n = 4 independent samples. *p < 0.05. *, # and & indicate the significant differences between other groups and CD, CMNP, ES, respectively. All statistical analyses were performed by one-way ANOVA. Data are presented as mean ± SEM). 
e H&E staining of wound healing tissue showing new epithelium (NE), new granulation tissue (GT), and new hair follicles (HF); the magnified H&E staining in GT showing new vessels (NVs). Blue lines, black rectangles, thin black arrows and thick black arrows indicate NE, GT, HF, and NV, respectively. f Quantitative statistics of NV in healing skin. (n = 3 independent samples. *p<0.05 and **p<0.01. All statistical analyses were performed by one-way ANOVA. Data are presented as mean ± SEM). g Quantitative statistics of new HF in healing skin. (n = 3 independent samples. **p < 0.01. All statistical analyses were performed by one-way ANOVA. Data are presented as mean ± SEM). h, i Representative fluorescence images and fluorescence intensities of EGFR (red) in wound areatreated with CD, CMNP (EGF), ES, and mn-STESS (EGF and ES). (n = 3 independent samples. **p < 0.01 and ***p < 0.001. All statistical analyses were performed by one-way ANOVA. Data are presented as mean ± SEM). Source data are provided as a Source Data file. To confirm in animal models that the wound-healing effect of mn-STESS was mediated by improvement of EGF pharmacodynamics, we further evaluated EGFR and PI3K expression in wound tissue by immunofluorescence staining. The fluorescence results showed that the expression of EGFR and PI3K in the mn-STESS group were increased by 43% and 55% compared with those in the CMNP group, respectively (Fig. 5h, i and Supplementary Fig. 22). Notably, no obvious receptor desensitization occurred in the new epithelial tissue of the CMNP group, which was attributed to the metabolism and diffusion of EGF in vivo 24 h after administration. However, at the CMNP puncture site, significant EGFR desensitization could still be observed due to the high local drug residual concentration, and the expression of EGFR in the CMNP group was only about 40% of that in the CD group, which was similar to the results of cell experiments. In contrast, the mn-STESS group could still effectively compensate for the desensitization of EGFR at the microneedle puncture site (Supplementary Fig. 23a, b). The consistency of the in vitro and in vivo findings consolidated the compensatory effect of mn-STESS on receptor desensitization and further activated the EGF/EGFR pathway and downstream molecule by improving EGF pharmacodynamics, thereby significantly promoting wound healing. In summary, we developed the mn-STESS by integrating sf-TENG and CMNP (Fig. 6). The sf-TENG converted the mechanical energy generated by finger sliding into electricity. The two-stage structure of CMNP could not only deliver drugs, but also introduce a microcurrent produced by sf-TENG into the dermis for transcutaneous ES. In response to pharmacodynamic challenges, mn-STESS improved the efficacy of EGF in various aspects: (i) continuously delivered EGF in a minimally invasive manner to improve drug permeability and utilization; (ii) ES generated by mn-STESS increased the intermolecular distance between EGF and GSH by promoting GSH movement, thereby suppressing the reduction of EGF; (iii) self-powered ES also upregulated EGFR expression to compensate for receptor desensitization and improved the efficacy of EGF. In vitro and in vivo experiments confirmed that the strengthened EGF pharmacodynamics by mn-STESS promoted cell proliferation and migration by activating the EGF/EGFR pathway and downstream molecule PI3K. 
Furthermore, mn-STESS therapy showed clear benefits in wound healing by promoting wound re-epithelialization, vascularization, and HF formation. This work proposed a therapeutic strategy based on self-powered electrical adjuvants, opening a new avenue for improving the tolerance of classic drugs. Fig. 6: mn-STESS improved EGF pharmacodynamics to promote wound healing. mn-STESS not only delivered EGF transdermally, but also performed transdermal ES, which acted as an adjuvant and synergized with EGF. ES generated by mn-STESS reduced GSH-mediated reduction of EGF to maintain its stability, upregulated cellular EGFR expression to compensate for receptor desensitization, and activated the downstream factor PI3K to promote wound healing. Polydimethylsiloxane (PDMS, Sylgard 184) was obtained from Dow Corning (Midland, USA). Polylactic acid (PLA) was purchased from Lakeshore Biomaterials Inc. (AL, USA). NaOH was purchased from Shanghai Aladdin Bio-Chem Technology Co., LTD (Shanghai, China). 1,4-butanediol diglycidyl ether (BDDE, Mw = 202.25 Da) was obtained from Adamas Reagent Co. Ltd. (Basel, Switzerland). Genipin was purchased from J&K Scientific (Beijing, China). Gelatin (Gel) from cold water fish skin and from porcine skin, and hyaluronic acid (HA, Mw ≈ 20,000–400,000 Da) were purchased from Sigma-Aldrich (MO, USA). Human epidermal growth factor (EGF), FITC-labeled EGF, glutathione, DMEM high glucose medium, penicillin/streptomycin, CCK-8, Triton X-100, DAPI and BSA were purchased from Beijing Solarbio Technology Co., Ltd (Beijing, China). 1640 medium and fetal bovine serum (FBS) were obtained from Gibco. The antibodies used in this study were as follows (catalog number, company): Phalloidin-iFluor 488 (ab176753, Abcam), anti-EGFR (ab52894, Abcam), anti-PI3 Kinase p110 beta (ab151549, Abcam), Goat Anti-Rabbit IgG H&L (Alexa Fluor 647) (ab150083, Abcam) and Goat Anti-Rabbit IgG H&L (FITC) (ab6717, Abcam). Fabrication of MN matrix materials and drug formulations The cross-linked gelatin (cGel) solution and the cross-linked hyaluronic acid (cHA) microparticles were prepared for drug encapsulation by chemical cross-linking according to a previously reported method46. Briefly, a 40% (w/v) Gel solution was obtained by dissolving the Gel powder from cold water fish skin in deionized water under magnetic stirring at 50 °C for 1 h. Then, the genipin solution was slowly added dropwise to the prepared Gel solution and mixed at 40 °C for 96 h to produce the cross-linked Gel. The cHA hydrogel was prepared by mixing hyaluronic acid powder (HA, Mw ≈ 20,000–400,000 Da, 1 g) and 1,4-butanediol diglycidyl ether (BDDE, 200 μl) in NaOH solution (0.25 M, 9.8 mL, pH = 13) at 65 °C for 3 h, and then purifying with 95% alcohol to remove extra NaOH, BDDE and uncrosslinked HA fragments. Finally, the acquired cHA hydrogel was milled in a ball mill for 20 min and filtered through a 500-mesh filter to obtain cHA gel microparticles. To encapsulate the drug, the obtained cHA microparticles were first immersed in human epidermal growth factor (EGF) solution (10 μg mL−1, 1 mL) and allowed to swell fully, yielding drug-loaded microparticles. The drug-loaded cHA microparticles and EGF drug powder were then mixed uniformly with cGel under magnetic stirring at room temperature to obtain the cGel-cHA solution. Fabrication of the CMNP The polylactic acid microneedle patch (PLA MNP) was prepared by thermoforming with a polydimethylsiloxane (PDMS, Sylgard 184) template. 
The PLA-Au MNP was subsequently obtained by sputtering a 50-nm-thick layer of gold onto the surface of the PLA MNP. The prepared drug solution (10 μg mL−1, 100 μL) was applied to the PDMS template under vacuum at −85 kPa for 30 min. Then, the remaining drug was removed from the surface of the template, and the drug solution filling the cavities was dried at room temperature under vacuum at −85 kPa for 20 min. Next, the prepared matrix material (Gel, cGel, or cGel-cHA) was filled into the cavities under vacuum for 45 min. Following removal of residual material from the mold surface, the PLA-Au MNP was aligned and pressed into the drug-loaded MN cavities and dried at room temperature for 12 h. The two-stage PLA-Au/cGel-cHA CMNP was then removed from the mold for further analysis. Characterization of MNP The size and morphology of the MNs were observed using a stereomicroscope (SZX7, Olympus, Japan) and a cold field emission scanning electron microscope (SEM, SU8020). The mechanical properties of the MNs were measured using a dynamometer (Force Gauge Model, Mark-10, USA). The MNP was placed on a rigid stainless-steel stage, and the mechanical sensor probe was slowly moved vertically downward at a speed of 1 mm min−1 until the set value of 20 N was reached. During this process, force was recorded as a function of displacement. The Young's modulus of the MN was calculated according to the following formula, $$E = \sigma /\varepsilon = (F/A)/(\Delta L/L_{0}) = F\cdot L_{0}/(A\cdot \Delta L)$$ where E is the Young's modulus, σ is the uniaxial stress, ε is the strain, F is the compressive force, A is the cross-section area perpendicular to the applied force, ΔL is the change in length (negative value if compressed), and L0 is the original length. For the penetration ability of the MNs, the MNP was inserted into an artificial penetration membrane under a 10 N load applied by the mechanical test bench. The artificial membrane was made by folding parafilm (127 μm thick per layer) into an 8-layer square. The MNP was attached to the mechanical sensing probe and vertically pierced into the artificial membrane. After peeling off the MNP, the penetration rate of the MNP was determined by counting the number of holes in each layer of parafilm, and the insertion depth was calculated by adding the depression depth of the last parafilm layer to the thickness of all fully penetrated parafilm layers (Supplementary Fig. 24): $$\text{Insertion depth} = 127\ \mu\text{m}\times \text{pierced layers} + \text{depression depth}$$ (A short numerical sketch of these two formulas is given below.) Preparation of sf-TENG The sf-TENG consisted of two triboelectric layers and two electrodes. A PI film (30 × 60 mm) and two PVDF films (7 × 7 mm) were utilized as the triboelectric layer and dielectric layers, respectively. The two conductive PLA-Au MNPs of the CMNPs served as electrodes. The surface charge density of the PTFE dielectric layer was increased by applying the corona discharge method. Briefly, the PTFE film was placed on an Al sheet grounded by a wire, and a polarization voltage of 5 kV was applied to the PTFE film for 15 min through a corona needle. Furthermore, in order to avoid the loss of charge, a layer of PI was covered on the PTFE film. Characterization of sf-TENG The output performance of the sf-TENG, including voltage, current, and transferred charge, was measured and recorded with an oscilloscope (HD 4096, Teledyne LeCroy). 
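The sketch below is a minimal, illustrative check of the two formulas above (the Young's modulus and the parafilm insertion depth). The force, needle geometry, and layer counts used here are hypothetical example values, not measurements from the paper.

```python
# Illustrative check of the two formulas above; all input numbers are hypothetical.

def youngs_modulus(force_n, area_m2, delta_l_m, l0_m):
    """E = sigma / epsilon = (F / A) / (dL / L0) = F * L0 / (A * dL)."""
    return (force_n * l0_m) / (area_m2 * delta_l_m)

def insertion_depth_um(pierced_layers, depression_depth_um, layer_thickness_um=127):
    """Insertion depth = 127 um x fully pierced layers + depression depth in the last layer."""
    return layer_thickness_um * pierced_layers + depression_depth_um

# Hypothetical example values (not from the paper):
E = youngs_modulus(force_n=5.0, area_m2=1e-8, delta_l_m=50e-6, l0_m=600e-6)
print(f"Young's modulus ~ {E / 1e9:.1f} GPa")  # (F*L0)/(A*dL), reported in GPa

depth = insertion_depth_um(pierced_layers=4, depression_depth_um=60)
print(f"Insertion depth ~ {depth} um")  # 127*4 + 60 = 568 um
```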
In the test, a finger, or pigskin in place of a finger, was used to touch the sf-TENG, driven by a linear motor (E1100, LinMot) at frequencies of 1, 1.5, and 2 Hz. Drug delivery in vitro To determine the release kinetics of EGF-loaded CMNPs, the drug-loaded parts of CMNPs fabricated from the different matrix materials (Gel, cGel, cGel-cHA) were completely immersed in 0.01 M phosphate buffer solution (PBS) under magnetic stirring at 200 rpm at 37 °C for 24 h. At specific time points, ~300 μL of the sample was extracted and replaced with an equal volume of fresh PBS. After the collected sample was filtered with a filter tube (MW = 10,000), the amount of EGF released from the CMNP was quantified by UV absorbance at 278 nm using an ultraviolet spectrophotometer (UV/VIS Spectrometer, Lambda 35, PerkinElmer). The drug loading of the CMNP was 0.5 μg. The EGF release efficiency of the CMNP was determined as the ratio of the cumulative amount released at each time point to the drug loading of the CMNP. For drug release from mn-STESS, two CMNPs immersed in PBS were connected to the two electrodes of the sf-TENG (the other two CMNPs) through copper wires, and a piece of pig skin (10 × 10 mm), in place of a human finger, was slid on the sf-TENG driven by a linear motor. The activity of EGF released from CMNP and mn-STESS in GSH solution (10 μmol L−1, 3 mL) was tested with a human EGF ELISA kit. Animal experiments All animal experiments were performed according to protocols approved by the Committee on Ethics of Beijing Institute of Nanoenergy and Nanosystems (A-2019027). Mice were maintained on a 12 h light/dark cycle in individually ventilated cages at 22 °C and 48% humidity with unrestricted access to food and water unless otherwise stated. Mice were group-housed if same-sex litter-mates were available. Mice were purchased from the Beijing Vital River Laboratory Animal Technology Co., Ltd., China. Drug delivery in vivo To visualize drug release in vivo, FITC-labeled EGF was loaded into the CMNPs. Two CMNPs were simultaneously pierced into the depilated dorsal skin of female Kunming mice (6–8 weeks, 25–30 g) at a distance of 1 cm. Six mice were treated with mn-STESS for 24 h. The two CMNPs applied to the skin were connected to the two electrodes of the sf-TENG by copper wires. The sf-TENG output a 1 μA current when pigskin was slid over it. Fluorescent images of the mice were captured using an in vivo imaging system (IVIS, Xenogen 200, Caliper Life Sciences, Hopkinton, MA), and the fluorescence intensity of FITC-labeled EGF in the insertion area at different times (1, 2, 6, 12, 18, 24 h) was measured using the Living Image 4.0 software package. When the mn-STESS was peeled off the skin, the first-stage PLA-Au microneedles were removed along with the patch, and optical micrographs confirmed that the PLA-Au microneedles were not bent or broken. The second-stage cGel-cHA microneedles were retained in the body and gradually degraded. In vitro experiments confirmed that the cGel-cHA microneedles could be completely degraded after 72 h in PBS (Supplementary Fig. 25a, b). The HaCaT cells (keratinocyte cells, SCSP-5091) and L929 cells (fibroblasts, GNM28) were acquired from the Cell Bank of the Chinese Academy of Sciences in Beijing, China. HaCaT cells were cultured in 1640 medium (C11875500BT) containing 10% fetal bovine serum (FBS) and 1% penicillin/streptomycin (P1400). L929 cells were cultured in DMEM high glucose medium (11995) containing 10% FBS and 1% penicillin/streptomycin (P1400). 
Cells were cultured in a humidified incubator (CCL-170B-8, ESCO) with 5% CO2 at 37 °C. All materials of the mn-STESS were tested for cytotoxicity. A cell viability/cytotoxicity assay and CCK-8 were used to assess L929 cell viability. The images were captured by a confocal fluorescence microscope (Leica SP8). A microplate reader (BioRad iMark) was used to measure absorbance. Cell proliferation experiment The HaCaT cells were cultured in 12-well cell plates with CMNP for 24 h. The wells were randomly assigned to four groups, namely the Blank group, CMNP group, ES group, and mn-STESS group. The cells in the Blank group received no intervention. The cells in the CMNP group were treated with 10 ng mL−1 EGF. The cells in the ES group were treated with the sf-TENG at a current of 1 μA. In the mn-STESS group, cells were treated with mn-STESS loaded with 10 ng mL−1 EGF and outputting a current of 1 μA. CCK-8 was used to assess the viability of HaCaT cells. A microplate reader (BioRad iMark) was used to measure absorbance. Scratch test The HaCaT cells were cultured in 12-well cell plates with CMNP for 48 h. Thereafter, a scratch about 400 μm wide was made in the HaCaT cell layer covering the 12-well cell plate using a pipette tip. The cells were then treated under the different conditions: HaCaT cells were treated with the 1 μA ES generated by the sf-TENG and/or the 10 ng mL−1 EGF released from the CMNP. Immunofluorescence of HaCaT cells The HaCaT cells were treated with EGF, ES, and mn-STESS in 12-well cell plates, and the Blank group had no intervention. The cells were blocked with 3% BSA (SW3015) and 10% FBS (10099–141, Gibco) in 0.3% Triton X-100 (T8200) for 2 h at room temperature, incubated with Phalloidin-iFluor 488 (ab176753, Abcam, 1:1000), anti-EGFR (ab52894, Abcam, 1:200) and anti-PI3 Kinase p110 beta (ab151549, Abcam, 1:200) overnight, and washed 3 times with PBS. The cells were then incubated with Goat Anti-Rabbit IgG H&L (Alexa Fluor 647) (ab150083, Abcam, 1:400) and Goat Anti-Rabbit IgG H&L (FITC) (ab6717, Abcam, 1:400) for 1 h. Finally, DAPI (1:100, c0060) was used to counterstain the cells. The photos were taken with the confocal fluorescence microscope (Leica SP8). Image–Pro Plus 6.0 was used to analyze positive expression. Animal experiment for wound healing The experiments were performed on 36 female Kunming mice (6–8 weeks, 25–30 g) without any skin diseases. The back hair of the mice was removed with an electrical hair cutter and depilation cream. A full-thickness skin wound (7 × 7 mm) was created on the back. The 36 mice were randomly divided into six groups and treated for 12 days (dressings or patches replaced once a day), namely the CD group, CD-EGF group, CMNP group, CMNP-sliding group, ES group, and mn-STESS group. The wounds in the CD group were covered with only chitosan dressings. The wounds in the CD-EGF group were covered with chitosan dressings loaded with 1 μg EGF. The wounds in the CMNP group were treated with two CMNPs loaded with 0.5 μg EGF, without sliding. The wounds in the CMNP-sliding group were treated with two CMNPs loaded with 0.5 μg EGF, with sliding. The wounds in the ES group were treated with the sf-TENG at a current of 1 μA. In the mn-STESS group, wounds were treated with mn-STESS loaded with 1 μg EGF and outputting a current of 1 μA. On the 0th, 3rd, 6th, 9th and 12th day, the wound morphology was observed and recorded with a digital camera. Image–Pro Plus 6.0 was used to measure the surface areas of the wounds. On the 12th day (without treatment), wound tissues were collected for observation and analysis. 
$$\text{Remaining wound area}\,(\%) = \frac{\text{wound area}}{\text{initial wound area}}\times 100$$ Wound electrical measurement The wound potential and current were measured with an electrometer (Keithley 6517B) and an oscilloscope (Teledyne LeCroy HD 4096). The positive electrode was placed on the wound edge and the negative electrode on the wound center. Tissues were fixed overnight in 4% paraformaldehyde. The tissues were dehydrated with graded ethanol and embedded in paraffin blocks for sectioning at 4 μm. Tissue sections were stained with hematoxylin and eosin (H&E) and by immunofluorescence (IF). For the IF staining, the sections were blocked with 3% BSA (SW3015) and 10% FBS (10099-141, Gibco) in 0.3% Triton X-100 (T8200) for 2 h at room temperature, incubated with anti-EGFR (ab52894, Abcam, 1:100) and anti-PI3 Kinase p110 beta (ab151549, Abcam, 1:100) overnight, and washed 3 times with PBS. The sections were then incubated with Goat Anti-Rabbit IgG H&L (Alexa Fluor 647) (ab150091, Abcam, 1:100) for 1 h. Finally, DAPI (1:100, c0060) was used to counterstain the sections. The photos were taken with the confocal fluorescence microscope (Leica SP8). Image–Pro Plus 6.0 was used to analyze positive expression. Statistics and reproducibility At least three independent experiments of each type were performed and produced consistent results. Specifically, the experiments shown in the following figures were repeated three times: main Figs. 1c–e; 3e, supplementary Figs. 2; 4c; 6b; 7b, c, f; 24b, c; 25a. Data are expressed as the mean ± standard error of the mean (SEM) of at least three independent samples. One-way ANOVA was used to determine the statistical significance of the differences. Image–Pro Plus 6.0, Origin 2018 and Excel were used for data analysis and plotting. *p < 0.05, **p < 0.01 and ***p < 0.001 were considered statistically significant. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. The authors declare that all data supporting the results in this study are available within the paper and its Supplementary Information, or from the corresponding authors upon reasonable request. Source data for the figures in this study are available from figshare with the identifier https://doi.org/10.6084/m9.figshare.21411567. Source data are provided with this paper. Chakraborty, S. et al. Constitutive and ligand-induced EGFR signalling triggers distinct and mutually exclusive downstream signalling networks. Nat. Commun. 5, 5811 (2014). Alexander, P. B. et al. EGF promotes mammalian cell growth by suppressing cellular senescence. Cell Res. 25, 135–138 (2015). Chen, J. C. et al. Expression and function of the epidermal growth factor receptor in physiology and disease. Physiol. Rev. 96, 1025–1069 (2016). Ho, J. et al. Candidalysin activates innate epithelial immune responses via epidermal growth factor receptor. Nat. Commun. 10, 2297–2309 (2019). Zhang, X. W., Gureasko, J., Shen, K., Cole, P. A. & Kuriyan, J. An allosteric mechanism for activation of the kinase domain of epidermal growth factor receptor. Cell 125, 1137–1149 (2006). Hu, B. et al. An intrinsically bioactive hydrogel with on-demand drug release behaviors for diabetic wound healing. Bioact. Mater. 6, 4592–4606 (2021). Shao, M. Y., Fan, Y. Q., Zhang, K., Hu, Y. & Xu, F. J. 
One nanosystem with potent antibacterial and gene-delivery performances accelerates infected wound healing. Nano Today 39, 101224 (2021). Qiang, L., Yang, S., Cui, Y. H. & He, Y. Y. Keratinocyte autophagy enables the activation of keratinocytes and fibroblasts and facilitates wound healing. Autophagy 17, 2128–2143 (2021). Tadros, A. R. et al. STAR particles for enhanced topical drug and vaccine delivery. Nat. Med. 26, 341–347 (2020). Bouwstra, J. A., Helder, R. W. J. & Ghalbzouri, A. E. Human skin equivalents: Impaired barrier function in relation to the lipid and protein properties of the stratum corneum. Adv. Drug Deliv. Rev. 175, 113802 (2021). Bos, J. D. & Meinardi, M. M. H. M. The 500 Dalton rule for the skin penetration of chemical compounds and drugs. Exp. Dermatol. 9, 165–169 (2000). Lu, Y. F. et al. Engineering bacteria-activated multifunctionalized hydrogel for promoting diabetic wound healing. Adv. Funct. Mater. 31, 2105749 (2021). Kim, Y. C., Park, J. H. & Prausnitz, M. R. Microneedles for drug and vaccine delivery. Adv. Drug Deliv. Rev. 64, 1547–1568 (2012). Kim, H. et al. Hyaluronate-epidermal growth factor conjugate for skin wound healing and regeneration. Biomacromolecules 17, 3694–3705 (2016). Yang, Y. X. et al. Trisulfide bond–mediated doxorubicin dimeric prodrug nanoassemblies with high drug loading, high self-assembly stability, and high tumor selectivity. Sci. Adv. 6, eabc1725 (2020). Tomas, A., Futter, C. E. & Eden, E. R. EGF receptor trafficking: consequences for signaling and cancer. Trends Cell Biol. 24, 26–34 (2014). Ceresa, B. P. Spatial regulation of epidermal growth factor receptor signaling by endocytosis. Int. J. Mol. Sci. 14, 72–87 (2012). Lemmon, M. A. & Schlessinger, J. Cell signaling by receptor tyrosine kinases. Cell 141, 1117–1134 (2010). Pawson, T. Protein modules and signalling networks. Nature 373, 573–580 (1995). Zhao, M., Dick, A., Forrester, J. V. & McCaig, C. D. Electric field–directed cell motility involves up-regulated expression and asymmetric redistributionof the epidermal growth factor receptors and is enhanced by fibronectin and laminin. Cell 10, 1259–1276 (1999). Mukherjee, R. et al. Long-term localized high-frequency electric stimulation within the myocardial infarct: effects on matrix metalloproteinases and regional remodeling. Circulation 122, 20–32 (2010). Liu, Q. & Song, B. Electric field regulated signaling pathways. Int. J. Biochem. Cell Biol. 55, 264–268 (2014). Yu, J. C. et al. Glucose-responsive insulin patch for the regulation of blood glucose in mice and minipigs. Nat. Biomed. Eng. 4, 499–506 (2020). Zhang, X. X. et al. Bio-inspired clamping microneedle arrays from flexible ferrofluid-configured moldings. Sci. Bul. 64, 1110–1117 (2019). Chang, H. et al. Cryomicroneedles for transdermal cell delivery. Nat. Biomed. Eng. 5, 1008–1018 (2021). Larrañeta, E., Lutton, R. E. M., Woolfson, A. D. & Donnelly, R. F. Microneedle arrays as transdermal and intradermal drug delivery systems: Materials science, manufacture and commercial development. Mat. Sci. Eng. R. 104, 1–32 (2016). Yang, S. Y. et al. A bio-inspired swellable microneedle adhesive for mechanical interlocking with tissue. Nat. Commun. 4, 1702 (2013). Yang, Y. et al. Self-powered controllable transdermal drug delivery system. Adv. Funct. Mater. 31, 2104092 (2021). Wan, T., Pan, Q. & Ping, Y. Microneedle-assisted genome editing: A transdermal strategy of targeting NLRP3 by CRISPR-Cas9 for synergistic therapy of inflammatory skin disorders. Sci. Adv. 7, eabe2888 (2021). Zhang, Y. 
Q. et al. Advances in transdermal insulin delivery. Adv. Drug Deliv. Rev. 139, 51–70 (2019). Zhao, C. C. et al. Highly efficient in vivo cancer therapy by an implantable magnet triboelectric nanogenerator. Adv. Funct. Mater. 29, 1808640 (2019). Cao, Y. et al. A self-powered triboelectric hybrid coder for human-machine interaction. Small Methods 6, e2101529 (2022). Sun, M. et al. Nanogenerator-based devices for biomedical applications. Nano Energy 89, 106461 (2021). Dong, R. H. et al. In situ electrospinning of aggregation-induced emission nanofibrous dressing for wound healing. Small Methods 6, 2101247 (2022). Lee, B. K., Yun, Y. & Park, K. PLA micro- and nano-particles. Adv. Drug Deliv. Rev. 107, 176–191 (2016). Ouyang, H. A bioresorbable dynamic pressure sensor for cardiovascular postoperative care. Adv. Mater. 33, 2102302 (2021). Ogiso, H. et al. Crystal structure of the complex of human epidermal growth factor and receptor extracellular domains. Cell 110, 775–787 (2002). Zhang, Y. et al. Preparation, characterization, and evaluation of genipin crosslinked chitosan/gelatin three-dimensional scaffolds for liver tissue engineering applications. J. Biomed. Mater. Res. A 104, 1863–1870 (2016). Mallick, S. P. et al. Genipin-crosslinked gelatin-based emulgels: An insight into the thermal, mechanical, and electrical studies. AAPS PharmSciTech 16, 1254–1262 (2015). Zhang, J. N., Chen, B. Z., Ashfaq, M., Zhang, X. P. & Guo, X. D. Development of aBDDE-crosslinked hyaluronic acid based microneedles patch as a dermal filler for anti-ageing treatment. J. Ind. Eng. Chem. 65, 363–369 (2018). Hao, Y. Y. et al. Effect of polymer microneedle pre-treatment on drug distributions in the skin in vivo. J. Drug Target. 28, 811–817 (2020). Makvandi, P. et al. Engineering microneedle patches for improved penetration: analysis, skin models and factors affecting needle insertion. Nano-Micro Lett. 13, 92–133 (2021). Chen, B. Z. Safety evaluation of solid polymer microneedles in human volunteers at different application sites. ACS Appl. Bio Mater. 2, 5616–5625 (2019). Kalra, A., Lowe, A. & Al-Jumaily, A. M. Mechanical behaviour of skin: a review. J. Mater. Sci. Eng. 5, 1000254 (2016). Ouyang, H. et al. Symbiotic cardiac pacemaker. Nat. Commun. 10, 1821 (2019). Chen, B. Z., Zhang, L. Q., Xia, Y. Y., Zhang, X. P. & Guo, X. D. A basal-bolus insulin regimen integrated microneedle patch for intraday postprandial glucose control. Sci. Adv. 6, eaba7260 (2020). Xu, L. L., Yang, Y., Mao, Y. K. & Li, Z. Self-powerbility in electrical stimulation drug delivery system. Adv. Mater. Technol. 7, 2100055 (2021). Chen, B. Z., Ashfaq, M., Zhu, D. D., Zhang, X. P. & Guo, X. D. Controlled delivery of insulin using rapidly separating microneedles fabricated from genipin-crosslinked gelatin. Macromol. Rapid Commun. 39, 1800075 (2018). Duperray, J., Sergheraert, R., Chalothorn, K., Tachalerdmanee, P. & Perin, F. The effects of the oral supplementation of L-Cystine associated with reduced L-Glutathione-GSH on human skin pigmentation: a randomized, double-blinded, benchmark- and placebo-controlled clinical trial. J. Cosmet. Dermatol. 21, 802–813 (2022). Alton, M. & Anderson, M. E. Glutathione. Annu. Rev. Biochem. 52, 711–760 (1983). Fan, M. L. et al. Design and biological activity of epidermal growth factor receptor-targeted peptide doxorubicin conjugate. Biomed. Pharmacother. 70, 268–273 (2015). Lina, M., Chantal, E., Pierre, S. H. & Marc, B. 
EGF ediates protection against fas-induced apoptosis by depleting and oxidizing intracellular GSH stocks. J. Cell Physiol. 198, 62–72 (2004). Arkhipov, A. et al. Architecture and membrane interactions of the EGF receptor. Cell 152, 557–569 (2013). Ma, L. Q. et al. Epidermal growth factor (EGF) and interleukin (IL)−1β synergistically promote ERK1/2-mediated invasive breast ductal cancer cell migration and invasion. Mol. Cancer 11, 79–90 (2012). Luo, R. Z., Dai, J. Y., Zhang, J. P. & Li, Z. Accelerated skin wound healing by electrical stimulation. Adv. Healthc. Mater. 10, 2100557 (2021). This work was financially supported by the National Natural Science Foundation of China (T2125003 and 61875015 to Z.L.; 81873936 to J.P.Z.; 51902344 to D.L.; 52161145410 and 51873015 to X.D.G.), National Key Research and Development Program of China (2021YFB3201200 to D.L.), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA16021101 to Z.L.), the Beijing Natural Science Foundation (JQ20038 and L212010 to Z.L.) and the outstanding project of the Youth training Program of Military Medical Science and Technology (21QNPY026 to J.P.Z.). These authors contributed equally: Yuan Yang, Ruizeng Luo. Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing, 101400, China Yuan Yang, Ruizeng Luo, Shengyu Chao, Dongjie Jiang, Dan Luo, Zhou Li & Zhong Lin Wang Department of Plastic Surgery, State Key Laboratory of Trauma, Burns and Combined Injury, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, 400038, China Yuan Yang & Jiaping Zhang School of Nanoscience and Technology, University of Chinese Academy of Sciences, Beijing, 100049, China Yuan Yang, Ruizeng Luo, Shengyu Chao, Dongjie Jiang, Dan Luo & Zhou Li Institute of Engineering Medicine, Beijing Institute of Technology, Beijing, 100081, China Jiangtao Xue Beijing Laboratory of Biomedical Materials, College of Materials Science and Engineering, Beijing University of Chemical Technology, Beijing, 100029, China Yun Hao Feng & Xin Dong Guo Center of Nanoenergy Research, School of Physical Science and Technology, Guangxi University, Nanning, 530004, China Dan Luo & Zhou Li Institute for Stem Cell and Regeneration, Chinese Academy of Sciences, Beijing, 100101, China Georgia Institute of Technology, Atlanta, GA, 30332 0245, USA Zhong Lin Wang Yuan Yang Ruizeng Luo Shengyu Chao Dongjie Jiang Yun Hao Feng Xin Dong Guo Dan Luo Jiaping Zhang Y.Y. and R.Z.L. contributed equally to this work. Y.Y., R.Z.L., D.L. and Z.L. designed the study, performed experimental measurements, and wrote the manuscript. S.Y.C. and Z.L.W. directed the preparation of sf-TENG. X.D.G. contributed to the fabrication of CMNP. J.T.X. contributed to COMSOL simulation. Y.Y. and Y.H.F. accomplished the molecular dynamics simulation. R.Z.L. completed the cell experiment test and correlative data analysis. Y.Y., R.Z.L., and J.P.Z. carried out the animal experiment and correlative data analysis. D.J.J. contributed to the schematic of the article. Correspondence to Dan Luo, Jiaping Zhang or Zhou Li. Nature Communications thanks Sei Kwang Hahn and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Description of Additional Supplementary Files Supplementary Movie S1 Yang, Y., Luo, R., Chao, S. et al. 
Improved pharmacodynamics of epidermal growth factor via microneedles-based self-powered transcutaneous electrical stimulation. Nat Commun 13, 6908 (2022). https://doi.org/10.1038/s41467-022-34716-5 Nature Communications (Nat Commun) ISSN 2041-1723 (online)
CommonCrawl
$W$ ANOMALOUS MAGNETIC MOMENT The full magnetic moment is given by $\mu_W = e(1+\kappa+\lambda)/2m_W$. In the Standard Model, at tree level, $\kappa = 1$ and $\lambda = 0$. Some papers have defined $\Delta\kappa = 1-\kappa$ and assume that $\lambda = 0$. Note that the electric quadrupole moment is given by $-e(\kappa-\lambda)/m_W^2$. A description of the parameterization of these moments and additional references can be found in HAGIWARA 1987 and BAUR 1988. The parameter $\Lambda$ appearing in the theoretical limits below is a regularization cutoff which roughly corresponds to the energy scale where the structure of the $W$ boson becomes manifest. 
VALUE ($e/2m_W$) | EVTS | TECN 
$2.22^{+0.20}_{-0.19}$ | 2298 | 1 DLPH ($E^{ee}_{\rm cm}$ = 183+189 GeV) 
• • • We do not use the following data for averages, fits, limits, etc. • • • ALITTI 1992C (UA2), GROTCH 1987, VANDERBIJ 1987 
1 ABREU 2001I combine results from $e^+e^-$ interactions at 189 GeV leading to $W^+W^-$, $We\nu_e$, and $\nu\bar{\nu}\gamma$ final states with results from ABREU 1999L at 183 GeV to determine $\Delta g_1^Z$, $\Delta\kappa_\gamma$, and $\lambda_\gamma$. $\Delta\kappa_\gamma$ and $\lambda_\gamma$ are simultaneously floated in the fit to determine $\mu_W$. 2 ABE 1995G report $-1.3<\kappa<3.2$ for $\lambda=0$ and $-0.7<\lambda<0.7$ for $\kappa=1$ in $p\bar{p} \rightarrow e\nu_e\gamma X$ and $\mu\nu_\mu\gamma X$ at $\sqrt{s}$ = 1.8 TeV. 3 ALITTI 1992C measure $\kappa = 1^{+2.6}_{-2.2}$ and $\lambda = 0^{+1.7}_{-1.8}$ in $p\bar{p} \rightarrow e\nu\gamma + X$ at $\sqrt{s}$ = 630 GeV. At 95% CL they report $-3.5<\kappa<5.9$ and $-3.6<\lambda<3.5$. 4 SAMUEL 1992 use preliminary CDF and UA2 data and find $-2.4<\kappa<3.7$ at 96% CL and $-3.1<\kappa<4.2$ at 95% CL respectively. They use data for $W\gamma$ production and radiative $W$ decay. 5 SAMUEL 1991 use preliminary CDF data for $p\bar{p} \rightarrow W\gamma X$ to obtain $-11.3 \leq \Delta\kappa \leq 10.9$. Note that their $\kappa = 1-\Delta\kappa$. 6 GRIFOLS 1988 uses deviation from the $\rho$ parameter to set the limit $\Delta\kappa \lesssim 65\,(M_W^2/\Lambda^2)$. 7 GROTCH 1987 finds the limit $-37 < \Delta\kappa < 73.5$ (90% CL) from the experimental limits on $e^+e^- \rightarrow \nu\bar{\nu}\gamma$ assuming three neutrino generations and $-19.5 < \Delta\kappa < 56$ for four generations. Note their $\Delta\kappa$ has the opposite sign as our definition. 8 VANDERBIJ 1987 uses existing limits on the photon structure to obtain $\vert\Delta\kappa\vert < 33\,(m_W/\Lambda)$. 
In addition VANDERBIJ 1987 discusses problems with using the $\rho$ parameter of the Standard Model to determine $\Delta\kappa$. 9 GRAU 1985 uses the muon anomaly to derive a coupled limit on the anomalous magnetic dipole and electric quadrupole ($\lambda$) moments, $1.05 > \Delta\kappa\,\ln(\Lambda/m_W) + \lambda/2 > -2.77$. In the Standard Model $\lambda = 0$. 10 SUZUKI 1985 uses partial-wave unitarity at high energies to obtain $\vert\Delta\kappa\vert \lesssim 190\,(m_W/\Lambda)^2$. From the anomalous magnetic moment of the muon, SUZUKI 1985 obtains $\vert\Delta\kappa\vert \lesssim 2.2/\ln(\Lambda/m_W)$. Finally SUZUKI 1985 uses deviations from the $\rho$ parameter and obtains a very qualitative, order-of-magnitude limit $\vert\Delta\kappa\vert \lesssim 150\,(m_W/\Lambda)^4$ if $\vert\Delta\kappa\vert \ll 1$. 11 HERZOG 1984 consider the contribution of the $W$ boson to the muon magnetic moment including an anomalous $WW\gamma$ coupling, and obtain a limit $-1 < \Delta\kappa < 3$ for $\Lambda \gtrsim 1$ TeV. References: ABREU 2001I, PL B502 9, Measurement of Trilinear Gauge Boson Couplings $WWV$ ($V \equiv Z$, $\gamma$) in $e^+e^-$ Collisions at 189 GeV. ABE 1995G, PRL 74 1936, Measurement of $W$ Photon Couplings with CDF in $p\bar{p}$ Collisions at $\sqrt{s}$ = 1.8 TeV. ALITTI 1992C, PL B277 194, Direct Measurement of the $W-\gamma$ Coupling at the CERN $\bar{p}p$ Collider. SAMUEL 1992, PL B280 124, The Magnetic Moment of the $W$ Boson. SAMUEL 1991, PRL 67 9, Bounds on the Magnetic Moment of the $W$ Boson. GRIFOLS 1988, IJMP A3 225, Electroweak Boson Selfcouplings and the Scale of Compositeness. GROTCH 1987, PR D36 2153, New Limits from Single Photon Searches at $e^+e^-$ Colliders. VANDERBIJ 1987, PR D35 1088, New Bound on the Anomalous Magnetic Moment of the $W$ Boson. GRAU 1985, PL 154B 283, Effects of Anomalous Magnetic Dipole and Electric Quadrupole Moments of the Charged Weak Boson on $(g-2)_\mu$. SUZUKI 1985, PL 153B 289, Magnetic Moment of $W$ and Scale of Composite Weak Bosons. HERZOG 1984, PL 148B 355, Constraints on the Anomalous Magnetic Moment of the $W$ Boson from the Magnetic Moment of the Muon.
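As an illustration that is not part of the PDG listing, the short sketch below evaluates how several of the cutoff-dependent theoretical bounds quoted in the footnotes above scale with the compositeness scale $\Lambda$. The numerical prefactors are those quoted in the footnotes, and $m_W \approx 80.4$ GeV is an assumed round value.

```python
import math

m_W = 80.4  # GeV, assumed round value for the W mass

def delta_kappa_bounds(Lambda_GeV):
    """Evaluate the cutoff-dependent |Delta kappa| bounds quoted in the footnotes above."""
    x = m_W / Lambda_GeV
    return {
        "GRIFOLS 1988,   ~65 (m_W/Lambda)^2": 65 * x ** 2,
        "VANDERBIJ 1987, < 33 (m_W/Lambda)": 33 * x,
        "SUZUKI 1985,    ~190 (m_W/Lambda)^2": 190 * x ** 2,
        "SUZUKI 1985,    ~2.2 / ln(Lambda/m_W)": 2.2 / math.log(Lambda_GeV / m_W),
    }

for Lam in (1000.0, 2000.0, 5000.0):  # Lambda = 1, 2, 5 TeV
    print(f"Lambda = {Lam / 1000:.0f} TeV")
    for label, bound in delta_kappa_bounds(Lam).items():
        print(f"  {label}: |Delta kappa| <~ {bound:.3f}")
```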
CommonCrawl
What if Earth and Moon revolved around each other like Pluto and Charon? What would be different for us if Earth and Moon revolved around each other like Pluto and Charon do? The reason this doesn't happen for the Earth-Moon system is the different masses involved. Could you elaborate on your question? That is, in this hypothetical situation, is the Earth mass set equal to Pluto's? Or would you like to retain Earth's mass and just put it in a larger orbit around the system's c.o.m.? V-J 7 years ago @John As the gravitational potential of any given body decreases to zero only at infinity, a two-body system's center of mass is always at a location distinct from the center of mass of the larger body. Hence, the above does happen for practically all systems (excluding some very unlikely, very specific cases) – like Earth-Moon, but the extent to which it happens varies with the mass ratio of the bodies involved. @ V-J, yes, you are effectively repeating what I just said. The proposed question deals with a larger separation from the c.o.m. and thus either a different mass or a contrived, hypothetical situation. Gerald 7 years ago @cd1: What properties of the Pluto-Charon system is your focus on? Mutual tidal locking, a center of mass outside Pluto, tilted axis with respect to the ecliptic, larger moon, closer distance, moon of similar density, orbital period of the moon less than an Earth week? V-J They do, but because the ratio of their masses is vastly different, it looks as if they do not: the Moon appears to revolve simply around (the centre of) Earth. The ratio of Earth and Moon's masses is $\frac{M_{Earth}}{M_{Moon}} = 81.3$ whereas for Pluto and Charon the same ratio is $\frac{M_{Pluto}}{M_{Charon}} = 8.09$. Because the ratio for Pluto and Charon is relatively small, the center of the system – the barycenter, around which the two bodies orbit – lies on the line drawn between the two bodies' mass centers, outside Pluto itself. But for Earth and Moon, because Earth is proportionally much heavier, the system's barycenter does not reach outside of Earth, but instead is about 4,700 kilometers from the center of Earth (see the quote below, too): In cases where one of the two objects is considerably more massive than the other (and relatively close), the barycenter will typically be located within the more massive object. Rather than appearing to orbit a common center of mass with the smaller body, the larger will simply be seen to "wobble" slightly. This is the case for the Earth–Moon system, where the barycenter is located on average 4,671 km from the Earth's center, well within the planet's radius of 6,378 km. Source: Wikipedia - Barycenter The major effect of this co-rotational system is that Earth seems to "wobble" on its orbit, as mentioned in the quote from Wikipedia above. @JeppeStigNielsen makes a good point about differences in tidal locking in the comments below. In the Earth-Moon system, only the Moon is tidally locked (which causes us to see only one face of it, so only roughly a half of it, from Earth), whereas in Pluto-Charon both of the bodies are tidally locked. Earth is not tidally locked because of the higher mass ratio between it and the Moon, but the lower-mass-ratio Pluto-Charon system is, as the lower-mass Charon has slowly changed Pluto's rotation to match its orbital movement. So the Moon does not, strictly speaking, revolve around Earth; instead both Moon **and Earth** revolve around a common point: the barycenter of the Earth-Moon system. 
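To make the numbers above concrete, here is a minimal sketch that computes how far the barycentre lies from the larger body's centre in each system. The mean separations and radii used are assumed round values; they are not all given in the answer itself.

```python
# Barycentre offset from the primary's centre: r1 = d * m2 / (m1 + m2) = d / (q + 1),
# where q = m1 / m2 is the primary-to-secondary mass ratio.

def barycentre_offset_km(separation_km, mass_ratio):
    """Distance of the two-body barycentre from the more massive body's centre."""
    return separation_km / (mass_ratio + 1.0)

# Assumed round values for the mean separations; the mass ratios are those quoted above.
earth_moon = barycentre_offset_km(384_400, 81.3)
pluto_charon = barycentre_offset_km(19_600, 8.09)

print(f"Earth-Moon barycentre:   ~{earth_moon:,.0f} km from Earth's centre "
      f"(Earth radius ~6,378 km, so inside Earth)")
print(f"Pluto-Charon barycentre: ~{pluto_charon:,.0f} km from Pluto's centre "
      f"(Pluto radius ~1,188 km, so outside Pluto)")
```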
This applies to all celestial bodies, to a more or less negligible effect: for example, the planets in our Solar System do not implicitly revolve around Sun, but instead their respective systems' barycenters. This precision is not needed for most everyday cases however, so approximations like "Moon revolves around Earth" and "Planets revolve around Sun" are just fine. Jeppe Stig Nielsen 7 years ago There is another difference. In the Pluto-Charon system, the major body (Pluto) is tidally locked, while in our system, Earth is not tidally locked. Because of this we can enjoy the view of the moon from every longitude (from 180 degrees west to 180 degrees east) on Earth. **If** Earth had been tidally locked, Earth would have had a near side and a far side. As a consequence, Greenwich in England would not be an "arbitrary" origin of the longitude. Instead 0 degrees would be the defined as the meridian just under the Moon (on average). **The most dramatic difference would be almost no tides.** @JeppeStigNielsen That's actually a really interesting point; never thought what Moon's path on the sky would look like, if seen at all, if Earth was tidally locked! As far as I'm concerned, your answer is merely a re-statement of the question -- although on a second read, the question is somewhat ambiguous. I think the question is more in regards to the changed magnitude of the revolution around barycenter, rather than revolution AT ALL. And the consequences asked for would be in terms of tides, phases of the moon, and long-term orbital stability. Todd Wilcox 7 years ago @John It seems to me that the question itself is actually based on a false assumption, namely that the way Pluto and Charon interact with respect to each other is fundamentally different from how the Earth and the Moon interact. Both systems revolve around their respective barycenters. The only important difference, as noted in Jeppe's comment, is that Pluto is tidally locked and the Earth is not. The question does not explicitly ask about tidal locking, but perhaps that is what the asker is really curious about. It's not clear. @V-J As the Moon is tidally locked to the Earth, and we've been to the Moon, we know first-hand how tidal locking can affect the appearance of the neighbor body in the sky, and extrapolate. If Earth were tidal locked to the Moon, we can expect the Moon would stay roughly in the same place with some motion in the sky (in those places where it can be seen at all) and it would still have phases that I think would be monthly. The exact "wiggle" pattern that the Moon would have is not obvious, though. See: http://starchild.gsfc.nasa.gov/docs/StarChild/questions/question58.html called2voyage 7 years ago This answer neglects the fact that the Pluto-Charon barycenter is outside of Pluto, while the Earth-Moon barycenter is inside Earth. Bobson 7 years ago It might be worth mentioning that the Sun-Jupiter barycenter is *not* located within the Sun's radius. @called2voyage I wouldn't say it neglects it, but it is true I did not mention it, because I thought the difference would be implied by the question's animation and the pictured I linked, which it of course could have failed to do, which I neglected. @ToddWilcox Yeah I'm guessing too that Moon, seen from tidally locked Earth, would have phases, because the only thing affecting the phases would be Sun outside the binary system. The movement of Moon in that case would be interesting to see though. 
Wayfaring Stranger 7 years ago To have earth and moon tidally locked, you'd have to have moon in geosynch orbit, about 35,786 kilometres. That's a fair bit closer than its current 405,696 km. It'd cover about 5.7° of sky. That distance is still outside the fluid Roche limit for the moon 18,381 km https://en.wikipedia.org/wiki/Roche_limit so the setup is *doable*. Thomas Pornin On barycentres The Pluto-Charon couple is not qualitatively different to the Earth-Moon couple with regards to orbits. As was pointed out in other answers, in both cases, the two bodies revolve around each other, i.e. they are best described as orbiting around their barycentre. In more physical terms, the referential centred on the barycentre of the Earth-Moon system is "more Galilean" than the referential centred on the Earth geometrical centre: if you do high-precision measures of physical systems on Earth, you will see some "jitter" that shows that the Earth is not really Galilean. Most of which is due to the Earth rotation (the jitter being most famously demonstrated by Foucault's pendulum) but even if you account for the rotation, you still get some residual perturbations coming from the revolution of the Earth around the Earth-Moon barycentre. (And if you fix these ones, you still get some due to the Earth revolution around the Sun -- really, the revolution of the Earth around the Solar system barycentre -- and then some because of the rotation of the Galaxy, and so on, but they are increasingly difficult to detect.) About tides When two round bodies orbit each other, they have a tendency to go into "tidal locking": their individual rotation speed will synchronize with the revolution, so that, in fine, the two bodies always keep the same hemisphere toward each other. Pluto and Charon are at that step. The Moon is also tidally locked with the Earth: we always see the same hemisphere (in fact we see a tiny bit more than half of the Moon, because it wobbles a bit). The Earth is not tidally locked... yet. But it will ultimately be. Indeed, the Earth and the Moon exert tidal forces on each other. This is most easily explained by considering orbital speed: when a very small satellite orbits around a big planet, it must go at a speed that depends on the satellite's altitude: the further the satellite is, the slowest it goes (e.g. low-orbit satellites zoom at around 8 km/s, while the Moon goes at a leisurely 1 km/s or so). But the Moon is quite bulky: its radius is a bit more than 1700 km. This means that if the Moon's centre goes at the right speed for its orbit, the rocks on the far side of the Moon are 1700 km further from the Earth, and thus are a wee bit too fast for that orbit, so they want to leave. Similarly, the rocks on the near side of the moon are 1700 km closer to the Earth, and are thus going too slowly: they tend to "fall" toward the Earth. The phenomenon is symmetrical: the Earth also experiences tidal forces from the Moon. In fact, both Earth and Moon experience tidal forces from the gravitational Earth-Moon couple. This engenders tides, where water moves around in response to the forces; rocks do not because they're rocks, i.e. not very fluid under normal conditions -- they would like to move, but are too rigid to do so. 
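As a quick numerical check of the orbital speeds quoted in this answer, here is a minimal sketch that is not part of the original post; the gravitational constant, Earth mass, and altitudes are standard assumed values.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # Earth mass, kg
R_EARTH = 6.371e6    # mean Earth radius, m

def circular_speed_m_s(orbit_radius_m):
    """Circular orbital speed around Earth: v = sqrt(G * M / r)."""
    return math.sqrt(G * M_EARTH / orbit_radius_m)

leo = circular_speed_m_s(R_EARTH + 400e3)   # low orbit, ~400 km altitude
moon = circular_speed_m_s(384_400e3)        # mean Earth-Moon distance

print(f"Low Earth orbit: ~{leo / 1000:.1f} km/s")   # roughly 8 km/s
print(f"Moon's orbit:    ~{moon / 1000:.2f} km/s")  # roughly 1 km/s
```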
The tidal forces are somehow opposed to the Earth rotating faster than the 27 days for an Earth-Moon revolution, and the rotational energy is slowly dissipated: some of it happens to be injected in the Earth-Moon gravitational coupling, which drives them apart of each other (it has been measured thanks to reflectors from space probes and Apollo missions: the Moon is fleeing us at a rate of about 38 mm per year); the rest is lost in the friction from moving water, thus ultimately converted to heat radiated into space. Bottom-line: the Earth rotation is slowing down. For instance, a day would have lasted about 22 hours at the time of the dinosaurs (the big ones, not the birds). The slowing-down is known around time-keeping circles as the ΔT. However... Even when the Earth becomes tidally locked with the Moon, there will still be tides (at least, if there is still liquid water at that time, which is not a given, since the Sun energy output is predicted to go down sharply 5 billion years from now). Indeed, the Sun-Earth couple also produces tidal forces. The Earth-Moon tidal forces are about twice stronger, so the Moon-induced tides are bigger, but in a tidally locked situation, we should still witness tides induced by the Sun -- but on a smaller scale. (Without the Moon, the Earth would ultimately tidally lock with the Sun and a 365-days rotation -- using today's length for a day, of course. I am not entirely clear about what should become of the Sun-Earth-Moon system in the very long term; but it appears that this is still widely open research, notably because there are other planets in the mix, leading to a very complex situation.) userLTK TLDr answer: Both answers are very good. There's a few more details to consider if we want to look at all the what-ifs in this amusing but wacky scenario. Already mentioned, the ratio of size is 8 to 1, not 81 to 1, so for starters, the Charon like Moon would be much larger in the sky. The Moon, with roughly 10 times the mass, figuring slightly greater density due to some minor compacting, would still be 2.1 times as big across, assuming the same distance, that would make it 4 times as bright in the night sky. A full moon would be quite impressive. Perhaps (just barely) bright enough to read by if it was a large text book. (some people claim to be able to read by Moonlight now, most people can't, but 4 times brighter, a full moon might just be bright enough. Solar eclipses would become more frequent and last about twice as long and you might think the Earth would be slightly colder due to the Moon blocking some sunlight, but the Moon, believe it or not, radiates more heat on the earth than it blocks because the lit up moon that faces us is nearly 400 degrees F in peak daytime and it's not hard to see that a surface that temperature radiates some heat. Not a lot, but some. A question on that here, so a little over 4 times the energy (ignoring solar eclipse losses), about 1/2,500th the heat from the sun, might work out to 1/10th of a degree at night during a full moon. Not a lot, certainly, but measurable to anyone with sensitive enough instruments. The brightness and size of the Moon would obviously be more noticeable than about 1/10th of 1 degree in temperature (C not F). A moon of that mass would slow down the Earth's rotation significantly faster, already mentioned, but this one, we have to give some thought to. When the Moon formed it was much closer to the Earth, about 3-5 times the radius of the Earth away. Source. 
That's outside the fluid Roche limit, and the formation of the Moon left the Earth rotating very rapidly, so the effects (rapidly rotating Earth, very powerful lunar tides) would still be there, but the lunar tides would be 10 times greater, so we're looking at earthquake-level tides when the moon, with 10 times the mass, was 3-5 Earth radii away. The Moon, because during formation it wouldn't have much angular momentum, would quickly settle into a tidally locked rotation around the Earth. The tidal effect on the Earth, being 10 times greater, would cause (roughly) 10 times the tidal bulge on the Earth, which would push the Moon away from the Earth about 10 times faster; but, at the same time, the tidal drag slowing down the Earth would be 10 times as great too (I assume that corresponds to about 10 times as fast). So, basically, the Moon and Earth would follow the system they're in now, but it would proceed about 10 times as fast with a Moon of 10 times the mass. The estimate (here) is that it will take about 50 billion years for the Moon to slow down the Earth enough to enter into tidal locking, so divide that by 10 and we would be very close to a tidal lock today. The Earth would rotate very slowly. The moon would also (likely) be a bit further from the Earth and probably have a more wobbly orbit due to solar perturbations, and perhaps have escaped completely. This is a complicated bit of mathematics that I'd prefer not to attempt (at the Moon's current mass, the Sun will go red giant long before either the Moon escapes or the Earth gets tidally locked, but with a Moon 10 times as massive, that's probably no longer the case), and either the Moon is gone or the Moon is more distant, has a more elongated orbit, and the Earth is at or close to tidally locked. If the Moon escapes, we'd have a near-Earth object of enormous size that could later crash into us or swing past the Earth and move our orbit - either outcome, and simply the effect of having no moon, would be enormous. Discussion on the Moon/Earth escape vs tidal locking here. If we assume full tidal locking, 29.5 days (synodic, not sidereal) and a moon a bit further out, we might be looking at 30-something to maybe 40 days for 1 Earth rotation; that's 20 days of sunshine, 20 days of night. That would play absolute havoc on the weather systems and seasons. Day to night would have a bigger effect than summer vs winter, and the summer days would be scorching, though some regions might do just fine because of rainfall. Evolution could probably adapt to that, but it doesn't sound fun to me. The further distance might make the Moon just 3 times as bright in the night sky instead of 4. Still pretty bright, though. You'd still get 6 months of sun and 6 months of night at the poles, but for most of the Earth, this would be a radical change, having days and nights that long. Other possible effects: obliquity (no moon, perhaps greater variation, a larger ice age driver), see here. Also, if the Earth still had the Moon but the Moon was in a more elongated orbit, we'd still have tides as the Moon moved in and out to apogee and perigee (see picture). The bottom line: while we might not give it much thought, a different-sized moon would actually change quite a lot. 
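The rough scalings used in this answer can be sanity-checked with a short sketch. The inputs are the answer's own assumptions (roughly 10 times the Moon's mass at the same density and distance, and a tidal-locking time taken to scale inversely with the Moon's mass), not an independent calculation.

```python
# Rough scaling checks for a Moon with ~10x its actual mass, same density and distance.

mass_factor = 10.0

radius_factor = mass_factor ** (1.0 / 3.0)   # radius ~ mass^(1/3) at fixed density, ~2.15x
brightness_factor = radius_factor ** 2       # reflecting area on the sky, ~4.6x brighter
locking_time_gyr = 50.0 / mass_factor        # ~50 Gyr estimate divided by 10 (answer's assumption)

print(f"Radius / angular size: ~{radius_factor:.2f}x the real Moon")
print(f"Full-moon brightness:  ~{brightness_factor:.1f}x the real Moon")
print(f"Rough tidal-locking timescale: ~{locking_time_gyr:.0f} billion years")
```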
A smaller moon would move away from the Earth more slowly, and Earth might be able to have a 2nd moon, perhaps by capture. If the moon was smaller, we might also have more aggressive ice ages and climate changes due to a greater obliquity variation; and, assuming the giant impact still happens in a similar way but with a smaller amount of debris (which doesn't make sense, but let's pretend), then a smaller moon wouldn't have slowed the Earth's rotation as much and the Earth might be rotating quite a bit faster, with 10- or 15-hour days instead of 24. The effects would be pretty significant. orbit the-moon earth pluto charon
CommonCrawl
The Formula for Calculating Internal Rate of Return in Excel By Daniel Jassy The internal rate of return (IRR) is a core component of capital budgeting and corporate finance. Businesses use it to determine which discount rate makes the present value of future after-tax cash flows equal to the initial cost of the capital investment. Or, to put it more simply: What discount rate would cause the net present value (NPV) of a project to be $0? If an investment will require capital that could be used elsewhere, the IRR is the lowest level of return from the project that is acceptable in order to justify the investment. If a project is expected to have an IRR greater than the rate used to discount the cash flows, then the project adds value to the business. If the IRR is less than the discount rate, it destroys value. The decision process to accept or reject a project is known as the IRR rule. The internal rate of return allows investments to be analyzed for profitability by calculating the expected growth rate of an investment's returns, and is expressed as a percentage. The internal rate of return is calculated such that the net present value of an investment yields zero, and therefore allows the comparison of the performance of unique investments over varying periods of time. The internal rate of return's shortcomings derive from the assumption that all future reinvestments will take place at the same rate as the initial rate. The modified internal rate of return allows this comparison when different rates are used for the initial investment and for the cost of reinvested capital, which often differ. When investments have cash flows that move up and down at various times in the year, the above models return inaccurate numbers, and the XIRR function within Excel allows the internal rate of return to account for the date ranges selected and return a more accurate result. One advantage of using IRR, which is expressed as a percentage, is that it normalizes returns: everyone understands what a 25% rate means, compared to a hypothetical dollar equivalent (the way the NPV is expressed). Unfortunately, there are also several critical disadvantages with using the IRR to value projects. You should always pick the project with the highest NPV, not necessarily the highest IRR, because financial performance is measured in dollars. If faced with two projects with similar risks, Project A with 25% IRR and Project B with 50% IRR, but Project A has a higher NPV because it is long-term, you would pick Project A. The second big issue with IRR analysis is that it assumes you can continue to reinvest any incremental cash flow at the same IRR, which may not be possible. A more conservative approach is the Modified IRR (MIRR), which assumes reinvestment of future cash flows at a lower discount rate. The IRR Formula The IRR cannot be derived easily. The only way to calculate it by hand is through trial and error, because you are trying to arrive at the rate that makes the NPV equal to zero. 
For this reason, we'll start with calculating NPV:

$$NPV = \sum_{t = 0}^{n} \frac{CF_t}{(1 + r)^t}$$

where:
CF_t = net after-tax cash inflow/outflow during a single period t
r = internal rate of return that could be earned in alternative investments
t = time period in which the cash flow is received
n = number of individual cash flows

Or this calculation could be broken out by individual cash flows. The formula for a project that has an initial capital outlay and three cash flows follows:

$$NPV = \frac{CF_0}{(1 + r)^0} + \frac{CF_1}{(1 + r)^1} + \frac{CF_2}{(1 + r)^2} + \frac{CF_3}{(1 + r)^3}$$

If you are unfamiliar with this sort of calculation, here is an easier way to remember the concept of NPV:

NPV = (Today's value of the expected future cash flows) - (Today's value of invested cash)

Broken down, each period's after-tax cash flow at time t is discounted by some rate, r. The sum of all these discounted cash flows is then offset by the initial investment, which equals the current NPV. To find the IRR, you would need to "reverse engineer" what r is required so that the NPV equals zero. Financial calculators and software like Microsoft Excel contain specific functions for calculating IRR. To determine the IRR of a given project, you first need to estimate the initial outlay (the cost of capital investment) and then all the subsequent future cash flows. In almost every case, arriving at this input data is more complicated than the actual calculation performed.

Calculating IRR in Excel
There are two ways to calculate IRR in Excel:
Using one of the three built-in IRR formulas
Breaking out the component cash flows and calculating each step individually, then using those calculations as inputs to an IRR formula—as we detailed above, since the IRR is a derivation, there is no easy way to break it out by hand
The second method is preferable because financial modeling works best when it is transparent, detailed, and easy to audit. The trouble with piling all the calculations into a formula is that you can't easily see what numbers go where, or what numbers are user inputs or hard-coded.

Here is a simple example of an IRR analysis with cash flows that are known and consistent (one year apart). Assume a company is assessing the profitability of Project X. Project X requires $250,000 in funding and is expected to generate $100,000 in after-tax cash flows the first year and grow by $50,000 for each of the next four years. You can break out a schedule as follows. The initial investment is always negative because it represents an outflow. You are spending something now and anticipating a return later. Each subsequent cash flow could be positive or negative—it depends on the estimates of what the project delivers in the future. In this case, the IRR is 56.77%.
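That 56.77% can be reproduced without a spreadsheet. Below is a minimal sketch, in plain Python rather than Excel, of the trial-and-error logic described above: evaluate the NPV at a candidate rate and bisect until the NPV reaches zero. The `npv` and `irr` names are illustrative helpers, not functions of any particular library, and the cash flows are the hypothetical Project X schedule from the example.

```python
def npv(rate, cash_flows):
    """Sum of each period's cash flow discounted by (1 + rate)**t, t = 0, 1, ..., n."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=10.0, tol=1e-9):
    """Bisection search for the rate that drives NPV to zero.
    Assumes a single sign change: NPV > 0 at `lo` and NPV < 0 at `hi`."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0:
            lo = mid      # NPV still positive, so the break-even rate is higher
        else:
            hi = mid      # NPV negative, so the break-even rate is lower
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Project X from the example: -$250,000 outlay, then $100,000 growing by $50,000 a year.
project_x = [-250_000, 100_000, 150_000, 200_000, 250_000, 300_000]
print(f"IRR ≈ {irr(project_x):.2%}")                                # ≈ 56.77%
print(f"NPV at a 10% discount rate ≈ {npv(0.10, project_x):,.0f}")  # positive, so value is added
```

The printed output is roughly 56.77% for the IRR and a positive NPV at a 10% rate, consistent with the discussion that follows.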
Given the assumption of a weighted average cost of capital (WACC) of 10%, the project adds value. Keep in mind that the IRR is not the actual dollar value of the project, which is why we broke out the NPV calculation separately. Also, recall that the IRR assumes we can constantly reinvest and receive a return of 56.77%, which is unlikely. For this reason, we assumed incremental returns at the risk-free rate of 2%, giving us a MIRR of 33%.

Why IRR is Important
The IRR helps managers determine which potential projects add value and are worth undertaking. The advantage of expressing project values as a rate is the clear hurdle it provides. As long as the financing cost is less than the rate of potential return, the project adds value. The disadvantage of this tool is that the IRR is only as accurate as the assumptions that drive it, and a higher rate does not necessarily mean the highest-value project in dollar terms. Multiple projects can have the same IRR but dramatically different returns due to the timing and size of cash flows, the amount of leverage used, or differences in return assumptions. IRR analysis also assumes a constant reinvestment rate, which may be higher than a conservative reinvestment rate.
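To make the MIRR of roughly 33% quoted above concrete, here is a minimal sketch of the standard MIRR arithmetic: positive cash flows are compounded forward at a reinvestment rate, the outlay stays at present value, and the MIRR is the single rate linking the two over n periods. The 2% reinvestment rate and the Project X cash flows come from the example; the 2% financing rate is an added assumption, and it has no effect here because only the period-0 outlay is negative.

```python
def mirr(cash_flows, finance_rate, reinvest_rate):
    """Future value of positive flows (reinvested) over present value of negative flows
    (financed), annualized over n periods."""
    n = len(cash_flows) - 1
    fv_inflows = sum(cf * (1 + reinvest_rate) ** (n - t)
                     for t, cf in enumerate(cash_flows) if cf > 0)
    pv_outflows = -sum(cf / (1 + finance_rate) ** t
                       for t, cf in enumerate(cash_flows) if cf < 0)
    return (fv_inflows / pv_outflows) ** (1.0 / n) - 1.0

project_x = [-250_000, 100_000, 150_000, 200_000, 250_000, 300_000]
# 2% reinvestment as stated above; the 2% financing rate is an assumption (only the
# period-0 outlay is negative here, so it does not change the result).
print(f"MIRR ≈ {mirr(project_x, finance_rate=0.02, reinvest_rate=0.02):.1%}")  # ≈ 33%
```

With these inputs the function returns about 32.7%, in line with the 33% cited above.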
Pathways to recovery from COVID-19: characterizing input–output linkages of a targeted sector
Tugrul Temel (ORCID: orcid.org/0000-0002-3989-6284) and Paul Phumpiu
Journal of Economic Structures, volume 10, Article number 29 (2021)

At present, the world is facing an unprecedented employment challenge due to the COVID-19 pandemic. The International Labor Organization of the United Nations expects the largest amount of youth unemployment at the global level to take place in the manufacturing, real estate, wholesale, and accommodation sectors. This paper has two objectives. The first is to introduce a graph-theoretic method for identifying upstream and downstream pathways of a targeted sector and to characterize them in ways that help respond to and recover from the adverse effects of the COVID-19 pandemic. The second is to apply this method in the context of China, Japan, India, Russia, Germany, Turkey, UK and USA, which together account for about 60 percent of the world GDP. Based on the analysis of the most recent input–output data from 2015, the manufacturing sector is found to be the top-priority sector to be targeted in all eight countries, followed by the real estate and wholesale sectors, and these sectors should be coupled with isolated communities of sectors to capture external employment and growth effects. Characterizing the critical pre-COVID-19 linkages of a targeted sector should inform policy makers regarding the design of employment and growth strategies to recover from the pandemic.

At present, the world is facing an unprecedented employment challenge due to the COVID-19 pandemic. ILO (2020) expects the largest amount of youth unemployment at the global level to take place in the manufacturing, real estate, wholesale, and accommodation sectors (see Table 1). This calls for the generation of information about the underlying properties of sectoral linkages in production networks to respond to and recover from the pandemic. This paper addresses this information need by developing a novel graph-theoretic method, specifically designed to identify and characterize upstream and downstream pathways of a targeted sector in ways that help recovery from a shock to the production network. The method is applied to characterize the production networks of China, Japan, India, Russia, Germany, Turkey, UK and USA, which together account for about 60 percent of the world GDP, and to generate information for informed policy making to address the adverse effects of the pandemic.

Table 1 ILO model-based global estimates of youth employment in hard-hit sectors

The objective is not to estimate the output effects of the pandemic-related unemployment derived from the multiplier analysis but to elaborate on the ILO's unemployment estimates and provide critical information that can be used in policy interventions aimed at recovering from the negative output effects of the pandemic.
For example, ILO forecasts a substantial amount of unemployment in the manufacturing sector due to the COVID-19-related loss of working hours, while our paper generates information from past input–output data that can be employed in the design of policy interventions aimed at minimizing or avoiding such unemployment, and thereby offers ways to recover the projected output loss. One such piece of information is the identification of significant upstream and downstream pathways of the targeted manufacturing sector; a second piece of information is the specific grouping of sectors supporting the manufacturing sector (i.e., the community structure of the manufacturing sector); a third piece of information is the set of critical binary links connecting several communities (i.e., betweenness) when the manufacturing sector is targeted. Knowledge of these network relations centered around the manufacturing sector can be used to formulate employment and growth strategies.

The empirical analysis uses IO data from 2015, which is the most recent available data in the OECD database. Therefore, our paper assumes that the properties of a country's production system in 2015 remained unchanged during the period 2015–2020. The employment strategies elaborated in what follows should be interpreted relative to the 2015 IO properties of the country examined. The findings show that manufacturing (MA2) is the top-priority sector to be targeted in all eight countries, followed by the real estate (EST) and wholesale (WHS) sectors, and that these sectors should be coupled with isolated communities of sectors to capture external employment effects from the interacting communities (or clusters). Naturally, sector coupling would vary across countries, depending on the linkages between the communities identified.

This paper is organized as follows. Following the Introduction, Section 2 presents a brief review of the literature to position and motivate the current paper, pointing out where it contributes to the literature by developing a new method for characterizing a targeted sector with its upstream and downstream pathways. Section 3 describes the new method and the three network concepts used in the analysis. Section 4 applies the method using the 2015 IO data for eight countries. Drawing on the results from Section 4, Section 5 discusses how to integrate the new information obtained from partial sectoral analysis into wider employment policy interventions. Section 6 concludes the paper.

From a single sector to a network of sectors
To date, single-sector analysis has received more attention than networked-sector analysis, undermining the importance of inter-sector connectivity in a production network. A key sector, for example, is usually identified based on the size of its output multiplier or of its backward and forward multipliers. The premise is that the larger its multiplier, the larger its impact would be. However, the assessment of the impact of a sector would make more sense if its position and role within the network it belongs to were considered. That is, the size of its multiplier as well as its connectivity to the rest of the network provide complementary information useful for the impact assessment. The method we develop in this paper has three specific objectives. The first is to identify key upstream and downstream pathways centered around a targeted sector. The second is to derive communities (or clusters) of sectors of the given targeted sector and their within-community interaction patterns.
The third is to identify the critical between-community linkages transmitting external influence from one community to another. These objectives shift the focus from individual sectors to pathways of key upstream and downstream sectors and their community structure. In other words, we do not concentrate on a single key sector or few sectors but rather characterize the dominant production relations arising when an individual sector is targeted. The objectives stated above highlight this point by emphasizing the application of graph-theoretic concepts, such as connectivity, community structure, connected components, and source–sink pathways. An important point is that the analysis is carried out for a targeted sector, which allows for a cross-country comparison of dominant patterns of linkages when the same sector is targeted across different countries. Our method enriches the multiplier analysis often carried out in the literature by using graph-theoretic concepts and methods. In that sense, key sector identification based on output multipliers, for example, is implemented by using graph-theoretic methods based on vertex centrality measures. The importance of a sector is assessed not only by the size of its multipliers but also by its positional superiority within a narrowly defined network. In doing so, the method exploits the structure of connectivity of a sector or a community of sectors. In the context of input-output analysis, the existing literature calls a sector to be key if it has the largest output multiplier in the Leontief inverse matrix or if it concurrently has the largest backward and forward multipliers. Our method, however, would define a sector as key in the context of the sectoral connectivity implied by the targeting algorithm developed. This would imply that sector i that may be key in the case of targeting sector j may not be key when sector k is targeted. Take, for example, an IO matrix with five sectors {A, B, C, D, E} with A being the key sector in terms of output multiplier and D being the non-key sector. If the objective is to create the largest impact on sector C from A and the leading pathway is \(\mathbf {A}\rightarrow \mathbf {B}\rightarrow \mathbf {D}\rightarrow \mathbf {C}\), the most critical sector becomes sector D since the absence of D, no matter how high its multiplier is, reduces the entire pathway to nothing. That is, the weakest linkage in a functionally connected pathway represents the highest degree of success in achieving the final objective, which is to increase the impact on sector C. With this example, the focus shifts from identifying the key sector(s) to identifying the key pathway(s). In doing so, the meaning of the term "key" also changes from a single sector to communities of the sectors along the key pathways in which all the sectors are functionally (or algorithmically) linked. A "star" network illustrated in Fig. 5 with the weakest sector MA2 being at the center and other sectors becoming the satellites of MA2 is a good example of sector MA2 becoming the most critical sector in the network, although it may very well be a non-key sector in terms of output multiplier. The removal of MA2 from the network leads to the collapse of the star network. 
In the literature of complex networks, the concept of cascading behaviour is used to refer to influence subgraphs in which state of certain vertices influences the behaviour of others.Footnote 1 Formally, an "infection" event can spread contagion through infected players which constitute a propagation tree, known as a cascade. In fact, our method is very similar to the cascading concept used to identify certain patterns of linkages in a production network in which a given sector is targeted. The cascading in our algorithm starts with a targeted sector. In the first step, the immediate input providers of the targeted sector are identified. In the second step, the input providers of the immediate input providers of the targeted sector are identified and so on. This results in a subgraph incorporating the upstream linkages of the targeted sector. Likewise, the algorithm also derives the downstream cascading of the output supply of a targeted sector. Once identified, the upstream and downstream cascades are combined. The cascade structure accommodates nonlinearity of the relations, while stressing the functional connectivity of the sectors (Kleinberg 2013; Taglioni and Winkler 2016). In the development of the method, some ideas from key sector identification (Schultz 1977), structural path analysis (Defourny and Thorbecke 1984), fundamental economic structures (Hewings et al. 1989; Jensen et al. 1991), and interconnectedness in regional input–output matrices (Lantner and Carluer 2004) have been exploited to characterize upstream and downstream production pathways of a targeted sector. The method combines backward and forward linkages to create a network in which both demand and supply information flows between sectors. The Leontief inverse of the IO matrix measures the level of backward linkages measured as the proportion of total output that represents purchases from sectors in an economy. Hirschman (1977) defines forward linkage of a particular sector as the proportion of total output of this sector that does not go to final demand but to other sectors. Following Dietzenbacher (1997), the Ghosh matrix represents forward multipliers as a measure of change in output values in response to changes in the prices of primary inputs. Loviscek (1982) suggests the use of both backward (input demand) and forward (output supply) linkages in order to obtain an accurate picture of interindustry structure as such linkages incorporate demand-side and supply side information. In case of targeting A, for example, our method identifies the pathways and their communities incorporating input providers to sector A (i.e., upstream to sector A) and consumers of outputs of sector A (i.e., downstream to sector A). In the sense of Loviscek (1982), combining demand and supply-side information, our method characterizes a unified production network of sector A in which A's input demand and output supply can be examined simultaneously by considering its demand and supply constraints. Jensen et al. (1991)'s concept of fundamental economic structure (FES) relates to our work. In a spatial context, IO cells containing flows that are consistently present at predictable levels over a range of economies are called "fundamental" as they represent economic activities inevitably required in all economies. Other IO cells with data for more region-specific sectors (for example, mining) define the nonfundamental economic structure (NFES). 
The identifiable patterns/linkages of predictable cells constitute a FES, which can be estimated using regression techniques. Our method, however, offers a graph-theoretic approach to revealing key FESs by targeting a given sector over a time-series of IO matrices. For example, one may target sector A by using a time-series of IO matrices and discover the FESs as the pathways or community structures that remain unchanged over a relative long period of time. For purposes of illustration, the current paper applied the method to a time-series of IO matrices (11 IO matrices during 2005–2015) of China by targeting the same sector MA2 at the same threshold level (0.15 < x). The findings confirm that there is a fundamental network that remains unchanged over the period 2005–2015 in China.Footnote 2 Upstream and downstream linkages of a targeted sector An Algorithm is developed that aims to identify upstream and downstream linkages of a targeted sector, and its implementation is illustrated within the input-output (IO) framework. For purposes of simplicity, an example IO matrix given in Fig. 1(1) is used that allows for the demonstration of the step-by-step implementation of the algorithm. The example IO matrix consists of five components. The first component is an intermediate consumption sub-matrix (X) in Fig. 1(2) with five sectors: {A, B, C, D, E}. The second is a column-vector of final consumption (Y); the third, a column-vector of total demand (\(\mathbf {X}_{D}\)); the fourth, a row-vector of value-added (VA); and the fifth, a row-vector of total supply (\(\mathbf {X}_{S}\)), all of which are illustrated in Fig. 1(1). Sub-matrix X in Fig. 1(2) and total output supply \(\mathbf {X}_{S}\) is used to calculate the backward technical coefficients matrix, \(A_{b}=[\mathbf {X_{ij}}/\mathbf {X_{S}^{j}}]\), given in Fig. 1(3). The Leontief inverse matrix, \(\mathbf {M}_{b}[m]\equiv (I-A_{b})^{-1}\), in Fig. 1(4) defines the so-called backward multiplier matrix with m denoting individual multipliers, where I stands for an identity matrix with dimension (5, 5). For notational simplicity, we will use \(\mathbf {M}_{b}\). In order to focus on the analysis of inter-sectoral connectivity, the diagonal cells in \(\mathbf {M}_{b}[m]\) are replaced with zeros; that is, \(\mathbf {M}_{b}-diag[\mathbf {M}{}_{b}]\) in Fig. 1(5).Footnote 3 The matrix, \({\overline{\mathbf {M}}}_{b}\), in Fig. 1(6) is obtained through column-wise standardization of \(\mathbf {M}_{b}-diag[\mathbf {M}{}_{b}]\). In doing so, individual multipliers of a user sector are adjusted to reflect the relative importance of a supplier in the output multiplier of the user sector. The standardized matrix \({\overline{\mathbf {M}}}_{b}[x]\) is the only input used in targeting a sector by setting an arbitrary threshold significance level (\({\overline{\mathbf {M}}}_{b}(0.25\leqslant x)\)) with x being matrix elements greater than or equal to 0.25. The matrix \({\overline{\mathbf {M}}}_{b}(0.25\leqslant x)\) in Fig. 1(7) is a reduced form of \({\overline{\mathbf {M}}}_{b}[x]\), which includes only the cells greater than or equal to 0.25. Suppose that a user sector A is targeted to identify the entire chain of its direct and indirect suppliers; that is, to identify the entire chain of upstream sectors of user A. Identifying upstream linkages of a targeted sector A Using backward multipliers in \(\mathbf {M}_{b}\) represents only half through the targeting exercise, because a backward linkage defines the input linkage of the targeted sector. 
To be complete, other half should be based on forward multipliers in \(\mathbf {M}_{f}[m]\equiv (I-A_{f})^{-1}\) (the so-called Ghosh matrix) given in Fig. 2(4) as a forward linkage defines the output linkage of the targeted sector. For notational simplicity, we will use \(\mathbf {M}_{f}\). The only difference between the derivation of backward multipliers and forward multipliers is that the latter uses the forward coefficients matrix, \(A_{f}=[\mathbf {X_{ji}}/\mathbf {X_{D}^{j}}]\), in Fig. 2(3) to calculate the row-wise standardized matrix, \({\overline{\mathbf {M}}}_{f}[x]\), in Fig. 2(6). The matrix \({\overline{\mathbf {M}}}_{f}(0.25\leqslant x)\) in Fig. 2(7) is a reduced form of \({\overline{\mathbf {M}}}_{f}\), which includes only the cells greater than or equal to 0.25. Suppose that a supplier sector A is targeted to identify the entire chain of its direct and indirect users; that is, to identify the entire chain of downstream sectors of supplier A.Footnote 4 Identifying downstream linkages of a targeted sector A Having derived the backward and forward reduced forms, \({\overline{\mathbf {M}}}_{b}(0.25\leqslant x)\) in Fig. 1(7) and \({\overline{\mathbf {M}}}_{f}(0.25\leqslant x)\) in Fig. 2(7), the next step is to use them to identify the upstream and downstream pathways of a targeted sector, for example, A, and map these pathways as a single network with a view to examining the connectivity of the upstream and downstream sectors of the targeted sector A. Replicating the targeting exercise for the rest of the sectors in the IO matrix would generate five networks, one for each sector. In what follows, we explain the implementation of the algorithm developed in three steps.Footnote 5 Step 1: (using \({\overline{\mathbf {M}}}_{b}(0.25\leqslant x)\): At an arbitrarily set significance level, 0.25, from input side, we target user sector A associated with the 1st column of \({\overline{\mathbf {M}}}_{b}(0.25\leqslant x)\). This means that those numbers which are equal to or greater than 0.25 in the 1st column are considered as significant enough from the user perspective, in which case there are two significant linkages. One is from B to A with a coefficient of 0.27 (denoted as \(B\rightarrow A\)), and another is from D to A with a coefficient of 0.41 (denoted by \(D\rightarrow A\)). Then, moving to the 2nd column associated with user sector B, we observe that A also provides input to B (denoted by \(A\rightarrow B\)) with a strength level of 0.34, and that D provides input to B (denoted by \(D\rightarrow B\)) with a strength level of 0.28. We then move on to identify the significant suppliers of user sector D associated with the 4th column. Suppliers B and C provide input to user D through the two linkages denoted by \(B\rightarrow D\) and \(C\rightarrow D\) with the strength levels of 0.25 and 0.46, respectively. Finally, we identify suppliers of user sector C by moving to the 3rd column, in which case suppliers B and D are observed as significant with the strength levels of 0.36 for the linkage \(B\rightarrow C\) and 0.33 for the linkage \(D\rightarrow C\). This completes the search of significant direct and indirect suppliers of the targeted user sector A. Important to note is that, although the IO matrix has five sectors, the search for the suppliers of user A results in a directed network of four sectors, revealing that sector E is irrelevant from the point of input supply to the targeted sector A. 
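As a numerical companion to the pipeline of Figs. 1 and 2, the sketch below builds the standardized backward and forward multiplier matrices for a hypothetical, balanced five-sector table (the figures' actual numbers are not reproduced here), zeroes the diagonals, standardizes column-wise and row-wise respectively, and keeps only the cells at or above the 0.25 threshold. It is a sketch under those assumptions, not the paper's own code.

```python
import numpy as np

sectors = ["A", "B", "C", "D", "E"]
X = np.array([                      # hypothetical intermediate flows X[i, j]: sector i supplies sector j
    [ 5., 30., 10.,  5.,  5.],
    [25., 10., 20., 20., 10.],
    [ 5., 10.,  5., 35., 20.],
    [30., 20., 25.,  5., 10.],
    [ 5., 15., 10., 15.,  5.]])
x_total = np.full(5, 100.0)         # hypothetical total output; final demand and value added absorb the rest

def standardized_multipliers(X, x_total, backward=True):
    # Backward: A_b[i, j] = X[i, j] / x_j ; forward: A_f[i, j] = X[i, j] / x_i.
    A = X / x_total if backward else X / x_total[:, None]
    M = np.linalg.inv(np.eye(len(X)) - A)       # Leontief-type (backward) or Ghosh-type (forward) inverse
    np.fill_diagonal(M, 0.0)                    # keep inter-sectoral multipliers only
    axis = 0 if backward else 1                 # column-wise vs row-wise standardization
    return M / M.sum(axis=axis, keepdims=True)

Mb_bar = standardized_multipliers(X, x_total, backward=True)    # analogue of Fig. 1(6)
Mf_bar = standardized_multipliers(X, x_total, backward=False)   # analogue of Fig. 2(6)

threshold = 0.25
for name, M in (("backward", Mb_bar), ("forward", Mf_bar)):
    kept = [(sectors[i], sectors[j], round(M[i, j], 2))
            for i, j in zip(*np.where(M >= threshold))]
    print(f"{name} links kept at {threshold}:", kept)
```

Standardizing after zeroing the diagonal mirrors the choice described above: each retained cell expresses a supplier's share of the user's total off-diagonal multiplier (and, for the forward matrix, a user's share of the supplier's total off-diagonal multiplier).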
Combining all of the binary linkages identified in this step generates the directed network in Fig. 1(8), which consists of a set of eight binary linkages when user sector A is targeted:

$$\begin{aligned} Input=\{B\rightarrow A,\, D\rightarrow A,\, A\rightarrow B,\, D\rightarrow B,\, B\rightarrow D,\, C\rightarrow D,\, B\rightarrow C,\, D\rightarrow C\}. \end{aligned}$$

Step 2 (using \({\overline{\mathbf {M}}}_{f}(0.25\leqslant x)\)): At the same significance level, 0.25, from the output side, we target supplier sector A associated with the 1st row of \({\overline{\mathbf {M}}}_{f}(0.25\leqslant x)\). This means that those numbers which are equal to or greater than 0.25 in the 1st row are considered significant enough from the supplier perspective, in which case there is one significant linkage from A to B with a strength level of 0.45 (denoted as \(A\rightarrow B\)). Then, moving to the 2nd row associated with supplier sector B, we observe three linkages from B: \(B\rightarrow C\) with a strength level of 0.26, \(B\rightarrow D\) with a strength level of 0.31, and \(B\rightarrow E\) with a strength level of 0.28. We then move on to identify the significant users of supplier sector C associated with the 3rd row. Supplier C provides output to users D and E, which are, respectively, denoted by \(C\rightarrow D\) and \(C\rightarrow E\) with the strength levels of 0.43 and 0.31. Supplier D associated with the 4th row provides output to user E (denoted by \(D\rightarrow E\)) with a strength level of 0.44. Finally, supplier E associated with the 5th row provides output to users B and D, which are denoted by \(E\rightarrow B\) and \(E\rightarrow D\) with the strength levels of 0.34 and 0.35, respectively. This completes the search of significant direct and indirect users of the targeted supplier sector A. Combining all of the binary output linkages identified in this step generates the directed network in Fig. 2(8), which consists of a set of nine binary linkages when supplier sector A is targeted:

$$\begin{aligned} Output=\{A\rightarrow B,\, B\rightarrow C,\, B\rightarrow D,\, B\rightarrow E,\, C\rightarrow D,\, C\rightarrow E,\, D\rightarrow E,\, E\rightarrow B,\, E\rightarrow D\}. \end{aligned}$$

Step 3: It should be noted that, as illustrated in Fig. 3(3), the input network from Step 1 and the output network from Step 2 have four common linkages:

$$\begin{aligned} Input\,\cap \, output=\{A\dashrightarrow B,\, B\dashrightarrow C,\, B\dashrightarrow D,\, C\dashrightarrow D\}, \end{aligned}$$

which simultaneously carry both input (denoted by solid blue arrows) and output (denoted by solid red arrows). These common linkages are shown by dashed blue arrows in Fig. 3(3).

Combined network of upstream and downstream linkages of a targeted sector A

To sum up, when sector A is targeted, its upstream linkages form the input supply network shown in Fig. 3(1), whereas its downstream linkages form the output supply network shown in Fig. 3(2). As seen in Fig. 3(3), the two networks combined fully characterize sector A's connectivity (i.e., all the linkages that matter for A at the given threshold strength level of 0.25) both in input and output space.

Connected components and their communities

Any digraph such as the one illustrated in Fig. 3(3) can be further analyzed by deriving its connected components and community structures. A directed graph is said to be connected if there is a path between all pairs of vertices (or production sectors in our context).
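The three steps just described can also be condensed into a short cascade routine that produces the combined digraph whose connectivity is analyzed here. The snippet continues the hypothetical matrices from the previous sketch rather than the example of Figs. 1–3; `cascade` is an illustrative name, and the breadth-first traversal is one straightforward way to implement the sequential search for suppliers of suppliers (Step 1) and users of users (Step 2), with the intersection of the two edge sets corresponding to Step 3.

```python
from collections import deque

def cascade(M_bar, sectors, target, threshold=0.25, upstream=True):
    """Collect significant links reachable from `target`.
    upstream=True : repeatedly add significant suppliers of each visited user (Step 1).
    upstream=False: repeatedly add significant users of each visited supplier (Step 2)."""
    idx = {s: k for k, s in enumerate(sectors)}
    edges, visited, queue = set(), {target}, deque([target])
    while queue:
        current = queue.popleft()
        j = idx[current]
        weights = M_bar[:, j] if upstream else M_bar[j, :]   # suppliers of j, or users of j
        for k, w in enumerate(weights):
            if w >= threshold:
                src, dst = (sectors[k], current) if upstream else (current, sectors[k])
                edges.add((src, dst))
                if sectors[k] not in visited:
                    visited.add(sectors[k])
                    queue.append(sectors[k])
    return edges

upstream_edges   = cascade(Mb_bar, sectors, target="A", upstream=True)    # input network
downstream_edges = cascade(Mf_bar, sectors, target="A", upstream=False)   # output network
print("Input network  :", sorted(upstream_edges))
print("Output network :", sorted(downstream_edges))
print("Common linkages:", sorted(upstream_edges & downstream_edges))      # Step 3
```

The union of the two edge sets is the combined network in the sense of Fig. 3(3); its connected components and communities are examined next.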
A connected component of a digraph is a maximal connected sub-graph. Connected components of a directed graph comprise an acyclic directed graph, meaning that individual connected components form a partition into sub-graphs that are themselves connected. To visually illustrate these concepts, a digraph G with 15 sectors (nodes) is used as an example (see Fig. 4). The digraph G has a single connected component with 7 sectors out of 15. Since the connected component is a single entity within which all sectors are linked to each other, any influence exerted on a sector will flow across all the sectors within the component. There is no way for a sector to avoid the impact on itself of others within the component, as they are all connected.

Example digraph G, its connected components and community structure

In the next step, the question is whether there is a partition of a connected component into sub-graphs, each one of which maximizes the modularity statistic (Charikar 2000; Fortunato et al. 2004; Newman and Girvan 2004; Capocci et al. 2005; Newman 2006, 2009; Easley and Kleinberg 2010; Fortunato 2010; Giatsidis et al. 2011). We know that sectors within a connected component are all linked, but we do not know whether there are distinct sub-graphs within the connected component concerned. The community structure of the connected component is detected on the basis of the community modularity statistic. The detected community structure tells us that there are two communities (or clusters) of sectors, {AGF, WHS} and {TSC, EST, CST, MA2, EGW}, that are highly correlated or homogeneous in terms of the modularity criterion (for example, a centrality-based measure) (see Fig. 4).

A network of key sectors

From a sectoral perspective, a sector is said to be key to another sector if it has the maximum contribution to the total output multiplier of the other sector. From an economy-wide perspective, however, a sector is said to be key if its total output multiplier is the largest compared to the total output multipliers of other sectors in the economy. We adopt the sectoral perspective and separately identify the key sectors from a backward multiplier matrix and those from a forward multiplier matrix. Then, we construct a directed graph using the pooled set of linkages obtained from the backward (blue arrows) and forward (red arrows) multiplier matrices. The final directed graph illustrated in Fig. 5 represents a combined network consisting of the most influential linkages (blue and red arrows combined) on the input and output sides.

Network of key sectors from both backward and forward linkages

For simplicity, we examine the case in which a sector has one key input (output) sector (\(k=1\)) only, meaning that the maximum backward (forward) multiplier is selected from each column (row) in a backward (forward) multiplier matrix. This yields two directed graphs: one for backward linkages (blue) and another for forward linkages (red). Thereafter, the two graphs are combined to generate the final network of input–output linkages of key sectors with \(k=1\). The same procedure can be applied for \(k>1\), depending on the size of the multiplier matrix examined. The choice of k is arbitrary, depending on the objective pursued. As illustrated in Fig. 5, from the network perspective, MA2 stands alone as a critical sector as it has the function of coordinating changes in the rest of the network. Almost all sectors in the network are linked to MA2, making this sector so powerful for the survival of the network.
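These constructs can be reproduced on the toy network from the sketches above with standard tools (the `networkx` package is assumed here): weakly connected components, a two-community split taken from the first Girvan–Newman partition in the spirit of the community detection used in this paper, and the \(k=1\) key-sector links obtained by taking the largest off-diagonal entry in each column of \({\overline{\mathbf {M}}}_{b}\) and each row of \({\overline{\mathbf {M}}}_{f}\).

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Combined digraph of the upstream and downstream links found when A is targeted.
G = nx.DiGraph(sorted(upstream_edges | downstream_edges))

print("weakly connected components:", list(nx.weakly_connected_components(G)))

# First Girvan-Newman split of the undirected projection, used as a community partition.
first_split = next(girvan_newman(G.to_undirected()))
print("communities:", [sorted(c) for c in first_split])

# Key-sector network with k = 1: strongest supplier of each user (columns of Mb_bar)
# and strongest user of each supplier (rows of Mf_bar); diagonals are already zero.
key_edges = set()
for j, s in enumerate(sectors):
    key_edges.add((sectors[int(np.argmax(Mb_bar[:, j]))], s))   # backward, blue arrows
    key_edges.add((s, sectors[int(np.argmax(Mf_bar[j, :]))]))   # forward, red arrows
print("key-sector links (k = 1):", sorted(key_edges))
```

In the empirical networks of Fig. 5, it is this construction that places MA2 at the hub of a star-like structure.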
Removing it from the network will lead to the collapse of the entire network. In this sense, MA2 is a key sector. This interpretation emphasizes not only the importance of connectivity but also the network structure. Data: input–output matrices The method and the network concepts described in Section 3 are applied to characterize IO systems of eight countries: China, India, Japan, Russia in Asia; Germany, Turkey and UK in Europe, and USA. The IO data used in the implementation are obtained from OECD's IO database for the most recent available year 2015.Footnote 6 The OECD IO matrices with 36 sectors have been aggregated to 15 sectors using the 2008 UN definitions for sector aggregation (United Nations, Development and Bank 2009). The aggregation allows for a comparative analysis of the IO systems across countries. Our point of departure is the sector aggregation of ILO. The first column in Table 2 shows the individual sectors in OECD IO database; the second column shows the aggregated sectors used in this study; and the third column shows the ILO aggregation of 14 sectors. A slight difference between our aggregation and ILO's aggregation comes from the fact that we disaggregated "Manufacturing sector" (which is modeled by ILO as a single sector) into two sub-sectors: MA1 in our analysis covers the petroleum and refinery activities, while MA2 captures the rest of the manufacturing activities conducted in the manufacturing sub-sectors. MA2 is an important sector for all the countries examined in this study as it represents the agglomeration of several inter-connected industrial sectors. Bilateral linkages between manufacturing and the service sectors, including wholesale, retail, finance, real estate, hotels-tourism, etc. are important, and in this study, we aim to pay more attention to the output and employment effects created through the linkages concerned. Table 2 Sector aggregation Concerning youth unemployment due to COVID-19, ILO's global estimates conjecture that manufacturing (MA2), wholesale and retail (WHS), real estate (EST), and accommodation (HOT) sectors will be hit hard (see Table 1 on page 8 of ILO (2020)), which is the point of departure for the analysis conducted in this paper. It should be noted that the sample of the eight countries accounts for a substantial portion of the world GDP, and hence there is the need for developing strategies to avoid the bleak unemployment picture projected by ILO. The analysis of the current paper should provide critical information for use in the effective design of policy interventions targeting the four sectors. Government policies targeting employment in the hard-hit sectors should be informed of the characteristics of the backward and forward linkage structures of these sectors. Sector targeting The method developed is applied to target the four sectors identified by ILO (2020). If, for example, sector i is targeted for policy intervention, we first need to identify input suppliers of that sector, then identify input suppliers of sector i's input suppliers, followed sequentially by the identification of other input suppliers. This chain of backward linkages between the targeted sector and its first degree, second degree, third degree etc. input suppliers would show the network of upstream linkages of the targeted sector with the rest of the production system. The chain of linkages from the rest of the system to the targeted sector will fully identify the target sector's production dependencies. 
Likewise, the targeted sector is also characterized with respect to the type of consumers (both intermediate and final) of its commodities. We first need to identify the critical buyers (sectors) of the commodities produced by the targeted sector, and then sequentially identify the buyers of the commodities produced by the buyers of commodities of the targeted sector and so on. This type of downstream linkages would show how the target will be affected by changes in the demand for its commodities. With this type of forward sectoral links, we would characterize the commodity demand network of the targeted sector. Together, a combined map of backward and forward input–output flows from the perspective of the targeted sector will help us uncover the critical sectoral pathways of linkages which are most important for the performance of the targeted sector. The empirical analysis is based on a given threshold significance level of a multiplier. This level is set to be 15 percent, meaning that the analysis carried out considers those multipliers having an explanatory power of 15 percent or higher out of the total input/output multiplier of the sector targeted. The linkages shown represent those linkages accounting for 15 percent or more of the multipliers influencing the targeted sector.Footnote 7 In case of \({\underline{\mathbf{targeting \,\,\mathbf{MA2} }}}\), an interesting pattern of input-output flows arises across the countries examined. In four countries in Asia, agriculture (AGF), crude oil and mining (CO12), and WHS sectors supply significant input; in two European countries, financial business (FIN), transportation-storage-communication (TSC) and WHS sectors transfer significant input; in Turkey, electricity-gas-water (EGW) and HOT sectors reveal significant input flows; and in USA, interestingly, the composition of the critical input suppliers includes AGF, CO12, FIN and TSC, which is "almost" the union of the critical sectors in Asia and Europe. With respect to output flows, we observe that construction (CST) and EST sectors unanimously arise as critical sectors whose outputs are demanded in the rest of the economy. Concerning sectoral dependencies, we observe that {CO12, CST, EST, WHS, MA2} reveal strong dependencies. EST is vitally important to control the changes in the rest of the economies of Japan, Russia, Germany, UK, Turkey and USA. Of these six countries, USA, UK and Russia reveal a much stronger dependency structure implied by a large number of sector linkages. For example, in USA, we have the dependency structure of: $$\begin{aligned} \mathbf {EST}\dashrightarrow \mathbf {WHS}\,\,\, and\,\,\,\mathbf {EST}\dashrightarrow \mathbf {MA2}. \end{aligned}$$ In UK, the dependency structure is of: $$\begin{aligned} \mathbf {CST}\dashrightarrow \mathbf {EST}\dashrightarrow \mathbf {WHS}\dashrightarrow \mathbf {MA2}\dashrightarrow \mathbf {CST}, \end{aligned}$$ and in Russia, it is: $$\begin{aligned} \mathbf {EST}\dashrightarrow \mathbf {WHS}\dashrightarrow \mathbf {MA2}\,\,\, and\,\,\,\mathbf {WHS}\dashrightarrow \mathbf {CO12}\dashrightarrow \mathbf {MA2}. \end{aligned}$$ The larger the number of linkages, the higher the complexity of dependency, and the more challenging will be to design policy interventions that involve multiple sectors. In case of \({\underline{\mathbf{targeting \,\,\mathbf{WHS} }}}\), a similar pattern of linkages arises across the countries examined. 
In Asian countries, AGF, CO12 and MA2 supply significant input; in two European countries, FIN, MA2 and TSC transfer significant input; in Turkey, sectors EGW, HOT and MA2 reveal significant input flows; and in USA, the composition of the critical input suppliers includes AGF, CO12, FIN and TSC, which is "almost" the union of the critical sectors in Asia and Europe. With respect to output flows, we observe that CST, EST and MA2 play a critical role in all countries. Concerning sectoral dependencies, we observe that China and India do not show any sector dependencies, whereas others show varying degrees of dependencies among {CO12, CST, EST, MA2}. The highest degree of dependency is observed in UK, with a pathway: $$\begin{aligned} \mathbf {CST}\dashrightarrow \mathbf {EST}\dashrightarrow \mathbf {WHS}\dashrightarrow \mathbf {MA2}. \end{aligned}$$ This suggests that before targeting WHS, the implications on WHS of a change in CST and EST should be analyzed as the performance of WHS is strongly dependent on the type of changes in CST and EST. Russia is also facing somewhat weaker dependency, with a pathway: $$\begin{aligned} \mathbf {EST}\dashrightarrow \mathbf {WHS}\dashrightarrow \mathbf {CO12}\dashrightarrow \mathbf {MA2}. \end{aligned}$$ In case of \({\underline{\mathbf{targeting \,\,\mathbf{EST} }}}\), similarities exist among Asian countries and USA. AGF, CO12, MA2 and WHS play an important role in input supply; in Germany and UK, FIN and TSC still represent the core of input supply. Turkey reveals structural differences compared to other countries, in which case EGW, HOT and MA2 supply critical amount of input to the rest of the economy. What is interesting in the case of Turkey is that the publicly managed EGW and private sector HOT occupy a central place in input supply, but these sectors play no role in input supply in the other six countries examined. With this feature, Turkey is distinguished from the other six countries. Concerning output supply, except UK and Germany, CST and MA2 unanimously arise as two critical sectors whose outputs are consumed by others. Regarding sectoral dependencies, China, India, Germany and USA show no dependency, while others show dependency involving WHS. In case of \({\underline{\mathbf{targeting \,\,\mathbf{HOT} }}}\), the results look very similar to the case in which EST is targeted. Four Asian countries have the same sectors {AGF, CO12, MA2, WHS} significant in input supply; two European countries share commonality but Germany has a wider input supply network {FIN, MA2, TSC, WHS} compared to UK having two input supply sectors {FIN, TSC}. USA shows a combination of Asian and European networks, including {AGF, CO12, FIN, MA2, TSC, WHS}. Turkey is distinguished with a very different set of input suppliers, including {EGW, MA2}. Regarding output supply, except UK and Germany, CST, EST, and MA2 represent the core of output suppliers in Japan, India, Russia and Turkey, while CST and MA2 represent the core suppliers in China and USA. With respect to sectoral dependencies, EST and WHS constitute the core of dependencies, which is extended by CST, CO12, and MA2 in Russia and UK. Connected components and community structures Drawing on the targeting-based networks across countries (see the 1st column of Figs. 6 and 7),Footnote 8 all of the IO systems examined show only one connected component (see the 2nd column of Figs. 6 and 7). 
This finding suggests that the networks shown in the 1st column are all connected, implying that a change in input supply or output supply of a sector will be transmitted to the rest of the network through either direct or indirect linkages. Any intervention to a single sector within the connected component will have repercussions in the rest of the network. However, the level of repercussions would vary across sectors in the network depending on the size of multipliers associated with each linkage. Selected sectors targeted at significance level of 0.15 in China, Japan, India and Russia Selected sectors targeted at significance level of 0.15 in Germany, UK, Turkey and USA A deeper analysis of a connected component is to search for communities (or clusters) within the connected component examined. Community analysis aims to detect partitions of the connected component in such a way as to reflect potentially different repercussions within each partition (or community). The community structures identified for each connected component are presented in the 3rd column of Figs. 6 and 7. The mapping of the communities identified shows that almost all connected components across countries and sectors have two communities (or clusters). In a more detailed policy design, each community should be individually targeted as a group as its members show similarity with respect to network betweenness centrality criterion.Footnote 9 It is also critical that policies should aim to strengthen the linkages connecting the two communities to maximize the overall benefits from the connectivity of the communities. Otherwise, positive externalities that may arise from one community will not be captured by policy interventions. Three constructs stand out for use in the design of policy interventions: (i) directed graphs describing input and output flow structure implied by targeting a specific sector, (ii) the underlying dependency pathways, and (iii) the key sectors that ensure the highest benefit from interactions in a network. Take, for example, Germany. It is characterized by the network of upstream and downstream pathways, simple dependency, \(\mathbf {EST}\dashrightarrow \mathbf {MA2}\), and key sectors {EST, MA2}. The first construct produces all the relevant pathways of sectors from/to the targeted MA2. The second suggests that, no matter which sector is targeted, MA2's performance strongly depends on the input and output of EST. The third construct is that these sectors are key as they have not only the largest multiplier values but also occupy the critical position in the network. In the case of UK, a very complex pathway arises: in which case CST plays a key role both as a source of policy change and as a sink of the impact of the change concerned (i.e., a loop starting from a change in CST and ending with an effect on itself). The fact that it is a closed loop makes it challenging to control the changes along the chain of linkages, EST\(\dashrightarrow\)WHS\(\dashrightarrow\)MA2, because this two-edge pathway represents a constraint for CST. When, for example, WHS is targeted, its impact on CST as well as CST's impact on WHS via changes in EST must be considered because WHS is a member of the closed loop. The other countries can be analyzed in a similar fashion at will. For each country, we also identified key sectors in the sense described in Section 3.3 (see Figs. 8 and 9). 
EST and MA2 are identified as key sectors in Germany, USA, Turkey, and UK; MA2 and WHS are key sectors in Japan and Russia; and MA2 is key for China and India. Apparently, there is some kind of homogeneity in the maximum multiplier sectors across the countries. Across all the countries analyzed, MA2 is the key sector to be targeted to generate the maximum employment and output through its multiplier effects as well as its connectivity to the rest of the economy. Key sectors of the economy (1) Discussion of the findings Drawing on the findings elaborated in Section 4, we suggest ways to achieve the best employment and output outcomes at the country level. The key to success lies in ensuring that each country prioritizes the identified critical sectors, while considering community structures and pathways of sector dependencies as constraints of policy interventions. In other words, we propose to formulate an employment and growth strategy as a constraint optimization problem, the objective of which is to maximize output of a targeted sector(s) subject to sector specific as well as structural constraints, including the degree of sector connectedness, community structure (size and density), and pathways of sectoral dependencies. In what follows, we elaborate on how to employ the information generated in the formulation of policy interventions. First, the domain of any policy targeting with a view to ensuring the pre-COVID-19 employment level should necessarily include {AGF, CO12, CST, EST, FIN, MA2, WHS, HOT}, in which case {EST, MA2} are the core sectors with the largest multiplier effects and critical connectivity patterns both in input and output markets. Together, these cores would act as catalyst for the growth in other sectors through the input–output linkages. Second, in all the eight countries examined, except for USA, the policy intervention networks are composed of two communities (or clusters). Knowledge of the characteristics (i.e., number of sectors, their interactions, and linkage density) of the community structures identified should be utilized in employment policy design. In China, {CST, MA2, WHS} and {AGF, CO12} represent the two robust core communities reflecting the strongest linkages among its members, and these communities survive no matter which sector is targeted. This suggests that the highest gain in employment in China can be materialized by exploiting the linkage properties within individual communities, as well as the linkage strength between the communities. In Japan, there are two robust core communities, {CST, EST, MA2} and {AGF, CO12}, no matter which sector is targeted. Interestingly, members of the first community are linked to each other in output markets, while members of the second community interact only in input markets. This makes the targeting easier and more appealing. It is easier in the sense that if employment creation is targeted in output markets, the interactions among sectors in the first community should be examined; if, however, employment in input markets is targeted, then the interactions among sectors in the second community should be analyzed. It is appealing, because the sectors, where the final impact of targeting is expected are isolated in two different communities, because these communities are connected through the linkages in input markets only. In India, there are two robust core communities, {CST, EST} and {AGF, CO12, MA2, WHS}, no matter which sector is targeted. 
Members of the first community are linked to each other in output markets, while members of the second community are linked only in input markets. Similar to the case of Japan, targeting is easy and appealing. It is easy in the sense that if employment creation is targeted in output markets, the interactions among sectors in the first community should be examined; if, however, employment in input markets is targeted, then the interactions among sectors in the second community should be analyzed. It is appealing, because the sectors, where the final impact of targeting is expected are isolated in two different communities. Interestingly, the linkages between the two core communities are all about the interactions in output markets only, as opposed to the Japanese case in which the communities are linked through input market linkages. In Russia, there are two robust core communities, {CST, EST, MA2} and {AGF, CO12, WHS}. Members of the first community are linked to each other in both input and output markets, while members of the second community interact only in input markets. The two communities are linked through the input linkages only. If employment is targeted independent of market type, the first community should be examined; if, however, employment is targeted in input markets, the second community should be analyzed. These communities are linked in input markets, because they are connected through the linkages in input markets only. The two EU countries, Germany and the UK, share commonalities, while showing key differences from the Asian countries, including China, Japan, India and Russia. Both Germany and the UK have two identical communities: {EST, FIN, TSC} and {CST, MA2, WHS} when EST, MA2 and WHS are targeted. In both countries, the first community arises in input markets, while the second community has linkages in both input and output markets. The type of linkages connecting the two communities is different across Germany and the UK, however. In Germany, the two communities are connected through linkages both in input and output markets, while in the UK through input market linkages only. Germany and the UK show stronger differences when sector HOT is targeted. The communities differ both in terms of sector composition and the type of linkages connecting the communities. Therefore, HOT needs special attention when policies are designed to promote employment in this sector. The U.S. shows characteristics that have commonalities both with the Asian and the EU countries. Two robust communities, {AGF, CO12, MA2, WHS} and {CST, EST, FIN, TSC}, arise when EST, MA2 and WHS are targeted. The first community consisting of only input linkages is similar to the Asian case, while the second one consisting of both input and output linkages is similar to the EU case. These communities are connected through input and output linkages. The picture becomes quite different when HOT is targeted. Three communities emerge, two of which {AGF, CO12, WHS} and {EST, FIN, TSC} are all about input linkages, and the third one {CST, MA2, HOT} has mixed linkages. This reflects different dependency structure HOT has with the rest of the economy. Finally, Turkey shows a completely different linkage structure between two core communities: {HOT, WHS} and {CST, EST, EGW} no matter which sector is targeted. The first community is all about input linkages, while the second is mixed with input and output market linkages. These communities are also linked with mixed linkages. 
What is interesting and important is to observe EGW to play a significant role in the core economic activities. This observation is unique to Turkey as EGW has not been observed as critical in the other 7 countries examined. A third suggestion is that knowledge of the critical binary sectoral links ensuring cross-community connectedness is essential for informed policy interventions. The policies aimed to ensure the continuity of cross-community links should be integrated into wider economic policies to materialize potential employment benefits from the interactions between the communities. The potential gains from such connectedness will be forgone if the policies implemented dismantle or do not consider the connectedness of the existing communities. For example, in China, the connectedness of the two communities discussed above requires the presence of at least one linkage out of two: {(MA2, AGF), (MA2, FIN)}; in Japan, the presence of at least one linkage out of four: {(AGF, EST), (AGF, HOT), (WHS, MA2), (WHS, CO12)}, and so on. When there are more than two communities, which is the case in USA, then at least three linkages must be present to tie all the communities together. To sum up, the implementation of the algorithm and the findings are neither final nor complete. The results are reflecting only part of the big picture as they are conditional to the threshold significance level chosen. The study elaborates on ways to provide policy guidance based on the results obtained. A general policy recommendation based on the results is that coupling the targeted sector with its key partners should be the way forward to reap the full benefits of policy interventions. Such interventions should also exploit patterns of linkages between the targeted sector and its community in the production system. An unprecedented, COVID-19-driven unemployment challenge is addressed using network analysis of input–output matrices of eight countries, including China, Japan, India, Russia, Germany, Turkey, UK and USA. A novel algorithm is developed to identify critical input–output backward and forward linkages of a targeted sector. Based on these linkages, sectoral dependencies and pathways of sectoral interactions are characterized to generate critical information that is needed for the design of employment policy interventions. Using concepts from network analysis and OECD input–output data, this paper develops an algorithm to uncover critical patterns of sector linkages and features of country-level production systems. In order to respond to the projected COVID-19-related youth unemployment in manufacturing, real estate, wholesale and accommodation sectors, the paper produces information that can be used in employment strategy development in the context of the eight countries analyzed, which together account for about 60 percent of the world GDP. Employment strategy development is discussed with the help of a constrained optimization problem, the objective of which is to maximize employment under sector and production system constraints. The empirical configuration of sectoral pathways of interactions, sectoral input–output dependencies, and sectoral communities defines the domain of the constraints for optimal employment. Broad elements of an optimal employment strategy is then elaborated using this configuration. 
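Once a community partition is available, the cross-community links discussed above can be listed mechanically. The sketch below continues the toy example and the `networkx`-based snippet above (so the sector names and partition are illustrative, not the country results of Figs. 6 and 7): it collects every edge whose endpoints fall in different communities and ranks them by edge betweenness, one way of flagging the links whose loss would dismantle cross-community connectedness.

```python
import networkx as nx

# Which community each sector belongs to, from the partition computed above.
membership = {node: k for k, comm in enumerate(first_split) for node in comm}

# Edges whose endpoints sit in different communities are the cross-community links.
bridges = [(u, v) for u, v in G.edges() if membership[u] != membership[v]]

betweenness = nx.edge_betweenness_centrality(G)
for u, v in sorted(bridges, key=lambda e: betweenness[e], reverse=True):
    print(f"cross-community link {u} -> {v} (edge betweenness {betweenness[(u, v)]:.2f})")
```

Applied to the country networks and the partitions reported above, the same listing would recover pairs such as the {(MA2, AGF), (MA2, FIN)} links noted for China.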
Manufacturing is found to be the top-priority sector to be targeted in all eight countries, followed by the real estate and wholesale sectors, and these sectors should be coupled with isolated communities of sectors to capture external employment effects. Needless to say, the closed-economy analysis carried out in this paper presents an incomplete picture of the actual employment possibilities, as it does not take into account the employment-creation effects of trade linkages across countries. Adopting an open-economy framework, future research should incorporate the sectoral production linkages among trading countries; in doing so, potential international sources of employment creation in a given country can be explored. Such information would provide input to the design of evidence-based trade and employment policy. OECD multi-country input–output matrices are available to conduct the type of open-economy employment analysis we are advocating here.

The input–output data for the eight countries examined in this paper are publicly available at https://stats.oecd.org/Index.aspx?DataSetCode=IOTSI4_2018. Furthermore, the data will be made available upon request.

The reader is referred to Easley and Kleinberg (2010) and Borge-Holthoefer et al. (2013) for further reading on cascading behaviour in complex socio-technical networks. The FES issue is beyond the scope of this paper; however, it should be pointed out that the analysis of FESs in a country can easily be done by applying our method. In fact, we applied it to China for the period 2005–2015, and the visual network structures were found to remain unchanged. The results are available upon request.

The diagonal elements of the multiplier matrix are intentionally set equal to zero in order to focus on inter-sectoral connectivity. An empirical regularity is that the large majority of IO multiplier matrices are diagonally dominant, as their diagonal multipliers are larger than one. This is also the case for the IO multiplier matrices of the countries examined in this paper. The reason is that a sector produces part of its own total input demand in addition to producing the inputs demanded by the rest of the sectors in the economy. Miller and Blair (2009, pp. 90–96) explain this within an inter-regional IO framework, and Henderson and Evans (2017) explain the same issue with an example IO matrix (see https://www.fwrc.msstate.edu/pubs/implan_2017.pdf). The reader is referred to Miller and Blair (2009) for an extensive description of how to use input–output matrices in policy analysis.

The algorithm has been developed by the authors. Mathematica code developed at https://mathematica.stackexchange.com/questions/210169/how-can-i-generate-a-tailor-made-directed-graph-from-a-given-matrix has been adjusted to perform the computations in this paper. The adjusted algorithm will be made available upon request. Special thanks go to the Mathematica expert known on the Mathematica forum as @KGLR.

See https://stats.oecd.org/Index.aspx?DataSetCode=IOTSI4_2018 for OECD input–output data for 64 countries over 11 years, from 2005 through 2015. The figures and tables presenting the results were kept to a minimum due to space limitations. It should be noted that the targeting exercise was conducted for four sectors across eight countries, but we present only a single targeting exercise for each country. The targeting networks not presented here will be made available upon request.

The Girvan–Newman algorithm is applied to identify communities.
This algorithm first identifies edges in a network that lie between communities and then removes them, leaving behind just the communities themselves. The algorithm employs the graph-theoretic betweenness centrality measure, which assigns a number to each edge that is large if the edge lies "between" many pairs of nodes. A minimal code sketch of the multiplier-thresholding and community-detection steps is given after the reference list below.

Borge-Holthoefer J, Banos RA, González-Bailón S, Moreno Y (2013) Cascading behaviour in complex socio-technical networks. J Complex Networks 1(1):3–24
Capocci A, Servedio VDP, Caldarelli G, Colaiori F (2005) Detecting communities in large networks. Physica A Stat Mech Appl 352(2):669–676
Charikar M (2000) Greedy approximation algorithms for finding dense components in a graph. In: International Workshop on Approximation Algorithms for Combinatorial Optimization, Springer, pp 84–95
Defourny J, Thorbecke E (1984) Structural path analysis and multiplier decomposition within a social accounting matrix framework. Economic J 94(373):111–136
Dietzenbacher E (1997) In vindication of the Ghosh model: a reinterpretation as a price model. J Regional Sci 37(4):629–651
Easley D, Kleinberg J (2010) Networks, crowds, and markets: reasoning about a highly connected world. Cambridge University Press, Cambridge
Fortunato S (2010) Community detection in graphs. Phys Rep 486(3–5):75–174. arXiv:0906.0612
Fortunato S, Latora V, Marchiori M (2004) Method to find community structures based on information centrality. Phys Rev E 70:056104
Giatsidis C, Thilikos DM, Vazirgiannis M (2011) Evaluating cooperation in communities with the k-core structure. In: Advances in Social Networks Analysis and Mining (ASONAM), 2011 International Conference on, IEEE, New York, pp 87–93
Henderson JE, Evans GK (2017) Single and multiple industry economic contribution analysis using IMPLAN. Forest and Wildlife Research Center, Research Bulletin FO468, Mississippi State University, 12 pp
Hewings GJD, Jensen RC, West GR, Sonis M, Jackson RW (1989) The spatial organization of production: an input-output perspective. Socio Economic Plan Sci 23(1–2):67–86
Hirschman AO (1977) A generalized linkage approach to development, with special reference to staples. Economic Dev Cult Change 25:67
ILO (2020) ILO Monitor: COVID-19 and the world of work. Fourth edition: updated estimates and analysis. Technical report, International Labour Organization. URL https://www.ilo.org/wcmsp5/groups/public/@dgreports/@dcomm/documents/briefingnote/wcms_745963.pdf
International Monetary Fund, Organisation for Economic Co-operation and Development, United Nations, European Commission, and World Bank (2009) System of National Accounts 2008. United Nations, New York. ISBN 978-92-1-161522-7. https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=348954
Jensen RC, Dewhurst JH, West GR, Hewings GJD (1991) On the concept of fundamental economic structure. In: Regional input-output modeling: new development and interpretations. Avebury, Sydney, pp 228–249
Kleinberg J (2013) Cascading behavior in social and economic networks. In: Proceedings of the fourteenth ACM conference on Electronic commerce, pp 1–4
Lantner R, Carluer F (2004) Spatial dominance: a new approach to the estimation of interconnectedness in regional input-output tables. Ann Regional Sci 38(3):451–467
Loviscek AL (1982) Industrial cluster analysis-backward or forward linkages? Ann Regional Sci 16(3):36–47
Newman MEJ (2006) Modularity and community structure in networks. Proc Natl Acad Sci 103(23):8577–8582
Newman MEJ (2009) Random graphs with clustering. Phys Rev Lett 103(5):058701
Newman MEJ, Girvan M (2004) Finding and evaluating community structure in networks. Phys Rev E 69(2):026113
Miller RE, Blair PD (2009) Input–output analysis: foundations and extensions, 2nd edn. Cambridge University Press. https://doi.org/10.1017/CBO9780511626982
Schultz S (1977) Approaches to identifying key sectors empirically by means of input-output analysis. J Dev Studies 14(1):77–96
Taglioni D, Winkler D (2016) Making global value chains work for development. World Bank Publications

Necessary acknowledgments have been made in the "Method" section of the manuscript.

Tugrul Temel, ECOREC Economic Research and Consulting, Amsterdam, The Netherlands
Paul Phumpiu, The World Bank, Washington, D.C., USA

Both authors read and approved the final manuscript. Correspondence to Tugrul Temel. The findings and interpretations expressed in this paper are entirely those of the authors; they do not necessarily reflect the views of the World Bank, its executive directors, or the countries they represent.

Temel, T., Phumpiu, P. Pathways to recovery from COVID-19: characterizing input–output linkages of a targeted sector. Journal of Economic Structures 10, 29 (2021). https://doi.org/10.1186/s40008-021-00256-2
Revised: 30 October 2021
Keywords: Input–output multipliers; Pathways and communities of sectors; Employment policy interventions; Global employment
JEL Classifications
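As a companion to the algorithm description above, the following is a minimal, illustrative sketch in Python of the pipeline the paper describes: build the Leontief multiplier matrix from a technical-coefficients matrix, zero its diagonal to focus on inter-sectoral connectivity, keep only links above a significance threshold, and split the resulting directed network into communities with the Girvan–Newman algorithm. It is not the authors' Mathematica code; the sector labels, the toy coefficient values and the 0.05 cut-off are hypothetical placeholders used only to make the sketch runnable.

    # Toy pipeline: technical coefficients -> multiplier matrix -> thresholded
    # directed graph -> Girvan-Newman communities. All values are illustrative.
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import girvan_newman

    sectors = ["AGF", "MA2", "CST", "WHS"]          # hypothetical 4-sector economy
    A = np.array([[0.05, 0.10, 0.02, 0.01],         # a_ij: input from sector i needed
                  [0.20, 0.15, 0.30, 0.10],         # per unit of output of sector j
                  [0.01, 0.05, 0.03, 0.02],
                  [0.10, 0.12, 0.08, 0.05]])

    L = np.linalg.inv(np.eye(len(sectors)) - A)     # Leontief multiplier matrix (I - A)^-1
    np.fill_diagonal(L, 0.0)                        # drop own-sector (diagonal) multipliers

    threshold = 0.05                                # illustrative significance cut-off
    G = nx.DiGraph()
    G.add_nodes_from(sectors)
    for i, src in enumerate(sectors):
        for j, dst in enumerate(sectors):
            if L[i, j] > threshold:
                G.add_edge(src, dst, weight=float(L[i, j]))

    # Edge betweenness is large for edges lying "between" many pairs of nodes;
    # Girvan-Newman repeatedly removes the highest-betweenness edge, and the
    # first partition it yields is the coarsest community split.
    betweenness = nx.edge_betweenness_centrality(G)
    first_split = next(girvan_newman(G.to_undirected()))

    print(sorted(sorted(c) for c in first_split))   # candidate communities
    print(max(betweenness, key=betweenness.get))    # most "between" inter-sector link

In the paper's setting, the retained links would instead come from testing the statistical significance of the multipliers at a chosen threshold level, and the communities would be read off separately for each targeted sector and for input- versus output-market linkages; the sketch only shows the mechanics of the thresholding and community-detection steps.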
August 2018, 11(4): 631-641. doi: 10.3934/dcdss.2018038

Symmetries and conservation laws of a KdV6 equation

María Santos Bruzón and Tamara María Garrido
Department of Mathematics, University of Cádiz, PO.BOX 40, 11510 Puerto Real, Cádiz, Spain
* Corresponding author: M.S. Bruzón.
Received December 2016; Revised May 2017; Published November 2017

In the present work we analyse the sixth-order Korteweg-de Vries equation. We apply the classical Lie method of infinitesimals and the nonclassical method, due to Bluman and Cole, to deduce new symmetries of the equation which cannot be obtained by the classical Lie method. Moreover, we obtain ten different conservation laws depending on the parameters, and we conclude that the potential symmetries project onto the infinitesimals corresponding to the classical symmetries.

Keywords: KdV6, Lie group analysis, classical symmetries, nonclassical symmetries, conservation laws and classical potential symmetries.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
Citation: María Santos Bruzón, Tamara María Garrido. Symmetries and conservation laws of a KdV6 equation. Discrete & Continuous Dynamical Systems - S, 2018, 11 (4): 631-641. doi: 10.3934/dcdss.2018038

S. C. Anco and G. W. Bluman, Direct construction of conservation laws from field equations, Physical Review Letters, 78 (1997), 2869-2873. doi: 10.1103/PhysRevLett.78.2869.
S. C. Anco and G. W. Bluman, Direct construction method for conservation laws of partial differential equations part 2: General treatment, European Journal of Applied Mathematics, 13 (2002), 567-585. doi: 10.1017/S0956792501004661.
S. C. Anco and G. W. Bluman, Direct construction method for conservation laws of partial differential equations part 1: Examples of conservation law classifications, European Journal of Applied Mathematics, 13 (2002), 545-566. doi: 10.1017/S0956792501004661.
S. C. Anco, Generalization of Noether's theorem in modern form to non-variational partial differential equations, to appear in Fields Institute Communications: Recent Progress and Modern Challenges in Applied Mathematics, Modeling and Computational Science, 79 (2017), 119-182, arXiv:1605.08734. doi: 10.1007/978-1-4939-6969-2_5.
G. W. Bluman and J. Cole, General similarity solution of the heat equation, Journal of Mathematics and Mechanics, 18 (1969), 1025-1042.
G. W. Bluman and S. Kumei, On the remarkable nonlinear diffusion equation, Journal of Mathematical Physics, 21 (1980), 1019-1023. doi: 10.1063/1.524550.
G. W. Bluman, S. Kumei and G. J. Reid, New classes of symmetries for partial differential equations, Journal of Mathematical Physics, 29 (1988), 806-811. doi: 10.1063/1.527974.
M. S. Bruzón, M. L. Gandarias and J. Ramírez, Symmetry and Perturbation Theory, World Scientific Publishing Company, 2005.
M. S. Bruzón, T. M. Garrido and R. de la Rosa, Conservation laws and exact solutions of a Generalized Benjamin-Bona-Mahony-Burgers equation, Chaos, Solitons & Fractals, 89 (2016), 578-583. doi: 10.1016/j.chaos.2016.03.034.
P. J. Caudrey, R. K. Dodd and J. D. Gibbon, A new hierarchy of Korteweg-de Vries equations, Proceedings of the Royal Society of London Series A: Mathematical, Physical and Engineering Sciences, 351 (1976), 407-422. doi: 10.1098/rspa.1976.0149.
P. A. Clarkson, Nonclassical symmetry reductions of the Boussinesq equation, Chaos, Solitons & Fractals, 5 (1995), 2261-2301. doi: 10.1016/0960-0779(94)E0099-B.
P. A. Clarkson and T. J. Priestley, Symmetries of a generalised Boussinesq equation, Institute of Mathematics and Statistics, University of Kent at Canterbury.
V. G. Drinfeld and V. V. Sokolov, Equations of Korteweg-de Vries type and simple Lie algebras, Doklady Akademii Nauk SSSR, 258 (1981), 11-16.
A. P. Fordy and J. Gibbons, Some remarkable nonlinear transformations, Physics Letters A, 75 (1980), 325. doi: 10.1016/0375-9601(80)90829-4.
M. L. Gandarias and M. S. Bruzón, Classical and nonclassical symmetries of a generalized Boussinesq equation, Journal of Nonlinear Mathematical Physics, 5 (1998), 8-12. doi: 10.2991/jnmp.1998.5.1.2.
T. M. Garrido, A. A. Kasatkin, M. S. Bruzón and R. K. Gazizov, Lie symmetries and equivalence transformations for the Barenblatt-Gilman model, Journal of Computational and Applied Mathematics, 318 (2017), 253-258. doi: 10.1016/j.cam.2016.09.023.
W. Hereman and B. Huard, symmgrp2009.max: A Macsyma/Maxima program for the calculation of Lie point symmetries of large systems of differential equations, http://inside.mines.edu/whereman/software.html (2009).
A. Karasu-Kalkanli, A. Karasu, A. Sakovich, S. Sakovich and R. Turhan, A new integrable generalization of the Korteweg-de Vries equation, Journal of Mathematical Physics, 49 (2008), 073516, 10 pp. doi: 10.1063/1.2953474.
D. J. Kaup, On the inverse scattering problem for cubic eigenvalue problems, Studies in Applied Mathematics, 62 (1980), 189-216. doi: 10.1002/sapm1980623189.
P. J. Olver, Applications of Lie Groups to Differential Equations, Springer-Verlag, 1986. doi: 10.1007/978-1-4684-0274-2.
J. Satsuma and R. Hirota, A coupled KdV equation is one case of the four-reduction of the KP hierarchy, Journal of the Physical Society of Japan, 51 (1982), 3390-3397. doi: 10.1143/JPSJ.51.3390.
K. Sawada and T. Kotera, A method for finding N-soliton solutions of the K.d.V. equation and K.d.V.-like equation, Progress of Theoretical Physics, 51 (1974), 1355-1367. doi: 10.1143/PTP.51.1355.
J. Weiss, M. Tabor and G. Carnevale, The Painlevé property for partial differential equations, Journal of Mathematical Physics, 24 (1983), 522-526. doi: 10.1063/1.525721.
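The abstract above refers to classical Lie point symmetries, nonclassical (Bluman-Cole) symmetries and conservation laws without recalling their general form. As a schematic reminder, in generic notation that need not match the authors' (u = u(x, t), with the equation written abstractly as Delta = 0), a classical Lie point symmetry is generated by a vector field

    \[
    \mathbf{v} = \xi(x,t,u)\,\partial_x + \tau(x,t,u)\,\partial_t + \eta(x,t,u)\,\partial_u ,
    \qquad
    \mathrm{pr}\,\mathbf{v}(\Delta)\big|_{\Delta = 0} = 0 ,
    \]

while the nonclassical method imposes the same condition only on the joint solution set of \(\Delta = 0\) and the invariant-surface condition \(\eta - \xi u_x - \tau u_t = 0\), which is why it can yield symmetries that the classical method misses. A local conservation law is a divergence expression

    \[
    D_t\, T(x,t,u,u_x,\dots) + D_x\, \Phi(x,t,u,u_x,\dots) = 0
    \quad \text{on solutions of } \Delta = 0 ,
    \]

and in the direct (multiplier) construction of Anco and Bluman cited in the reference list one seeks multipliers \(Q\) with \(Q\,\Delta = D_t T + D_x \Phi\) holding identically, characterized by the Euler-operator condition \(E_u(Q\,\Delta) = 0\).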
Containment definition in psychology What you described is exactly how i feel sometimes. Describe the concepts of self-complexity and self-concept clarity, and explain how they influence social cognition and behavior. Fire crews are hoping they can containment definition: The definition of containment is keeping something restricted or under control, or to describe preventing a hostile country or hostile influence from expanding its influence. If you have friends to talk to about it, then do it. A typical personification of this impulse is the snake that tempts Eve to violate her passive containment in the Garden, or the shadowy figure or animal in Fairy Tales that tempt the hero or heroine to break the status quo and do something 'evil,' i. containment measures and protocols that are applied to limit contact of organisms, particularly PATHOGENS and GMOs, with the external environment: physical containment, achieved by regulating access, restricting air circulation and providing other physical barriers; Sep 13, 2011 · The US adopted the policy of containment (an application of the theory) in their prosecution of the Cold War. What every you put you mind to do anything. the act or condition of containing. the definition notes that there's equality between sets A and B, if and only if A is included in B and B is included in A. Download a report with benchmark data and details for tracking this metric. Define Containment by Webster's Dictionary, WordNet Lexical Database, Dictionary of Computing, Legal Dictionary, Medical Dictionary, Dream Dictionary. g. I will now describe in more detail how this psychological transformation . After all, the set of possible outcomes becomes smaller as one observes longer prefixes / more of the process unfolding. n. Matthew Tull, PhD is a professor of psychology at the University of Toledo, specializing in post-traumatic stress disorder. 1. The dual risk of physician and mental health conditions often compound the cost of care. (AO1) In these studies, participants obeyed an authority figure by giving electric shocks to a learner. Suicide and Posttraumatic Stress Disorder (PTSD) provides information about suicidal thinking and PTSD. First laid out by George F. 3 Long term memory; 2. The more toxic that the dust, chemical or spray present is, the greater the level of containment that is required. Containment. Sleep offers us a retreat from the world and all its pain. This paper will attempt to discuss the analytic components of the holding environment and extrapolate theory to give meaning towards a syncretic result. One of the first things I teach my clients is containment skills. Horowitz's (1976, 1979) formulation of the ap- proach-avoidance dimension in response to stress is the most fully developed and will be discussed in more detail. The Journal of Analytical Psychology, 39(4), 419–461. New York: McGraw Hill. 3 Explanations of attachment; 3. But it does get better. Health psychology is one of the most rapidly developing fields in contemporary academic psychology. A growing form of damage containment is psychological damage "Containment" has been defined as a global commitment to resist . Describe self-awareness, self-discrepancy, and self-affirmation theories, and their interrelationships. Differentiate the various types of self-awareness and self-consciousness. ' Containment in this instance is a stopping of the spread of contamination and a controlling of infection or epidemic. 
Learn vocabulary, terms, and more with flashcards, games, and other study tools. Other /More definition: Containment theory is a theory positing that every person possesses a containing external structure and a protective internal structure, both of which provide Containment measures only have a value if they are observed completely and at all times without breaking the containment, even during the filtration of the dust-laden process air. What is Projection. Use it as fuel, as ammunition. Amusement rides thrill us by accelerating our bodies. The act or condition of containing. Such enclosures are usually dome-shaped and made of steel-reinforced concrete. A short summary of containment theory is offered, and a foundation for the incorporation of catharsis theory is then laid. - 1 - 1. Further information and guidance on their application to work activities can be found Health Cost Containment and Efficiency Strategies Strategy Cost Containment Strategy and Logic Target of Cost Containment Evidence of Effect on Costs 1. You can also find multiple synonyms or similar words on the right of Containment. The communist Soviet Union and the capitalist West stood toe-to-toe with each other in Germany, especially in the capital, Berlin, As chief financial officers go their merry way, indiscriminately picking low-hanging fruit, underwriters gasp in silent horror as their meager funds for education and travel are sacrificed at the alter of "cost containment" A functional definition of pennywise and poundfoolish. The concept of containment; Attachment theory; Types of attachment; Communication; Fundamentals of communication; Communication and complex disability; Interaction; Challenges for children; Using Quality Circles; Your role; Managing change; Challenging behaviour; Physical intervention; Positive handling; Level D introduction; The causes of challenging behaviour Cost containment is a process of judiciously reducing costs in a business or limiting them to a constant level. A gas-tight shell or other enclosure around a nuclear reactor to confine fission products that otherwise might be released to the atmosphere in the event of an accident. Download free CBT worksheets, handouts, and self-help guides. The cost containment process is an important management function that helps keep costs down to only necessary and intended expenses in order to satisfy financial targets. INNER CONTAINMENT AND DELINQUENCY some boys develop and maintain non-delinquent patterns of behavior even though they appear to be experiencing the same environmental adversities as others who do become involved in delinquency. ) S. Aug 28, 2007 · The definition of oil containment booms is, 'a temporary floating barrier used to contain an oil spill. The containment theorists' answer is that when there are external forces pushing or pulling a boy Mar 18, 2015 · Emotional Containment When one is able to be with their emotions, it will mean that they have the ability to contain them; how they feel is not being expressed and it is not being denied either. Asked in History of the United Definition of ring containment in the Definitions. Containment theory is a forerunner of contemporary control theories (e. Each brief describes 1) cost containment strategy and logic; 2) the target; 3) rela-tion to the federal health reform law; 4) state and non-state Secondary Containment. 
Jun 03, 2019 · In some areas of psychology (especially in psychodynamic theory), psychologists talk about "defense mechanisms," or manners in which a person behaves or thinks in certain ways to better Psychology definition for Containment in normal everyday language, edited by psychologists, professors and leading students. Containment has been identified as primary task in residential child care (Ward 1995, Woodhead Emotional Containment When one is able to be with their emotions, it will mean that they have the ability to contain them; how they feel is not being expressed and it is not being denied either. In the box they are contained. Related pages within A Guide to Psychology and its Practice: Anger Looking for definition of Containment? Containment explanation. Kennan argued that to stop the expansion of Soviet communism, America should pursue a policy of 'containment. In Gestalt theory, a lot of distress and problems in human functioning occur because people get stuck between the various stages, unable to complete their gestalt. The containment of something dangerous or unpleasant is the act or process of keeping it under control within a particular area or place. Moscow, not Washington, would define the issues, would make the challenges, would select . containment measures and protocols that are applied to limit contact of organisms, particularly PATHOGENS and GMOs, with the external environment: physical containment, achieved by regulating access, restricting air circulation and providing other physical barriers; see BIOLOGICAL CONTAINMENT. Each curve, drop, loop, launch, or brake alters the rider's state of being, triggering inertial resistance -- the feeling that your body is headed in one direction while the ride is pulling you somewhere else. r. integrating means gradually abandoning the previous collective definitions of will explore the paradoxical nature of boundaries and containment and their role in anyway. According to psychologist John Bowlby, the earliest bonds formed by children with their parents (caregivers) have an important impact that continues throughout their life. Control theory provides an explanation for how behavior conforms to that which is generally expected in society. . This section is followed by a description of the four included empirical papers. Typically, they pop up when you don't expect them and they can happen at very inconvenient times. Items 1 - 9 of 9 Planning for damage containment is as an element of mitigation and actually . Start studying Containment Theory. But by this way I can't figure out how to prove it. Page 7 of 12 Encyclopedia of Criminological Theory: Reckless, Walter C. work of Reckless. Proof of Set containment by definition of Equality of Sets. Of hurt and hope and love. (in a nuclear power plant) an enclosure completely surrounding a nuclear reactor, designed to prevent the release of radioactive material in the event of an accident. In (ed. Problem effects have to be restrained and prompt action is important. Classic thinking teaches us of the four doors of the mind, which everyone moves through according to their need. Equalizing Health Provider Rates: All-Payer Rate Setting. How acceleration affects containment and restraint. For example, you might need to concentrate in a class and can't afford to be distracted by negative memories, or you might just be drained from thinking about negative memories all day long and could benefit from thinking about them for only a short time each day instead. 
the act of controlling or limiting something or someone harmful: 2. When performed properly, cost containment can ensure or increase profitability without undue difficulty created for those performing the job. Emphasis is on documented and fiscally calculated results, along with results that affect budgets, coverage, quality, prevention and wellness. What does containment mean? Proper usage and audio pronunciation (plus IPA phonetic transcription) of the word containment. Bion's theory of containing originates Containment is a geopolitical strategic foreign policy pursued by the United States. A 8-mark "evaluate" question awards 4 marks for AO1 (Describe) and 4 marks for AO3 (Evaluate). "Perhaps the greatest faculty our minds possess is the ability to cope with pain. A policy of checking the expansion or influence of a hostile power or ideology, as by the creation of strategic CONTAINER-CONTAINED. Chapter 2. A White Paper. net dictionary. 4 The Strange Situation Developmental Psychology Cognitive Development Aidan Sammons psychlotron. Contact me today, if you would like to learn more. image schemas related to containment will be pure static relations in timeless As Dewell ( 2005 ) suggested, the static, abstract definition of a container as. What does ring containment mean? Information and translations of ring containment in the most comprehensive dictionary definitions resource on the web. In other words, human functioning must be understood both in terms of the behaviors themselves, as well as the contextual containment of human functioning. Jeremiah Reuter. Pschology. Some control theories emphasize the National Conference of State Legislatures 3 Strategy Cost Containment Strategy and Logic Target of Cost Containment Evidence of Effect on Costs 7. (1959). Definition of containment: Limitation or restriction of the harmful consequences of a negative event. A central premise of containment theory, as with all control theories of crime and delinquency, is that behavior must be controlled, and in the absence of appropriate controls, people will deviate. Previous theory building emanating from the study at the centre of this paper established the utility of containment theory in making sense of physical restraint and informing residential child-care practice (Steckley, 2010). 2 Animal Studies; 3. Self-esteem should be like a tree, deeply rooted in -Psychology term which means, "Unified whole" -Refers to theo… -The eye differentiates an object form its surrounding area -We always want to organize things in groups, for it is a huma… -Occurs when elements are placed close together Gestalt -Psychology term which means, "Unified whole" Nov 28, 2019 · Gestalt psychology, school of psychology founded in the 20th century that provided the foundation for the modern study of perception. We use cookies to enhance your experience on our website, including to provide targeted advertising and track usage. A within-subject (repeated measures) ANOVA design was used, Mar 25, 2013 · However, I feel that this concept should become important in psychology too, and be seen as an essential prerequisite of overall well-being. The features selected will be listed in the Features list of the form shown below. 3. The great uroboric round breaks open and light is born into the world. Journal of Personality and Social Psychology, 16, 148-156. 1 Working memory model; 2. The self-concept is an important term for both social and humanistic psychology . by Geoff Mulligan . 
Meaning, pronunciation, picture, example sentences, grammar, usage notes, synonyms and more. diplomat George F. 1Caregiver-Infant Interactions; 3. 2 By: Megan Ortiz . What I did is to follow exactly the definition of Equation of Sets, meaning I tried to prove both directions of equation: I managed to prove (1) easily, but had problems with (2) and even could find a contradiction for Biological containment level measures: Guidance information Version 1. Kennan in 1947, the policy stated that communism needed to be contained and isolated, or else it would spread to neighboring countries. Vol. ySecond-Dimension ² indirect manipulation of rules to shape the outcome of competition ² the power to design the rules. Kai Erikson Studied Puritans and found that even they created deviance. Type of Facility Secondary Containment Rule Section(s) All Facilities General containment (areas with potential for discharge, such as piping–including flowlines, bulk storage containers, a number of health cost containment and cost efficiency ideas. Psychology Quotes. Psychology Tools creates resources to improve your therapy and save you time. A belief in the innate aggressive tendencies of human beings—that the ability to be aggressive toward others, at least under some circumstances, is part of our fundamental human makeup—is consistent with the principles of evolutionary psychology. No one is stronger or more dangerous than a man who can harness his emotions. Aug 3, 2017 The definition of containment according to the English Oxford Dictionary, is 'the action of keeping something harmful under control or within Aug 15, 2013 The Psychology and Value of Emotional Containment The above advice may seem to run counter to the conventional psychological wisdom. Containment was a foreign policy of the United States of America, introduced at the start of the Cold War, aimed at stopping the spread of Communism. Statement add up in order to web page. The theory starts with the premise that people are essentially Containment is a geopolitical strategic foreign policy pursued by the United States. Administrative Simplification in the Health System Streamlining administra-tive functions in the current health system (e. 5 Eye witness testimony; Past Paper question Memory; 2. Projection is a defense mechanism where a person projects his/her impulses, feelings, habits, and/or traits onto someone else and begins to identify his own traits in that 'someone else'. approach to containing, or preventing, the spread of Communism after World War II. 2. Containment - finding the psychological space for change James Barrett A key lesson from psychology is that productive work is healthy and containing for people – it produces a virtuous circle where the satisfaction of being stretched and of doing a job well inspires people to show even more initiative in their roles. Aug 15, 2013 · The Psychology and Value of Emotional Containment. Containment theory refers to a form of control theory which suggests that a series of both internal and external factors contribute to law-abiding behavior. The eleven basic colors have fundamental psychological properties that are universal, regardless of which particular shade, tone or tint of it you are using. an act or policy of restricting the territorial growth or ideological influence of another, especially a hostile nation. 3: Formulations of the person and the social context. 
Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology | Personality: Self concept · Personality testing · Theories · Mind-body problem In psychoanalysis ego strength is the ability of the ego to effectively deal with the demands of the id , the superego , and reality. The containment building is a gas-tight building (shell) or other enclosure around a nuclear reactor and a primary circuit. Citation. Chapter 4: Secondary Containment and Impracticability Determination Table 4-1: Secondary containment provisions in 40 CFR part 112. Attunement and attachment are related in that, mothers/fathers (caregivers) who are available and attuned to their child, 14 In a case such as this, therefore, in which the cost of packing is represented by the payment of financial compensation for the loss of containers, to be determined and paid separately after the imported goods have been consumed, it is necessary to adjust ex post facto the value of those goods for customs purposes and to recover the outstanding amount of customs duty under Article 2 of A cross-sector, multidisciplinary, and geographically spread panel of 47 experts representing mental health, youth care, juvenile justice, and education in Flanders participated i 22 hours ago · This definition feels unintuitive and I would have thought the direction of containment would go in the other direction: $\mathcal{A} = \mathcal{F}_0 \supseteq \mathcal{F}_m \supseteq \mathcal{F}_n$. The capacity for self -observation in psychotherapy. Jan 17, 2018 · Ive received my SQFI Corrective Action Report, and I need to submit: -Responsible Person -Root Cause -Corrective Action -Containment Plan -Preventative Action Im a little overwhelmed, and I cant find a definition of what a Containment Plan needs to includeI think Ive covered the Responsible person, root cause an Oct 14, 2019 · Read the contents of the August issue of HFMA's Healthcare Cost Containment newsletter. It is loosely related to the term cordon sanitaire which was later used to describe the geopolitical containment of the Soviet Union in the 1940s. to color psychology. , a system's ability to plan and prepare for, and absorb and adapt to, a new situation. His past…. That is, the attributes of the whole are not deducible from analysis of the parts in isolation. I know at my clinic there are a few therapists with good knowledge of T1D and psychology, so you could always talk to them too. Journal of Personality and Social Psychology, 88 (6), 969-983. Unlike most criminology theories that purport to explain why people offend, control theory offers the justification for why people obey rules. Those results make up continuous key phrases utilizing hardly any clear outlines and / or smashes. Containment definitions can be used in 2x, 3x or 3x Rough Milling operations. ACKNOWLEDGMENTS.   Although feeling your feelings is essential to trauma therapy, it The definition of containment according to the English Oxford Dictionary, is 'the action of keeping something harmful under control or within limits. Definition of containment in the AudioEnglish. The term was selected because of its widespread use by and relevance to the different professionals who work in general practice. Containment is provided in seemingly simple ways, through the rhythms, routines, boundaries and activities—and, most importantly, within key relationships (including but not always key-worker relationships) and a wider network of relationships within the home (Emond et al. 
Dictionary Term of the Day Articles Subjects Interim containment actions are a "first aid" that protects the customer from the problem until we define the root cause and implement permanent corrective actions. Title 40 of the Code of Federal Regulations (CFR) part 264 2006 Uniform Fire Code (UFC) in standard 60. Containment is an essential aspect of the purification process. , self‐control theory) aimed at explaining deviance and delinquency among youth populations. A Web Quest could incorporate the use of positive and negative exemplars to guide the categorization process. Containment - the role of a therapeutic milieu A therapeutic milieu can assist the client in holding or containing her painful emotions, allowing her to express internal conflict in a way that can bring about a greater sense of personal responsibility. Containment was a foreign policy strategy devised by George Kennan to prevent the spread of communism during the Cold War. There are various ways in which one adult can offer to another this holding (or containment). Containment collectively names all the things that staff do to prevent conflict events from occurring or seek to minimize the harmful outcomes (e. EPA, UFC and RCRA Secondary Containment requirements come from a variety of sources, with the main source being the Environmental Protection Agency. Find psychology articles, student resources and learn about the theories and perspectives that have shaped the discipline. The Self, besides being the centre of the psyche, is also autonomous, meaning that it exists outside of time and space. In general terms, the process of prisonization involves the incorporation of the norms of prison life into one's habits of thinking, feeling, and acting. This will give one the chance to listen to what is taking place within them and to allow their emotions to guide them. Look it up now! I provide attachment-based trauma counseling in Nashville, Tennessee, using EMDR and other trauma-informed therapies. with regard to objects relation theory, the idea that either the maternal party or the examiner Although the concept of containment is considered a technical term in psychotherapy theory, you experience it in your interactions with a good friend or a close Aug 4, 2016 Without containment we feel out of control, emotions or thoughts threaten to overwhelm us and 'plastic' meaning it can change at any age. 3. Gestalt theory emphasizes that the whole of anything is greater than its parts. (or object) . Short Essay on the Cold War and Containment | Ultius Take 10% OFF— Expires in h m s Use code save10u during checkout. PSYB3 3. Containment is the incorporation of threat into an integrated self-structure, without overwhelming the self. Containment theory is a theory positing that every person possesses a containing external structure and a protective internal structure, both of which provide defense, protection, or insulation against delinquency. Rogers, C. Shiel Jr. A central premise of containment theory, as with all control theories of crime and delinquency, is that behavior must be controlled, and in the absence of appropriate controls Containment Actions. , standard-ized forms and processes, streamlined claims processing, Sep 19, 2012 · Related posts: Needs Satisfaction Cycle Here is a graphic of the Gestalt needs satisfaction cycle. Containment refers to putting away thoughts, feelings, or images, allowing you to feel safe and to get things done without being distracted. 
The well is defined in terms of its pressure containment boundary. A release of excrement from a faulty or overloaded child's diaper. May 20, 2018 Article (PDF Available) in Psychological Science 12(2):141-7 · April . By contrast, the classical model of psychoanalytic theory, like the ideogram in relation to the word, ultimately allows of only one possible form of thought. The act or condition Trauma symptoms can come out of nowhere. And it can be crucial for a patient to be thus held in order to recover, or to discover maybe for the first time, a capacity for managing life and life's difficulties without continued avoidance or suppression. Instead, it will root your sessions firmly in the present while working toward a future in which your current problems have less of an impact on your life (Psychology Today, n. (1994). n. com, a free online dictionary with pronunciation, synonyms and translation. Jun 03, 2019 · In some areas of psychology (especially in psychodynamic theory), psychologists talk about "defense mechanisms," or manners in which a person behaves or thinks in certain ways to better Control Theory in Sociology: Definition & Concept. A psychologist can also help a person to manage other problems that may be associated with the trauma, such as depression, stress, drug and alcohol use, or relationship problems. Cognitive Psychology Cognitive psychology refers to the study of human mental processes and their role in thinking, feeling, and behaving. Dehing, J. First is the door of sleep. org dictionary, synonyms and antonyms. 5 dusts that can reach deep into What is a containment building? A containment building is defined as a hazardous waste management unit that is used to store or treat hazardous waste under the provisions of Subpart DD of parts 264 or 265 (§260. an attempt to keep another…. Sep 11, 2010 · I have clinical depression and pretty much all forms of anxiety and it sucks. In my distress, I clung to the first firm landmark available in my psychological Trauma, almost by definition, breaches people's normal boundaries. It is a concept that was developed within the psychoanalytic tradition by Wilfred Bion . A term coined by Jung as the female counterpoint to what Freud called the oedipus complex, it takes its name from the Greek myth of Elektra who, along with her brother Orestes, avenged the murder of their father, Agamemnon, by killing their mother Clytemnaestra and her lover Aegisthus. org Dictionary. Koch,Psychology: A study of a science. ' Containment booms help to prevent polluting the shore line. With regard to health administrati Definition of containment. regulations and guidance for work with wildtype and genetically modified - biological agents or materials contaminated with such agents. The central question of the theory asks why do people follow the law? Definition of containment noun in Oxford Advanced Learner's Dictionary. Requires suppliers to expedite conformance to quality standards, contain defective products, control the processes, perform redundant inspections, evaluate and improve data and secure cost. Corrosionpedia explains Containment The degree of containment required is directly proportional to the deg ree of toxicity present in the corrosion preventive substance being applied. 6 During the 1970s and early 1980s cost containment strategies focused on direct and indirect controls of health care supply. Containment Theory: Good boys identified with the norms and worked well with frustration while the bad boys were the opposite. 
prove by the definition of Equation Of Sets that if then . This subset of the social control theory involves the strain theory in that it demonstrates an individuals belief in common goals and morals of society, and it shows a lack of means for achieving those goals which in turn encourages deviant behavior as a means of achieving those goals. Containment Policy US foreign policy during Cold War as authorized by American diplomats Kennan, Byrnes, and Acheson to contain communism within the borders of the Soviet Union First 3 steps taken for this policy A passive containment heat removal system, control method thereof and pressurized water reactor; the passive containment heat removal system comprises an outer containment (2), an inner containment (1), an air duct (200) defined between the inner containment (1) and the outer containment (2) having an air inlet (4) and an air outlet (5), a spraying assembly (10) disposed outside the inner cost containment: The process of controlling the expenses required to operate an organization or perform a project within pre-planned budgetary constraints. Help us get better. ' Learn about The containment/nurturance phase of individuation serves the psychological . Through this process, a pressure and "heat" is created that may fuel a process of inner transformation. Baillargeon: innate object knowledge. He is also associate head of research policy and director of the Berlin hub of the European Observatory on Health Systems and Policies. Containment is a term that refers to the tight control of substances, animals, infections, outbreaks, or people. Definition of the Containment acronym term used in manufacturing. Attachment is an emotional bond to another person. S. Definition: Containment is the retention of hazardous material so as to ensure that it is effectively prevented from dispersing into the environment, or released only at an acceptable level. First, we discuss the market for pharmaceuticals and why it does not function as a competitive market and so does not naturally allocate an efficient level of resources to innovation. The Psychological Effects of Incarceration: On the Nature of Institutionalization. Like is added to like: hurt to hurt, anger to anger, joy to joy and loss to loss. 1 Multi-store model of memory; 2. The containment is the most characteristic structure of a nuclear power plant. The definition of Containment is followed by practically usable example sentences which allow you to construct you own sentences based on it. 8. of containment. First lets disassemble the codification of targeted principles to create a clear topographic model in this highly abstract concept. : the act, process, or means of keeping something within limits. Containment is the action or policy of keeping another country's power or area of control within acceptable limits or boundaries. This policy was designed to restrict Soviet expansion. 2 Cognitive development Piaget's account of object permanence Piaget (1954) claims that infants do not conceive of objects that have an independent existence, separate from themselves. Trauma Information Pages provides a comprehensive listing of trauma support info, disaster info, and related mental health issues on the Internet. In 1946, U. After World War II (WWII) ended, the Soviet Union took control of most of Eastern Europe, creating client states in countries such as Poland, Czechoslovakia, and East Germany. 
BMC Systems Biology Proceedings of the 29th International Conference on Genome Informatics (GIW 2018): systems biology Laplacian normalization and bi-random walks on heterogeneous networks for predicting lncRNA-disease associations Yaping Wen1, Guosheng Han1 & Vo V. Anh1,2 BMC Systems Biology volume 12, Article number: 122 (2018) Evidence has increasingly indicated that lncRNAs (long non-coding RNAs) are deeply involved in important biological regulation processes leading to various human complex diseases. Experimental investigation of these disease-associated lncRNAs is slow and costly. Computational methods to infer potential associations between lncRNAs and diseases have therefore become an effective way to pinpoint candidates prior to experimental verification. In this study, we develop a novel method for the prediction of lncRNA-disease associations using bi-random walks on a network merging the similarities of lncRNAs and diseases. In particular, this method applies a Laplacian technique to normalize the lncRNA similarity matrix and the disease similarity matrix before the construction of the lncRNA similarity network and disease similarity network. The two networks are then connected via existing lncRNA-disease associations. After that, bi-random walks are applied on the heterogeneous network to predict the potential associations between the lncRNAs and the diseases. Experimental results demonstrate that the performance of our method is highly comparable to or better than the state-of-the-art methods for predicting lncRNA-disease associations. Our analyses on three cancer data sets (breast cancer, lung cancer, and liver cancer) also indicate the usefulness of our method in practical applications. Our proposed method, including the construction of the lncRNA similarity network and disease similarity network and the bi-random walks algorithm on the heterogeneous network, can be used to predict potential associations between lncRNAs and diseases. Long non-coding RNAs (lncRNAs) form a new class of important ncRNAs, with lengths longer than 200 nt [1–3]. Accumulating evidence has indicated that a large number of lncRNAs play critical roles in many important biological processes such as chromatin modification, transcriptional and post-transcriptional regulation, genomic splicing, differentiation, immune responses, and cell cycle control [1–4]. Mutations and dysregulations of these lncRNAs have been found to be linked to the development and progression of various complex human diseases [2, 3]. Computational models have been developed to predict potential associations between lncRNAs and diseases. Chen et al. [4] assumed that functionally similar lncRNAs tend to associate with similar diseases and vice versa. Based on this assumption, Chen et al. [4] proposed a method of Laplacian regularized least squares for lncRNA-disease association (LRLSLDA) to infer human lncRNA-disease associations. LRLSLDA calculates the Gaussian interaction profile kernel similarity for both diseases and lncRNAs based on known lncRNA-disease associations, computes the lncRNA expression similarity as the Spearman correlation coefficient between each lncRNA pair, and then utilizes Laplacian regularized least squares in the lncRNA space and disease space to combine the optimal classifiers in these spaces to identify potential associations. LRLSLDA is a semi-supervised classification algorithm that does not require negative training samples. 
However, a major issue of LRLSLDA is how to combine the two classifiers and how to select suitable parameters. Chen et al. [5] developed two novel calculation models for lncRNA functional similarity (LNCSIM). Chen et al. [6] proposed a fuzzy measure-based lncRNA functional similarity computational model (FMLNCSIM). Chen et al. [7] introduced a model based on the KATZ measure to predict potential lncRNA-disease associations. Based on the fact that non-coding genes often cooperate in human diseases, Peng et al. [8] proposed a new vector representation of diseases, and applied the newly vectorized data to a positive-unlabeled learning algorithm to predict and rank disease-related lncRNAs. Ding et al. [9] proposed a model based on lncRNA-disease-gene tripartite graphs (TPGLDA), which includes gene-disease associations and lncRNA-disease associations, and then applied a resource-allocation process on the tripartite graphs to infer potential lncRNA-disease associations. However, TPGLDA only focuses on unweighted tripartite graphs. Some models predict novel associations without referring to known associations between lncRNAs and diseases. Chen [10] proposed a model of hypergeometric distribution for lncRNA-disease association (HGLDA) to predict potential lncRNA-disease associations. Zhou et al. [11] proposed a rank-based method called RWRHLD, which integrates the miRNA-lncRNA association network, the disease-disease similarity network and the known lncRNA-disease association network into a heterogeneous network and implements a random walk with restart on this heterogeneous network to predict novel lncRNA-disease associations. However, RWRHLD cannot be applied to lncRNAs without a known miRNA interaction partner. Some computational models have been applied to predict lncRNA-disease associations based on random walks on networks. Chen et al. [12] considered the limitations of the traditional random walk with restart (RWR), and proposed a model of improved random walk with restart (IRWRLDA) to predict lncRNA-disease associations. Sun et al. [13] proposed a method called RWRlncD based on a global network to predict potential lncRNA-disease associations. However, RWRlncD only considers lncRNAs which have known associations with the disease and ignores lncRNAs that are currently not associated with the disease. Considering the differences in the network topology of lncRNAs and diseases, Gu et al. [14] proposed a random walk model on global networks for predicting lncRNA-disease associations (GrwLDA). Yu et al. [15] proposed a model that performs bi-random walks to predict lncRNA-disease associations (BRWLDA). However, BRWLDA only considers the semantic similarity of the disease, and the transition probability between diseases is only empirically estimated. In this study, we propose a novel computational model of Laplacian normalization and bi-random walks on heterogeneous networks for predicting lncRNA-disease associations (Lap-BiRWRHLDA). Firstly, the method calculates the Gaussian interaction profile kernel similarity of lncRNAs and diseases from known lncRNA-disease associations. Next, we integrate the two sources of similarity to construct an lncRNA-lncRNA similarity network. The disease-disease similarity network is constructed from the profile kernel similarity of diseases. Subsequently, we perform Laplacian normalization on the similarity matrices of lncRNAs and diseases, which serve as the transition matrices. 
Furthermore, we apply random walks on the lncRNA similarity network and the disease similarity network, respectively. Finally, we use a weighted average of the random walks on both networks as a predictor of lncRNA-disease associations. We believe that lncRNA-disease associations with higher scores are more promising candidates for further verification. To evaluate our proposed method, we utilize leave-one-out cross-validation experiments to demonstrate its superior performance compared with existing approaches. Furthermore, the analyses of three cancers (namely, breast cancer, lung cancer, and liver cancer) effectively support the practical application of our method. We then use Lap-BiRWRHLDA to infer potential lncRNA-disease associations. Some high-score results are successfully verified by the LncRNADisease and Lnc2Cancer databases. Leave-one-out cross-validation To assess the performance of our proposed method, we use leave-one-out cross-validation (LOOCV). We leave out each known lncRNA-disease association in turn as the test sample, while the other known relationships are used as training samples and all unknown relationships are taken as candidate samples. Since disease similarity and lncRNA similarity depend on the Gaussian interaction profile kernel similarity of the known lncRNA-disease associations, the disease similarity and lncRNA similarity will change when we delete a known lncRNA-disease association, so we obtain different similarities in each fold. A receiver-operating characteristics (ROC) curve is applied to determine the predictive performance, which plots the true-positive rate (TPR, i.e. sensitivity) against the false-positive rate (FPR, i.e. 1 − specificity) at different thresholds. Sensitivity represents the percentage of the left-out associations achieving a ranking higher than a given threshold; specificity means the percentage of candidate associations achieving a ranking lower than this given threshold. By varying the threshold, we obtain the corresponding TPRs and FPRs; in this way, the ROC curve is drawn and the AUC is calculated. As a result, Lap-BiRWRHLDA achieved AUCs of 0.8409, 0.8527 and 0.8429 for the three datasets used, respectively. The effect of parameters in Lap-BiRWRHLDA Parameter α controls the probability of the random walk restart. To optimize the parameter α, we increased α from 0.1 to 1 with step size 0.1, and then calculated the corresponding AUC value by LOOCV. After experimental verification, we chose α=0.9 and achieved an AUC value of 0.8409. The experimental results indicate that Lap-BiRWRHLDA offers better performance on the LncRNADisease dataset on October 2012 when α=0.9 is selected. Similarly, we achieved AUC values of 0.8527 (α=0.2) and 0.8429 (α=0.8) based on the Lnc2Cancer dataset on July 2016 and the LncRNADisease dataset on April 2016, respectively. Performance comparison with other methods We compared Lap-BiRWRHLDA with previously published methods in LOOCV based on the LncRNADisease dataset on October 2012. (1) LRLSLDA [4] computes the Gaussian interaction profile kernel similarity for both diseases and lncRNAs from known lncRNA-disease associations and lncRNA expression profiles, and then applies the framework of Laplacian regularized least squares to identify potential associations. (2) GrwLDA [14] predicts potential associations by a random walk model on global networks. The comparison is shown in Fig. 1. 
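To make the LOOCV procedure described above concrete, the following minimal Python sketch removes one known association at a time, recomputes the prediction scores, and accumulates labels and scores for the ROC/AUC computation with scikit-learn. It is an illustration only: the function lap_birw_predict is a hypothetical stand-in for the full Lap-BiRWRHLDA pipeline, and the variable names are assumptions rather than the authors' code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def loocv_auc(A, lap_birw_predict):
    """A: binary disease-by-lncRNA association matrix.
    lap_birw_predict: callable mapping an association matrix to a score matrix RT."""
    labels, scores = [], []
    candidates = np.argwhere(A == 0)                 # unknown pairs act as candidate samples
    for (i, j) in np.argwhere(A == 1):
        A_train = A.copy()
        A_train[i, j] = 0                            # leave one known association out
        RT = lap_birw_predict(A_train)               # similarities and scores recomputed per fold
        labels.append(1); scores.append(RT[i, j])    # the left-out association is the positive
        labels.extend([0] * len(candidates))
        scores.extend(RT[c[0], c[1]] for c in candidates)
    return roc_auc_score(labels, scores)
```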
We also compared our method, LRLSLDA and GrwLDA in LOOCV based on the LncRNADisease dataset on April 2016. The comparison is shown in Fig. 2. Figure 3 shows the comparison of Lap-BiRWRHLDA, LRLSLDA and GrwLDA in LOOCV based on the Lnc2Cancer dataset on July 2016. These comparisons consistently indicate a better performance of our method over the state-of-the-art methods for predicting lncRNA-disease associations. Performance comparison between Lap-BiRWRHLDA, LRLSLDA and GrwLDA based on the LncRNADisease dataset on October 2012 Performance comparison between Lap-BiRWRHLDA, LRLSLDA and GrwLDA based on the LncRNADisease dataset on April 2016 Performance comparison between Lap-BiRWRHLDA, LRLSLDA and GrwLDA based on the Lnc2Cancer dataset on July 2016 We also compared the LncRNADisease dataset on April 2018 with the LncRNADisease dataset on October 2012, then selected 50 lncRNA-disease associations which were unverified in the LncRNADisease dataset on October 2012 but were verified in the LncRNADisease dataset on April 2018. We compared our method with LRLSLDA and GrwLDA by independently testing the ranking of these 50 relationships. Through experimental tests, our method ranks 30 of them higher than the LRLSLDA method and 41 of them higher than the GrwLDA method. To further highlight the performance of Lap-BiRWRHLDA, we studied the predictive performance on three cancers: breast cancer, lung cancer, and liver cancer. For each type of cancer, we take the top 10 most probable lncRNAs as candidates associated with this cancer. Next, we manually checked these lncRNAs by mining biomedical literature from the LncRNADisease dataset and the Lnc2Cancer dataset. Breast cancer is the second leading cause of female cancer deaths, comprising 22% of all cancers in women [16, 17]. Lap-BiRWRHLDA identifies potential lncRNAs associated with breast cancer, and six of the top 10 are verified by the recent LncRNADisease dataset. The list in Table 1 shows the lncRNAs associated with breast cancer. Lung cancer shows one of the fastest increases in morbidity and mortality and is one of the greatest threats to human health and life. In the past 50 years, many countries have reported that the incidence and mortality of lung cancer have risen significantly. The incidence and mortality of male lung cancer rank first among all malignant tumors. Lap-BiRWRHLDA identifies eight out of the top 10, which are verified (see Table 2). Liver cancer is the fifth most commonly diagnosed cancer and the second most frequent cause of cancer deaths in men worldwide [18, 19]. Lap-BiRWHLDA correctly identifies five liver-cancer-related lncRNAs. Table 3 lists the lncRNAs related to liver cancer. From these case studies, we can conclude that Lap-BiRWRHLDA is a powerful tool for predicting lncRNA-disease associations with a high level of reliability. Table 1 Breast cancer associated lncRNAs in the top 10 ranking list of Lap-BiRWRHLDA Table 2 Lung cancer associated lncRNAs in the top 10 ranking list of Lap-BiRWRHLDA Table 3 Liver cancer associated lncRNAs in the top 10 ranking list of Lap-BiRWRHLDA Accumulated experimental evidence has shown that lncRNAs play an important role in the mechanisms of complex human diseases, and that mutations or disorders of lncRNAs are associated with various complex diseases. More and more evidence shows that it is crucial to propose an effective computational model to infer potential lncRNA-disease associations. 
In this article, we proposed a novel computational model of Laplacian normalization and bi-random walks on heterogeneous networks for predicting lncRNA-disease associations. Our method shows better performance in LOOCV experiments by comparison with previous methods. In the experiment on 50 previously unverified lncRNA-disease associations, we compared our method with LRLSLDA and GrwLDA, and the results indicated that our method achieves higher rankings. Furthermore, the case studies of breast cancer, lung cancer, and liver cancer show that our method improves the performance of predicting potential relationships. Although our method can improve the prediction accuracy, it still has some limitations. For example, construction of the disease-disease similarity matrix relies on the Gaussian interaction profile kernel similarity matrix for diseases derived from the known disease-lncRNA associations. In further work, we will improve our method in the following aspects: Firstly, Lap-BiRWRHLDA relies on the calculation of similarity matrices when constructing an lncRNA similarity network, and so the incompleteness of data may affect the final performance. Therefore, the integration of gene-disease association data or the addition of more bioinformatics data may improve the performance of our method. These aspects have been considered in previous methods such as TPGLDA [9] and BRWLDA [15]. Secondly, the bi-random walk algorithm performs random walks with restart on the lncRNA similarity network and the disease similarity network separately; how to better integrate the random walks on the two networks is an issue for our future research. In this study, we proposed a method called Lap-BiRWRHLDA to predict the relationship between lncRNAs and diseases. This model utilizes the Laplacian normalization of the lncRNA similarity matrix and the disease similarity matrix. It then constructs a heterogeneous network based on the lncRNA similarity network, the disease similarity network and the available lncRNA-disease associations. Next, it applies bi-random walks on the heterogeneous network to predict potential associations between lncRNAs and diseases. Our method can be used to better identify potential associations between lncRNAs and diseases. The good results of our method are mainly due to two factors. On the one hand, we exploit the similarity of lncRNAs by integrating the Gaussian interaction profile kernel similarity of lncRNAs and the lncRNA expression similarity, and then apply Laplacian normalization. We also rely on the lncRNA similarity matrices to construct an lncRNA similarity network. On the other hand, the bi-random walk algorithm simulates random walks with restart on the lncRNA similarity network and the disease similarity network; we then infer the relationship between lncRNAs and diseases by weighted averaging. We believe that the higher the score of a potential lncRNA-disease relationship is, the higher the probability of association is. We downloaded three data sets of lncRNA-disease associations from the supplementary files of published articles [4, 8], which contain 293 experimentally confirmed lncRNA-disease relationships between 167 diseases and 118 lncRNAs from the LncRNADisease database on October 2012 [4], 454 known lncRNA-disease associations between 162 diseases and 187 lncRNAs from the LncRNADisease database on April 2016, and 594 lncRNA-disease associations between 79 diseases and 310 lncRNAs from the Lnc2Cancer database on July 2016 [8]. 
The adjacency matrix of lncRNA-disease associations is denoted as A, where the value A(i,j) of row i and column j is 1 if disease d(i) is related to lncRNA l(j), otherwise it is 0. Let L={l(1),l(2),⋯,l(nl)} denote the set of lncRNAs, and D={d(1),d(2),⋯,d(nd)} denote the set of diseases. We also downloaded lncRNA expressions and the gene expression levels from the supplementary files of the published articles [4, 8], which contain 21626 expression profiles across 22 human tissues or cell types and 60245 gene expression levels in 16 tissues. Let L1 denote the set of lncRNAs with available expression profiles (L1⊆L). According to the previous approaches [4], if l(i), l(j)∈L1, we calculated the Spearman correlation coefficient of l(i) and l(j) as the lncRNA expression similarity. The lncRNA expression similarity matrix is represented by matrix SPC, where SPC(l(i),l(j)) is the expression similarity between l(i) and l(j) if they belong to L1, otherwise 0. Laplacian normalization Suppose that M=M(i,j),i,j=1,2,⋯,N, is a symmetric matrix, and D is a diagonal matrix of which D(i,i) is the sum of row i of M and D(i,j)=0 for i≠j. M is normalized by \(\hat {M} =D^{-1/2}MD^{-1/2}\), which also yields a symmetric matrix. The elements of \( \hat {M}\) are defined by $$ \hat{M}(i,j)=\frac{M(i,j)}{\sqrt{D(i,i)D(j,j)}} $$ This process is called Laplacian normalization of M. It is often used to normalize a weighted matrix of a network [4, 20, 21]. Construction of the lncRNA-lncRNA similarity matrix Based on the assumption that similar diseases tend to show a similar interaction or non-interaction with the lncRNAs, the Gaussian interaction profile kernel similarity of lncRNAs can be calculated from known lncRNA-disease associations [4]. The lncRNA interaction profile IP(l(i)) is a binary vector whose entries are 1 if lncRNA l(i) is related to the corresponding disease and 0 otherwise, defined as the i-th column of the adjacency matrix A of the known lncRNA-disease association network constructed above. Then we can calculate the Gaussian interaction profile kernel similarity of lncRNA l(i) and lncRNA l(j) from their interaction profiles as $$ KL(l(i),l(j))=exp\left(-\gamma_{l}\Vert IP(l(i))-IP(l(j))\Vert^{2}\right), $$ where the parameter γl controls the kernel bandwidth and is calculated based on the new kernel bandwidth parameter \(\gamma _{l}^{\prime } \) as follows: $$ \gamma_{l}=\gamma_{l}^{\prime }/\left(\frac{1}{nl}\sum_{i=1}^{nl}\Vert IP(l(i))\Vert^{2}\right), $$ where nl denotes the number of lncRNAs. For simplicity we set \(\gamma _{l}^{\prime }=1\) as in the previous works [4, 22]. Following previous approaches [4], we construct the similarity of lncRNAs by combining the lncRNA expression similarity and the Gaussian interaction profile kernel similarity. We denote by SL the lncRNA similarity matrix, where the element SL(i,j) defines the similarity between lncRNA l(i) and lncRNA l(j) as $$ {\begin{aligned} SL(l(i),l(j))=\left\{ \begin{array}{cc} ew\cdot {SPC(l(i),l(j))}+(1-ew)\cdot {KL(l(i),l(j))}, &\text{if \({l(i)},{l(j)}\in L_{1}\)} \\ KL(l(i),l(j)), & \text{otherwise} \end{array} \right. \end{aligned}} $$ where SPC(l(i),l(j)) represents the expression profile similarity of lncRNA l(i) and lncRNA l(j), and its value is the Spearman correlation coefficient of lncRNA l(i) and lncRNA l(j), so the matrix SPC is a symmetric matrix. KL(l(i),l(j)) represents the Gaussian interaction profile kernel similarity of lncRNA l(i) and lncRNA l(j), so the matrix KL is also a symmetric matrix. 
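As a concrete illustration of Eqs. (2)–(4), the following NumPy sketch computes the Gaussian interaction profile kernel similarity and combines it with the expression similarity. It is only an illustration under assumed variable names (A for the disease-by-lncRNA adjacency matrix, SPC for the Spearman expression similarity, has_expr for membership in L1), not the authors' implementation.

```python
import numpy as np

def gaussian_profile_kernel(profiles, gamma_prime=1.0):
    """profiles: matrix whose rows are interaction profiles (one row per lncRNA or disease)."""
    n = profiles.shape[0]
    gamma = gamma_prime / (np.sum(profiles ** 2) / n)        # Eq. (3): bandwidth from the mean squared norm
    sq_norms = np.sum(profiles ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * profiles @ profiles.T
    return np.exp(-gamma * np.clip(sq_dists, 0.0, None))     # Eq. (2)

def lncrna_similarity(A, SPC, has_expr, ew=0.5):
    """Combine expression similarity and kernel similarity as in Eq. (4)."""
    KL = gaussian_profile_kernel(A.T)                         # columns of A are lncRNA profiles
    both_in_L1 = np.outer(has_expr, has_expr)                 # pairs that both have expression profiles
    return np.where(both_in_L1, ew * SPC + (1.0 - ew) * KL, KL)
```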
Therefore the lncRNA similarity matrix SL is a symmetric matrix. In Eq. (4), ew is the weight coefficient of the lncRNA expression similarity; for simplicity we set ew=1/2. Next, using Laplacian normalization, the element SL(i,j) is calculated through two steps: $$ LL^{\prime }(i,j)=\left\{ \begin{array}{cc} \frac{SL(i,j)}{\sqrt{\sum_{i}SL(i,j)\sum_{j}SL(i,j)}}, &\text{if \(SL(i,j)\neq 0\)} \\ 0, & \text{otherwise} \end{array} \right. $$ $$ LL(i,j)=\left\{ \begin{array}{cc} \frac{LL^{\prime }(i,j)}{\sum_{j}LL^{\prime }(i,j)}, &\text{if \(SL(i,j)\neq 0\)} \\ 0, & \text{otherwise} \end{array} \right. $$ Construction of the disease-disease similarity matrix Similar to lncRNAs, the Gaussian interaction profile kernel similarity of diseases can be constructed as $$ KD(d(i),d(j))=exp\left(-\gamma_{d}\Vert IP(d(i))-IP(d(j))\Vert^{2}\right). $$ Here IP(d(i)) is defined as the i-th row of the adjacency matrix A of the known lncRNA-disease associations. It is a binary vector representing the relationship between disease d(i) and each lncRNA. The Gaussian interaction profile kernel similarity matrix KD is a symmetric matrix. The parameter γd is calculated as $$ \gamma_{d}=\gamma_{d}^{\prime }/\left(\frac{1}{nd}\sum_{i=1}^{nd}\Vert IP(d(i))\Vert^{2}\right), $$ where nd denotes the number of diseases; for simplicity we set \(\gamma _{d}^{\prime }=1\) as in the previous works [4, 22]. Following relevant research [4, 19], to improve the predictive accuracy of disease similarity, we apply a logistic function transformation to represent the similarity of diseases. The disease similarity is redefined as $$ SD(d(i),d(j))=\frac{1}{1+\exp\left(c\cdot KD(d(i),d(j))+d\right)}, $$ where c and d are two parameters, for which we adopt the same parameter selection as in the previous studies [4, 19], i.e. c=−15, d=log(9999). The disease similarity matrix SD is a symmetric matrix. Next, using Laplacian normalization, the element SD(i,j) is calculated through two steps: $$ LD^{\prime }(i,j)=\left\{ \begin{array}{cc} \frac{SD(i,j)}{\sqrt{\sum_{i}SD(i,j)\sum_{j}SD(i,j)}}, &\text{if \(SD(i,j)\neq 0\)} \\ 0, & \text{otherwise} \end{array} \right. $$ $$ LD(i,j)=\left\{ \begin{array}{cc} \frac{LD^{\prime }(i,j)}{\sum_{i}LD^{\prime }(i,j)}, &\text{if \(SD(i,j)\neq 0\)} \\ 0, & \text{otherwise} \end{array} \right. $$ Construction of the heterogeneous network We first use the two matrices LD and LL to construct two networks, namely a disease similarity network and an lncRNA similarity network. In the lncRNA similarity network, the edge between l(i) and l(j) is weighted by the similarity value of these two lncRNAs. Likewise, in the disease similarity network, the edge between d(i) and d(j) is weighted by the similarity value of these two diseases. In addition, the lncRNA-disease association network can be modeled as a bipartite graph. In this graph, the heterogeneous nodes correspond to either an lncRNA or a disease, and edges denote the presence or absence of the associations between them. If there is a known association between disease d(i) and lncRNA l(j), the weight of the edge is 1; otherwise it is 0. We divide the nodes of the heterogeneous network into two types. Those nodes connecting the lncRNA similarity network with the disease similarity network are called bridging nodes, and the other nodes are named internal nodes [21]. The heterogeneous network can be constructed by connecting the lncRNA similarity network and the disease similarity network via the known lncRNA-disease associations. 
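A minimal NumPy sketch of the two-step normalization in Eqs. (5)–(6) and (10)–(11): a symmetric Laplacian normalization followed by a row (or column) normalization that turns the similarity matrix into a stochastic transition matrix. This is an illustration under assumed conventions, not the authors' code.

```python
import numpy as np

def laplacian_normalize(S):
    """Symmetric Laplacian normalization: S_hat = D^(-1/2) S D^(-1/2)."""
    d = S.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return S * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def row_normalize(M):
    """Second step: divide each row by its sum so that non-zero rows sum to one."""
    r = M.sum(axis=1, keepdims=True)
    return np.divide(M, r, out=np.zeros_like(M, dtype=float), where=r > 0)

# Example usage: LL = row_normalize(laplacian_normalize(SL)); LD is obtained analogously,
# except that Eq. (11) divides by column sums, i.e. row_normalize(laplacian_normalize(SD).T).T.
```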
A simple example of a heterogeneous network is illustrated in Fig. 4. An illustrative example of a heterogeneous network. The squares indicate the nodes of diseases, and the edges between disease nodes describe the weights determined by the similarity value between diseases. The circles indicate the nodes of lncRNAs, and the edges between lncRNA nodes describe the weights determined by the similarity value between lncRNAs. The edges between diseases and lncRNAs indicate the known lncRNA-disease associations, and the dashed lines indicate the predicted potential lncRNA-disease relationships Lap-BiRWHLDA In this study, we develop a novel computational method called Lap-BiRWHLDA to predict human lncRNA-disease associations. Figure 5 shows the flowchart of Lap-BiRWHLDA. Firstly, lncRNA similarity and disease similarity can be calculated based on the known lncRNA-disease associations taken from the LncRNADisease database. Secondly, the global heterogeneous network is built by combining the lncRNA similarity network, the disease similarity network and the lncRNA-disease association network. Finally, the bi-random walk algorithm is performed on the heterogeneous network to obtain the association probability scores between lncRNAs and diseases. The flowchart of Lap-BiRWHLDA Suppose a random walker can jump from d(1) to l(1) and then to l(2). We can take d(1) as the starting node for the random walk. To simulate this process, we apply a random walk on the lncRNA similarity network. The iterative process can be described as $$ {RT}_{L}^{t}=\alpha {{RT}_{L}^{t-1}}LL+(1-\alpha){Rt}_{0}. $$ Similarly, we can also apply a random walk on the disease similarity network as follows: $$ {RT}_{D}^{t}=\alpha {LD}{RT}_{D}^{t-1}+(1-\alpha){Rt}_{0}, $$ where α is a parameter to control the restart probability for the random walker, \({RT}_{L}^{t}\) is the predicted association between lncRNA l and disease d in the t-th iteration, \({RT}_{D}^{t}\) is the predicted relevance between disease d and lncRNA l in the t-th iteration, with $$ {RT}_{L}^{0}={RT}_{D}^{0}={Rt}_{0}=A/sum(A). $$ After the bi-random walks on the disease similarity network and on the lncRNA similarity network in the t-th step, Lap-BiRWHLDA further combines \( {RT}_{L}^{t}\) and \({RT}_{D}^{t}\) into \(RT^{t}\) as follows: $$ {RT}_{L}^{t}={RT}_{D}^{t}=RT^{t}=\frac{{RT}_{L}^{t}+{RT}_{D}^{t}}{2}. $$ After several steps, when the change between \(RT^{t+1}\) and \(RT^{t}\) is less than \(10^{-10}\), we obtain the steady prediction score matrix RT, where RT(i,j) is the probability of a potential association between disease d(i) and lncRNA l(j). Abbreviations: AUC: area under the ROC curve; Lap-BiRWRHLDA: Laplacian normalization and bi-random walks on heterogeneous networks for lncRNA-disease associations; LOOCV: leave-one-out cross-validation; ROC: receiver-operating characteristics Kapranov P, Cheng J, Dike S, Nix DA, Duttagupta R, Willingham AT, Stadler PF, Hertel J, Hackermüller J, Hofacker IL, Bell I, Cheung E, Drenkow J, Dumais E, Patel S, Helt G, Ganesh M, Ghosh S, Piccolboni A, Sementchenko V, Tammana H, Gingeras TR. RNA maps reveal new RNA classes and a possible function for pervasive transcription. Science. 2007; 316(5830):1484–8. Mercer TR, Dinger ME, Mattick JS. Long non-coding RNAs: insights into functions. Nat Rev Genet. 2009; 10(3):155–9. Wapinski O, Chang HY. Long noncoding RNAs and human disease. Trends Cell Biol. 2011; 21(6):354–61. Chen X, Yan GY. Novel human lncRNA-disease association inference based on lncRNA expression profiles. Bioinformatics. 2013; 29(20):2617–24. Chen X, Yan CC, Luo C, et al. 
Constructing lncRNA functional similarity network based on lncRNA-disease associations and disease semantic similarity. Sci Rep. 2015; 5:11338. Chen X, Huang YA, Wang XS, et al. FMLNCSIM: fuzzy measure-based lncRNA functional similarity calculation model. Oncotarget. 2016; 7(29):45948–58. Chen X. KATZLDA: KATZ measure for the lncRNA-disease association prediction. Sci Rep. 2014; 5:16840. Peng H, Lan C, Liu Y, et al. Chromosome preference of disease genes and vectorization for the prediction of non-coding disease genes. Oncotarget. 2017; 8(45):78901–16. Ding L, Wang M, Sun D, et al. TPGLDA: Novel prediction of associations between lncRNAs and disease via lncRNA-disease-gene tripartite graph. Sci Rep. 2018; 8(1):1065. Chen X. Predicting lncRNA-disease associations and constructing lncRNA functional similarity network based on the information of miRNA[J]. Sci Rep. 2015; 5:13186. Zhou M, Wang X, Li J, et al. Prioritizing candidate disease-related long non-coding RNAs by walking on the heterogeneous lncRNA and disease network. Mol BioSyst. 2014; 11(3):760. Chen X, You ZH, Yan GY, et al. IRWRLDA: improved random walk with restart for lncRNA-disease association prediction. Oncotarget. 2016; 7(36):57919–31. Sun J, Shi H, Wang Z, et al. Inferring novel lncRNA-disease associations based on a random walk model of a lncRNA functional similarity network. Mol BioSyst. 2014; 10(8):2074–81. Gu C, Li XY, Cai LJ, et al. Global network random walk for predicting potential human lncRNA-disease associations. Sci Rep. 2017; 7(1):12442. Yu G, Fu G, Lu C, et al. BRWLDA: bi-random walks for predicting lncRNA-disease associations. Oncotarget. 2017; 8(36):60429–46. Donahue HJ, Genetos DC. Genomic approaches in breast cancer research. Brief Funct Genom. 2013; 12(5):391–6. Karagoz K, Sinha R, Arga KY. Triple Negative Breast Cancer: A Multi-Omics Network Discovery Strategy for Candidate Targets and Driving Pathways. Omics-a J Integr Biol. 2015;19(2). Bosch FX, Ribes J, Borrs J. Epidemiology of Primary Liver Cancer. Sem Liver Dis. 1999; 19(03):271–85. Center MM, Jemal A. International trends in liver cancer incidence rates. Cancer Epidemiol Biomarkers Prev. 2011; 20(11):2362–8. Vanunu O, Magger O, Ruppin E, et al. Associating genes and protein complexes with disease via network propagation. PLoS Comput Biol. 2010; 6(1):e1000641. Zhao ZQ, Han GS, Yu ZG, Li JY. Laplacian normalization and random walk on heterogeneous networks for disease-gene prioritization. Comput Biol Chem. 2015; 57(C):21–28. Van LT, Nabuurs SB, Marchiori E. Gaussian interaction profile kernels for predicting drug-target interaction. Bioinformatics. 2011; 27(21):3036. The authors thank the anonymous referees, especially Prof. Jinyan Li at the University of Technology Sydney, for suggestions that helped improve the paper substantially. Publication of this article was sponsored by the Natural Science Foundation of China (Grant No.11401503), Natural Science Foundation of Hunan Province of China (Grant No. 2016JJ3116), Outstanding Youth Foundation of Hunan Educational Committee (Grant No.16B256). The data sets used in this study are available from the corresponding author on reasonable request. About this supplement This article has been published as part of BMC Systems Biology Volume 12 Supplement 9, 2018: Proceedings of the 29th International Conference on Genome Informatics (GIW 2018): systems biology. The full contents of the supplement are available online at https://bmcsystbiol.biomedcentral.com/articles/supplements/volume-12-supplement-9. 
Yaping Wen and Guosheng Han contributed equally to this work. School of Mathematics and Computational Science, Xiangtan University, Hunan, 411105, China Yaping Wen, Guosheng Han & Vo V. Anh Department of Mathematics, Swinburne University of Technology, PO Box 218, Hawthorn, Vic 3122, Australia Vo V. Anh Yaping Wen Guosheng Han YPW and GSH contributed to the conception and design of the study and developed the method. YPW implemented the algorithms and analyzed the data and results. GSH gave the ideas and supervised the project. YPW wrote the manuscript, and GSH and Vo V. Anh reviewed the final manuscript. All authors read and approved the final manuscript. Correspondence to Guosheng Han. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Wen, Y., Han, G. & Anh, V.V. Laplacian normalization and bi-random walks on heterogeneous networks for predicting lncRNA-disease associations. BMC Syst Biol 12, 122 (2018). https://doi.org/10.1186/s12918-018-0660-0 Bi-random walk Disease similarity network LncRNA similarity network
CommonCrawl
npj breast cancer Connected-UNets: a deep learning architecture for breast mass segmentation Asma Baccouche ORCID: orcid.org/0000-0001-6236-86261, Begonya Garcia-Zapirain2, Cristian Castillo Olea2 & Adel S. Elmaghraby1 npj Breast Cancer volume 7, Article number: 151 (2021) Cancer imaging Breast cancer analysis implies that radiologists inspect mammograms to detect suspicious breast lesions and identify mass tumors. Artificial intelligence techniques offer automatic systems for breast mass segmentation to assist radiologists in their diagnosis. With the rapid development of deep learning and its application to medical imaging challenges, UNet and its variations are among the state-of-the-art models for medical image segmentation that have shown promising performance on mammography. In this paper, we propose an architecture, called Connected-UNets, which connects two UNets using additional modified skip connections. We integrate Atrous Spatial Pyramid Pooling (ASPP) in the two standard UNets to emphasize the contextual information within the encoder–decoder network architecture. We also apply the proposed architecture on the Attention UNet (AUNet) and the Residual UNet (ResUNet). We evaluated the proposed architectures on two publicly available datasets, the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Experiments were also conducted using additional synthetic data generated with the cycle-consistent Generative Adversarial Network (CycleGAN) model between two unpaired datasets to augment and enhance the images. Qualitative and quantitative results show that the proposed architecture can achieve better automatic mass segmentation with a high Dice score of 89.52%, 95.28%, and 95.88% and Intersection over Union (IoU) score of 80.02%, 91.03%, and 92.27%, respectively, on CBIS-DDSM, INbreast, and the private dataset. Breast cancer is the most common type of cancer leading to death among women: 41,170 deaths were reported in the United States in 2020, accounting for about 15% of estimated cancer deaths among women1. 
Studies have emphasized the importance of frequent mammography screening in order to reduce the mortality rate by detecting breast tumors early, before they spread to normal tissues and other healthy organs2. Therefore, mammograms are inspected every day by radiology experts to search for abnormal lesions and detect the location, shape and type of any suspicious regions in the breast. Although this process is considered crucial and requires precision and accuracy, it remains expensive and prone to error, due to the increasing number of daily screening mammograms3. Medical image segmentation helps doctors to extract detailed information about the suspicious regions of tumors for further diagnosis and pathology findings. Thus, an automated system can benefit from the high numbers of mammograms and handle this process automatically. In recent years, the advance in computer vision applications and algorithms has shown remarkable results in developing tools to assist doctors in detecting and segmenting tumors with the lowest possible error in many medical image applications and particularly in mammography4,5,6. Traditional techniques for tumor segmentation, such as region growing, active contours and watershed, relied on extracting handcrafted features that only represent gray level, texture, and morphology to label the pixels and indicate the contour surrounding the mass tumors, while excluding the background tissue7,8. Computer-aided diagnosis (CAD) development for breast cancer imaging has recently been renewed to cope with the rapid emergence of deep learning algorithms and artificial intelligence, and it highlights new systems that may hold real potential to improve clinical care9,10,11. Recently, the success of deep learning models has been highlighted in many medical applications for their capability to extract high-level features directly without knowledge assistance12,13,14,15. Convolutional neural networks (CNNs) were among the first architectures that attempted to label pixels surrounding objects at different scales and shapes16. With respect to medical image segmentation models, encoder–decoder networks such as the fully convolutional network (FCN) were developed and are widely known for their ability to extract deep and semantic features and map them with fine-grained details of the target objects over complex backgrounds17,18. With the introduction of skip connections, encoder–decoder architectures were transformed into the UNet architecture that was successfully implemented in many medical image segmentation works19,20,21. Another variation of the FCN, the full resolution convolutional network (FrCN), was introduced by Al-Antari et al.22 to segment the detected breast masses, and it produced a Dice score of 92.69% and a Jaccard similarity coefficient of 86.37% on the INbreast dataset. Accordingly, Zhu et al.23 employed a multi-scale FCN model followed by a conditional random field (CRF) for mammographic mass segmentation, and they achieved a Dice score of 90.97% on the INbreast dataset and 91.30% on the DDSM-BCRP dataset. Another work proposed by Singh et al.24 was inspired by the FCN architecture and developed a conditional Generative Adversarial Network (cGAN) for breast tumor segmentation. The work achieved a Dice score of 92.11% and an Intersection over Union (IoU) score of 84.55% on the INbreast dataset and a Dice score of 88.12% and an IoU score of 79.87% on a private dataset. 
Medical image segmentation usually presents challenging cases; consequently, FCN networks suffered from low segmentation accuracy due to the loss of spatial resolution in the case of small objects and irregular shapes. Therefore, a new model, called UNet, was introduced by Ronneberger et al.25 to overcome the limitation of FCN models. UNet proposed to integrate the high-level features from the decoder with the low-level features from the encoder. This fusion was maintained with skip connections that made the UNet architecture effective in several medical applications and particularly in mammography. A work by Soulami et al.26 relied on an end-to-end UNet model for the detection, segmentation, and classification of breast masses in one stage, where the segmentation evaluation showed a Dice score of 90.5% for both DDSM and INbreast datasets. Similarly, Abdelhafiz et al.27 implemented a Vanilla UNet model to segment mass lesions in entire mammograms, and it achieved a mean Dice score of 95.1% and a mean IoU score of 90.9% on both digitized film-based and fully digitized MG images. Inspired by the success of UNet and its variations in improving the overall performance, we propose an architecture that connects two simple UNets, called Connected-UNets. We revisit the original idea of the UNet architecture, which added skip connections between an encoder and a decoder network, and we similarly apply another modification of skip connections oppositely between a decoder and an encoder network after cascading a second UNet. Therefore, the final architecture presents two cascaded encoders and decoders that are all alternately connected via different skip connections. We expand the idea of recovering the fine-grained features that are lost in the encoding path of UNet, and we apply it to encode the high-resolution features by connecting them to the previously decoded features. We also add the Atrous Spatial Pyramid Pooling (ASPP) mechanism to the standard UNet architecture, and we apply the proposed architecture on two other variations, AUNet and ResUNet, to develop the Connected-AUNets and the Connected-ResUNets. We implement the architectures for segmenting regions of interest (ROI) of breast mass tumors that were previously detected from mammograms of two widely used datasets, the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and from an independent private dataset. We integrate the detection and localization step, presented in a previous work, with the new segmentation step into a final framework that also proposes a preliminary data-enhancement step. In fact, we evaluate the architectures by adding synthetic data generated using an image-to-image translation method, the cycle-consistent Generative Adversarial Network (CycleGAN), between the different mammography datasets. All experiments using the proposed architecture models were conducted on a PC with the following specifications: Intel(R) Core(TM) i7-8700K processor with 32 GB RAM, 3.70 GHz frequency, and one NVIDIA GeForce GTX 1090 Ti GPU. Python 3.6 was used for conducting all experiments. In this segmentation stage, only masses correctly detected and classified by the YOLO model were considered, and the false predictions were discarded, as similarly highlighted in previous works7,23. Some cases of mammograms have more than one detected mass lesion; therefore, a total of 1467, 112, and 638 masses were, respectively, considered from the CBIS-DDSM, INbreast, and the private dataset. 
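The ASPP mechanism mentioned above can be sketched as a block of parallel atrous (dilated) convolutions whose outputs are concatenated and fused with a 1x1 convolution. The following Keras snippet is a generic illustration of such a block; the filter count and dilation rates are illustrative assumptions and not necessarily the exact configuration used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def aspp_block(x, filters=256, dilation_rates=(1, 6, 12, 18)):
    """Atrous Spatial Pyramid Pooling: parallel dilated convolutions over the same feature map."""
    branches = []
    for rate in dilation_rates:
        b = layers.Conv2D(filters, 3, padding="same", dilation_rate=rate, use_bias=False)(x)
        b = layers.BatchNormalization()(b)
        branches.append(layers.Activation("relu")(b))
    y = layers.Concatenate()(branches)                # fuse multi-scale context
    y = layers.Conv2D(filters, 1, use_bias=False)(y)  # project back to a single feature map
    y = layers.BatchNormalization()(y)
    return layers.Activation("relu")(y)
```

Such a block is typically placed at the bottleneck between an encoder and the corresponding decoder so that the decoder receives context aggregated at several receptive-field sizes.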
Our network is applied on single detected ROIs and therefore our intention was to consider mammograms with multiple lesions at the detection stage and treat them separately as single ROIs of mass lesions. The predicted ROI masses were next resized into 256 × 256 using a bi-cubic interpolation in case the original size is small, or using an inter-area resampling interpolation in case it is large. All images were preprocessed to remove additional noise and degradation caused by the scanning technique of digital X-ray mammography28,29. Thus, we applied a histogram equalization to enhance the compressed regions and smooth the distribution of the pixels, which helps the pixel segmentation. All images were normalized to a range of [0, 1]. To train the proposed segmentation deep learning models, a large number of annotated samples should be prepared to generalize the learning curve of the models. Due to the limited number of ROI masses in each dataset, we augmented the original ROIs four times by rotating them with the angles Δθ = {0°, 90°, 180°, 270°}. We also transformed them twice differently using the Contrast Limited Adaptive Histogram Equalization (CLAHE) method. Consequently, the raw single-ROI images were augmented six times, so that a total of 8802, 672, and 3828 ROI masses were, respectively, prepared from the CBIS-DDSM, INbreast, and the private dataset to train and test the proposed architectures. Evaluation metrics and experimental setup The segmentation stage is evaluated using the Dice similarity score, also called F1-score, which represents a coupled average of the intersection between areas and the total areas as indicated in Eq. (1). Accordingly, we use another evaluation metric, the IoU score, also called the Jaccard score, which is detailed in Eq. (2). A good segmentation performance is achieved when the pixels surrounding all the masses are correctly segmented, and thus a binary mask is generated from the segmented contour of the mass lesions with a high Dice score and IoU score. $$ \text{Dice score}(A,B) = \frac{2 \times \text{Area of Intersection}(A,B)}{\text{Area of}(A) + \text{Area of}(B)} = \frac{2\times(A \cap B)}{A + B} $$ $$ \text{IoU score}(A,B) = \frac{\text{Area of Intersection}(A,B)}{\text{Area of Union}(A,B)} = \frac{A \cap B}{A \cup B} $$ To train the proposed architecture models, a learning rate of 0.0001 with the Adam optimizer is employed. A weighted sum of Dice and IoU losses is used as the segmentation loss function, using the Dice score and IoU score between true and predicted samples, as detailed in Eq. (3). $$ \text{Segmentation loss(true,predicted)} = -\left(0.4 \times \text{Dice score(true,predicted)} + 0.6 \times \text{IoU(true,predicted)}\right) $$ Each mammography dataset is randomly split into groups of 70%, 20%, and 10%, respectively, for training, testing, and validation sets, as shown in Table 1, which highlights the data distribution of each mammography dataset. 
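A minimal sketch of the metrics and the weighted loss of Eqs. (1)–(3) for binary masks, written with TensorFlow/Keras tensors. The smoothing constant is an illustrative assumption to avoid division by zero; this is not the authors' exact implementation.

```python
import tensorflow as tf

def dice_score(y_true, y_pred, smooth=1e-6):
    # Eq. (1): 2*|A∩B| / (|A| + |B|) over flattened masks
    y_true, y_pred = tf.reshape(y_true, [-1]), tf.reshape(y_pred, [-1])
    inter = tf.reduce_sum(y_true * y_pred)
    return (2.0 * inter + smooth) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

def iou_score(y_true, y_pred, smooth=1e-6):
    # Eq. (2): |A∩B| / |A∪B|
    y_true, y_pred = tf.reshape(y_true, [-1]), tf.reshape(y_pred, [-1])
    inter = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) - inter
    return (inter + smooth) / (union + smooth)

def segmentation_loss(y_true, y_pred):
    # Eq. (3): negative weighted sum of Dice and IoU
    return -(0.4 * dice_score(y_true, y_pred) + 0.6 * iou_score(y_true, y_pred))

# Hypothetical usage: model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss=segmentation_loss)
```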
It is important to highlight in Table 1 that some of the raw MGs have multiple ROIs. Accordingly, 100 epochs and a mini-batch size of 8 are used to optimize the network parameters with the training and validation sets. Table 1 Data distribution of the mammography datasets. In order to evaluate our integrated framework system, we first define a segmentation accuracy measure as the mean IoU score for correctly identified ROIs based on a 90% overlap threshold, and we refer to it as the IoU_90 score as shown in Eq. (4). Then, a final segmentation accuracy is introduced as an end-to-end accuracy for the two stages, explained in Eq. (5). $$ \text{IoU}_{90}\ \text{score} = \begin{cases} \text{mean}\left(\text{IoU scores}\ \forall\ \text{ROIs}\right), & \text{if IoU score}(A,B) \ge 90 \\ \text{Not applicable}, & \text{if IoU score} < 90 \end{cases} $$ $$ \text{Final segmentation accuracy} = \text{Detection accuracy rate} \times \text{IoU}_{90}\ \text{score} $$ Quantitative segmentation results As shown in Table 2, the results are measured for each testing set, where we computed the two evaluation metrics for the segmented maps per pixel and compared them to the original ground truth. Table 2 Segmentation performance of our proposed networks on the test sets. The comparative results show that the proposed Connected-UNets architecture performs better than the standard UNet in terms of Dice score and IoU score for all the experimental datasets. We also enhanced the segmentation performance of the standard AUNet and ResUNet using the architecture. Accordingly, the results show a comparison of the standard architectures where ResUNet achieved better results than the AUNet, and the latter architecture had a better performance than the UNet. The results emphasize the advantages of the attention mechanism and the residual blocks that were added to the simple UNet. We clearly notice an improvement of Dice score by 3.6% using the Connected-UNets, 3.4% using the Connected-AUNets, and 4% using the Connected-ResUNets on the CBIS-DDSM dataset. For the INbreast dataset, we improved the Dice score by 4.15% using the Connected-UNets, 2.17% using the Connected-AUNets, and 1.42% using the Connected-ResUNets. Similarly, we had an improvement of Dice score on the private dataset by 5.85% using the Connected-UNets, 5.57% using the Connected-AUNets, and 2.3% using the Connected-ResUNets. Moreover, the segmentation performance of our proposed Connected-UNets and its variations against the standard UNet, AUNet and ResUNet was evaluated by the area under curve (AUC) over the test sets of all datasets. Segmented images were first generated using each model, where pixels were predicted in the range of 0 to 255. After that, predicted images were normalized to scores in the range of 0 to 1. Similarly, ground truth images were normalized to have values of either 0 or 1. Therefore, the problem was transformed into a binary classification task of pixels, and consequently the receiver operating characteristic (ROC) was computed between the predicted pixels and their true values. 
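The pixel-wise ROC/AUC evaluation described above can be sketched with scikit-learn as follows. This is an illustration only, assuming pred_images are model outputs in the range [0, 255] and gt_images are binary ground-truth masks; it is not the authors' code.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def pixelwise_roc_auc(gt_images, pred_images):
    """Treat every pixel as a binary classification sample and compute the ROC curve and AUC."""
    y_true = (np.concatenate([g.ravel() for g in gt_images]) > 0).astype(np.uint8)
    y_score = np.concatenate([p.ravel() for p in pred_images]) / 255.0   # normalize scores to [0, 1]
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return fpr, tpr, auc(fpr, tpr)
```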
Figure 1 shows a comparison of ROC curves, where we clearly notice that the proposed architectures outperform all standard models with an average AUC of 0.79 for the CBIS-DDSM, 0.94 for the INbreast, and 0.95 for the private dataset. Fig. 1: Performance of mass segmentation using the different architectures in terms of ROC curves on the test sets of CBIS-DDSM, INbreast, and the private datasets. ROC curve plots with True Positive Rate (TPR) against the False Positive Rate (FPR) and area under curve for pixel-wise evaluation of the standard models (UNet, AUNet, and ResUNet) and for the proposed architecture models (Connected-UNets, Connected-AUNets, and Connected-ResUNets). Subplot on the left shows ROC curve plot for the CBIS-DDSM dataset. Subplot on the right shows ROC curve plot for the INbreast dataset. Subplot on the bottom shows ROC curve plot for the private dataset. According to Table 2, the private dataset had the best segmentation performance along with the proposed architectures, as it has the best image resolution among the used mammography datasets. Therefore, we applied the CycleGAN model to translate images from the CBIS-DDSM and INbreast datasets (i.e., weak domains) into the private dataset (i.e., strong domain). Synthetic images were then created after training the CycleGAN model between the unpaired datasets and generating the new ROI masses for the CBIS-DDSM and the INbreast, as shown in the examples below in Fig. 2, where we clearly see the enhanced quality of the new ROI masses that benefit from each dataset's texture. Fig. 2: Samples of synthetic data from CBIS-DDSM and INbreast datasets generated by CycleGAN model using the private dataset. Top rows in each subplot show original mammograms respectively from CBIS-DDSM (INbreast) dataset. Bottom rows show their corresponding synthetic mammograms generated by CycleGAN model that was trained respectively between CBIS-DDSM (INbreast) and the private dataset. Furthermore, we trained the proposed architectures on the original and synthetic images to predict the segmentation mappings. Table 3 shows the improvement in segmentation performance of all the standard and proposed architectures using the joint dataset of original and synthetic images. In fact, we notice an increase of Dice score on the CBIS-DDSM by 3.76% using the standard UNet, 3.97% using the standard AUNet, and 4.17% using the ResUNet. Similarly, we have an improved Dice score of 4.8% using the Connected-UNets, 4.11% using the Connected-AUNets, and 4.51% using the Connected-ResUNets. Table 3 Comparison of the proposed architectures after adding synthetic CBIS-DDSM and INbreast. The integrated framework is finally evaluated using all suggested models. As the end-to-end performance depends on the first detection and localization step, which used the YOLO model, the segmentation step was first reported using the segmentation accuracy measure IoU_90 score, which was later multiplied by the detection accuracy rate to form a final segmentation accuracy. Table 4 shows a comparison of final segmentation results of the different models after using the detection accuracy rate of 95.7%, 98.1%, and 98%, respectively, for CBIS-DDSM, INbreast, and the private dataset30. Consequently, we reported a final segmentation performance with a maximum accuracy of 86.91%, 93.03%, and 95.39% using the Connected-ResUNets architecture model, respectively, for CBIS-DDSM, INbreast, and the private dataset. Table 4 Final segmentation performance of our proposed networks on the test sets. 
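A small sketch of how the IoU_90 score and the final segmentation accuracy of Eqs. (4)–(5) can be computed from per-ROI IoU values. The names iou_scores (per-ROI IoU values in [0, 1]) and detection_rate (detection accuracy of the first stage) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def iou_90_score(iou_scores, threshold=0.90):
    """Eq. (4): mean IoU over ROIs whose IoU reaches the 90% overlap threshold."""
    kept = [s for s in iou_scores if s >= threshold]
    return float(np.mean(kept)) if kept else float("nan")   # "Not applicable" when no ROI qualifies

def final_segmentation_accuracy(iou_scores, detection_rate):
    """Eq. (5): end-to-end accuracy = detection accuracy rate x IoU_90 score."""
    return detection_rate * iou_90_score(iou_scores)

# Example with the detection rates reported for the three datasets:
# final_segmentation_accuracy(cbis_ddsm_ious, 0.957), final_segmentation_accuracy(inbreast_ious, 0.981), ...
```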
Finally, a comparison with the results of the latest state-of-the-art methods and models for segmenting breast masses is listed in Table 5. Our proposed architectures outperformed the UNet model and its current variations. Comparing the Dice score and the IoU score with the other methods shows that we achieved the highest segmentation performance using the proposed architectures on the two public datasets: CBIS-DDSM with a Dice score of 89.52% and an IoU score of 80.02%, and INbreast with a Dice score of 95.28% and an IoU score of 91.03% using the Connected-ResUNets. We surpassed the work of Ravitha Rajalakshmi et al.31 by 6.62% in Dice score on the CBIS-DDSM dataset, and the work of Li et al.32 by 2.56% in Dice score on the INbreast dataset. Table 5 Comparison of the proposed architectures and state-of-the-art methods. Qualitative segmentation results We applied a post-processing step to all segmented ROI masses by simply removing any outlier points that are far away from the main contour of the lesions. To do so, we extracted all the possible contours from the binary masks and selected only the one with the largest area. This was applied to the output of the standard UNet models, the Connected-UNets model, and their variations. Figure 3 shows examples of the segmented ROI masses generated by the experimental models against their ground truth images. We clearly observe the different quality of the segmentation maps; the results of the Connected-UNets and their variations consistently contain fewer errors and follow the ground truth more precisely. Observing the segmentation results, we can see that Connected-ResUNets is more capable of predicting the smallest details of the tumor's boundary than the other architectures. Overall, the proposed architectures outperform the standard architectures, and this indicates their power to learn complex features through the connections added between the two UNets in the proposed Connected-UNets, which take advantage of the decoded features as another input in the encoder pathway. Fig. 3: Examples of the segmentation results on the test sets of the datasets. The subplot on the top shows two samples of mammograms from the CBIS-DDSM dataset. The subplot in the middle shows two samples of mammograms from the INbreast dataset. The subplot on the bottom shows two samples of mammograms from the private dataset. Each sample from the two rows indicates its corresponding ground-truth (binary) image, the segmentation output images of the standard models (UNet, AUNet, and ResUNet) vs the segmentation output of the proposed architecture models (Connected-UNets, Connected-AUNets, and Connected-ResUNets). Accordingly, a visual comparison of the Connected-ResUNets model, which uses the suggested ASPP block to connect each encoder and decoder, as opposed to the same model without the ASPP block, is presented in Fig. 4. We can conclude that the ASPP block added more precision to the segmentation results. Fig. 4: Examples of the segmentation results for the proposed Connected-ResUNets architecture with and without the ASPP block. The top row shows an example of an original mammogram from the CBIS-DDSM dataset. The middle row shows an example of an original mammogram from the INbreast dataset. The bottom row shows an example of an original mammogram from the private dataset.
Each original mammogram is shown with its corresponding ground-truth (binary) image, the segmentation output image of the Connected-ResUNets without the ASPP block, and the segmentation output image of the Connected-ResUNets with the ASPP block. After that, a qualitative segmentation comparison of the proposed Connected-ResUNets against the basic ResUNet architecture is presented in Fig. 5. Additionally, a comparison of the Dice score and IoU score for each corresponding ROI mass is also provided. We observe that the proposed architecture model captures well the smallest details of tumors of different shapes and sizes from all the used datasets. Hence, it is clear that the contours predicted by the Connected-ResUNets are the closest to the ground truth contours, which is also reflected in the highest Dice score and IoU score values. Fig. 5: Examples of the segmented masses for the proposed Connected-ResUNets architecture compared with the ResUNet. The top row shows three examples of mammograms from the CBIS-DDSM dataset. The middle row shows three examples of mammograms from the INbreast dataset. The bottom row shows three examples of mammograms from the private dataset. Each example indicates the contours of its ground-truth image, the contours of the segmentation output of the standard ResUNet, and the contours of the segmentation output of the proposed Connected-ResUNets. Each row also includes a comparative bar chart of the IoU score and Dice score of the ResUNet and Connected-ResUNets models. Additionally, we compared the segmentation results of one of the proposed architecture models, Connected-ResUNets, after adding the synthetic images that were generated by the CycleGAN model to the training data. Figure 6 shows a better-segmented contour of the mass tumor using the additional synthetic images. The results with the new training data yield more precise pixel segmentation that is closer to the ground truth images. Consequently, the quality of the segmentation results demonstrates the advantage of adding synthetic images to enhance the segmentation quality, and it confirms the ability of cross-modality synthesis to augment the size of the data and enhance its quality by embracing other similar domains. Fig. 6: Examples of the segmentation results for the proposed Connected-ResUNets architecture with and without adding the synthetic data. The subplot on the top shows two samples of mammograms from the CBIS-DDSM dataset. The subplot on the bottom shows two samples of mammograms from the INbreast dataset. Each row indicates the original mammogram, its corresponding synthetic mammogram, the ground-truth (binary) image, the segmentation output image of the Connected-ResUNets model trained without synthetic data, and the segmentation output image of the Connected-ResUNets model trained with synthetic data. Finally, we applied two state-of-the-art methods that we discussed, by Al-Antari et al.22 and Li et al.32, to segment ROIs from all the mammography datasets, and visual comparison shows that the predictions of the two models, FrCN and CR-UNET, are somewhat close to the ground truth images but do not capture the contours precisely. The examples shown in Fig. 7 were selected to be challenging for segmentation, and our proposed architecture models showed better visual results in segmenting the mass lesions. Fig. 7: Examples of the segmentation results for the proposed architecture models against two state-of-the-art methods, FrCN22 and CR-UNET32. The top row shows an example of an original mammogram from the CBIS-DDSM dataset.
The middle row shows an example of an original mammogram from the INbreast dataset. The bottom row shows an example of an original mammogram from the private dataset. Each original mammogram is shown with its corresponding ground-truth (binary) image, the segmentation output image of the state-of-the-art model FrCN, the segmentation output image of the state-of-the-art model CR-UNET, and the segmentation output images of the proposed architecture models (Connected-UNets, Connected-AUNets, and Connected-ResUNets). Deep learning models have recently achieved remarkable success in segmenting mass tumors in mammograms. Recent studies involved the UNet as one of the state-of-the-art architectures and tried to modify it for a better segmentation performance22,23,24,25,26,27,28,29. In this study, we introduced an architecture, called Connected-UNets, which fully connects two single UNets using additional skip connection paths. The network also employs the ASPP mechanism as a transition block in order to overcome the challenge of losing resolution, particularly in the case of small tumors. The new mass segmentation architecture expands the ability of skip connections to reconstruct the details lost in the encoding pathway by re-using the first decoded features and connecting them with the additional encoded inputs. We implemented the architecture on two variations of UNet, namely the Attention UNet (AUNet) and the Residual UNet (ResUNet). The results of the proposed architectures showed an improvement in segmentation compared to the basic architectures, as shown in Table 2, with a maximum Dice score of 89.52% on the CBIS-DDSM dataset and 95.28% on the INbreast dataset. Moreover, the quantitative evaluation indicated the advantage of the ResUNet and AUNet in segmenting the mass tumors. Hence, the improved architectures Connected-AUNets and Connected-ResUNets outperformed the Connected-UNets on all the used mammography datasets. Comparison of the segmentation map results of each model confirms the enhancement made to the standard models to provide a precise segmentation of the mass boundaries, as shown in Fig. 3. A limitation of the proposed architectures is the longer training time, an average of 0.638 s per epoch, which is due to the higher computational load of neural networks that have more trainable parameters than the standard architecture models. This paper provides an architecture to segment breast masses in mammograms. The proposed architecture incorporates the recent modifications that were suggested to overcome the challenges of pixel-to-pixel segmentation in medical images, such as the attention mechanism, residual blocks, and the ASPP concept. The improved segmentation performance comes from benefiting from the information decoded by one UNet and propagated again through a second UNet. In addition, synthetic data were created using the CycleGAN model for augmenting the training data. This applies a quality translation between domains in order to embrace the different quality of the existing mammography datasets (i.e., X-ray film, full-field digital mammography (FFDM)). In conclusion, this work integrated our recent work30 using the YOLO model for mass detection with the proposed segmentation architecture models in order to provide a complete clinical tool for mass tumor diagnosis. Future work aims at expanding this tool to assist radiologists with more automated breast cancer diagnosis, such as tumor classification and shape prediction.
Technical background UNet is one of the state-of-the-art models developed for medical image segmentation and is inspired by the fully convolutional network (FCN). As the name indicates, the network has a symmetric architecture showing a U-shape. It consists of a down-sampling path and an up-sampling path. The remarkable contribution of the UNet architecture was the introduction of the skip connection paths, which give it an advantage over the standard architecture. These help to recover the spatial information that gets lost along the down-sampling path due to the pooling operations. Many improvements to the UNet architecture were recently suggested to boost its performance and enhance the quality of the segmentation. Ravitha Rajalakshmi et al.31 introduced a deeply supervised U-Net model (DS U-Net) associated with dense CRFs to segment suspicious regions on mammograms. The model was tested and gave a Dice score of 82.9% and 79%, respectively, on the CBIS-DDSM and INbreast datasets. Accordingly, a Conditional Residual UNet, called CRUNet, was also suggested by Li et al.32 to improve the performance of the standard UNet for breast mass segmentation, and it achieved a Dice score of 92.72% on the INbreast dataset. Inspired by the residual mechanism, Abdelhafiz et al.33 proposed the Residual UNet, called RUNet or ResUNet, by adding residual blocks to the standard convolutional layers in the encoder pathway in order to give the network a deeper effect. The work was applied to mass segmentation, and the detected binary maps were then fed to a ResNet model for classification into benign or malignant. The segmentation results yielded a Dice score of 90.5% and a mean IoU score of 89.1% on the INbreast dataset. Similarly, Ibtehaz et al.34 developed an architecture, called MultiResUNet, which showed a remarkable gain in performance for biomedical image datasets. Another variation of the UNet was suggested using the attention mechanism, which showed remarkable success in medical image segmentation35. Consequently, Oktay et al.36 integrated the attention gate into the standard UNet to propose a new Attention UNet, called AUNet. This improved the prediction performance on CT pancreas segmentation and yielded a Dice score of 83.1%. Similarly, Li et al.37 developed an Attention dense UNet for breast mass segmentation that was compared to three basic state-of-the-art models, UNet, AUNet, and DenseNet. The suggested model achieved a Dice score of 82.24% on the original DDSM database. In another work, suggested by Sun et al.38, an attention-guided dense-upsampling network was developed for breast mass segmentation in whole mammograms. The architecture achieved a Dice score of 81.8% on the CBIS-DDSM dataset and 79.1% on the INbreast dataset. Aligned with the improvements made to encoder–decoder architectures to deal with the limitations encountered in medical image segmentation, the ASPP module was successfully integrated into many networks39. This showed effectiveness in breast mass segmentation in a work presented by Wang et al.40 that achieved a Dice score of 91.10% and 91.69%, respectively, on the INbreast and DDSM-BCRP datasets. Studies of the UNet architecture pointed out the uncertainty about the optimal network depth and the restrictive design of its skip connections. Therefore, an architecture named UNet++ was introduced by Zhou et al.41 to alleviate the network-depth issue and redesign the standard skip connections.
The work was evaluated on six medical image datasets with multiple modalities, and it demonstrated consistent performance for semantic and instance segmentation tasks. A similar variation model, called U-Net+, was employed by Tsochatzidis et al.42 to segment ROI masses before integrating the segmentation with a CNN-based classification stage. The segmentation performance showed a Dice score of 0.722 and 0.738, and a Jaccard index of 0.565 and 0.585, respectively, on the CBIS-DDSM and DDSM-400 datasets. Moreover, to deal with challenging medical images, Jha et al.43 presented a DoubleU-Net architecture that uses two encoders and two decoders in sequence and an ASPP module. The network showed a better performance than the baselines and UNet on four medical segmentation datasets. In the same context, a Contour-Aware Residual W-Net, called WRC-Net, was suggested by Das et al.44, which consists of double UNets. The first UNet was designed to predict object boundaries, and the second UNet generated the segmentation map. Additionally, a variation of the UNet was presented by Tran et al.45, named TMD-UNet, which modified the interconnection of the network nodes, replaced the standard convolutions with dilated convolution layers, and developed dense skip connections. The network showed superior results to popular models for liver, polyp, skin lesion, spleen, nuclei, and left atrium segmentation. With the significant attention given to improving the performance of neural network algorithms, many studies have focused on enhancing the quality of medical images that are acquired using multiple imaging modalities. It is often difficult for medical applications to collect enough instances; therefore, synthetic data were recently adopted to increase the size of datasets, either within the same imaging modality or using cross-modality translation. Accordingly, Alyafi et al.46 employed a Deep Convolutional GAN (DCGAN) to generate synthetic mammograms with mass lesions to enhance the classification performance in imbalanced datasets. Another recent technique that has been widely used for unpaired image-to-image translation is the CycleGAN, developed by Zhu et al.47. This technique learns two mappings by transforming images between two different domains using two GANs and maintains their reconstruction through a cycle-consistency loss, hence the name. In fact, CycleGAN was adopted by Becker et al.48 in order to artificially inject or remove suspicious features and thus increase the size of the BCDR and INbreast datasets. Moreover, a cross-modality synthesis approach was introduced by Cai et al.49; inspired by CycleGAN, it translates between CT and magnetic resonance images (MRI) and was applied to 2D/3D images for segmentation. Another similar work by Hiasa et al.50 extended the CycleGAN approach by adding a gradient-consistency loss, aiming at MRI-to-CT synthesis. The work yielded an improved segmentation accuracy on musculoskeletal images. Building on this idea, Huo et al.51 proposed an end-to-end synthesis and segmentation network (EssNet) to conduct unpaired MRI-to-CT image synthesis and CT splenomegaly segmentation without using manually annotated CT. This achieved a Dice score of 91.88%, higher than the state-of-the-art performance. Proposed architecture Inspired by the efficiency of the skip connections, we propose an architecture, called Connected-UNets, which alternately connects two UNets using additional skip connections.
Figure 8 shows an overview of the proposed architecture, which consists of two standard encoder and decoder pathways and two ASPP blocks for the transition between them. We suggest connecting the first decoder and the second encoder blocks with additional modified skip connections in order to reconstruct the information decoded in the first UNet before it is encoded again in the second UNet. Each encoder block includes two convolution units, which consist of 3 × 3 convolutions followed by a ReLU (Rectified Linear Unit) activation and a batch normalization (BN) layer. A maximum pooling operation is then applied to the output of each encoder block before passing the information to the next encoder. Each decoder block consists of a 2 × 2 transposed convolution unit (i.e., deconvolution layer) that is concatenated with the previous encoder output, and then the result is fed into two convolution blocks, which consist of 3 × 3 convolutions followed by a ReLU activation and a BN layer. Fig. 8: The proposed Connected-UNets architecture. The architecture shows two cascaded encoders (i.e., down-sampling pathways) (red arrows) and decoders (i.e., up-sampling pathways) (yellow arrows), all alternately connected via skip connections (i.e., dashed lines) and ASPP blocks. An input image is fed to the first block, and a segmentation (binary) image is returned by the last block. Encoders are represented by Convolution layer + Batch Normalization (blue blocks) and Activation layer (dark blue blocks). Decoders are represented by Transposed convolution (green blocks) and Convolution layer + Activation layer (light blue blocks). The transition between the down-sampling and up-sampling paths is made with an ASPP block. As the name indicates, this technique uses "atrous" (which means "with holes" in French) convolution to allow a larger receptive field in the transition path without losing resolution. After going through the first UNet, a second UNet is attached through new skip connections that use information from the first up-sampling pathway. First, the result of the last decoder block is concatenated with the same result after being fed into a 3 × 3 convolution layer followed by a ReLU activation and a BN layer. This serves as the input of the first encoder block of the second UNet. The outputs of the maximum pooling operations of each of the three encoder blocks are fed into 3 × 3 convolution layers and then concatenated with the output of the previous decoder block. The result is then down-sampled and passed to the next encoder block. The last encoder block of the second UNet is sent into the ASPP block, and the rest is similar to the first UNet, as explained in Supplementary Fig. 1. Finally, the last output is given to a 1 × 1 convolution layer that is followed by a sigmoid activation layer to generate the predicted mask. In addition to the proposed architecture that is applied on the standard UNet, we propose another variation, called the AUNet model, by adding an attention block along the up-sampling path. This integrates the attention mechanism with the skip connections between the encoder and decoder blocks. Indeed, the additional attention block should allow the network to weight the low-level features (i.e., down-sampled information) before they are concatenated with the high-level features (i.e., up-sampled information) through the skip connections. Thus, a new Connected-AUNets architecture is introduced, as illustrated in Supplementary Fig. 2.
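To make the wiring of the plain Connected-UNets variant concrete, the Keras sketch below outlines the main building blocks described above. It is a simplified illustration under our own assumptions (the filter counts, the reduced ASPP, and helper names such as unet_pass are ours, not the authors' released code):

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    """Two 3x3 convolution units, each followed by ReLU and batch normalization."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.Activation("relu")(x)
        x = layers.BatchNormalization()(x)
    return x

def aspp_block(x, filters):
    """Simplified ASPP transition: parallel atrous convolutions with different rates."""
    branches = [layers.Conv2D(filters, 3, padding="same", dilation_rate=r,
                              activation="relu")(x) for r in (1, 6, 12, 18)]
    return layers.Conv2D(filters, 1, padding="same")(layers.Concatenate()(branches))

def unet_pass(x, skips_in=None, filters=(32, 64, 128, 256)):
    """One encoder-decoder pass; optionally mixes in decoder features from a previous pass."""
    skips = []
    for i, f in enumerate(filters[:-1]):
        if skips_in is not None:
            # Extra skip connection: 3x3 conv of the current tensor concatenated with
            # the matching decoder output of the first UNet
            x = layers.Concatenate()([
                layers.Conv2D(f, 3, padding="same", activation="relu")(x), skips_in[i]])
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = aspp_block(x, filters[-1])
    decoded = []
    for f, s in zip(reversed(filters[:-1]), reversed(skips)):
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = conv_block(layers.Concatenate()([x, s]), f)
        decoded.append(x)
    return x, decoded

inputs = layers.Input((256, 256, 1))
x1, dec1 = unet_pass(inputs)                              # first UNet
x2, _ = unet_pass(x1, skips_in=list(reversed(dec1)))      # second UNet re-uses decoded features
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x2)   # 1x1 conv + sigmoid mask
model = tf.keras.Model(inputs, outputs)
```

The attention and residual variants (Connected-AUNets and Connected-ResUNets) would modify conv_block and the skip concatenations accordingly.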
Motivated by the improvements made to the UNet architecture to make it robust enough for segmenting medical images at different scales, we replace the standard convolution blocks with residual convolution blocks, as detailed in Supplementary Fig. 3, to obtain the Residual UNet (ResUNet), and consequently we propose a Connected-ResUNets architecture, as detailed in Supplementary Fig. 4. Adding the residual convolution blocks should enhance the UNet architecture by reconciling the features learned at each scale of the down-sampling pathway and taking full advantage of the propagated information, which may otherwise degrade in a deep network. Image synthesis using CycleGAN Given the limited size of our annotated datasets and the differences in their resolutions, we propose to apply image synthesis to our mammography datasets to improve the segmentation results. In this study, we propose cross-domain image synthesis using one of the effective methods, the cycle Generative Adversarial Network (CycleGAN)50, to enhance our image datasets. In a CycleGAN architecture, a deep learning model learns the mapping of pixels, color distribution, shape, and texture between two datasets52. In fact, a standard GAN model comprises generator and discriminator networks that are trained alternately such that the generator network tries to produce fake data that is realistic enough to trick the discriminator. CycleGAN is a recent extension of GAN models that is particularly designed for image-to-image translation using unpaired datasets. It has been considered an effective deep learning technique for style transfer, domain adaptation, and data synthesis53,54,55. The architecture, as shown in Supplementary Fig. 5, consists of two generator and two discriminator networks. In this work, we developed the CycleGAN model using the available tutorial on the Keras webpage (https://keras.io/examples/generative/cyclegan). The generator network consists of nine residual blocks and up-sampling blocks. We did not change the proposed networks and parameters, and we prepared our unpaired input datasets to fit the model. Integrated framework: mass detection, image synthesis, and mass segmentation Our final framework detects and localizes breast masses in a first step and then segments them in a second step. It also involves an advanced data-enhancement method as a preliminary step before applying the mass segmentation. This step should not only compensate for the low-resolution mammograms but also augment the size of the mammography datasets. In fact, the introduced architecture is applied to the ROIs of breast masses that were detected in the previous stage. Our framework applied the You-Only-Look-Once (YOLO) model from our previous work30 to locate suspicious breast lesions and distinguish between mass and calcification lesions. Therefore, bounding boxes around the suspicious objects were predicted from the entire mammograms. We evaluated the methodology and obtained a maximum detection accuracy of 98.1% for mass lesions. Given the different scales of breast masses, our methodology expands some bounding box coordinates by adding extra space around small tumors. Thus, we obtain the ROI images and scale them to 256 × 256 pixels, which is the optimal input size found experimentally for the segmentation networks. Finally, the detected ROI mass images and their generated synthetic images are fed directly into the segmentation stage using our proposed architecture, as shown in Fig. 9. Fig. 9: The proposed integrated framework.
a Original mammogram with ground truth of mass (red). b Detected ROI of mass (yellow) superimposed on the original mammogram. c Detected ROI (i.e., input mass) obtained with ground truth (red) (Domain X). d Detected ROI obtained from a different mammography dataset (Domain Y). e Original ROI (Domain X) and synthetic ROI (transferred from Domain Y to Domain X). f Output segmented binary mask of the input mass. g Segmented output mass where tissue is masked. The clinical data were approved by the institutional review boards and ethical committees of each participating center. The public CBIS-DDSM dataset was registered under clinical trial NCBITAXON and the public INbreast dataset was registered under clinical trial INESC. The private collection was approved by the ethical committee of Mexico under registration INC-2018-1017. Written informed consent was obtained from all patients before enrollment. In this study, we evaluated the proposed architectures on two public datasets, CBIS-DDSM and INbreast, and a private dataset. CBIS-DDSM56 is an updated and standardized version of the Digital Database for Screening Mammography (DDSM) dataset, where images were converted to Digital Imaging and Communications in Medicine (DICOM) format. It contains 2907 mammograms from 1555 unique patients, of which 1467 are mass images. Mammograms were acquired with two different views for each breast (i.e., MLO and CC). Original images have an average size of 3000 × 4800 pixels and are associated with their pixel-level ground truth for mass regions. INbreast57 is a public database of full-field digital mammography (FFDM) images prepared in DICOM format. It presents 410 mammograms, of which only 107 cases include mass lesions, in both MLO and CC views, from 115 unique patients. The raw images were annotated by experts and have an average size of 3328 × 4084 pixels. Additionally, the private dataset is a collection of mammograms from the National Institute of Cancerology (INCAN) in Mexico City. The mammograms present stages 3 and 4 of breast cancer, with 389 cases with mass lesions obtained from 208 unique patients. Images were collected from CC, MLO, ML, and AT views, and have an average size of 300 × 700 pixels. Samples of entire mammograms and their ROI masses are illustrated in Fig. 10. It can be visually observed that the images have different resolutions, and this is due to the different modalities and tool configurations that were used to acquire and store the mammograms. Fig. 10: Samples from the public and private mammography datasets with zoomed-in ROI of mass ground truth (red). a CBIS-DDSM mammogram example of MLO view. b INbreast mammogram example of MLO view. c Private mammogram of CC view. The public mammography dataset CBIS-DDSM generated and analyzed during the current study is available in the Cancer Imaging Archive, https://wiki.cancerimagingarchive.net/display/Public/CBIS-DDSM. The public mammography dataset INbreast generated and analyzed during the current study is available from the corresponding author Inês Domingues, Porto, Portugal, on reasonable request after signing a transfer agreement. The private mammography dataset generated and analyzed during the current study is available from the corresponding author Cristian Castillo Olea through the oncologist Dr. Eric Ortiz in the National Institute of Cancerology, Mexico. The code for custom algorithms and data preprocessing is provided as part of the replication package. It was written in Python v3.6.
It is publicly available as a git repository on GitHub at https://github.com/AsmaBaccouche/Connected-Unets-and-more. Siegel, R. L., Miller, K. D. & Jemal, A. Cancer statistics, 2020. CA Cancer J. Clin. 70, 7–30 (2020). Lauby-Secretan, B. et al. Breast-cancer screening—viewpoint of the IARC Working Group. N. Engl. J. Med. 372, 2353–2358 (2015). Celik, Y. et al. Automated invasive ductal carcinoma detection based using deep transfer learning with whole-slide images. Pattern Recognit. Lett. 133, 232–239 (2020). Taghanaki, S. A. et al. Deep semantic segmentation of natural and medical images: a review. Artif. Intell. Rev. 54, 137–178 (2021). Wang, G., Li, W., Ourselin, S. & Vercauteren, T. Automatic brain tumor segmentation using cascaded anisotropic convolutional neural networks. In International MICCAI Brainlesion Workshop, 178–190 (Springer, Cham, 2017). Giacomello, E., Loiacono, D. & Mainardi, L. Brain MRI tumor segmentation with adversarial networks. In 2020 International Joint Conference on Neural Networks (IJCNN), 1–8 (IEEE, 2020). Dhungel, N., Carneiro, G. & Bradley, A. P. A deep learning approach for the analysis of masses in mammograms with minimal user intervention. Med. Image Anal. 37, 114–128 (2017). Shi, P., Zhong, J., Rampun, A. & Wang, H. A hierarchical pipeline for breast boundary segmentation and calcification detection in mammograms. Comput. Biol. Med. 96, 178–188 (2018). Gao, Y., Geras, K. J., Lewin, A. A. & Moy, L. New frontiers: an update on computer-aided diagnosis for breast imaging in the age of artificial intelligence. Am. J. Roentgenol. 212, 300–307 (2019). Henriksen, E. L., Carlsen, J. F., Vejborg, I. M., Nielsen, M. B. & Lauridsen, C. A. The efficacy of using computer-aided detection (CAD) for detection of breast cancer in mammography screening: a systematic review. Acta Radiol. 60, 13–18 (2019). Mullooly, M. et al. Application of convolutional neural networks to breast biopsies to delineate tissue correlates of mammographic breast density. npj Breast Cancer 5, 1–11 (2019). Dhungel, N., Carneiro, G. & Bradley, A. P. Deep learning and structured prediction for the segmentation of mass in mammograms. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 605–612 (Springer, Cham, 2015). Hesamian, M. H., Jia, W., He, X. & Kennedy, P. Deep learning techniques for medical image segmentation: achievements and challenges. J. Digit. Imaging 32, 582–596 (2019). Murtaza, G. et al. Deep learning-based breast cancer classification through medical imaging modalities: state of the art and research challenges.Artif. Intell. Rev. 53, 1655–1720 (2020). Tajbakhsh, N. et al. Embracing imperfect datasets: a review of deep learning solutions for medical image segmentation. Med. Image Anal. 63, 101693 (2020). Xu, X. et al. Breast region segmentation being convolutional neural network in dynamic contrast enhanced MRI. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 750–753 (IEEE, 2018). Huang, G., Liu, Z., Van Der Maaten, L. & Weinberger, K. Q. Densely connected convolutional networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 4700–4708 (IEEE, 2017). Zhou, S. et al. High-resolution encoder–decoder networks for low-contrast medical image segmentation. IEEE Trans. Image Process. 29, 461–475 (2019). Tang, P. et al. Efficient skin lesion segmentation using separable-UNet with stochastic weight averaging. Comput. Methods Programs Biomed. 178, 289–301 (2019). 
Li, S., Chen, Y., Yang, S. & Luo, W. Cascade dense-UNet for prostate segmentation in MR images. In International Conference on Intelligent Computing, 481–490 (Springer, Cham, 2019). Jalalian, A. et al. Foundation and methodologies in computer-aided diagnosis systems for breast cancer detection. EXCLI J. 16, 113 (2017). Al-Antari, M. A., Al-Masni, M. A., Choi, M. T., Han, S. M. & Kim, T. S. A fully integrated computer-aided diagnosis system for digital X-ray mammograms via deep learning detection, segmentation, and classification. Int. J. Med. Inform. 117, 44–54 (2018). Zhu, W., Xiang, X., Tran, T. D., Hager, G. D. & Xie, X. Adversarial deep structured nets for mass segmentation from mammograms. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 847–850 (IEEE, 2018). Singh, V. K. et al. Breast tumor segmentation and shape classification in mammograms using generative adversarial and convolutional neural network. Expert Syst. Appl. 139, 112855 (2020). Ronneberger, O., Fischer, P. & Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 234–241 (Springer, Cham, 2015). Soulami, K. B., Kaabouch, N., Saidi, M. N. & Tamtaoui, A. Breast cancer: one-stage automated detection, segmentation, and classification of digital mammograms using UNet model based-semantic segmentation. Biomed. Signal Process. Control 66, 102481 (2021). Abdelhafiz, D., Bi, J., Ammar, R., Yang, C. & Nabavi, S. Convolutional neural network for automated mass segmentation in mammography. BMC Bioinformatics 21, 1–19 (2020). Al-Masni, M. A. et al. Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system. Comput. Methods Programs Biomed. 157, 85–94 (2018). Hai, J. et al. Fully convolutional densenet with multiscale context for automated breast tumor segmentation. J. Healthc. Eng. 2019, 1–11 (2019). Baccouche, A., Garcia-Zapirain, B., Castillo Olea, C. & Elmaghraby, A. S. Breast lesions detection and classification via yolo-based fusion models. Comput. Mater. Contin. 69, 1407–1425 (2021). Ravitha Rajalakshmi, N., Vidhyapriya, R., Elango, N. & Ramesh, N. Deeply supervised U‐Net for mass segmentation in digital mammograms. Int. J. Imaging Syst. Technol. 31, 59–71 (2021). Li, H., Chen, D., Nailon, W. H., Davies, M. E. & Laurenson, D. Improved breast mass segmentation in mammograms with conditional residual U-net. In Image Analysis for Moving Organ, Breast, and Thoracic Images, 81–89 (Springer, Cham, 2018). Abdelhafiz, D., Nabavi, S., Ammar, R., Yang, C. & Bi, J. Residual deep learning system for mass segmentation and classification in mammography. In Proc. 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, 475–484 (ACM, 2019). Ibtehaz, N. & Rahman, M. S. MultiResUNet: rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw. 121, 74–87 (2020). Sinha, A. & Dolz, J. Multi-scale self-guided attention for medical image segmentation. IEEE J. Biomed. Health Informatics 25, 121–130 (2021). Oktay, O. et al. Attention U-net: learning where to look for the pancreas. Preprint at https://arxiv.org/abs/1804.03999 (2018). Li, S., Dong, M., Du, G. & Mu, X. Attention dense-U-net for automatic breast mass segmentation in digital mammogram. IEEE Access 7, 59037–59047 (2019). Sun, H. et al. 
AUNet: attention-guided dense-upsampling networks for breast mass segmentation in whole mammograms. Phys. Med. Biol. 65, 055005 (2020). Chen, L. C., Papandreou, G., Schroff, F. & Adam, H. Rethinking Atrous convolution for semantic image segmentation. Preprint at https://arxiv.org/abs/1706.05587 (2017). Wang, R. et al. Multi-level nested pyramid network for mass segmentation in mammograms. Neurocomputing 363, 313–320 (2019). Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N. & Liang, J. UNet++: redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans. Med. Imaging 39, 1856–1867 (2019). Tsochatzidis, L., Koutla, P., Costaridou, L. & Pratikakis, I. Integrating segmentation information into CNN for breast cancer diagnosis of mammographic masses. Comput. Methods Programs Biomed. 200, 105913 (2021). Jha, D., Riegler, M. A., Johansen, D., Halvorsen, P. & Johansen, H. D. DoubleU-net: a deep convolutional neural network for medical image segmentation. In 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), 558-564 (IEEE, 2020). Das, S. et al. Contour-aware residual W-Net for nuclei segmentation. Procedia Comput. Sci. 159, 1479–1488 (2019). Tran, S. T., Cheng, C. H., Nguyen, T. T., Le, M. H. & Liu, D. G. TMD-UNet: triple-unet with multi-scale input features and dense skip connection for medical image segmentation. Healthcare 9, 54 (2021). Alyafi, B. et al. Quality analysis of DCGAN-generated mammography lesions. In 15th International Workshop on Breast Imaging (IWBI2020) (International Society for Optics and Photonics, 2020). Zhu, J. Y., Park, T., Isola, P. & Efros, A. A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. IEEE International Conference on Computer Vision, 2223–2232 (IEEE, 2017). Becker, A. S. et al. Injecting and removing suspicious features in breast imaging with CycleGAN: a pilot study of automated adversarial attacks using neural networks on small images. Eur. J. Radiol. 120, 108649 (2019). Cai, J., Zhang, Z., Cui, L., Zheng, Y. & Yang, L. Towards cross-modal organ translation and segmentation: a cycle- and shape-consistent generative adversarial network. Med. Image Anal. 52, 174–184 (2019). Hiasa, Y. et al. Cross-modality image synthesis from unpaired data using CycleGAN. In International Workshop on Simulation and Synthesis in Medical Imaging, 31–41 (Springer, Cham, 2018). Huo, Y. et al. Adversarial synthesis learning enables segmentation without target modality ground truth. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 1217–1220 (IEEE, 2018). Yoo, T. K., Choi, J. Y. & Kim, H. K. CycleGAN-based deep learning technique for artifact reduction in fundus photography. Graefe's Arch. Clin. Exp. Ophthalmol. 258, 1631–1637 (2020). Russ, T. et al. Synthesis of CT images from digital body phantoms using CycleGAN. Int. J. Computer Assist. Radiol. Surg. 14, 1741–1750 (2019). Sandfort, V., Yan, K., Pickhardt, P. J. & Summers, R. M. Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks. Sci. Rep. 9, 1–9 (2019). Bao, F., Neumann, M. & Vu, N. T. CycleGAN-based emotion style transfer as data augmentation for speech emotion recognition. In INTERSPEECH, 2828–2832 (2019). Lee, R. S. et al. A curated mammography data set for use in computer-aided detection and diagnosis research. Sci. Data 4, 1–9 (2017). Moreira, I. C. et al. INbreast: toward a full-field digital mammographic database. Acad. Radiol. 
19, 236–248 (2012). The authors would especially like to express their gratitude to the National Institute of Cancerology (INCAN) in Mexico City for providing the private mammography dataset. Thanks also to the radiologists Dr. Kictzia Yigal Larios and Dr. Raquel Balbás at FUCAM A.C., and Dr. Guillermo Peralta and Dr. Néstor Piña at Cancer Center Tec100 by MRC International. Department of Computer Science and Engineering, University of Louisville, Louisville, KY, 40292, USA Asma Baccouche & Adel S. Elmaghraby eVida Research Group, University of Deusto, Bilbao, 4800, Spain Begonya Garcia-Zapirain & Cristian Castillo Olea A.B. conceived the idea, developed and implemented the methods. B.G-Z. helped with formulating and validating the experiments and analysis. B.G-Z. and A.S.E. supervised the project. A.B. wrote the paper. C.C.O. provided the data. All authors reviewed and edited the manuscript. Correspondence to Asma Baccouche. Supplementary information. Reporting summary. Baccouche, A., Garcia-Zapirain, B., Castillo Olea, C. et al. Connected-UNets: a deep learning architecture for breast mass segmentation. npj Breast Cancer 7, 151 (2021). https://doi.org/10.1038/s41523-021-00358-x
Was Hilbert ambivalent about set theory? There is the well-known quote of Hilbert: "No one shall drive us from the paradise which Cantor has created for us." [D. Hilbert: "Über das Unendliche", Mathematische Annalen 95 (1925) p. 167] On the other hand Hilbert concludes his paper: Finally we will return to our original topic and draw the conclusions of all our investigations about the infinite. On balance the result is this: The infinite is nowhere realized; it is neither present in nature nor admissible as the foundation of our rational thinking – a remarkable harmony of being and thinking. [loc cit, p. 190] Further Hilbert devised "Hilbert's hotel": An infinite hotel is completely filled with guests. Another guest arrives. He gets room no. 1 after every resident guest has moved on by one number from $n$ to $n+1$. Even infinitely many guests can be accommodated when every resident guest doubles his room number. Did Hilbert set up this example in order to counter Cantor's list? Hilbert's infinite hotel is really infinite, unfinished, extendable, but Cantor's list is not. Two different interpretations of one and the same infinity. Only that allows one to conclude that the antidiagonal, as a new guest differing from all resident guests or entries of the list, cannot be inserted, for instance into the first position when every other entry moves on by one "room number". Without Cantor's arbitrary constraint, even all infinitely many antidiagonals that ever could be constructed could be accommodated. Cantor's theorem would go up in smoke. My question: Is there evidence that Hilbert intended this counter argument? Or did he not realize that it is a counter argument? But then what was his purpose? – Wilhelm Comment: Frankly speaking, I do not think that a first-class mathematician like Hilbert ever held such a simple contradictory statement. I would prefer to assert that we do not read his words correctly. – Mauro ALLEGRANZA Feb 19 '18 at 13:46 Comment: I'm not sure I understand the point being made here. In the diagonal argument, "non-extendability" is our hypothesis, not an "arbitrary constraint". Cantor is using his definition of countable. If $\mathbb R$ is countable, then there exists a "list" of all real numbers. By hypothesis, this list cannot be extended because it is a "complete" list containing all of the real numbers, so there are no antidiagonals to be added. – Nick Feb 19 '18 at 19:12 Comment: @Mauro ALLEGRANZA: You can be sure that Hilbert said precisely these words. – Wilhelm Feb 19 '18 at 20:56 Comment: @Nick R: Non-extendability is our hypothesis? Only concerning the enumeration of the list. The real numbers are extended by producing the antidiagonal number. If this real number had already existed when creating the list, we would certainly have inserted it, wouldn't we? If not, that amounts to deliberate cheating. And of course all antidiagonals ever produced belong to a countable set and could be accommodated in a single list, like all new guests in Hilbert's hotel. – Wilhelm Feb 19 '18 at 21:01 Answer: No, he was not, as one can see from the full passage from Hilbert's lecture On the Infinite, delivered June 4, 1925, before a congress of the Westphalian Mathematical Society in Münster, and published in Mathematische Annalen vol. 95 (1926): "In summary, let us return to our main theme and draw some conclusions from all our thinking about the infinite.
Our principal result is that the infinite is nowhere to be found in reality. It neither exists in nature nor provides a legitimate basis for rational thought — a remarkable harmony between being and thought. In contrast to the earlier efforts of Frege and Dedekind, we are convinced that certain intuitive concepts and insights are necessary conditions of scientific knowledge, and logic alone is not sufficient. Operating with the infinite can be made certain only by the finitary. The role that remains for the infinite to play is solely that of an idea — if one means by an idea, in Kant's terminology, a concept of reason which transcends all experience and which completes the concrete as a totality — that of an idea which we may unhesitatingly trust within the framework erected by our theory." "An idea which we may unhesitatingly trust" does not exactly sound ambivalent. Hilbert was a formalist, not a platonist; he did not believe that the "actually infinite" actually exists, or that it needs to exist to be talked about. Being a Kantian regulative idea ("noumenon"), finitely axiomatized into a formal theory, was more than enough for mathematics, according to him. The only requirement is that the said theory be consistent, and the whole lecture was meant to promote the so-called Hilbert programme of proving consistency of infinitary theories by finitary means. It should be clear from this that Hilbert's hotel is neither real, nor meant to counter Cantor. What Cantor did or did not assume in the diagonal argument is moot, since it is derivable in finitely many steps from set-theoretic axioms. Inspection of Hilbert's earlier 1924 lecture, where the hotel is described (along with a similarly minded infinite dance party and a quip that "in a world with an infinite number of houses and occupants there will be no homeless"), confirms it. It was a small-talk introduction to the difference between the finite and the infinite aimed at a lay audience; it is not connected to Cantor, and it is not discussed or even mentioned in that lecture, or any other, after being introduced. Here is a passage from The True (?) Story of Hilbert's Infinite Hotel by Kragh: "It was merely an example and one that he attached no particular importance to. Nor did other people at the time find it important. Had the hotel not been resuscitated by Gamow more than two decades later it might well be unknown today. The only allusion to it before 1947 that I know of is from a textbook on calculus published in 1938 and written by Otto Haupt, a mathematician at Erlangen University."
Justify whether x³ + y² = 3xy has a horizontal tangent at (2, 4).
No, because (3y − 3x²)/(2y − 3x) ≠ 0 at (2, 4).
Yes, because (3y − 3x²)/(2y − 3x) = 0 at (2, 4).
Yes, because (3x² − 12)/(3 − 2y) = 0 at (2, 4).
No, because (3x² − 12)/(3 − 2y) ≠ 0 at (2, 4).
Answer: Given x³ + y² = 3xy, differentiate implicitly with respect to x: 3x² + 2y (dy/dx) = 3x (dy/dx) + 3y, so (dy/dx)(2y − 3x) = 3y − 3x², that is, dy/dx = (3y − 3x²)/(2y − 3x). At (2, 4) this equals (12 − 12)/(8 − 6) = 0, so the curve does have a horizontal tangent there.
A reservoir has the shape of a truncated cone as shown in the figure above. It measures 3 m high. The radius of the base is 1 m and the top radius is 3 m. It is filled with water. We want to calculate the amount of work required to pump all of its water through a hose standing 1 m above the reservoir.
(i) Let x be the height in metres measured from the base of the reservoir. The weight of a thin water layer between heights x and x + Δx is approximately P(x)Δx. What is P(x)? FORMATTING: Express the answer as a function of x. Recall that the density of water is ρ = 1000 kg/m³ and the acceleration due to gravity on the earth's surface is g = 9.8 m/s².
(ii) The work in joules required to pump the thin water layer to 1 m above the reservoir is approximately w(x)Δx. What is w(x)? FORMATTING: Express the answer as a formula.
(iii) Using the previous results, determine the amount of work required to pump all of the water to 1 m above the reservoir. FORMATTING: If you round your answer, ensure that the round-off error is less than 1% of the value. Work = ______ joules.
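For the reservoir question, one consistent way to set up the quantities in (i)–(iii) is sketched below. This outline is ours, not part of the original post; it assumes the linear radius profile implied by the stated dimensions (base radius 1 m, top radius 3 m, height 3 m):

$$r(x) = 1 + \tfrac{2}{3}x, \qquad P(x) = \rho g \pi\, r(x)^{2} = 9800\,\pi\left(1 + \tfrac{2}{3}x\right)^{2}\ \text{N/m},$$
$$w(x) = P(x)\,(4 - x), \qquad W = \int_{0}^{3} 9800\,\pi\left(1 + \tfrac{2}{3}x\right)^{2}(4 - x)\,dx = 9800\,\pi \cdot 26.5 \approx 8.2 \times 10^{5}\ \text{J}.$$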
The smooth surface of the vertical cam is defined in part by the curve r = (0.2 cos θ + 0.3) m (Figure 1). If the forked rod is rotating with a constant angular velocity of θ̇ = 4 rad/s, determine the force the cam and the rod exert on the 1.5-kg roller when θ = 30°. The attached spring has a stiffness k = 30 N/m and an unstretched length of 0.1 m. Express your answers in newtons using three significant figures, separated by a comma.
Flower Arrangements has just completed operations for the year ended December 31, 2024. This is the third year of operations for the company. The following data have been assembled for the business. Prepare the income statement of Flower Arrangements for the year ended December 31, 2024. (If a box is not used in the table, leave the box empty; do not select a label or enter a zero.) Data table: Insurance Expense $4,000; Salaries Expense $40,000; Service Revenue $75,000; Accounts Payable $6,200; Utilities Expense $600; Office Supplies $2,000; Rent Expense $13,000; Ruth, Withdrawals $1,800; Ruth, Capital, Jan. 1, 2024 $12,500; Accounts Receivable $6,000; Cash $3,800; Equipment $28,200; Owner contribution during the year $5,700.
In humans, the antibiotic Ceftriaxone has mean values for the apparent volume of distribution of 300 L, a total clearance CL = 14 L/h, and t1/2 = 8 h. Salmonella infection is susceptible to Ceftriaxone at concentrations above 5 ug/mL, and the patient gets overdosed at levels above 80 ug/L. (a) Calculate an IV dose and infusion rate that provide a steady-state level of 30 ug/mL in humans. (b) The doctor decides to give a dose of 12 mg and start a constant infusion at the same R0 calculated above. Ceftriaxone is eliminated unchanged in the urine, and the drug concentrations were recorded as follows: Time (h): 0, 12, 24, 36, 360; Concentration (ug/L): 4, 6, 6.8, 7.5, 10. a) Compute the Vd for the patient (10 pts). b) Compute the CL (10 pts).
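For part (a) of the Ceftriaxone question, the standard constant-rate infusion relations give a quick outline (this sketch is ours, uses the stated Vd, CL, and target Css, and does not address part (b)):

$$R_{0} = CL \cdot C_{ss} = 14\ \mathrm{L/h} \times 30\ \mathrm{mg/L} = 420\ \mathrm{mg/h}, \qquad \text{Loading dose} = V_{d} \cdot C_{ss} = 300\ \mathrm{L} \times 30\ \mathrm{mg/L} = 9000\ \mathrm{mg},$$

where the target concentration of 30 ug/mL has been rewritten as 30 mg/L.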
Identification of S1PR3 gene signature involved in survival of sepsis patients Anlin Feng1, Wenli Ma1, Reem Faraj1, Gabriel T. Kelly1, Stephen M. Black2, Michael B. Fallon1 & Ting Wang (ORCID: orcid.org/0000-0001-6446-4056)1 BMC Medical Genomics volume 14, Article number: 43 (2021) Sepsis is a life-threatening complication of infection that rapidly triggers tissue damage in multiple organ systems and leads to multi-organ deterioration. To date, prognostic biomarkers still have limitations in predicting the survival of patients with sepsis. We need to discover more prognostic biomarkers to improve the sensitivity and specificity of the prognosis of sepsis patients. Sphingosine-1-phosphate (S1P) receptor 3 (S1PR3), as one of the S1P receptors, is a prospective prognostic biomarker regulating sepsis-relevant events, including compromised vascular integrity, antigen presentation, and cytokine secretion. Until now, no S1PR3-related prognostic gene signatures for sepsis patients have been found. This study intends to obtain an S1PR3-associated gene signature from whole blood samples to be utilized as a probable prognostic tool for patients with sepsis. We obtained an 18-gene S1PR3-related molecular signature (S3MS) from the intersection of S1PR3-associated genes and survival-associated genes. Numerous important immunity pathways that regulate the progression of sepsis are enriched among our 18 genes. Significantly, S3MS performs well in both the discovery and validation cohorts. Furthermore, we demonstrated that S3MS obtains significantly better classification performance than random 18-gene signatures. Our results confirm the key role of S1PR3-associated genes in the development of sepsis, which could serve as a potential prognostic biomarker for patients with sepsis. Our results also highlight the classification performance of our S3MS as a biomarker for sepsis, which could provide an early warning system for patients with sepsis. Sepsis is a severe and life-threatening clinical syndrome with a primary systemic bacterial infection. Alarmingly, the occurrence of sepsis is rising, with a 17% increase between 2000 and 2010 [1]. At the same time, the mortality of sepsis has risen unacceptably, with a 31% increase between 1999 and 2014 [2]. This elevation in mortality rate is partially due to the complexity of its immunological syndrome and multiorgan involvement [3]. An effective prognostic biomarker that can appropriately predict the clinical outcome of sepsis patients is in high demand in clinical practice. Biomarkers for sepsis may be utilized as diagnostic or prognostic tools. Circulating proteins, including pro-inflammatory cytokines, complement proteins, or immunosuppressive phase proteins, have previously been identified as single biomarkers for sepsis [4, 5]. However, single biomarkers lack specificity and sensitivity as prognostic tools for sepsis patients. Biomarkers containing multiple genes deliver better prognostic power. Several studies [6, 7] attempted to combine pro-inflammatory and anti-inflammatory markers, an approach that is most likely to succeed in predicting the disease development of sepsis patients. Meanwhile, a few gene signatures associated with immune responses in sepsis were also found in recent years. Emma E. Davenport et al. [8] used a genomics approach to the heterogeneity of sepsis to group patients by different immune responses. Miguel Reyes et al. [9] identified a unique CD14+ monocyte state which is expanded in sepsis patients.
In general, biomarkers that characterize immune status are powerful tools for predicting the development and progression of sepsis. Sphingosine 1-phosphate (S1P) is a bioactive lipid with specific targets and functions in both intracellular and extracellular environments. Following release from the cell, S1P acts as a ligand upon binding to five subtypes of S1P receptors (S1PRs 1–5), which belong to the G protein-coupled receptor (GPCR) family, triggering many receptor-dependent cellular signaling pathways. Among these five S1PRs, S1PR3 regulates many aspects of the vascular barrier and inflammatory responses in several pathological disorders related to the sepsis-mediated pro-inflammatory response [10, 11]. S1PR3 is mainly expressed in the cardiovascular system, lungs, kidneys, and spleen [12]. S1P ligation of S1PR3 can affect various organ system functions such as vascular permeability signaling [13], heart rate, and blood pressure [14]. The S1P–S1PR3 axis plays a pivotal role in sepsis. Two independent studies have reported that S1P levels in the blood are reduced in sepsis and that this reduction correlates with the severity of sepsis [15, 16]. S1PR3's function in sepsis has been studied by multiple groups. Niessen et al. [17] suggested that the S1P–S1PR3 axis regulates late-stage inflammation amplification in sepsis. Hou et al. [18] also showed that S1PR3 signaling is bactericidal because it is related to the function of monocytes, and that deficiency of S1PR3 could therefore increase susceptibility to sepsis; S1PR3 expression levels were upregulated in monocytes from sepsis patients, and higher levels of monocytic S1PR3 drove the innate immune response against bacterial infection and were highly associated with preferable outcomes. S1PR3 is also noted to be significantly upregulated in acute respiratory distress syndrome [19], a pulmonary consequence of sepsis, suggesting a key role in inflammation regulation. Therefore, S1PR3 is essential for survival in sepsis, and S1PR3-regulated cell signaling pathways in sepsis may offer novel targets for therapy. S1P–S1PR3 signaling may have predictive power for estimating the activity of host innate immune responses and subsequent pathogen elimination. We hypothesize that multiple S1PR3-related genes, in combination with pro- and anti-inflammatory cytokines, could correlate with clinical outcomes in sepsis patients. We aim to find those genes regulated by S1PR3 that may reflect the clinical outcome of sepsis patients. To achieve this aim, we examined whole blood gene expression in two standalone cohorts from the Gene Expression Omnibus (GEO) and identified a gene signature of 18 genes significantly associated with both S1PR3 and sepsis survival. Our results suggest that S1PR3-associated genes may improve outcome prediction in sepsis. Sepsis datasets Two sepsis datasets with whole blood samples (GSE54514 and GSE33118) from the GEO database (https://www.ncbi.nlm.nih.gov/gds/) were chosen as our research subjects (Table 1). The discovery cohort GSE54514 contains whole blood transcriptome data of 35 survivors and non-survivors of sepsis; samples were collected daily for up to 5 days from sepsis patients. For the validation cohort GSE33118, whole blood samples from 20 sepsis patients were tested before specific treatment.
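The series-matrix retrieval described above can be reproduced in a few lines of R. This is a minimal sketch assuming the Bioconductor packages GEOquery and Biobase are installed; all object names are illustrative and not taken from the paper.

```r
# Minimal sketch: download the pre-processed series matrices from GEO
library(GEOquery)
library(Biobase)

# Discovery cohort (GSE54514) and validation cohort (GSE33118)
gse_discovery  <- getGEO("GSE54514", GSEMatrix = TRUE)[[1]]
gse_validation <- getGEO("GSE33118", GSEMatrix = TRUE)[[1]]

expr_discovery  <- exprs(gse_discovery)   # probe-by-sample matrix of expression values
pheno_discovery <- pData(gse_discovery)   # sample annotations (e.g. survivor vs non-survivor)
annot_discovery <- fData(gse_discovery)   # platform annotation used to map probes to gene symbols
```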
Table 1 S1PR3-related gene signatures Series matrix files containing pre-processed gene expression values were obtained from series GSE54514 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE54514) and GSE33118 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE33118). All chip probe sets in matrix files had been transformed into corresponding gene symbols by utilizing chip platform files. Identifying S1PR3-related sepsis gene signature We detected the DEGs between 26 survivors and 9 non-survivors in the discovery cohort and set them as sepsis survival-related genes. The limma package [33] in R (version: 3.5.2) was used to identify DEGs in this study. The S1PR3-associated genes based on signaling pathways and associated proteins were confirmed by searching the STRING database (https://string-db.org/) [22]. We intersected the sepsis survival-related genes and S1PR3-associated genes, and these intersected genes were our S1PR3-related sepsis gene signature. KEGG pathway analysis and PPI network We used different methods to display our gene sets' functional profiles and interactions. The KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway analyses were performed by the clusterProfiler [21] which was a visualization tool for analyzing functional profiles such as enriched pathways for gene clusters. We constructed the PPI network based on the STRING protein and protein interactions data being visualized by Cytoscape 3.5. Correlation matrix made by corrplot was used to highlight the most correlated genes in our gene table. Expression score and risk score Each patient was allocated with an expression and risk score from gene expression and corresponding weight values of 18 genes. The linear formula corresponding to expression and risk score are: $$\begin{aligned} expression\,score & = \mathop \sum \limits_{i = 1}^{n} \left( {{ }\frac{{e_{i} - \mu_{i} }}{{s_{i} }}} \right) \\ risk\,score & = \mathop \sum \limits_{i = 1}^{n} W_{i} \left( {{ }\frac{{e_{i} - \mu_{i} }}{{s_{i} }}} \right) \\ \end{aligned}$$ Here, n is the count of genes included in the gene signature in each dataset, Wi represents the weight of the ith gene (see in Table 1), ei represents the expression level of the ith gene, and μi and si are the corresponding mean and standard deviation value for the ith gene among whole samples. R (version 3.5.0) was utilized to perform all the statistical calculations in this study. Receiver operating characteristic (ROC) curves and principal component analysis (PCA) was applied to prove the differentiating power of our S3MS on sepsis survival status. R package pROC (version 1.16.1) was used to visualize the ROC curve and compute the area under the curve (AUC). For PCA analysis, R built-in prcomp function was utilized to compute principal components, and the R package factorextra (version 1.0.7) to build the PCA plot. We set FDR < 0.05 as the statistically significant cutoff in this study. Identification of an S1PR3-related molecular signature (S3MS) associated with sepsis survival Firstly, we identified all S1PR3-interactive proteins. STRING (https://string-db.org/) is an online biological database with acknowledged or predicted protein–protein interactions data. 
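A minimal R sketch of the expression and risk scores defined above and of the ROC analysis, assuming `expr` is a genes-by-samples matrix restricted to the 18 signature genes, `weights` is a named vector of the per-gene weights from Table 1, and `status` is a binary survival label; all object names are illustrative.

```r
library(pROC)

# z-score each signature gene across samples: (e_i - mu_i) / s_i
z <- t(scale(t(expr)))

expression_score <- colSums(z)                           # unweighted sum over the 18 genes
risk_score       <- colSums(z * weights[rownames(z)])    # weighted sum, weights from Table 1

# Discriminative power of the risk score for survival status
roc_fit <- roc(status, risk_score)
auc(roc_fit)     # area under the ROC curve
plot(roc_fit)    # ROC curve as in Fig. 4b
```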
Utilizing the STRING database, we found a total of 226 S1PR3-related genes (Additonal file 1: Table S1) with a high confidence interaction score (interaction score > 0.7) in addition to all the active interaction sources, and all co-expression correlation r values of S1PR3 with S1PR3-related genes in discovery cohort had been shown in Additonal file 2: Table S2. Next, we defined all sepsis survival-related genes by setting the differentially expressed genes (DEGs) between the 26 survivors and 9 non-survivors as survival-related genes in the discovery cohort (NIH GEO GSE54514). 1078 up-regulated and 1134 down-regulated genes (false discovery rate [FDR] < 5% and fold change [FC] > 1.5) in non-survivors were found and characterized as sepsis survival-related genes (Additonal file 3: Table S3). S1PR3-related genes from the STRING database and sepsis survival genes from our discovery cohort were then characterized. KEGG is a collection of databases for protein-coding gene functions, combining transcriptome with signaling pathways [20]. In this study, we applied an R package-clusterProfiler to detect enriched KEGG pathways amongst our genes [21]. The most significantly enriched KEGG pathways among S1PR3-interactive genes include chemokine signaling, neuroactive ligand−receptor interaction, and human cytomegalovirus infection (Fig. 1a) while the sepsis survival-related genes exhibited significant enrichment of ribosome, tuberculosis, and several immune-related pathways (Fig. 1b). Enriched KEGG pathways. Enriched KEGG pathways among the S1PR3-related genes (a) and sepsis survival-related genes (b) To find the connection between S1PR3 pathways and genes that affect sepsis survival, we intersected the S1PR3-related genes and sepsis survival-related genes and identified 18 overlapping genes (Fig. 2a). These 18 genes are classified as our sepsis gene signature derived from S1PR3-related genes in this study and defined as S1PR3-related molecular signatures (S3MS) (Table 1). Interestingly, within the human genome, this overlap is statistically significant (hypergeometric p-value < 0.05), suggesting S1PR3 related genes are significantly enriched among survival-associated genes in sepsis. The heatmap demonstrates that the 18 genes (S3MS) can discriminate non-survivors from survivors through different gene expression patterns (Fig. 2b). Graphical maps of genome sequence provide a method to rapidly look at the features of specific genes. A chromosome-based circular genome plot was utilized to illustrate all DEGs' genome positions within chromosome (x-axis) and corresponding FC values (y-axis). Using a circular genome plot (Fig. 2c), we found that the 18 genes scattered in different genome regions suggesting that these genes are enriched in key pathways but not genetically linked due to chromosomal locations. S3MS in sepsis patients. Venn plot (a) shows sepsis survival and S1PR3-associated overlapping genes. Heatmap (b) shows the S3MS gene expression in the discovery cohort. Red represents increased gene expression, while blue means decreased gene expression; (c) Genome circular plot for sepsis survival DEGs to illustrate all DEGs' genome positions within chromosome (x-axis) and corresponding FC values (y-axis) Next, we characterized the biological function and interaction of these 18 genes in the S3MS. KEGG pathways such as sphingolipid signaling, chemokine signaling, Rap1 signaling, and cAMP signaling, besides others (Fig. 3a) were found to be significantly enriched among these 18 genes. 
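The significance of the 18-gene overlap can be checked with a hypergeometric test in base R; the background of roughly 20,000 protein-coding genes used below is an assumption for illustration, not the value used by the authors.

```r
# Hypergeometric test for the overlap between S1PR3-related genes and survival-related DEGs
n_background <- 20000          # assumed genome-wide background (illustrative)
n_s1pr3      <- 226            # S1PR3-related genes from STRING
n_survival   <- 1078 + 1134    # up- plus down-regulated survival-related DEGs
n_overlap    <- 18             # genes in the S3MS

# P(X >= 18) when n_survival genes are drawn from a background containing n_s1pr3 "successes"
phyper(n_overlap - 1, n_s1pr3, n_background - n_s1pr3, n_survival, lower.tail = FALSE)
```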
Several of these pathways were also associated with S1PR3 and sepsis survival-related immune pathways (KEGG pathways) identified in Fig. 1. This strongly indicates that the S3MS signature builds a bridge between S1PR3 and sepsis survival-related genes. The protein and protein interaction (PPI) network analysis [22] provides biological insights such as the classification of protein complexes. Our results (Fig. 3b) from the PPI network analysis clearly suggests that the 18 genes can be grouped into two gene clusters. One gene cluster was associated with the VEGF signaling pathway, Rap1 signaling pathway, and Fc gamma R-mediated phagocytosis, while the other was associated with Staphylococcus aureus infection, chemokine signaling pathway, and cytokine–cytokine receptor interaction (Fig. 3b). Additionally, the correlation matrix demonstrates that 16 genes have a mostly positive correlation with each other while only 2 genes (CXCL10 and RAC1) demonstrated a negative correlation (Fig. 3C). Our results confirm that the 18 genes identified in our S3MS had a strong connection and relationship. Biological characteristics of S3MS. Enriched KEGG pathways (a), PPI (b) and pairwise Pearson correlations matrix (c) among the S3MS S3MS predicts clinical outcomes in both discovery and validation cohorts We developed a risk assessment scoring method to measure the possibility of sepsis risk in patients using a linear combination of the 18-gene expression values (Table 1). Each value was given a weighted value which indicates the direction of differential expression in non-survivors, and patients were assigned a score based on those measures. Our results focused on both the discovery and validated cohorts. The results are in line with our expectations: risk scores from non-survivors were significantly higher than those of survivors in our study (Fig. 4a). Therefore, our gene signature from S1PR3 has the potential to predict clinical outcomes in sepsis. The S3MS-based sepsis risk score discriminates non-survivors from survivors. a Box plot of the S3MS-based risk scores in non-survivors and survivors. b ROC curves of the S3MS-based risk scores in distinguishing non-survivors from survivors. c Predictive power of the S3MS-based AUC values in the discovery and validation cohort compared with random gene signatures from the whole genome and sepsis survival-related genes Classification power of the 18-gene signature We investigated the classification performance of the S1PR3 gene signature in the discovery and validated datasets. The AUC under the ROC curve were 0.998 and 0.8 for the discovery and validation cohorts, respectively (Fig. 4b). A bioinformatics study by Venet et al. shows that most gene signatures randomly selected from the human genome with the same gene size were sometimes better than published gene signatures [23]. We collected 10,000 random gene signatures through random selection from the whole genome or sepsis survival-related genes and produced either an expression score for the entire genome or a risk score for the sepsis survival-related genes for each patient. Corresponding AUC values were then calculated for each random gene signature. Our gene signature had better power on the classification of sepsis survival than randomly generated genes with the same gene count (better than 95% percent random gene signatures in the whole genome) (Fig. 4c). Principal component analysis (PCA) was also performed to simplify the complexity in high-dimensional data like our 18-gene expression pattern. 
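The random-signature comparison described above amounts to a simple resampling loop. A sketch follows, assuming `expr_all` is the full genes-by-samples matrix, `status` the survival labels, and `observed_auc` the AUC of the S3MS risk score; the 10,000 draws follow the text, everything else is illustrative.

```r
library(pROC)

set.seed(1)
random_auc <- replicate(10000, {
  genes <- sample(rownames(expr_all), 18)                 # random 18-gene signature
  z     <- t(scale(t(expr_all[genes, , drop = FALSE])))   # per-gene z-scores
  score <- colSums(z)                                     # expression score per sample
  as.numeric(auc(roc(status, score, quiet = TRUE)))
})

# Proportion of random signatures that the S3MS outperforms
mean(observed_auc > random_auc)
```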
PCA (Fig. 5a, b) showed that the 18-gene signature thoroughly distinguished non-survival patients from survival patients in the discovery cohort, and only slightly overlapped in the validation cohort. PCA plot of S3MS for the discovery cohort (a) and validation cohort (b) Overall, our 18-gene had a great classification power of discriminating the non-survivors from survivors on a gene expression level. Sepsis is a systemic inflammatory response to pathogenic microorganisms such as gram-positive or gram-negative bacteria [24]. The mortality rate of sepsis is still relatively high (20% and 80% [25, 26]) despite improvement in sepsis treatment. Regulation of the immune response plays a vital role in the pathogenesis of sepsis [27, 28]. S1PR3, our target molecule, belongs to a family of GPCRs that regulate host innate immunity. Jinchao Hou et al. [18] validated that S1PR3 expression in macrophages is up-regulated following bacterial stimulation and ameliorated the severity of sepsis. Additionally, S1PR3 regulates various inflammatory responses and vascular barrier function in several pathological septic conditions [29]. S1PR3 is an attractive biomarker candidate due to its involvement in several signaling pathways and gene co-expression related to sepsis. Finding S1PR3 gene signatures in sepsis will reveal several biological or clinical features of the pathogenesis and progression of sepsis. In this study, we chose two gene expression datasets from the GEO database which contain whole blood samples from sepsis patients with clinical outcome information to identify an S1PR3 gene signature. Our investigations yielded the following observations: (1) Identification of an 18-gene signature in this study which could discriminate non-survivors from survivors at the gene expression level. (2) 18-gene signature is an "independent" prognostic marker for predicting or estimating the potential risk of sepsis. (3) Signaling pathways enriched in our gene signature linked S1PR3 pathways with severe sepsis-related processes. Biomarkers are a non-invasive clinical method that could objectively predict or evaluate usual biological processes or host response to therapy. Up to now, the use of molecular biomarkers has been only concentrated on the diagnosis of sepsis, no gene signatures are used in the prognosis of severe sepsis. No biomarkers with single genes are likely to adequately reveal the rapidly evolving nature of a septic patient's status. Some studies [6, 7] have attempted to combine several pro-inflammatory biomarkers or both pro- and anti-inflammatory markers randomly. However, we specifically derived the 18-gene signature based on the S1PR3 processes which are strongly related to the development of sepsis. Our gene signature performs very well as a novel biomarker for sepsis survival based on the performance of our risk assessment score in both the discovery and validation cohorts. This novel biomarker exhibits significance in three ways: (1) Independence. Our S3MS was not derived from a combination of other biomarkers, so it is a method to discover new sepsis biomarkers based on key signaling pathways; (2) Performance. S3MS performs not only better than random gene signatures from the whole genome with the same size but better than sepsis survival gene signatures with the same gene size (see Fig. 4). (3) Growth potential. Unlike most biomarkers such as circulating protein, gene signature can be adjusted dynamically to functionally perform better. 
In order to account for the potential false-positive results that are typical of multiple testing [23], we utilized a random gene signature comparison. Resampling tests were therefore applied in both the discovery and validation cohorts to address this concern. S3MS-based AUC values in the discovery and validation cohorts were compared with random gene signatures drawn from the whole genome and from sepsis survival-related genes. The results showed that the classification power of the 18-gene signature is generally better than that of gene sets randomly chosen from the whole genome. Compared to random genes or combined biomarkers, our S3MS also demonstrated a strong relationship with signaling pathways and protein interactions. Numerous signaling pathways were identified as enriched within our gene signature. The VEGF (vascular endothelial growth factor) pathway, which regulates vascular development in angiogenesis, was enriched in our gene signature. Yano et al. [30] highlighted the importance of the VEGF pathway in altering sepsis morbidity and mortality. Many innate or adaptive immune pathways (chemokine signaling pathway, cytokine–cytokine receptor interaction, and Rap1 signaling pathway) were enriched within our 18-gene signature. Down-regulation of these immune pathways is known to worsen the prognosis of sepsis. Moreover, the 18 genes showed strong interactions with each other. Hence, the 18-gene signature derived from S1PR3-related genes not only predicted the clinical outcome of sepsis patients but also revealed signaling pathways that could play a pivotal role in the development and progression of sepsis. Among S1PRs 1–5, we found that both S1PR1- [31] and S1PR3-related gene signatures are capable of predicting the survival of patients with sepsis. Several studies [18, 32] have already indicated that S1P–S1PR3 signaling drives bacterial killing and that S1PR3 is associated with preferable sepsis outcomes. In contrast to S1PR3, S1PR1 does not have corresponding experimental support, so the S1PR3-related signature holds more prognostic value than the S1PR1-related one. S1PR1 and S1PR3 play diverse roles and belong to different signaling pathways in sepsis progression: S1P–S1PR1 signaling plays a critical role in supporting the integrity of the endothelial barrier, while S1P–S1PR3 signaling drives bacterial killing in macrophages. Only 4 genes are shared between the S1PR1-associated genes (557 genes) and the S1PR3-related genes (226 genes). In general, the S1PR1 and S1PR3 gene signatures reflect diverse molecular mechanisms in the pathology of sepsis, so we decided to publish them separately. In this study, we showed that the S1PR3-related protein-coding gene signature is capable of predicting which patients are at an elevated risk of developing severe sepsis. However, our work on the S1PR3-related gene signature was based only on bioinformatics methods, and the potential power of our gene signature needs to be verified by clinical investigations. In conclusion, we identified a gene signature containing 18 protein-coding genes capable of serving as reproducible predictors of clinical outcomes in patients with sepsis. Thus, our results could have potential value in clinical evaluations and disease monitoring in patients with sepsis and may ultimately help improve sepsis treatment algorithms based on severity risk. The gene expression and clinical data of GSE54514 and GSE33118 were downloaded from the GEO database, while the gene functional interaction data were downloaded from the STRING database (version 11).
DEGs: Differentially expressed genes
GPCRs: G protein-coupled receptors
GEO: Gene Expression Omnibus
KEGG: Kyoto Encyclopedia of Genes and Genomes
PPI: Protein–protein interaction
PCA: Principal component analysis
ROC: Receiver operating characteristic
S1P: Sphingosine-1-phosphate
S1PR3: S1P receptor 3
S3MS: S1PR3-related molecular signature
Buehler SS, Madison B, Snyder SR, Derzon JH, Cornish NE, Saubolle MA, et al. Effectiveness of practices to increase timeliness of providing targeted therapy for inpatients with bloodstream infections: a laboratory medicine best practices systematic review and meta-analysis. Clin Microbiol Rev. 2015;29:59–103. Epstein L, Dantes R, Magill S, Fiore A. Varying estimates of sepsis mortality using death certificates and administrative codes—United States, 1999–2014. MMWR Morb Mortal Wkly Rep. 2016;65:342–5. Oberholzer A, Oberholzer C, Moldawer LL. Sepsis syndromes: understanding the role of innate and acquired immunity. Shock Augusta Ga. 2001;16:83–96. Biron BM, Ayala A, Lomas-Neira JL. Biomarkers for sepsis: What is and what might be? Biomark Insights. 2015;10(Suppl 4):7–17. Faix JD. Biomarkers of sepsis. Crit Rev Clin Lab Sci. 2013;50:23–36. Andaluz-Ojeda D, Bobillo F, Iglesias V, Almansa R, Rico L, Gandía F, et al. A combined score of pro- and anti-inflammatory interleukins improves mortality prediction in severe sepsis. Cytokine. 2012;57:332–6. Gouel-Chéron A, Allaouchiche B, Guignant C, Davin F, Floccard B, Monneret G, et al. Early interleukin-6 and slope of monocyte human leukocyte antigen-DR: a powerful association to predict the development of sepsis after major trauma. PLoS ONE. 2012;7:e33095. Davenport EE, Burnham KL, Radhakrishnan J, Humburg P, Hutton P, Mills TC, et al. Genomic landscape of the individual host response and outcomes in sepsis: a prospective cohort study. Lancet Respir Med. 2016;4:259–71. Reyes M, Filbin MR, Bhattacharyya RP, Billman K, Eisenhaure T, Hung DT, et al. An immune-cell signature of bacterial sepsis. Nat Med. 2020;26:333–40. Bajwa A, Huang L, Kurmaeva E, Gigliotti JC, Ye H, Miller J, et al. Sphingosine 1-phosphate receptor 3-deficient dendritic cells modulate splenic responses to ischemia-reperfusion injury. J Am Soc Nephrol JASN. 2016;27:1076–90. Nussbaum C, Bannenberg S, Keul P, Gräler MH, Gonçalves-De-Albuquerque CF, Korhonen H, et al. Sphingosine-1-phosphate receptor 3 promotes leukocyte rolling by mobilizing endothelial P-selectin. Nat Commun. 2015;6. Ishii I, Friedman B, Ye X, Kawamura S, McGiffert C, Contos JJA, et al. Selective loss of sphingosine 1-phosphate signaling with no obvious phenotypic abnormality in mice lacking its G protein-coupled receptor, LP B3/EDG-3. J Biol Chem. 2001;276:33697–704. Murch O, Collin M, Hinds CJ, Thiemermann C. Lipoproteins in inflammation and sepsis. I. Basic science. Intensive Care Med. 2007;33:13–24. Xiang Y, Laurent B, Hsu CH, Nachtergaele S, Lu Z, Sheng W, et al. RNA m6A methylation regulates the ultraviolet-induced DNA damage response. Nature. 2017;543:573–6. Winkler MS, Nierhaus A, Holzmann M, Mudersbach E, Bauer A, Robbe L, et al. Decreased serum concentrations of sphingosine-1-phosphate in sepsis. Crit Care. 2015;19. Frej C, Linder A, Happonen KE, Taylor FB, Lupu F, Dahlbäck B. Sphingosine 1-phosphate and its carrier apolipoprotein M in human sepsis and in Escherichia coli sepsis in baboons. J Cell Mol Med. 2016;20:1170–81. Niessen F, Schaffner F, Furlan-Freguia C, Pawlinski R, Bhattacharjee G, Chun J, et al. Dendritic cell PAR1-S1P3 signalling couples coagulation and inflammation. Nature. 2008;452:654–8.
Hou JC, Chen QX, Wu XL, Zhao DY, Reuveni H, Licht T, et al. S1PR3 signaling drives bacterial killing and is required for survival in bacterial sepsis. Am J Respir Crit Care Med. 2017;196:1559–70. Sun X, Singleton PA, Letsiou E, Zhao J, Belvitch P, Sammani S, et al. Sphingosine-1-phosphate receptor-3 is a novel biomarker in acute lung injury. Am J Respir Cell Mol Biol. 2012;47:628–36. Kanehisa M, Goto S. KEGG: kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 2000;28:27–30. Yu G, Wang L-G, Han Y, He Q-Y. clusterProfiler: an R package for comparing biological themes among gene clusters. Omics J Integr Biol. 2012;16:284–7. Snel B, Lehmann G, Bork P, Huynen MA. STRING: a web-server to retrieve and display the repeatedly occurring neighbourhood of a gene. Nucl Acids Res. 2000;28:3442–4. Venet D, Dumont JE, Detours V. Most random gene expression signatures are significantly associated with breast cancer outcome. PLoS Comput Biol. 2011;7. Crowley SR. The pathogenesis of septic shock. Heart Lung J Acute Crit Care. 1996;25:124–34. Kaukonen K-M, Bailey M, Suzuki S, Pilcher D, Bellomo R. Mortality related to severe sepsis and septic shock among critically ill patients in Australia and New Zealand, 2000–2012. JAMA. 2014;311:1308. Zhang H, Desai NN, Olivera A, Seki T, Brooker G, Spiegel S. Sphingosine-1-phosphate, a novel lipid, involved in cellular proliferation. J Cell Biol. 1991;114:155–67. Annane D, Bellissant E, Cavaillon JM. Septic shock. Lancet. 2005;365:63–78. Cristofaro P, Opal SM. The Toll-like receptors and their role in septic shock. Expert Opin Ther Targets. 2003;7:603–12. Benechet AP, Menon M, Xu D, Samji T, Maher L, Murooka TT, et al. T cell-intrinsic S1PR1 regulates endogenous effector T-cell egress dynamics from lymph nodes during infection. Proc Natl Acad Sci. 2016;113:2182–7. Yano K, Liaw PC, Mullington JM, Shih S-C, Okada H, Bodyak N, et al. Vascular endothelial growth factor is an important determinant of sepsis morbidity and mortality. J Exp Med. 2006;203:1447–58. Feng A, Rice AD, Zhang Y, Kelly GT, Zhou T, Wang T. S1PR1-associated molecular signature predicts survival in patients with sepsis: SHOCK. 2020;53:284–92. Shea BS, Opal SM. The role of S1PR3 in protection from bacterial sepsis. Am J Respir Crit Care Med. 2017;196:1500–2. Ritchie ME, Phipson B, Wu D, Hu Y, Law CW, Shi W, et al. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucl Acids Res. 2015;43:e47. The abstract of this study has been published in 42nd Annual Conference on Shock (Feng A, Rice AD, Kelly GT, Wang T. Identification of S1PR3 Gene Signature Involved in Survival of Sepsis Patients. SHOCK. 2019;51:50-51). This study is supported in part by National Institutes of Health Grants (P01HL134610, P01HL146369, R01HL142212, and T32HL007249). Each funding body played a role in the data analysis and interpretation. Department of Internal Medicine, College of Medicine-Phoenix, University of Arizona, 475 N. 5th Street, Phoenix, AZ, 85004, USA Anlin Feng, Wenli Ma, Reem Faraj, Gabriel T. Kelly, Michael B. Fallon & Ting Wang Department of Medicine, College of Medicine-Tucson, University of Arizona, Tucson, AZ, USA Stephen M. Black Anlin Feng Wenli Ma Reem Faraj Gabriel T. Kelly Michael B. Fallon Ting Wang AF and TW designed the study and performed analyses; WM, RF, SB, GK, and MF gave a lot of suggestions and joined the interpretation of data; AF drated the main article; and all authors reviewed and revised the manuscript. 
All authors read and approved the final manuscript. Correspondence to Ting Wang. The list of S1PR3-related genes. Co-expression correlation r values of S1PR3 with S1PR3-related genes. The list of sepsis survival-related genes. Feng, A., Ma, W., Faraj, R. et al. Identification of S1PR3 gene signature involved in survival of sepsis patients. BMC Med Genomics 14, 43 (2021). https://doi.org/10.1186/s12920-021-00886-2 Microarray S1PR3 S3MS
CommonCrawl
January 2020, 19(1): 175-202. doi: 10.3934/cpaa.2020010
Droplet phase in a nonlocal isoperimetric problem under confinement
Stan Alama 1, Lia Bronsard 1, Rustum Choksi 2 and Ihsan Topaloglu 3
1. Department of Mathematics and Statistics, McMaster University, Hamilton, ON, Canada
2. Department of Mathematics and Statistics, McGill University, Montréal, QC, Canada
3. Department of Mathematics and Applied Mathematics, Virginia Commonwealth University, Richmond, VA, USA
Received August 2018; Revised March 2019; Published July 2019
We address small volume-fraction asymptotic properties of a nonlocal isoperimetric functional with a confinement term, derived as the sharp interface limit of a variational model for self-assembly of diblock copolymers under confinement by nanoparticle inclusion. We introduce a small parameter $ \eta $ to represent the size of the domains of the minority phase, and study the resulting droplet regime as $ \eta\to 0 $. By considering confinement densities which are spatially variable and attain a unique nondegenerate maximum, we present a two-scale asymptotic analysis wherein a separation of length scales is captured due to competition between the nonlocal repulsive and confining attractive effects in the energy. A key role is played by a parameter $ M $ which gives the total volume of the droplets at order $ \eta^3 $ and its relation to existence and non-existence of Gamow's Liquid Drop model on $ \mathbb{R}^3 $. For large values of $ M $, the minority phase splits into several droplets at an intermediate scale $ \eta^{1/3} $, while for small $ M $ minimizers form a single droplet converging to the maximum of the confinement density.
Keywords: Nonlocal isoperimetric problem, $ \Gamma $-convergence, self-assembly of diblock copolymers, confinement, phase separation, uniformly charged liquid.
Mathematics Subject Classification: 35Q70, 49Q20, 49S05, 74N15, 82D60.
Citation: Stan Alama, Lia Bronsard, Rustum Choksi, Ihsan Topaloglu. Droplet phase in a nonlocal isoperimetric problem under confinement. Communications on Pure & Applied Analysis, 2020, 19 (1) : 175-202. doi: 10.3934/cpaa.2020010
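For readers unfamiliar with the reference point mentioned in the abstract, Gamow's Liquid Drop model on $ \mathbb{R}^3 $ is the classical nonlocal isoperimetric problem, which (up to normalization constants) reads

$$ \min\Big\{ \mathrm{Per}(\Omega) + \iint_{\Omega\times\Omega} \frac{dx\,dy}{|x-y|} \;:\; \Omega\subset\mathbb{R}^3,\ |\Omega|=m \Big\}; $$

minimizers exist for small masses $ m $ and fail to exist for large $ m $, and the parameter $ M $ in the abstract plays the analogous role for the limiting droplet problem. This formula is standard background and is not quoted from the paper.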
Figure 1. The attraction to the origin and scaling at the rate $ \delta = \eta^{1/3} $
Figure 2. Minimizing configurations of the second-order energy $ \mathsf{F}_{m^1,\dots,m^n} $ with equal mass $ m^i = 1/100 $ for 100 particles with varying powers $ q $ of degenerate penalization $ \rho(x)-\rho_0\sim |x|^q $. Minimizing configurations are obtained as steady-states of the gradient flow of the energy $ \mathsf{F}_{m^1,\dots,m^n} $
CommonCrawl
Assessing the utility and limitations of accelerometers and machine learning approaches in classifying behaviour during lactation in a phocid seal Courtney R. Shuert ORCID: orcid.org/0000-0002-3202-48971, Patrick P. Pomeroy ORCID: orcid.org/0000-0003-1603-56302 & Sean D. Twiss ORCID: orcid.org/0000-0002-1923-88741 Classifying behaviour with animal-borne accelerometers is quickly becoming a popular tool for remotely observing behavioural states in a variety of species. Most accelerometry work in pinnipeds has focused on classifying behaviour at sea often quantifying behavioural trade-offs associated with foraging and diving in income breeders. Very little work to date has been done to resolve behaviour during the critical period of lactation in a capital breeder. Capital breeding phocids possess finite reserves that they must allocate appropriately to maintain themselves and their new offspring during their brief nursing period. Within this short time, fine-scale behavioural trade-offs can have significant fitness consequences for mother and offspring and must be carefully managed. Here, we present a case study in extracting and classifying lactation behaviours in a wild, breeding pinniped, the grey seal (Halichoerus grypus). Using random forest models, we were able to resolve 4 behavioural states that constitute the majority of a female grey seals' activity budget during lactation. Resting, alert, nursing, and a form of pup interaction were extracted and classified reliably. For the first time, we quantified the potential confounding variance associated with individual differences in a wild context as well as differences due to sampling location in a largely inactive model species. At this stage, the majority of a female grey seal's activity budget was classified well using accelerometers, but some rare and context-dependent behaviours were not well captured. While we did find significant variation between individuals in behavioural mechanics, individuals did not differ significantly within themselves; inter-individual variability should be an important consideration in future efforts. These methods can be extended to other efforts to study grey seals and other pinnipeds who exhibit a capital breeding system. Using accelerometers to classify behaviour during lactation allows for fine-scale assessments of time and energy trade-offs for species with fixed stores. Observing animals has been the hallmark approach of ethological studies. Often credited with formalizing the field, Altmann [1] gave researchers a toolkit for sampling behaviour state and context in the field through comparable and repeatable approaches to measures of activity and interaction. Quantitative observational studies have been used to understand behaviour in a wide range of contexts, such as individual- or population-level foraging decisions [2, 3] and investigating the mechanisms for conflict and conflict-avoidance [4]. Comparative observations also allow examination of how behaviour may vary over time such as differences between day and night activities [5, 6] or across individuals, including personality types and consistent individual differences [7,8,9]. With the advancement of animal-borne data loggers, researchers have been able to extend the application of behavioural studies to species that have typically been very difficult to observe in the wild, such as marine mammals. 
More specifically, triaxial accelerometers have been used to infer behaviour remotely in pinnipeds when they are unobservable during trips to and from feeding aggregations [10,11,12,13,14] and other at-sea activities [15, 16]. Often, these accelerometry deployments focus on building coarse-scale activity budgets for resolving energetics associated with foraging and diving or towards more fine-scale event detection, such as head-striking behaviour, to infer the rate of prey consumption relative to energy expenditure at sea [17,18,19,20]. These studies tend to focus on species who exhibit an income approach to the reproductive period of their life history, in which they must regularly supplement their energy stores to maintain and provision their pups, or focus on detecting and classifying behaviour outside of the reproductive period (e.g. [10, 14]). While accelerometers have been used extensively to study the behaviour of terrestrial animals, rarely has any accelerometry research been geared to the consequences of behaviour associated with the brief, but important on-land portion of seal life history (e.g. [21,22,23,24,25,26]). The application of machine learning methods has also become a popular tool for remotely classifying behaviour from accelerometers in a variety of species (e.g. [27,28,29,30].). While accelerometers often present a novel tool for capturing behaviour, the associated data sets can quickly become monumental tasks to examine manually [31]. Supervised machine learning presents a way to overcome this. By using a period of time where the behaviour of an individual is known, a concurrent set of accelerometry data can be labelled and used to train a classification algorithm of choice in order to classify behaviour outside of the observable period [31]. Many different algorithms are available to use in classification, ranging from simple linear discriminant analyses (e.g. [32]) and decision tree algorithms (e.g. [33]) to more advanced black box type approaches such as random forests (e.g. [24]), support vector machines (e.g. [27]), and artificial neural networks (e.g. [34]). Gaining access to individuals in order to build a training data set can often be challenging. Captive surrogates have been used with accelerometers mounted in an analogous way to those in the wild and used to train an algorithm to classify the behaviour of their wild counterparts (e.g. [22, 23, 35, 36]). One such study noted, however, that captive surrogates may not exhibit behaviour in the same mechanistic fashion as those in the wild which may lead to poor, yet undetectable, model performance in classification of unknown data in wild individuals [26]. Having access to behavioural information in a wild context is therefore key to ensuring that trained data match that of a wild cohort of individuals and will likely more accurately characterize behaviour when out of sight. During their 18-day period on shore, breeding female grey seals have fixed resources that they must allocate to maintain themselves and their pup [3, 37,38,39]. Behavioural decisions and small fluctuations in activity likely have an impact on this energetic allocation. Grey seals offer a good system to look at activity in detail, but visual observations to assess behaviour are limited to daylight hours. During the UK grey seal breeding season in the autumn, this may only be about one-third of their daily cycle at best. 
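As a rough illustration of the supervised labelling step described above (not code from the study itself), observed behaviour bouts can be matched to concurrent accelerometer records by timestamp; `acc`, `obs`, and their columns are illustrative names.

```r
# Attach observed behaviour labels to concurrent accelerometer records
label_acceleration <- function(acc, obs) {
  acc$behaviour <- NA_character_
  for (i in seq_len(nrow(obs))) {
    in_bout <- acc$time >= obs$start[i] & acc$time <= obs$end[i]
    acc$behaviour[in_bout] <- obs$behaviour[i]
  }
  acc[!is.na(acc$behaviour), ]   # keep only records observed during a known behaviour
}

labelled <- label_acceleration(acc, obs)
```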
The use of supervised machine learning algorithms would be extremely powerful in order to elucidate behaviour outside of this limited observable time. While many previous studies have evaluated the mechanics of behaviour at sea, the authors have been unable to find any published studies that attempt to resolve and classify lactation and breeding behaviour on land in grey seals and other pinnipeds using accelerometry (e.g. [40]). Accelerometry-derived activity will also allow not only for the assessment of behaviour overnight, an area of research that is largely either ignored or inaccessible (e.g. [5]), but also will overcome the limitations of visual focal sampling by recording data continuously and simultaneously over many individuals, free from observer biases. In order to conserve resources, grey seals typically tend to remain inactive for long periods of time and only move about to either reposition themselves relative to a pup or to intercept a threat, be it a male or another female and her pup [38]. Grey seals are also known to occasionally travel to and from pools for thermoregulation, but the cost of which is largely unevaluated [41, 42]. Most active behaviours are therefore limited to those such as vigilance or pup-checking where the head may be in motion, leaving the body largely unmoving. Consistent individual differences in time spent alert have already been shown as an important indicator of stress management and coping styles in grey seals [9, 43]. While many studies advise placement of accelerometers close to the centre of mass as a better indicator of energy expenditure (e.g. [31]), head-mounted accelerometers may give a better indication of vigilance, an important indicator of stress management in many terrestrial animals [44,45,46,47,48,49]. This motivated the comparison of the resolution of data from both head (vigilance) and torso-mounted accelerometers (energy expenditure) in the same context and directly assess trade-offs associated with behaviour detection for a largely inactive model species (Fig. 1). Our study encompassed two successive breeding seasons, during which time individuals were exposed to varying environmental and animal density conditions across years on the breeding colony that may confound an in situ accelerometry study. As grey seals are typically site faithful [50], the amount of variability and repeatability between years for accelerometry feature characteristics measured in repeat capture females were quantified as well as the amount of variance between individual females. Example of accelerometers mounts for female grey seals. Example of attachment set up for a a head-mounted accelerometer, and b a torso-mounted accelerometer in addition to a head mount, contained within a custom-designed ballistic nylon footprint on a female grey seal. Tag-frame axes labelled with arrows pointing in direction of positive acceleration values for each axis (X, Y, and Z). Each accelerometer was configured to measure ± 2 g at 50 Hz (2015) and 25 Hz (2016). Heart rate monitor also pictured in panel B as part of larger study design The main aim of this study was to build a useable ethogram of behavioural states as derived from accelerometers during lactation to potentially extend to other efforts to study grey seals and other pinnipeds who exhibit a capital breeding system. Video footage of female grey seals was decoded using a very detailed ethogram of behaviours as part of a larger effort to study grey seal ethology. 
These detailed behaviours were condensed into broader categories of 8 behavioural states and used to label the concurrent acceleration data collected during the 2015 and 2016 breeding seasons on the Isle of May, Scotland. Several females in 2016 were equipped with two accelerometers, one on the head and one on the torso, to evaluate the effect of placement on behaviour detection. Due to an unforeseen glitch in the firmware of the accelerometers, sampling rates differed between seasons (50 Hz in 2015; 25 Hz in 2016). Labelled accelerometry data were then used to train a random forest algorithm using a subset of training data (60%), with model performance assessed through the remaining data (40%) separately for each season. In order to examine trade-offs in behaviour detection with accelerometer placement, separate random forest models were constructed for a subset of individuals who were tagged with both an accelerometer on the head and torso. Random forest model results from pooled data were also compared to results of random forests fit to each individual. This was done in order to compare and contrast the trade-offs in model accuracy and training data sample size. In addition, we wished to evaluate the stereotypy of behaviours for females recaptured in two subsequent breeding seasons, with the 2015 data subsampled to match the sampling rate of 2016, by quantifying the amount of inter-individual variability present in the accelerometry features using variance and repeatability estimates. Using random forests, we were able to classify four of six core behaviours (Rest, Presenting/Nursing, Alert, and Flippering pup) reliably during lactation in grey seals (Table 1). Between years and accelerometer placement schemes, static behaviours (Rest, Presenting/Nursing, Flippering pup) were consistently classified well based on measures of precision (true positive accuracy), recall (sensitivity), and F1 (the harmonic mean of Precision and Recall) between training (60%) and testing data (40%). All non-Rest behaviours were misclassified to some extent as Rest, resulting in a high number of false positives (values in italics across the top row; Table 2). Accelerometers sampling at a higher frequency (50 Hz in 2015; Fig. 2) was better able to classify behaviours such as Alert than those sampling at a lower frequency (25 Hz in 2016; Table 3), resulting in an F1 of 45% greater for 2015. However, torso-mounted accelerometers generally performed better than head-mounted accelerometers on many of the static behaviours associated with lactation, such as Presenting/Nursing and Rest, despite the lower sampling rate. This resulted in F1 being 29% greater for accelerometers mounted on the torso against those mounted on the head in 2016 (Table 3). Locomotion events, however, were completely undetected in the random forest models for torso-mounted accelerometers. Error estimates and out-of-bag errors (bootstrapped samples from random forest model building) against number of trees grown can be found in the supplementary materials (see Additional files 1–3). Table 1 Ethogram of female grey seal behaviour during lactation Table 2 Confusion matrix of behaviour classified from random forests Precision and recall for head-mounted accelerometers. Scatter plot of precision and recall for the random forest model for head-mounted accelerometers for 2015 (sampled at 50 Hz) on lactating grey seals. 
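A sketch of the 60/40 train-test workflow described above, using the randomForest package; `features` is assumed to be a data frame of per-window feature variables with a `behaviour` factor column, and the number of trees is illustrative rather than the value used in the study.

```r
library(randomForest)

set.seed(42)
train_idx <- sample(nrow(features), size = round(0.6 * nrow(features)))
train <- features[train_idx, ]
test  <- features[-train_idx, ]

rf <- randomForest(behaviour ~ ., data = train, ntree = 500, importance = TRUE)

pred <- predict(rf, newdata = test)
cm   <- table(predicted = pred, observed = test$behaviour)   # confusion matrix as in Table 2

precision <- diag(cm) / rowSums(cm)   # true positives / all predicted positives
recall    <- diag(cm) / colSums(cm)   # true positives / all actual positives
f1        <- 2 * precision * recall / (precision + recall)
```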
Behaviours include Rest, Alert, Presenting/Nursing, Locomotion, Comfort Movements, and Flippering pup as defined in Table 1 Table 3 Comparison of behaviour classification across accelerometer mounts Of the feature variables calculated to summarize the acceleration data (see definitions and derivations in Table 4), components relating to static acceleration (those relating to body posture) were found to be the most important for classifying behaviours. According to random forest models, stZ, stX, stY ranked as top three most important variables, followed by Pitch and Roll relative to the decreasing Gini index (Fig. 3). Gini will approach zero as each of the branches contain a single behavioural category; therefore, a greater decrease in mean Gini indicates that the feature variable in question is more important for splitting these branches and differentiating the behaviours within the random forest model [53]. Summaries of these top five feature variables with respect to behaviour can be found in the additional files (see Additional file 4) as well as a list of full Gini index rankings of all features (Additional file 5). Power spectrum densities in all acceleration dimensions and those pertaining to VeDBA and VeDBAs were also very important (Additional file 5). Table 4 Summary of feature variables extracted from acceleration data Variable importance for classifying female grey seal behaviour. Ten feature variables with the highest mean decrease in Gini, indicating the relative importance of each of the feature variables within the random forest model classifying 6 behaviours in lactating grey seals using head-mounted accelerometers (2015, 50 Hz). Top feature variables included static acceleration components (stZ, stY, stX) and their derivatives, pitch and roll, as well as smoothed VeDBA and elements of power spectrum densities (PSD1, PSD2) in the X and Y dimensions as defined in Table 4 The effects of year and individual on the top feature variable, stZ, were modelled as a generalized linear mixed effects model with maternal post-partum mass a fixed effect to account for the potential influence of inter-annual variation in cost of transport associated with changes in mass between years. The variance of these two random effects, individual and year, were computed over 1000 bootstrapped samples using the package 'rptR' for repeat capture females in R [63, 64]. Overall, Presenting/Nursing and Comfort Movement were found to vary greatly between individuals for the top feature variable, stZ, for torso-mounted data (Fig. 4). The variance component due to individuals was 12.2 ± 5.3%, for Presenting/Nursing and 21.2 ± 9.6% for Comfort Movement across bootstrapped samples (Table 5). Other behaviours, however, showed less than 5% variance. No variance was explained by the effect of year across bootstrapped samples. However, top feature variables most likely to be associated with the position and movement mechanics for each behaviour appear to be repeatable across individuals, indicating varying degrees of stereotypy (Table 5). Alert and Locomotion, largely upright behaviours, appear to be consistent for each seal with respect to stZ, while Rest and Presenting/Nursing, where the head is most often tilting in a downward direction, were consistent and repeatable with respect to stX (Table 5). 
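A sketch of how the top-ranked features could be derived from raw tri-axial acceleration, in the spirit of the derivations listed in Table 4; the running-mean window length and the pitch/roll conventions below are assumptions, not the exact definitions used in the study.

```r
# Separate static (postural) and dynamic acceleration with a centred running mean
running_mean <- function(v, n) as.numeric(stats::filter(v, rep(1 / n, n), sides = 2))

extract_features <- function(x, y, z, fs, window_s = 1.5) {
  n <- round(window_s * fs)
  stX <- running_mean(x, n); stY <- running_mean(y, n); stZ <- running_mean(z, n)

  # Dynamic component = raw minus static; VeDBA = vector norm of the dynamic component
  dX <- x - stX; dY <- y - stY; dZ <- z - stZ
  VeDBA  <- sqrt(dX^2 + dY^2 + dZ^2)
  VeDBAs <- running_mean(VeDBA, n)          # smoothed VeDBA

  # Posture angles from the static components (one common convention)
  pitch <- atan2(stX, sqrt(stY^2 + stZ^2)) * 180 / pi
  roll  <- atan2(stY, sqrt(stX^2 + stZ^2)) * 180 / pi

  data.frame(stX, stY, stZ, VeDBA, VeDBAs, pitch, roll)
}
```

Records at the edges of the smoothing window come back as NA and would be trimmed before any modelling step.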
Flippering pup was found to be highly significant and repeatable within individuals between years with respect to Roll, potentially indicating a side preference and a high degree of stereotypy (adjusted-R = 0.925; D = 1070, p < 0.001 as determined from a likelihood ratio test). This led to evidence that some females lay preferentially on one side of their body (as indicated by Roll) during the Flippering pup behaviour, potentially indicating lateralization given its highly significant repeatability (Fig. 5). Four of the females were found to preferentially lay on their right side, where Roll was significantly less than 0 as determined through a one sample signed rank test ('0J': V = 148, p < 0.001; '74,789': V = 1017, p < 0.001; '74,904': V = 3598, p < 0.001; and '74,962': V = 1207, p < 0.001; see Fig. 5). Likewise, five additional females were found to preferentially lay on their left side, where Roll was significantly greater than 0 as determined through a one sample signed rank test ('45,447' V = 145,710, p < 0.001; '58,038': V = 46,524, p < 0.001; '74,920': V = 475,420, p < 0.001; '72,146': V = 1,125,800, p < 0.001; and '4H': V = 84,251, p < 0.001; see Fig. 5). Individual variability of behaviours with respect to static acceleration in the Z-axis. Boxplot of each behavioural group (Rest, Alert, Presenting/Nursing (Nurse), Locomotion (Loco.), Comfort movements (CM), and Flippering pup (Flip. Pup)) with respect to static acceleration in the Z-axis (stZ) for torso-mounted accelerometers, the feature variable found to be most important in differentiating behaviour in the final random forest model. A high degree of variability existed between individuals and would likely contribute to a lower Precision and Recall when fitting random forests using pooled data Table 5 Variance and repeatability estimates for individual ID Individual differences in side preference for Flippering pup behaviour. Boxplot of static acceleration in the Y-axis, as represented by the derivative Roll, with respect to individual for repeat capture females. Some females appear to show preference for being positioned on the right (values towards − 1) or the left (values towards + 1), indicating individual lateralization in a female–pup interaction (Flippering pup) and was found to be highly significantly repeatable. Those with (**) were found to spend significantly more time on their right (R) or left (L) side as determined through a one sample signed rank test Four behaviours, representing upwards of 90% of a lactating female grey seal's activity budget, were classified well using accelerometry. Overall, several core behaviours of grey seals during lactation were resolved more successfully than others and the reasons varied. Behaviours that were largely stationary, such as Rest and Presenting/Nursing, were best classified in our random forest model. We were also able to reliably classify a form of mother–pup interaction, Flippering pup, with many females showing a specific bias towards left- or right-side positioning, potentially indicating a form of lateralization. Our two movement behaviours of interest, Locomotion and Comfort Movement, were poorly classified regardless of sampling rate (year) or accelerometer placement, despite being among the most popular behaviours to classify in the literature across taxa [54, 65,66,67]. 
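As a concrete illustration of these two analyses (and not the authors' code), the side-preference tests and the adjusted repeatability of Roll can be sketched in R as below. The data frame roll_df, with one row per second of Flippering pup behaviour and columns id, year, roll, and mass, is hypothetical, and the rpt() call assumes the interface of rptR version 0.9 or later [63].

```r
## One-sample Wilcoxon signed-rank test of Roll against 0 for each female;
## a median Roll different from 0 suggests a consistent side preference.
side_tests <- do.call(rbind, lapply(split(roll_df, roll_df$id), function(d) {
  wt <- wilcox.test(d$roll, mu = 0)
  data.frame(id = d$id[1], median_roll = median(d$roll),
             V = unname(wt$statistic), p = wt$p.value)
}))

## Adjusted repeatability of Roll across years for repeat-capture females,
## with post-partum mass as a fixed effect (1000 bootstrap samples).
library(rptR)
rep_roll <- rpt(roll ~ mass + (1 | id) + (1 | year),
                grname = c("id", "year"), data = roll_df,
                datatype = "Gaussian", nboot = 1000, npermut = 0)
summary(rep_roll)
```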
Torso-mounted accelerometers generally performed better than head-mounted accelerometers on the same individuals, but a higher sampling rate still achieved better classification for most behaviours. While a higher sampling rate may have achieved better classification overall at the cost of a shorter deployment time, especially with the consideration of technical issues from tag malfunction in this study, we were still able to resolve a coarse level of behaviour, with 4 of 6 target behaviours classified reliably. It was notable that individuals differed significantly, as indicated by individual ID contributing a large portion of variance in modelling. Individuals were largely consistent within themselves, however, in the mechanics of behaviour between years. Limitations of behavioural classification Random forests have been used to classify behaviour in a wide range of taxa, including domestic sheep (Ovis aries, [68]), Eurasian beavers (Castor fiber; [69]), brown hares (Lepus europaeus; [24]), puma (Puma concolor; [70]), griffon vultures (Gyps fulvus; [32]), and other pinniped species (e.g. [40]). In all these studies, only three or four behavioural states with extremely disparate feature characteristics could be discriminated successfully, as was the case in the current investigation. While random forests are computationally intensive to train, they take much less time to classify new behavioural data and are generally robust given their two layers of randomness in classification [53]. Unsurprisingly, in the estimates of error across trees (Additional files 1–3), the movement behaviours (Locomotion and Comfort Movement) with the poorest Precision and Recall also had the highest error rates. Some on-land behaviours of interest in female grey seals may be too variable in execution (amplitude of signal) and duration (presence in time) to classify accurately given the sensitivity of the accelerometers within the current study design. In signal theory, random signals such as might arise from a behaviour like Comfort Movement are very difficult to characterize [71]. These signals are often contaminated with multiple spectral densities and frequencies that will vary in magnitude over time. Often these signals violate the assumptions of transforms, such as the fast Fourier transform used here, which may lead to inconsistent features even when properly windowed through more advanced signal processing methods; it may not be possible to accurately and consistently extract some of the behaviours of interest from acceleration data, even with the addition of more feature variables. Stationary behaviours during lactation Overall, static acceleration and its subsequent components were considered the most important features for discriminating behaviour. Rest and Presenting/Nursing were among the best classified on both head- and torso-mounted accelerometers (Precision of 69–75% and 72–80%, respectively, and Recall of 76–93% and 19–56%, respectively). These behaviours involve extensive periods of little to no movement, with only periodic adjustments of body position lasting for brief periods (e.g. Comfort Movements). Resting, like other static behaviours, is often the most easily identifiable behaviour, as found in a variety of taxa through accelerometry [70, 72, 73]. Rest and Presenting/Nursing behaviours represent the key trade-off in energy conservation in lactating phocids, maximizing the transfer of finite energy stores to the pup [39, 74,75,76,77,78].
Rest and Presenting/Nursing represent most (65–90%) of a female grey seal's activity budget in the wild [38, 79,80,81]. In the current study, these two behaviours represented almost half of the testing data (Table 2). As capital breeders, grey seal mothers do not return to sea to forage and supplement their energy stores [82]. Resting often seems to be viewed in ethology as the leftover period of a behavioural activity budget. Grey seals of both sexes must budget time spent resting in order to maximize their energy allocation to breeding [39, 83, 84]. For male grey seals, increasing time spent resting may extend tenure within a key breeding territory as they may spend several weeks on the colony without supplemental energy income [85]. A key aim of many studies of lactating phocids is to track the energetics of reproduction. While Rest can be variable in overall body positioning in grey seals, Presenting/Nursing is stereotypical as indicated by its relatively high repeatability, with females alternating regularly between lying on the right or left side to maximize access to both teats as indicated by the wide range of the static acceleration signal across years (Additional file 4). Maternal expenditure during lactation is most accurately quantified by the fat and protein content of milk, overall milk output, or enzyme activity levels as an indication of the female's ability to mobilize fat [82, 86, 87]. These previous studies often involved many repeated sampling events over the lactation period that potentially cause disturbance to both the female and her pup. When repeated physiological samples are unavailable, researchers often calculate mass transfer efficiency by measuring the ratio of the amount of maternal mass lost to the mass gained by the pup based on two capture events at the beginning and end of lactation [39]. Accelerometers may give a useful behavioural estimate of maternal effort in nursing to compare across populations, especially with respect to topographical considerations, tidal effects, or the effect of disturbance. While not directly useable as a measure of discrete energy transfer between females and pups, this behaviour may only be a useful indication of energetic differences relating to extreme outliers of low mass transfer efficiencies. The stationary pup interaction in the form of Flippering pup was also classified well, irrespective of accelerometer sampling protocols. This behaviour also had the lowest calculated inter-individual variability and the highest significant repeatability score with respect to body position. While many other pup-directed behaviours can be identified through conventional behavioural observation, this was the only other maternal behaviour that was reliably classified outside of Presenting/Nursing. Similar to Presenting/Nursing, females often engage in Flippering pup behaviour while lying on one side or the other, repeatedly stroking or scratching the pup. While this behaviour involves a similar body position to that of Presenting/Nursing or Rest, there is a slight average increase in the frequency associated with the x-axis of movement with this behaviour, making it relatively stereotypical in feature space. As this behaviour is often observed preceding nursing events, this may be an important tool for further assessing patterns in maternal care. 
Interestingly, some females appear to be selective in choosing which side to lie on, likely using their opposite front flipper to stroke the pup, as indicated by the saturation of Roll towards one side or the other (right-side preference significant in four females; left-side preference significant in five females; Fig. 5). Our definition of Flippering pup likely describes a broad class of movement, and may contain differences in flippering associated with a positive affective state, generally preceding a nursing event, or with a negative affective state, such as stimulating a pup to move away from a threat source. It is likely that we would find stronger side preferences in this behaviour associated with these different affective states. These results add to a growing body of evidence for preferential lateralization in mammals, in humans and other species alike [88,89,90]. While we could detect no bias in Presenting/Nursing towards lying on the left or right, our result indicates that some grey seal females may exhibit a preference towards left-handed flippering of the pup irrespective of affective state, which is consistent with research indicating that this keeps the pup in the left eye's visual field, allowing processing by the right hemisphere of the brain, which is associated with kin recognition and threat recognition in mammals [88, 89, 91,92,93]. This intriguing evidence of handedness in female grey seals should be built upon by detailed studies of behaviour to assess the degree of lateralization in other non-nursing mother–pup interactions and social contexts. Vigilance during lactation We were able to classify a single broad vigilance category well from accelerometry data when sampled at a high rate (Precision 64% and Recall 76% for 2015). Alert behaviour, even when the head is moving periodically to scan for threats, often involves many intermittent pauses of relative stillness. What an ethologist might traditionally classify as a single bout of vigilance or alert behaviour over a period of 1 min, an accelerometer might instead characterize as short periods of detectable movement, accurately classified as Alert, interspersed with short periods of data that resemble Rest. Given the fine-scale resolution of second-by-second behaviour, Alert may therefore not be recoverable as a single state lasting several seconds or minutes. In fact, Alert behaviours were most often mistaken for Rest. Some degree of post hoc thresholding might be necessary to improve the derivation of time-activity budgets of states over time. Vigilance has been studied extensively in a variety of terrestrial species [44, 46, 48, 94]. Understanding how individuals allocate time (and consequently energy) to vigilance has been a major topic of study in behavioural ecology. Often in ungulates and other prey species, this represents a trade-off associated with balancing time foraging and acquiring energy (head-down) and looking out for potential sources of danger (head-up; [21, 49, 68]). Studying the functions of vigilance has led to insight into the evolution of group living and predator–prey dynamics (e.g. [95, 96]). Even predators must balance vigilance activity, dividing attention between threats and prey items alike [46, 47]. Grey seals, too, must balance the time that they spend vigilant watching for threats to their young, though we are only able to comment on the amount of time spent in a general state of Alert.
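One simple form of the post hoc thresholding suggested above would be to smooth the per-second random forest predictions with a rolling majority vote, so that one-second flickers are absorbed into the surrounding bout. The sketch below is purely illustrative and not part of the original analysis; the window length and the input vector pred (per-second predicted states) are assumptions.

```r
## Rolling majority vote over a k-second window (illustrative only)
rolling_mode <- function(x, k = 15) {
  half <- floor(k / 2); n <- length(x); out <- x
  for (i in seq_len(n)) {
    window <- x[max(1, i - half):min(n, i + half)]
    tab <- table(window)
    out[i] <- names(tab)[which.max(tab)]   # most frequent state around second i
  }
  out
}

pred     <- sample(c("Rest", "Alert"), 120, replace = TRUE, prob = c(0.7, 0.3))
smoothed <- rolling_mode(pred, k = 15)
table(raw = pred, smoothed = smoothed)     # isolated flickers are reassigned
```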
With no indication of context, it is impossible to comment on the functionality of accelerometer-derived vigilance activity. Most terrestrial studies evaluating vigilance have used collar-mounted accelerometers [97, 98]. Other types of Alert or even social and aggressive behaviours and contexts may be better classified with the placement of an accelerometer in a location with a greater variety of postural dynamics, such as being glued on to the neck behind the head. The extraction of context-specific types of alert behaviours may allude to fine-scale decision-making during this sensitive period of development for mother and pup. Phocid locomotion on land Perhaps surprisingly, Locomotion was not well classified in our grey seals on land. Identifying modes of locomotion is a popular aim in the accelerometry literature, from flight to running to swimming [16, 65, 99, 100]. Locomotion types are often bounded by various biomechanical pressures that limit their interpretation [101, 102] and are easily identifiable and separable by their spectral densities and frequencies [70]. In other pinnipeds, differences in at-sea locomotion detected with tags mounted along the dorsal midline, often expressed as stroke frequency, are used as a reliable indicator of energetic expenditure [67]. Often, as in this study, frequency and spectral density elements are extracted using a fast Fourier transform [103]. This transform assumes that the signal is stable in time and space in order to dissolve it into its spectral elements [62, 71]. Behaviours like swimming in marine mammals are often stable and can last over many minutes or hours. However, if a signal is too brief or inconsistent in execution, this transform is not likely to accurately detect changes in frequency and power; the signal may be missed entirely. In the case of grey seals on land, locomotion is typically brief as females tend to stay within a few body lengths of their pups, with only the rare long-distance trip to a pool of water [41, 42]. In total, Locomotion only comprises about 1% of a female's activity budget, even across different seal breeding colonies where topographical differences may alter locomotory needs (e.g. [3, 81]). Generally grey seals appear to limit the time spent locomoting, likely as a mechanism for conserving energy and to avoid being away from offspring [38]. Female grey seals must prioritize maximizing energy stores upon arrival to a breeding colony to maintain themselves and nourish their pup during lactation while fasting [52]. While Locomotion was clearly present within the accelerometry signal upon visual inspection, with individual 'steps' visible, it generally was missed entirely by our classification algorithms as indicated by a high precision (92%) and extremely low recall (5.4%) when sampled at the highest rate in 2015. In addition to being brief, grey seal Locomotion on land may not be stereotypical enough to accurately classify when moving over short distances as females will often alternate between vigilance and directed movement, as well as being able to locomote while still on their side. Even though PSD was an important predictor of behavioural classification in the current study, Locomotion was only identifiable in head-mounted accelerometer deployments and was often confused with Alert or Rest behaviours, but very poorly classified (Table 2). Seal locomotion on land, especially at slower speeds, is typically led by the head and forelimbs, rather than the centre of mass. 
This may explain in part why Locomotion was marginally better classified by the head-mounted accelerometers than by those on the torso. Accelerometers mounted on the torso but sampling at a higher rate, capturing more subtle movements, might accurately detect Locomotion and the associated energy usage on land, but would likely still suffer from the confounding effects discussed above. Limitations of accelerometry and individual differences Context-dependent and interaction behaviours were removed from classification as they were unidentifiable in feature space given our study design. Several studies have also identified confounding factors when classifying such contextual behaviours. One study on baboons found poor classification Precision and Recall when attempting to separate grooming behaviour when the individual was either the actor (grooming another) or the receiver (being groomed by another; [25]). Another study in captive elephants showed that although differences in affective state could be discriminated, acceleration needed to be sampled at extremely high levels (1000 Hz) in order to elucidate minute differences in postural dynamics [104]. Given the inherent trade-offs in battery longevity, storage capacity, and sampling rate, as well as best practice recommendations for tagging, it is unlikely that this type of highly sensitive measurement could yet be applicable in a wild setting. Torso-mounted accelerometers show promise in extracting key behaviours while seals are on land, though a higher sampling rate than was used here may be necessary to classify behaviours with greater Precision and Recall. In addition, a higher sampling rate may be able to highlight minute differences in postural dynamics that may improve the identification of contextual interactions in grey seals. Nevertheless, the resolution of behaviour identified in the current study is comparable to previous efforts to classify behaviour in various other vertebrates (e.g. [13, 23, 40, 59, 66]). When examining inter-annual differences in behavioural mechanics for repeat capture females, it was found that individual ID included as a random effect explained a relatively high amount of variance. We found that while there were clear inter-individual differences in certain behaviours, females were largely consistent within themselves between years. For comparison, we fitted random forests to individual seals and indeed found higher F1 values across the board for all behaviours. While building random forests for each individual certainly overcomes this inter-individual variability in behavioural mechanics, clearly apparent in Fig. 4, only a small subset of the individuals actually had enough training data to build a random forest for all 6 behaviours investigated here. One of our main aims in pooling data from all individuals was to increase the overall sample size of behavioural reference data, especially to overcome the difficulty of observing behaviour in a wild context without the use of captive surrogates. As with the results presented here, researchers must consider the trade-offs between data availability (in either a wild context or with captive surrogates) and random forest model accuracy (fitting to an individual or pooling data) within the context of the study at hand.
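A minimal R sketch of this pooled versus per-individual comparison is given below; it is an illustration only, not the authors' code. It assumes a hypothetical labelled data frame train_df with a factor column behaviour, a column id, and numeric feature columns, and uses the same 60/40 split and 500 trees described in the Methods.

```r
library(randomForest)

prf <- function(truth, pred) {               # per-class Precision, Recall, F1
  lev <- union(levels(factor(truth)), levels(factor(pred)))
  cm  <- table(factor(truth, levels = lev), factor(pred, levels = lev))
  tp  <- diag(cm)
  precision <- tp / pmax(colSums(cm), 1)
  recall    <- tp / pmax(rowSums(cm), 1)
  f1        <- 2 / (1 / precision + 1 / recall)
  data.frame(behaviour = lev, precision, recall, f1, row.names = NULL)
}

feat_cols <- setdiff(names(train_df), c("behaviour", "id"))

## Pooled model: 60/40 split, 500 trees
idx       <- sample(nrow(train_df), size = round(0.6 * nrow(train_df)))
rf_pooled <- randomForest(x = train_df[idx, feat_cols],
                          y = train_df$behaviour[idx], ntree = 500)
pooled_scores <- prf(train_df$behaviour[-idx],
                     predict(rf_pooled, train_df[-idx, feat_cols]))
head(sort(importance(rf_pooled)[, "MeanDecreaseGini"], decreasing = TRUE))

## Per-individual models, fitted only for seals with every behaviour represented
per_seal_scores <- Filter(Negate(is.null),
  lapply(split(train_df, train_df$id), function(d) {
    if (length(unique(d$behaviour)) < nlevels(train_df$behaviour)) return(NULL)
    i  <- sample(nrow(d), size = round(0.6 * nrow(d)))
    rf <- randomForest(x = d[i, feat_cols], y = droplevels(d$behaviour[i]), ntree = 500)
    prf(d$behaviour[-i], predict(rf, d[-i, feat_cols]))
  }))
```

In practice only a minority of seals carried enough labelled data to fit all six behaviours individually, which is why the pooled model, and the high between-individual variance it must absorb, remained the primary analysis.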
While the exact reason for such a high amount of variance is unclear, differences in substrate within and among study locations on the colony likely contributed to inter-individual differences and may have confounded classification of behaviour from accelerometers, even when every effort is made to tag the same individuals. Care should be taken in future work to consider the overall effect of individual variability, especially associated with the surrounding context, when classifying behaviour using accelerometers (e.g. [40]). Several other studies have pointed out the potential confounding effects of environment in dictating the overall body position of an individual [54, 99]. Static acceleration was one of the most important predictors of behaviour in the favoured random forest model classifying our 6 behavioural states. While female grey seals tended to return to similar locations on the colony between years, the topography of the island is highly variable and has already been shown to be an important consideration in the behaviour of this species [3, 50, 79]. It is unclear how or whether the effect of topography on body position and dynamic movement can be addressed or corrected for without the application of more sensors to model movement within quantified fine-scale topography, such as the addition of magnetometers and GPS (e.g. [105]). Individuals did vary significantly within themselves with respect to Presenting/Nursing within static components of acceleration. Rather than being a mechanistic error, this likely indicates an attempt by females to maximize access to milk, ensuring the pup has fairly equal access to nipples during suckling bouts. Separating left and right side Presenting/Nursing may improve classification. In addition, it is more than likely that higher Precision and Recall might be achieved if the behaviours were defined exclusively by their mechanics. This would, however, be at the risk of losing what little contextual information is contained in the behaviours that were attempted to be classified, which, arguably, is key to understanding the functions of such behaviours. Head-mounted accelerometers were better able to identify rare behaviours using random forest models when sampling at a higher frequency than accelerometers sampling at a lower frequency. Accelerometers placed on the centre of gravity appear to show promise in extracting a number of key behavioural states during lactation and would likely benefit from a higher sampling rate than tested here. Grey seals often remain inactive for long periods of time during lactation to conserve resources. Most of the movement is therefore limited to head movement or postural changes for nursing. While we achieved a coarse level of behavioural resolution, it might be recommended to place accelerometers on the neck of breeding grey seals to access the greatest changes in position and postural dynamics, if additional sensor data are not possible. States identified using torso mounted accelerometers may be more important in quantifying differences in energetic expenditure. Improved accuracy could be achieved by attempting to classify fewer behaviours that are defined exclusively by their mechanics, but at potential loss of contextual and social information. It has also been shown that individuals may vary in the execution of behaviours in a wild context, supporting previous work that has flagged discrepancies within training data sets. 
Future work should consider this when training a classification algorithm using only a handful of animals, as this may lead to poor detection in subsequent deployments. It is our hope that the results presented here may inform work on classifying behaviour during lactation in other phocid seals. Study animals and accelerometer deployments This study focused on lactating adult female grey seals on the Isle of May in Scotland (56.1°N, 2.55°W), located in the outer Firth of Forth and managed by Scottish Natural Heritage as a National Nature Reserve. Adult female grey seals typically begin to arrive on the island in early October to pup and mate, with density peaking around mid-November and slowly declining until mid-December [106]. Adult female grey seals were sampled both early and late in their approximately 18-day lactation period [39, 106, 107]. Accelerometer attachment took place at the initial sampling event, with removal at the final handling event. Fifty-three female grey seals were equipped with small data-logging accelerometers (AXY-Depth, TechnoSmart Europe, Italy) during the core of the lactation period (10.7 ± 2.7 days) for the 2015 and 2016 breeding seasons (n = 11 females recaptured in successive breeding seasons). All individuals during the 2015 and 2016 seasons were equipped with an accelerometer mounted on the head, while 10 individuals in the 2016 season were additionally equipped with an accelerometer on the torso, mounted roughly between the shoulder blades (Fig. 1). Tags were housed in custom-designed ballistic nylon pouches attached onto dry pelage using superglue (Loctite, formula 422; Fig. 1). Due to an unforeseen glitch in the firmware of the accelerometers, sampling rates differed between seasons (50 Hz in 2015; 25 Hz in 2016). These sampling rates allowed us to capture a seal's fastest movements, which last between 0.5 and 1 s (e.g. head lunges associated with intraspecific interactions). Derivation of accelerometry features Acceleration signals were processed to derive 33 separate feature variables measured in all three axes of movement X, Y, and Z [54, 55]. Static acceleration (stX–Z), the gravitational component indicating position and posture in each axis of movement, was calculated using a moving average filter over a 3 s overlapping window, or 150 data points when sampled at 50 Hz (75 data points at 25 Hz; [17, 54,55,56]). Dynamic acceleration (dyX–Z), the component due to movement and posture dynamics of an individual, was then calculated by subtracting the static component from the raw acceleration in each axis [23, 54, 55, 57]. Partial dynamic body acceleration (PBDAx–z) was calculated as the absolute value of dynamic acceleration in each axis [40, 58, 59]. Overall dynamic body acceleration (ODBA) and vectorial dynamic body acceleration (VeDBA) were also calculated as $$\begin{aligned} \text{ODBA} &= \left|\text{dy}X\right| + \left|\text{dy}Y\right| + \left|\text{dy}Z\right| \\ \text{VeDBA} &= \sqrt{\text{dy}X^{2} + \text{dy}Y^{2} + \text{dy}Z^{2}} \end{aligned}$$ We also included a smoothed vector of VeDBA (VeDBAs), derived as a 3-s running mean as with static acceleration [60, 61]. The ratio of VeDBA to PBDA was also included to capture the relative contribution of each axis of PBDA to the overall vector of movement [25]. The change in acceleration over time, the third derivative of position commonly referred to as jerk, was derived by taking the differential of each axis of acceleration.
We also calculated the norm of jerk by taking the square root of the sum of the squared differential of acceleration in each dimension, $${\text{norm}}\,{\text{jerk}} = f_{\text{s}} * \sqrt {\sum {\text{diff}}\left( A \right)^{2} }$$ where fs is the sampling frequency in Hz and A is each axis of acceleration as outlined in [18]. Pitch and Roll in radians were derived by taking the arcsine of static acceleration in the heave (dorso-ventral movement) and sway (lateral movement) axes, respectively [54]. Once derived, these attributes were summarized by their mean over a 1-s window in order to match video observation resolution. To characterize oscillations in dynamic body movement, elements of power spectral density and frequency were also calculated for each second of acceleration data using Fourier analysis using methodology laid out in [25]. A fast Fourier transform decomposes an acceleration signal and translates it from a time domain signal to a stationary frequency domain signal whereby elements of frequency and power (amplitude) can be extracted [62]. Traditional Fourier analysis assumes that the signal continues indefinitely. Therefore, to avoid potential issues of spectral leakage and to sample enough of a data window to capture, cyclical behaviours like Locomotion, spectral elements were calculated over a window spanning 1 s on either side of the current time point [62]. In order to summarize these windows, the first two maximum power spectral density peaks (PSD) were extracted along with their associated frequencies (Freq) in each axis of movement [25]. A summary list of feature variables can be found in Table 4. Time-matching behaviours and training data sets Over the deployment period, each individual was sampled for behaviour using a focal sampling approach for at least 3 dedicated sessions during daylight hours [1]. Videos were recorded using a digital high definition video recorder (Panasonic HC-V700 1920 × 1080 resolution with 46 × zoom; Panasonic Corp.) on a tripod from at least 50 m away. Video footage for all individuals and years were decoded in real-time by the lead author (CRS) according to the ethogram of behavioural states as listed in Table 1 at a resolution of 1 s. Approximately 10% of the video footage was re-watched to check consistency in behavioural decoding, resulting in average difference in cumulative time spent in each behaviour of about 5 s per video (approximately 0.07 ± 1.8% difference in the resulting activity budget), with moderate agreement (Cohen's kappa = 0.57). Concurrent sections of summarized attributes of acceleration data were extracted and time-matched to the 8 behavioural states to create a set of training data for each year and tag attachment type. Labelled data for 2015 head-mounted accelerometers totalled 45.7 h (nind = 29 individuals), while 2016 head- and torso-mounted accelerometers totalled 91.3 (nind = 24) and 65.7 h (nind = 10), respectively, averaging 7.36 ± 15.5 h of video footage for each behaviour across all years. The mean proportion of time spent in each behaviour from video footage (± standard deviation) is included in Table 1 for all study females. Random forests The random forest algorithm is a fairly recent development and extension of classification and regression trees [53]. Classification trees are typically built by assembling binary partitions along increasingly homogenous regions with respect to the desired classification [108]. 
These homogeneous splits, referred to as nodes, are continuously subdivided until there is no longer a decrease in the Gini impurity index, G (in this case, G approaches zero when a node contains a single behaviour): $$G = \sum_{i = 1}^{n} p_{i}\left(1 - p_{i}\right)$$ where n is the number of behavioural classes and $p_{i}$ is the proportion of each class in a set of observations. The random forest algorithm fits many of these classification trees to a data set, combining predictions from all trees to classify new data [25, 53, 108]. First, a training data set is sampled randomly with replacement, resulting in several bootstrapped samples. With each of these simulated data sets, the model grows one tree to classify the observations into different classes, or behaviours, by hierarchical decision-making down each node [53, 108]. This algorithm utilizes bootstrapped samples from the original data set to grow each individual tree, using a random selection of predictor variables, or in this case accelerometry features, to partition the data. Out-of-bag observations, those observations not included in each bootstrapped sample, are then used to calculate model accuracies and error rates, which are then averaged across all trees. Random forests offer a great number of iterations, in the form of the number of trees grown, and several layers of randomness in order to build a robust and powerful tool for classification of new data, while limiting overfitting and problems associated with unbalanced data sets, as we might find in a seal's activity budget where rest often dominates (e.g. [25, 38]). Random forests also have the advantage of allowing for the assessment of variable importance, by way of the difference between the Gini index value at a parent node and the Gini index values at the two subsequent daughter nodes for each feature variable. For this machine learning algorithm, the data were split into 60/40% training and testing sets and 500 trees were grown using the 'randomForest' package in R [109]. Classification and assessment of random forests To compare model performance in each of the machine learning algorithms used in this study, Precision, Recall, and the F1 statistic were calculated from the resulting confusion matrices as produced from each of the cross-validations used with the testing data sets. Following cross-validation, resulting values of true positives (correctly classified positive values, TP), false positives (incorrectly classified positive values, FP), and false negatives (incorrectly classified values that were negative, FN) for each behavioural category were used to calculate Precision, Recall, and F1 [110]. Precision, also referred to as the true positive accuracy, was defined as the proportion of positive behavioural classifications that were correct [57], and was calculated as $$\text{Precision} = \frac{\text{TP}}{\text{TP} + \text{FP}}$$ Recall, also known as sensitivity, was defined as the proportion of new data pertaining to behaviours that were correctly classified as positive [57] and was calculated as $$\text{Recall} = \frac{\text{TP}}{\text{TP} + \text{FN}}$$ The F1 statistic, the harmonic mean of Precision and Recall, was used as a metric of overall performance for each behavioural classification category [110].
F1 was calculated as $$F1 = \frac{2}{\frac{1}{\text{Precision}} + \frac{1}{\text{Recall}}}$$ Values closer to 1 for all metrics stated above represent better model performance. Model creation and validation were performed separately for the 2015 and 2016 seasons, as well as separately for head-mounted and torso-mounted accelerometers (2016 only), resulting in 3 separate random forest models. Variable importance plots for the random forest models were also examined. Mechanics of behaviour The repeatability of the mechanics of behaviour, with respect to the features that were found to be most important in random forest model building, was also assessed across seasons for repeat capture females (nind = 11), something that is rarely available in non-captive individuals. Due to an unforeseen malfunction in the firmware of the accelerometers, loggers had to sample at a lower rate in 2016, as previously mentioned. To achieve equivalent sampling rates between seasons, the 2015 accelerometry data were down-sampled by half to match the 2016 accelerometry data. Generalized linear mixed effects models were built to predict the top feature variables that were deemed most relevant for each behaviour. Individual ID and year were included as random effects in the model. To account for potential changes in cost of transport between years, individual estimated post-partum masses were added as a fixed effect in R (package 'nlme'; [111]). Variance and repeatability estimates associated with individual ID and year were calculated using the 'rptR' package [63] over 1000 bootstrapped samples. As a result of the inclusion of a fixed effect in this model, all repeatability measures are adjusted-R (adj.-R) as per [63]. Significance of repeatability was assessed within the package through a likelihood ratio test comparing against a model without the random effect. stX, stY, stZ: static acceleration in each X-, Y-, and Z-axis; dyX, dyY, dyZ: dynamic acceleration in each X-, Y-, and Z-axis; PBDA: partial dynamic body acceleration; ODBA: overall dynamic body acceleration; VeDBA, VeDBAs: vectorial dynamic body acceleration (raw and smoothed); PSD: power spectrum density; Freq: frequency associated with the maximum PSD peaks; TP: true positives; FP: false positives; FN: false negatives; adjusted-R: adjusted repeatability. Altmann J. Observational study of behavior: sampling methods. Behaviour. 1974;49:227–67. Witter LA, Johnson CJ, Croft B, Gunn A, Gillingham MP. Behavioural trade-offs in response to external stimuli: time allocation of an Arctic ungulate during varying intensities of harassment by parasitic flies. J Anim Ecol. 2012;81:284–95. Anderson SS, Harwood J. Time budgets and topography: how energy reserves and terrain determine the breeding behaviour of grey seals. Anim Behav. 1985;33:1343–8. Bishop AM, Lidstone-Scott R, Pomeroy P, Twiss SD. Body slap: an innovative aggressive display by breeding male gray seals (Halichoerus grypus). Mar Mamm Sci. 2014;30:579–93. Culloch RM, Pomeroy PP, Twiss SD. The difference between night and day: the nocturnal and diurnal activity budget of gray seals (Halichoerus grypus) during the breeding season. Mar Mamm Sci. 2016;32:400–8. Anderson SS. Day and night activity of Grey seal bulls. Mamm Rev. 1978;8:43–6. Briffa M, Greenaway J. High in situ repeatability of behaviour indicates animal personality in the beadlet anemone Actinia equina (Cnidaria). PLoS ONE. 2011;6:e21963. McGhee KE, Travis J. Repeatable behavioural type and stable dominance rank in the bluefin killifish. Anim Behav. 2010;79:497–507. Twiss SD, Franklin J.
Individually consistent behavioural patterns in wild, breeding male grey seals (Halichoerus grypus). Aquat Mamm. 2010;36:234–8. Arthur B, Hindell M, Bester MN, Oosthuizen WC, Wege M, Lea MA, et al. South for the winter? Within-dive foraging effort reveals the trade-offs between divergent foraging strategies in a free-ranging predator. Funct Ecol. 2016;30:1623–37. Yeates LC, Williams TM, Fink TL. Diving and foraging energetics of the smallest marine mammal, the sea otter (Enhydra lutris). J Exp Biol. 2007;210:1960–70. Davis RW, Fuiman LA, Madden KM, Williams TM. Classification and behavior of free-ranging Weddell seal dives based on three-dimensional movements and video-recorded observations. Deep Sea Res Part II Top Stud Oceanogr. 2013;88–89:65–77. Battaile BC, Sakamoto KQ, Nordstrom CA, Rosen DAS, Trites AW. Accelerometers identify new behaviors and show little difference in the activity budgets of lactating northern fur seals (Callorhinus ursinus) between breeding islands and foraging habitats in the eastern Bering Sea. PLoS ONE. 2015;10:e0118761. Jeanniard-du-dot T, Guinet C, Arnould JPY, Speakman JR, Trites AW. Accelerometers can measure total and activity-specific energy expenditures in free-ranging marine mammals only if linked to time-activity budgets. Funct Ecol. 2016;31:377–86. McClintock BT, Russell DJF, Matthiopoulos J, King R. Combining individual animal movement and ancillary biotelemetry data to investigate population-level activity budgets. Ecology. 2013;94:838–49. Jeanniard-du-dot T, Trites AW, Arnould JPY, Speakman JR, Guinet C. Flipper strokes can predict energy expenditure and locomotion costs in free-ranging northern and Antarctic fur seals. Sci Rep. 2016;6:33912. Skinner JP, Norberg SE, Andrews RD. Head striking during fish capture attempts by Steller sea lions and the potential for using head surge acceleration to predict feeding behavior. Endanger Species Res. 2010;10:61–9. Ydesen KS, Wisniewska DM, Hansen JD, Beedholm K, Johnson M, Madsen PT. What a jerk: prey engulfment revealed by high-rate, super-cranial accelerometry on a harbour seal (Phoca vitulina). J Exp Biol. 2014;217:2239–43. Watanabe YY, Takahashi A. Linking animal-borne video to accelerometers reveals prey capture variability. Proc Natl Acad Sci USA. 2013;110:2199–204. Viviant M, Trites AW, Rosen DAS, Monestiez P, Guinet C. Prey capture attempts can be detected in Steller sea lions and other marine predators using accelerometers. Polar Biol. 2010;33:713–9. Moreau M, Siebert S, Buerkert A, Schlecht E. Use of a tri-axial accelerometer for automated recording and classification of goats' grazing behaviour. Appl Anim Behav Sci. 2009;119:158–70. Soltis J, Wilson RP, Douglas-Hamilton I, Vollrath F, King LE, Savage A. Accelerometers in collars identify behavioral states in captive African elephants Loxodonta africana. Endanger Species Res. 2012;18:255–63. McClune DW, Marks NJ, Wilson RP, Houghton JDR, Montgomery IW, McGowan NE, et al. Tri-axial accelerometers quantify behaviour in the Eurasian badger (Meles meles): towards an automated interpretation of field data. Anim Biotelem. 2014;2:5. Lush L, Ellwood S, Markham A, Ward AI, Wheeler P. Use of tri-axial accelerometers to assess terrestrial mammal behaviour in the wild. J Zool. 2016;298:257–65. Fehlmann G, O'Riain MJ, Hopkins PW, O'Sullivan J, Holton MD, Shepard ELC, et al. Identification of behaviours from accelerometer data in a wild social primate. Anim Biotelem. 2017;5:6. https://doi.org/10.1186/s40317-017-0121-3. 
Pagano AM, Rode KD, Cutting A, Owen MA, Jensen S, Ware JV, et al. Using tri-axial accelerometers to identify wild polar bear behaviors. Endanger Species Res. 2017;32:19–33. Hokkanen AH, Hänninen L, Tiusanen J, Pastell M. Predicting sleep and lying time of calves with a support vector machine classifier using accelerometer data. Appl Anim Behav Sci. 2011;134:10–5. Grünewälder S, Broekhuis F, Macdonald DW, Wilson AM, McNutt JW, Shawe-Taylor J, et al. Movement activity based classification of animal behaviour with an application to data from cheetah (Acinonyx jubatus). PLoS ONE. 2012;7:1–11. Joseph J, Torney C, Kings M, Thornton A, Madden J. Applications of machine learning in animal behaviour studies. Anim Behav. 2017;124:203–20. Ladds MA, Thompson AP, Kadar J-P, Slip DJ, Hocking DP, Harcourt RG. Super machine learning: improving accuracy and reducing variance of behaviour classification from accelerometry. Anim Biotelem. 2017;5:8. https://doi.org/10.1186/s40317-017-0123-1. Brown DD, Kays R, Wikelski M, Wilson RP, Klimley AP. Observing the unwatchable through acceleration logging of animal behavior. Anim Biotelem. 2013;1:1–16. Nathan R, Spiegel O, Fortmann-Roe S, Harel R, Wikelski M, Getz WM. Using tri-axial acceleration data to identify behavioral modes of free-ranging animals: general concepts and tools illustrated for griffon vultures. J Exp Biol. 2012;215:986–96. Nishizawa H, Noda T, Yasuda T, Okuyama J, Arai N, Kobayashi M. Decision tree classification of behaviors in the nesting process of green turtles (Chelonia mydas) from tri-axial acceleration data. J Ethol. 2013;31:315–22. Banerjee D, Biswas S, Daigle C, Siegford JM. Remote activity classification of hens using wireless body mounted sensors. In: Proceedings of BSN 2012 9th international work wearable implant body sensor networks. 2012. pp. 107–12. Campbell HA, Gao L, Bidder OR, Hunter J, Franklin CE. Creating a behavioural classification module for acceleration data: using a captive surrogate for difficult to observe species. J Exp Biol. 2013;216:4501–6. Dalton AJM, Rosen DAS, Trites AW. Season and time of day affect the ability of accelerometry and the doubly labeled water methods to measure energy expenditure in northern fur seals (Callorhinus ursinus). J Exp Mar Biol Ecol. 2014;452:125–36. Lydersen C, Kovacs KM. Behaviour and energetics of ice-breeding, North Atlantic phocid seals during the lactation period. Mar Ecol Prog Ser. 1999;187:265–81. Kovacs KM. Maternal behaviour and early behavioural ontogeny of grey seals (Halichoerus grypus) on the Isle of May, UK. J Zool. 1987;213:697–715. https://doi.org/10.1111/j.1469-7998.1987.tb03735.x. Pomeroy PP, Fedak MA, Rothery P, Anderson S. Consequences of maternal size for reproductive expenditure and pupping success of grey seals at North Rona, Scotland. J Anim Ecol. 1999;68:235–53. Ladds MA, Thompson AP, Slip DJ, Hocking DP, Harcourt RG. Seeing it all: evaluating supervised machine learning methods for the classification of diverse otariid behaviours. PLoS ONE. 2016;11:1–17. Twiss SD, Wright NC, Dunstone N, Redman P, Moss S, Pomeroy PP. Behavioral evidence of thermal stress from overheating in UK breeding gray seals. Mar Mamm Sci. 2002;18:455–68. Stewart JE, Pomeroy PP, Duck CD, Twiss SD. Finescale ecological niche modeling provides evidence that lactating gray seals (Halichoerus grypus) prefer access to fresh water in order to drink. Mar Mamm Sci. 2014;30:1456–72. https://doi.org/10.1111/mms.12126. Twiss SD, Culloch R, Pomeroy PP. 
An in-field experimental test of pinniped behavioral types. Mar Mamm Sci. 2012;28:E280–94. Burger J, Gochfeld M. Vigilance in African mammals: differences among mothers, other females, and males. Behaviour. 1994;131:153–69. Yorzinski JL, Chisholm S, Byerley SD, Coy JR, Aziz A, Wolf JA, et al. Artificial light pollution increases nocturnal vigilance in peahens. Peer J. 2015. https://doi.org/10.7717/peerj.1174. Caro TM. Cheetah mothers' vigilance: looking out for prey or for predators? Behav Ecol Sociobiol. 1987;20:351–61. Pangle WM, Holekamp KE. Functions of vigilance behaviour in a social carnivore, the spotted hyaena, Crocuta crocuta. Anim Behav. 2010;80:257–67. https://doi.org/10.1016/j.anbehav.2010.04.026. Arenz CL, Leger DW. Thirteen-lined ground squirrel (Sciuridae: Spermophilus tridecemlineatus) antipredator vigilance decreases as vigilance cost increases. Anim Behav. 1999;57:97–103. Kölzsch A, Neefjes M, Barkway J, Müskens GJDM, van Langevelde F, de Boer WF, et al. Neckband or backpack? Differences in tag design and their effects on GPS/accelerometer tracking results in large waterbirds. Anim Biotelem. 2016;4:13. Pomeroy PP, Anderson SS, Twiss SD, McConnell BJ. Dispersion and site fidelity of breeding female grey seals (Halichoerus grypus) on North Rona, Scotland. J Zool. 1994;233:429–47. Tinker MT, Kovacs KM, Hammill MO. The reproductive behavior and energetics of male gray seals (Halichoerus grypus) breeding on a land-fast ice substrate. Behav Ecol Sociobiol. 1995;36:159–70. Anderson SS, Fedak MA. Grey seal, Halichoerus grypus, energetics: females invest more in male offspring. J Zool. 1987;211:667–79. Cutler DR, Edwards TC, Beard KH, Cutler A, Hess KT, Gibson J, et al. Random forests for classification in ecology. Ecology. 2007;88:2783–92. Shepard ELC, Wilson RP, Quintana F, Laich AG, Liebsch N, Albareda DA, et al. Identification of animal movement patterns using tri-axial accelerometry. Endanger Species Res. 2008;10:47–60. Wilson RP, White CR, Quintana F, Halsey LG, Liebsch N, Martin GR, et al. Moving towards acceleration for estimates of activity-specific metabolic rate in free-living animals: the case of the cormorant. J Anim Ecol. 2006;75:1081–90. Taylor FJ. Principles of signals and systems. New York: McGraw-Hill Book Co.; 1994. Bidder OR, Campbell HA, Gómez-Laich A, Urgé P, Walker J, Cai Y, et al. Love thy neighbour: automatic animal behavioural classification of acceleration data using the k-nearest neighbour algorithm. PLoS ONE. 2014;9:e88609. Green JA, Halsey LG, Wilson RP, Frappell PB. Estimating energy expenditure of animals using the accelerometry technique: activity, inactivity and comparison with the heart-rate technique. J Exp Biol. 2009;212:471–82. Fossette S, Gleiss AC, Myers AE, Garner S, Liebsch N, Whitney NM, et al. Behaviour and buoyancy regulation in the deepest-diving reptile: the leatherback turtle. J Exp Biol. 2010;213:4074–83. Gómez Laich A, Wilson RP, Gleiss AC, Shepard ELC, Quintana F. Use of overall dynamic body acceleration for estimating energy expenditure in cormorants. Does locomotion in different media affect relationships? J Exp Mar Biol Ecol. 2011;399:151–5. Gleiss AC, Wilson RP, Shepard ELC. Making overall dynamic body acceleration work: on the theory of acceleration as a proxy for energy expenditure. Methods Ecol Evol. 2011;2:23–33. Yost M, Cooper RA, Bremner FJ. Fourier analyses: a mathematical and geometric explanation. Behav Res Methods Instrum. 1983;15:258–61. Stoffel MA, Nakagawa S, Schielzeth H. 
rptR: repeatability estimation and variance decomposition by generalized linear mixed-effects models. Methods Ecol Evol. 2017;8:1639–44. R Core Team. R: a language and environment for statistical computing [Internet]. Vienna: R Foundation for Statistical Computing; 2016. https://www.r-project.org/. Spivey RJ, Bishop CM. Interpretation of body-mounted accelerometry in flying animals and estimation of biomechanical power. J R Soc Interface. 2013;10:20130404. Byrnes G, Lim NT-L, Yeong C, Spence AJ. Sex differences in the locomotor ecology of a gliding mammal, the Malayan colugo (Galeopterus variegatus). J Mammal. 2011;92:444–51. Williams TM, Fuiman LA, Horning M, Davis RW. The cost of foraging by a marine predator, the Weddell seal Leptonychotes weddellii: pricing by the stroke. J Exp Biol. 2004;207:973–82. https://doi.org/10.1242/jeb.00822. Alvarenga FAP, Borges I, Palkovic L, Rodina J, Oddy VH, Dobos RC. Using a three-axis accelerometer to identify and classify sheep behaviour at pasture. Appl Anim Behav Sci. 2016;181:91–9. Graf PM, Wilson RP, Qasem L, Hackländer K, Rosell F. The use of acceleration to code for animal behaviours; a case study in free-ranging Eurasian beavers Castor fiber. PLoS ONE. 2015;10:e0136751. Wang Y, Nickel B, Rutishauser M, Bryce C, Williams T, Elkaim G, et al. Movement, resting, and attack behaviors of wild pumas are revealed by tri-axial accelerometer measurements. Mov Ecol. 2015;3:2. Cadzow JA, Van Landingham HF. Signals, systems, and transforms. Englewood Cliffs: Prentice-Hall Inc.; 1985. Fossette S, Schofield G, Lilley MKS, Gleiss AC, Hays GC. Acceleration data reveal the energy management strategy of a marine ectotherm during reproduction. Funct Ecol. 2012;26:324–33. Portugal SJ, Green JA, Halsey LG, Arnold W, Careau V, Dann P, et al. Associations between resting, activity, and daily metabolic rate in free-living endotherms: no universal rule in birds and mammals. Physiol Biochem Zool. 2016;89:251–61. Mellish JE, Iverson SJ, Bowen WD, Hammill MO. Fat transfer and energetics during lactation in the hooded seal: the roles of tissue lipoprotein lipase in milk fat secretion and pup blubber deposition. J Comp Physiol B Biochem Syst Environ Physiol. 1999;169:377–90. Bowen WD, Oftedal OT, Boness DJ. Mass and energy transfer during lactation in a small phocid, the harbor seal (Phoca vitulina). Physiol Zool. 1992;65:844–66. Kovacs KM, Lavigne DM, Innes S. Mass transfer efficiency between harp seal (Phoca groenlandica) mothers and their pups during lactation. J Zool. 1991;223:213–21. Kovacs KM, Lavigne DM. Mass-transfer efficiency between hooded seal (Cystophora cristata) mothers and their pups in the gulf of St-Lawrence. Can J Zool. 1992;70:1315–20. McDonald BI, Crocker DE. Physiology and behavior influence lactation efficiency in northern elephant seals (Mirounga angustirostris). Physiol Biochem Zool Ecol Evol Approaches. 2006;79:484–96. Twiss SD, Caudron A, Pomeroy PP, Thomas CJ, Mills JP. Finescale topographical correlates of behavioural investment in offspring by female grey seals, Halichoerus grypus. Anim Behav. 2000;59:327–38. Twiss SD, Cairns C, Culloch RM, Richards SA, Pomeroy PP. Variation in female grey seal (Halichoerus grypus) reproductive performance correlates to proactive-reactive behavioural types. PLoS ONE. 2012;7:e49598. Robinson KJ, Twiss SD, Hazon N, Pomeroy PP. Maternal oxytocin is linked to close mother-infant proximity in grey seals (Halichoerus grypus). PLoS ONE. 2015;10:1–17. Mellish JE, Iverson SJ, Bowen WD. 
Variation in milk production and lactation performance in grey seals and consequences for pup growth and weaning characteristics. Physiol Biochem Zool. 1999;72:677–90. Bishop A, Pomeroy P, Twiss SD. Breeding male grey seals exhibit similar activity budgets across varying exposures to human activity. Mar Ecol Prog Ser. 2015;527:247–59. Sparling CE, Speakman JR, Fedak MA. Seasonal variation in the metabolic rate and body composition of female grey seals: fat conservation prior to high-cost reproduction in a capital breeder? J Comp Physiol B Biochem Syst Environ Physiol. 2006;176:505–12. Bishop AM, Stewart JE, Pomeroy P, Twiss SD. Intraseasonal temporal variation of reproductive effort for male grey seals. Anim Behav. 2017;134:167–75. https://doi.org/10.1016/j.anbehav.2017.10.021. Mellish J-AE, Iverson SJ, Bowen WD. Metabolic compensation during high energy output in fasting, lactating grey seals (Halichoerus grypus): metabolic ceilings revisited. Proc R Soc B Biol Sci. 2000;267:1245–51. Iverson SJ, Bowen WD, Boness DJ, Oftedal OT. The effect of maternal size and milk energy output on pup growth in grey seals (Halichoerus grypus). Physiol Zool. 1993;66:61–88. Hill HM, Guarino S, Calvillo A, Gonzalez A, Zuniga K, Bellows C, et al. Lateralized swim positions are conserved across environments for beluga (Delphinapterus leucas) mother–calf pairs. Behav Process. 2017;138:22–8. Karenina K, Giljov A, Ingram J, Rowntree VJ, Malashichev Y. Lateralization of mother-infant interactions in a diverse range of mammal species. Nat Ecol Evol. 2017;1:1–4. Giljov A, Karenina K, Malashichev Y. Facing each other: mammal mothers and infants prefer the position favouring right hemisphere processing. Biol Lett. 2018;14:20170707. https://doi.org/10.1098/rsbl.2017.0707. MacNeilage PF, Rogers LJ, Vallortigara G. Origins for the left & right brain. Sci Am. 2009;301:60–7. https://doi.org/10.1038/scientificamerican0709-60. Tommasi L, Vallortigara G. Hemispheric processing of landmark and geometric information in male and female domestic chicks (Gallus gallus). Behav Brain Res. 2004;155:85–96. Wendt PE, Risberg J. Cortical activation during spatial processing: relation between hemispheric asymmetry of blood flow and performance. Brain Cogn. 1994;24:87–103. Loughry WJ. Determinants of time allocation by adult and yearling black-tailed prairie dogs. Behaviour. 1993;124:23–43. Beauchamp G. Exploring the role of vision in social foraging: what happens to group size, vigilance, spacing, aggression and habitat use in birds and mammals that forage at night? Biol Rev. 2007;82:511–25. Willems EP, Hill RA. Predator-specific landscapes of fear and resource distribution: effects on spatial range use. Ecology. 2009;90:546–55. Martiskainen P, Järvinen M, Skön J-P, Tiirikainen J, Kolehmainen M, Mononen J. Cow behaviour pattern recognition using a three-dimensional accelerometer and support vector machines. Appl Anim Behav Sci. 2009;119:32–8. Signer C, Ruf T, Schober F, Fluch G, Paumann T, Arnold W. A versatile telemetry system for continuous measurement of heart rate, body temperature and locomotor activity in free-ranging ruminants. Methods Ecol Evol. 2010;1:75–85. Halsey LG. Terrestrial movement energetics: current knowledge and its application to the optimising animal. J Exp Biol. 2016;219:1424–31. Maresh JL, Adachi T, Takahashi A, Naito Y, Crocker DE, Horning M, et al. Summing the strokes: energy economy in northern elephant seals during large-scale foraging migrations. Mov Ecol. 2015;3:1–16. 
https://doi.org/10.1186/s40462-015-0049-2. King AM, Loiselle DS, Kohl P. Force generation for locomotion of vertebrates: skeletal muscle overview. IEEE J Ocean Eng. 2004;29:684–91. Schmidt-Nielsen K. Energy cost of swimming, flying, and running. Science. 1972;177:222–8. Watanabe S, Izawa M, Kato A, Ropert-Coudert Y, Naito Y. A new technique for monitoring the detailed behaviour of terrestrial animals: a case study with the domestic cat. Appl Anim Behav Sci. 2005;94:117–31. Wilson RP, Grundy E, Massy R, Soltis J, Tysse B, Holton M, et al. Wild state secrets: ultra-sensitive measurement of micro-movement can reveal internal processes in animals. Front Ecol Environ. 2014;12:582–7. Wilson RP, Shepard ELC, Liebsch N. Prying into the intimate details of animal lives: use of a daily diary on animals. Endanger Species Res. 2008;4:123–37. Pomeroy PP, Twiss SD, Duck CD. Expansion of a grey seal (Halichoerus grypus) breeding colony: changes in pupping site use at the Isle of May, Scotland. J Zool. 2000;250:1–12. Bennett KA, Speakman JR, Moss SEW, Pomeroy P, Fedak MA. Effects of mass and body composition on fasting fuel utilisation in grey seal pups (Halichoerus grypus Fabricius): an experimental study using supplementary feeding. J Exp Biol. 2007;210:3043–53. Breiman L. Random forests. Mach Learn. 1999;45:1–35. Liaw A, Wiener M. Classification and regression by randomForest. R News. 2002;3:18–22. Powers DMW. Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. J Mach Learn Technol. 2011;2:37–63. Pinheiro J, Bates D, DebRoy S, Sarkar D, Team RC. nlme: linear and nonlinear mixed effects models. R package version 3.1-117; 2014. CRS, SDT, and PPP conceived the study. SDT led the field work with CRS. CRS collected the data and performed the analyses with support from SDT and PPP. CRS wrote the paper with input from all co-authors. All authors read and approved the final manuscript. The authors wish to thank two anonymous reviewers for the improvement of this manuscript from earlier versions. The authors would like to acknowledge S. Moss for extensive help in the design and implementation of this project as well as coordinating field operations and animal handling along with the rest of the Isle of May field crew, especially M. Bivens, K. Bennett, K. Robinson, and H. Wood. We would also like to thank Z. Fraser and J. Wells for helping to collect the extensive video footage over the two field seasons. The data sets used and analysed during the current study are available from the corresponding author on reasonable request. All applicable international, national, and/or institutional guidelines for the care and use of animals were adhered to in this study. All animal procedures were performed under UK Home Office project license #60/4009 and conformed to the UK Animals (Scientific Procedures) Act, 1986. All research was approved ethically by the Durham University Animal Welfare Ethical Review Board as well as by the University of St. Andrews Animal Welfare and Ethics Committee. Funding for this work was provided by the Durham Doctoral Studentship scheme at Durham University and supported by Natural Environment Research Council's core funding to the Sea Mammal Research Unit at the University of St. Andrews. Department of Biosciences, Durham University, Durham, DH1 3LE, UK Courtney R. Shuert & Sean D. Twiss Scottish Oceans Institute, University of St. Andrews, St. Andrews, KY16 8LB, UK Patrick P. Pomeroy Search for Courtney R. Shuert in: Search for Patrick P. 
Correspondence to Courtney R. Shuert. Additional file 1. Random forest error plot for 2015 head-mounted accelerometers. Error plot from random forest models for classifying 6 behavioural states (x0: Rest; x2: Alert; x4: Presenting/Nursing; x5: Locomotion; x6: Comfort Movement; x7: Flippering pup) in 2015 head-mounted accelerometers (50 Hz) on female grey seals across 500 trees. Out-of-bag error estimates across the number of trees are shown as a dark purple line (OOB). Additional file 3. Random forest error plot for 2016 torso-mounted accelerometers. Error plot from random forest models for classifying 6 behavioural states (x0: Rest; x2: Alert; x4: Presenting/Nursing; x5: Locomotion; x6: Comfort Movement; x7: Flippering pup) in 2016 torso-mounted accelerometers (25 Hz) on female grey seals across 500 trees. Out-of-bag error estimates across the number of trees are shown as a dark purple line (OOB). Additional file 4. Summary of feature variables for grey seal behavioural states. Summary statistics for the top 5 most important feature variables from the 6 behavioural states classified using random forests on head-mounted acceleration data in 2015 (50 Hz). These top 5 variables were identified from the highest decrease in mean Gini. Feature variables are summarized by the median as well as the 1st and 3rd quartiles. Additional file 5. Full variable importance table for random forest model. Full variable importance table for the random forest model classifying 6 behavioural states in female grey seals, ordered by decreasing mean Gini for each feature variable. The top 10 most important feature variables are plotted in Fig. 3. Feature variable derivations can be found as a summary in Table 4. Shuert, C.R., Pomeroy, P.P. & Twiss, S.D. Assessing the utility and limitations of accelerometers and machine learning approaches in classifying behaviour during lactation in a phocid seal. Anim Biotelemetry 6, 14 (2018) doi:10.1186/s40317-018-0158-y Maternal behaviour Breeding behaviour Proceedings of the 6th Bio-Logging Science Symposium
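The additional files above summarise two random forest outputs: out-of-bag (OOB) error across 500 trees and variable importance measured by the mean decrease in Gini. The original analysis used the R randomForest package; the snippet below is only a minimal Python/scikit-learn analogue of that workflow, added here for orientation. The file name, feature columns and behaviour labels are hypothetical placeholders, not the study's actual data.

```python
# Minimal sketch (Python/scikit-learn analogue of an R randomForest workflow).
# File name, feature columns and label values are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# One row per summarised window of tri-axial acceleration features,
# plus a column of video-labelled behavioural states (Rest, Alert, ...).
df = pd.read_csv("seal_accel_features.csv")
X = df.drop(columns=["behaviour"])
y = df["behaviour"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)

# 500 trees, as in the error plots described above; oob_score=True makes the
# out-of-bag accuracy available after fitting (OOB error = 1 - oob_score_).
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=1)
rf.fit(X_train, y_train)

print("OOB error:", 1 - rf.oob_score_)
print(classification_report(y_test, rf.predict(X_test)))

# Feature importance: scikit-learn reports mean decrease in impurity,
# analogous to the mean-decrease-in-Gini tables in the additional files.
importance = pd.Series(rf.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False).head(10))
```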
Quantitative Biology > Quantitative Methods
Title: Learning diffusion coefficients, kinetic parameters, and the number of underlying states from a multi-state diffusion process: robustness results and application to PDK1/PKC$\alpha$ dynamics
Authors: Lewis R. Baker, Moshe T. Gordon, Brian P. Ziemba, Victoria Gershuny, Joseph J. Falke, David M. Bortz
(Submitted on 16 Jun 2022 (this version), latest version 17 Jun 2022 (v2))
Abstract: Systems driven by Brownian motion are ubiquitous. A prevailing challenge is inferring, from data, the diffusion and kinetic parameters that describe these stochastic processes. In this work, we investigate a multi-state diffusion process that arises in the context of single particle tracking (SPT), wherein the motion of a particle is governed by a discrete set of diffusive states, and the tendency of the particle to switch between these states is modeled as a random process. We consider two models for this behavior: a mixture model and a hidden Markov model (HMM). For both, we adopt a Bayesian approach to sample the distributions of the underlying parameters and implement a Markov Chain Monte Carlo (MCMC) scheme to compute the posterior distributions, as in Das, Cairo, Coombs (2009). The primary contribution of this work is a study of the robustness of this method to infer parameters of a three-state HMM, and a discussion of the challenges and degeneracies that arise from considering three states. Finally, we investigate the problem of determining the number of diffusive states using model selection criteria. We present results from simulated data that demonstrate proof of concept, as well as apply our method to experimentally measured single molecule diffusion trajectories of monomeric phosphoinositide-dependent kinase-1 (PDK1) on a synthetic target membrane where it can associate with its binding partner protein kinase C alpha isoform (PKC$\alpha$) to form a heterodimer detected by its significantly lower diffusivity. All matlab software is available here: \url{this https URL}
Subjects: Quantitative Methods (q-bio.QM); Biomolecules (q-bio.BM)
Cite as: arXiv:2206.07999 [q-bio.QM] (or arXiv:2206.07999v1 [q-bio.QM] for this version)
From: David Bortz
[v1] Thu, 16 Jun 2022 08:38:53 GMT (716kb,D) [v2] Fri, 17 Jun 2022 06:13:05 GMT (716kb,D)
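For orientation, the following is a minimal sketch of the forward model described in the abstract (it is not the authors' MATLAB software): a hidden two-state Markov chain selects the diffusive state at each frame, and the observed displacements are Gaussian with a state-dependent variance. All parameter values are illustrative.

```python
# Simulate a 2-D single-particle trajectory whose diffusion coefficient switches
# between two hidden states according to a discrete-time Markov chain.
import numpy as np

rng = np.random.default_rng(0)

dt = 0.02                      # frame interval (s), illustrative
D = np.array([0.5, 3.0])       # per-state diffusion coefficients (um^2/s), illustrative
P = np.array([[0.95, 0.05],    # state transition matrix (rows sum to 1)
              [0.10, 0.90]])
n_steps = 2000

states = np.empty(n_steps, dtype=int)
states[0] = 0
for t in range(1, n_steps):
    states[t] = rng.choice(2, p=P[states[t - 1]])

# Each displacement coordinate is Gaussian with variance 2*D*dt (the HMM "emission").
sigma = np.sqrt(2.0 * D[states] * dt)
steps = rng.normal(scale=sigma[:, None], size=(n_steps, 2))
positions = np.cumsum(steps, axis=0)

# A likelihood for Bayesian/MCMC inference would combine these Gaussian step
# densities with the transition probabilities of the hidden state sequence.
```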
Why is the Lagrangian density correct?
The textbooks I have available explain that due to the infinite degrees of freedom of a field, the relevant object in QFT is the Lagrangian density. A Lagrangian is then obtained for the field by integrating over space. I find the justification for this procedure unclear. In classical mechanics, the Lagrangians of two particles may be added only if the particles do not interact. Does it mean that the Lagrangian density concept is only valid for a free field? What happens in the case of interacting particles?
quantum-field-theory lagrangian-formalism locality
asked by Whelp
Even classically, you can add interaction terms. For instance, if you're doing two coupled simple harmonic oscillators (with one fixed to a wall), you can have the Lagrangian $L = \frac{1}{2}m_{1}\dot x_{1}^{2} + \frac{1}{2}m_{2}\dot x_{2}^{2} - \frac{1}{2}k_{0} \left(x_{1}-x_{2}\right)^{2}+\frac{1}{2}k_{1}x_{1}^{2}$, which has a clear interaction term. – Jerry Schirmer
I understand that it is possible to add interaction terms a posteriori, but can it be done a priori? To take an example, suppose the free Lagrangian of particle A is La and that of particle B is Lb. The Lagrangian of the system will be La+Lb if the particles do not interact. But what if they do? How is this reflected in the initial 'Lagrangian density'? – Whelp
It is reflected in exactly the way that Jerry gave you an example of.
If they do, then you have interaction terms in the Lagrangian. The a posteriori, flimsy concept is not the interaction, it's the notion of a 'free particle'. Literally, you just lop off the part of the Lagrangian that is hard to solve classically, solve the rest, promote the solutions to operators, and call that a 'free particle'. And then you treat the rest as 'small' corrections. But the real theory was always defined by the full Lagrangian. Defining a 'free particle' is just an approximation technique.
In classical mechanics, the Lagrangians of two particles may be added only if the particles do not interact.
I wouldn't say that. You can always write a Lagrangian $L$ for a system of two particles. In general, it takes the form $$L = L_1 + L_2 + L_i$$ where $L_i$ is an interaction term that depends on the coordinates and/or velocities of both particles. If and only if the particles don't interact, $L_i = 0$, and only in that case can you write the Lagrangian as the sum of individual particle Lagrangians $L_1$ and $L_2$. A similar idea applies in quantum field theory. Remember that QFT Lagrangian densities take forms like $$\mathcal{L}(\phi, \partial\phi) \sim (\partial\phi)^2 - m^2\phi^2 - \sum_n g_n\phi^n$$ Of course there are many different kinds, but in general there is always a kinetic term which involves the derivatives of the fields, and other terms which represent either the mass of the field or interactions between the field and itself or other fields. Now, in a sense, a derivative is a way of coupling the values of some object at different spacetime points. So it should make sense that the kinetic term of the actual Lagrangian $$L_\text{kin} \sim \int\mathrm{d}^3\mathbf{x}\ (\partial\phi)^2$$ couples the values of the field $\phi$ at different points in spacetime.
This is analogous to the term $L_i$ in the classical Lagrangian which involves the coordinates of multiple particles, except here, coordinates are replaced by fields and particles are replaced by locations. So you have a term that couples the fields at different spacetime points. Notice, though, that in the rest of the Lagrangian, there are no derivatives. This means that outside of the kinetic term, there is no connection between what happens at different points in spacetime. Specifically, the interaction terms $$\int\mathrm{d}^3\mathbf{x}\ \sum_n g_n\phi^n$$ are local, which means that all field interactions occur at a single spacetime point. This is a simple way to ensure that interactions don't proceed differently when viewed from different reference frames. So there's no problem with integrating the interaction terms over all of space. You are confusing the concepts of "interactions" and "nonlocality". In realistic field theories, including all theories we ever used to study phenomena in the world around us, the interactions exist but they keep the physics local. As David mentioned, the Lagrangian density takes the form $${\mathcal L} = \sum_i \left[ (\partial_\mu \phi_i)^2 + m^2 \phi_i^2 \right] + O(\phi^{3+n}) $$ The sum over $i$ of the terms bilinear in $\phi$ or its first derivatives produces the free particles. But the higher-order terms - that I only wrote under the $O$ symbol - which are cubic, quartic, or of even higher orders - are responsible for all the interactions. In particular, the electromagnetic interaction between two charged objects boils down to their common interaction with the electromagnetic field via the interaction $${\mathcal L_{em}} = j_\mu A^\mu $$ where $j_\mu$ is the 4-vector including the charge density and the flux. Because of this local term at one point, the electromagnetic field is perturbed by the first charge. The electromagnetic field $A^\mu$ continues to propagate to another charge particle, much like if it were a free field, and then the local interaction term of the type above "clicks" again and makes the second particle accelerate according to the first particle's position and charge. This description is particularly optimized for quantum field theory where the photon is called the virtual particle - or a messenger of the interaction. However, even in classical field theory, one may use a similar language. Even the Lagrangian of a classical field theory that describes the electromagnetic interactions involving charge matter has a local form. As far as I understand, what you were proposing was that there would be bilocal or otherwise non-local terms in the Lagrangian $$ L = \int d^{d-1} x\,{\mathcal L}_{local} + \int d^{d-1} x\, d^{d-1} y\, F(x) G(y) $$ which would directly attract or repel some densities $F,G$ that exist at any pair of points $x,y$, right? This never works in field theory at the fundamental level. The Lagrangian above is not local - it is not an integral of a density - so fields at point $x$ would be immediately influenced by fields at any other point $y$. This would violate locality - an action at a distance - and it would contradict relativity because when combined with the principle of relativity, a violation of locality also means a violation of causality (the rule that causes must precede their effects). However, the bilocal Lagrangian above may be "approximately derived" by "integrating out the electromagnetic field". 
I can't explain exactly what it means, especially because this term is only standard in quantum mechanics (in classical physics, it corresponds to "solving $A_\mu$ away from the equations"). However, let me just say the conclusion. The bilocal interaction terms between two charges you are thinking about may be "approximately" derived from the fully local Lagrangian I began with. Because special relativity is a well-established fact about reality, and because we have never observed any "action at a distance" - interactions between two separated bodies occur because of a "messenger" that has to move along a path connecting the two objects - all Lagrangians of field theories we ever study are written as integrals of a Lagrangian density. This feature is called "locality".
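To make the discussion above concrete, here is a short worked example added to this copy for illustration (it is not part of the original answers). Take two masses joined by a spring, $$ L = \tfrac{1}{2} m_1 \dot x_1^{2} + \tfrac{1}{2} m_2 \dot x_2^{2} - \tfrac{1}{2} k (x_1 - x_2)^2 , $$ where the last term plays the role of $L_i$. The Euler-Lagrange equations $$ m_1 \ddot x_1 = -k (x_1 - x_2), \qquad m_2 \ddot x_2 = +k (x_1 - x_2) $$ are coupled, so the particles genuinely interact, yet $L$ is still a single function of $(x_1, x_2, \dot x_1, \dot x_2)$ evaluated at one instant; setting $k = 0$ recovers the non-interacting sum $L_1 + L_2$. The field-theory analogue is an interaction term such as $-\frac{g}{4!}\,\phi^4(x)$, evaluated at a single spacetime point, so the action remains the integral of a local density, $S = \int \mathrm{d}^4x \, \mathcal{L}\big(\phi(x), \partial_\mu \phi(x)\big)$.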
Rate and Determinants of Excessive Fat-Free Mass Loss After Bariatric Surgery Original Contributions Malou A. H. Nuijten ORCID: orcid.org/0000-0002-1200-78891, Valerie M. Monpellier2, Thijs M. H. Eijsvogels1, Ignace M. C. Janssen2, Eric J. Hazebroek3 & Maria T. E. Hopman1 Obesity Surgery volume 30, pages 3119–3126 (2020)Cite this article Fat-free mass (FFM) loss is a concerning aspect of bariatric surgery, but little is known about its time-course and factors related with excessive FFM loss. This study examined (i) the progress of FFM loss up to 3 years post-bariatric surgery and (ii) the prevalence and determinants of excessive FFM loss. A total of 3596 patients (20% males, 43.5 ± 11.1 years old, BMI = 44.2 ± 5.5 kg/m2) underwent sleeve gastrectomy (SG) or Roux-en-Y gastric bypass (RYGB) surgery. Bioelectrical impedance analysis was performed preoperatively and 3, 6, 9, 12, 18, 24 and 36 months post-surgery. Changes in body composition were assessed by mixed model analysis. Prevalence of excessive FFM loss (based on three different cutoff values: ≥ 25%, ≥ 30% and ≥ 35% FFM loss/weight loss (= %FFML/WL)) was estimated and its determinants were assessed by linear regression analysis. Highest rates of FFM loss were found at 3 and 6 months post-surgery, reflecting 57% and 73% of peak FFM loss, respectively. Prevalence of excessive FFM loss ranged from 14 to 46% at 36 months post-surgery, with an older age (β = 0.14, 95%CI = 0.10–0.18, P < .001), being male (β = 3.99, 95%CI = 2.86–5.12, P < .001), higher BMI (β = 0.13, 95%CI = 0.05–0.20, P = .002) and SG (β = 2.56, 95%CI = 1.36–3.76, P < .001) as determinants for a greater %FFML/WL. Patients lost most FFM within 3 to 6 months post-surgery. Prevalence of excessive FFM loss was high, emphasizing the need for more vigorous approaches to counteract FFM loss. Furthermore, future studies should assess habitual physical activity and dietary intake shortly after surgery in relation to FFM loss. Bariatric surgery is considered the most effective strategy in patients with morbid obesity to achieve long-lasting weight loss, improve quality of life and reduce comorbidities [1, 2]. However, bariatric surgery is also associated with nutritional deficiencies and excessive loss of fat-free mass (FFM) [3, 4]. Since FFM consists for 30–50% of muscle mass, it plays an important role in several metabolic mechanisms, such as functional capacity, resting energy expenditure, thermoregulation and bone (re)modelling [5]. Therefore, excessive FFM loss is detrimental for patients because this may lead to difficulties in daily life activities, lower quality of life, weight regain, fat accumulation and higher risk of developing sarcopenia and osteoporosis [5,6,7,8]. These consequences are especially of concern in post-bariatric patients since they may counteract the long-term success of surgery in terms of weight loss, quality of life and reduction of comorbidities. Previous studies reported average FFM reductions of 3–14 kg within 1 year post-surgery, whereas large interindividual variations in FFM loss were observed across patients [9,10,11]. These observations suggest that post-bariatric patients can already lose a substantial amount of FFM within 1 year post-surgery. However, little is known about the time-course of excessive FFM loss, since longitudinal studies with repetitive FFM measurements are scarce due to the radiation exposure of repetitive DEXA measurements. 
Furthermore, insight into the prevalence and determinants of excessive FFM loss could help to identify bariatric patients at risk. We examined (i) the progress of FFM loss up to 3 years post-bariatric surgery, (ii) the prevalence of excessive fat-free mass loss and (iii) determinants of excessive FFM loss. For this purpose, we analysed changes in body composition following bariatric surgery, using a large Dutch bariatric population. We hypothesized that FFM is predominantly lost within 6 months post-surgery, due to the acute impact of bariatric surgery on dietary intake. The prevalence of excessive FFM loss is expected to be substantial, whereas the magnitude of FFM loss may be associated with factors such as age, sex, preoperative BMI and type of surgery, because of the age-related decline in muscle protein synthesis and functional differences between bariatric procedures. In this retrospective study, data was extracted from the electronic medical reports of the Nederlandse Obesitas Kliniek (NOK, Dutch Obesity Clinic), a national clinic providing an extensive perioperative care programme for bariatric patients [12]. The NOK screens their patients on eligibility for bariatric surgery, based on the IFSO guidelines [13], including (i) BMI > 40 kg/m2 or BMI > 35 kg/m2 with comorbid conditions, (ii) > 6-month serious weight loss attempts and (iii) no psychological dysfunction with increased risk on causing medical problems. For the present study, all patients who underwent a primary laparoscopic Roux-en-Y gastric bypass (RYGB) surgery or sleeve gastrectomy (SG) between January 2015 and April 2016 were included. Patients who underwent a revisional bariatric procedure were excluded. Perioperative Care Programme The content of the NOK perioperative care programme was previously described in detail [12]. In short, the NOK provides an interdisciplinary care programme for bariatric patients consisting of pre- and post-bariatric group counseling focused on education about lifestyle change. The 7-week preoperative programme consists of weekly group visits containing three 1-h sessions with a dietician, psychologist and physiotherapist, respectively. After the preoperative programme, the bariatric procedure is performed, followed by a 15-month postoperative care programme. Again, patients visit the clinic once every 3 to 9 weeks for (group) sessions with a dietician, physiotherapist and psychologist with the aim to adopt a healthy lifestyle. During the perioperative care programme, the patient's progress is monitored with regular assessment of weight and body composition up to 5 years post-surgery. Patients have regular follow-ups with a physician (at 3 weeks and 3, 6, 9, 12 and 18 months after surgery). During these medical checks both weight and FFM loss are assessed by the bariatric care team. When FFM loss is deemed extensive by the treating physician, reasons for the extensive loss are assessed and, if necessary, treated by the physician. Moreover, patients will have extra individual consultations with the physician and/or dietician until FFM loss is halted. Nevertheless, there is currently no standardized protocol for the treatment of FFM loss. All data was collected by trained personnel of the NOK and directly uploaded into the patient's electronic medical record, which automatically detects errors or incorrect data to minimize human errors. At the start of the preoperative care programme, patient characteristics are collected, including age and sex. 
Furthermore, presence of obesity-related comorbidities such as hypertension, sleep apnoea, dyslipidaemia, arthrosis and diabetes mellitus was assessed by a physician based on information of the referring physician. Weight and Body Composition Weight and body composition measures were assessed preoperatively, and at 3, 6, 9, 12, 18, 24 and 36 months post-surgery. Height and waist circumference were measured using a non-elastic measuring tape. Body weight, fat percentage, fat mass and FFM were determined by bioelectrical impedance analysis (TANITA® brand, model BC-420MA) [14]. Percentages of total weight loss (%TWL), excess weight loss (%EWL), fat mass loss and FFM loss with respect to preoperative measures were calculated for each postoperative time point. Furthermore, the proportion of FFM loss from total weight loss (expressed in %FFML/WL) was calculated at each follow-up point as follows: $$ \%\mathrm{FFML}/\mathrm{WL}=\frac{\mathrm{FFM}\left(\mathrm{post}\right)-\mathrm{FFM}\left(\mathrm{preoperative}\right)}{\mathrm{Weight}\left(\mathrm{post}\right)-\mathrm{Weight}\left(\mathrm{preoperative}\right)}\times 100\% $$ Currently, no guidelines are available that define how much FFM loss is excessive after bariatric surgery. According to the Quarter FFM Rule, in healthy weight loss, the proportion of weight loss that can be attributed to FFM is around 25% [15]. Nevertheless, former studies in post-bariatric populations have showed that FFM loss after bariatric procedures, such as RYGB, is expected to be greater than 25% [9, 16]. In this study, we used three different cutoff values to determine presence of excessive FFM loss: 25%, 30% and 35% FFM loss of total weight loss (=FFML/WL). At each follow-up point, patients were allocated to the proportional or the excessive loss group, based on each cutoff value (≥ 25%-, ≥ 30%- and ≥ 35%FFML/WL, respectively). Statistical analyses were performed using SPSS (IBM SPSS Statistics for Windows, Version 24 IBM Corp., Armonk, NY, USA.). All continuous variables were visually inspected and tested for normality by the Shapiro-Wilk test, to decide for either parametric or non-parametric statistical analyses. Changes in body composition parameters up to 36 months post-surgery were assessed using mixed model analyses. To examine determinants of FFM loss, univariate and multivariate linear regression was performed with both 12-month and 24-month %FFML/WL as dependent variable and age, sex, type of surgery, preoperative BMI, and comorbidities as covariates. Moreover, a univariate and multivariate logistic regression analysis was performed on 12-month and 24-month excessive FFM loss (defined as ≥ 25%FFML/WL) with the same covariates. To assess the effect of missing data, analyses were performed for the total cohort (all patients) and a subgroup of patients with maximum 1 missing value between the preoperative and 36-month measurement (full data analysis). Statistical significance was assumed at P < .05 (two-sided). The total cohort consisted of 3596 patients (80% females) with an age of 43.5 ± 11.1 years and a preoperative BMI of 44.2 ± 5.5 kg/m2. A total of 3022 patients (84%) underwent a RYGB, whereas 574 patients (16%) underwent a SG. Preoperative prevalence of comorbidities was 1324 patients (36.8%) with hypertension, 710 patients (19.7%) with dyslipidaemia, 467 patients (13.0%) with sleep apnoea, 437 patients (12.2%) with arthrosis and 783 patients (21.8%) with diabetes mellitus. Preoperative body composition parameters are summarized in Table 1. 
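As a minimal numerical illustration of the definitions above (the study's analyses were performed in SPSS; the function and example values below are hypothetical), %FFML/WL and the excessive-loss flags can be computed as follows. Taking losses as preoperative minus postoperative values is algebraically identical to the post-minus-pre form of the equation above.

```python
# Hypothetical sketch of the %FFML/WL outcome and its cutoff classification.
def pct_ffml_wl(ffm_pre, ffm_post, weight_pre, weight_post):
    """Share of total weight loss attributable to fat-free mass loss, in %."""
    return 100.0 * (ffm_pre - ffm_post) / (weight_pre - weight_post)

def excessive(ffml_wl, cutoff):
    """Flag excessive FFM loss at a given %FFML/WL cutoff (25, 30 or 35)."""
    return ffml_wl >= cutoff

# Hypothetical patient: 120 kg -> 90 kg, fat-free mass 62 kg -> 54 kg.
share = pct_ffml_wl(ffm_pre=62, ffm_post=54, weight_pre=120, weight_post=90)
print(round(share, 1), [excessive(share, c) for c in (25, 30, 35)])
# ~26.7 %FFML/WL: excessive at the 25% cutoff, but not at 30% or 35%.
```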
Table 1 Changes in body composition parameters up to 36 months post-surgery Changes in Body Composition over Time Body composition parameters were significantly lower at each follow-up measurement compared with preoperative measures (all P < .001; Table 1). Most favourable body composition, i.e. lowest weight, BMI and fat percentage, was reached at 18 months post-surgery with a corresponding 33.1% TWL and 79.4% EWL. After this 18-month time point, some weight regain of approximately 4.3 kg occurred up to 36 months post-surgery, which mainly consisted of a significant increase in fat mass (+ 4.0 kg) whereas FFM stabilized after 18 months post-surgery. Post-surgery changes of fat mass loss and FFM loss are displayed in Fig. 1. Lowest fat mass and FFM were both reached at 18 months post-surgery, with 52.1 ± 15.5% loss of preoperative fat mass and 14.5 ± 7.7% loss of preoperative FFM. Fat mass was significantly lower compared with each former measurement up to 18 months post-surgery, and subsequently increased up to 36 months post-surgery (Fig. 1a). Likewise, FFM was significantly lower compared with each former time point up to 18 months post-surgery, but no significant changes occurred up to 36 months post-surgery (Fig. 1b). We also observed a large interindividual variability of FFM loss, with a 95%CI from − 1.02 to 29.97% of preoperative FFM at 18 months post-surgery. Changes in fat mass (a) and fat-free mass (b) with respect to preoperative measures up to 36 months post-surgery. Error bars reflect standard deviation (1SD). *P < 0.05 with respect to former measurement. Fat mass significantly decreased to 52.1% of preoperative fat mass at 18 months post-surgery, followed by a significant increase in fat mass. FFM significantly decreased to 14.5% of preoperative FFM at 18 months post-surgery, with no significant changes up to 36 months post-surgery. Highest rates of fat mass loss and FFM loss were observed at 3 and 6 months post-surgery The highest rate of FFM loss occurred in the first 6 months after surgery, with already 57% FFM loss of the peak FFM loss (i.e. 18-month FFM loss) after 3 months and 73% FFM loss after 6 months. The same pattern was seen for fat mass loss, with 55% and 78% loss of 18-month peak fat mass loss for 3 months and 6 months, respectively. Full data analyses of the group with maximum 1 missing value showed similar patterns of fat mass loss (18-month fat mass loss 50.4 ± 17.9%, with 54% and 77% of the peak loss at 3 and 6 months, respectively) and FFM loss (18-month FFM loss 15.1 ± 7.3%, with 55% and 74% of peak loss at 3 and 6 months, respectively) (see Supplemental Figure 1). Prevalence of Excessive FFM Loss Proportions of fat mass loss and FFM loss within total weight loss are displayed in Fig. 2. At 3 months post-surgery, patients lost on average 5.4 ± 4.7 kg FFM with respect to 23.3 ± 6.3 kg of weight, reflecting a proportion of 23.0%. This proportion of FFM loss decreased to 20.9% at 9 months post-surgery and subsequently increased again to 24.7% up to 36 months post-surgery. Weight loss with respect to preoperative weight with its proportions of fat mass loss and FFM loss. Bars reflect weight loss in kilogrammes with standard deviation. Percentages of fat mass loss and FFM loss are displayed within the bars. FM, fat mass; FFM, fat-free mass. 
Proportion of FFM loss of total weight loss decreased from 3 to 9 months post-surgery and subsequently increased again up to 24.7% at 36 months post-surgery Based on the cutoff value of 25%, excessive FFM loss was found in 1324 patients (43.3%) at 3 months post-surgery (Fig. 3). This prevalence of excessive FFM loss decreased to 28.3% at 12 months post-surgery, followed by an increase in prevalence up to 46.2% at 36 months post-surgery. Prevalences of excessive FFM loss based on the cutoff values of 30% and 35% showed a similar pattern over time, with lower prevalences ranging from 12.7 to 25.9% and from 6.9 to 15.8% for the 30% and 35% cutoff values, respectively. Prevalence of excessive FFM loss in our cohort at each measuring point based on the cutoff values of ≥ 25%, ≥ 30% and ≥ 35% FFML/WL. For each cutoff value, prevalence of excessive FFM loss decreased from 3 to 9 months post-surgery. Thereafter, prevalence increased again up to 36 months post-surgery Determinants of FFM Loss Univariate linear regression analysis revealed a significant impact of all covariates on %FFML/WL at 12 months (Table 2). Even greater effect sizes were found for the association of covariates on %FFML/WL at 24 months. An older age, being male, SG, having a higher preoperative BMI, and presence of hypertension, dyslipidaemia, arthrosis or diabetes were related to a greater %FFML/WL. All covariates were subsequently included in the multivariate model. This multivariate model showed similar results with older age, male gender, SG and higher preoperative BMI as determinants for a greater %FFML/WL both at 12 months and 24 months post-surgery. Nevertheless, the significant associations of %FFML/WL with preoperative comorbidities disappeared in the multivariate model, with the exception of dyslipidaemia at 24 months post-surgery. Overall fit (i.e. R2) of the multivariate model was 3.4% for 12-month %FFML/WL and 5.0% for 24-month %FFML/WL. Logistic regression on 12-month and 24-month excessive vs. non-excessive FFM loss (based on cutoff value of 25%FFML/WL) confirmed the results of our linear model: older age, male gender, higher preoperative BMI and SG are related to excessive FFM loss (Supplemental table 1). Table 2 Univariate and multivariate linear regressions on 12-month and 24-month %FFML/WL The present study examined the progress of FFM loss up to 36 months after bariatric surgery and determined the prevalence and determinants of excessive FFM loss. We found that patients lose a substantial amount of FFM after bariatric surgery, with the highest rate at 3 to 6 months post-surgery. Furthermore, prevalence of excessive FFM loss ranged from 14 to 46% at 36 months post-surgery, dependent on the cutoff value, with an older age, being male, higher preoperative BMI and SG as determinants of a greater proportion of FFM loss. These findings indicate that FFM loss is a substantial problem which occurs in a large part of the post-bariatric patients, while counteracting measures should be applied within 3 to 6 months post-surgery. Our study shows that FFM loss seems excessive in at least 1 out of 7 patients (i.e. 14% of the patients exceeds the 35%FFML/WL threshold at 36 months post-surgery). Although no previous studies assessed prevalence of excessive FFM loss, substantial amounts of FFM loss with large variation between individuals have been reported in other studies [17,18,19]. These findings suggest that FFM loss is not sufficiently tackled by post-bariatric care. 
Because of the essential role of FFM in several processes (e.g. metabolic health, functional capacity and bone remodelling), excessive FFM loss could potentially lead to decreases in resting energy expenditure, loss of bone mass, difficulties in daily life activities and demotivation for exercise [5, 20, 21]. These consequences could eventually counteract the success of bariatric surgery and increase health risks. This is the first study to longitudinally assess FFM loss following bariatric surgery with repetitive measurements in the first year post-bariatric surgery as well as a long-term follow-up. We found that the highest rates of FFM loss were observed at 3 and 6 months post-surgery, and a plateau phase was reached at 18 months post-surgery. Our findings align with previous studies reporting substantial FFM loss at 6 months post-surgery with little change up to 12 months post-surgery [17, 18]. We also found that being male, older age, sleeve gastrectomy and higher preoperative BMI were related to greater FFM loss. These determinants of FFM loss could help to identify patients at risk. A previous study also identified high preoperative BMI and being male as determinants of post-bariatric FFM loss [11]. A potential explanation for these findings may relate to the larger preoperative FFM and differences in muscle fibre composition between men and women. Large muscles predominantly consist of type II fibres, which are more susceptible to atrophy in periods of disuse or decreased energy intake [22]. Female skeletal muscle also has a higher ability to metabolize lipids and therefore adapts better to nutrient deprivation [23]. Furthermore, the effect of age on FFM loss could be explained by the age-related decline in muscle mass, which accelerates with higher age. Therefore, older patients inherently have a diminished muscle protein synthetic response to anabolic stimuli [24] and are already more prone to lose FFM compared with younger patients. One unexpected finding of the present study was the higher FFM loss in SG compared with RYGB surgery. A potential explanation for our findings may relate to confounding by indication. Our SG patients had a higher preoperative BMI (47.0 ± 7.4 kg/m2 vs. 43.7 ± 4.9) and were relatively older (44 ± 11 vs. 39 ± 12 years) and more often male (24% vs. 19%) compared with RYGB patients. These patient characteristics were already related to a higher FFM loss and could therefore have distorted our findings on type of surgery. Our prediction model only explained 3.4 to 5.0% of the variation in FFM loss, suggesting a multifactorial process in which other factors are of greater importance. Physical activity and dietary intake, especially, are known to play an essential role in muscle synthesis and breakdown [5]. Although all patients were actively coached on these factors by our care team, no information on exact nutrient intake and exercise patterns of our population can be given since they were not assessed during the programme. Current literature also lacks regular, concurrent and objective assessment of physical activity patterns and dietary intake within 6 months post-bariatric surgery, which emphasizes the need for such studies to understand the role of these factors in FFM loss. This is the first study assessing FFM loss in a large nationwide cohort with high compliance rates, resulting in sufficient samples at each time point. Furthermore, total cohort and full data subgroup analyses revealed similar results, suggesting great robustness of our data.
A limitation of this study was the use of BIA to assess body composition. BIA provides a simple, inexpensive and non-invasive alternative for dual-energy X-ray absorptiometry (DXA) measurements, and studies show high correlations between BIA and DXA [25, 26]. Nevertheless, some caution in interpreting results regarding FFM in populations with obesity is warranted, since predictive equations in conventional BIA rely on a normal hydration status. Obesity is related to variations in hydration of soft tissue (e.g. excess cellular water and greater body water in trunk region) and weight loss can also influence hydration status [27, 28]. A review showed that validation studies for use of BIA in obese populations reported an overestimation of FFM by 2.87 kg (range 1.0–5.18 kg); however, other studies found a non-significant underestimation of FFM by BIA [29]. Despite these small cross-sectional differences between BIA and reference measures, BIA will have a repeatable and constant bias when the same machine in measurement protocol is used in a longitudinal design [29]. This suggests that the restrictions of BIA are limited in our study, since we use repetitive measurements. Moreover, our large cohort also limits the influence of potential outliers. Therefore, data of the current study still provides relevant insights into the prevalence of excessive FFM loss and its determinants. Another limitation was that our cutoff values for excessive FFM loss were not specific for post-bariatric populations. However, the lack of studies on the impact of FFM loss on health parameters makes it difficult to determine how much FFM loss is harmful and therefore excessive. Nevertheless, with this study, we aimed to get some insight into the possible magnitude of this issue. Novel studies on the impact of post-bariatric FFM loss on long-term health could help to develop guidelines on FFM loss after bariatric surgery. Post-bariatric patients lost a substantial amount of FFM up to 18 months post-surgery, with highest rates of FFM loss at 3 and 6 months post-surgery. FFM loss was considered excessive in 7 to 46% of the population, dependent on follow-up moment and cutoff value. The high prevalence of excessive FFM loss emphasizes that post-bariatric care should have more focus on FFM loss and its consequences for the patient. Factors such as age, sex, preoperative BMI and type of surgery could help identifying patients at risk for excessive FFM loss. Future studies assessing dietary intake and physical activity within 6 months post-surgery are warranted to address its multifactorial etiology. Sjostrom L. Review of the key results from the Swedish Obese Subjects (SOS) trial - a prospective controlled intervention study of bariatric surgery. J Intern Med. 2013;273(3):219–34. https://doi.org/10.1111/joim.12012. Gloy VL, Briel M, Bhatt DL, et al. Bariatric surgery versus non-surgical treatment for obesity: a systematic review and meta-analysis of randomised controlled trials. BMJ. 2013;347:f5934. https://doi.org/10.1136/bmj.f5934. PubMed PMID: 24149519; PubMed Central PMCID: PMCPMC3806364 Bal BS, Finelli FC, Shope TR, et al. Nutritional deficiencies after bariatric surgery. Nat Rev Endocrinol. 2012;8(9):544–56. https://doi.org/10.1038/nrendo.2012.48. de Freitas Junior WR, Ilias EJ, Kassab P, et al. Assessment of the body composition and the loss of fat-free mass through bioelectric impedance analysis in patients who underwent open gastric bypass. ScientificWorldJournal. 2014;2014:843253. 
https://doi.org/10.1155/2014/843253. PubMed PMID: 24523649; PubMed Central PMCID: PMCPMC3913190 Wolfe RR. The underappreciated role of muscle in health and disease. Am J Clin Nutr. 2006;84(3):475–82. https://doi.org/10.1093/ajcn/84.3.475. Nair KS. Aging muscle. Am J Clin Nutr. 2005;81(5):953–63. https://doi.org/10.1093/ajcn/81.5.953. Faria SL, Kelly E, Faria OP. Energy expenditure and weight regain in patients submitted to Roux-en-Y gastric bypass. Obes Surg. 2009;19(7):856–9. https://doi.org/10.1007/s11695-009-9842-6. van Venrooij LM, Verberne HJ, de Vos R, et al. Postoperative loss of skeletal muscle mass, complications and quality of life in patients undergoing cardiac surgery. Nutrition. 2012;28(1):40–5. https://doi.org/10.1016/j.nut.2011.02.007. Chaston TB, Dixon JB, O'Brien PE. Changes in fat-free mass during significant weight loss: a systematic review. Int J Obes. 2007;31(5):743–50. https://doi.org/10.1038/sj.ijo.0803483. Davidson LE, Yu W, Goodpaster BH, et al. Fat-free mass and skeletal muscle mass five years after bariatric surgery. Obesity (Silver Spring). 2018;26(7):1130–6. https://doi.org/10.1002/oby.22190. PubMed PMID: 29845744; PubMed Central PMCID: PMCPMC6014876 Guida B, Cataldi M, Busetto L, et al. Predictors of fat-free mass loss 1 year after laparoscopic sleeve gastrectomy. J Endocrinol Investig. 2018;41(11):1307–15. https://doi.org/10.1007/s40618-018-0868-2. Tettero OM, Aronson T, Wolf RJ, et al. Increase in physical activity after bariatric surgery demonstrates improvement in weight loss and cardiorespiratory fitness. Obes Surg. 2018;28(12):3950–7. https://doi.org/10.1007/s11695-018-3439-x. PubMed PMID: 30105664; PubMed Central PMCID: PMCPMC6223746 De Luca M, Angrisani L, Himpens J, et al. Indications for surgery for obesity and weight-related diseases: position statements from the International Federation for the Surgery of Obesity and Metabolic Disorders (IFSO). Obes Surg. 2016;26(8):1659–96. https://doi.org/10.1007/s11695-016-2271-4. PubMed PMID: 27412673; PubMed Central PMCID: PMCPMC6037181 Widen EM, Strain G, King WC, et al. Validity of bioelectrical impedance analysis for measuring changes in body water and percent fat after bariatric surgery. Obes Surg. 2014;24(6):847–54. https://doi.org/10.1007/s11695-014-1182-5. PubMed PMID: 24464517; PubMed Central PMCID: PMCPMC4078732 Heymsfield SB, Gonzalez MC, Shen W, et al. Weight loss composition is one-fourth fat-free mass: a critical review and critique of this widely cited rule. Obes Rev. 2014;15(4):310–21. https://doi.org/10.1111/obr.12143. PubMed PMID: 24447775; PubMed Central PMCID: PMCPMC3970209 Faucher P, Aron-Wisnewsky J, Ciangura C, et al. Changes in body composition, comorbidities, and nutritional status associated with lower weight loss after bariatric surgery in older subjects. Obes Surg. 2019;29(11):3589–95. Cole AJ, Kuchnia AJ, Beckman LM, et al. Long-term body composition changes in women following Roux-en-Y gastric bypass surgery. JPEN J Parenter Enteral Nutr. 2017;41(4):583–91. https://doi.org/10.1177/0148607115625621. PubMed PMID: 26838526; PubMed Central PMCID: PMCPMC5539958 Carrasco F, Ruz M, Rojas P, et al. Changes in bone mineral density, body composition and adiponectin levels in morbidly obese patients after bariatric surgery. Obes Surg. 2009;19(1):41–6. https://doi.org/10.1007/s11695-008-9638-0. Brissman M, Ekbom K, Hagman E, et al. Physical fitness and body composition two years after Roux-en-Y gastric bypass in adolescents. Obes Surg. 2017;27(2):330–7. 
https://doi.org/10.1007/s11695-016-2282-1. Gorissen SH, Remond D, van Loon LJ. The muscle protein synthetic response to food ingestion. Meat Sci. 2015;109:96–100. https://doi.org/10.1016/j.meatsci.2015.05.009. Cunningham JJ. Body composition as a determinant of energy expenditure: a synthetic review and a proposed general prediction equation. Am J Clin Nutr. 1991;54(6):963–9. https://doi.org/10.1093/ajcn/54.6.963. Ciciliot S, Rossi AC, Dyar KA, et al. Muscle type and fiber type specificity in muscle wasting. Int J Biochem Cell Biol. 2013;45(10):2191–9. https://doi.org/10.1016/j.biocel.2013.05.016. Lundsgaard AM, Kiens B. Gender differences in skeletal muscle substrate metabolism - molecular mechanisms and insulin sensitivity. Front Endocrinol (Lausanne). 2014;5:195. https://doi.org/10.3389/fendo.2014.00195. PubMed PMID: 25431568; PubMed Central PMCID: PMCPMC4230199 Breen L, Phillips SM. Skeletal muscle protein metabolism in the elderly: interventions to counteract the 'anabolic resistance' of ageing. Nutr Metab (Lond). 2011;8:68. https://doi.org/10.1186/1743-7075-8-68. PubMed PMID: 21975196; PubMed Central PMCID: PMCPMC3201893 Savastano S, Belfiore A, Di Somma C, et al. Validity of bioelectrical impedance analysis to estimate body composition changes after bariatric surgery in premenopausal morbidly women. Obes Surg. 2010;20(3):332–9. https://doi.org/10.1007/s11695-009-0006-5. Faria SL, Faria OP, Cardeal MD, et al. Validation study of multi-frequency bioelectrical impedance with dual-energy X-ray absorptiometry among obese patients. Obes Surg. 2014;24(9):1476–80. https://doi.org/10.1007/s11695-014-1190-5. Ritz P, Salle A, Audran M, et al. Comparison of different methods to assess body composition of weight loss in obese and diabetic patients. Diabetes Res Clin Pract. 2007;77(3):405–11. https://doi.org/10.1016/j.diabres.2007.01.007. Das SK, Roberts SB, Kehayias JJ, et al. Body composition assessment in extreme obesity and after massive weight loss induced by gastric bypass surgery. Am J Physiol Endocrinol Metab. 2003;284(6):E1080–8. https://doi.org/10.1152/ajpendo.00185.2002. Becroft L, Ooi G, Forsyth A, et al. Validity of multi-frequency bioelectric impedance methods to measure body composition in obese patients: a systematic review. Int J Obes. 2019;43(8):1497–507. https://doi.org/10.1038/s41366-018-0285-9. We would like to acknowledge the bariatric care team of the Nederlandse Obesitas Kliniek for collecting all data. TMHE is financially supported by a personal grant from the Dutch Heart Foundation (no. 2017T051). Radboud Institute for Health Sciences, Department of Physiology (392), Radboud University Medical Center, P.O. Box 1901, 6500 HB, Nijmegen, The Netherlands Malou A. H. Nuijten, Thijs M. H. Eijsvogels & Maria T. E. Hopman Nederlandse Obesitas Kliniek, Huis ter Heide, The Netherlands Valerie M. Monpellier & Ignace M. C. Janssen Department of Surgery, Rijnstate Hospital/Vitalys Clinics, Arnhem, The Netherlands Eric J. Hazebroek Malou A. H. Nuijten Valerie M. Monpellier Thijs M. H. Eijsvogels Ignace M. C. Janssen Maria T. E. Hopman Correspondence to Maria T. E. Hopman. MAH Nuijten declares that she has no conflict of interest. VM Monpellier works as research coordinator at the Nederlandse Obesitas Kliniek. TMH Eijsvogels declares that he has no conflict of interest. IMC Janssen is medical director of the Nederlandse Obesitas Kliniek. EJ Hazebroek declares that he has no conflict of interest. MTE Hopman declares that she has no conflict of interest. 
Ethical Approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. For this type of study formal consent is not required. Informed consent was obtained from all individual participants included in the study. Changes in fat mass (A) and fat-free mass (B) with respect to preoperative measures up to 36 months post-surgery for total cohort (red) and full data subgroup (blue). Error bars reflect standard deviation (1SD). (PNG 387 kb). High resolution image (TIF 104323 kb). (DOCX 15.7 kb). Nuijten, M.A.H., Monpellier, V.M., Eijsvogels, T.M.H. et al. Rate and Determinants of Excessive Fat-Free Mass Loss After Bariatric Surgery. OBES SURG 30, 3119–3126 (2020). https://doi.org/10.1007/s11695-020-04654-6 Issue Date: August 2020 Fat-free mass
Dynamic behavior and optimal scheduling for mixed vaccination strategy with temporary immunity
Siyu Liu a, Xue Yang a,b, Yingjie Bi a, and Yong Li a,b
a. School of Mathematics, Jilin University, Changchun 130012, China
b. School of Mathematics and Statistics and Center for Mathematics and Interdisciplinary Sciences, Northeast Normal University, Changchun 130024, China
∗ Corresponding author: Yong Li
This paper presents an SEIRVS epidemic model with different vaccination strategies to investigate the elimination of chronic disease. The mixed vaccination strategy, a combination of constant vaccination and pulse vaccination, is a likely future direction of disease control. Theoretical analysis and threshold conditions for eradicating the disease are given. Then we propose an optimal control problem and solve the optimal scheduling of the mixed vaccination strategy through the combined multiple shooting and collocation (CMSC) method. Theoretical results and numerical simulations can help to design the final mixed vaccination strategy for the optimal control of the chronic disease once the new vaccine comes into use.
Keywords: Mixed vaccination strategy, optimal control, biological systems, epidemic model.
Mathematics Subject Classification: Primary: 37N25, 34H05; Secondary: 34K13.
Citation: Siyu Liu, Xue Yang, Yingjie Bi, Yong Li. Dynamic behavior and optimal scheduling for mixed vaccination strategy with temporary immunity. Discrete & Continuous Dynamical Systems - B, 2019, 24 (4) : 1469-1483. doi: 10.3934/dcdsb.2018216
[1] R. M. Anderson and R. M. May, Infectious Diseases of Humans: Dynamics and Control, Oxford University Press, 1991. Google Scholar B. E. Asri, Deterministic minimax impulse control in finite horizon: The viscosity solution approach, ESAIM: Control, Optimisation and Calculus of Variations, 19 (2013), 63-77. doi: 10.1051/cocv/2011200. Google Scholar G. Barles, Deterministic impulse control problems, SIAM Journal on Control and Optimization, 23 (1985), 419-432. doi: 10.1137/0323027. Google Scholar A. Bensoussan and J. L. Lions, Impulse control and quasi-variational inequalities, Fruit Growing Research, 1984. Google Scholar L. T. Biegler, Solution of dynamic optimization problems by successive quadratic programming and orthogonal collocation, Computers & Chemical Engineering, 8 (1984), 243-247. doi: 10.1016/0098-1354(84)87012-X. Google Scholar P. Clayden, S. Collins, C. Daniels, M. Frick, M. Harrington, T. Horn, R. Jefferys, K. Kaplan, E. Lessem, L. McKenna and T. Swan, 2014 Pipeline Report: HIV, Hepatitis C Virus (HCV) and Tuberculosis Drugs, Diagnostics, Vaccines, Preventive Technologies, Research Toward a Cure, and Immune-Based and Gene Therapies in Development, New York, 2014. Google Scholar W. A. Coppel, Stability, Asymptotic Behavior of Differential Equations, American Mathematical Monthly, 1965. Google Scholar A. R. D. Cruz, R. T. N. Cardoso and R. H. C. Takahashi, Multi-objective design with a stochastic validation of vaccination campaigns, IFAC Proceedings Volumes, 42 (2009), 289-294. doi: 10.3182/20090506-3-SF-4003.00053. Google Scholar A. d'Onofrio, Stability properties of pulse vaccination strategy in SEIR epidemic model, Mathematical Biosciences, 179 (2002), 57-72. doi: 10.1016/S0025-5564(02)00095-0. Google Scholar A.
d'Onofrio, Mixed pulse vaccination strategy in epidemic model with realistically distributed infectious and latent times, Applied Mathematics and Computation, 151 (2004), 181-187. doi: 10.1016/S0096-3003(03)00331-X. Google Scholar P. V. D. Driessche and J. Watmough, Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission, Mathematical Biosciences, 180 (2002), 29-48. doi: 10.1016/S0025-5564(02)00108-6. Google Scholar S. Jana, P. Haldar and T. K. Kar, Mathematical analysis of an epidemic model with isolation and optimal controls, International Journal of Computer Mathematics, 94 (2017), 1318-1336. doi: 10.1080/00207160.2016.1190009. Google Scholar T. Khan, G. Zaman and M. I. Chohan, The transmission dynamic and optimal control of acute and chronic hepatitis B, Journal of Biological Dynamics, 11 (2017), 172-189. doi: 10.1080/17513758.2016.1256441. Google Scholar J. Li, The spread and prevention of tuberculosis, Chinese Remedies and Clinics, 13 (2013), 482-483. Google Scholar S. Liu, Y. Li, Y. Bi and Q. Huang, Mixed vaccination strategy for the control of tuberculosis: A case study in China, Mathematical Biosciences and Engineering, 14 (2017), 695-708. doi: 10.3934/mbe.2017039. Google Scholar Z. Lu, X. Chi and L. Chen, The effect of constant and pulse vaccination on SIR epidemic model with horizontal and vertical transmission, Mathematical and Computer Modelling, 36 (2002), 1039-1057. doi: 10.1016/S0895-7177(02)00257-1. Google Scholar A. Mubayi, C. Zaleta, M. Martcheva and C. Castillo-Chávez, A cost-based comparison of quarantine strategies for new emerging diseases, Mathematical Biosciences and Engineering, 7 (2010), 687-717. doi: 10.3934/mbe.2010.7.687. Google Scholar National Bureau of Statistics of China, Statistical Data of Category A and B Infectious Diseases 2011-2015. Available from: http://data.stats.gov.cn/easyquery.htm?cn=C01. Google Scholar National Bureau of Statistics of China, China Statistical Yearbook 2016, Birth Rate, Death Rate and Natural Growth Rate of Population, 2016. Available from: http://www.stats.gov.cn/tjsj/ndsj/2016/indexch.htm. Google Scholar K. E. Nelson and C. M. Williams, Early histroy of infectious disease: epidemiology and control of infectious diseases, in Infectious Disease Epidemiology: Theory and Practice, Jones and Bartlett Learning, (2014), 3-18. Google Scholar D. J. Nokes and J. Swinton, The control of childhood viral infections by pulse vaccination, IMA Journal of Mathematics Applied in Medicine & Biology, 12 (1995), 29-53. doi: 10.1093/imammb/12.1.29. Google Scholar B. Song, C. Castillo-Chávez and J. P. Aparicio, Tuberculosis models with fast and slow dynamics: The role of close and casual contacts, Mathematical Biosciences, 180 (2002), 187-205. doi: 10.1016/S0025-5564(02)00112-8. Google Scholar O. V. Stryk and R. Bulirsch, Direct and indirect methods for trajectory optimization, Annals of Operations Research, 37 (1992), 357-373. doi: 10.1007/BF02071065. Google Scholar J. Tamimi and P. Li, A combined approach to nonlinear model predictive control of fast systems, Journal of Process Control, 20 (2010), 1092-1102. doi: 10.1016/j.jprocont.2010.06.002. Google Scholar E. Verriest, F. Delmotte and M. Egerstedt, Control of epidemics by vaccination, Proceedings of the American Control Conference, 2 (2005), 985-990. doi: 10.1109/ACC.2005.1470088. Google Scholar Y. Yang, S. Tang, X. Ren, H. Zhao and C. 
Guo, Global stability and optimal control for a tuberculosis model with vaccination and treatment, Discrete and Continuous Dynamical Systems - Series B, 21 (2016), 1009-1022. doi: 10.3934/dcdsb.2016.21.1009. Google Scholar Y. Yang, Y. Xiao and J. Wu, Pulse HIV vaccination: feasibility for virus eradication and optimal vaccination schedule, Bulletin of Mathematical Biology, 75 (2013), 725-751. doi: 10.1007/s11538-013-9831-8. Google Scholar Y. Zhou, J. Wu and M. Wu, Optimal isolation strategies of emerging infectious diseases with limited resources, Mathematical Biosciences and Engineering, 10 (2013), 1691-1701. doi: 10.3934/mbe.2013.10.1691. Google Scholar
Figure 1. Comparison between the constant vaccination strategy and the mixed vaccination strategy with the same cost (w = 3). The red dashed line shows the constant vaccination strategy with $p = 1$. The blue solid line shows the optimal mixed vaccination strategy with $p = 0.45, p_{c} = 0.2$ and $T = 5$. All the other parameters are shown in Table 1.
Figure 2. Comparison between the constant vaccination strategy and the optimal mixed vaccination strategy. The red dashed line shows the constant vaccination strategy with $p = 0.85$ $(0.6\leq p\leq 0.85)$. The blue solid line shows the optimal mixed vaccination strategy with $0.6\leq u_{1}(t)\leq 0.85, 0.1\leq u_{2}(t)\leq 0.3$ and $5\leq N\leq 10$. All the other parameters are shown in Table 1.
Figure 3. Optimal mixed vaccination strategy under limited vaccinated individuals with $0.6\leq u_{1}(t)\leq 0.85, 0.1\leq u_{2}(t)\leq 0.3$ and $5\leq N\leq 10$. All the other parameters are shown in Table 1.
Table 1. Parameter values
Parameter       Value                 Source
$\mu$           $0.0143~year^{-1}$    [19]
$\varepsilon$   $6~year^{-1}$         [14]
$\alpha$        $0.0015~year^{-1}$    [14]
$c$             $0.05~year^{-1}$      Assumed
$\gamma$        $0.4055~year^{-1}$    Assumed
$\beta$         $0.4945$              Assumed
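As a rough illustration of the mixed strategy described in the abstract above, the sketch below simulates a simplified SIRV model with temporary (waning) immunity under constant vaccination of newborns plus a pulse that vaccinates a fraction of susceptibles every T years. It is schematic only: the paper's SEIRVS equations and its CMSC optimal-control solver are not reproduced here, the compartment structure is simplified, and the parameter choices merely echo Table 1 and the Figure 1 caption.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values; mu, beta, gamma, c loosely follow Table 1 above and
# p, p_c, T follow the Figure 1 caption, but this is not the paper's model.
mu, beta, gamma, c = 0.0143, 0.4945, 0.4055, 0.05
p_const, p_pulse, T_pulse = 0.2, 0.45, 5.0

def rhs(t, y):
    S, I, R, V = y
    new_inf = beta * S * I
    dS = mu * (1 - p_const) - new_inf - mu * S + c * V   # births, infection, death, waning
    dI = new_inf - (gamma + mu) * I
    dR = gamma * I - mu * R
    dV = mu * p_const - (c + mu) * V                     # vaccinated newborns, waning, death
    return [dS, dI, dR, dV]

y = np.array([0.70, 0.01, 0.09, 0.20])                   # initial fractions, summing to 1
t0, t_end = 0.0, 40.0
traj = []
while t0 < t_end:
    t1 = min(t0 + T_pulse, t_end)
    sol = solve_ivp(rhs, (t0, t1), y, max_step=0.05)
    traj.append(sol)
    y = sol.y[:, -1].copy()
    if t1 < t_end:                                       # pulse: move a fraction of S to V
        y[3] += p_pulse * y[0]
        y[0] -= p_pulse * y[0]
    t0 = t1

print("infectious fraction at t = 40:", traj[-1].y[1, -1])
```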
Structural Breaks

What are structural break models?

Time series models estimate the relationship between variables that are observed over a period of time. Many models assume that the relationship between these variables stays constant across the entire period. However, there are cases where changes in factors outside of the model cause changes in the underlying relationship between the variables in the model. Structural break models capture exactly these cases by incorporating sudden, permanent changes in the parameters of models. Structural break models can integrate structural change through any of the model parameters. Bai and Perron (1998) provide the standard framework for structural break models, in which some, but not all, of the model parameters are allowed to break at m possible break points, $$ y_t = x_t' \beta + z_t' \delta_j + u_t , $$ $$ t = T_{j-1} + 1, \ldots , T_j , $$ where $j = 1, \ldots, m+1$. The dependent variable $y_t$ is modeled as a linear combination of regressors with both time-invariant coefficients, $x_t$, and time-variant coefficients, $z_t$. Alternatively, the variance break model assumes that breaks occur in the variance of the error term such that $$ y_t = x_t' \beta + u_t ,$$ $$ var(u_t) = \sigma_1^2,~ t ≤ T_1 ,$$ $$ var(u_t) = \sigma_2^2,~ t > T_1 .$$

Why should I worry about structural breaks?

"Structural change is pervasive in economic time series relationships, and it can be quite perilous to ignore. Inferences about economic relationships can go astray, forecasts can be inaccurate, and policy recommendations can be misleading or worse." -- Bruce Hansen (2001)

Time series models are used for a variety of reasons -- predicting future outcomes, understanding past outcomes, making policy suggestions, and much more. Parameter instability diminishes the ability of a model to meet any of these objectives. Research has demonstrated that:
- Many important and widely used economic indicators have been shown to have structural breaks.
- Failing to recognize structural breaks can lead to invalid conclusions and inaccurate forecasts.
- Identifying structural breaks in models can lead to a better understanding of the true mechanisms driving changes in data.

Economic indicators with structural breaks

In a 1996 study, Stock and Watson examine 76 monthly U.S. economic time series relations for model instability using several common statistical tests. The series analyzed encompassed a variety of key economic measures including interest rates, stock prices, industrial production, and consumer expectations. The complete group of variables studied was chosen by the authors to meet four criteria:
- The sample included important monthly economic aggregates and coincident indicators.
- The sample included important leading indicators.
- The series represented a number of different types of variables, spanning different time series properties.
- The variables had consistent historical definitions, or adjustments could easily be made if definitions changed over time.
In this study Stock and Watson find evidence that a "substantial fraction of forecasting relations are unstable." Based on this finding the authors make several observations:
- Systematic stability analysis is an important part of modeling.
- Failure to appropriately model "commonplace" instability in models "calls into question the relevance of policy implications".
- There is an opportunity to improve on the forecasts made by fixed-parameter models.
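To see what this kind of instability does to forecasts in a controlled setting, here is a minimal sketch (Python/NumPy, purely simulated data; the AR(1) process, break date and window length are arbitrary illustrative choices, not Stock and Watson's specification) that contrasts recursive, effectively fixed-parameter, one-step-ahead forecasts with rolling-window forecasts when the intercept shifts mid-sample:

```python
# Simulated AR(1) series whose intercept shifts at an unobserved break date;
# compare recursive (expanding-window) forecasts with 60-observation rolling forecasts.
import numpy as np

rng = np.random.default_rng(0)
T, T_b = 300, 150                                  # sample size and true break date
mu = np.where(np.arange(T) < T_b, 1.0, 3.0)        # intercept shifts at the break
y = np.zeros(T)
for t in range(1, T):
    y[t] = mu[t] + 0.5 * y[t - 1] + rng.normal(scale=1.0)

def ols_ar1(sample):
    """Return (intercept, slope) from an AR(1) regression of y_t on (1, y_{t-1})."""
    X = np.column_stack([np.ones(len(sample) - 1), sample[:-1]])
    beta, *_ = np.linalg.lstsq(X, sample[1:], rcond=None)
    return beta

window = 60
errs_fixed, errs_roll = [], []
for t in range(window, T - 1):
    full_beta = ols_ar1(y[:t + 1])                 # expanding (recursive) estimate
    roll_beta = ols_ar1(y[t - window:t + 1])       # rolling-window estimate
    errs_fixed.append(y[t + 1] - (full_beta[0] + full_beta[1] * y[t]))
    errs_roll.append(y[t + 1] - (roll_beta[0] + roll_beta[1] * y[t]))

print("RMSE, recursive (fixed-parameter) forecasts:", np.sqrt(np.mean(np.square(errs_fixed))))
print("RMSE, 60-obs rolling-window forecasts      :", np.sqrt(np.mean(np.square(errs_roll))))
```

In this simulated setting the rolling forecasts adapt to the new intercept after the break, while the recursive forecasts keep averaging over both regimes, which is the kind of gap the adaptive models in the studies below exploit.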
Rossi (2013) updates the data in this study to include data through 2000 and comes to the same conclusion: there is clear empirical evidence of instabilities, and these instabilities impact forecast performance. Other studies find evidence for structural breaks in models of a number of economic and financial relationships, including:
- International real interest rates and inflation (Rapach and Wohar, 2005)
- The equity premium (Kim, Morley and Nelson, 2005)
- Global house prices (Knoll, Schularick and Steger, 2014)
- CO2 emissions (Acarvci and Erdogan, 2016)
- The monetary policy reaction function (Inoue and Rossi, 2011)

Forecast performance and structural breaks

In the same 1996 study, Stock and Watson examine the impacts structural breaks can have on forecasting when not properly included in a model. In particular, the study compares the forecast performance of fixed-parameter models to models that allow parameter adaptivity, including recursive least squares, rolling regressions, and time-varying parameter models. The study finds that in over half of the cases the adaptive models perform better than the fixed-parameter models based on their out-of-sample forecast error. The bottom line is that failing to account for structural changes results in model misspecification, which in turn leads to poor forecast performance.

"The potential empirical importance of departures from constant parameter linear models is undeniable" -- Koop and Potter (2011)

A number of studies following this work have found similar results, showing that structural breaks can impact forecast performance. In their 2011 paper, Pettenuzzo and Timmermann show that including structural breaks in asset allocation models can improve long-horizon forecasts and that ignoring breaks can lead to large welfare losses. Inoue and Rossi (2011) show the importance of identifying parameter instabilities for improving the performance of DSGE models.

When should I consider structural break models?

Structural breaks aren't right for all data, and knowing when to use them is important for building valid models. While there are statistical tests for structural breaks, which we discuss in the next section, there are some preliminary checks that help determine when you may need to consider structural breaks.

Visual plots indicate a change in behavior

Data plots provide a quick, preliminary method for finding structural breaks in your data. Visually inspecting your data can provide important insight about potential breaks in the mean or volatility of a series. Don't forget to examine both independent and dependent variables, as sudden changes in either can change the parameters of a model.

Poor out-of-sample forecasts

Structural breaks in a model serve as one possible reason for poor forecast performance. A fixed-parameter model cannot be expected to forecast well if the true parameters of the model change over time. Conversely, if your model isn't forecasting well, it may be worth considering whether model instabilities could be playing a role.

"Why do instabilities matter for forecasting? Clearly, if the predictive content is not stable over time, it will be very difficult to exploit it to improve forecasts"

Theoretical support for model change

There are many cases where economic theory suggests that there should be a change in a modeled relationship. When economic theory, or even economic intuition, points towards structural breaks, the possibility should be considered.
In some cases these changes may be widely acknowledged, such as the change in volatility of a number of key economic indicators around the mid-1980s, known as "The Great Moderation", the decline in economic growth between 2007-2009 during the "Great Recession", and sudden changes in policy stances such as the "Volcker Rule" and the "Zero lower bound" (Giacomini and Rossi, 2011). Beyond these, there could be a number of reasons for changes in models over time, including legislative or regulatory changes, technological changes, institutional changes, changes in monetary or fiscal policy, or oil price shocks.

What statistical tests are there for identifying structural breaks?

Testing for structural breaks is a rich area of research, and there is no one-size-fits-all test for structural breaks; which test to implement depends on several factors. Is the break date known or unknown? Is there a single break or multiple breaks? Knowing the statistical characteristics of both the breaks and your data helps to ensure that the correct test is implemented. Below we highlight some of the classic tests that have shaped the field of structural break testing.

The Chow Test

The Chow (1960) test was one of the first tests which set the foundation for structural break testing. It is built on the theory that if parameters are constant then out-of-sample forecasts should be unbiased. It tests the null hypothesis that there is no structural break against the alternative that there is a known structural break at time $T_b$. The test considers a linear model split into samples at a predetermined break point such that $$ y_{t} = x_{t}'\beta_1 + u_t ,~ for~ t ≤ T_b ,$$ $$ y_{t} = x_{t}'\beta_2 + u_t ,~ for~ t > T_b .$$ The test estimates coefficients for each period and uses the out-of-sample forecast errors to compute an F-test comparing the stability of the estimated coefficients across the two periods. One key issue with the Chow test is that the break point must be predetermined prior to implementing the test. Furthermore, the break point must be exogenous or the standard distribution of the statistic is not valid.

The Quandt Likelihood Ratio Test

The Quandt Likelihood Ratio (QLR) test (1960) builds on the Chow test and attempts to eliminate the need for picking a break point by computing the Chow test at all possible break points. The largest Chow test statistic across the grid of all potential break points is chosen as the Quandt statistic, as it indicates the most likely break point. This test was long unusable in practice because the limiting distribution of the test statistic under the assumption of an unknown break point was not known. However, the test became statistically relevant when Andrews and Ploberger (1994) developed an applicable distribution for the test statistic for cases such as the Quandt test.

The CUSUM Test

In their 1975 paper Brown, Durbin, and Evans propose the CUSUM test of the null hypothesis of parameter stability. The CUSUM test for instability is appropriate for testing for parameter instability in the intercept term. It is best described as a test for instability of the variance of post-regression residuals. The CUSUM test is based on the recursive least squares estimation of the model $$ y_t = x_t'\beta_t + u_t $$ for all $k+1 ≤ t ≤ T$. This yields a set of estimates for $\beta$, $\{\beta_{k+1}, \beta_{k+2}, \ldots, \beta_T\}$. The CUSUM test statistic is computed from the one-step-ahead residuals of the recursive least squares model.
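Before turning to the intuition behind the CUSUM statistic, the Chow and QLR statistics just described can be made concrete with a minimal sketch (Python/NumPy, simulated data; the regressors, break date and 15% trimming fraction are illustrative assumptions, and the QLR statistic must still be compared against its non-standard critical values):

```python
# Chow F-statistic at a candidate break date, and the QLR (sup-F) statistic
# obtained by maximizing it over a 15%-trimmed grid of candidate dates.
import numpy as np

rng = np.random.default_rng(1)
T = 200
x = np.column_stack([np.ones(T), rng.normal(size=T)])        # intercept + one regressor
beta = np.where(np.arange(T)[:, None] < 120, [1.0, 0.5], [2.0, 1.5])
y = np.sum(x * beta, axis=1) + rng.normal(size=T)

def ssr(X, Y):
    """Sum of squared residuals of an OLS fit of Y on X."""
    b, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ b
    return resid @ resid

def chow_stat(Tb, k=2):
    """F-statistic comparing the pooled fit with separate fits before/after Tb."""
    ssr_pooled = ssr(x, y)                                    # restricted: no break
    ssr_split = ssr(x[:Tb], y[:Tb]) + ssr(x[Tb:], y[Tb:])     # unrestricted: break at Tb
    return ((ssr_pooled - ssr_split) / k) / (ssr_split / (T - 2 * k))

print("Chow F at t=120:", chow_stat(120))

trim = int(0.15 * T)                                          # 15% trimming on each side
qlr = max(chow_stat(tb) for tb in range(trim, T - trim))
print("QLR (sup-F) statistic:", qlr)
```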
The CUSUM test is based on the intuition that if $\beta$ changes from one period to the next then the one-step-ahead forecast will not be accurate and the forecast error will be greater than zero. This means the greater the CUSUM test statistic, the greater the forecast error, and the greater the statistical evidence in favor of parameter instability.

The Hansen and Nyblom Tests

The Hansen (1992) and Nyblom (1989) tests provide additional Lagrange multiplier parameter stability tests. The Nyblom test tests the null hypothesis that all parameters are constant against the alternative that some of the parameters are unstable. The Hansen test builds on the Nyblom test to allow for testing the constancy of single parameters. A nice feature of these tests is that they do not require that the model is estimated by OLS. However, unlike the other tests mentioned, neither test provides a means for identifying the breakpoint.

Comparing parameter stability tests

Chow Test
Description: Simple split-sample test of the null hypothesis of no break against the alternative of a one-time shift in parameters at a known break.
Advantages: Easy to implement. F-statistic with standard distribution.
Disadvantages: Must select an *exogenous* break date for the $\chi^2$ distribution of the statistic to be valid.

Quandt Likelihood Ratio (QLR)
Description: Finds the maximum Chow statistic across all possible break points to test the null hypothesis of no break against the alternative of a one-time break.
Advantages: Works with an unknown break point. A graph of the QLR statistic can provide insight into the location of the break.
Disadvantages: Non-standard distribution that depends on the number of variables and the series trimming. Computationally burdensome.

CUSUM
Description: Uses standardized one-step-ahead recursive forecast residuals to test the null hypothesis of constant parameters against the alternative of non-constant parameters.
Advantages: Recursive residuals can be efficiently computed using the Kalman filter. Ploberger and Kramer (1990) show that the OLS CUSUM test statistic can be constructed using the full-sample residuals rather than the recursive residuals.
Disadvantages: Power only in the direction of the mean regressors. Tests for instability in the intercept only.

CUSUMSQ
Description: Applies the CUSUM idea to the squared recursive residuals.
Advantages: Has a standard $\chi^2(t)$ distribution.
Disadvantages: Power only for changes in variance.

Nyblom Test
Description: A locally most powerful Lagrange multiplier test of the null hypothesis that all coefficients are constant against the alternative that some of the coefficients are time-varying.
Advantages: Tests constancy of all parameters. Does not require that the model is estimated by OLS.
Disadvantages: Does not provide information about the location of the break. The statistic has a non-standard distribution which depends on the number of variables. The distribution is different if the data are non-stationary.

Hansen Test
Description: An extension of the Nyblom test that allows testing the constancy of individual parameters.
Advantages: Tests are easy to compute. Tests are robust to heteroscedasticity. Individual tests are informative about the type of structural break.
Disadvantages: The statistic has a non-standard distribution which depends on the number of variables tested for non-stationarity.

How are structural break models estimated?

Structural break models present a number of unique estimation issues that must be considered:
- The location of the structural break in the data is often unknown and therefore must be estimated.
- It must be determined whether the model is a pure structural break model, in which all regression parameters change, or a partial structural break model, in which only some of the parameters change.
- The pre- and post-break model parameters must be estimated.
The exact method of estimation will depend on the characteristics of your data and the assumptions imposed on the model. However, the general guiding principle of least-squares estimation is similar to that of the model without structural breaks. The number and location of the structural breaks are chosen jointly with the parameters of the model to minimize the sum of squared errors.

Estimation framework

Bai and Perron (1998, 2003) provide the foundation for estimating structural break models based on least squares principles. Bai and Perron start with the multiple linear regression with m breaks introduced above, $$ y_t = x_t' \beta + z_t' \delta_j + u_t , \quad t = T_{j-1} + 1, \ldots , T_j , $$ where $j = 1, \ldots, m+1$. The dependent variable $y_t$ is modeled as a linear combination of regressors with both time-invariant coefficients, $x_t$, and time-variant coefficients, $z_t$. This model is rewritten in matrix form as $$ Y = X\beta + \bar{Z}\delta + U $$ where $Y = (y_1, \ldots, y_T)'$, $X = (x_1, \ldots, x_T)'$, $U = (u_1, \ldots, u_T)'$, $\delta = (\delta_1', \ldots, \delta_{m+1}')'$ and $\bar{Z}$ is a matrix which diagonally partitions $Z$ at $(T_1, \ldots, T_m)$ such that $ \bar{Z} = diag(Z_1, \ldots, Z_{m+1})$. For each time partition, the least squares estimates of $\beta$ and $\delta_j$ are those that minimize $$ (Y - X\beta - \bar{Z}\delta)'(Y - X\beta - \bar{Z}\delta) = \sum_{i=1}^{m+1} \sum_{t = T_{i-1} + 1}^{T_i} [y_t - x_t'\beta - z_t'\delta_i]^2 .$$ Given the m partitions, the estimates become $\hat{\beta}(\{T_j\})$ and $\hat{\delta}(\{T_j\})$. These coefficients and the m partitions are chosen as the global minimizers of the sum of squared residuals across all partitions, $$ (\hat{T}_1, \ldots, \hat{T}_m) = argmin_{T_1, \ldots, T_m} S_T(T_1, \ldots, T_m) $$ where $ S_T(T_1, \ldots, T_m) $ is the sum of squared residuals given $\hat{\beta}(\{T_j\})$ and $\hat{\delta}(\{T_j\})$. The computational burden of estimating both the m break points and the period-specific coefficients is quite large. However, Bai and Perron (2003) show that the complexity can be significantly reduced using the concept of dynamic programming. The dynamic programming algorithm is based on Bellman's principle. Given $SSR(\{T_{r,n}\})$, the sum of squared residuals associated with the optimal partition containing r breaks and n observations, the Bellman principle finds that the optimal partitions solve the recursive problem $$ SSR(\{T_{m,T}\}) = \underset{mh ≤ j ≤ T-h}{min} [SSR(\{T_{m-1,j}\}) + SSR(j+1,T)] $$ where the sum of squared residuals, $SSR(i, j)$, is found using the recursive residual, $v(i, j)$, such that $$ SSR(i, j) = SSR(i, j-1) + v(i,j)^2 .$$ This dynamic problem can be solved sequentially, first finding the optimal one-break partitions, then finding the optimal two-break partitions, and continuing until the optimal $m-1$ break partitions are found.
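A minimal sketch of this dynamic program is given below (Python/NumPy; it uses a pure mean-shift model, an illustrative minimum segment length, and brute-force segment SSRs rather than the recursive-residual updates, so it only mirrors the logic of the Bai-Perron algorithm rather than reproducing a production implementation):

```python
# Bellman recursion over segment SSRs: choose m break dates that minimize total SSR.
import numpy as np

rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0.0, 1, 80),        # three regimes with mean shifts
                    rng.normal(2.5, 1, 70),
                    rng.normal(0.5, 1, 90)])
T, m, h = len(y), 2, 15                             # m breaks, minimum segment length h

# ssr[i, j] = SSR of the segment from observation i to j (inclusive); with only a
# shifting mean (z_t = constant), this is the variation around the segment mean.
ssr = np.full((T, T), np.inf)
for i in range(T):
    for j in range(i + h - 1, T):
        seg = y[i:j + 1]
        ssr[i, j] = np.sum((seg - seg.mean()) ** 2)

# opt[r, t] = minimal SSR of a partition of y[0..t] containing r breaks.
opt = np.full((m + 1, T), np.inf)
arg = np.zeros((m + 1, T), dtype=int)
opt[0] = ssr[0]
for r in range(1, m + 1):
    for t in range((r + 1) * h - 1, T):
        cand = [opt[r - 1, j] + ssr[j + 1, t] for j in range(r * h - 1, t - h + 1)]
        k = int(np.argmin(cand))
        opt[r, t] = cand[k]
        arg[r, t] = r * h - 1 + k                   # last break date of the best partition

# Backtrack the optimal break dates from the full sample.
breaks, t = [], T - 1
for r in range(m, 0, -1):
    t = arg[r, t]                                   # break closing the preceding partition
    breaks.append(t)
print("Estimated break dates:", sorted(breaks), "(true: 79, 149)")
```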
What are some alternatives to structural break models?

Structural break models make a very specific assumption about how changes in model parameters occur. They assume that parameters shift immediately at a specific breakpoint. This may intuitively make sense when there are distinct and immediate changes in conditions that impact a model. However, there are other models that allow for different manners of parameter shifts. Time-varying parameter models assume that parameters change gradually over time, while threshold models assume that model parameters change based on the value of a specified threshold variable. Markov-switching models offer yet another solution, which assumes that an underlying stochastic Markov chain drives regime changes. Theory and statistical tests should drive the decision of which of these models you use.

Structural break models are an important modeling technique that should be considered as part of any thorough time-series analysis. There is much evidence supporting both the prevalence of structural breaks in time series data and the detrimental impacts of ignoring structural breaks.
Energy-effective artificial internet-of-things application deployment in edge-cloud systems

Part of a collection: Special Issue on Green Edge Computing

Zhengzhe Xiang, Yuhang Zheng, Mengzhu He, Longxiang Shi, Dongjing Wang, Shuiguang Deng & Zengwei Zheng

Peer-to-Peer Networking and Applications (2021)

Recently, the Internet-of-Things technique has been believed to play an important role as the foundation of the coming Artificial Intelligence age for its capability to sense and collect real-time context information of the world, and the concept of Artificial Intelligence of Things (AIoT) has been developed to summarize this vision. However, in the typical centralized architecture, the increase in device connections and massive data brings huge congestion to the network, so that the latency caused by unstable and time-consuming long-distance network transmission limits its development. The multi-access edge computing (MEC) technique is now regarded as the key tool to solve this problem. By establishing a MEC-based AIoT service system at the edge of the network, the latency can be reduced with the help of the corresponding AIoT services deployed on nearby edge servers. However, as the edge servers are resource-constrained and energy-intensive, we should be more careful in deploying the related AIoT services, especially when they can be composed into complex applications. In this paper, we modeled complex AIoT applications using directed acyclic graphs (DAGs), and investigated the relationship between the AIoT application performance and the energy cost in the MEC-based service system by translating it into a multi-objective optimization problem, namely the CA\(^3\)D problem; the optimization problem was then efficiently solved with the help of a heuristic algorithm. Besides, with real workflow data sets of both simple and complex structure, such as the Alibaba Cloud cluster trace and the Montage project, we conducted comprehensive experiments to evaluate the results of our approach. The results showed that the proposed approach can effectively obtain balanced solutions, and the factors that may impact the results were also adequately explored.

The rapid development and evolution of Artificial Intelligence (AI) theory and technology have brought a revolution to current information technology architectures. In particular, the Internet of Things (IoT) is one of the fields that face both challenges and opportunities because of its role as the data source of the real world. The concept of Artificial Intelligence of Things (AIoT) refers to the combination of artificial intelligence technologies with the Internet of Things infrastructure to achieve more efficient IoT operations, improve human-machine interactions and enhance data management and analytics. According to the report of GSMA (footnote 1), the global total of cellular IoT connections is forecast to reach 3.2 billion by 2024. There is no doubt that this tremendous increase in connections will create a huge AIoT application market that draws the attention of the world. Based on IoT technology, a reliable publish/subscribe interaction framework can be established between IoT devices and AIoT application developers so that high-quality data can be collected systematically. Traditionally, this collection process is conducted in the end-cloud mode: the widely distributed but resource-constrained IoT devices only need to sense and upload the real-world information to the cloud, and the cloud handles the data processing.
However, the latency caused by long-distance transmission and the traffic congestion caused by huge data volumes in the network, as well as high costs such as the energy consumption of data pre-processing, limit its wide application in the typical centralized architecture. Fortunately, the Multi-access Edge Computing (MEC) technique has been proposed to solve the aforementioned problems [1,2,3]. Specifically, MEC is a novel paradigm that has emerged recently as a reinforcement of mobile cloud computing, to optimize mobile resource usage and enable the wireless network to provide context-aware services [4, 5]. With the help of MEC, computation and transmission between mobile devices and the cloud are partly migrated to edge servers. Therefore, users can easily connect to their nearby edge servers via the wireless network [6] and offload their tasks to them. The short-distance connection between users and edge servers can dramatically reduce the latency, and the computation capability of the edge servers is sufficient to finish those conventional tasks. What's more, with the help of popular container platforms like Kubernetes, it is easy to manage services (e.g. the data pre-processing services) in the MEC environment. However, these advantages are no excuse for carelessness in planning the multi-source AIoT sensing and analysing tasks: if the related services are not assigned to appropriate hosts, the system may even obtain lower-quality results at a much higher cost. More critically, as the edge servers are all resource-constrained [7, 8] and energy-consuming [9,10,11,12], there may not be enough resources for the data pre-processing services to run if they are not deployed on appropriate edge servers. Thus, it becomes more and more important to design a service deployment scheme as well as a resource allocation scheme to balance the quality and the cost. The main contributions are summarized as follows:
- We investigated the development of artificial intelligence of things technology and discussed the feasibility of adopting the multi-access edge computing architecture to optimize the performance of AIoT systems.
- We modeled the complex AIoT application with a directed acyclic graph, so that the execution of an AIoT application could be decomposed into several ordered AI services. Based on the proposed application model, we constructed an appropriate metric to measure the AIoT application system, and mathematically modeled the service deployment problem, which aims to optimize the performance and the cost under the constraints of edge resources, as a multi-objective programming problem.
- We designed and implemented an MOEA/D based algorithm to solve the problem, and conducted a series of experiments to evaluate the performance of the solutions. The results verified the improvement achieved by the proposed algorithm compared with other existing baselines. Besides, different configurations of the system were also investigated to explore the impacts of related factors.
The rest of this paper is organized as follows. Section 2 introduces how multi-access edge computing techniques can be used in optimizing AIoT applications with the example of a famous AI model. Section 3 shows some representative research works about service placement and resource allocation in the MEC environment. Section 4 presents the definitions, concepts and components of the problem to be solved. Section 5 describes the approaches we adopted to solve this problem.
Section 6 shows the experimental results, including the factors that affect our algorithms. Finally, Sect. 7 concludes our contribution and outlines future work.

Motivation scenario

Recently, AI research has become more and more structural and systemic with the prosperity of deep learning (DL) theory and tools. With the help of mature libraries like Tensorflow, PyTorch, MindSpore, etc., researchers and developers can easily build their own models like building blocks. One main factor that facilitates this popularization lies in the common structure of these deep learning models – the directed acyclic graph (DAG) based computation workflow. There are many existing examples in AI research exhibiting such DAG structures. For example, Fig. 1 shows the structure of DeepFM (footnote 2), a famous recommendation model proposed to predict click-through rate (CTR). Specifically, in this model, features of different fields are collected and wrapped as input, transformed to dense vectors with several embedding layers, and then separately sent to the factorization machine (FM) and multi-layer perceptron (MLP) layers to generate the final output; the data dependencies and logic dependencies can be clearly observed in Fig. 1.

Fig. 1. The workflow of DeepFM

Generally, these DL-based AI models can be deployed on cloud servers with sufficient resources, and the data collected by IoT devices are uploaded to these servers together for further inference. However, long-distance communication between IoT devices and cloud servers may cause unavoidable delays. At the same time, there is often no need to upload the context information collected in different regions to the cloud when it can be processed on-site. Therefore, if the different components of an AI model are reasonably deployed using the multi-access edge computing architecture, the data transmission efficiency will be improved and the performance of AI tasks in the IoT environment will be greatly ameliorated.

Service placement in MEC

The issue of service placement is not a novel one, since how the services are placed will dramatically affect the performance of a parallel and distributed system, especially when the definition of performance varies in different scenarios; the optimal placement strategies are usually derived according to the objectives that people mainly focus on. For example, Ouyang et al. addressed the service placement challenge in terms of the performance-cost trade-off [13]. They applied the Lyapunov optimization technique to study the edge service performance optimization problem under a long-term cost budget constraint. Similarly, Pasteris et al. considered the heterogeneity of edge node characteristics and user locations in optimizing the performance of MEC by placing multiple services [14]. They partitioned each edge node into multiple slots, where each slot contains one service, and proposed a deterministic approximation algorithm to solve it after reducing the problem to a set cover problem. Roy et al. went further on a similar topic by introducing a user path prediction model in such a scenario [15]. They formulated the service replica placement problem as a multi-objective integer linear programming problem, and used a binary particle swarm optimization algorithm to achieve near-optimal solutions within polynomial time. Yuan et al. used a greedy approximation algorithm to solve the service placement problem under the constraints of computing and storage resources [16].
They also adopted a 2-time-scale framework to reduce the higher operating costs caused by frequent cross-cloud service migration. To achieve dynamic service placement, based on the Lyapunov optimization method, Ning et al. proposed an approximation-based stochastic algorithm to approximate the expected future system utility, and then a distributed Markov approximation algorithm is used to determine the service configuration [17]. Han et al. focused on the online multi-component service placement in edge cloud networks [18]. Considering the dependency between service components, they analyzed the delay of tree-like services and solved the problem with an improved ant colony algorithm.

Resource allocation in MEC

The resource allocation issue follows after deciding the appropriate edge server on which to place service instances. The resource allocation issue is important and widely discussed in the research on communication and distributed systems, especially in the research on computation offloading, the key problem of the MEC paradigm. For example, Yu et al. considered a cloudlet that provides services for multiple mobile devices [19], and they proposed a joint scheduling algorithm that guided the sub-carrier allocation for the Orthogonal Frequency-Division Multiplexing Access (OFDMA) system and the CPU time allocation for the cloudlet. Wang et al. also tried to explore the relationship between cost and resources. They formulated the computation offloading decision, resource allocation and content caching strategy as an optimization problem, considering the total revenue of the network [20]. Focusing on saving the energy of mobile users, Shuwaili et al. proposed a resource allocation approach over both communication and computation resources, while You et al. [21] also considered the resources of the cloud. Guo et al. took the average packet delay as the optimization goal of the edge container resource allocation problem [22], and proposed a delay-sensitive resource allocation algorithm based on A3C (asynchronous advantage actor-critic) to solve it. Bahreini et al. expressed the edge resource allocation problem (ERP) as a mixed integer linear problem (MILP) [23], proposed an auction-based mechanism, and proved that the proposed mechanism is individually rational and results in an envy-free allocation. It solves resource allocation and monetization challenges in MEC systems. Yang et al. studied joint computation partitioning and resource allocation for delay-sensitive applications in the MEC system [24]. They proposed a new efficient off-line algorithm, namely the Multi-Dimensional Search Adjustment Algorithm (MDSA), to solve this problem. In addition, they designed an online method, Cooperative Online Scheduling (COS), which is easy to deploy in real systems. In summary, these studies are quite valuable because they shed light on the fundamental concepts and inspired the thinking on related topics in application deployment in the MEC environment. However, the relationship among service placement, resource allocation, application performance and energy consumption remains largely unexplored. Therefore, building on these works, we go further by combining the resource allocation and service placement problems to explore the trade-off between application performance and energy consumption, and apply a simple but effective heuristic approach to optimize the system (the main symbols used below are listed in Table 1).

Table 1. Symbol description

System model and problem description

Although the example in Sect.
2 has given a brief illustration of the scenario, more details like costs, capacities and the case of multiple applications were ignored for brevity. Therefore, we will give a complete system model and then describe the performance-cost optimization problem.

Server and network

In a typical AIoT system, the remote server or cloud server is responsible for processing all the IoT context information sensed by IoT devices distributed in specific areas. However, things become much different when introducing the edge-cloud system. In an edge-cloud system, a set of edge servers H = {\(h_1\), \(h_2\), ..., \(h_n\)} will be located to collect n different types of context data in these specific sensing areas, while each of the edge servers is equipped with cloud-like computing and storage capability. The edge servers can easily extract the useful information from the received data and perform analysis with their resources. In general, it is the mobile base station that acts as the edge server [25]. To make full use of the resources of these edge servers, they further make up an edge-side ad-hoc computing cluster. Every edge server \(h_j\) \(\in H\) can receive the information collected by the IoT devices around it (the set of these devices is denoted by \(U_j\)), and the average transmission rate between edge server \(h_j\) and the IoT devices in \(U_j\) is \(v^{e}_{j}\). Meanwhile, if necessary, data may be routed to and processed by any other reachable edge server via the connection between edge servers. Formally, we use \({b}_{j,k}\) to describe the average transmission rate between the j-th edge server (source) and the k-th one (target). Since all edge servers can communicate with the cloud in an edge-cloud system, we use \(v^c_j\) to denote the average transmission rate between the cloud server and the j-th edge server. Particularly, for simplification we set \(b_{j,k}\) = \(v^e_j\) if the source is \(U_j\), and set \(b_{j,k}\) = \(v^c_j\) if the target is the cloud. The computing resource available on server \(h_j\) is described as \(\mu _{j}^{\star }\), which means the workload (e.g., data size in bits) the server can handle on average within one second (bps). Without loss of generality, here we just consider computation resources like the CPU because most data processing tasks are computation-sensitive and the storage resource is adequate. Researchers can easily extend this by introducing more kinds of resources and their corresponding estimation models.

DAG-based AIoT application

Edge servers use program modules with specific functionalities to finish data processing tasks, and these program modules are usually called services. A service can be launched as an instance with the help of popular PaaS technologies like Kubernetes. Here we assume \(S^{\mathbb {R}}\) = {\(s_1\), \(s_2\), ..., \(s_m\)} is the set of services that are involved in the edge-cloud system for information processing, and assume \(S^{\mathbb {V}}\) = {\(c_1\), \(c_2\), ..., \(c_n\)} is the virtual service set which stands for the collection of context data on the IoT devices in different regions. Evidently, these virtual services are closely bound to the edge servers. For example, \(c_z\) should be deployed on edge server \(h_z\). For every \(s_i\) \(\in\) \(S^{\mathbb {R}}\cup S^{\mathbb {V}}\), we use \(I_i\) to describe the average size of data received by \(s_i\), \(O_i\) to describe the average size of data generated by \(s_i\), and \(w_i\) to describe the average workload of processing the received data.
Apparently, \(I_i\) and \(w_i\) will be zero when \(s_i \in S^{\mathbb {V}}\) because we treat the IoT devices as data generators here. However, these atomic services cannot individually handle scenarios where the requirements are complex. Therefore, people develop service composition technology by putting them together and invoking them in a certain order. Generally, we can use G = (\(S^{'}\), E), a directed acyclic graph (DAG), to describe an AIoT application with its business logic by revealing the execution order of its related services. Here \(S^{'}\) \(\subset\) \(S^{\mathbb {R}}\cup S^{\mathbb {V}}\) is the related service set, and E = {\(<s_i\) \(\rightarrow\) \(s_j>\) \(|\) \(s_i\),\(s_j\) \(\in\) \(S^{'}\)} is the set of edges. By using the services in G according to the vertex topological order, and treating the output of \(s_i\) as the input of \(s_{j}\) for all \(s_i\) \(\rightarrow\) \(s_j\) \(\in E\), as in a relay race, the AIoT application denoted by G can be executed step by step. Obviously, for any two individual services \(s_i\), \(s_j\) \(\in S^{'}\), the output of \(s_i\) will be the input of \(s_j\) if \(s_i\rightarrow s_j\) \(\in E\), and we can approximately assume that \(I_j\) = \(O_i\) in this case.

AIoT application deployment scheme

Obviously, there will be more than one AIoT application in an edge-cloud system. If we assume there are K AIoT applications \(\varvec{G}\) = (\(G_{1}\), \(G_{2}\), ..., \(G_{K}\)) in the system, and \(G_{k}\) = (\(S_{k}, E_{k}\)) stands for the k-th AIoT application which uses several services in \(S_{k}\) = \(S^{\mathbb {V}}_k \cup S^{\mathbb {R}}_k\) (the involved virtual and real services of the k-th AIoT application), we should consider how these applications can be deployed next. Usually, given an arbitrary AIoT application \(G_k\)=(\(S_{k}, E_{k}\)) in \(\varvec{G}\), we use a placement vector \(\varvec{p}^{k}\)={\(p^k_i\) \({\}}^{|S^{\mathbb {R}}_{k}|}_{i=1}\) (it is not necessary to consider the placement of the services in \(S^{\mathbb {V}}\) because they are context-aware and bound to the edge servers), and a resource allocation matrix \(\varvec{\mu }^k\)={\(\mu ^k_{j,i}\) \({\}}^{n,|S^{\mathbb {R}}_{k}|}_{j=1,i=1}\) to describe the deployment scheme of the k-th AIoT application, where \(p^k_i \in [1, n]\) is the index of the edge server selected to deploy service \(s_i\) and \(\mu ^k_{j,i}\) is the resource allocated to service \(s_i\) on edge server \(h_j\). As the selected edge server belongs to H, and the used resources cannot exceed the maximum capacity, we will have the following constraints \(\forall s_i\) \(\in\) \(S^{\mathbb {R}}_k\): $$\begin{aligned} 1 \le p^k_i \le n \nonumber \\ {\displaystyle{\sum^{K}_{k=1}}} {\displaystyle{\sum^{m}_{i=1}}} {\mu }^k_{j,i} \le {\mu }^{\star }_{j} \end{aligned}$$ To demonstrate the concepts above, here we use a system with 3 AIoT applications in Fig. 2 to help understanding. In the example shown in Fig. 2, we can find that there are 4 edge servers which cooperate with each other and connect to the cloud, making up an edge-cloud system. Particularly, as these 4 edge servers are located in different places and serve different users, the collected data will have different contexts (shown as \(c_1\), \(c_2\), \(c_3\), \(c_4\)).
To make full use of the collected context-aware data, there are 3 AIoT applications \(G_1\) = ({\(c_1\),\(s_1\),\(s_3\)}, {\(c_1\) \(\rightarrow\) \(s_1\), \(s_1\) \(\rightarrow\) \(s_3\)}), \(G_2\) = ({\(c_1\),\(c_2\),\(s_1\), \(s_2\), \(s_3\), \(s_5\)},{\(c_1\) \(\rightarrow\) \(s_2\), \(c_2\) \(\rightarrow\) \(s_1\), \(s_1\) \(\rightarrow\) \(s_3\),\(s_3\) \(\rightarrow\) \(s_5\),\(s_2\) \(\rightarrow\) \(s_5\)}) and \(G_3\) = ({\(c_4\),\(s_1\),\(s_3\),\(s_4\),\(s_5\)},{\(c_4\) \(\rightarrow\) \(s_1\),\(s_1\) \(\rightarrow\) \(s_4\),\(s_1\) \(\rightarrow\) \(s_3\),\(s_4\) \(\rightarrow\) \(s_5\),\(s_3\) \(\rightarrow\) \(s_5\)}) listed in the box, which stand for 3 different DL-based AI models, each represented as a DAG with its specific data sources (e.g. the first AIoT application \(G_1\) receives the data \(c_3\) from \(h_3\)), to complete the data analysis tasks. Typically, these AIoT applications are deployed on the cloud, so all the context-aware data collected by IoT devices will be processed after being uploaded to the cloud. However, in the MEC paradigm the services involved in these AIoT applications can be separately deployed on the edge servers. Therefore, we can find that services \(s_1\)-\(s_5\) (in colored circles) are deployed on edge servers \(h_1\)-\(h_4\). In this scenario, the collected context-aware data \(c_3\) will first be processed by \(s_1\) on \(h_3\) and then by \(s_3\) on \(h_1\) (shown with the red curve) to implement the function of \(G_1\).

Fig. 2. An example of deploying 3 AIoT applications

AIoT application performance evaluation

AIoT applications will keep running on the edge servers to sense the world by collecting the configuration of the physical world into the service systems. Hence, it is vital for the AIoT application developers to improve the performance of their applications, and the average time cost of the applications in the system is a representative indicator to measure the system performance. Taking advantage of the dependencies in the DAGs by adding a dummy service \(s^{\#}_k\) to \(G_{k}\), so that all the end services in \(G_k\) with 0 out-degree are directed to it, the completion time of \(s_i \in S^{\mathbb {R}}_k\) can be represented as: $$\begin{aligned} T^{C}(G_k, s_i) = \frac{w_i}{\mu ^k_{p_i,i}} + \max {\{T^{C}(G_k, s_{z}) + \frac{O_z}{b_{p_z,p_i}} \big | s_{z}\in \mathcal {F}_{k}(s_i)\}} \end{aligned}$$ and the completion time of \(s_i\) \(\in\) \(S^{\mathbb {V}}_k\) will be \(T^{C}(s_i)\) = 0, where \(\mathcal {F}_{k}(s_i)\) is the precursor set of service \(s_i\) in AIoT application \(G_k\). For example, consider an AIoT application \(G_1\) which is composed of three AIoT services in sequential order as {\(s_1\) \(\rightarrow\) \(s_2\) \(\rightarrow\) \(s_3\)}. We assume that service \(s_1\) is currently deployed on edge server \(h_3\), while \(s_2\) is on \(h_2\) and \(s_3\) is on \(h_1\). In this case, the used data will be collected by sensor \(c_2\) in \(h_2\)'s serving area. Therefore, we have \(S^{\mathbb {V}}\) = {\(c_2\)}, \(S^{\mathbb {R}}\) = {\(s_1\), \(s_2\), \(s_3\)}, E = {\(c_2\) \(\rightarrow\) \(s_1\), \(s_1\) \(\rightarrow\) \(s_2\), \(s_2\) \(\rightarrow\) \(s_3\)}, \(\mathcal {F}_1(s_1)\) = {\(c_2\)}, \(\mathcal {F}_1(s_2)\) = {\(s_1\)}, \(\mathcal {F}_1(s_3)\) = {\(s_2\)}.
To calculate the total time cost of running \(G_1\), we need to calculate \(T^c\left( G_1, s_1\right)\) first: $$\begin{aligned} T^c\left( G_1,s_1\right) =\frac{w_1}{\mu _{3,1}}+T^c\left( c_2\right) +\frac{I_1}{v_3^e} \end{aligned}$$ As \(c_2\) is a sensor that collects data, it is obvious that \(T^c\left( c_2\right) =0\). Meanwhile, \(s_1\) is the service closest to the input end, so its input data is \(I_1\), which is equal to the output data of sensor \(c_2\), namely \(O_{c_2}\). Next, we start to calculate \(T^c\left( G_1,s_2\right)\). According to the recursive expression in Eq. (2), we will have $$\begin{aligned} T^c\left( G_1,s_2\right) =\frac{w_2}{\mu _{2,2}}+T^c\left( G_1,s_1\right) +\frac{O_1}{b_{3,2}} \end{aligned}$$ Because \(s_2\) only has one precursor \(s_1\), only \(T^c\left( G_1,s_1\right) +\frac{O_1}{b_{3,2}}\) is used in this case. But if there are more precursor nodes, then we should choose the one that takes the most time because it will be the bottleneck. Similarly, for \(s_3\) deployed on \(h_1\), $$\begin{aligned} T^c\left( G_1,s_3\right) =\frac{w_3}{\mu _{1,3}}+T^c\left( G_1,s_2\right) +\frac{O_2}{b_{2,1}} \end{aligned}$$ Finally, the output data obtained by \(s_3\) is transmitted to the cloud, so we have $$\begin{aligned} T^c\left( G_1,s_\#\right) =T^c\left( G_1,s_3\right) +\frac{O_3}{v_1^c} \end{aligned}$$ In this way, we can use the value of \(T^{C}(G_k, s^{\#}_k)\) to evaluate the time cost of the k-th AIoT application. Based on it, if the collected IoT context data packages are uploaded and used by the AIoT application with frequency \(f_{k}\), the average time cost of the applications in this edge-cloud system can be represented as: $$\begin{aligned} T(\varvec{G}) = \sum ^{K}_{k=1} f_k \cdot T^{C}(G_k, s^{\#}_k). \end{aligned}$$

Energy consumption model

It can be found that the driving force of the multi-access edge computing paradigm lies in its widely distributed, large-scale available edge resources (in order to complete as many tasks as possible locally). However, this feature will also result in the consumption of a large amount of energy when maintaining these edge servers. For example, in a typical multi-access edge computing scenario with base stations as edge servers, the power consumption of a single edge server can reach 2.2\(\sim\)3.7\(\times\)10\(^3\)W. Considering the about 9.3 million base stations in China, the total power consumption may be as high as 2.046\(\sim\)3.441\(\times\)10\(^{10}\)W. High energy consumption has brought great challenges to the promotion and popularization of the MEC paradigm. Therefore, here we also consider the energy consumption of running AIoT applications. Generally, most of the energy is consumed in the process of computing. The computation energy is influenced by the clock frequency of the chip, and some techniques like the dynamic voltage scaling (DVS) technology [26] can use this property to adaptively adjust the energy consumption. In CMOS circuits [27], the energy consumption is proportional to the square of the supply voltage. Moreover, it has been observed that the clock frequency of the chip f is approximately linearly proportional to the voltage. Therefore, the energy consumption can be expressed as E \(\propto\) \(f^2\) [28]. At the same time, as f is proportional to the allocated resource, we can model the energy consumption expense \(C(\varvec{G})\) of running the applications: $$\begin{aligned} C(\varvec{G}) = \sum ^{K}_{k=1} \sum ^{n}_{j=1} \sum ^{m}_{i=1} \eta _j w_i(\mu ^{k}_{j,i})^2.
\end{aligned}$$

Problem definition and formulation

Based on the introduction of the related concepts, we can now give the definition of the context-aware AIoT application deployment (CA\(^3\)D) problem clearly. In this CA\(^3\)D problem, the AIoT application developers would like to have an appropriate deployment scheme so that the average time cost of their applications, as well as the energy consumption expense, can be minimized in a given MEC-based architecture. Therefore, we can now formulate the CA\(^3\)D problem as follows: $$\begin{aligned} P: \quad \quad&\min _{\varvec{x}=(\varvec{p}, \varvec{\mu })}\,\, F(\varvec{x}) = \bigg (T(\varvec{G}), C(\varvec{G})\bigg )\\ \nonumber s.t.\quad \quad&1 \le p^k_i \le n \\ \nonumber&\sum ^{K}_{k=1} \sum ^{m}_{i=1}\mu ^k_{j,i} \le \mu ^{\star }_j \end{aligned}$$ It is not hard to find that the objectives depend on the values of the decision variables \(\varvec{p}\) and \(\varvec{\mu }\), and the bounded integer constraint on \(p^{k}_i\) makes the optimization problem mixed-integer and nonlinear. Meanwhile, the requirement of optimizing both the application time cost and the deployment cost (energy consumption) makes the problem a multi-objective optimization problem (MOOP). These properties challenge the solving of our CA\(^3\)D problem. Therefore, we turn to heuristic methods like MOEA/D [29] and try to find some sub-optimal solutions. Typically, an MOOP is solved with a decomposition strategy, which decomposes the original problem into several scalar optimization sub-problems and optimizes them simultaneously [30, 31]. For example, in the classic MOEA/D method, the Tchebycheff decomposition is used to measure the maximum weighted distance between the objectives and their minimums \(\varvec{z}^*\) = (\(z^*_T\), \(z^*_C\)): $$\begin{aligned} g^{te}(\varvec{x}|\varvec{\lambda }, \varvec{z}^{*}) = \max \{\lambda _{T}|T(\varvec{x}) - z_T^{*}|, \lambda _{C}|C(\varvec{x}) - z_C^{*}|\} \end{aligned}$$ where 0 \(\le\) \(\lambda _{C}, \lambda _{T}\) \(\le\) 1 and \(\lambda _{C}\) + \(\lambda _{T}\) = 1 are the constraints for the weight vector \(\varvec{\lambda }\) = (\(\lambda _{T}\), \(\lambda _{C}\)). Obviously, the shorter the distance between \(F(\varvec{x})\) and its minimum is, the closer \(\varvec{x}\) will be to the optimal solution. With the weight vectors, we can finally search for the Pareto optimum in an iterative way. Algorithm 1 shows the detailed operations for solving the problem with MOEA/D. In this process, each sub-problem will be optimized by using information from several of its neighbors. It can be found in Algorithm 1 that several evolutionary operators like crossover and mutation are involved. Actually, MOEA/D provides the possibility to use traditional evolutionary algorithms like the genetic algorithm (GA) [32] to solve multi-objective problems. Therefore, we borrow the operators of the GA, a kind of meta-heuristic algorithm inspired by the process of natural selection, to solve our target problem. After encoding the decision variables \(\varvec{p}\) and \(\varvec{\mu }\) as \(\varvec{p}\) = (\(p^1_1\), ..., \(p^1_m\), ..., \(p^K_1\), ..., \(p^K_m\)) and \(\varvec{\mu }\) = (\(\mu ^{1}_{1,1}\), ...,\(\mu ^{1}_{1,m}\), ..., \(\mu ^{1}_{n,1}\), ...,\(\mu ^{1}_{n,m}\), ..., \(\mu ^{K}_{1,1}\), ...,\(\mu ^{K}_{1,m}\), ..., \(\mu ^{K}_{n,1}\), ...,\(\mu ^{K}_{n,m}\)) and combining them to get \(\varvec{x}\), the genetic algorithm can be embedded into Algorithm 1 with these operators.
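To make the evaluation of one encoded candidate deployment concrete, the following sketch (Python; the tiny sequential application, the transmission rates, the prices and the reference point are hypothetical values, not the configuration used in our experiments) decodes a placement/allocation pair, evaluates the completion-time recursion of Eq. (2) together with the energy expense \(C(\varvec{G})\), and combines the two objectives with the Tchebycheff scalarization \(g^{te}\):

```python
# Decode and score one candidate solution for a toy application c2 -> s1 -> s2 -> s3.
from functools import lru_cache

precursors = {0: ["c2"], 1: [0], 2: [1]}     # indices 0..2 are the real services
w = [4.0, 6.0, 2.0]                          # workloads of s1, s2, s3
out = {"c2": 1.0, 0: 0.8, 1: 0.5, 2: 0.3}    # output data size of each node
sensor_host = {"c2": 1}                      # virtual service c2 is bound to edge server h2

p = [2, 1, 0]                                # candidate placement: s1 on h3, s2 on h2, s3 on h1
mu = [3.0, 4.0, 2.0]                         # resources allocated to s1, s2, s3
b = [[10, 5, 4], [5, 10, 6], [4, 6, 10]]     # inter-server transmission rates b[j][k]
v_cloud = [2.0, 2.5, 1.5]                    # edge-to-cloud rates
eta = [1.0, 1.2, 0.8]                        # per-server energy price factors

@lru_cache(maxsize=None)
def completion(node):
    """T^C of Eq. (2): processing time plus the slowest precursor path."""
    if node in sensor_host:                  # virtual services finish instantly
        return 0.0
    host, slowest = p[node], 0.0
    for pre in precursors[node]:
        src = sensor_host.get(pre)
        src = p[pre] if src is None else src
        slowest = max(slowest, completion(pre) + out[pre] / b[src][host])
    return w[node] / mu[node] + slowest

T_app = completion(2) + out[2] / v_cloud[p[2]]                 # final upload to the cloud
C_app = sum(eta[p[i]] * w[i] * mu[i] ** 2 for i in range(3))   # energy expense

lam_T, lam_C, z_star = 0.5, 0.5, (0.0, 0.0)                    # weights and ideal point
g_te = max(lam_T * abs(T_app - z_star[0]), lam_C * abs(C_app - z_star[1]))
print(f"T(G) = {T_app:.3f}, C(G) = {C_app:.3f}, Tchebycheff value = {g_te:.3f}")
```

In the evolutionary loop, this scalar value is what each sub-problem minimizes for its own weight vector, so neighboring sub-problems can exchange solutions that improve it.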
Obviously, the crossover operator makes it possible to obtain better solutions, and the mutation operator gives the algorithm the ability to avoid premature convergence. For the sake of simplicity and variable control, we adopt the same parameter configuration for all the following evolutionary algorithms, including MOEA/D; that is, the initial population size is N = 200, the number of iterations is \(MAX\_ITER\) = 200, the mutation probability is \(p_{m}\) = 0.1 and the crossover probability is \(p_c\) = 0.8. Meanwhile, the initial population is generated randomly. In the pseudo-code of the algorithm, we can clearly see that the main complexity of the MOEA/D algorithm comes from the for-loop in lines 22-32, that is, updating the neighboring solutions of each individual, where the number of individuals is N and the number of adjacent solutions is \(N_{A}\). The same operation needs to be performed for each objective function, and there are two objective functions in our algorithm, namely \(T(\cdot )\) and \(C(\cdot )\). Thus, in one evolutionary iteration the time complexity of the algorithm is \(O(N*N_{A})\), and the overall time complexity is \(O(MAX\_ITER * N * N_{A})\).

Experiments and analysis

To fully explore the impacts of the solution derived from the MOEA/D based algorithm, we partially use the Alibaba Cluster dataset (footnote 3), which is published by Alibaba Group. It contains cluster traces of real production, in which several containers are composed in DAGs to finish complex tasks. Besides this, we also generate experimental synthetic data with the settings shown in Table 2 to perform our evaluations. What's more, to make the results convincing, the network and service parameters in this table are set close to reality. Besides the comparison with baselines, a series of comprehensive experiments were conducted on the simulation data in this section to explore the impact of different factors. Meanwhile, to the best of our knowledge, the CA\(^3\)D problem is the first attempt to consider the deployment of AIoT services as well as optimizing the resource allocation strategy in the MEC environment, so none of the existing approaches in former research works can be directly adopted for our problem. Thus, we select the following intuitive and representative strategies as baselines:
Equality-sensitive Deployment (ESD). In the equality-sensitive deployment strategy, the resources of the edge servers will be allocated in an equal way to the service instances of all the applications \(G_k\) \(\in\) \(G\). This strategy is simple and easy to implement. It is practical in many cases, so it is used in plenty of real-world distributed systems.
Frequency-based Deployment (FBD). In the frequency-based deployment strategy, the service instances of AIoT applications will be placed on the edge servers where the related services are most frequently used. Meanwhile, the resources of the edge servers will be allocated according to the frequency, so that the most frequently used services will have the most resources. It is an unbalanced but useful strategy, as it addresses the on-premise property of the MEC paradigm.
Workload-aware Deployment (WAD). In the workload-aware deployment strategy, service instances will be placed on the busy edge servers, and the resources of the edge servers will be allocated according to the workload of the services, so that the heaviest services will have the most resources.
WAD is a reinforcement of the FBD strategy because it distinguishes the burden of different requests.
Transmission-aware Deployment (TAD). In the transmission-aware strategy, the resources of the edge servers will be allocated according to the communication-based service placement preference, because the transmission time cost is usually the major part that affects the performance.
With the settings in Table 2, we illustrate the average expense and service response time of these approaches in Fig. 3, and their running times in Fig. 4.

Table 2. System configurations
Fig. 3. The comparison with baselines
Fig. 4. The running times of different problem scales
Fig. 5. The convergence with the increase of iterations
Fig. 6. The comparison with other evolutionary algorithms

As we can see in Fig. 3, MOEA/D's optimization of the objective function shows its excellent capability to achieve a good balance between performance and consumption. Different from the Pareto curve generated by the proposed algorithm, the results of the baselines are scattered in this figure: among the baselines, TAD is significantly better than the other three strategies in terms of performance optimization, but the corresponding cost is also much higher. This is because the communication quality is often the main factor that affects the time cost. On the other hand, the expense of TAD will also increase when the resources of the servers with better communication quality are all allocated. ESD is close to one of our solutions in the optimization of the multiple objectives, but there is still a small gap. This is because evenly distributing resources among servers can play a positive role in the control of deployment expense; when there are not many requests, ESD also brings a splendid load-balancing effect. However, our method can optimize the deployment of services and the resource allocation of each server in a more fine-grained manner, so it can achieve better results than ESD. In addition, our method can select the parameter configuration on the Pareto optimal curve according to different scenarios, and all the optimal solutions on the curve achieve the best effect while (\(T(\varvec{G})\), \(C(\varvec{G})\)) is balanced. In contrast to its good performance, MOEA/D takes time to calculate the values of the decision variables. The running times under different problem scales (namely, the number of services, the number of edge servers and the number of IoT applications) are shown in Fig. 4. As the problem scale increases, the running time also increases. But from the convergence of our method illustrated in Fig. 5 (GEN in Fig. 5 means the evolution generation of the population; for example, the curve labeled with GEN=115 shows the result of the algorithm after the 115-th iteration), we can find that an early-stopping trick is applicable here, as the curve after the 179-th generation is almost identical to that after the 195-th, while the resulting Pareto curve gradually moves in the direction of better performance as the generations increase. The comparisons above show the difference between our approach and other heterogeneous approaches. In these comparisons, some of the baselines are representative but not designed to solve this specific problem. Thus, as MOEA/D is one of the evolutionary algorithms, other kinds of evolutionary algorithms are also applied to this CA\(^3\)D problem to check whether it is appropriate to select MOEA/D for solving this problem. The comparisons between these evolutionary algorithms are shown in Fig. 6. From Fig.
6 we can find that these algorithms show approximately the same capability in balancing the system performance and cost (except the MOEAD-DE algorithm), while the MOEA/D algorithm shows small advantages on the Pareto frontier.

Fig. 7. The distribution of hyper-volume

For multi-objective optimization, one of the widely used indicators to measure the performance of an algorithm is HV (hyper-volume), which represents the volume of the hyper-cube formed by the individuals in the solution set and a reference point in the objective space. Hyper-volume can simultaneously evaluate the convergence and distribution of the solution set, which means the larger the HV value is, the better the overall performance of the algorithm will be. Based on the same data and hyper-parameter configuration, we run our algorithm 200 times and finally get the histogram of its distribution as shown in Fig. 7. Besides the histogram, a violin plot is also shown in the upper left corner of this figure to provide a clearer visualization. It can be seen that almost 80% of the HV values are above 0.70, indicating that our algorithm has good convergence and can get a nice result most of the time.

Fig. 8. A directed acyclic graph (DAG) showing the parallelization in the Montage design

Besides the data set from the Alibaba Cluster, here we also evaluate our approach on some more complex data structures and workflows like the Montage project (footnote 4), which is an astronomical image mosaic engine, to illustrate the good portability of our method. The modules (in red ellipses) of Montage work together in the order shown in Fig. 8. Obviously, it is a more complex service-based application, and it can also be deployed in the MEC system. In Fig. 9, we used Montage's DAG data to test our algorithm with service numbers of different scales. With the increase of the service number from montage-1 to montage-3, the Pareto frontiers are shown in Fig. 9. It can be seen from the results that the approach also works to generate the optimal frontiers in these complex situations.

Fig. 9. Result of the test on the Montage workflow

Impacts of system configurations

The above comparisons show that the MOEA/D based algorithm is practical for solving such a context-aware AIoT application deployment problem in MEC-based systems. Besides the comparison between approaches, we will further discuss the effects of various factors in the system.

Fig. 10. The impacts of service and edge server settings

Impacts of application and service

Among the settings of our MOEA/D based algorithm, the service-related factors are the average output data size (\(\bar{O}\)), the service workload (\(\bar{w}\)), the average service number (m) and the application type number (K). Therefore, in order to check the influence of these service-related parameters on the optimization objectives (\(T(\varvec{G})\), \(C(\varvec{G})\)), we set the system parameters to the configuration listed in Table 2, and adjust the averages of the above related parameters respectively to observe their influences. Accordingly, the results are shown in Fig. 10a, b, g, i. That is, the Pareto optimal curves obtained from the MOEA/D based algorithm gradually shift upward in the direction of decreasing performance with the increase of \(\bar{O}\), \(\bar{w}\), K and m. In detail, with the increase of \(\bar{O}\), the Pareto optimal curve gradually drifts toward the direction of performance degradation. This is because when the amount of data increases, it brings more pressure on data transmission between servers.
Fig. 8 A directed acyclic graph (DAG) showing the parallelization in the Montage design
Besides the dataset from the Alibaba Cluster, we also evaluate our approach on more complex data structures and workflows, such as the Montage project, an astronomical image mosaic engine, to illustrate the portability of our method. The modules (red ellipses) of Montage work together in the order shown in Fig. 8. It is clearly a more complex service-based application, and it can also be deployed in the MEC system. In Fig. 9, we used Montage's DAG data to test our algorithm with different numbers of services. As the number of services increases from montage-1 to montage-3, the resulting Pareto frontiers are shown in Fig. 9. The results show that the approach can also generate optimal frontiers in these more complex situations.
Fig. 9 Result of the test on the Montage workflow
Impacts of system configurations
The above comparisons show that the MOEA/D-based algorithm is practical for solving such a context-aware AIoT application deployment problem in MEC-based systems. Beyond the comparison between approaches, we further discuss the effects of various factors in the system.
Fig. 10 The impacts of service and edge server settings
Impacts of application and service
Among the settings of our MOEA/D-based algorithm, the service-related factors are the average output data size (\(\bar{O}\)), the service workload (\(\bar{w}\)), the average service number (m) and the application type number (K). To check the influence of these service-related parameters on the optimization objective (\(T(\varvec{G})\), \(C(\varvec{G})\)), we set the system parameters to the configuration listed in Table 2 and adjusted the averages of the parameters above one at a time to observe their influence. The results are shown in Fig. 10a, b, g, i: the Pareto-optimal curves obtained from the MOEA/D-based algorithm gradually shift in the direction of decreasing performance as \(\bar{O}\), \(\bar{w}\), K and m increase. In detail, as \(\bar{O}\) increases, the Pareto-optimal curve gradually drifts towards performance degradation, because a larger amount of data puts more pressure on data transmission between servers, and this pressure results in traffic congestion and performance deterioration. Secondly, as the average service workload \(\bar{w}\) increases, performance decreases because the growing workload forces the servers to allocate more resources. Thirdly, an increase in K, that is, in the number of applications to be deployed, moves the Pareto curve to the upper right. This is because more service requests need to be processed while the total resources remain the same, and the resulting lack of resources reduces processing efficiency. Finally, for reasons similar to those for K, performance decreases as the number of services m increases, owing to the additional resource requirements.
Impacts of server
Similarly, we keep the system parameters fixed as shown in Table 2 and observe the impact on the optimization objectives (\(T(\varvec{G})\), \(C(\varvec{G})\)) of adjusting the server-related factors: the average deployment price of a server \(\bar{\eta }\), the total amount of resources of each server \(\bar{\mu }^{\star }\), and the total number of servers n. The results are shown in Fig. 10c, d, h. In summary, the Pareto curve of the optimization objective (\(T(\varvec{G})\), \(C(\varvec{G})\)) tends to move up and to the right as \(\bar{\eta }\) increases, and it moves in the direction of enhanced performance as \(\bar{\mu }^{\star }\) and n increase. In detail, the Pareto curve moves in the direction of declining performance as the average deployment price increases, because the total cost grows linearly with the average price. Besides, as the average amount of resources per server increases, the performance of the entire system gradually improves. This is due to the relaxation of the resource constraints, which speeds up the operation of individual services. However, because the cost also increases as more resources are used, the curve's movement towards the lower left is not particularly pronounced. Finally, as the number of edge servers n increases, the Pareto curve moves to the lower left. This is because, with the total number of tasks unchanged, a larger number of servers makes the pool of candidate edge servers more diverse and brings in some cost-effective servers. Deploying services on these servers not only shortens the processing time but also lowers the cost, improving the overall performance.
Impacts of network
In the same way, we keep the system parameters fixed as shown in Table 2 and adjust the network parameters \(\bar{v}^e\) (\(\bar{v}^c\)) and \(\bar{b}\) to examine the impact of network quality on the system. The results are shown in Fig. 10e, f. We find that the better the network condition is, the closer the Pareto curve of performance and cost lies to the lower left. This is because, with the other system parameters unchanged, a higher network transmission rate reduces the data transmission time, so each service can respond faster, which improves performance and moves the Pareto curve to the lower left.
Conclusion and future work
In this paper, we investigate, model and formulate the CA\(^3\)D problem in a resource-constrained MEC environment. A series of numerical experiments is conducted based on the Alibaba and Montage datasets. The CA\(^3\)D problem is a mixed-integer nonlinear multi-objective programming problem, which makes finding its optimal solutions extremely challenging.
As a compromise, we turn to a heuristic method, MOEA/D, and try to find sub-optimal solutions that satisfy the requirements to a reasonable degree within the stated constraints. The comparison results show that our algorithm outperforms other representative baseline approaches, and the factor exploration shows how the system settings can become bottlenecks. This application model and the solutions to the CA\(^3\)D problem can be transferred to any other application whose components have a partial-order dependence expressed as a directed acyclic graph, depending on the demands of such extensions. It offers guidance on how to optimize the deployment of components and the allocation of resources in such applications so as to achieve the best balance between performance and cost — in other words, it has good compatibility. However, even though the proposed solution is practical for placement and allocation, it only aims to provide a good starting point for the system: when the system is established in a very unstable environment (for example, one whose context changes rapidly), the proposed solution may not guarantee its efficiency. Therefore, in future work we will consider the uncertainty of real-time scheduling tasks and try to balance effectiveness and robustness to make the system more self-adaptive, where deep reinforcement learning (DRL) based methods may play an important role.
https://www.gsmaintelligence.com/
https://d2l.ai/chapter_recommender-systems/deepfm.html
https://github.com/alibaba/clusterdata/
http://montage.ipac.caltech.edu/docs/grid.html
This research was partially supported by the National Natural Science Foundation of China (No. 62102350, No. 62072402) and the Natural Science Foundation of Zhejiang Province (No. LQ21F020007, No. LQ20F020015).
Intelligent Plant Factory of Zhejiang Province Engineering Lab, Zhejiang University City College, Hangzhou, China: Zhengzhe Xiang, Yuhang Zheng, Mengzhu He, Longxiang Shi, Shuiguang Deng & Zengwei Zheng
Computer & Software School, Hangzhou Dianzi University, Hangzhou, China: Dongjing Wang
College of Computer Science and Technology, Zhejiang University, Hangzhou, China: Yuhang Zheng & Shuiguang Deng
Correspondence to Zengwei Zheng.
The authors declare that they do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.
This article is part of the Topical Collection: Special Issue on Green Edge Computing. Guest Editors: Zhiyong Yu, Liming Chen, Sumi Helal, and Zhiwen Yu.
Xiang, Z., Zheng, Y., He, M. et al. Energy-effective artificial internet-of-things application deployment in edge-cloud systems. Peer-to-Peer Netw. Appl. (2021). https://doi.org/10.1007/s12083-021-01273-5
Service deployment
Susceptibility to Ebbinghaus and Müller-Lyer illusions in autistic children: a comparison of three different methods
Catherine Manning (ORCID: orcid.org/0000-0001-6862-2525)1, Michael J. Morgan2,3, Craig T. W. Allen4 & Elizabeth Pellicano4,5
Studies reporting altered susceptibility to visual illusions in autistic individuals compared to typically developing individuals have been taken to reflect differences in perception (e.g. reduced global processing), but could instead reflect differences in higher-level decision-making strategies. We measured susceptibility to two contextual illusions (Ebbinghaus, Müller-Lyer) in autistic children aged 6–14 years and typically developing children matched in age and non-verbal ability using three methods. In experiment 1, we used a new two-alternative-forced-choice method with a roving pedestal designed to minimise cognitive biases. Here, children judged which of two comparison stimuli was most similar in size to a reference stimulus. In experiments 2 and 3, we used methods previously used with autistic populations. In experiment 2, children judged whether stimuli were the 'same' or 'different', and in experiment 3, we used a method-of-adjustment task. Across all tasks, autistic children were equally susceptible to the Ebbinghaus illusion as typically developing children. Autistic children showed a heightened susceptibility to the Müller-Lyer illusion, but only in the method-of-adjustment task. This result may reflect differences in decisional criteria. Our results are inconsistent with theories proposing reduced contextual integration in autism and suggest that previous reports of altered susceptibility to illusions may arise from differences in decision-making, rather than differences in perception per se. Our findings help to elucidate the underlying reasons for atypical responses to perceptual illusions in autism and call for the use of methods that reduce cognitive bias when measuring illusion susceptibility.
Along with impaired social communication and interaction, autism is characterised by restricted, repetitive patterns of behaviour and interests, including atypical responses to sensory information (Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5) [1]). Sensory symptoms are common in autistic individuals [2] and impact many aspects of everyday functioning, including behaviour in educational settings [3], daily living skills [4] and family life [5]. Understanding atypical sensory functioning in autism is therefore of critical import. Vision is perhaps the sensory modality that has been most extensively studied in autistic individuals (see [6] for review). The first study to use visual illusions to characterise autistic perception was conducted by Happé [7]. She selected six illusions purported to result from the integration of features with their surrounding context: the Ponzo illusion, Ebbinghaus illusion (or Titchener's circles), Kanizsa triangle, Müller-Lyer illusion, Hering illusion and Poggendorff illusion (see Table 1). The illusions were displayed on cards, and participants were asked to make simple judgments for each illusion (e.g. Müller-Lyer: 'are the lines the same size or different sizes?'; Ebbinghaus: 'are the circles the same size or different sizes?'). In the size illusions (Ebbinghaus, Müller-Lyer, Ponzo), the sizes of the features were identical, so a 'different' judgment was deemed to result from the inducing context.
Strikingly, Happé reported that young people on the autism spectrum (aged 8–16 years, n = 25) were susceptible to fewer visual illusions than typically developing children matched for mental age (n = 21) and children with a learning difficulty matched for both mental age and chronological age (n = 21). A smaller proportion of autistic participants succumbed to each illusion compared to the other groups, apart from in the case of the Müller-Lyer illusion, in which case the majority of autistic individuals were also 'fooled' by the illusion. Table 1 Summary of previous studies assessing susceptibility to visual illusions in autistic individuals Attempts to reproduce Happé's [7] findings have had mixed success. Hoy, Hatton and Hare [8] presented the same task used by Happé to younger autistic children, aged 4–9 years (n = 17), and typically developing children matched in age and verbal ability (n = 17), and found no group differences in the number of illusions that children with and without autism were susceptible to. Yet, Bölte, Holtmann, Poustka, Scheurich and Schmidt [9] reported reduced susceptibility to illusions in autistic adults (n = 15) compared to that in typical adults matched in non-verbal and verbal ability (n = 15) using five variants of each of five illusions (Ebbinghaus, Ponzo, Müller-Lyer, Poggendorff, Hering) in a task very similar to that used by Happé. Other studies have used the method-of-adjustment, in which participants are asked to manipulate one stimulus until it is perceptually identical to another stimulus. In this task, participants do not need to give a verbal response, and there is scope to assess the strength of an illusory effect, rather than classifying responses as those that either do or do not indicate susceptibility to an illusion (cf. Happé [7]). Using this method, Ropar and Mitchell [10] found that autistic children aged 7 to 18 years (n = 23) were just as affected by the Ponzo and Ebbinghaus illusions as those from a range of comparison groups, including individuals with moderate learning difficulties and typically developing children and adults. Interestingly, however, the autistic children did not succumb to the horizontal-vertical illusion [11] and surprisingly showed heightened susceptibility to the Müller-Lyer illusion. Ropar and Mitchell further showed that there was no evidence of group differences in susceptibility to visual illusions when using a task modelled on that of Happé [7]. Ropar and Mitchell largely replicated their findings in a later study [12], demonstrating again that autistic children did not generally demonstrate a reduced susceptibility to illusions, apart from in the case of the horizontal-vertical illusion. More recently, Ishida, Kamio and Nakamizo [13] demonstrated that young people on the autism spectrum aged 10 to 16 years (n = 9) were less susceptible to the Ponzo illusion than typically developing children matched in age and IQ (n = 9) but were equally susceptible to the Müller-Lyer illusion. Furthermore, Mitchell, Mottron, Soulières and Ropar [14] presented the Shepard's table illusion to young autistic people aged between 12 and 29 years (n = 18) and age- and ability-matched typically developing participants (n = 18) and reported that while autistic individuals were susceptible to the illusion, the illusory effect was weaker than in the comparison group. Finally, some studies have used forced-choice methods to measure susceptibility to visual illusions in autism. 
Milne and Scope [15] assessed susceptibility to illusory 'Kanizsa' figures by asking participants to judge whether a rectangle induced by surrounding shapes was 'thin' or 'fat'. In this task, autistic children aged 7 to 13 years (n = 18) showed no differences in accuracy or reaction time compared to non-autistic children with special educational needs (n = 16) and typically developing children (n = 20). More recently, Schwarzkopf, Anderson, de Haas, White and Rees [16] used a forced-choice task with the Ebbinghaus illusion, where participants were asked to judge which stimulus was larger on each trial. In this task, adults with Asperger's syndrome (n = 15) were equally susceptible to the illusion as neurotypical adults matched in age and ability (n = 12). Our review of previous studies investigating visual illusions in autism presents a complex picture (see Table 1 for summary, and [17] for meta-analysis). The same is also true for studies assessing the relationship between autistic traits and illusory perception in the general population. Walter, Dassonville and Bochsler [18] reported that individuals with higher scores on the Systemizing Quotient [19] were less susceptible to some illusions (the rod-and-frame, Roelofs, Ponzo and Poggendorff illusions), but not others (induced motion, Zöllner, Ebbinghaus and Müller-Lyer). Meanwhile, susceptibility to illusions was not correlated with either scores on the Empathizing Quotient [20] or the autism spectrum quotient (AQ) [21]. Yet, Chouinard, Noulty, Sperandio and Landry [22] later reported that higher scores on the AQ were related to reduced susceptibility to the Müller-Lyer illusion but not the Ebbinghaus and Ponzo illusions. Chouinard, Unwin, Landry and Sperandio [23] later failed to replicate this result, instead showing that only the Shepard's table and square-diamond illusions were correlated with AQ scores, out of 13 illusions tested. While it is clear that the evidence is mixed, those studies finding group differences between autistic individuals and comparison groups have nevertheless been suggested to reflect differences in autistic perception. For example, Happé [7] proposed that autistic individuals demonstrated reduced contextual integration, processing features more independently from their surrounding context than neurotypical individuals. This explanation was tightly linked to the weak central coherence account of autism [24, 25]. It has also been suggested that reduced susceptibility to some illusions may arise from weaker top-down influences on autistic perception [14, 26]. These ideas were later elaborated in a theory of autistic perception situated within a Bayesian framework [27]. Pellicano and Burr [27] proposed that autistic individuals have attenuated (broader) priors, which means that their perception is more influenced by incoming sensory information, and is thus more veridical. Yet, some illusions may lend themselves to Bayesian explanations more easily than others [28]. For example, it is easy to postulate a role for priors in the perception of the Kanizsa triangle and the hollow-face illusion, whereas illusions arising from low-level sensory processing (e.g. the Ebbinghaus illusion) may be unrelated to Bayesian inference. It is important to consider, however, whether reports of reduced susceptibility to illusions in autism are really due to differences in perception at all.
All previous studies assessing visual illusions in autism have confounded the observer's sensitivity to an illusion with the observer's subjective criterion for reporting whether the illusion was seen [29, 30]. Therefore, group differences in responses to illusions may have arisen due to differences in subjective criteria—or decisional bias, without necessitating underlying differences in perception: a possibility that is particularly likely when groups may differ according to cognitive and affective factors [30].Footnote 1 Indeed, the problem of distinguishing a perceptual from a cognitive bias is not confined to studies of autism, but applies to all Type 2 psychophysical measures of bias [29] such as visual after-effects [31, 32]. To circumvent this potential problem, Morgan et al. [29] advocated the use of a two-alternative forced-choice (2-AFC) procedure with a roving pedestal. Morgan et al. demonstrated how this method could be applied to a range of different perceptual phenomena. In the case of the Ebbinghaus illusion, for example, previous studies have asked autistic and non-autistic participants to determine which of two central circles is bigger (Fig. 1a). While a bias in responses could arise at the level of the percept, it could also reflect the observer's decisional criterion (e.g. to respond that the circle surrounded by large circles is smaller when the observer is unsure). Such a criterion could be particularly affected by an observer's previous exposure to an illusion. In Morgan et al.'s method, one reference stimulus of fixed size and two comparison stimuli are presented sequentially (Fig. 1b). One comparison stimulus (the standard) is a pedestal, which has a central circle that is either larger or smaller than that of the reference stimulus on a given trial. The other comparison stimulus (the test) has a central circle that is an increment larger than the pedestal. The two comparison stimuli have the same surrounding context circles, which differ from the context of the reference. The observer is asked whether the central circle of the first or second comparison is most similar in size to that of the reference. The order of presentation of the standard and test is randomised and the size of the pedestal (i.e. larger or smaller than the reference) is randomly interleaved throughout the task. Thus, in this task, participants cannot rely on strategies such as choosing the standard if they are unsure (as they do not know which stimulus is the standard on a given trial) or choosing a stimulus with a certain context (because they are required to choose between two stimuli with identical contexts). Using this method in conjunction with a signal detection theory framework [33], it is possible to characterise the observer's discrimination sensitivity and the observer's perceptual bias, whilst minimising the influence of decisional biases. Within this framework, an observer's discrimination sensitivity is limited by 'internal noise' [29], which refers to any source of variability that limits performance. Differences in perceptual bias between conditions of a task (e.g. small or large context circles in the Ebbinghaus stimulus) reflect illusion susceptibility. Methods for assessing the Ebbinghaus illusion. a Traditional method, where participants are asked whether the two stimuli have central circles that are the same size or not, and/or to judge which stimulus has the largest central circle. In this example, the central circles are identical in size. 
b Two-alternative-forced-choice method as described by Morgan et al. [29]. Participants are asked which of two sequentially presented comparison stimuli (the standard or test) has a central circle that is most similar in size to that presented in the reference. In this example, the central circle in the standard is 5% smaller than in the reference and the test is 4% larger than the standard Given this recent methodological advance in measuring illusion susceptibility, it seems timely to revisit the question of whether autistic individuals show reduced susceptibility to illusions. In this study, we measured susceptibility to two well-characterised contextual illusions: the Ebbinghaus illusion and the Müller-Lyer illusion, in autistic and typically developing children. In experiment 1, we used Morgan et al.'s [29] method to minimise the effects of higher-level decision-making strategies, in order to measure perceptual biases as purely as possible. To allow comparison with previous studies, we used more conventional methods in experiments 2 and 3. Experiment 2 used a similar task to that used by Happé [7] and experiment 3 used a method-of-adjustment task comparable to that used by Ropar and Mitchell [12]. The Ebbinghaus and Müller-Lyer illusions are two of the most frequently used illusions with autistic populations to date (see Table 1) and have led to mixed results. Our study allowed us to investigate whether such mixed results could be attributable to methodological differences. The use of these illusions in conjunction was also informative because they are both size illusions arising from the surrounding context. Reduced contextual integration could in theory lead to reduced susceptibility for both illusions, as has been shown in the case of the Ebbinghaus illusion [7]. Yet, a distinction can be drawn between the two illusions. In the Müller-Lyer illusion, the inducing context (i.e. the fins) touches the stimulus on which judgments are made—which is not the case with the Ebbinghaus illusion. This difference may mean that the context has a greater or more automatic influence on perception for the Müller-Lyer illusion, making autistic children more susceptible to this illusion in particular [7]. The use of these illusions together therefore allows us to characterise the nature of atypical integration of context in autistic individuals. Critically, in this experiment, we examine whether any differences in illusory perception between autistic children and typically developing children can be revealed when using rigorous methods to minimise the influence of cognitive bias. General procedure This study measured susceptibility to Ebbinghaus and Müller-Lyer illusions in autistic and typically developing children using three methods: one specifically designed to minimise cognitive biases (experiment 1) and two to allow comparison with previous studies (experiments 2 and 3). Children were tested individually in a quiet room as part of a wider battery of tasks within sessions of a public engagement of science event. Computer tasks were presented with a viewing distance of 50 cm. When children completed more than one experiment, the experiments were presented sequentially (i.e. experiment 1 was followed by experiment 2 and then experiment 3). Autistic and typically developing children aged 6 to 14 years were recruited from schools and community contacts in the Greater London area. 
Autistic children had previously received an independent clinical diagnosis of an autism spectrum condition according to International Classification of Diseases (ICD-10) [34] or DSM-IV [35] criteria. Typically developing children had no diagnosed developmental conditions, as reported by parents. Parents completed the Social Communication Questionnaire (SCQ) [36], and autistic children were administered the Autism Diagnostic Observation Schedule-2nd edition (ADOS-2) [37]. All autistic children scored above threshold for an autism spectrum condition on one or both measures, and no typically developing child scored above the threshold on the SCQ (score of 15; [36]). All children were cognitively able (IQ > 70), as assessed by the Wechsler Abbreviated Scales of Intelligence, Second Edition (WASI-II) [38]. Further details on the participants included in each experiment are provided below. Apparatus and stimuli Computer tasks (experiments 1 and 3) were presented on a Dell Precision laptop (1366 × 768 pixels, 60 Hz) using MATLAB and elements of the Psychophysics Toolbox [39–41]. White stimuli were presented on a mid-grey background, at 61% Weber contrast. Experiment 1: 2-AFC roving pedestal The dataset for the Ebbinghaus analysis included 29 autistic children (4 females) and 33 typically developing children (12 females). The groups did not differ significantly in age, t(60) = 1.49, p = .14, or non-verbal ability, t(60) = .41, p = .69, although the autistic children had lower verbal IQ scores, t(40.53) = 2.93, p = .006 (see Table 2 for scores). The dataset for the Müller-Lyer analysis included 33 autistic children (4 females) and 47 typically developing children (18 females). The groups did not differ significantly in age, t(78) = 1.36, p = .18, or non-verbal ability, t(78) = .33, p = .74, but differed in verbal ability, t(78) = 4.55, p < .001 (see Table 2 for scores). Twenty-one autistic children and 11 typically developing children were in the datasets for both the Ebbinghaus and Müller-Lyer versions of the experiment. Table 2 Characteristics of participants for each task in experiment 1 An additional five autistic children and two typically developing children were excluded from the Ebbinghaus analysis, and an additional four autistic children and one typically developing child were excluded from the Müller-Lyer analysis due to poor-fitting psychometric functions (see "Data screening and analysis" section). A further four of the youngest typically developing children were removed from each of the Ebbinghaus and Müller-Lyer datasets to ensure the groups matched adequately in age. The reference stimulus was centred horizontally, at the top of the screen. The comparison stimuli were positioned below the reference stimulus, to form a triad (see Fig. 2). In the Ebbinghaus task, the diameter of the central circle of the reference stimulus was fixed at 1.25°. The stimuli were either surrounded by eight small context circles with a diameter of .42° and positioned 1.25° from the centre of the stimulus or by four large context circles with a diameter of 1.67°, positioned 2.08° from the centre of the stimulus. In the Müller-Lyer task, the reference stimulus had a horizontal line that was 3° in length. Fins were 1° long, attached to the end of the horizontal line at an angle of 45° (either inward or outward). Schematic representation of stimuli used in experiment 1. a Context-free practice trial. b S-L context condition in Ebbinghaus task and O-I context condition in Müller-Lyer task. 
c L-S context condition in Ebbinghaus task and I-O in Müller-Lyer task. In all examples, the reference stimulus is presented at the top and the two comparison stimuli are presented below. In these examples, the left comparison stimulus is the standard (pedestal) and the right comparison stimulus is the test The task was based on the Ebbinghaus task devised by Morgan et al. [29], with three main modifications to make it suitable for child participants. First, the task was presented within the context of a developmentally appropriate trading game. Second, to minimise memory demands, the reference stimulus was always present on the screen, and the comparison stimuli were presented simultaneously (cf. [29]). Third, to reduce the number of trials, we omitted the context-free condition. The reference stimulus was presented continuously on the screen. The experimenter initiated each trial with a keypress, triggering the presentation of comparison stimuli for a duration of 1000 ms. In the Ebbinghaus version of the experiment, children were asked to identify which of the two comparison stimuli had a central circle most similar in size to that of the reference stimulus. In the Müller-Lyer version of the experiment, children were asked to identify which of the two comparison stimuli had a horizontal line most similar in size to that of the reference stimulus. Before completing the experimental trials, the task was explained to participants with four context-free demonstration trials (Fig. 2a) and four demonstration trials with context. Participants were also shown examples of stimuli on cards where necessary. Children were told that they were trading shapes in 'The Bank of Geometrica'. The reference stimulus was the 'most valuable shape' in the game, and participants had to choose which of the two comparison stimuli was the most similar in size to this stimulus. Participants were told that they would be able to trade the shapes that they had chosen for points. Throughout the session, children made their responses verbally (left/right) or by pointing, and the experimenter entered their responses using a keyboard. No specific feedback on performance was given although general encouragement was provided throughout. The participants completed the task in two context conditions, which were presented sequentially in a counterbalanced order. In one condition of the Ebbinghaus task (S-L; Fig. 2b), the reference stimulus had small context circles and the comparison stimuli had large context circles; in the other condition (L-S; Fig. 2c), the reference stimulus had large context circles and the comparison stimuli had small context circles. In the Müller-Lyer task, one condition (O-I; Fig. 2b) had outward fins on the reference and inward fins on the comparison stimuli, and the other condition (I-O; Fig. 2c) had inward fins on the reference and outward fins on the comparison stimuli. One comparison stimulus was a standard, and the other comparison stimulus was a test. For each context condition, participants completed 40 trials in which the standard was a pedestal below the reference, and 40 trials in which the standard was a pedestal above the reference. In the Ebbinghaus task, the central circle of the standard was either −5 or +5% of the diameter of that in the reference stimulus (i.e. 1.19° or 1.31°). In the Müller-Lyer task, the length of the horizontal line in the standard was either −20 or +20% of the length of that in the reference stimulus (i.e. 
2.4° or 3.6°). The pedestals were randomly interleaved throughout the task (i.e. a 'roving pedestal'; [29, 42]). The location of the standard stimulus (left or right) was randomised on each trial. The size of the test stimulus was guided by the method of constant stimuli, with eight trials presented at five different levels for each pedestal (+1, +2, +4, +8, +16% of the diameter or length of the standard for the Ebbinghaus and Müller-Lyer experiments, respectively). These trials were presented in a randomised order. The 80 trials for each context condition were divided into blocks of 20 trials. After each block, participants were shown a cartoon cash register which calculated the 'points' they had obtained. These points were randomly allocated but provided motivation for children throughout the task. Each context condition took approximately 5 min.
Data screening and analysis
The psychophysical task is a comparison-of-comparisons task [42]. Using a signal detection theory approach [33], the standard (S) and test (T) stimuli can each be described by normal distributions with mean values corresponding to the physical size of the stimulus (p, p + t) plus perceptual bias (μ) and variances (σ²) corresponding to performance-limiting internal noise [42]:
$$ S \sim N\left(p + \mu,\ \sigma^2/2\right), \qquad T \sim N\left(p + t + \mu,\ \sigma^2/2\right) $$
Thus, the probability of choosing the standard can be calculated as:
$$ P(S) = P\left(|S| < |T|\right) = P\left(S^2/T^2 < 1\right) $$
where S²/T² is a random variable with a doubly non-central F distribution [42]. Maximum likelihood psychometric functions were fit to each participant's data, for each combination of pedestal and context condition, assuming constant internal noise across conditions, but allowing bias to vary across the context conditions. Figure 3 shows psychometric functions for a typically developing child. In the S-L condition, the central circle of the reference stimulus tends to appear bigger than the central circles of the comparison stimuli. Thus, as the test is made larger than the pedestal in the negative pedestal trials (−5%), the participant becomes less likely to choose the standard (or pedestal), as s/he perceives the larger comparison stimulus (i.e. the test) to be most similar in size to the reference stimulus. In the positive pedestal condition, the participant may become more likely to choose the test as it increases in size, until it exceeds a limit at which the pedestal starts to look more similar in size to the reference stimulus. In the L-S condition, the central circle of the reference stimulus tends to appear smaller than the central circles of the comparison stimuli. Thus, as the test is made larger than the standard, the participant becomes more likely to choose the standard as it is the smallest stimulus. Thus, a negative bias is expected in condition S-L, and a positive bias is expected in condition L-S. The same logic can be applied to the Müller-Lyer illusion, whereby a negative bias is expected in condition O-I and a positive bias in condition I-O.
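To make the decision model concrete, the short Python sketch below estimates the probability of choosing the standard by simulating exactly this model, rather than evaluating the doubly non-central F distribution analytically; the function name and the parameter values in the example call are illustrative only and are not taken from any participant.

```python
import numpy as np

rng = np.random.default_rng(2023)

def p_choose_standard(pedestal, increment, bias, sigma, n_sim=200_000):
    """Monte Carlo estimate of P(choose standard) in the
    comparison-of-comparisons task. All quantities are expressed as
    percentage differences from the reference size (Weber fractions):
      standard ~ N(pedestal + bias, sigma**2 / 2)
      test     ~ N(pedestal + increment + bias, sigma**2 / 2)
    The observer picks the comparison that appears closer in size to the
    reference, i.e. the standard is chosen when |standard| < |test|."""
    s = rng.normal(pedestal + bias, sigma / np.sqrt(2), n_sim)
    t = rng.normal(pedestal + increment + bias, sigma / np.sqrt(2), n_sim)
    return np.mean(np.abs(s) < np.abs(t))

# Illustrative values: -5% and +5% pedestals, a +8% test increment,
# a -10% perceptual bias (as expected in the S-L condition) and 5% internal noise.
for ped in (-5, +5):
    print(ped, round(p_choose_standard(ped, 8, bias=-10, sigma=5), 3))
```

Fitting the model then amounts to finding the bias and internal-noise values that maximise the likelihood of the observed choices across all pedestal and increment levels.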
Example dataset for the Ebbinghaus task. Maximum likelihood fits to the data for an 11-year-old typically developing child in the Ebbinghaus task, including two pedestal values (−5, +5%) and two context conditions (S-L: where the reference stimulus has small context circles and the comparison stimuli have large context circles, and L-S: where the reference has large context circles and the comparison stimuli have small context circles). The red and green lines represent fits to the data where internal noise (σ) is constant across context conditions, but bias (μ) is free to vary. The black line represents the fit of a single model where bias and internal noise are both held constant across the context conditions.
Assuming that the different context conditions are associated with the same value of internal noise, but different values of bias, we obtained one internal noise (σ) and two bias (μ) parameters, taking into account the pedestal value and fitted context bias for each observation [29, 42]. Internal noise and bias are expressed as Weber fractions with respect to the size of the reference stimulus (%). We screened the data for poorly fitting psychometric functions and removed datasets where the likelihood of the fit was particularly low (log likelihood <−110) and/or the internal noise value was 30 or above, such as to make the slope of the psychometric function essentially flat. Such functions suggested that participants were not successfully discriminating between stimuli, which may have been due to inattentiveness or a lack of task understanding. Five autistic children and two typically developing children were removed from the Ebbinghaus analysis, and four autistic children and one typically developing child were removed from the Müller-Lyer analysis on this basis. To quantify the extent of bias associated with each illusion, we calculated the difference in bias between the two context conditions (i.e. BiasL-S − BiasS-L or BiasI-O − BiasO-I for the Ebbinghaus and Müller-Lyer tasks, respectively). Internal noise values were log-transformed to minimise the effects of skewness and kurtosis. Outliers—defined as points lying 3 or more standard deviations from the group mean—were replaced with points lying 2.5 standard deviations from the group mean [43]. Two outliers were identified in the bias values for the Ebbinghaus task (autistic n = 1; typically developing n = 1), and a further two were identified in the bias values for the Müller-Lyer task (autistic n = 1; typically developing n = 1), which were replaced with trimmed values. Note that the same pattern of results was obtained without outlier replacement (see Additional file 1). Shapiro-Wilk tests showed that the distribution of log-transformed internal noise values did not differ significantly from a normal distribution in either task (ps ≥ .53). However, the bias values deviated from normality in both tasks (ps < .001). We therefore supplemented our analyses of these variables with bootstrapped analyses with 1000 samples and bias-corrected 95% confidence intervals, to ensure our results were robust to deviations from normality.
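For illustration, the following Python sketch shows the form of the outlier-replacement rule and the bootstrapped group comparison described above; it uses plain percentile intervals rather than the bias-corrected intervals reported in the results, and the input arrays are placeholders rather than participant data.

```python
import numpy as np

rng = np.random.default_rng(7)

def replace_outliers(values, limit=3.0, replace_at=2.5):
    """Replace points lying `limit` or more SDs from the group mean with
    points lying `replace_at` SDs from the mean, preserving the sign."""
    x = np.asarray(values, dtype=float).copy()
    m, s = x.mean(), x.std(ddof=1)
    z = (x - m) / s
    far = np.abs(z) >= limit
    x[far] = m + np.sign(z[far]) * replace_at * s
    return x

def bootstrap_mean_diff(group_a, group_b, n_boot=1000, alpha=0.05):
    """Percentile bootstrap CI for the difference in group mean bias
    (the analyses above used bias-corrected intervals; plain percentiles
    are used here only to keep the sketch short)."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    diffs = [rng.choice(a, a.size).mean() - rng.choice(b, b.size).mean()
             for _ in range(n_boot)]
    return np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Placeholder bias values (Weber fractions, %) for two hypothetical groups.
autistic_bias = rng.normal(45, 40, 30)
typical_bias = rng.normal(55, 50, 35)
print(bootstrap_mean_diff(replace_outliers(autistic_bias),
                          replace_outliers(typical_bias)))
```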
The values of internal noise and perceptual bias in autistic and typically developing children are shown in Fig. 4. The average bias associated with the Müller-Lyer illusion (autistic: M = 73.97, SD = 47.51; typically developing: M = 83.51, SD = 79.79) was greater than that associated with the Ebbinghaus illusion (autistic: M = 42.63, SD = 39.07; typically developing: M = 55.84, SD = 55.91), although there was considerable individual variability. There was no significant difference in the extent of bias displayed by autistic children and typically developing children in the Ebbinghaus task, t(60) = 1.06, p = .29 (bootstrapped 95% CI for mean difference: [−38.11, 9.61]; p = .29), and no significant group difference in internal noise estimates, t(60) = 1.17, p = .25 (autistic: M = .90, SD = .18; typically developing: M = .95, SD = .21). Likewise, the groups did not differ significantly in terms of bias, t(78) = .61, p = .54 (bootstrapped 95% CI for mean difference: [−38.17, 19.42]; p = .52), or internal noise, t(78) = .78, p = .44 (autistic: M = .94, SD = .19; typically developing: M = .91, SD = .15), in the Müller-Lyer experiment.
Internal noise and bias estimates for autistic children and typically developing children in experiment 1. Individual data points (small crosses) and group means (large crosses) are shown for Ebbinghaus stimuli (left panel) and Müller-Lyer stimuli (right panel). Distributions smoothed with kernel density functions are shown in red (autistic children) and green (typically developing children). Data are presented with outliers trimmed.
We conducted correlational analyses to investigate whether participant characteristics contributed to differences between participants. Internal noise in the Ebbinghaus experiment was negatively related to age [r = −.40, p = .001], with older children having lower levels of internal noise. Internal noise in the Müller-Lyer experiment was negatively related to both verbal IQ [r = −.27, p = .02] and non-verbal IQ [r = −.24, p = .03], with higher internal noise values associated with lower ability. To ensure that group differences in verbal IQ were not contributing to the results, we confirmed that there was no significant group difference in internal noise in the Müller-Lyer task whilst covarying the effect of verbal ability, F(1,77) = .12, p = .73. All other correlations between task measures and age and ability were non-significant, ps ≥ .11. To ensure that the non-significant difference in bias between autistic and typically developing children could not be accounted for by data insensitivity [44, 45], we quantified the relative evidence for the null and alternative hypotheses using Bayesian independent t tests with a default Cauchy prior width of 1, implemented using JASP software [46]. The Bayes factors (BF) resulting from these tests reflect a continuum of evidence favouring the null and alternative hypotheses, with BF < 1/3 providing substantial evidence for the null hypothesis and BF > 3 providing substantial evidence for the alternative hypothesis [47]. The results confirmed that there was substantial evidence in support of the null hypothesis of no group differences in bias in both the Ebbinghaus (BF = .32) and Müller-Lyer (BF = .21) experiments. Robustness checks assessing the influence of the choice of prior are provided in Additional file 2. These results show that autistic children do not show altered susceptibility to the Ebbinghaus and Müller-Lyer illusions when using a novel method that minimises decision bias. For comparison, we next measured susceptibility to the same illusions using more traditional methods. In experiment 2, we revisited the paradigm used by Happé [7].
Experiment 2: same-different responses In the Ebbinghaus task, the dataset included 21 children in the autistic group (2 females) and 28 children in the typically developing group (11 females), with no differences between the groups in terms of age, t(47) = .93, p = .36, non-verbal ability, t(31.38) = 1.37, p = .18, or verbal ability, t(47) = 1.75, p = .09 (see Table 3 for scores). In the dataset for the Müller-Lyer task, there were 24 autistic children (1 female) and 42 typically developing children (15 females), matched in age, t(64) = 1.20, p = .23, and non-verbal ability, t(32.23) = .23, p = .82, but not in verbal ability, t(64) = 4.01, p < .001 (see Table 3 for scores). Seventeen autistic children and 19 typically developing children were included in the datasets for both versions of the experiment. Twelve autistic children and four typically developing children in the Ebbinghaus dataset were also included in the Ebbinghaus dataset in experiment 1, and 17 autistic children and 26 typically developing children included in the Müller-Lyer dataset were also in the Müller-Lyer dataset in experiment 1. A further two autistic children and 14 typically developing children were excluded from analysis in the Ebbinghaus task, and an additional five autistic children and 20 typically developing children were excluded from the analysis in the Müller-Lyer task because they incorrectly responded that the control stimuli differed in size. Stimuli were presented on A4 laminated cards, with three cards for each of the Ebbinghaus and Müller-Lyer illusions. The stimuli were presented in white on a mid-grey background, as in experiment 1. For each illusion, there was a context-free condition, where two circles (diameter .9 cm) or two horizontal lines (length 2.1 cm) were presented side-by-side for the Ebbinghaus and Müller-Lyer tasks, respectively. There were also two cards for each illusion that had the same stimuli with added context. For the Ebbinghaus illusion, one card had four large context circles (diameter 1.1 cm) on the left and eight small context circles (diameter .3) on the right, and the other card had small context circles on the left and large context circles on the right. For the Müller-Lyer illusion, one card had inward fins on the left and outward fins on the right, and the other card had outward fins on the left and inward fins on the right. The fins were .7 cm in length and were oriented at 45° as in experiment 1. The central circles and horizontal lines were always the same size. The relative sizes and configurations of the context circles and fins were the same as in experiment 1. The cards were shuffled to randomise the order of presentation, and the experimenter held up one card at a time. Following Happé [7], children were either asked 'Are the [circles/lines] the same size or different sizes?' or 'Are the [circles/lines] different sizes or the same size?'. The question order was counterbalanced across participants. If children responded 'different', they were asked to identify which was bigger. Children were prevented from touching the cards while making their judgments. Following Happé [7], participants were only included in the analysis if they correctly responded that the circles/lines were the same size in the context-free condition. We then counted the number of cards displaying context for which participants gave the expected incorrect judgment, yielding a score ranging from 0 to 2 for each illusion. 
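As a minimal illustration of how these same/different responses are compared between groups, the Python sketch below tallies the children who never succumbed to an illusion against those who succumbed on at least one trial, and applies the chi-squared test with Yates correction used in the results that follow. The counts are those reported below for the Ebbinghaus task; the code layout itself is an assumption for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: autistic (n = 21), typically developing (n = 28).
# Columns: succumbed on one or more trials, succumbed on neither trial.
counts = np.array([[19, 2],
                   [20, 8]])

chi2, p, dof, expected = chi2_contingency(counts, correction=True)  # Yates correction
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2f}")  # matches the value reported below
```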
Out of 21 autistic children, 13 (61.9%) succumbed to the Ebbinghaus illusion on both trials, 6 (28.6%) succumbed to the Ebbinghaus illusion on one trial only and 2 (9.5%) did not succumb to the illusion on either trial (Fig. 5). Out of 28 typically developing children, 12 (42.9%) succumbed to the Ebbinghaus illusion on both trials, 8 (28.6%) succumbed to the illusion on one trial only and 8 (28.6%) did not succumb to the illusion at all. Chi-squared analysis (with Yates correction) revealed no significant differences between the groups in the number of children who never succumbed to the illusion and the number of children who succumbed to the illusion in one or more trials, χ²(1) = 1.64, p = .20. Logistic regression revealed that age and ability were not significant predictors of whether children succumbed to the illusion or not, ps ≥ .16.
Proportions of autistic and typically developing (TD) children succumbing to the Ebbinghaus and Müller-Lyer illusions in neither, one or both trials, in experiment 2.
In the Müller-Lyer task, 22 out of 24 autistic children (91.7%) succumbed to the illusion on both trials, while the remaining two children did not succumb to the illusion on either trial (8.3%). Thirty-nine out of 42 typically developing children (92.9%) succumbed to the illusion on both trials, 2 (4.76%) succumbed to the illusion on one trial and 1 did not succumb to the illusion on either trial (2.38%). The proportions of participants who succumbed to the illusion on one or more trials compared to those who never succumbed to the illusion did not differ between the autistic children and typically developing children, χ²(1) = .25, p = .62 (with Yates correction). Age and ability did not significantly predict susceptibility in a logistic regression (ps ≥ .08). Bayesian contingency tables with independent multinomial sampling and a prior concentration of 1, implemented in JASP software [46, 48], were also used to compare group differences in the number of children who succumbed to the illusion on no trials, one trial or both trials. The null hypothesis is that there is independence between groups and responses, and the alternative hypothesis is that there is an association between the groups and responses. The results revealed substantial evidence for the null hypothesis in the Müller-Lyer experiment (BF = .07) but inconclusive evidence for either the null or alternative hypothesis in the Ebbinghaus task (BF = .64). Therefore, the data were insensitive to group differences in the Ebbinghaus task and larger samples would be required to conclusively determine whether the groups differ in this task. Robustness checks showing the influence of prior concentration can be found in Additional file 2. In sum, it appears that a similar proportion of autistic children are susceptible to the Ebbinghaus and Müller-Lyer illusions as typically developing children in this simple judgment task, although more data are required for the Ebbinghaus task. In line with the results from experiment 1, a greater proportion of children were susceptible to the Müller-Lyer illusion compared to the Ebbinghaus illusion, suggesting that it is a more compelling illusion for both autistic and typically developing children. As in experiment 1, there was no difference between groups, even for a measure which is contaminated by decision bias. However, these binary judgments (same/different) may not be sufficiently fine-grained to reveal subtle group differences.
Indeed, it is possible that the majority of autistic children experience the illusions, but the strength of their effects may differ from that experienced by typically developing children. In experiment 3, we therefore used the method-of-adjustment. Experiment 3: method-of-adjustment Nineteen autistic children (1 female) and 38 typically developing children (15 females) completed the Ebbinghaus method-of-adjustment task. The groups were comparable in terms of age, t(55) = 1.13, p = .26, non-verbal ability, t(27.04) = 1.03, p = .31, and verbal ability, t(55) = 1.59, p = .12 (see Table 4 for scores). Twenty-four autistic children (1 female) and 58 typically developing children (20 females) completed the Müller-Lyer method-of-adjustment task. The groups of autistic children and typically developing children were matched in terms of age, t(80) = .75, p = .46, and non-verbal ability, t(30.48) = .29, p = .78, but the autistic children had lower verbal IQs than typically developing children, t(80) = 4.02, p < .001 (see Table 4 for scores). Seventeen of the autistic children and 36 of the typically developing children participated in both the Ebbinghaus and Müller-Lyer versions of the experiment. In the Ebbinghaus dataset, 16 autistic children and 27 typically developing children were included in the Ebbinghaus dataset in experiment 2, and 11 autistic children and eight typically developing children were included in the Ebbinghaus dataset in experiment 1. In the Müller-Lyer dataset, 20 autistic children and 39 typically developing children were also included in the Müller-Lyer dataset in experiment 2, and 15 autistic children and 39 typically developing children were included in the Müller-Lyer dataset in experiment 1. Two stimuli were presented side-by-side on the screen, in the same configuration as in experiment 2. In the Ebbinghaus task, one stimulus had small context circles, and one stimulus had large context circles. In the Müller-Lyer experiment, one stimulus had inward fins and one stimulus had outward fins. The context locations (i.e. whether the small context circles or inward fins were on the left or right) were counterbalanced among participants. The sizes of the context circles and fins were the same as in experiment 1. One stimulus was a reference stimulus, with the same dimensions as in experiment 1. The other was a comparison, in which the initial diameter of the central circle or length of the horizontal line was randomised between .68° and 1.82° or 2.43° and 3.86°, respectively. Children were asked to adjust the size of the comparison stimulus to match the size of the reference stimulus. The location of the comparison stimulus was signalled with a small green rectangle for 1000 ms before the stimuli appeared. Children used up and down arrow keys to make the comparison stimulus bigger or smaller, respectively, and pressed the space bar when they were satisfied that the two stimuli were the same size. There was no time limit. The task was presented in the context of a factory, 'GeoFactory', in which children were asked to make a shape that was the same as the one in the catalogue (i.e. the reference stimulus). Children were initially presented with a practice trial with a star shape, to familiarise them with the task and the response keys. Next, eight experimental trials were presented. Four trials were context-free (i.e. without context circles or fins), and four trials had context. 
We counterbalanced across participants whether the context-free or context trials were presented first. The locations of the reference and comparison stimulus (left/right) were varied across trials. In the Ebbinghaus task, there were two trials where the reference stimulus was surrounded by small context circles and two trials where the reference stimulus was surrounded by large context circles, and in the Müller-Lyer task, there were two trials where the reference stimulus was flanked by outward fins and two trials where it was flanked by inward fins. We refer to these conditions as S-L and L-S and O-I and I-O, respectively, for comparison with experiment 1. The order of trials was randomised. We computed the difference between the size of the adjusted comparison stimulus and the reference stimulus, as a proportion of the size of the reference stimulus, in context-free and context trials, before taking an average of the context-free trials and the trials in each context condition (S-L and L-S in the Ebbinghaus task and O-I and I-O in the Muller-Lyer task). As in experiment 1, a single value of bias was computed by calculating the difference between the two context conditions. As in experiment 1, points lying 3 or more standard deviations away from the group mean were replaced with those lying 2.5 standard deviations from the mean. There were no outliers in the Ebbinghaus task. One outlying value was found (an autistic child) for the Müller-Lyer task (context-free condition). Note that the same pattern of results was obtained when this outlying value was retained without replacement (Additional file 1). We also recorded response time and the number of keypresses between the stimulus onset and children pressing the space bar to indicate they had completed the trial. These values were log-transformed to minimise the effects of negative skew and subjected to outlier screening, although no outliers were found. Shapiro-Wilks tests showed that the distribution of context-free size judgments in the Ebbinghaus task and the bias values in the Müller-Lyer experiment did not differ significantly from a normal distribution (p = .06 and p = .25, respectively). However, the bias values in the Ebbinghaus task and the context-free judgments in the Müller-Lyer task did deviate from normality (p = .007 and p < .001, respectively). Where the assumption of normality was violated, we conducted bootstrapped analyses as in experiment 1. Individual and group results for context-free size judgments and bias estimates are shown in Fig. 6. First, we assessed group differences in the Ebbinghaus task. On average, the autistic children made the comparison stimulus slightly smaller than the reference stimulus in the context-free condition of the Ebbinghaus task, and the typically developing children made it slightly larger. However, the confidence intervals spanned 0 in both groups, suggesting that their perception was largely accurate (autistic: M = −.30, SD = 4.51, 95% CI = [−2.47, 1.87]; typically developing: M = .34, SD = 3.40, 95% CI = [−.77, 1.46]). Moreover, the groups did not differ significantly in their judgments, t(55) = .60, p = .55 (autistic: M = 25.29, SD = 21.24; typically developing: M = 26.53, SD = 12.08). Next, we compared the bias associated with the context in the Ebbinghaus illusion and found that the groups did not differ significantly, t(55) = .28, p = .78 (bootstrapped 95% CI for mean difference: [−12.56, 9.34], p = .83). 
Neither the bias nor the context-free size judgment was significantly related to age and verbal or non-verbal IQ, ps ≥ .53.
Fig. 6 Judgments made in the context-free trials and the extent of bias in the context trials in experiment 3, for autistic and typically developing children. Individual data points (small crosses) and group means (large crosses) are shown for Ebbinghaus stimuli (left panel) and Müller-Lyer stimuli (right panel). Distributions smoothed with kernel density functions are shown in red (autistic children) and green (typically developing children). Data are presented with outliers trimmed.
In contrast, the groups differed in their responses to context-free trials in the Müller-Lyer task, t(80) = 2.78, p = .007, d = .63 (bootstrapped 95% CI for mean difference: [.89, 4.99], p = .02). The autistic children had a tendency to make the comparison larger than the reference (M = 2.44, SD = 5.13, 95% CI = [.28, 4.61]), whereas the typically developing children were more accurate (M = −.39, SD = 3.75, 95% CI = [−1.37, .61]). There were also differences in the extent of bias in the context trials, with the autistic children showing a larger bias (M = 51.03, SD = 19.70) than typically developing children (M = 40.44, SD = 16.64), t(80) = 2.48, p = .015, d = .58. Neither the context-free judgments nor the bias values were significantly related to age and verbal or non-verbal ability (ps ≥ .06). As in experiment 1, the Müller-Lyer illusion was associated with a greater level of bias than the Ebbinghaus task, overall.
Next, we investigated group differences in response times and numbers of keypresses (Table 5). There were no significant group differences in response times in either the Ebbinghaus or Müller-Lyer tasks, and no interactions between group and context condition (context-free, context), ps ≥ .16. Thus, response times were not analysed further. We then investigated the number of keypresses. In the Ebbinghaus task, a mixed ANOVA with group as a between-participants factor and context condition as a within-participants factor showed no significant effect of group nor interaction between group and condition in the number of keypresses (ps ≥ .13). However, in the Müller-Lyer task, the autistic children used significantly more keypresses than the typically developing children, F(1,80) = 14.11, p < .001, ηp² = .15. The effect of group did not interact with context condition (p = .88). In the Müller-Lyer task, the number of keypresses in the context-free condition was significantly correlated with the corresponding size judgment, r(80) = .32, p = .003, and the number of keypresses in the context condition was significantly correlated with the extent of bias, r(80) = .67, p < .001, suggesting that increased keypresses reflect size judgements in this task.
Table 5 Means and standard deviations of response times (RT) in seconds and number of presses before making a decision in the context-free and context trials of the Ebbinghaus and Müller-Lyer tasks in experiment 3, for autistic and typically developing children
As in the other experiments, we complemented our analysis of differences in bias with Bayesian statistics. In line with the results of our frequentist statistics, we found substantial evidence in support of the null hypothesis of no group differences in bias in the Ebbinghaus task (BF = .22).
While there was relatively more evidence in favour of the alternative hypothesis in the Müller-Lyer task (BF = 2.87), this only constituted weak/anecdotal evidence, suggesting that more data is required to provide strong evidence. Robustness checks for these Bayesian t tests are provided in Additional file 2. In this study, we used three methods to characterise responses to Ebbinghaus and Müller-Lyer illusions in children on the autism spectrum and typically developing children. The first of these methods was designed to reduce the influence of decision biases on judgments, whereas the other two were methods that have been used previously with autistic populations and which may be contaminated by decision biases. Across all methods, the Müller-Lyer illusion had stronger effects on responses compared to the Ebbinghaus illusion. However, we were particularly interested in comparing the responses between autistic and typically developing children. We found no evidence of reduced susceptibility to the Ebbinghaus illusion in autistic children for any method. There was some indication of heightened susceptibility to the Müller-Lyer illusion, but only in a method-of-adjustment task (experiment 3) and not in the 2-AFC or the same-different methods (experiments 1 and 2). The evidence we found for heightened susceptibility to the Müller-Lyer illusion in the method-of-adjustment (albeit relatively weak) was not entirely unexpected. Ropar and Mitchell [10] similarly reported a pronounced bias in response to the Müller-Lyer illusion for autistic children aged 7 to 18 years using a method-of-adjustment task. Our lack of group differences for the Müller-Lyer same-different judgment task is also in line with previous research, as Happé [7] reported that autistic children were equally susceptible to the Müller-Lyer illusion as typically developing children, unlike for a range of other illusions in which they demonstrated reduced susceptibility. What can we conclude from these apparently conflicting results? Do autistic children really perceive the Müller-Lyer illusion differently to typically developing children? It could be argued that Happé's method (and that used in experiment 2) is too insensitive to reveal differences in the extent of illusion susceptibility between the groups, as it classifies children into those who do or do not experience the illusion. Indeed, it is clear from experiment 2 that the majority of children are susceptible to the Müller-Lyer illusion. However, it is particularly informative here that we found no group differences in the extent of bias in our novel 2-AFC method, which was specifically designed to reduce the influence of higher-level decisional strategies [29]. Thus, it is likely that any group differences in the method-of-adjustment Müller-Lyer task merely reflect differences in decisional criteria, rather than reflecting underlying differences in perception. It is interesting to note that Chouinard et al. [22] reported reduced susceptibility to the Müller-Lyer illusion in typical members of the population with high levels of autistic traits, as measured by the AQ. It seems that these results may not be generalizable to the autistic population, as no studies to date have reported reduced susceptibility to the Müller-Lyer illusion in those with a clinical diagnosis. Our results on the Ebbinghaus illusion are very clear, as we found no group differences in susceptibility to the illusion in any method we used. 
These results fit within a complex pattern of results from previous studies, including both reports of reduced susceptibility (e.g., [7]), and no differences in susceptibility (e.g. [10, 16]) for autistic individuals. Such discrepant results may arise in part from the use of different methodologies. Yet, here we found no differences in susceptibility between autistic and typically developing children across three different methods, including a task based on Happé [7]. It should be noted, however, that our stimuli differed from those used by Happé and others. For example, our stimuli were presented in white on grey, whereas Happé's stimuli were black and white, and the context circles in our Ebbinghaus stimuli did not touch, whereas they did in Happé's stimuli. Stimulus differences such as these may be contributing factors in determining the extent to which autistic children are influenced by the Ebbinghaus illusion. A further difference is that we tested cognitively able autistic children (IQ > 70), whereas Happé tested autistic children with a lower range of IQ scores (verbal IQ range 40–92), although here we found no evidence of a correlation between bias and IQ in the Ebbinghaus tasks. It is possible that previous reports of reduced susceptibility to the Ebbinghaus task resulted from atypical decision strategies in autistic populations, on which sampling differences may have a particularly pronounced effect. Anecdotally, many of our participants reported 'knowing' the illusions from science books and TV shows, which may have substantially affected their responses in experiments 2 and 3. A large number of the children we tested did not answer the control question correctly in experiment 2 (n = 16 in the Ebbinghaus task). As the control stimuli were perceptually identical, such responses again point to a strong role for decisional biases. Although we made extensive efforts to ensure that the samples tested in each experiment were of comparable age and non-verbal ability, it is a limitation of the current study that we were not able to test all experimental conditions within the same participants. The sample sizes used were relatively large for studies investigating susceptibility to visual illusions in an autistic population. Nevertheless, the exact sample size used varied between experiments and between groups of autistic and typically developing children. It is possible that the smaller samples were less sensitive to group differences than those with larger sample sizes. Indeed, our use of Bayesian statistics confirms the need for larger sample sizes to conclusively distinguish between the null and alternative hypotheses in certain conditions in experiments 2 and 3. Thus, future studies would benefit from collecting data from large samples for both the autistic and typically developing groups. Specifically, future research will need to confirm the key finding of increased bias to the Müller-Lyer illusion in the method-of-adjustment task in conjunction with similar levels of bias in the 2-AFC task, within the same sample of autistic participants. Previous reports of reduced susceptibility to visual illusions have been linked to theories of autistic perception and cognition, such as weak central coherence [7] and reduced influence of top-down information [14, 26, 27]. The results of this study and other studies refute the suggestion that children on the autism spectrum have pervasively different responses to visual illusions compared to typically developing children. 
Indeed, the results from experiment 1 that measure perceptual bias suggest that autistic children process the context in the Ebbinghaus illusion and Müller-Lyer illusion similarly to typically developing children (cf. the weak central coherence theory [25]). We may well expect distinct effects for different illusions. For example, autistic individuals may have reduced susceptibility to illusions that rely heavily on prior knowledge, such as the Shepard table illusion [14], despite not demonstrating reduced susceptibility to the Ebbinghaus illusion, which may result from lateral interactions in lower-level areas of the visual system such as V1 [49, 50]. A feasible hypothesis would be that we should only find atypical responses by autistic individuals to illusions that result from top-down processing. However, the state of existing research evidence does not yet allow us to make clear links with such theories, as previous reports of reduced susceptibility to illusions could be a result of atypical decisional strategies, rather than reflecting differences in perceptual processing. Adapting relatively bias-free methods to a range of different illusions will therefore be important in further investigating atypical visual perception in autism. One outstanding question is whether different illusions lead to differing levels of response bias. Indeed, the fact that we found significant group differences in performance in the method-of-adjustment Müller-Lyer task but not the Ebbinghaus task suggests that the Müller-Lyer illusion might be particularly sensitive to atypical decisional strategies—perhaps as a result of the illusion being stronger in general. The methodological issues we highlight here are not restricted to studies of autism, and we stress the importance of designing studies that minimise decision biases whenever the focus is on underlying perceptual mechanisms. Our study demonstrates that the method developed by Morgan et al. [29], which is relatively free of cognitive bias, can be adapted successfully for children and clinical populations. Our use of a child-friendly 'game' context ensured that child participants were engaged with the task and sufficiently motivated to complete the trials. Future studies may benefit from employing a similar method in order to determine whether atypical responses to illusions in other clinical groups, such as schizophrenia (e.g. [51]), reflect real perceptual differences compared to neurotypical populations. The method could also be used to investigate perceptual development. While we found no evidence of age-related changes in bias in the current sample, it is possible that this would become evident in a larger sample of children across discrete age groups—allowing the possibility to confirm whether age-related changes in susceptibility to visual illusions [52–54] really reflect underlying changes in perceptual functioning. It is worth noting here that Káldy and Kovács [54] used a 2-AFC method with the intention of minimising bias when assessing the strength of the Ebbinghaus illusion in children. Yet, the 2-AFC method alone does not eliminate decisional bias, as observers can still guess in favour of one of the two options when they are unsure [29]. The combined use of a 2-AFC method with a roving pedestal, as demonstrated here, ensures that perceptual bias can be measured as purely as possible, in a wide range of populations. 
Using a new method to measure susceptibility to Ebbinghaus and Müller-Lyer illusions while minimising the contaminating effects of decisional biases, we found no evidence of differences in susceptibility between autistic and typically developing children. These results provide an important step in bridging behaviour with biological substrates, suggesting that group differences in susceptibility to illusions may emerge in higher-level decision-making rather than at the level of the percept.
Note that Ropar and Mitchell [10] previously considered the possibility that the performance of Happé's [7] autistic individuals could have resulted from biases in verbal responses. However, the point we make here applies to responses made both verbally and non-verbally. The pedestal sizes for the Ebbinghaus stimulus were taken from Morgan et al. [29]. Pilot testing revealed the need for a larger pedestal size for the Müller-Lyer illusion.
2-AFC: two-alternative forced-choice
AQ: Autism spectrum quotient
DSM: Diagnostic and Statistical Manual of Mental Disorders
FSIQ: Full-scale IQ
ICD-10: International Classification of Diseases, 10th revision
PIQ: Performance IQ
SCQ: Social Communication Questionnaire
VIQ: Verbal IQ
WASI-II: Wechsler Abbreviated Scales of Intelligence, 2nd Edition
American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 5th ed. Arlington: American Psychiatric Publishing; 2013. Ashburner J, Ziviani J, Rodger S. Sensory processing and classroom emotional, behavioral, and educational outcomes in children with autism spectrum disorder. Am J Occup Ther. 2008;62:564–73. Jasmin E, Couture M, McKinley P, Reid G, Fombonne E, Gisel E. Sensori-motor and daily living skills of preschool children with autism spectrum disorders. J Autism Dev Disord. 2009;39:231–41. Bagby MS, Dickie VA, Baranek GT. How sensory experiences of children with and without autism affect family occupations. Am J Occup Ther. 2012;66:78–86. Simmons DR, Robertson AE, McKay LS, Toal E, McAleer P, Pollick FE. Vision in autism spectrum disorders. Vision Res. 2009;49:2705–39. Happé FG. Studying weak central coherence at low levels: children with autism do not succumb to visual illusions. A research note. J Child Psychol Psychiatry. 1996;37:873–7. Hoy JA, Hatton C, Hare D. Weak central coherence: a cross-domain phenomenon specific to autism? Autism. 2004;8:267–81. Bölte S, Holtmann M, Poustka F, Scheurich A, Schmidt L. Gestalt perception and local-global processing in high-functioning autism. J Autism Dev Disord. 2007;37:1493–504. Ropar D, Mitchell P. Are individuals with autism and Asperger's syndrome susceptible to visual illusions? J Child Psychol Psychiatry. 1999;40:1283–93. Sanford EC. Experimental psychology. New York: Heath; 1898. Ropar D, Mitchell P. Susceptibility to illusions and performance on visuospatial tasks in individuals with autism. J Child Psychol Psychiatry. 2001;42:539–49. Ishida R, Kamio Y, Nakamizo S. Perceptual distortions of visual illusions in children with high-functioning autism spectrum disorder. Psychologia. 2009;52:175–87. Mitchell P, Mottron L, Soulières I, Ropar D. Susceptibility to the Shepard illusion in participants with autism: reduced top-down influences within perception? Autism Res. 2010;3:113–9. Milne E, Scope A. Are children with autistic spectrum disorders susceptible to contour illusions? Brit J Dev Psychol. 2008;26:91–102. Schwarzkopf DS, Anderson EJ, de Haas B, White SJ, Rees G. Larger extrastriate population receptive fields in autism spectrum disorders. J Neurosci. 2014;34:2713–24.
Van der Hallen R, Evers K, Brewaeys K, van den Noortgate W, Wagemans J. Global processing takes time: a meta-analysis on local-global visual processing in ASD. Psychol Bull. 2015;141:549–73. Walter E, Dassonville P, Bochsler TM. A specific autistic trait that modulates visuospatial illusion susceptibility. J Autism Dev Disord. 2009;39:339–49. Baron-Cohen S, Richler J, Bisarya D, Gurunathan N, Wheelwright S. The systemizing quotient: an investigation of adults with Asperger syndrome or high–functioning autism, and normal sex differences. Philos T Roy Soc B. 2003;358:361–74. Baron-Cohen S, Wheelwright S. The empathy quotient: an investigation of adults with Asperger syndrome or high functioning autism, and normal sex differences. J Autism Dev Disord. 2004;34:163–75. Baron-Cohen S, Wheelwright S, Skinner R, Martin J, Clubley E. The autism-spectrum quotient (AQ): evidence from asperger syndrome/high-functioning autism, males and females, scientists and mathematicians. J Autism Dev Disord. 2001;31:5–17. Chouinard PA, Noulty WA, Sperandio I, Landry O. Global processing during the Müller-Lyer illusion is distinctively affected by the degree of autistic traits in the typical population. Exp Brain Res. 2013;230:219–31. Chouinard PA, Unwin KL, Landry O, Sperandio I. Susceptibility to optical illusions varies as a function of the autism-spectrum quotient but not in ways predicted by local-global biases. J Autism Dev Disord. 2016;46:2224–39. Frith U. Autism: explaining the enigma. Oxford: Basil Blackwell; 1989. Frith U, Happé F. Autism: beyond "theory of mind". Cognition. 1994;50:115–32. Ropar D, Mitchell P. Shape constancy in autism: the role of prior knowledge and perspective cues. J Child Psychol Psychiatry. 2002;43:647–53. Pellicano E, Burr D. When the world becomes 'too real': a Bayesian explanation of autistic perception. Trends Cogn Sci. 2012;16:504–10. Gregory RL. Editorial essay. Perception. 2006;35:431–2. Morgan MJ, Melmoth D, Solomon J. Linking hypotheses underlying Class A and Class B methods. Vis Neurosci. 2013;30:197–206. Skottun BC, Skoyles JR. Subjective criteria and illusions in visual testing: some methodological limitations. Psychol Res. 2014;78:136–40. Morgan MJ. Sustained attention is not necessary for velocity adaptation. J Vis. 2013;13:26. Morgan MJ. A bias-free measure of retinotopic tilt adaptation. J Vis. 2014;14:7. Green DM, Swets JA. Signal detection theory and Psychophysics. New York: Wiley; 1966. World Health Organisation. The ICD-10 classification of mental and behavioural disorders. Diagnostic criteria for research. Geneva: World Health Organisation; 1993. American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 4th ed., text revision. Washington, DC: American Psychiatric Association; 2000. Rutter M, Bailey A, Lord C. Social Communication Questionnaire. Los Angeles: Western Psychological Services; 2003. Lord C, Rutter M, DiLavore P, Risi S, Gotham K, Bishop SL. Autism Diagnostic Observation Schedule. 2nd ed. Torrance: Western Psychological Services; 2012. Wechsler D. WASI-II: Wechsler abbreviated scale of intelligence. 2nd ed. San Antonio: Psychological Corporation; 2011. Brainard DH. The Psychophysics Toolbox. Spatial Vision. 1997;10:433–6. Kleiner M, Brainard DH, Pelli DG. What's new in Psychtoolbox-3? Perception. 2007;36(ECVP Abstract Supplement). Pelli DG. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spatial Vision. 1997;10:437–42. Morgan M, Grant S, Melmoth D, Solomon JA. 
Tilted frames of reference have similar effects on the perception of gravitational vertical and the planning of vertical saccadic eye movements. Exp Brain Res. 2015;233:2115–25. Tabachnick BG, Fidell LS. Using multivariate statistics. 5th ed. Boston: Allyn and Bacon; 2007. Dienes Z. Using Bayes to get the most out of non-significant results. Front Psychol. 2014;5:781. Wagenmakers EJ. A practical solution to the pervasive problems of p values. Psychon Bull Rev. 2007;14:779–804. JASP Team. JASP (Version 0.7.5.6); 2016. Jeffreys H. Theory of probability. Oxford: Oxford University Press; 1961. Tahira J, Ly A, Morey RD, Love J, Marsman M, Wagenmakers E-J. Default "Gunel and Dickey" Bayes factors for contingency tables. Behav Res Methods. 2016;1–15. Bosking WH, Zhang Y, Schofield B, Fitzpatrick D. Orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex. J Neurosci. 1997;17:2112–27. Schwarzkopf DS, Song C, Rees G. The surface area of human V1 predicts the subjective experience of object size. Nat Neurosci. 2011;14:28–30. Uhlhaas PJ, Phillips WA, Mitchell G, Silverstein SM. Perceptual grouping in disorganized schizophrenia. Psychiat Res. 2006;143:105–17. Bremner AJ, Doherty MJ, Caparos S, de Fockert J, Linnell KJ, Davidoff J. Effects of culture and the urban environment on the development of the Ebbinghaus illusion. Child Dev. 2016;87:962–81. Doherty MJ, Campbell NM, Tsuji H, Phillips WA. The Ebbinghaus illusion deceives adults but not young children. Dev Sci. 2010;13:714–21. Kaldy Z, Kovács I. Visual context integration is not fully developed in 4-year-old children. Perception. 2003;32:657–66. We are extremely grateful to all the participants and families who participated in this research, to Abigail Croydon, Louise Neil and members of the CRAE team for help with testing, to Themis Karaminis for programming assistance and to Joshua Solomon and Kai Schreiber for helpful discussions. This research was funded by a Medical Research Council grant awarded to E.P. (MR/J013145/1) and a Scott Family Junior Research Fellowship awarded to C.M. at University College, Oxford. Research at CRAE is supported by The Clothworkers' Foundation and Pears Foundation. No funding bodies were involved in the design of the study or the collection, analysis and interpretation of data, or writing of the manuscript. The experimental scripts used to present the tasks are publicly available on the Open Science Framework: https://osf.io/j34cq. The datasets generated during the current study cannot be made publicly available because participants did not grant consent for the use of the data for these purposes. However, the data are available from the corresponding author on reasonable request. All authors (CM, CTWA, MJM and EP) designed the study. CM and CTWA collected the data. CM and MJM analysed the data. CM drafted the initial manuscript. All authors contributed to the manuscript and approved the final version. The study was approved by the UCL Institute of Education's Research Ethics Committee (FPS 456) and conducted in accordance with the principles of the Declaration of Helsinki. Parents gave their written informed consent, and children provided their verbal assent prior to participation. Department of Experimental Psychology, University of Oxford, 9 South Parks Road, Oxford, OX1 3UD, UK Catherine Manning Applied Vision Research Centre, City University, Northampton Square, London, EC1V 0HB, UK Michael J. Morgan Max-Planck Institute for Metabolism Research, Gleueler Str. 
50, 50931, Köln, Germany
Centre for Research in Autism and Education (CRAE), UCL Institute of Education, University College London, 55-59 Gordon Square, London, WC1H 0NU, UK Craig T. W. Allen & Elizabeth Pellicano
School of Psychology, University of Western Australia, 35 Stirling Highway, Perth, WA, 6009, Australia Elizabeth Pellicano
Correspondence to Catherine Manning.
Additional file 1: Results with no replacement of outliers. Means, standard deviations and t-test statistics for group differences in bias in the Ebbinghaus and Müller-Lyer tasks in experiment 1 and Müller-Lyer context-free judgments in experiment 3 when outliers were not replaced. (PDF 9 kb)
Additional file 2: Results of robustness checks for Bayesian analyses. Results of robustness checks for Bayesian independent sample t tests in experiments 1 and 3 and for the Bayesian contingency test in experiment 2. (PDF 92 kb)
https://doi.org/10.1186/s13229-017-0127-y
Keywords: Visual illusions; Global processing
Structural and optical properties of Ba(Co1−xZnx)SiO4 (x = 0.2, 0.4, 0.6, 0.8) J. Anike, R. Derbeshi, W. Wong-Ng, W. Liu, D. Windover, N. King, S. Wang, J. A. Kaduk, Y. Lan Journal: Powder Diffraction / Volume 34 / Issue 3 / September 2019 Structural characterization and X-ray reference powder pattern determination have been conducted for the Co- and Zn-containing tridymite derivatives Ba(Co1−xZnx)SiO4 (x = 0.2, 0.4, 0.6, 0.8). The bright blue series of Ba(Co1−xZnx)SiO4 crystallized in the hexagonal P63 space group (No. 173), with Z = 6. While the lattice parameter "a" decreases from 9.126 (2) Å to 9.10374(6) Å from x = 0.2 to 0.8, the lattice parameter "c" increases from 8.69477(12) Å to 8.72200(10) Å, respectively. Apparently, despite the similarity of ionic sizes of Zn2+ and Co2+, these opposing trends are due to the framework tetrahedral tilting of (ZnCo)O4. The lattice volume, V, remains comparable between 626.27 Å3 and 626.017 (7) Å3 from x = 0 to x = 0.8. UV-visible absorption spectrum measurements indicate the band gap of these two materials to be ≈3.3 and ≈3.5 eV, respectively, therefore potential UV photocatalytic materials. Reference powder X-ray diffraction patterns of these compounds have been submitted to be included in the Powder Diffraction File (PDF). A single day of mixed-macronutrient overfeeding does not elicit compensatory appetite or energy intake responses but exaggerates postprandial lipaemia during the next day in healthy young men Kevin Deighton, Andy J. King, Jamie Matu, Oliver M. Shannon, Oliver Whiteman, Alice Long, Matthew D. Huby, Miroslav Sekula, Adrian Holliday Journal: British Journal of Nutrition / Volume 121 / Issue 8 / 28 April 2019 Print publication: 28 April 2019 Discrete episodes of overconsumption may induce a positive energy balance and impair metabolic control.
However, the effects of an ecologically relevant, single day of balanced macronutrient overfeeding are unknown. Twelve healthy men (of age 22 (sd 2) years, BMI 26·1 (sd 4·2) kg/m2) completed two 28 h, single-blind experimental trials. In a counterbalanced repeated measures design, participants either consumed their calculated daily energy requirements (energy balance trial (EB): 10 755 (sd 593) kJ) or were overfed by 50 % (overfeed trial (OF): 16 132 (sd 889) kJ) under laboratory supervision. Participants returned to the laboratory the next day, after an overnight fast, to complete a mixed-meal tolerance test (MTT). Appetite was not different between trials during day 1 (P>0·211) or during the MTT in the fasted or postprandial state (P>0·507). Accordingly, plasma acylated ghrelin, total glucagon-like peptide-1 and total peptide YY concentrations did not differ between trials during the MTT (all P>0·335). Ad libitum energy intake, assessed upon completion of the MTT, did not differ between trials (EB 6081 (sd 2260) kJ; OF 6182 (sd 1960) kJ; P=0·781). Plasma glucose and insulin concentrations were not different between trials (P>0·715). Fasted NEFA concentrations were lower in OF compared with EB (P=0·005), and TAG concentrations increased to a greater extent on OF than on EB during the MTT (P=0·009). The absence of compensatory changes in appetite-related variables after 1 d of mixed macronutrient overfeeding highlights the limited physiological response to defend against excess energy intake. This supports the concept that repeated discrete episodes of overconsumption may promote weight gain, while elevations in postprandial lipaemia may increase CVD risk. Recent Canadian efforts to develop population-level pregnancy intervention studies to mitigate effects of natural disasters and other tragedies D. M. Olson, S. Brémault-Phillips, S. King, G. A.S. Metz, S. Montesanti, J. K. Olson, A. Hyde, A. Pike, T. Hoover, R. Linder, B. Joggerst, R. Watts Journal: Journal of Developmental Origins of Health and Disease / Volume 10 / Issue 1 / February 2019 The preconception, pregnancy and immediate postpartum and newborn periods are times for mothers and their offspring when they are especially vulnerable to major stressors – those that are sudden and unexpected and those that are chronic. Their adverse effects can transcend generations. Stressors can include natural disasters or political stressors such as conflict and/or migration. Considerable evidence has accumulated demonstrating the adverse effects of natural disasters on pregnancy outcomes and developmental trajectories. However, beyond tracking outcomes, the time has arrived for gathering more information related to identifying mechanisms, predicting risk and developing stress-reducing and resilience-building interventions to improve outcomes. Further, we need to learn how to encapsulate both the quantitative and qualitative information available and share it with communities and authorities to mitigate the adverse developmental effects of future disasters, conflicts and migrations. This article briefly reviews prenatal maternal stress and identifies three contemporary situations (wildfire in Fort McMurray, Alberta, Canada; hurricane Harvey in Houston, USA and transgenerational and migrant stress in Pforzheim, Germany) where current studies are being established by Canadian investigators to test an intervention. 
The experiences from these efforts are related along with attempts to involve communities in the studies and share the new knowledge to plan for future disasters or tragedies. Exponential Conductivity Increase in Strained MoS2 via MEMS Actuation A. Vidana, S. Almeida, M. Martinez, E. Acosta, J. Mireles, T.-J. King, D. Zubia Journal: MRS Advances / Volume 4 / Issue 38-39 / 2019 In this work, a poly-Si0.35Ge0.65 microelectromechanical systems (MEMS)-based actuator was designed and fabricated using a CMOS compatible standard process to specifically strain a bi-layered (2L) MoS2 flake and measure its electrical properties. Experimental results of the MEMS-TMDC device show an increase of conductivity up to three orders of magnitude by means of vertical actuation using the substrate as the body terminal. A force balance model of the MEMS-TMDC was used to determine the amount of strain induced in the MoS2 flake. Strains as high as 3.3% are reported using the model fitted to the experimental data. Role of magnetic field evolution on filamentary structure formation in intense laser–foil interactions HPL_EP HEDP and High Power Laser 2018 M. King, N. M. H. Butler, R. Wilson, R. Capdessus, R. J. Gray, H. W. Powell, R. J. Dance, H. Padda, B. Gonzalez-Izquierdo, D. R. Rusby, N. P. Dover, G. S. Hicks, O. C. Ettlinger, C. Scullion, D. C. Carroll, Z. Najmudin, M. Borghesi, D. Neely, P. McKenna Journal: High Power Laser Science and Engineering / Volume 7 / 2019 Published online by Cambridge University Press: 13 March 2019, e14 Filamentary structures can form within the beam of protons accelerated during the interaction of an intense laser pulse with an ultrathin foil target. Such behaviour is shown to be dependent upon the formation time of quasi-static magnetic field structures throughout the target volume and the extent of the rear surface proton expansion over the same period. This is observed via both numerical and experimental investigations. By controlling the intensity profile of the laser drive, via the use of two temporally separated pulses, both the initial rear surface proton expansion and magnetic field formation time can be varied, resulting in modification to the degree of filamentary structure present within the laser-driven proton beam. Reflection of intense laser light from microstructured targets as a potential diagnostic of laser focus and plasma temperature J. Jarrett, M. King, R. J. Gray, N. Neumann, L. Döhl, C. D. Baird, T. Ebert, M. Hesse, A. Tebartz, D. R. Rusby, N. C. Woolsey, D. Neely, M. Roth, P. McKenna Published online by Cambridge University Press: 27 December 2018, e2 The spatial-intensity profile of light reflected during the interaction of an intense laser pulse with a microstructured target is investigated experimentally and the potential to apply this as a diagnostic of the interaction physics is explored numerically. Diffraction and speckle patterns are measured in the specularly reflected light in the cases of targets with regular groove and needle-like structures, respectively, highlighting the potential to use this as a diagnostic of the evolving plasma surface. It is shown, via ray-tracing and numerical modelling, that for a laser focal spot diameter smaller than the periodicity of the target structure, the reflected light patterns can potentially be used to diagnose the degree of plasma expansion, and by extension the local plasma temperature, at the focus of the intense laser light.
The reflected patterns could also be used to diagnose the size of the laser focal spot during a high-intensity interaction when using a regular structure with known spacing. Perceptions of Resilience and Physical Health Symptom Improvement Following Post Disaster Integrated Health Services Howard J. Osofsky, Carl F. Weems, Rebecca A. Graham, Joy D. Osofsky, Tonya C. Hansel, Lucy S. King Journal: Disaster Medicine and Public Health Preparedness / Volume 13 / Issue 2 / April 2019 Theorists and researchers have linked resilience with a host of positive psychological and physical health outcomes. This paper examines perceptions of resilience and physical health symptoms in a sample of individuals exposed to multiple community disasters following involvement in integrated mental health services. A multiwave naturalistic design was used to follow 762 adult clinic patients (72% female; 28% minority status), ages 18-92 years (mean age=40 years), who were evaluated for resilience and physical health symptoms prior to receiving services and at 1, 3, and 6 months' follow-up. Data indicated increases in perceptions of resilience and decreased physical health symptoms reported over time. Results also indicated that resilience predicted physical health symptoms, such that resilience and physical health symptoms were negatively associated (ie, improved resilience was associated with decreases in physical health symptoms). These effects were primarily observed for those individuals with previous exposure to natural disasters. Findings provide correlational evidence for behavioral health treatment provided as part of a stepped-care, collaborative model in reducing physical health symptoms and increasing resilience post-disaster. Controlled trials are warranted. (Disaster Med Public Health Preparedness. 2019;13:223–229) Subliminal and supraliminal processing of reward-related stimuli in anorexia nervosa I. Boehm, J. A. King, F. Bernardoni, D. Geisler, M. Seidel, F. Ritschel, T. Goschke, J.-D. Haynes, V. Roessner, S. Ehrlich Journal: Psychological Medicine / Volume 48 / Issue 5 / April 2018 Previous studies have highlighted the role of the brain reward and cognitive control systems in the etiology of anorexia nervosa (AN). In an attempt to disentangle the relative contribution of these systems to the disorder, we used functional magnetic resonance imaging (fMRI) to investigate hemodynamic responses to reward-related stimuli presented both subliminally and supraliminally in acutely underweight AN patients and age-matched healthy controls (HC). fMRI data were collected from a total of 35 AN patients and 35 HC, while they passively viewed subliminally and supraliminally presented streams of food, positive social, and neutral stimuli. Activation patterns of the group × stimulation condition × stimulus type interaction were interrogated to investigate potential group differences in processing different stimulus types under the two stimulation conditions. Moreover, changes in functional connectivity were investigated using generalized psychophysiological interaction analysis. AN patients showed a generally increased response to supraliminally presented stimuli in the inferior frontal junction (IFJ), but no alterations within the reward system. Increased activation during supraliminal stimulation with food stimuli was observed in the AN group in visual regions including superior occipital gyrus and the fusiform gyrus/parahippocampal gyrus. 
No group difference was found with respect to the subliminal stimulation condition and functional connectivity. Increased IFJ activation in AN during supraliminal stimulation may indicate hyperactive cognitive control, which resonates with clinical presentation of excessive self-control in AN patients. Increased activation to food stimuli in visual regions may be interpreted in light of an attentional food bias in AN. 5 - Global Change Impacts on Ant-Mediated Seed Dispersal in Eastern North American Forests from Part II - Ant-Seed Interactions and Man-Induced Disturbance By Robert J. Warren II, Joshua R. King, Lacy D. Chick, Mark A. Bradford Edited by Paulo S. Oliveira, Universidade Estadual de Campinas, Brazil, Suzanne Koptur, Florida International University Book: Ant-Plant Interactions Print publication: 17 August 2017, pp 93-111 Genetic and phenotypic overlap of specific obsessive-compulsive and attention-deficit/hyperactive subtypes with Tourette syndrome M. E. Hirschtritt, S. M. Darrow, C. Illmann, L. Osiecki, M. Grados, P. Sandor, Y. Dion, R. A. King, D. Pauls, C. L. Budman, D. C. Cath, E. Greenberg, G. J. Lyon, D. Yu, L. M. McGrath, W. M. McMahon, P. C. Lee, K. L. Delucchi, J. M. Scharf, C. A. Mathews Journal: Psychological Medicine / Volume 48 / Issue 2 / January 2018 The unique phenotypic and genetic aspects of obsessive-compulsive (OCD) and attention-deficit/hyperactivity disorder (ADHD) among individuals with Tourette syndrome (TS) are not well characterized. Here, we examine symptom patterns and heritability of OCD and ADHD in TS families. OCD and ADHD symptom patterns were examined in TS patients and their family members (N = 3494) using exploratory factor analyses (EFA) for OCD and ADHD symptoms separately, followed by latent class analyses (LCA) of the resulting OCD and ADHD factor sum scores jointly; heritability and clinical relevance of the resulting factors and classes were assessed. EFA yielded a 2-factor model for ADHD and an 8-factor model for OCD. Both ADHD factors (inattentive and hyperactive/impulsive symptoms) were genetically related to TS, ADHD, and OCD. The doubts, contamination, need for sameness, and superstitions factors were genetically related to OCD, but not ADHD or TS; symmetry/exactness and fear-of-harm were associated with TS and OCD while hoarding was associated with ADHD and OCD. In contrast, aggressive urges were genetically associated with TS, OCD, and ADHD. LCA revealed a three-class solution: few OCD/ADHD symptoms (LC1), OCD & ADHD symptoms (LC2), and symmetry/exactness, hoarding, and ADHD symptoms (LC3). LC2 had the highest psychiatric comorbidity rates (⩾50% for all disorders). Symmetry/exactness, aggressive urges, fear-of-harm, and hoarding show complex genetic relationships with TS, OCD, and ADHD, and, rather than being specific subtypes of OCD, transcend traditional diagnostic boundaries, perhaps representing an underlying vulnerability (e.g. failure of top-down cognitive control) common to all three disorders. Post-traumatic stress disorder associated with sexual assault among women in the WHO World Mental Health Surveys K. M. Scott, K. C. Koenen, A. King, M. V. Petukhova, J. Alonso, E. J. Bromet, R. Bruffaerts, B. Bunting, P. de Jonge, J. M. Haro, E. G. Karam, S. Lee, M. E. Medina-Mora, F. Navarro-Mateu, N. A. Sampson, V. Shahly, D. J. Stein, Y. Torres, A. M. Zaslavsky, R. C. Kessler Sexual assault is a global concern with post-traumatic stress disorder (PTSD), one of the common sequelae. 
Early intervention can help prevent PTSD, making identification of those at high risk for the disorder a priority. Lack of representative sampling of both sexual assault survivors and sexual assaults in prior studies might have reduced the ability to develop accurate prediction models for early identification of high-risk sexual assault survivors. Data come from 12 face-to-face, cross-sectional surveys of community-dwelling adults conducted in 11 countries. Analysis was based on the data from the 411 women from these surveys for whom sexual assault was the randomly selected lifetime traumatic event (TE). Seven classes of predictors were assessed: socio-demographics, characteristics of the assault, the respondent's retrospective perception that she could have prevented the assault, other prior lifetime TEs, exposure to childhood family adversities and prior mental disorders. Prevalence of Diagnostic and Statistical Manual of Mental Disorders IV (DSM-IV) PTSD associated with randomly selected sexual assaults was 20.2%. PTSD was more common for repeated than single-occurrence victimization and positively associated with prior TEs and childhood adversities. Respondent's perception that she could have prevented the assault interacted with history of mental disorder such that it reduced odds of PTSD, but only among women without prior disorders (odds ratio 0.2, 95% confidence interval 0.1–0.9). The final model estimated that 40.3% of women with PTSD would be found among the 10% with the highest predicted risk. Whether counterfactual preventability cognitions are adaptive may depend on mental health history. Predictive modelling may be useful in targeting high-risk women for preventive interventions. Conceptual design of initial opacity experiments on the National Ignition Facility Solved and Unsolved problems in Plasma Physics R. F. Heeter, J. E. Bailey, R. S. Craxton, B. G. DeVolder, E. S. Dodd, E. M. Garcia, E. J. Huffman, C. A. Iglesias, J. A. King, J. L. Kline, D. A. Liedahl, P. W. McKenty, Y. P. Opachich, G. A. Rochau, P. W. Ross, M. B. Schneider, M. E. Sherrill, B. G. Wilson, R. Zhang, T. S. Perry Journal: Journal of Plasma Physics / Volume 83 / Issue 1 / February 2017 Published online by Cambridge University Press: 09 January 2017, 595830103 Accurate models of X-ray absorption and re-emission in partly stripped ions are necessary to calculate the structure of stars, the performance of hohlraums for inertial confinement fusion and many other systems in high-energy-density plasma physics. Despite theoretical progress, a persistent discrepancy exists with recent experiments at the Sandia Z facility studying iron in conditions characteristic of the solar radiative–convective transition region. The increased iron opacity measured at Z could help resolve a longstanding issue with the standard solar model, but requires a radical departure for opacity theory. To replicate the Z measurements, an opacity experiment has been designed for the National Ignition Facility (NIF). The design uses established techniques scaled to NIF. A laser-heated hohlraum will produce X-ray-heated uniform iron plasmas in local thermodynamic equilibrium (LTE) at temperatures ${\geqslant}150$ eV and electron densities ${\geqslant}7\times 10^{21}~\text{cm}^{-3}$. The iron will be probed using continuum X-rays emitted in a ${\sim}200$ ps, ${\sim}200~\mu\text{m}$ diameter source from a 2 mm diameter polystyrene (CH) capsule implosion.
In this design, $2/3$ of the NIF beams deliver 500 kJ to the ${\sim}6$ mm diameter hohlraum, and the remaining $1/3$ directly drive the CH capsule with 200 kJ. Calculations indicate this capsule backlighter should outshine the iron sample, delivering a point-projection transmission opacity measurement to a time-integrated X-ray spectrometer viewing down the hohlraum axis. Preliminary experiments to develop the backlighter and hohlraum are underway, informing simulated measurements to guide the final design. Money is Brain: Financial Barriers and Consequences for Canadian Stroke Patients Aravind Ganesh, Kathryn King-Shier, Braden J. Manns, Michael D. Hill, David J.T. Campbell Journal: Canadian Journal of Neurological Sciences / Volume 44 / Issue 2 / March 2017 Background: Stroke patients of lower socioeconomic status have worse outcomes. It remains poorly understood whether this is due to illness severity or personal or health system barriers. We explored the experiences of stroke patients with financial barriers in a qualitative descriptive pilot study, seeking to capture perceived challenges that interfere with their poststroke health and recovery. Methods: We interviewed six adults with a history of stroke and financial barriers in Alberta, Canada, inquiring about their: (1) experiences after stroke; (2) experience of financial barriers; (3) perceived reasons for financial barriers; (4) health consequences of financial barriers; and (5) mechanisms for coping with financial barriers. Two reviewers analyzed data using inductive thematic analysis. Results: The participants developed new or worsened financial circumstances as a consequence of stroke-related disability. Poststroke impairments and financial barriers took a toll on their mental health. They struggled to access several aspects of long-term poststroke care, including allied health professional services, medications, and proper nutrition. They described opportunity costs and tradeoffs when accessing health services. In several cases, they were unaware of health resources available to them and were hesitant to disclose their struggles to their physicians and even their families. Conclusion: Some patients with financial barriers perceive challenges to accessing various aspects of poststroke care. They may have inadequate knowledge of resources available to them and may not disclose their concerns to their health care team. This suggests that providers themselves might consider asking stroke patients about financial barriers to optimize their long-term poststroke care. Assertion-based analysis via slicing with ABETS * (system description) M. ALPUENTE, F. FRECHINA, J. SAPIÑA, D. BALLIS Journal: Theory and Practice of Logic Programming / Volume 16 / Issue 5-6 / September 2016 We present ABETS, an assertion-based, dynamic analyzer that helps diagnose errors in Maude programs. ABETS uses slicing to automatically create reduced versions of both a run's execution trace and executed program, reduced versions in which any information that is not relevant to the bug currently being diagnosed is removed. In addition, ABETS employs runtime assertion checking to automate the identification of bugs so that whenever an assertion is violated, the system automatically infers accurate slicing criteria from the failure. We summarize the main services provided by ABETS, which also include a novel assertion-based facility for program repair that generates suitable program fixes when a state invariant is violated. 
Finally, we provide an experimental evaluation that shows the performance and effectiveness of the system. Post-traumatic stress disorder associated with natural and human-made disasters in the World Mental Health Surveys E. J. Bromet, L. Atwoli, N. Kawakami, F. Navarro-Mateu, P. Piotrowski, A. J. King, S. Aguilar-Gaxiola, J. Alonso, B. Bunting, K. Demyttenaere, S. Florescu, G. de Girolamo, S. Gluzman, J. M. Haro, P. de Jonge, E. G. Karam, S. Lee, V. Kovess-Masfety, M. E. Medina-Mora, Z. Mneimneh, B.-E. Pennell, J. Posada-Villa, D. Salmerón, T. Takeshima, R. C. Kessler Research on post-traumatic stress disorder (PTSD) following natural and human-made disasters has been undertaken for more than three decades. Although PTSD prevalence estimates vary widely, most are in the 20–40% range in disaster-focused studies but considerably lower (3–5%) in the few general population epidemiological surveys that evaluated disaster-related PTSD as part of a broader clinical assessment. The World Mental Health (WMH) Surveys provide an opportunity to examine disaster-related PTSD in representative general population surveys across a much wider range of sites than in previous studies. Although disaster-related PTSD was evaluated in 18 WMH surveys, only six in high-income countries had enough respondents for a risk factor analysis. Predictors considered were socio-demographics, disaster characteristics, and pre-disaster vulnerability factors (childhood family adversities, prior traumatic experiences, and prior mental disorders). Disaster-related PTSD prevalence was 0.0–3.8% among adult (ages 18+) WMH respondents and was significantly related to high education, serious injury or death of someone close, forced displacement from home, and pre-existing vulnerabilities (prior childhood family adversities, other traumas, and mental disorders). Of PTSD cases 44.5% were among the 5% of respondents classified by the model as having highest PTSD risk. Disaster-related PTSD is uncommon in high-income WMH countries. Risk factors are consistent with prior research: severity of exposure, history of prior stress exposure, and pre-existing mental disorders. The high concentration of PTSD among respondents with high predicted risk in our model supports the focus of screening assessments that identify disaster survivors most in need of preventive interventions. Dust and Gas Correlations in the Region of the South Celestial Pole D. J. King, K. N. R. Taylor Journal: Publications of the Astronomical Society of Australia / Volume 3 / Issue 5 / 1979 About 20 years ago de Vaucouleurs (1955, 1960) drew attention to faint but extensive nebulosity in the region of the South Celestial Pole. He tentatively identified it as being emission nebulosity, excited by OB stars in the overlying galactic plane. The extent of these nebulae has become even more apparent in recent years on IIIa-J plates taken for the U.K. Schmidt survey of the southern sky. From a study of survey plates covering the sky south of declination −80° a map has been prepared of the nebulosity visible in the region. A study made of this nebulosity suggests that it is predominantly reflection nebulosity, with the main source of illumination being integrated starlight of the overlying Carina spiral arm. The bulk of it is of very low surface brightness (fainter than about 26 mag. per square arcsec) and appears to be in the form of a broken layer underlying the local galactic plane at an altitude of ∼40–80 pc.
There are a number of brighter nebulous patches and filaments, frequently highly structured on a scale of minutes of arc, and extending across several degrees, usually parallel to the galactic plane. A Monte Carlo Model for Light Scattering by Dark Nebulae M. I. Darby, D. J. King, K. N. R. Taylor The Thumbprint Nebula (TPN) in Chamaeleon (first described by Fitzgerald (1974), and shown in Figure 1) is a good example of the class of dense, dark nebulae that exhibit dark cores and bright rims, and have been referred to (Lynds 1967) as 'bright dark nebulae'. Early observations of these nebulae established that the dust grains within them were strongly forward-scattering (Struve and Elvey 1936, Struve 1937). However, the treatment of the radiative transfer problem was too inadequate to permit more than tentative conclusions regarding the nebulae. In more recent years, with the advent of modern computers, the transfer of radiation through a dust cloud has been treated more rigorously, using Monte Carlo techniques (Mattila 1970, Witt and Stephens 1974). Witt and Stephens (1974) have demonstrated that for a dense nebula the surface brightness profile is sensitive to the dust grain density distribution within the cloud and to the scattering properties of the grains. The scattering model approach can be valuable in the investigation of very opaque dark nebulae that cannot be studied by conventional star counting techniques. This has been demonstrated in the case of the TPN by Fitzgerald et al (1976), who used the Witt and Stephens model. The Healthy Activity Program lay counsellor delivered treatment for severe depression in India: Systematic development and randomised evaluation Neerja Chowdhary, Arpita Anand, Sona Dimidjian, Sachin Shinde, Benedict Weobong, Madhumitha Balaji, Steven D. Hollon, Atif Rahman, G. Terence Wilson, Helena Verdeli, Ricardo Araya, Michael King, Mark J. D. Jordans, Christopher Fairburn, Betty Kirkwood, Vikram Patel Journal: The British Journal of Psychiatry / Volume 208 / Issue 4 / April 2016 Reducing the global treatment gap for mental disorders requires treatments that are economical, effective and culturally appropriate. To describe a systematic approach to the development of a brief psychological treatment for patients with severe depression delivered by lay counsellors in primary healthcare. The treatment was developed in three stages using a variety of methods: (a) identifying potential strategies; (b) developing a theoretical framework; and (c) evaluating the acceptability, feasibility and effectiveness of the psychological treatment. The Healthy Activity Program (HAP) is delivered over 6–8 sessions and consists of behavioral activation as the core psychological framework with added emphasis on strategies such as problem-solving and activation of social networks. Key elements to improve acceptability and feasibility are also included. In an intention-to-treat analysis of a pilot randomised controlled trial (55 participants), the prevalence of depression (Beck Depression Inventory II ⩾19) after 2 months was lower in the HAP than the control arm (adjusted risk ratio = 0.55, 95% CI 0.32–0.94, P = 0.01). Our systematic approach to the development of psychological treatments could be extended to other mental disorders. HAP is an acceptable and effective brief psychological treatment for severe depression delivered by lay counsellors in primary care. 
Influence of laser polarization on collective electron dynamics in ultraintense laser–foil interactions HEDP and HPL 2016 Bruno Gonzalez-Izquierdo, Ross J. Gray, Martin King, Robbie Wilson, Rachel J. Dance, Haydn Powell, David A. MacLellan, John McCreadie, Nicholas M. H. Butler, Steve Hawkes, James S. Green, Chris D. Murphy, Luca C. Stockhausen, David C. Carroll, Nicola Booth, Graeme G. Scott, Marco Borghesi, David Neely, Paul McKenna Published online by Cambridge University Press: 27 September 2016, e33 The collective response of electrons in an ultrathin foil target irradiated by an ultraintense ( ${\sim}6\times 10^{20}~\text{W}~\text{cm}^{-2}$ ) laser pulse is investigated experimentally and via 3D particle-in-cell simulations. It is shown that if the target is sufficiently thin that the laser induces significant radiation pressure, but not thin enough to become relativistically transparent to the laser light, the resulting relativistic electron beam is elliptical, with the major axis of the ellipse directed along the laser polarization axis. When the target thickness is decreased such that it becomes relativistically transparent early in the interaction with the laser pulse, diffraction of the transmitted laser light occurs through a so called 'relativistic plasma aperture', inducing structure in the spatial-intensity profile of the beam of energetic electrons. It is shown that the electron beam profile can be modified by variation of the target thickness and degree of ellipticity in the laser polarization. On contact-line dynamics with mass transfer J. M. OLIVER, J. P. WHITELEY, M. A. SAXTON, D. VELLA, V. S. ZUBKOV, J. R. KING Journal: European Journal of Applied Mathematics / Volume 26 / Issue 5 / October 2015 We investigate the effect of mass transfer on the evolution of a thin, two-dimensional, partially wetting drop. While the effects of viscous dissipation, capillarity, slip and uniform mass transfer are taken into account, other effects, such as gravity, surface tension gradients, vapour transport and heat transport, are neglected in favour of mathematical tractability. Our focus is on a matched-asymptotic analysis in the small-slip limit, which reveals that the leading-order outer formulation and contact-line law depend delicately on both the sign and the size of the mass transfer flux. This leads, in particular, to novel generalisations of Tanner's law. We analyse the resulting evolution of the drop on the timescale of mass transfer and validate the leading-order predictions by comparison with preliminary numerical simulations. Finally, we outline the generalisation of the leading-order formulations to prescribed non-uniform rates of mass transfer and to three dimensions.
Number of dimensions in string theory and possible link with number theory
This question has led me to ask a somewhat more specific question. I have read somewhere about a coincidence. Numbers of the form $8k + 2$ appear to be relevant for string theory. For k = 0 one gets the 2-dimensional string world sheet, for k = 1 one gets 10 spacetime dimensions, and for k = 3 one gets the 26 dimensions of bosonic string theory. For k = 2 we get 18; I don't know whether it has any relevance in string theory. Also the number 24, which can be thought of as the number of dimensions perpendicular to the 2-dimensional string world sheet in bosonic string theory, is the largest number for which the sum of the squares up to it is itself a square: $1^2 + 2^2 + \cdots + 24^2 = 70^2$. My question is: is this a mere coincidence, or is there something deeper behind it? string-theory mathematics
$\begingroup$ Excellent observations. It's indeed natural to count the transverse coordinates only - the number of physical "oscillator" degrees of freedom - and those transverse dimensionalities are multiples of eight. This is linked to the fact that the dimension of a spin field is $1/16$ for a single dimension and one needs dimensions that are integral or half-integral. In theories with spacetime fermions, it's also linked to the Bott periodicity - if the difference between spatial and temporal dimensions is a multiple of eight, there are real chiral spinor representations. $\endgroup$ – Luboš Motl Feb 15 '11 at 16:16
$\begingroup$ Also, the number 24 for the transverse dimension of the bosonic string appears because one needs to get the right critical dimension, and the zero-point energy with the single excitation has to vanish: $(D-2)(-1/12)/2+1=0$. This is solved exactly for $D=26$; $(-1/12)$ arose as the sum of positive integers or $\zeta(-1)$. Incredibly enough, even the seemingly numerological observation with $70^2$ is actually used "somewhere" in string theory - one compactified on the Leech lattice. The identity guarantees that a null vector is null. $\endgroup$ – Luboš Motl Feb 15 '11 at 16:19
$\begingroup$ @Luboš Motl: What about the number 18, Lubos? $\endgroup$ – user1355 Feb 15 '11 at 16:21
$\begingroup$ Under the comment by Dr Harvey, I link to a paper where a string theory compactification on the Leech lattice is actually used to explain even more fascinating numerological accidents - the "monstrous moonshine" linking some properties of the monster group, the largest finite sporadic group, to some properties of number theory and complex calculus, previously totally unrelated parts of maths. See en.wikipedia.org/wiki/Monstrous_moonshine - In Monstrous Moonshine, numbers as high as 196,883+1 appear at 2 places and it was a complete mystery why! String theory has demystified this fact. $\endgroup$ – Luboš Motl Feb 15 '11 at 16:25
$\begingroup$ @kakemonster: Numerology is to number theory as astrology is to astronomy, or alchemy is to chemistry. $\endgroup$ – QGR Feb 17 '11 at 6:08
There is definitely something deep going on, but there is not yet a deep understanding of what it is. In mathematics, the topology of the orthogonal group has a mod 8 periodicity called Bott periodicity. I think this is related to the dimensions in which one can have Majorana-Weyl spinors with Lorentzian signature, which are indeed $8k+2$. So this is part of the connection and allows both the world-sheet and the spacetime for $d=2,10$ to have M-W spinors.
The $26$ you get for $k=3$ doesn't have any obvious connection with spinors and supersymmetry, but there are some indirect connections related to the construction of a Vertex Operator Algebra with the Monster as its symmetry group. This involves a $Z_2$ orbifold of the bosonic string on the torus $R^{24}/\Lambda$ where $\Lambda$ is the Leech lattice. A $Z_2$ orbifold of this theory involves a twist field of dimension $24/16=3/2$, which is the dimension needed for a superconformal generator. So the fact that there are $24$ transverse dimensions does get related to world-sheet superconformal invariance. Finally, the fact you mentioned involving the sum of squares up to $24^2$ has been exploited in the math literature to give a very elegant construction of the Leech lattice starting from the Lorentzian lattice $\Pi^{25,1}$ by projecting along a null vector $(1,2,\cdots,24;70)$, which is null by the identity you quoted. I can't think of anything off the top of my head related to $k=2$ in string theory, but I'm sure there must be something. David Z♦ phopho
$\begingroup$ Prof Harvey may be too modest here but let me mention that he is one of the 4 co-fathers of the heterotic string. And when it comes to a related compactification on the Leech lattice, see e.g. Beauty and the Beast: web.mac.com/chrisbertinato/iWeb/Physics/Seminars_files/… - This compactification of string theory actually knows most (or all) about the largest sporadic finite group, the monster group. Fascinating and previously "impossible" connections between number theory and group theory - the "monstrous moonshine" - have been explained as a real link here. $\endgroup$ – Luboš Motl Feb 15 '11 at 16:22
$\begingroup$ Just a direct link to the construction of the Leech lattice where the "70 squared" identity is used: en.wikipedia.org/wiki/… $\endgroup$ – Luboš Motl Feb 15 '11 at 17:08
$\begingroup$ Yes, thanks @Jeff. And I am aware that they're the authors. Sorry I didn't make it clear. The final proof of the monstrous moonshine claim that won the Fields medal was found by Borcherds - just to make it clear that I acknowledge that this Gentleman has some divine abilities, too. ;-) $\endgroup$ – Luboš Motl Feb 15 '11 at 17:10
The descent from 24 to 8 seems to happen when the straight sums are substituted by alternating sums. This is known from the theory of the Riemann zeta function, whose only pole, at $s=1$, is cancelled via a multiplication that produces the Dirichlet eta function, $$\eta(s) = \left(1-2^{1-s}\right) \zeta(s).$$ This function has better analyticity properties than zeta. But it is alternating, $$ \eta(s) = \frac{1}{1^s} - \frac{1}{2^s} + \frac{1}{3^s} - \frac{1}{4^s} + \cdots$$ And thus it could be related to the simple expression I have mentioned above in the comments. But, more importantly, $$\eta(-1)=\left(1-2^{2}\right) \zeta(-1) = -3 \times {-1\over 12} = {1\over 4}$$ And you can suspect that the Eta function plays for the superstring the same role that the Zeta plays for the bosonic string. And indeed it appears in very similar situations. For instance, Michael B. Green, in his 1986 Trieste lectures "String and Superstring Theory", section 5.11, calculates the NS sector spectrum and then the normal ordering constant, which appears formally as a difference between the bosonic term and the fermionic term.
This difference can be manipulated to obtain the Eta function as above, times $(D-2)/2$. So if anyone can tell how the Zeta regulator is related to the integer sum up to 24, then we could probably guess how the Eta regulator would be related to the alternating sum up to 8. arivero
I am separating this answer from the other because it is overly speculative; mainly I wanted to list a few hints about the sequence I named in the comments. The OP names a square sum that happens to be related to the critical dimension of spacetime of the bosonic string, D-2=24. It seems natural to ask if there is some similar sequence for the critical dimension of the superstring, D-2=8. On the other hand, as I said in the other answer, for the open superstring the Zeta regularization is naturally substituted, in some cases, by the Dirichlet Eta, which happens to be an alternating sum. So it is natural for a numerologist to try the alternating square sum and, as said in the comments to the OP, it works: $$1^2-2^2+3^2-4^2+5^2-6^2+7^2-8^2= -36 = -6^2$$ The main difference, number-wise, with the non-alternating sum is that here the solution is not unique. Still, it is the smallest non-trivial one, and all the others can be generated iteratively: the (absolute values of the) sums are the square triangular numbers, and it was observed by Colin Dickson (alt.math.recreational, March 7th 2004) that such numbers obey a recurrence law $$a_{n+1}={(a_n -1)^2 \over a_{n-1}}$$ with the first two terms being the trivial $a_1=1$ and the above $a_2=36$. For more info, see the OEIS sequences A001110 and A001108. Note that the sign in the actual solution depends on the number of terms in the sequence, alternating itself, so that actually the sum is $\sigma_n= (-1)^{n-1} a_n$. A way to produce the alternating solutions is to solve the Pell equation, and then the sums are also produced from Pell numbers, via $a_n= P_n (P_n+P_{n-1})$. This could be interesting because the root of the non-alternating series, $70$, is a Pell number itself, the 6th, and the next Pell number, $70+(70+29)=169$, is the only Pell number that is an exact square (and the only one that is an exact power). In A001108, Mohamed Bouhamida mentions some periodicities and some mod 8 relationships for the series. Also the page http://www.cut-the-knot.org/do_you_know/triSquare.shtml gives some hints on some eight factors appearing in a particular subsequence of square triangular numbers: "Eight triangles increased by unity produce a square". Whether these factors can be related to the mod 8 periodicities of Bott theory or theta functions, I cannot tell. EDIT: of course the use of triangular numbers may be telling us that the whole business of alternating series is just a decoy: pairing the terms via $(m+1)^2 - m^2 = 2 m + 1 = (m+1) + m$, the alternating squared series reduces to the non-alternating, non-squared series, and then simply $$1+2+3+4+5+6+7+8= 36 = 6^2.$$ But the goal is to keep at least a formal likeness with the OP's series.
I don't know about the particular example that you mention, but there are certainly some interconnections with special numbers in mathematics and in string theory/supersymmetry. One worked-out example is the fact that supersymmetry is possible in dimensions 3, 4, 6 or 10, which is connected to the existence of normed division algebras in dimensions 1, 2, 4 and 8. For more details see John C. Baez, John Huerta: Division Algebras and Supersymmetry I (arXiv) and related work about higher gauge theory.
Tim van Beek
$\begingroup$ Sorry for being off-topic, but I thought the relationship I mention was interesting enough in the given context in its own right. $\endgroup$ – Tim van Beek Feb 15 '11 at 18:39
$\begingroup$ it is one of the most important "numerological" facts in these considerations, so +1 $\endgroup$ – user346 Feb 16 '11 at 5:24
Related questions: Why is there a deep mysterious relation between string theory and number theory, elliptic curves, $E_8$ and the Monster group? Is this explanation of "Why nine space dimensions?" correct? Heterotic string as worldvolume theory of two coincident 9-branes in 27 dimensions? Black Hole Singularity and String Theory Compactification of dimensions in string theory: Why our Universe has 3 large spatial dimensions? What is the connection between extra dimensions in Kaluza-Klein type theories and those in string theories? How exactly do superstrings reduce the number of dimensions in bosonic string theory from 26 to 10 and remove the tachyons? Is mean field theory self-consistency analogous to string theory consistency? String Theory Landscape Is the number of dimensions predicted by String Theory related to the Poincare group? Fundamental string theory questions Extra Dimensions (in String Theory) - What does it mean?
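Since the thread above turns on a handful of exact numerical identities (the $70^2$ sum, the alternating sum up to $8$, and the square-triangular recurrence), a short script can serve as a sanity check. This is purely an illustrative addendum and not part of the original posts; the variable names are mine.

```python
# Quick numerical checks of the identities discussed above.
w = list(range(1, 25))                       # the vector (1, 2, ..., 24; 70) without its last entry
assert sum(k * k for k in w) == 70 ** 2      # 1^2 + 2^2 + ... + 24^2 = 70^2
# In a Lorentzian lattice the norm is sum(spatial^2) - time^2, so the identity
# above says exactly that (1, 2, ..., 24; 70) is a null vector.
assert sum(k * k for k in w) - 70 ** 2 == 0

# Alternating analogue for the 8 transverse dimensions of the superstring.
alt = sum((-1) ** (k - 1) * k * k for k in range(1, 9))
assert alt == -36 and 36 == 6 ** 2

# Square triangular numbers via the recurrence a_{n+1} = (a_n - 1)^2 / a_{n-1}.
a, b = 1, 36
seq = [a, b]
for _ in range(4):
    a, b = b, (b - 1) ** 2 // a
    seq.append(b)
print(seq)  # [1, 36, 1225, 41616, 1413721, 48024900]
```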
Signal Processing Stack Exchange is a question and answer site for practitioners of the art and science of signal, image and video processing.
Define comb function in terms of exponential function
I have one signal with $N$ samples and I want to downsample it by a factor of $M$. Essentially my downsampler is a comb function which keeps every $M^{th}$ sample while the others are set to zero, i.e., $$ \check{Y}_k=\sum_{n=0}^{N-1}\sum_{m=0}^{\frac{N}{M}-1}x[n]\delta(n-mM)e^{-j\frac{2\pi nk}{N}}\tag{1} $$ where $x[n]$ is the original signal, $\check{Y}_k$ is the spectrum of the downsampled signal, and $n$ indexes the time samples. How is the comb function $\delta(n-mM)$ defined in terms of exponential functions? i.e. $$\sum_{n=0}^{N-1}\sum_{m=0}^{\frac{N}{M}-1}\delta(n-mM)=\sum_{n=0}^{N-1}\sum_{m=0}^{\frac{N}{M}-1}e^{??}$$ Also, what is the definition of the comb function in the frequency domain? fft discrete-signals sampling Cali
The frequency domain representation of the discrete-time downsampling operation (i.e. $X_M(e^{j\omega})$, the $DTFT\{x[Mn]\}$, in terms of $X(e^{j\omega})$, the $DTFT\{x[n]\}$) can be most easily observed by the following approach. We will begin from the end, $x[Mn]$, and proceed to its pre-compressed form by first deriving the frequency domain relation between the compressed signal $x[Mn]$ and its expanded (pre-compressed) form $x_s[n]$, and then we shall show the spectral relation between an input signal and its sampled form $x_s[n]$ (which is also the expanded signal at the input of the compressor).
First consider a periodic pulse train $p_M[n] \triangleq \sum_{k=-\infty}^{\infty}{ \delta[n-kM]}$ whose period is $M$ and which is defined as: $$p_M[n]= \begin{cases} 1 ~~~~, ~~~ \text{n = 0, M, 2M, ..., kM , ...} \\ 0 ~~~~, ~~~ \text{otherwise} \\ \end{cases} \tag{1}$$ Then the signal $x[n]$ is first multiplied by this periodic pulse train $p_M[n]$ to produce $x_s[n]$, which selects the required samples as follows: $$x_s[n]=x[n]\cdot p_M[n] = \begin{cases} x[n] ~~~~, ~~~ \text{n = 0, M, 2M, ..., kM , ...} \\ 0 ~~~~, ~~~ \text{otherwise} \tag{2}\\ \end{cases}$$ Now, we shall take every $M^{th}$ sample of $x_s[n]$ and discard the rest, which is called the compression operation, graphically represented by: $$ x_s[n] \longrightarrow \boxed{\downarrow M} \longrightarrow x_s[Mn]=x[Mn]=x_M[n] \tag{3}$$ Note that even though the conversion between $x[n]$ and $x[Mn]$ is not invertible, the conversion between $x_s[n]$ and $x_s[Mn]$ is, as should be clear: $$ x_s[Mn]=x_{sM}[n] \longrightarrow \boxed{\uparrow M} \longrightarrow x_s[n] \tag{4}$$ i.e. the inverse is the expander operator, and we shall exploit it to easily show the spectral relation between $x_s[n]$ and $x_M[n]$, as it can be shown that: $$X_{s}(e^{j\omega}) = X_{sM}(e^{j\omega M}) \Longrightarrow X_{sM}(e^{j\omega}) = X_{s}(e^{j\omega/ M}) \tag{5}$$ Based on this we can argue (noting equation (3)) that $$ \boxed{ DTFT\{x[Mn]\} \triangleq X_M(e^{j\omega}) = X_{s}(e^{j\omega/ M}) } \tag{6}$$ where $X_{s}(e^{j\omega})$ is the DTFT of the sampled signal $x_s[n]$.
What comes next is the relation between the DTFTs of the signals $x_s[n]$ and $x[n]$ to complete the picture, which is a straightforward observation if we represent the periodic impulse train in another form, namely by its DFS (Discrete Fourier Series) representation (as is possible for any periodic discrete-time signal with period M): $$ p_M[n] \triangleq \sum_{k=-\infty}^{\infty}{ \delta[n-kM]} = \frac{1}{M} \sum_{k=0}^{M-1}{e^{j \frac{2\pi}{M}kn} } \tag{7}$$ Then we shall express $x_s[n]$ as: $$ x_s[n] \triangleq x[n]\cdot p_M[n] = x[n] \big( \frac{1}{M} \sum_{k=0}^{M-1}{e^{j \frac{2\pi}{M}kn} }\big) = \frac{1}{M} \sum_{k=0}^{M-1}{x[n]e^{j \frac{2\pi}{M}kn} } \tag{8}$$ Next we use the linearity and modulation properties of the DTFT operator to argue that: $$DTFT\{x_s[n]\} = DTFT\{\frac{1}{M} \sum_{k=0}^{M-1}{x[n]e^{j \frac{2\pi}{M}kn} } \} = \frac{1}{M} \sum_{k=0}^{M-1}{ DTFT \{ x[n]e^{j \frac{2\pi}{M}kn} \} } = $$ $$ \boxed{X_s(e^{j\omega})=\frac{1}{M} \sum_{k=0}^{M-1}{ X(e^{j \big(\omega -\frac{2\pi}{M}k \big)} ) } } \tag{9}$$ The final step is to merge (6) and (9) to produce the result: $$ \boxed{ X_M(e^{j\omega}) = X_s(e^{j\omega/M})=\frac{1}{M} \sum_{k=0}^{M-1}{ X(e^{j \big(\frac{\omega}{M} -\frac{2\pi}{M}k \big)} ) } = \frac{1}{M} \sum_{k=0}^{M-1}{ X(e^{j \big(\frac{\omega-2\pi k}{M} \big)} ) } } \tag{10} $$ I hope this is a clear way of relating the spectrum of the downsampled signal $x_M[n]=x[Mn]$ to that of the original signal $x[n]$. Also note that a more complete understanding of the spectral relation is only possible if you fully explore the relation with a graphical plot as well, which would clearly demonstrate the resulting frequency stretching towards the $\omega =\pi$ boundary by a factor of $M$, after which the role of aliasing becomes clear... Fat32
$\begingroup$ Just one comment: shouldn't Eq. (7) be $p_M[n]=\cdots=\frac{1}{M}\sum_{k=0}^{M-1}e^{j\frac{2\pi nk}{M}}$? $\endgroup$ – Cali Jul 1 '16 at 19:47
$\begingroup$ @Cali yes you are right! It should be M, not N... let me correct. $\endgroup$ – Fat32 Jul 1 '16 at 20:01
$\begingroup$ Then I think the same editing should be done in Eq. (8). However, a nice and detailed answer. Thank you! $\endgroup$ – Cali Jul 1 '16 at 20:16
$\begingroup$ yeees... latex itself is tiring to catch a typo... hope you understood it. I suggest you interpret the last equation (10) carefully. By adding M copies of scaled and shifted versions of $X(e^{j\omega})$ ... In particular observe the period of the scaled version... which is not $2\pi$ but $2\pi M$ $\endgroup$ – Fat32 Jul 1 '16 at 20:30
$\begingroup$ if there is a step you feel insecure about, please don't hesitate to ask it as a new question... ;-) $\endgroup$ – Fat32 Jul 1 '16 at 20:42
Related questions: Sampling of a continuous function: Kronecker's or Dirac's delta? oversampled coefficient for existing exponential smoothing Non-uniform sampling and over-sampling — how to combine?
Flat top sampling to a step shape signal Aliasing after downsampling Downsampling and filtering (convolution) Sinc interpolation formula for signal reconstruction in frequency domain from bipolar samples frequency spectrum of a sampled signal, PSD and power discussion Relation between the DTFT and CTFT in sampling- sample period isn't as the impulse train period Intuitively, why is the comb function the sampling function?
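To complement the derivation above, here is a small numerical sanity check of the comb identity (Eq. (7)) and of the aliasing relation (Eqs. (9)-(10)) in DFT form. This sketch is my own illustrative addition, not part of the original answer; it assumes an $N$ divisible by $M$ so that the shifts are an integer number of DFT bins.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 24, 3                      # illustrative sizes, with M dividing N
n = np.arange(N)

# Eq. (7): the comb equals the finite sum of complex exponentials.
comb = (n % M == 0).astype(float)
expo = sum(np.exp(1j * 2 * np.pi * k * n / M) for k in range(M)) / M
assert np.allclose(comb, expo)

# Spectral relation: the N-point DFT of x[n] * p_M[n] is the average of
# M copies of the DFT of x[n], each shifted by N/M bins (cf. Eqs. (9)-(10)).
x = rng.standard_normal(N)
X = np.fft.fft(x)
Xs = np.fft.fft(x * comb)
Xs_pred = sum(np.roll(X, k * N // M) for k in range(M)) / M
assert np.allclose(Xs, Xs_pred)
print("comb identity and aliasing relation verified")
```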
There is no continuous surjective multiplicative map from $M_n(\mathbb H)$ to $\mathbb H$
Let $\mathbb H$ denote the skew field of quaternions. I would like to prove that there does not exist any function $f:M_n(\mathbb H)\rightarrow \mathbb H$ for $n\geq 2$ that is continuous, surjective and multiplicative. I have been thinking about this problem for a while but I can't find any contradiction assuming that such a function does exist. I tried considering preimages for $1,i,j,k$ and toying with them; I also tried inferring the values at some specific matrices (the $\lambda I_n$, the nilpotent matrices, etc...) but I couldn't reach any conclusion. Mostly, I fail to see how to make use of the continuity here. Would somebody have a hint as to how to proceed with this problem? linear-algebra matrices quaternions Suzet
$\begingroup$ How is the situation different from the case where we replace $\mathbb{H}$ with $\mathbb{R}$ or $\mathbb{C}$? $\endgroup$ – Vincent
$\begingroup$ Do you know about representations of Lie groups? If so, restrict such a map to get a representation of $GL_n(\mathbb H)$ over $\mathbb H$. Tensoring over $\mathbb C$ gives a 2-dimensional complex representation of $GL_{2n}(\mathbb C)$. This must be reducible if $n > 1$, which contradicts surjectivity. $\endgroup$ – Kimball
$\begingroup$ There used to be a very good answer, due to ThorstenK, to the question in my second comment. Somehow it disappeared so I will reproduce it here. There is a multiplicative non-zero map by composing the embedding of $Mat(n, \mathbb{H})$ into $Mat(4n, \mathbb{R})$ with the determinant and the embedding of $\mathbb{R}$ into $\mathbb{H}$. $\endgroup$
$\begingroup$ It would be nice to know the subgroup structure of $GL(n, \mathbb{H})$. By dimension considerations the kernel of the map $f: GL(n, \mathbb H) \to GL(1, \mathbb H) \cong SU(2) \times \mathbb{R}_{>0}$ must be pretty big, and when the only pretty big subgroup of $GL(n, \mathbb{H})$ turns out to be $GL(n, \mathbb{H})$ itself we get a contradiction with surjectivity. However I could not find much information on $GL(n, \mathbb{H})$ as a group on the internet, and the somewhat similar group $GL(n, \mathbb{C})$ does have a fairly big subgroup in the form of $SL(n, \mathbb{C})$, so that doesn't bode well... $\endgroup$
$\begingroup$ @Vincent Thank you Vincent for these explanations, I understand the argument now. $\endgroup$ – Suzet
I will work out the case $n = 2$ in detail. The same proof works for general $n$, I just want to save the labor of typing $n$ by $n$ matrices... Thus assume that $f:\operatorname M_2 = \operatorname M_2(\Bbb H) \rightarrow \Bbb H$ is a surjective multiplicative map. Lemma 1. Whenever $A\in \operatorname M_2$ is invertible, the image $f(A)$ is also invertible. Proof: If $A$ is invertible, then multiplication by $A$ is a bijection on $\operatorname M_2$. Hence $f(A)$ cannot be zero, otherwise $f$ would be constantly zero, contradicting surjectivity. Lemma 2. The map $f$ restricted to $\operatorname{GL}_2 = \operatorname{GL}_2(\Bbb H)$ gives a group homomorphism from $\operatorname{GL}_2$ to $\Bbb H^\times$. In particular, we have $f\begin{pmatrix} 1 & \\ & 1\end{pmatrix} = 1$. Proof: This is clear from Lemma 1. We want to arrive at a contradiction, thereby showing that such an $f$ does not exist. Assumption 3. Without loss of generality, we may assume that $f\begin{pmatrix} & 1\\1 &\end{pmatrix} = 1$.
Note: for general $n$, we have the canonical embedding of the symmetric group $S_n$ into $\operatorname{GL}_n$, and this assumption becomes: $f(\sigma) = 1$ for all $\sigma \in S_n$. Why we can make this assumption: we have $f(\sigma)^{n!} = f(\sigma^{n!}) = 1$ by Lemma 2, hence by changing $f$ to $f^{n!}$, which is still surjective multiplicative, we may make this assumption. From now on, we always make Assumption 3. Lemma 4. We have $f\begin{pmatrix}1 & \\ & \lambda\end{pmatrix} = f\begin{pmatrix}\lambda & \\ & 1\end{pmatrix}$ for any $\lambda \in \Bbb H$. Proof: This comes from the identity $\begin{pmatrix}1 & \\ & \lambda\end{pmatrix}\begin{pmatrix} & 1\\1 & \end{pmatrix} = \begin{pmatrix} & 1\\1 & \end{pmatrix}\begin{pmatrix}\lambda & \\ & 1\end{pmatrix}$ and Assumption 3. Lemma 5. We have $f\begin{pmatrix}1 & \\ & z\end{pmatrix} = 1$ for all $z \in \Bbb H^\times$ with $|z| = 1$. Proof: For any $\lambda, \mu \in \Bbb H^\times$, we have: $$f\begin{pmatrix}1 & \\ & \lambda\mu\end{pmatrix} = f\begin{pmatrix}1 & \\ & \lambda\end{pmatrix}f\begin{pmatrix}1 & \\ & \mu\end{pmatrix}=f\begin{pmatrix}\lambda & \\ & 1\end{pmatrix}f\begin{pmatrix}1 & \\ & \mu\end{pmatrix}=f\begin{pmatrix}\lambda & \\ & \mu\end{pmatrix} = f\begin{pmatrix}1 & \\ & \mu\end{pmatrix}f\begin{pmatrix}\lambda & \\ & 1\end{pmatrix} = f\begin{pmatrix}1 & \\ & \mu\end{pmatrix}f\begin{pmatrix}1 & \\ & \lambda\end{pmatrix} = f\begin{pmatrix}1 & \\ & \mu\lambda\end{pmatrix}.$$ Therefore we have $f\begin{pmatrix}1 & \\ & \lambda\mu\lambda^{-1}\mu^{-1}\end{pmatrix} = 1$. But any $z \in \Bbb H^\times$ with $|z| = 1$ can be written as $\lambda\mu\lambda^{-1}\mu^{-1}$ for some $\lambda, \mu \in \Bbb H^\times$. Lemma 6. For any $a\in \Bbb R$, the value $f\begin{pmatrix}a & \\ & a\end{pmatrix}$ is real. Proof: Since the matrix $A = \begin{pmatrix}a & \\ & a\end{pmatrix}$ is in the center of $\operatorname M_2$, we have $f(A)f(B) = f(AB) = f(BA) = f(B)f(A)$ for all $B\in \operatorname M_2$. The surjectivity of $f$ then implies that $f(A)$ lies in the center of $\Bbb H$, namely $\Bbb R$. Assumption 7. Without loss of generality, we may assume that $f\begin{pmatrix}1 & \\ & a\end{pmatrix}$ is real for all $a \in \Bbb R$. Why we can make this assumption: we already have $$\left(f\begin{pmatrix}1 & \\ & a\end{pmatrix}\right)^2 = f\begin{pmatrix}1 & \\ & a\end{pmatrix}f\begin{pmatrix}a & \\ & 1\end{pmatrix} = f\begin{pmatrix}a & \\ & a\end{pmatrix}\in \Bbb R.$$Therefore, by changing $f$ to $f^2$, we may make this assumption (while still keeping all required properties of $f$, including Assumption 3). From now on, we always make Assumptions 7. Lemma 8. We have $f\begin{pmatrix}1 & \\ & \lambda\end{pmatrix}\in \Bbb R$ for all $\lambda \in \Bbb H$. Proof: The case $\lambda = 0$ is covered by Assumption 7. For $\lambda \neq 0$, by Lemma 5 and Assumption 7, we have: $f\begin{pmatrix}1 & \\ & \lambda\end{pmatrix} = f\begin{pmatrix}1 & \\ & |\lambda|\end{pmatrix}f\begin{pmatrix}1 & \\ & \frac \lambda {|\lambda|}\end{pmatrix}\in \Bbb R$. Lemma 9. For any $\alpha \in \Bbb H^\times$, we have $f\begin{pmatrix}1 & \alpha \\ & 1\end{pmatrix} = f\begin{pmatrix}1 & 1\\ & 1\end{pmatrix}$. Proof: This comes from the identity $\begin{pmatrix}1 & \\ & \alpha^{-1}\end{pmatrix}\begin{pmatrix}1 & 1\\ & 1\end{pmatrix}\begin{pmatrix}1 & \\ & \alpha\end{pmatrix} = \begin{pmatrix}1 & \alpha\\ & 1\end{pmatrix}$ and the fact that $f\begin{pmatrix}1 & \\ & \alpha\end{pmatrix}$ is a real number, hence is in the center of $\Bbb H$. Lemma 10. 
For any $\alpha \in \Bbb H$, we have $f\begin{pmatrix}1 & \alpha \\ & 1\end{pmatrix} = 1$. Proof: Let $h$ be the value of $f\begin{pmatrix}1 & 1 \\ & 1\end{pmatrix}$. By Lemma 9, we have $h = f\begin{pmatrix}1 & 2 \\ & 1\end{pmatrix} = h^2$. By Lemma 1, we get $h = 1$ and Lemma 9 tells us that $f\begin{pmatrix}1 & \alpha \\ & 1\end{pmatrix} = 1$ for any $\alpha \in \Bbb H^\times$. The case $\alpha = 0$ is Lemma 2. Conclusion. The map $f$ takes values in $\Bbb R$ on $\operatorname M_2$, hence is not surjective. We obtain a contradiction. Proof: Just note that any matrix in $\operatorname M_2$ can be written as a product of matrices of the form $\begin{pmatrix}1 & \alpha \\ & 1\end{pmatrix}$, $\begin{pmatrix} & 1 \\1 & \end{pmatrix}$, $\begin{pmatrix}1 & \\ & \lambda\end{pmatrix}$ with $\alpha, \lambda \in \Bbb H$ (by performing "row and column operations"). Final remarks. As claimed in the very beginning, the proof adapts without difficulty to general $n$. The continuous assumption is not used. All arguments are algebraic. Since it's a proof by contradiction, it doesn't show that any multiplicative map from $\operatorname M_n(\Bbb H)$ to $\Bbb H$ has image in $\Bbb R$. But it is true that any group homomorphism from $\operatorname{GL}_n(\Bbb H)$ to $\Bbb C^\times$ must factorize through $\Bbb R^\times_+$, as the abelianization of $\operatorname{GL}_n(\Bbb H)$ is isomorphic to $\Bbb R^\times_+$. WhatsUpWhatsUp $\begingroup$ Very nice proof, great use of proof by contradiction. I only do not agree with the last final remark: the map that you describe does give non-negative real entries, as described here: math.stackexchange.com/q/3477987/101420. It would be nice to see if an example exists of a non-surjective multiplicative map with actual non-real elements in the image. $\endgroup$ $\begingroup$ @Vincent After thinking more about it, I think there is no example of group homomorphism from $\operatorname{GL}_n(\Bbb H)$ to $\Bbb C^\times$ taking values outside $\Bbb R$. This is because the abelianization of $\operatorname{GL}_n(\Bbb H)$ is isomorphic to $\Bbb R^\times_{> 0}$. More precisely, the homomorphism $\Bbb H^\times \rightarrow \operatorname {GL}_n(\Bbb H)$ sending $z$ to $diag(z, 1, \dotsc, 1)$ induces an isomorphism between the abelianizations. This is not hard to prove, and is in fact known to Dieudonné: it's Theorem 1 here: numdam.org/article/BSMF_1943__71__27_0.pdf $\endgroup$ – WhatsUp $\begingroup$ Hmm I just concluded that image outside $\mathbb{R}$ is possible, see the last remark in my answer. Put more simply: take a homomorphism to $\mathbb{R}_+$, take the log to make it an homomorphism to the additive group $\mathbb{R}$ and wind that around the unit circle in $\mathbb{C}$... $\endgroup$ $\begingroup$ @Vincent Ah, of course, there are non-trivial homomorphisms from $\Bbb R^\times_+$ to $\Bbb C^\times$... It just has to factor through $\Bbb R^\times_+$. $\endgroup$ $\begingroup$ Thanks for the acceptance. I believe that we all enjoyed solving and discussing this problem. $\endgroup$ This is a modified version of my earlier answer with some gaps filled up, hence the weird numbering a Let $D \subset Mat(2, \mathbb{H})$ denote the group of invertible diagonal matrices. Let $L$ be the group of lower triangular matrices with $1$s on the diagonal and let $U$ be the group of upper triangular matrices with $1$s on the diagonal. Let $G \subset GL(2, \mathbb{H})$ the set of of matrices $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ such that $a \neq 0$ an $d - ca^{-1}b \neq 0$. 
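As an illustrative aside (this sketch is my own addition and not part of the thread), the embedding-plus-determinant construction mentioned in the comments can be checked numerically. Using the standard $2\times 2$ complex representation of a quaternion, the complex adjoint of a quaternionic matrix is an algebra homomorphism, so its determinant is multiplicative; numerically it also comes out real and nonnegative, which is why this multiplicative map cannot be surjective onto $\mathbb H$.

```python
import numpy as np

rng = np.random.default_rng(1)

def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qadd(p, q):
    return tuple(x + y for x, y in zip(p, q))

def matmul_H(P, Q):
    """Product of two 2x2 quaternionic matrices (entries stored as 4-tuples)."""
    return [[qadd(qmul(P[i][0], Q[0][j]), qmul(P[i][1], Q[1][j])) for j in range(2)] for i in range(2)]

def block(q):
    """Standard 2x2 complex matrix representing the quaternion q."""
    a, b, c, d = q
    z, w = a + 1j*b, c + 1j*d
    return np.array([[z, w], [-np.conj(w), np.conj(z)]])

def adjoint(Q):
    """4x4 complex matrix associated to a 2x2 quaternionic matrix."""
    return np.block([[block(Q[i][j]) for j in range(2)] for i in range(2)])

P = [[tuple(rng.standard_normal(4)) for _ in range(2)] for _ in range(2)]
Q = [[tuple(rng.standard_normal(4)) for _ in range(2)] for _ in range(2)]

# Entrywise replacement by 2x2 blocks is an algebra homomorphism ...
assert np.allclose(adjoint(matmul_H(P, Q)), adjoint(P) @ adjoint(Q))
# ... so g -> det(adjoint(g)) is multiplicative, and numerically real and nonnegative.
dP, dQ = np.linalg.det(adjoint(P)), np.linalg.det(adjoint(Q))
assert np.isclose(np.linalg.det(adjoint(matmul_H(P, Q))), dP * dQ)
assert abs(dP.imag) < 1e-8 and dP.real >= -1e-8
print(dP.real, dQ.real)
```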
The set $G$ is open and dense in $GL(2, \mathbb{H})$ so that once we understand the image $f(G)$ of $G$ under a continuous multiplicative map, we also understand the image of $GL(2, \mathbb{H})$. Alternatively if you don't like topology you can show that every matrix in $GL(2, \mathbb{H})$ can be written as a product of matrices from $G$. The reason for working with $G$ is that every $g \in G$ can be decomposed as $g = ldu$ for some $l \in L, d \in D, u \in U \qquad (1)$ Let's focus on $D$ first. It has two subgroups $D_1$ and $D_2$ consisting respectively of the diagonal matrices with a 1 in the lower right corner and those with a 1 in the upper left corner. As a group both $D_1$ and $D_2$ are isomorphic to $\mathbb{H}^*$ of course. We draw some conclusions from the group structure of $\mathbb{H}^*$. Lemma 0: The group $\mathbb{H}^*$ decomposes as a direct product of topological groups $\mathbb{H}^* \cong SU(2) \times \mathbb{R}_+$ where the cannonical projection onto the second term is just the familiar modulus operator $|.|$ and the subgroup $SU(2)$ appears as the set of elements of norm $1$. Lemma 0.5 The group $SU(2)$ is almost simple: its only normal subgroups are $\{1\}, \{-1, +1\}$, and $SU(2)$ itself. The quotient $SU(2)/\{-1, 1\}$ is isomorphic to $SO(3)$ which is simple and is not isomorphic to any subgroup of $\mathbb{H}$. Corollary 0.75: every group homomorphism from $\mathbb{H}^*$ to itself maps the $SU(2)$-subgroup of norm 1 elements in the domain either bijectively onto the $SU(2)$-subgroup of norm 1 elements in the codomain or onto the one element subgroup $\{1\}$ in the codomain. Lemma 1, modified: let $f$ be a multiplicative map from $D \to \mathbb{H}^*$. Then for at least one of the two subgroups $D_1, D_2$ it maps the $SU(2)$-subgroup of norm 1 elements inside that subgroup to $\{1\}$. Proof: Let $y_1, x_y$ be two non-commuting elements of $SU(2)$ in the codomain $\mathbb{H}$. If $f$ does not map the norm 1 elements in $D_1$ to $1$ then, by corollary 0.75 there is an $x_1 \in D_1$ with $f(x_1) = y_2$. Similarly if $f$ does not map the norm 1 elements in $D_2$ to $1$ then there is a $x_2$ with $f(x_2) = y_2$. Now $x_1x_2 = x_2x_1$ since every element in $D_1$ commutes with every element in $D_2$ but $f(x_1)f(x_2) \neq f(x_2)f(x_1)$, a contradiction. The question is now what happens to the $\mathbb{R}_+$ subgroup of that group ($D_1$ or $D_2$). I thought that it must be mapped to $\mathbb{R}_+$ in $\mathbb{H}^+$ but that is incorrect, $D_1 \cong \mathbb{H}^*$ can be mapped into a spiral via e.g. $f(x) = e^{(a + bi)\log(|x|)}$ while still sending $SU(2)$ to $\{1\}$, the latter condition being equivalent to $f(x) = f(y)$ whenever $|x| = |y|$ as in lemma 0. However what we do know is that if the restriction of $f$ maps the $SU(2)$ part of $D_i$ (for some $i \in \{1, 2\}$) to $1$ and hence only depends on its restriction to the $\mathbb{R}_+$ part then $f(x)f(y) = f(y)f(x)$ for every $x,y \in D_i$. It follows that $f(D_i)$ is contained in a two dimensional subalgebra $\mathbb{C}'$ of $\mathbb{H}$ isomorphic to $\mathbb{C}$. Now let $f: GL(2, \mathbb{H}) \to \mathbb{H}^*$ be a multiplicative map and let $J = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. Since $J^2 = 1$ we have that either $f(J) = 1$ or $f(J) = -1$. 
But since $JD_1J = D_2$ and vice versa, we conclude from Lemma 1 and the subsequent reasoning above that:
Lemma 2, modified: every multiplicative map $f: GL(2, \mathbb{H}) \to \mathbb{H}^*$ maps $D$ into $\mathbb{C}' \backslash \{0\}$ for some 2-dimensional subalgebra $\mathbb{C}' \subset \mathbb{H}$. Moreover $f(D) = f(D_1)$, hence if $f$ is continuous the image $f(D)$ is connected and at most one dimensional. Progress! Before moving on to $L$ and $U$ we collect some corollaries of this result.
Corollary 2.5: Let $f: GL(2, \mathbb{H}) \to \mathbb{H}^*$ be a multiplicative map and $x = \begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix} \in D$. Then $f(x) = f(|x|)$ where $|x| \in D$ is the matrix whose entries are the absolute values of the entries in $x$. Proof: by Lemma 2 the space $f(D)$ is too small to contain $SU(2)$, so both the copy of $SU(2)$ inside $D_1$ and that in $D_2$ are mapped to 1. The result then follows from Lemma 0.
Corollary 3 (same result, new proof): Let $f: GL(2, \mathbb{H}) \to \mathbb{H}^*$ be a multiplicative map and $q \in D$. Then $f(qxq^{-1}) = f(x)$ for all $x \in GL(2,\mathbb{H})$. Proof: we distinguish two cases. Either $f(D) \subset \mathbb{R}$ or it isn't. In the first case we have that $f(q)$ commutes with $f(x)$ for every $x \in GL(2, \mathbb{H})$ and we have $f(qxq^{-1}) = f(q)f(x)f(q^{-1}) = f(x)f(q)f(q^{-1}) = f(x)$. In the second case we have a $y \in D$ such that $f(y) \not\in \mathbb{R}$. Let $|y|$ be as in the previous corollary and let $r \in D$ be the matrix whose entries are the square roots of the corresponding entries of $|y|$. We see from the previous corollary that $f(r)^2 = f(y)$. Let $s = r(JrJ)$. Then $s$ is a real scalar multiple of the identity matrix but $f(s) = f(r)^2 = f(y) \not\in \mathbb{R}$. Since $s$ is a real scalar multiple of the identity matrix we have that $sx = xs$ and hence $$f(s)f(x) = f(x)f(s) \qquad(1.5)$$ for every $x \in GL(2, \mathbb{H})$. But since $f(s)$ is a non-real element of $\mathbb{C}'$, with $\mathbb{C}'$ as in Lemma 2, we find that (1.5) implies that $f(x) \in \mathbb{C}'$ for every $x \in GL(2, \mathbb{H})$. It then follows from Lemma 2 that $f(q)f(x) = f(x)f(q)$ for every $q \in D$ and the claim of Corollary 3 follows.
From this point on we return to the original argument. We use Corollary 3 to understand the action of $f$ on $U$.
Lemma 4: let $f: GL(2, \mathbb{H}) \to \mathbb{H}^*$ be a multiplicative map and let $u_1, u_2 \in U \backslash \{I\}$. Then $f(u_1) = f(u_2)$. Proof: $u_i = \begin{pmatrix} 1 & b_i \\ 0 & 1 \end{pmatrix}$ for $i = 1, 2$ where $b_1, b_2$ are non-zero, hence invertible, elements of $\mathbb{H}$. In general we have $$\begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix} \begin{pmatrix} 1 & b \\ 0 & 1 \end{pmatrix}\begin{pmatrix} a^{-1} & 0 \\ 0 & d^{-1} \end{pmatrix} = \begin{pmatrix} 1 & abd^{-1} \\ 0 & 1 \end{pmatrix} \qquad (2)$$ Taking $a = b_2, b = d = b_1$ in (2) we obtain $qu_1q^{-1} = u_2$ where $q \in D$ is the diagonal matrix $\begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix}$ from (2). The claim $f(u_1) = f(u_2)$ then follows from Corollary 3.
Corollary 5: let $f: GL(2, \mathbb{H}) \to \mathbb{H}^*$ be a continuous multiplicative map, then $f(U) = \{1\}$. Proof: by Lemma 4 we have that $f$ takes only one value on $U \backslash \{I\}$ and hence by continuity it should take this same value on $I \in U$ as well. But this means we know the unique value that $f$ takes on $U$, because $f(I) = 1$ for any multiplicative map.
In a completely analogous way we get:
Lemma 6: let $f: GL(2, \mathbb{H}) \to \mathbb{H}^*$ be a continuous multiplicative map, then $f(L) = \{1\}$. Now we can prove our main result.
Theorem 7, modified: let $f: Mat(2, \mathbb{H}) \to \mathbb{H}$ be a continuous multiplicative map, then either $f$ maps every element of $Mat(2, \mathbb{H})$ to zero, or it maps invertible matrices to a one-dimensional multiplicative Lie subgroup of $\mathbb{H}^*$ contained in a two-dimensional subalgebra $\mathbb{C}'$ of $\mathbb{H}$ isomorphic to $\mathbb{C}$. Proof: We distinguish two cases: either $f(GL(2, \mathbb{H})) \subset \mathbb{H}^*$ or there is some $g \in GL(2, \mathbb{H})$ with $f(g) = 0$. In the latter case we find that $f$ is identically zero, as $f(x) = f(xg^{-1}g) = f(xg^{-1})f(g) = f(xg^{-1})\cdot 0 = 0$ for every $x \in Mat(2, \mathbb{H})$. In the first case, let $g \in GL(2, \mathbb{H})$. As in the text preceding (1) we may assume that $g \in G$, with $G$ defined there. From (1), Corollary 5 and Lemma 6 we see that there is a $d \in D$ such that $f(g) = f(d)$. Lemma 2 then gives us the claim of the theorem.
I like Theorem 7 because it tells us that yes, non-zero maps may exist, but only under very severe restrictions. To get the full result we only need:
Lemma 8: The set $GL(2, \mathbb{H})$ of invertible matrices is dense (in the topological sense) in the real vector space $Mat(2, \mathbb{H})$ of all matrices.
Remark 9: I think that every group homomorphism $f: \mathbb{R}_+ \to \mathbb{H}^*$ is of the form $x \mapsto \exp(\alpha \log x)$ for some quaternion $\alpha$. (Here $\exp$ is defined by the same power series as always.) Reading my proof with this in mind, we find that for a non-zero continuous multiplicative $f$ there is an $\alpha \in \mathbb{H}$ such that for each $g = \begin{pmatrix} a & b \\ c& d \end{pmatrix} \in G$ we have $f(g) = \exp(\alpha \log(|a||d - ca^{-1}b|))$. Now we can recognize the expression inside the $\log$ as the determinant of the $4 \times 4$ complex matrix $g'$ (viewed as a $2 \times 2$ matrix of $2 \times 2$ blocks) associated to $g$ in the standard way (i.e. as in your linked question). By continuity we then conclude that $f(g) = \exp(\alpha \log(\det(g')))$ for every $g \in Mat(2, \mathbb{H})$. This then gives a nice classification of all possible $f$ and answers the question about the existence of $f$ with non-real image (e.g. take $\alpha = 2\pi i$).
$\begingroup$ Thank you for your proof Vincent. I noticed no flaw in it, so I believe it to be true. You proved a stronger statement than the problem I initially asked in the case of $n=2$. Using your Theorem $7$, and if Gauß reduction works over quaternionic matrices (of which I am not exactly sure), we can actually drop the continuity hypothesis in my initial problem: any multiplicative map from $M_n(\mathbb H)$ (for now, $n=2$) to $\mathbb H$ is not surjective. Indeed, with Gauß reduction we could argue that any matrix is equivalent to $J_r$ (the diagonal matrix with $r$ times the entry $1$ then all $0$)[...] $\endgroup$ – Suzet
$\begingroup$ [...] and then, by multiplicativity and your Theorem $7$, we see that all such maps could only take values inside the union of at most $n$ half-lines inside $\mathbb H$. Each half-line would be directed by the image of $J_r$, for $r$ varying from $1$ to $n$. This would be a really nice result to me; and it could work for general $n$ provided that we can extend your Theorem $7$ and that we can apply Gauß reduction on quaternionic matrices.
$\endgroup$
$\begingroup$ Unfortunately I noticed that there is a flaw in Lemma 1, although I believe that with some extra work we can get back on track from Corollary 3 onwards. I'll try and edit the answer tonight (European time). $\endgroup$
Related questions: Determinant of a $2 \times 2$ complex block matrix is nonnegative Do multiplicative maps of matrices factor through determinants? Completely multiplicative matrix norm for certain semigroups of matrices. Rational matrix having roots of every degree Problem of existing matrices Checking on the injectivity, surjectivity and bijectivity of this function Are there Hermitian Unitary matrices U and V generating $\mathbb{Z}/2 \ast \mathbb{Z}/2$? Minimal polynomial of $I_n+aI(i,j)$ Ring homomorphism from matrix ring to smaller ring What values can the given determinant not take under the given conditions?
An analysis of the effects of territory properties on population behaviors and evacuation management during disasters using coupled dynamical systems
Valentina Lanza1, Edwige Dubos-Paillard2, Rodolphe Charrier3, Damienne Provitolo4 & Alexandre Berred1
One of the major current challenges in the field of security and safety of populations is to advance further in the understanding and the ability to anticipate their behaviors when faced with threats or disasters. Several factors, such as the hazard properties or the culture of risk, can influence people's behavior during a disaster. In this paper, we assume that the spatial configuration of the site where the disaster takes place also has a significant impact on collective behaviors. For this, we use a mathematical model based on meta-population networks in order to design realistic evacuation scenarios. This model, called the Alert-Panic-Control model, takes into account, on the one hand, the temporal dynamics of collective behaviors when faced with a disaster and, on the other hand, the spatial context, considered in terms of site maximum capacity. From the results of an in situ experiment carried out in 2019 with the population of Le Havre (France) concerning an industrial accident hypothesis, and in particular from the different evacuation paths chosen by the respondents, three scenarios for the evacuation of the area located below street level, in front of the Niemeyer Cultural Centre, were built. The results quantify whether or not the contagion of panic has a major impact on an evacuation constrained by specific territorial properties. Overly narrow paths can cause panic phenomena due to bottlenecks. The results also highlight that the function of the refuge places (a recreational function in this case) must be taken into account, insofar as such places can gather many people and thus impede the evacuation of the population exposed to the danger.
Risk and disaster prevention has become a major societal issue over the past years. During the last decades, the number of disasters has increased and major crises have revealed the complexity of these events. Besides being a physical, natural or technological event, a catastrophe also has human, economic, financial, social, environmental and cultural effects. Modern societies, whatever their level of development, are still unprepared to face the complexity of disasters, and in general the population does not know how to behave in such situations. This partial ignorance of human behavior in the face of catastrophic events is not peculiar to the population and/or decision-makers alone. It also reflects the difficulties researchers face in understanding the range of behaviors adopted during a disaster (Crocq 2013), their sequence, dynamics and interdependence (Provitolo et al. 2015). Research in geography shows that the influence of territorial and spatial contexts must be taken into account in order to better understand the collective behavioral dynamics during a catastrophic event (Dubos-Paillard et al. 2021; Provitolo et al. 2020). Geographers consider that space is not a neutral support. Produced by societies, it is heterogeneous and anisotropic. As a result, it constrains the adopted behaviors, if only by the topography and the organisation of the buildings and the infrastructures in an urban area. In urban areas where the spatial extent of the disaster is large, the variety of observed behaviors may be wide. This variety also depends on the location, the intensity of the event, the social group, etc. (Provitolo et al.
2015; Fischer 1998). Thus, for the same kind of disaster, people's responses can differ. Moreover, during a disaster, individuals rarely adopt the same behavior throughout the whole event. Indeed, we often observe a sequence of several behaviors. Nevertheless, researchers have little empirical information about the real evolution of population response, because of the difficulty of collecting real-time observations and analysing human reactions during disasters. Therefore, the main sources of information about the behaviors adopted during a disaster are interviews and surveys carried out with operational actors, residents and victims after (or before) a disaster and in a specific territory. In order to advance further in this research field, an interdisciplinary approach has been undertaken under the Com2SiCa research program. It combines an innovative experimental methodology with a mathematical modeling based on the APC (Alert-Panic-Control) model applied to a network (Lanza et al. 2021). The experiment consisted in the implementation of an in situ simulation based on a sound immersion in a real place where the occurrence of a disaster was plausible (Lago et al. 2022). During the sound immersion, interviewees were asked to react to what they heard in the soundtrack. This made it possible to observe and record participants' behaviors in real time, to monitor the stress level before and after the sound immersion, and to get information about the escape routes chosen during the experience. In the literature, a mathematical model describing the temporal dynamics of collective behaviors when faced with a disaster has been proposed (Verdière et al. 2014; Cantin et al. 2016). In order to consider the influence of the territorial properties, in previous works we modeled the geographical area under study as a network (Lanza et al. 2021; Cantin 2017). The territory is therefore subdivided into different areas that correspond to the nodes of the network. By exploiting a meta-population approach, each node hosts a sub-population whose dynamics is governed by an APC model. Moreover, this sub-population can move to the adjacent nodes of the network and influence their dynamics. The aim of this work is to exploit the information collected during the surveys as an input to the mathematical model. Our objective is not to identify parameters from data in order to reproduce the evolution of a complex situation subject to many hazards. Our purpose here is to exploit the information on the trajectories adopted in the stressful survey situation to design several realistic evacuation networks and scenarios. In particular, we are interested in investigating to what extent the spatial context and the urban functions influence the individual and collective behavioral dynamics. We show that our approach and our mathematical model make it possible to evaluate how behavioral dynamics impact evacuation paths in terms of hazard potential.
Field surveys to identify behavioral responses and escape trajectories in stressful situations
Surveys protocol
In 2018 and 2019, two in situ experiments (Lago et al. 2022) were conducted: the first one on the seafront of Nice (France), where a tsunami hazard scenario was investigated, and the second one in Le Havre (France), where the occurrence of a major technological catastrophic event was considered. The surveys carried out aimed to immerse the participants in two distinct disaster scenarios, one of natural origin, the other of technological origin. Their common feature is to simulate the sudden, rapid occurrence, with little or no warning signs, of a major event. In this paper we mainly focus on the survey in Le Havre. The in situ experiment in Le Havre took place in two different parts of the city: at the seaside and in the city center, near the Niemeyer Cultural Centre. This place is of particular importance since it is a semi-closed esplanade with buildings open to the public (a city library, Le Volcan theater and a restaurant). Moreover, it is below street level and surrounded by residential buildings, see Fig. 1. Three different exits can be taken in order to go up to street level: via a narrow spiral footbridge it is possible to arrive at a small terrace with some restaurants and a porch (denoted by B in Fig. 1), while a larger staircase and an access ramp lead to Louis Brindeau Street (denoted by C in Fig. 1) and Place Perret square. Because this area lies below street level, people there react without being aware of the events that may be occurring in the city above. The difficulty of identifying the origin of a loud blast is likely to increase the stress, because the interpretations can differ from one person to another (terrorist attacks, industrial explosion, collapse of a building, etc.). When people cannot clearly identify the source of the danger, imitation processes are often important and can lead to collective panic (Helbing et al. 2002; Moussaïd 2010).
Cross-sectional view of the Niemeyer Cultural Centre area. We can note the two main exits from the basement: the spiral footbridge at the center towards the restaurants area and a porch (B), and the large staircase on the right towards Louis Brindeau Street (C)
The purpose of these experiments was to analyze people's reactions when facing a simulated danger, and to observe and record the different behavioral sequences of each interviewee immersed in a scenario of a sudden, unforeseen (without pre-warning signs) disaster. The investigation seeks to provide the keys to understanding the questionings, the reasonings and the behaviors that one might expect in such situations. Between in situ simulation and interview, the investigation was structured in three steps (see Fig. 2): visual immersion, sound immersion, and debriefing of the experience. In particular, the second step consisted in a role-play scenario, where the interviewee was asked to listen to an audio track and react in the most natural way. The sound immersion aims to project the interviewee into a dynamic and fictitious (but credible) situation of a sudden and unforeseen disaster. During this immersion, the person, depending on his/her emotional response, which may be more or less strong, reacts to the situation by adopting different behaviors. During this phase, no specific instructions were given to the volunteer except to listen carefully to the audio track.
He/she reacts as he/she wishes; the researcher does not ask questions or suggest adopting a particular behavior.
Steps and purposes of the survey protocol (Lago et al. 2022)
Categorization of human behaviors during a catastrophic event
The analysis of data from individual interviews with residents has enabled us to identify nearly twenty reactions (panic or reflective flight, mutual aid, seeking shelter, or simply moving away from the danger zone, to give just a few examples) (Dubos-Paillard et al. 2021; Tricot et al. 2021). As this diversity of behavioral reactions cannot be integrated into mathematical models, we have found a consensus between thematicians and modelers while keeping the detail of the information obtained from the surveys. We have thus categorized this behavioral diversity on the basis of two criteria used in emotional psychology to qualify reactions: the variables of emotional load and emotional regulation. The first takes into account the level of stress and nervousness, while the second refers to the ability to control this excess of emotions (Russell 1980). These two key variables allowed us to categorize the diversity of behavioral responses observed during the survey into three meta-behaviors: alert, controlled and panic behaviors, presented below.
Alert behaviors (A) are micro-behaviors that can be observed from a motor point of view (e.g., a startle or a rapid eye movement to scan what is happening in the near visual environment). This behavior marks a transition from the behavior that was appropriate for the current activity (e.g., jogging or driving to work). In an alert state, the person's emotional charge is weak, owing to the uncertainty of the situation. This means that there has not yet been a significant rise in stress.
Control behaviors (C) have the common feature of being reflective behaviors where the emotional load, more or less strong, is regulated. By regulating their emotions, the person seeks to adapt their reactions to the context of the disaster. These reactions can be very varied. It is possible to observe behaviors such as controlled flight, taking shelter, and mutual aid, as well as less virtuous behaviors, such as theft or looting. In addition, it should be noted that not everyone behaves in the same way when faced with an identical situation. During an explosion, for example, some people may seek shelter in the nearest building, while others may instead prefer to flee and return home.
Panic behaviors (P) are uncontrolled behaviors where fear-related emotions have taken over. Panic behavior involves a strong emotional charge and weak regulation, ineffective in regaining a controlled state. Different behaviors can signify a state of panic: panic flight or, on the contrary, stupor (Crocq 2013).
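To make the two-criterion categorization concrete, the following schematic sketch maps (emotional load, regulation) pairs to the three meta-behaviors. It is purely illustrative: the numerical scales, thresholds and example values are assumptions of mine, not part of the authors' protocol.

```python
def meta_behavior(emotional_load, regulation):
    """Schematic mapping of (emotional load, regulation) to a meta-behavior.

    Both inputs are taken in [0, 1]; the 0.5 thresholds are illustrative only.
    """
    if emotional_load < 0.5:
        return "Alert (A)"      # weak emotional charge, situation still uncertain
    if regulation >= 0.5:
        return "Control (C)"    # strong load but regulated (controlled flight, sheltering, mutual aid)
    return "Panic (P)"          # strong load, weak regulation (panic flight or stupor)

# Example observed reactions recast in this scheme (illustrative values):
print(meta_behavior(0.2, 0.9))  # startle, scanning the surroundings -> Alert (A)
print(meta_behavior(0.8, 0.7))  # reasoned flight towards the staircase -> Control (C)
print(meta_behavior(0.9, 0.1))  # uncontrolled flight or stupor -> Panic (P)
```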
By immersing people in a simulated disaster context, filming their reactions and equipping them with a smartwatch, we were able to map all of the paths taken to leave the danger zone (Ranarimahefa 2020). The extracted information from the interviews has been gathered and treated so as to render a geographical visualization of the different trajectories chosen by people. This map (Fig. 3) shows two different rationales for movement: some of the respondents tried to leave the danger zone (identified by them as being the basement area of the Niemeyer Cultural Centre) so as not to be trapped, while the others instead stayed in this zone in order to reach another building in a few seconds, in this case a library, in order to take shelter there. Among the people leaving the area, we can identify three major trajectories: some took the wide stairs or the narrower footbridge to leave the premises on foot, others preferred to go via the access ramp, especially those on a bicycle. Mapping of the trajectories chosen by people during the interviews around the Niemeyer Cultural Center. These trajectories have been extracted from geolocalized watches. Three main escape directions (noted A, B, and C, respectively, on the map) can be identified These collected, spatialized data allow us to discretize the space under study, to build the corresponding network and to feed the mathematical model presented below, in order to carry out realistic simulations and anticipate the spatio-temporal behavioral dynamics. Mathematical model The APC model The Alert-Panic-Control (APC) model, proposed for the first time in Lanza et al. (2021), as the previous model developed in Verdière et al. (2014), Provitolo et al. (2015), Cantin et al. (2016), is a nonlinear ODE compartmental model inspired by the classical epidemic compartmental models such as the SIR (Susceptible-Infected-Recovered) one (Hethcote 2000). It is based on the behavioral categorization proposed in the previous section and people is assumed to evolve among five main categories of behaviors during a catastrophe: the daily behaviors before the catastrophic event, the states of alert, panic and control, the behaviors of everyday life after the disaster. They can also die or be severely injured (victims). We consider two types of transitions from one behavior to another: the ones due to intrinsic motivations, that are peculiar to each individual and depend on the past, the individual characteristics, etc, or the ones due to imitation processes. Indeed, human behavior is often ruled by imitation and social comparison with others (Drury et al. 2009). We briefly present the variables and the equations of the model, firstly introduced in Lanza et al. (2021). We note \(t_0\) the initial moment of the catastrophic event and for \(t \ge t_0\)and: a(t) the number of individuals in a state of alert, p(t) the number of individuals in a state of panic, c(t) the number of individuals in a state of control, q(t) the number of individuals in the everyday behaviors, b(t) all the people in a behavior of everyday life after the disaster, v(t) the number of individuals who lose their lives during the disaster. 
The APC model equations are the following: $$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \frac{da}{dt}=\gamma (t)q-(B_1+B_2+D_{a}) a -F(a,c)\dfrac{ac}{N} -G(a,p)\dfrac{ap}{N} +B_3 c+B_4 p,\\ \\ \displaystyle \frac{dp}{dt}=B_2 a+C_2 c-(B_4+C_{1}+D_{p}) p+G(a,p)\dfrac{ap}{N} -H(c,p)\dfrac{cp}{N},\\ \\ \displaystyle \frac{dc}{dt}=B_1 a+C_1 p-(B_3 +C_2+D_{c})c +F(a,c) \dfrac{ac}{N} +H(c,p)\dfrac{cp}{N}-\varphi (t) c,\\ \\ \displaystyle \frac{dq}{dt}=-\gamma (t) q,\\ \\ \displaystyle \frac{d b}{dt}=\varphi (t) c,\\ \\ \displaystyle \frac{d v}{dt}=D_{a}a+D_{c}c+D_{p}p.\\ \end{array}\right. \end{aligned}$$ According to the flow diagram of Fig. 4, we assume that, before the event, all the people adopt an everyday behavior q. The occurrence of a sudden catastrophic event is modeled by function \(\gamma\). Once the event is triggered, people first pass through a state of alert, hence the term \(\gamma (t)q\) in the equation for a in system (1). For an unexpected event without warning signs, function \(\gamma\) is defined as $$\begin{aligned} \gamma (t)=\zeta (t,\tau _0,\tau _1)= \left\{ \begin{array}{l} 0 \quad \text{ if } t<\tau _0 \\ \dfrac{1}{2}-\dfrac{1}{2}\cos \left( \dfrac{t-\tau _0}{\tau _1-\tau _0}\pi \right) \quad \text{ if } \tau _0\le t\le \tau _1\\ 1 \quad \text{ if } t>\tau _1 \end{array}\right. \end{aligned}$$ At \(t=\tau _0\) the event takes place and at \(t=\tau _1\) the majority of the population in the daily behavior becomes alerted. Flow diagram for the APC (Alert, Panic and Control) compartment model. The intrinsic transitions are represented by solid lines, while the imitation ones are represented by dashed lines Then, alerted people adopt a panic or a control behavior, according to an intrinsic behavioral transition or an imitation process. All the linear terms in system (1) represent intrinsic transitions, while the nonlinear functions F, G and H model the transitions due to imitation. In the following, we call them imitation functions. Function F represents the imitation-driven transition of alert people toward the control behavior, and is defined as: $$\begin{aligned} F(a,~c) \dfrac{ac}{N}= \alpha _1 \xi \left( \dfrac{c}{a+\varepsilon } \right) \cdot \dfrac{a}{N}\cdot c, \end{aligned}$$ where \(\displaystyle \xi (s)=\frac{s^2}{1+s^2}\), \(N=N(t)=q(t)+a(t)+p(t)+c(t)+b(t)\), and \(\varepsilon \ll 1\). Since we are interested in a significant population size, a classical proportional incidence rate is considered (Arino and Van den Driessche 2006; Blackwood and Childs 2018). Function \(\xi\), represented in Fig. 5, takes into account the assumption that the minority tends to adopt, by imitation, the behavior of the majority. Function \(\xi\) in the imitation functions F, G and H. This sigmoidal function has been chosen to model the fact that the behavior of the majority is the most imitated one In the same way we can define the two other imitation functions: $$\begin{aligned} {\left\{ \begin{array}{ll} G(a,~p) = \beta \xi \left( \dfrac{p}{a+\varepsilon } \right) \\[2em] H(c,~p) = \gamma _{p\rightarrow c} \xi \left( \dfrac{c}{p+\varepsilon } \right) - \gamma _{c\rightarrow p} \xi \left( \dfrac{p}{c+\varepsilon } \right) . \end{array}\right. } \end{aligned}$$ Finally, we suppose that everyday behaviors can be adopted once again after a certain time, and only starting from a controlled behavior. This transition is taken into account by the term \(-\varphi (t) c\) in the equation for c in system (1).
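To make the structure of system (1) concrete, a minimal numerical sketch of the aspatial APC model is given below in Python. It is an illustration only: the rate constants, imitation strengths, ramp times and initial population are placeholder assumptions chosen for the example, not the values calibrated in this work, and the return-to-daily term uses the same ramp function \(\zeta\) that defines \(\gamma (t)\) above (and, further below, \(\varphi (t)\)).

import numpy as np
from scipy.integrate import solve_ivp

def zeta(t, tau_a, tau_b):
    # Smooth 0-to-1 ramp used both for gamma(t) and for varphi(t).
    if t < tau_a:
        return 0.0
    if t > tau_b:
        return 1.0
    return 0.5 - 0.5 * np.cos(np.pi * (t - tau_a) / (tau_b - tau_a))

def xi(s):
    # Sigmoidal weight of Fig. 5: the behavior of the majority is imitated the most.
    return s ** 2 / (1.0 + s ** 2)

# Placeholder parameter values (illustrative assumptions, not the calibrated ones).
B1, B2, B3, B4 = 0.5, 0.6, 0.1, 0.1     # intrinsic transitions from/to the alert state
C1, C2 = 0.3, 0.2                        # intrinsic panic -> control and control -> panic
Da = Dp = Dc = 0.001                     # mortality rates
alpha1, beta = 1.0, 1.0                  # imitation strengths in F and G
gamma_pc, gamma_cp = 1.0, 0.5            # imitation strengths in H
eps = 1e-3                               # the small epsilon in the imitation functions
tau0, tau1 = 0.0, 0.5                    # onset ramp of the event (gamma)
tau2, tau3 = 50.0, 80.0                  # return-to-daily ramp (varphi)

def apc_rhs(t, y):
    a, p, c, q, b, v = y
    N = q + a + p + c + b                # victims are excluded from N(t)
    g, ph = zeta(t, tau0, tau1), zeta(t, tau2, tau3)
    F = alpha1 * xi(c / (a + eps)) * a * c / N   # alert -> control by imitation
    G = beta * xi(p / (a + eps)) * a * p / N     # alert -> panic by imitation
    H = (gamma_pc * xi(c / (p + eps)) - gamma_cp * xi(p / (c + eps))) * c * p / N
    da = g * q - (B1 + B2 + Da) * a - F - G + B3 * c + B4 * p
    dp = B2 * a + C2 * c - (B4 + C1 + Dp) * p + G - H
    dc = B1 * a + C1 * p - (B3 + C2 + Dc) * c + F + H - ph * c
    dq = -g * q
    db = ph * c
    dv = Da * a + Dc * c + Dp * p
    return [da, dp, dc, dq, db, dv]

y0 = [0.0, 0.0, 0.0, 1500.0, 0.0, 0.0]   # everyone starts in the daily behavior q
sol = solve_ivp(apc_rhs, (0.0, 120.0), y0, max_step=0.1)
print("total population conserved:", np.allclose(sol.y.sum(axis=0), sum(y0)))

Since the right-hand sides of system (1) sum to zero, the final line checks numerically that the total population is conserved.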
Function \(\varphi\), like function \(\gamma\), has to be chosen according to the nature of the disaster under study. In analogy with function \(\gamma\), it is defined as: $$\begin{aligned} \varphi (t)=\zeta (t,\tau _2,\tau _3)= \left\{ \begin{array}{l} 0 \quad \text{ if } t<\tau _2 \\ \dfrac{1}{2}-\dfrac{1}{2}\cos \left( \dfrac{t-\tau _2}{\tau _3-\tau _2}\pi \right) \quad \text{ if } \tau _2\le t\le \tau _3\\ 1 \quad \text{ if } t>\tau _3 \end{array}\right. \end{aligned}$$ The pseudo-daily behavior cannot be adopted before \(t=\tau _2\), and from \(t=\tau _3\) onward this transition is at its maximum. It is worth remarking that, by adding up all the equations, we obtain $$\begin{aligned} \frac{da}{dt}+\frac{dc}{dt}+\frac{dp}{dt}+\frac{dq}{dt}+\frac{db}{dt}+\frac{dv}{dt}=0, \end{aligned}$$ thus the population is constant in time. The variables, the functions and the parameters of the APC model are summarized in Tables 1, 2 and 3, respectively. Table 1 Variables of the APC model Table 2 Functions of the APC model Table 3 Parameters of the APC model The APC model on networks The APC model presented before takes into account only the temporal dynamics of the different behavioral categories. However, the territory and its properties play an important role in the dynamics of human behaviors. To include this essential aspect, we adopt a mixed approach, based on complex networks and nonlinear differential equations. First of all, we show how the spatial environment is modeled by a complex network adapted to the terrain under study. Then the equations of the APC model on networks are presented. Modeling a territory with networks Our purpose is to study the spatio-temporal dynamics of the different behaviors during a catastrophic event in a geographical area. In order to exploit a complex network approach, the first step is to divide the territory under study into different sub-areas, which will be the nodes of our network. An example of a subdivision of the urban area around the Niemeyer Cultural Centre can be found in Fig. 6. Each node represents a region endowed with some specific properties (for instance a surface) and containing populations subject to the panel of behaviors of the APC model (Lanza et al. 2021). Here, physical displacements of people are considered. Therefore, a directed edge between two nodes models the fact that people can move from one node to the other. Example of a subdivision of the geographical area around the Niemeyer Cultural Centre in Le Havre (France). The information on the escape routes chosen by the interviewees (Fig. 3) has guided the selection of the sub-regions. In particular, five sub-areas, which are the nodes of our network, have been identified It is worth remarking that the close tie between the network structure and the geographical characteristics of the territory under study makes our APC model on networks very different from the meta-population epidemic models in the literature (Arino 2009). In particular, our network has some properties that directly stem from the fact that our nodes represent geographical sub-areas: each node i has a surface that we denote \(S_i\); we suppose that each node i has a maximum capacity, denoted by \(N_i^{max}\), i.e. a maximum number of individuals that the node can host. We assume that \(\rho =3\) people per \(\hbox {m}^2\) is the density beyond which movement begins to break down and the crowd slows due to its own mass (Hermant 2012).
Thus, the maximal capacity of node i is calculated as $$\begin{aligned} N_i^{max}= \rho S_i \approx 3\ S_i. \end{aligned}$$ Each directed edge has a weight, that is, the coupling coefficient. It takes into account the fact that an edge represents people's displacements between nodes. It depends on the properties of the node and on the population located within it, according to the following formula: $$\begin{aligned} \eta _{ik}^j=\dfrac{ L_{ik} \langle v_i^j\rangle }{S_{i} },\qquad 1\le i,k\le n, i\ne k,\quad j\in \{a,p,c\} \end{aligned}$$ where \(L_{ik}\) is the width of the exit from node i to node k; \(\langle v_i^j\rangle\) is the average speed of the population j leaving node i (it is noteworthy that this quantity is computed here only with the horizontal components of the speed vectors); and \(S_i\) is the global surface of node i that people leave. Figure 7 shows an example of two nodes: the esplanade between the two volcano buildings, which has a surface of \(S_1 \approx 1500\) \(\hbox {m}^2\), and the large staircase that leads to Louis Brindeau Street, whose surface is about \(S_2=300\) \(\hbox {m}^2\). People can move from the first node to the second one through a passage whose width is \(L_{12}=30\) m. People in each category of behavior have a different average speed. We suppose that people in the alert behavior barely move, since people in an alert state are usually in search of information. Moreover, we suppose that the speed of individuals in panic is less than that of individuals in a control behavior. This choice has two reasons: firstly, among the people in a panic behavior, some could have a freeze response; secondly, others do not always take the right route to escape. Therefore, for all \(1\le i,k\le n\), \(i\ne k\) we have $$\begin{aligned} \eta _{ik}^a \ll \eta _{ik}^p< \eta _{ik}^c \end{aligned}$$ Example of two nodes. Each node has a surface: here, the first node (in blue) has a surface of \(S_1=1500\,\hbox {m}^2\), while the second one (represented in red) has a surface of \(S_2=300\,\hbox {m}^2\). Moreover, the two nodes are adjacent, and the exit from one node to the other is equal here to \(L=30\) m. This geographical information is useful to calculate the coupling coefficients (6) The mathematical equations Here are our main assumptions for the APC model on networks:
A1: the network consists of n non-identical nodes, that is, the parameters of the APC model on each node can be different;
A2: only linear couplings are considered, i.e. only physical displacements are taken into account, and no behavioral transitions are allowed during the displacement;
A3: the network population is constant and equal to N; no population can enter or leave the network. On the other hand, the population on each node \(N_i\) depends on time, since individuals move from one node to another;
A4: each node k has a maximum capacity, i.e. a maximum number of individuals \(N^{max}_k\) that the node can receive;
A5: the transitions towards panic can also depend on the density on each node; indeed, we assume that the more population there is in the node, the more it tends to lose control;
A6: the individuals' speed depends on the density in the node: the greater the density, the lower the speed;
A7: the victims and the population in a daily behavior do not move from one node to another;
A8: depending on the disaster, some nodes may not be directly impacted by the hazard, so that their population continues to show a daily behavior even after the occurrence of the disaster.
In this case, only the arrival of population from the other nodes triggers the APC model on those nodes;
A9: the population can go back to behaviors close to everyday life only on previously defined refuge nodes; the presence of people who exhibit a pseudo-daily behavior again can help individuals in a panic state to become controlled.
In the following, let us denote by: \(N_i=N_i(t)=q_i(t)+a_i(t)+p_i(t)+c_i(t)+b_i(t)\) the number of individuals in the i-th node at time t; \({\mathcal {N}}^{out} (i)\) the out-neighbor set of node i (that is, the nodes that are adjacent to node i and whose edge starts from i) (West 1996); and \({\mathcal {N}}^{in} (i)\) the in-neighbor set of node i (that is, the nodes that are adjacent to node i and whose edge comes into i). In Fig. 8, an example of a network in which the out-neighbors and the in-neighbors of a node are highlighted is presented. Example of a network. All the in-neighbors of node i, noted in the model as \({\mathcal {N}}^{in}(i)\), are represented in blue, while its out-neighbor nodes belonging to \({\mathcal {N}}^{out}(i)\) are represented in red Thus, the equation for the individuals in the alert behavior of node i (\(i=1,\dots ,n\)) reads as: $$\begin{aligned} \frac{da_i}{dt}&=-\left( B_1^i+B_2^i+D_{a}^i \right) a_i -F^i(a_i,c_i)\dfrac{a_ic_i}{N_{i}} -G(a_i,p_i)\dfrac{a_ip_i}{N_{i}} +B_3^i c_i+B_4^i p_i \nonumber \\&\quad \displaystyle +\sum _{k\in {\mathcal {N}}^{in}(i)} \left( 1-\dfrac{N_i}{N_i^{max}}\right) \theta _{ki}^a a_k -\sum _{k\in {\mathcal {N}}^{out}(i)} \left( 1-\dfrac{N_k}{N_k^{max}}\right) \theta _{ik}^a a_i \nonumber \\&\quad \displaystyle +\delta ^i\gamma ^i (t)q_{i} +(1-\delta ^i)\left( \sum _{k\in {\mathcal {N}}^{in}(i)} \left( 1-\dfrac{N_i}{N_i^{max}}\right) (\theta _{ki}^a a_k+\theta _{ki}^c c_k +\theta _{ki}^p p_k) \right) \dfrac{q_i}{N_{i}} \nonumber \\&\quad \displaystyle +(a_{i}+c_{i}+p_{i}) \dfrac{q_i}{N_{i}} . \end{aligned}$$ The terms in the first line of Eq. (7) represent the intrinsic and imitation transitions that are already present in the simple APC model (1). Moreover, we have the coupling terms that model the fact that individuals in the alert behavior can move from one node to another. In particular, $$\begin{aligned} \sum _{k\in {\mathcal {N}}^{out}(i)} \left( 1-\dfrac{N_k}{N_k^{max}}\right) \theta _{ik}^a a_i \end{aligned}$$ represents the outgoing flux of node i. The term \(1-\dfrac{N_k}{N_k^{max}}\) in Eq. (8) takes into account the fact that the population in node i can move to node k only if node k has not reached its maximal capacity (assumption A4). Furthermore, we assume that the average speed is an affine function of the density, that is $$\begin{aligned} \theta _{ik}^a=\eta _{ik}^a\left( w_i+1-\dfrac{N_i}{N_i^{max}}\right) =\dfrac{ L_{ik} }{S_{i}}\langle v_i^a\rangle \left( w_i+1-\dfrac{N_i}{N_i^{max}}\right) , \end{aligned}$$ with \(w_i \in [0,1]\). Indeed, the speed of the alert population \(\langle v_i^a\rangle \left( w_i+1-\dfrac{N_i}{N_i^{max}}\right)\) is supposed to decrease as the density increases (assumption A6). Moreover, we suppose that, when the i-th node is completely full, that is, when \(N_i=N_i^{max}\), people inside the node move with a speed equal to \(\langle v_i^a\rangle w_i\). In the same way, $$\begin{aligned} \sum _{k\in {\mathcal {N}}^{in}(i)} \left( 1-\dfrac{N_i}{N_i^{max}}\right) \theta _{ki}^a a_k \end{aligned}$$ models the incoming flux and the fact that node i can take in new individuals only if it has not yet reached its maximal capacity (assumption A4).
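To illustrate how the coupling coefficient \(\eta\) of Eq. (6), the density-dependent rate \(\theta\) and the saturation factor \(1-N_k/N_k^{max}\) combine into the migration fluxes of Eqs. (7) and (8), the following sketch evaluates an outgoing flux on the two-node geometry of Fig. 7. The average speeds, the time unit (minutes) and the occupancy values are illustrative assumptions rather than the calibrated values of Table 4, and the non-negativity guard on the saturation factor is an extra safeguard of this sketch, not part of the model.

# Geometry of the two nodes of Fig. 7: esplanade (node 1) and large staircase (node 2).
S = {1: 1500.0, 2: 300.0}            # surfaces in m^2
L = {(1, 2): 30.0}                   # exit width from node 1 to node 2, in m
rho = 3.0                            # maximal density, people per m^2
N_max = {i: rho * S[i] for i in S}   # node capacities, N_i^max = rho * S_i

# Assumed average horizontal speeds per behavior, in m/min (alert people barely move).
v = {"a": 0.5, "p": 30.0, "c": 50.0}
w = {1: 0.2, 2: 0.2}                 # residual speed fraction when a node is saturated

def eta(i, k, j):
    # Coupling coefficient of Eq. (6): eta_{ik}^j = L_{ik} <v_i^j> / S_i.
    return L[(i, k)] * v[j] / S[i]

def theta(i, k, j, N_i):
    # Density-dependent rate: the speed decreases affinely with the density in node i.
    return eta(i, k, j) * (w[i] + 1.0 - N_i / N_max[i])

def out_flux(i, k, j, pop_j_i, N_i, N_k):
    # Outgoing flux of Eq. (8); the factor (1 - N_k / N_k^max) blocks the move when the
    # destination is full (clamped at zero here as an extra safeguard of this sketch).
    return max(0.0, 1.0 - N_k / N_max[k]) * theta(i, k, j, N_i) * pop_j_i

# Example: 800 controlled people on a crowded esplanade (N_1 = 1400) heading to a
# staircase that already holds 100 people.
print(out_flux(1, 2, "c", pop_j_i=800.0, N_i=1400.0, N_k=100.0))

With the esplanade close to saturation, the factor \(w_i+1-N_i/N_i^{max}\) strongly reduces the effective speed, which is how assumption A6 enters the fluxes.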
The terms $$\begin{aligned}&{\delta ^i\gamma ^i (t)q_{i}} +(1-\delta ^i)\left( \sum _{k\in {\mathcal {N}}^{in}(i)} \left( 1-\dfrac{N_i}{N_i^{max}}\right) (\theta _{ki}^a a_k+\theta _{ki}^c c_k +\theta _{ki}^p p_k) \right) \dfrac{q_i}{N_{i}} \end{aligned}$$ model assumption A8. The dynamics on each node can be triggered both by the hazard and by the population arriving from the adjacent nodes, represented by $$\begin{aligned} \sum _{k\in {\mathcal {N}}^{in}(i)} \left( 1-\dfrac{N_i}{N_i^{max}}\right) (\theta _{ki}^a a_k+\theta _{ki}^c c_k +\theta _{ki}^p p_k), \end{aligned}$$ since only people in the alert, panic and control behaviors can move. Parameter \(\delta ^i \in [0, 1]\) is a weight that accounts for which of these two sources contributes the most to the onset of the APC dynamics on the node. Moreover, the term $$\begin{aligned} (a_{i}+c_{i}+p_{i}) \dfrac{q_i}{N_{i}} \end{aligned}$$ takes into account the fact that people in a daily behavior interact with the individuals in the alert, panic or control behaviors already present in the node, and become alerted. The equations for the individuals in the panic and control behaviors have a similar structure, namely the terms specific to the transitions already present in the APC model (1) plus the coupling terms: $$\begin{aligned} \displaystyle \frac{dp_i}{dt}&=B_2^i a_i+{\tilde{C}}_2^i \left( 1+\dfrac{N_i}{N_i^{max}}\right) c_i-\left( B_4^i+C_{1}^i+D_{p}^i\right) p_i+G^i(a_i,p_i)\dfrac{a_i p_i}{N_{i}} \nonumber \\&\quad \displaystyle -H^i(c_i+b_i,p_i)\dfrac{(c_i+b_i) p_i}{N_{i}}+\sum _{k\in {\mathcal {N}}^{in}(i)} \left( 1-\dfrac{N_i}{N_i^{max}}\right) \theta _{ki}^p p_k\nonumber \\&\quad \displaystyle -\sum _{k\in {\mathcal {N}}^{out}(i)} \left( 1-\dfrac{N_k}{N_k^{max}}\right) \theta _{ik}^p p_i,\nonumber \\ \displaystyle \frac{dc_i}{dt}&=B_1^i a_i+C_1^i p_i-\left( B_3^i +{\tilde{C}}_2^i \left( 1+\dfrac{N_i}{N_i^{max}}\right) +D_{c}^i\right) c_i +F^i(a_i,c_i) \dfrac{a_ic_i}{N_{i}} \nonumber \\&\quad \displaystyle +H^i(c_i+b_i,p_i)\dfrac{(c_i+b_i) p_i}{N_{i}}-\varphi ^i(t) c_i+\sum _{k\in {\mathcal {N}}^{in}(i)} \left( 1-\dfrac{N_i}{N_i^{max}}\right) \theta _{ki}^c c_k\nonumber \\&\quad \displaystyle -\sum _{k\in {\mathcal {N}}^{out}(i)} \left( 1-\dfrac{N_k}{N_k^{max}}\right) \theta _{ik}^c c_i, \end{aligned}$$ where $$\begin{aligned} H^i(c_i+b_i,~p_i) = \gamma ^i _{p\rightarrow c} \xi \left( \dfrac{c_i+b_i }{p_i+\varepsilon } \right) - \gamma ^i _{c\rightarrow p} \xi \left( \dfrac{p_i }{c_i+b_i+\varepsilon } \right) \end{aligned}$$ since in the refuge nodes people in a panic behavior also interact with the individuals who have gone back to a pseudo-normal behavior, and become controlled (assumption A9). In particular, in order to model assumption A5, that is, the fact that an increase of the density in one node amplifies the panic behaviors, the intrinsic transition from control to panic is modified with respect to the same term in the a-spatial APC model (1), as follows: $$\begin{aligned} -{\tilde{C}}_2^i \left( 1+\dfrac{N_i}{N_i^{max}}\right) c \end{aligned}$$ where \({\tilde{C}}_2^i=\dfrac{C_2^i}{2},\) and \(C_{2}^{i}\) is the parameter associated with the intrinsic transition from control to panic in the a-spatial APC model. This definition makes it possible to link the parameter of the a-spatial APC model (1), where the notion of density is not present, with that of the APC model on networks.
Indeed, \(C_{2}^{i}\) can be interpreted as the maximum value that we can consider for this transition, which is reached when the node is saturated (that is, when \(N_i=N_i^{max}\)). Writing in the same way the equations for the people in a daily behavior, for the individuals who go back to a pseudo-daily behavior in the shelters, and for the victims, the APC model on a network of n nodes consists of the following 6n equations (\(i=1,\dots ,n\)): $$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \frac{da_i}{dt}=-\left( B_1^i+B_2^i+D_{a}^i \right) a_i -F^i(a_i,c_i)\dfrac{a_ic_i}{N_{i}} -G(a_i,p_i)\dfrac{a_ip_i}{N_{i}} +B_3^i c_i+B_4^i p_i +\delta ^i\gamma ^i (t)q_{i}\\ \quad \qquad \displaystyle +(1-\delta ^i)\left( \sum _{k\in {\mathcal {N}}^{in}(i)} \left( 1-\dfrac{N_i}{N_i^{max}}\right) (\theta _{ki}^a a_k+\theta _{ki}^c c_k +\theta _{ki}^p p_k) \right) \dfrac{q_i}{N_{i}} \\ \quad \qquad \displaystyle +(a_{i}+c_{i}+p_{i}) \dfrac{q_i}{N_{i}} +\sum _{k\in {\mathcal {N}}^{in}(i)} \left( 1-\dfrac{N_i}{N_i^{max}}\right) \theta _{ki}^a a_k-\sum _{k\in {\mathcal {N}}^{out}(i)} \left( 1-\dfrac{N_k}{N_k^{max}}\right) \theta _{ik}^a a_i,\\ \\ \displaystyle \frac{dp_i}{dt}=B_2^i a_i+{\tilde{C}}_2^i \left( 1+\dfrac{N_i}{N_i^{max}}\right) c_i-\left( B_4^i+C_{1}^i+D_{p}^i\right) p_i+G^i(a_i,p_i)\dfrac{a_i p_i}{N_{i}} \\ \quad \qquad \displaystyle - H^i(c_i+b_i,p_i)\dfrac{(c_i+b_i) p_i}{N_{i}}+\sum _{k\in {\mathcal {N}}^{in}(i)} \left( 1-\dfrac{N_i}{N_i^{max}}\right) \theta _{ki}^p p_k\\ \quad \qquad \displaystyle -\sum _{k\in {\mathcal {N}}^{out}(i)} \left( 1-\dfrac{N_k}{N_k^{max}}\right) \theta _{ik}^p p_i,\\ \\ \displaystyle \frac{dc_i}{dt}=B_1^i a_i+C_1^i p_i-\left( B_3^i +{\tilde{C}}_2^i \left( 1+\dfrac{N_i}{N_i^{max}}\right) +D_{c}^i\right) c_i +F^i(a_i,c_i) \dfrac{a_ic_i}{N_{i}} \\ \quad \qquad \displaystyle + H^i(c_i+b_i,p_i)\dfrac{(c_i+b_i) p_i}{N_{i}}-\varphi ^i(t) c_i+\sum _{k\in {\mathcal {N}}^{in}(i)} \left( 1-\dfrac{N_i}{N_i^{max}}\right) \theta _{ki}^c c_k\\ \quad \qquad \displaystyle -\sum _{k\in {\mathcal {N}}^{out}(i)} \left( 1-\dfrac{N_k}{N_k^{max}}\right) \theta _{ik}^c c_i,\\ \\ \displaystyle \frac{dq_i}{dt}=-\delta ^i\gamma ^i (t)q_{i}-(1-\delta ^i)\left( \sum _{k\in {\mathcal {N}}^{in}(i)} \left( 1-\dfrac{N_i}{N_i^{max}}\right) (\theta _{ki}^a a_k+\theta _{ki}^c c_k +\theta _{ki}^p p_k) \right) \dfrac{q_i}{N_{i}}\\ \quad \qquad -(a_{i}+c_{i}+p_{i}) \dfrac{q_i}{N_{i}},\\ \\ \displaystyle \frac{d b_i}{dt}=\varphi ^i(t) c_i\\ \\ \displaystyle \frac{d v_i}{dt}=D_{a}^ia_i+D_{c}^ic_i+D_{p}^ip_i, \end{array}\right. \end{aligned}$$ where $$\begin{aligned} {\left\{ \begin{array}{ll} F^i(a_i,~c_i) = \alpha ^i \xi \left( \dfrac{c_i}{a_i+\varepsilon } \right) \\[2em] G^i(a_i,~p_i) = \beta ^i \xi \left( \dfrac{p_i}{a_i+\varepsilon } \right) \\[2em] H^i(c_i+b_i,~p_i) = \gamma ^i _{p\rightarrow c} \xi \left( \dfrac{c_i+b_i }{p_i+\varepsilon } \right) - \gamma ^i _{c\rightarrow p} \xi \left( \dfrac{p_i }{c_i+b_i+\varepsilon } \right) .\\[2em] \gamma ^i(t)=\zeta (t,\tau _0^i,\tau _1^i)\\[2em] \varphi ^i(t)=\zeta (t,\tau _2^i,\tau _3^i) \end{array}\right. } \end{aligned}$$ It is worth noting that the sum of all the equations of system (10) is equal to zero. This means that the total population in the network is constant, as stated in A3. As in the simple APC model (1), we suppose that at \(t=t_0\) all the people are in a daily behavior.
It means that we consider the following initial condition (\(i=1,\dots , n\)): $$\begin{aligned} a_i(t_0)=p_i(t_0)=c_i(t_0)=b_i(t_0)=v_i(t_0)=0, \qquad q_i(t_0)=q^0_i, \end{aligned}$$ with \(\displaystyle \sum _{i=1}^n q^0_i=N.\) Assumption A9 is taken into account by considering \(\varphi _i=0\) for all the nodes that are not sufficiently far from the impact zone of the disaster and are not labeled as refuge nodes. Therefore, in these nodes, since \(b_i(0)=0\), we have that \(b_i(t)=0\) for all \(t\ge t_0\), that is, no one adopts a pseudo-daily behavior again after the beginning of the APC dynamics within the node. Numerical simulations Description of the scenarios The information extracted from the interviews fed a set of scenarios based on a population of about 1500 individuals. Le Volcan theater is designed to host up to 800 members in the audience, without counting the employees. The doors of the theater and the library have just closed, as has the restaurant located in the basement. That is why we consider a total of 1500 people on the esplanade at the beginning of the simulations. We then assume that several violent explosions suddenly resound. Due to their location in the basement, people cannot identify the origin of the blast (industrial accident, collapse of a building or terrorist attack). This situation leads to high levels of stress among the population. The objective of the population is then to leave the esplanade, whose spatial configuration is relatively closed and which is rapidly considered a potential source of danger. According to our in situ survey, two paths are preferred by respondents, as shown in Fig. 3. The first one consists of taking the spiral footbridge, whose width is about 2 meters. Even if people are aware that this escape path is narrow, it is the closest to the esplanade and gives access to a relatively small terrace (see Fig. 1). According to the survey results, this alternative was chosen by around \(20\%\) of the interviewees (two people out of eleven chose this path), which represents 300 persons that we have located in an area of 300 \(\hbox {m}^2\) next to the spiral footbridge. These elements enable us to design the first scenario of the esplanade evacuation. In a second scenario, which is an alternative to the first one, many people are located on the patios of the bars and restaurants on the terrace. This situation differs from the first scenario, where there were very few individuals on the terrace. Our objective is to analyze whether the presence of many people could have a significant impact on the evacuation process. Finally, the third scenario relies on the second path of Fig. 6, which is longer but whose staircase for leaving the basement is wider (30 meters). It gives access to Louis Brindeau Street and to the large Place Perret square (see the green area at the top of Fig. 6), which offers a broad perspective of the city and makes it easier to identify the origin of the explosions. We suppose that the remaining \(80\%\) of people, i.e. 1200 individuals located on an area of 1200 \(\hbox {m}^2\), have chosen this alternative. In all cases, we considered that access to the terrace or the square has a reassuring effect on the population (satisfaction at having left the basement, better visibility and contact with people able to give information about the situation), while a feeling of stress prevails in the other areas (basement, footbridge and staircase).
Therefore, the terrace and Place Perret square are considered as refuge areas. Design of the network The novelty of our approach is that the choice of the topology of the network takes into account the real configuration of the place where the surveys were carried out. Moreover, some of the geographical areas involved have a limited capacity to host people; therefore, some nodes of the network under study (including the refuge nodes) may have a small maximum capacity \(N^{max}_k\). The main trajectories chosen by people for evacuating the esplanade have guided us in designing the networks under study in the different scenarios. In both cases (the two main trajectories), a network of three nodes has been designed: the first node represents the esplanade in front of Le Volcan theater and the city library. It is the same for both trajectories. Indeed, this location is relatively large, with a high maximal capacity. This capacity has been estimated considering the available surface of the area and an average maximal density of people. As mentioned before, the whole initial population can be divided into two groups according to their choices of escape path. One group accounts for 300 people in the first two scenarios, occupying a surface of \(300\ {\hbox {m}}^2\), which corresponds to a maximal capacity of 900 people when considering an average maximal density of 3 people per \({\hbox {m}}^2\). In the third scenario, we consider an initial population of 1200 people occupying \(1200\ {\hbox {m}}^2\), which gives a maximal capacity of 3600 people. The second node corresponds either to the narrow spiral footbridge in path 1, or to the large staircase in path 2 (see Figs. 1 and 6). The two maximal capacities are very different in these two cases: for the spiral footbridge, which makes it possible to reach the ground level, the surface of the area has been estimated at approximately \(50\,{\hbox {m}}^2\); therefore it can hold at most \(N^{max}=150\) persons if we consider \(3\,{\hbox {p}}/{\hbox {m}}^2\) as the average density in an evacuation process. In the second path, the staircase is very large, with a surface evaluated at \(300\ {\hbox {m}}^2\), so we obtain a maximal capacity of \(N^{max}=900\) people at the same time. Finally, the third node can be considered in each case as a refuge: in path 1, the refuge is at road level and is a relatively small terrace with many restaurants and bars. This is the reason why we consider two scenarios with different initial conditions on this node: one with a few people already in the area (5 people) and another one with a crowded area (295 people). Moreover, the place is full of obstacles (plant pots, seats, etc.), which is why we consider this refuge to have a rather limited carrying capacity and have chosen a maximal capacity of 500 people in this zone/node. For path 2 towards Louis Brindeau Street, the situation is more favorable because this path ends on a big square whose surface is around \(50\ {\hbox {m}}\times 50\ {\hbox {m}} = 2500\ {\hbox {m}}^2\). Thus, this square may carry up to 7500 people. Finally, for the first path we have \(L_{1,2}=L_{2,3}=2\) m, since the footbridge is about 2 m wide. For the second one, the staircase is 30 m wide, so we take \(L_{1,2}=L_{2,3}=30\) m. Several authors assume that the free flow walking speed on a flat surface is on average about 1.3 m/s (Hermant 2012; Fruin 1971; Muccini et al. 2019).
This velocity evolves according to several factors, such as the social composition of the crowd, the culture, the density or the presence of stairs. Indeed, the speed of individuals decreases as the age or the density of the crowd increases, or when people have to go up or down stairs. According to Fruin (1971), the free flow speed on stairs can be almost halved if we consider the upward walking speed on a short stairway. However, this approximation does not take into account the level of stress of the population due to exposure to a danger. For instance, during the World Trade Center attack, people went down the stairs at an estimated speed of 0.3 m/s (Blake et al. 2004). Therefore, people under high stress (or in panic) have a lower speed on average. Furthermore, we assume that the population in an alert state does not move. Finally, we suppose \(w_1=w_2=0.2\), that is, when a node is completely full, the average speed of the population is equal to \(20\%\) of their free flow speed, according to the empirical data from the survey paper (Hermant 2012). For all these reasons, we will consider different speeds and derived coupling coefficients, which are summarized in Table 4. Table 4 Coupling setup. Here two networks are considered, the ones labelled Path 1 and Path 2 in Fig. 6 Simulation setup According to the three scenarios explained before and the two paths/networks under study, we have chosen the initial conditions summarized in Table 5. Table 5 Initial conditions and node maximal capacities for the three scenarios under study Moreover, for all the scenarios, we suppose that:
the explosion is almost instantaneous and the different nodes experience the event at the same time, thus \(\tau ^i_0=0\) and \(\tau _1^i=0.5\) for all \(i=1, 2, 3\);
the explosion triggers the behavioral dynamics in node 1, that is \(\delta ^1=1\), while in nodes 2 and 3 the arrival of people from the other nodes plays a role as important as the event itself, thus \(\delta ^2=\delta ^3=0.5\);
the involved population has a low level of risk culture, so the intrinsic transitions towards panic are more likely than those towards control;
in the refuge zones, people tend to imitate the individuals in a control behavior;
the third node of the networks under study is the only refuge node, and in the refuge nodes people can attain a pseudo-daily behavior only after 50 min; therefore \(\varphi ^1=\varphi ^2=0\), \(\tau _2^3=50\) and \(\tau _3^3=80\) min.
These assumptions lead to the parameter choices in Table 6. Table 6 Parameter values for the numerical simulations Numerical results and discussion The numerical results for the three scenarios are represented in Fig. 9. Each column corresponds to a scenario. For each scenario, the total population on each node and the number of individuals in daily, alert, panic, control and pseudo-daily behaviors for each node of the network are plotted. Behavioral evolutions for the three scenarios under study. Each column corresponds to a scenario. For each scenario, the total population on each node is represented on the first line of the figure. Each of the other lines represents a node in the network. For each node of the network, the number of individuals in daily, alert, panic, control and pseudo-daily behaviors are represented. The evacuation of the first node towards the larger staircase of the third scenario is faster than the one via the narrow spiral footbridge of the first and second scenarios.
In the second scenario, many people are already present in node 3 at the beginning of the event. This yields a persistence of panic and a bottleneck in node 2, and the development of a panic situation in node 3. This is different from what happens in scenarios 1 and 3, where in the refuge node 3 the majority of people are in a controlled behavior. All the system parameters are set as in Tables 4, 5 and 6 Scenarios 1 and 3 The first scenario shows that the 300 individuals who escaped via the spiral staircase (node 2) were all able to take refuge on the terrace (node 3) within 22 min. This evacuation is faster in the third scenario for the 1200 people who fled by the wide staircase (30 metres wide), since it lasted about 11 min. In both cases, it takes from 0 to 5 min for people in node 1 to understand that they are exposed to a possible danger. Even if the evacuation via the spiral staircase is slower, no bottleneck is observed: at most there are 45 people in it, whereas the capacity of the staircase is 150 persons. Panic behaviors (panic flight, agitation, freezing response, automatic movements) take over in both node 1 and node 2. The arrival of individuals in panic at node 3 is visible at the beginning of the simulation. Indeed, for about 15 min in scenario 1 and for between 8 and 10 min in scenario 3, the arrival of panicked people explains the fact that there are as many controlled people as panicked ones. However, the reassuring information provided by the five people present at the beginning in node 3 and/or a better view of what is happening are likely to gradually reassure the population. The issue is more problematic in the second scenario. Indeed, the terrace is almost at its full capacity due to the many people in restaurants and bars. Moreover, numerous tables, chairs and various obstacles restrict the space for people who try to escape the esplanade via the small staircase. In this case, the time to understand that something is going wrong (state of alert) lasts less than 5 min, and then a bottleneck appears in the footbridge node. This situation is due to the footbridge capacity, which is nearly reached after about 10 min (about 110 people are in node 2), and even more so to the progressive saturation of the terrace. The situation in nodes 2 and 3 gives rise to panic phenomena classically observed in densely crowded areas during disasters. There is a high risk of trampling, compression and crushing. The ripple effect is that people on the esplanade are also blocked, and it takes more than 40 min to evacuate the area next to the narrow footbridge instead of the 22 min observed in the first scenario. This second scenario is particularly interesting because it highlights a "crisis within the crisis" phenomenon. Indeed, for a part of the population on the esplanade, the terrace seems to be the best location to take refuge in. We can assume that people arriving from the other nodes do not know that the terrace is already crowded and that the capacity of this place is nearly reached. This could explain the observed predominance of panic. If these people had had this information at the beginning of the simulation, they could have chosen the other path. The specificity of our case study is that the density increases sharply when people enter the stairs, where speed is halved. The high density, combined with the slackening of the flow, leads to low average speeds for panicked and controlled people.
Moreover, by comparing scenarios 1 and 2, we remark that the number of individuals present at \(t=0\) in the refuge node 3 plays an important role in the dynamics of the whole network. If node 3 is quite full of people from the beginning, on the one hand it can rapidly reach its maximum capacity, so bottlenecks can occur; on the other hand, due to its high density, panic reactions can easily take place. In order to shed light on this last phenomenon, let us consider what happens in node 3 at \(t=40\) min, when the APC dynamics on the node are well established and no one has yet returned to a pseudo-daily behavior. In particular, let us focus on the proportion of individuals in panic in node 3 at \(t=40\) min, that is $$\begin{aligned} p^{40}=\dfrac{p_3(t=40)}{N_3(t=40)} \end{aligned}$$ Figure 10 shows \(p^{40}\) as a function of \(N_3(0)\), that is, the population at \(t=0\) in node 3. All the parameters are set as in scenarios 1 and 2; only \(N_3(0)\) varies. It is possible to see that, for small values of \(N_3(0)\), the proportion of individuals in panic is low. Thus, in this case, node 3 is a proper refuge node and panic is managed. If, at the beginning of the catastrophe, node 3 hosts more than 240 individuals, then a "crisis within the crisis" can occur and panic gets the upper hand, since the node rapidly becomes full and cannot accommodate the other people who try to get there. Proportion of panic individuals \(p^{40}\) (blue) in node 3 at \(t=40\) min, as a function of the number of individuals \(N_3(0)\) already present in node 3 at \(t=0\). All the parameters are set as in scenarios 1 and 2; only \(N_3(0)\) varies. In particular, scenario 1 corresponds to \(N_3(0)=5\), while scenario 2 corresponds to \(N_3(0)=295.\) Here, the maximal capacity of node 3 is \(N_3^{max}=500\). If the refuge node is quite full from the beginning, panic reactions can get the upper hand Furthermore, we remark that in our simulations the two networks are not linked, even though the population leaves the same place. Connecting different simple networks will be the next step of our work. However, several empirical studies about evacuation have shown that rational behaviors are not always observed: individuals may collectively take the same path while other exits are possible. Firstly, because herding or imitation behaviors occur frequently in exit choices: people behave as a group, putting aside their ability to act as individuals, which leads some to choose the most congested exit rather than an exit with fewer people (Saloma and Perez 2007; Lovreglio et al. 2014). Secondly, because some experiments have shown that people often use familiar exits. For Nilsson et al. (2009), a familiar exit can be, for example, the entrance of a building or the ordinary exit. Finally, we note that, whether in scenario 1 or 2, the choice of path 1 seems to be a wrong decision, eventually leading to highly hazardous situations because of the emergence of panic behaviors on the narrow footbridge. This type of analysis and conclusion may be considered a significant contribution of our simulation model for operational staff: it could be effective to block the way to such hazardous paths and to guide people towards the safest exits, so as to improve the overall evacuation process and risk management. The lack of knowledge about the spatio-temporal dynamics of human behavior faced with catastrophic events is still significant.
In order to advance further in this research field, an interdisciplinary approach combining in situ experiments and mathematical modeling has been recently adopted. In this paper, we show how the survey results can feed our network mathematical model in order to design numerical simulations of realistic scenarios. Here, we focused on the in situ experiment that took place in the urban area around the Niemeyer Cultural Centre in Le Havre (France) two years ago. We have shown how the territorial configuration and the interviewees' responses, through their specific evacuation paths, guided us in the construction of the network under study. We were therefore led to consider two separate networks of three nodes, with different properties. Based on these networks, several simulations have been carried out. The results confirm the idea that the maximal capacity of each node and the initial conditions (that is, the number of individuals on the node at the onset of the catastrophe) play an important role in the sequence of events and in the level of panic behaviors in some particular configurations. The main contribution of this approach lies in the predictive ability of our simulation tool for evaluating one evacuation path or another in terms of hazard potential. The next step of this research is to consider bigger networks or specific cases in order to evaluate their different potential evacuation paths by simulation. Our future work is also to pursue the mathematical analysis of the APC model on networks and to study how the parameters influence the dynamics of the whole network. APC: Alert-Panic-Control; ODE: Ordinary differential equation; SIR: Susceptible-Infected-Recovered Arino J (2009) Diseases in metapopulations. In: Modeling and dynamics of infectious diseases. World Scientific, Singapore, pp. 64–122 Arino J, Van den Driessche P (2006) Disease spread in metapopulations. Fields Inst Commun 48(2006):1–12 Blackwood JC, Childs LM (2018) An introduction to compartmental modeling for the budding infectious disease modeler. Lett Biomath 5(1):195–221 Blake S, Galea E, Westang H, Dixon A (2004) An analysis of human behaviour during the WTC disaster of 9/11 based on published survivor accounts. In: 3rd international symposium on human behaviour in fire: conference proceedings, Greenwich, London, UK, pp 181–192 Cantin G (2017) Nonidentical coupled networks with a geographical model for human behaviors during catastrophic events. Int J Bifurc Chaos 27(14):1750213 Cantin G, Verdière N, Lanza V, Aziz-Alaoui M, Charrier R, Bertelle C, Provitolo D, Dubos-Paillard E (2016) Mathematical modeling of human behaviors during catastrophic events: stability and bifurcations. Int J Bifurc Chaos 26(10):1630025 Crocq L (2013) Paniques Collectives (Les). Odile Jacob, Paris Drury J, Cocking C, Reicher S (2009) Everyone for themselves? A comparative study of crowd solidarity among emergency survivors. Br J Soc Psychol 48(3):487–506 Dubos-Paillard E, Berred A, Provitolo D (2021) Classification des catastrophes fondée sur l'analyse des relations entre les propriétés de l'événement et les comportements humain. Technical report, ANR Fischer HW (1998) Response to disaster: fact versus fiction & its perpetuation: the sociology of disaster. University Press of America, Washington Fruin JJ (1971) Pedestrian planning and design. Metropolitan Association of Urban Designers and Environmental Planners, New York Helbing D, Farkas IJ, Vicsek T (2002) Crowd disasters and simulation of panic situations. In: The science of disasters.
Springer, Berlin, pp 330–350 Hermant LFL (2012) Video data collection method for pedestrian movement variables & development of a pedestrian spatial parameters simulation model for railway station environments. Ph.D. thesis, Stellenbosch University Hethcote HW (2000) The mathematics of infectious diseases. SIAM Rev 42(4):599–653 Lago M, Tricot A, Provitolo D, Boudoukha A-H, Verdière N, Lanza V, Charrier R, Haule S, Mallet P, Bertelle C, Dubos-Paillard E, Perez S, Navarro O, Ranarimahefa P, Lindenmann A, Berred A, Aziz-Alaoui M (2022) Comprendre, simuler et analyser les comportements humains en situation de catastrophe: enjeux et résultats d'une démarche d'enquête innovante. accepted Lanza V, Dubos-Paillard E, Charrier R, Verdière N, Provitolo D, Navarro O, Bertelle C, Cantin G, Berred A, Aziz-Alaoui M (2021) Spatio-temporal dynamics of human behaviors during disasters: a mathematical and geographical approach. Complex Systems. Smart Territories and Mobility. Springer, Cham, pp 201–218 Lovreglio R, Fonzone A, Dell'Olio L, Borri D, Ibeas A (2014) The role of herding behaviour in exit choice during evacuation. Procedia Soc Behav Sci 160:390–399 Moussaïd M (2010) Étude expérimentale et modélisation des déplacements collectifs de piétons. PhD thesis, Université de Toulouse, Université Toulouse III-Paul Sabatier Muccini H, Arbib C, Davidsson P, Tourchi Moghaddam M (2019) An IoT software architecture for an evacuable building architecture. In: Proceedings of the 52nd Hawaii international conference on system sciences Nilsson D, Frantzich H, Saunders W (2009) Influencing exit choice in the event of a fire evacuation. Fire Saf Sci 9:341–352 Provitolo D, Dubos-Paillard E, Verdière N, Lanza V, Charrier R, Bertelle C, Aziz-Alaoui M (2015) Les comportements humains en situation de catastrophe: de l'observation à la modélisation conceptuelle et mathématique. Cybergeo: European Journal of Geography Provitolo D, Lozi R, Tric E (2020) Topological analysis of a weighted human behaviour model coupled on a street and place network in the context of urban terrorist attacks. In: Mathematical modelling. Optimization, analytic and numerical solutions. Springer, Singapore, pp 117–146 Ranarimahefa P (2020) Les comportements humains face à un accident technologique majeur - Analyse de la campagne d'enquète menée au Havre. Master's thesis, Université Côte d'Azur, France Russell JA (1980) A circumplex model of affect. J Pers Soc Psychol 39(6):1161 Saloma C, Perez G (2007) Herding in real escape panic. In: Pedestrian and evacuation dynamics 2005. Springer, Berlin, pp 471–479 Tricot A, Provitolo D, Navarro O, Boudoukha A-H, Lago M, Naud A, Lindenmann A (2021) Présentation du dispositif méthodologique de la recherche Com2SiCa: démarche et résultats. Technical report, ANR Verdière N, Lanza V, Charrier R, Provitolo D, Dubos-Paillard E, Bertelle C, Aziz-Alaoui M (2014) Mathematical modeling of human behaviors during catastrophic events. In: International conference on complex systems and applications, Le Havre, 23–26 June 2014, pp. 67–74 West DB (1996) Introduction to graph theory. Prentice Hall, Englewood Cliffs This work has been supported by the French government, through the National Research Agency (ANR) under the Societal Challenge 9 "Freedom and security of Europe, its citizens and residents" with the reference number ANR-17-CE39-0008, co-financed by French Defence Procurement Agency (DGA) and The General Secretariat for Defence and National Security (SGDSN). 
This work was supported as part of a grant from the ANR (National Research Agency) for the Com2SiCa project, Grant Number ANR-17-CE39-0008. LMAH, FR-CNRS-3335, Université Le Havre Normandie, Le Havre, France Valentina Lanza & Alexandre Berred Géographie-Cités UMR 8504, Université Paris 1, Panthéon-Sorbonne, Paris, France Edwige Dubos-Paillard LITIS, Université Le Havre Normandie, Le Havre, France Rodolphe Charrier CNRS, Observatoire de la Côte d'Azur, IRD, Géoazur, UMR 7329, Université Côte d'Azur, Valbonne, France Damienne Provitolo Valentina Lanza Alexandre Berred VL, EDP and RC were responsible for conception and analysis of the numerical simulations, and wrote the manuscript. DP analyzed the survey results, provided the maps of the escape trajectories and contributed to writing the manuscript. AB provided guidance towards the research and provided comments on the manuscript. All authors read and approved the final manuscript. Correspondence to Valentina Lanza. Lanza, V., Dubos-Paillard, E., Charrier, R. et al. An analysis of the effects of territory properties on population behaviors and evacuation management during disasters using coupled dynamical systems. Appl Netw Sci 7, 17 (2022). https://doi.org/10.1007/s41109-022-00450-6 Coupled dynamical systems Human behaviors Territorial risk management Special Issue of the French Regional Conference on Complex Systems
The Squares of Riemann-Stieltjes Integrable Functions with Increasing Integrators Recall from The Absolute Value of Riemann-Stieltjes Integrals with Increasing Integrators page that if $f$ is a function defined on $[a, b]$ and $\alpha$ is an increasing function on $[a, b]$, and if $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$, then $\mid f \mid$ is also Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$ and furthermore: \begin{align} \quad \biggr \lvert \int_a^b f(x) \: d \alpha (x) \biggr \rvert \leq \int_a^b \mid f(x) \mid \: d \alpha (x) \end{align} Now suppose once again that $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$ (where $\alpha$ is an increasing function). It would be nice to know whether or not the function $f^2$ is also Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$. Fortunately it is, and we can prove it by using the theorem above and Riemann's condition. Theorem 1: Let $f$ be a function defined on $[a, b]$ and $\alpha$ be an increasing function on $[a, b]$. If $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$ then $f^2$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$. Proof: Let $f$ be Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$ where $\alpha$ is an increasing function. Let $M > 0$ be any upper bound of $\mid f \mid$ on $[a, b]$. By Riemann's condition, for $\epsilon_1 = \frac{\epsilon}{2M} > 0$ there exists a partition $P_{\epsilon_1} \in \mathscr{P}[a, b]$ such that if $P$ is finer than $P_{\epsilon_1}$ then: \begin{align} \quad \sum_{k=1}^{n} [M_k(f) - m_k(f)] \Delta \alpha_k = U(P, f, \alpha) - L(P, f, \alpha) < \epsilon_1 = \frac{\epsilon}{2M} \quad (*) \end{align} We note that: \begin{align} \quad M_k(f^2) = \sup \{ [f(x)]^2 : x \in [x_{k-1}, x_k] \} = \left ( \sup \{ \mid f(x) \mid : x \in [x_{k-1}, x_k] \} \right )^2 = [M_k (\mid f \mid)]^2 \end{align} \begin{align} \quad m_k(f^2) = \inf \{ [f(x)]^2 : x \in [x_{k-1}, x_k] \} = \left ( \inf \{ \mid f(x) \mid : x \in [x_{k-1}, x_k] \} \right )^2 = [m_k (\mid f \mid)]^2 \end{align} Hence, if $P_{\epsilon} = P_{\epsilon_1}$, then for $P$ finer than $P_{\epsilon}$ we have that $(*)$ holds and so: \begin{align} \quad M_k(f^2) - m_k(f^2) = M_k(\mid f \mid)^2 - m_k(\mid f \mid)^2 = [M_k(\mid f \mid) + m_k(\mid f \mid)][M_k(\mid f \mid) - m_k(\mid f \mid)] \leq 2M[M_k(\mid f \mid) - m_k(\mid f \mid)] \end{align} Multiplying by $\Delta \alpha_k$ and taking the sum from $k = 1$ to $k = n$ gives us that: \begin{align} \quad U(P, f^2, \alpha) - L(P, f^2, \alpha) = \sum_{k=1}^{n} [M_k(f^2) - m_k(f^2)] \Delta \alpha_k \leq \sum_{k=1}^{n} 2M[M_k(\mid f \mid) - m_k (\mid f \mid)] \Delta \alpha_k = 2M \sum_{k=1}^{n} [M_k(\mid f \mid) - m_k(\mid f \mid)] \Delta \alpha_k \end{align} From The Absolute Value of Riemann-Stieltjes Integrals with Increasing Integrators page we see that then: \begin{align} \quad U(P, f^2, \alpha) - L(P, f^2, \alpha) \leq ... = 2M[U(P, \mid f \mid, \alpha) - L(P, \mid f \mid, \alpha)] \leq 2M[U(P, f, \alpha) - L(P, f, \alpha)] < 2M \epsilon_1 = 2M \frac{\epsilon}{2M} = \epsilon \end{align} So for all $\epsilon > 0$ there exists a partition $P_{\epsilon}$ such that if $P$ is finer than $P_{\epsilon}$ then $U(P, f^2, \alpha) - L(P, f^2, \alpha) < \epsilon$, so Riemann's condition is satisfied and $f^2$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$.
$\blacksquare$ It is very important to note that the converse of Theorem 1 is not true in general. For example, consider the function $f$ defined for all $x \in [0, 1]$ by $f(x) = \left\{\begin{matrix} 1 & \mathrm{if} \: x \: \mathrm{is \: irrational}\\ -1 & \mathrm{if} \: x \: \mathrm{is \: rational} \end{matrix}\right.$ and let $\alpha (x) = x$. Then $f^2(x) = 1$ on all of $[0, 1]$, which we already know is Riemann-Stieltjes integrable from the Riemann-Stieltjes Integrals with Constant Integrands page. However, $f$ itself is not Riemann-Stieltjes integrable. If $P = \{ 0 = x_0, x_1, ..., x_n = 1 \} \in \mathscr{P}[0, 1]$ is any partition, then for all $k \in \{ 1, 2, ..., n \}$ we have that $M_k(f) = \sup \{ f(x) : x \in [x_{k-1}, x_k] \} = 1$ and $m_k (f) = \inf \{ f(x) : x \in [x_{k-1}, x_k] \} = -1$ since every subinterval $[x_{k-1}, x_k]$ contains both rational and irrational numbers. Therefore: \begin{align} \quad U(P, f, x) - L(P, f, x) = \sum_{k=1}^{n} [M_k(f) - m_k(f)] \Delta \alpha_k = \sum_{k=1}^{n} 2 \Delta \alpha_k = 2(1 - 0) = 2 \end{align} But then $U(P, f, x) - L(P, f, x) = 2$ for all partitions $P \in \mathscr{P}[0, 1]$, so for $\epsilon_1 = 1 > 0$ every partition satisfies $U(P, f, x) - L(P, f, x) > \epsilon_1$. Hence $f$ does not satisfy Riemann's condition and is not Riemann-Stieltjes integrable with respect to $\alpha$ on $[0, 1]$.
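As a complement to the article above (not part of the original argument), the key estimate used in the proof of Theorem 1, namely $U(P, f^2, \alpha) - L(P, f^2, \alpha) \leq 2M [U(P, f, \alpha) - L(P, f, \alpha)]$, can be checked numerically in the special case $\alpha(x) = x$. The Python sketch below uses a sign-changing test function of our own choosing and approximates the supremum and infimum on each subinterval by dense sampling, so the computed quantities are only approximations of the Darboux sums.

import numpy as np

# Approximate U(P, g, x) - L(P, g, x) for alpha(x) = x on a uniform partition of [0, 1],
# estimating sup/inf on each subinterval by dense sampling.
f = lambda x: np.sin(3.0 * x) - 0.5          # sign-changing test function (our choice)
a, b, samples = 0.0, 1.0, 400

def darboux_gap(g, n_pieces):
    edges = np.linspace(a, b, n_pieces + 1)
    gap = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        vals = g(np.linspace(lo, hi, samples))
        gap += (vals.max() - vals.min()) * (hi - lo)
    return gap

M = np.abs(f(np.linspace(a, b, 10000))).max()  # approximate upper bound on |f|
for n in (8, 32, 128):
    gap_f = darboux_gap(f, n)
    gap_f2 = darboux_gap(lambda x: f(x) ** 2, n)
    print(f"n={n:4d}  gap(f)={gap_f:.4f}  gap(f^2)={gap_f2:.4f}  2M*gap(f)={2 * M * gap_f:.4f}")

On each partition the gap for $f^2$ stays below $2M$ times the gap for $f$, and both gaps shrink under refinement, which is consistent with Theorem 1.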
A Number is a member of a formal number sequence (a number set in an ordering relation with other numbers). AKA: Numeric Value, Scalar Quantity. It can be referenced by a Number Label. It can range from being a Conceptual Number to being a Number Mention. It can range from being a Natural Number, an Integer Number, a Rational Number, an Irrational Number (transcendental number), a Real Number, an Imaginary Number. It can range from being a Negative Number, a Non-Positive Number, a Non-Negative Number, or a Positive Number. It can be associated to an Arithmetic System. It can be an Input to: a Scalar-Input Operation, a Scalar-Input Function, a Scalar-Input Relation. It can be an Input to an Arithmetic Operation, e.g. Addition Operation, Multiplication Operation. It can be the Output of a Scalar-Output Function. It can range from being a Small Number to being a Large Number. It can range from being an Observed Number to being a Predicted Number. Example(s): [math]1[/math]. [math]\sqrt{-\pi}[/math]. Counter-Example(s): an Ordinal Value, such as large. a Class Value, such as red, or a telephone number, or a serial number. a Structured Object, such as a parsed sentence. See: Scalar Statistic, Vector, Scalar Product, Vector Length Function, Mathematical Object, Counting, Nominal Number, Natural Number, Numeral System. (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/number Retrieved:2016-10-1. A number is a mathematical object used to count, measure, and label. The original examples are the natural numbers 1, 2, 3, and so forth. A notational symbol that represents a number is called a numeral. In addition to their use in counting and measuring, numerals are often used for labels (as with telephone numbers), for ordering (as with serial numbers), and for codes (as with ISBNs). In common usage, number may refer to a symbol, a word, or a mathematical abstraction. In mathematics, the notion of number has been extended over the centuries to include 0, negative numbers, rational numbers such as [math] \frac{1}{2} [/math] and [math] -\frac{2}{3} [/math], real numbers such as [math] \sqrt{2} [/math] and [math] \pi [/math], complex numbers, which extend the real numbers by including [math] \sqrt{-1} [/math], and sometimes additional objects. Calculations with numbers are done with arithmetical operations, the most familiar being addition, subtraction, multiplication, division, and exponentiation. Their study or usage is called arithmetic. The same term may also refer to number theory, the study of the properties of the natural numbers. Besides their practical uses, numbers have cultural significance throughout the world.[1] [2] For example, in Western society the number 13 is regarded as unlucky, and "a million" may signify "a lot."[1] Though it is now regarded as pseudoscience, numerology, the belief in a mystical significance of numbers, permeated ancient and medieval thought.[3] Numerology heavily influenced the development of Greek mathematics, stimulating the investigation of many problems in number theory which are still of interest today.[3] During the 19th century, mathematicians began to develop many different abstractions which share certain properties of numbers and may be seen as extending the concept. Among the first were the hypercomplex numbers, which consist of various extensions or modifications of the complex number system.
Today, number systems are considered important special examples of much more general categories such as rings and fields, and the application of the term "number" is a matter of convention, without fundamental significance. [4] 1. Gilsdorf, Thomas E. Introduction to Cultural Mathematics: With Case Studies in the Otomies and Incas, John Wiley & Sons, Feb 24, 2012. 2. Restivo, S. Mathematics in Society and History, Springer Science & Business Media, Nov 30, 1992. 3. Ore, Oystein. Number Theory and Its History, Courier Dover Publications. 4. Gouvea, Fernando Q. The Princeton Companion to Mathematics, Chapter II.1, "The Origins of Modern Mathematics", p. 82. Princeton University Press, September 28, 2008. ISBN 978-0691118802. the property possessed by a sum or total or indefinite quantity of units or individuals; "he had a number of chores to do"; "the number of ... a concept of quantity involving zero and units; "every number has a unique position in the sequence" act: a short theatrical performance that is part of a longer program; "he did his act three times every evening"; "she had a catchy little routine"; "it was one of the best numbers he ever did" phone number: the number is used in calling a particular telephone; "he has an unlisted number" numeral: a symbol used to represent a number; "he learned to write the numerals before he went to school" total: add up in number or quantity; "The bills amounted to $2,000"; "The bill came to $2,000" issue: one of a series published periodically; "she found an old issue of the magazine in her dentist's waiting room" give numbers to; "You should number the pages of the thesis" a select company of people; "I hope to become one of their number before I die" enumerate; "We must number the names of the great mathematicians" a numeral or string of numerals that is used for identification; "she refused to give them her Social Security number" count: put into a group; "The academy counts several Nobel Prize winners among its members" a clothing measurement; "a number 13 shoe" count: determine the number or amount of; "Can you count the books on your shelf?"; "Count your change" the grammatical category for the forms of nouns and pronouns and verbs that are used depending on the number of entities involved (singular or dual or plural); "in English the subject and the verb must agree in number" place a limit on the number of an item of merchandise offered for sale; "she preferred the black nylon number"; "this sweater is an all-wool number" (WordNet, 2009) ⇒ http://wordnetweb.princeton.edu/perl/webwn?s=scalar S: (n) scalar (a variable quantity that cannot be resolved into components) S: (adj) scalar (of or relating to a musical scale) "he played some basic scalar patterns on his guitar" S: (adj) scalar (of or relating to a directionless magnitude (such as mass or speed etc.) that is completely specified by its magnitude) "scalar quantity" http://en.wiktionary.org/wiki/scalar 1. (mathematics) Having magnitude but not direction 2. Of, or relating to scale 1. (mathematics) A quantity that has magnitude but not direction; compare vector 2. (electronics) An amplifier whose output is a constant multiple of its input http://www.math.com/tables/oddsends/vectordefs.htm Definition: A scalar, generally speaking, is another name for "real number." Definition: A vector of dimension n is an ordered collection of n elements, which are called components. … It can represent magnitude and direction simultaneously.
Retrieved from "http://www.gabormelli.com/RKB/index.php?title=Number&oldid=542959" About GM-RKB
University students and staff able to maintain low daily contact numbers during various COVID-19 guideline periods Adam Trickey, Emily Nixon, Hannah Christensen, Adam Finn, Amy Thomas, Caroline Relton, Clara Montgomery, Gibran Hemani, Jane Metz, Josephine G. Walker, Katy Turner, Rachel Kwiatkowska, Sarah Sauchelli, Leon Danon, Ellen Brooks-Pollock Published online by Cambridge University Press: 10 August 2021, e169 UK universities re-opened in September 2020, amidst the coronavirus epidemic. During the first term, various national social distancing measures were introduced, including banning groups of >6 people and the second lockdown in November; however, outbreaks among university students occurred. We aimed to measure the University of Bristol staff and student contact patterns via an online, longitudinal survey capturing self-reported contacts on the previous day. We investigated the change in contacts associated with COVID-19 guidance periods: post-first lockdown (23/06/2020–03/07/2020), relaxed guidance period (04/07/2020–13/09/2020), 'rule-of-six' period (14/09/2020–04/11/2020) and the second lockdown (05/11/2020–25/11/2020). In total, 722 staff (4199 responses) and 738 students (1906 responses) were included in the study. For staff, daily contacts were higher in the relaxed guidance and 'rule-of-six' periods than in the post-first lockdown and second lockdown periods. Mean student contacts dropped between the 'rule-of-six' and second lockdown periods. For both staff and students, the proportion meeting with groups larger than six dropped between the 'rule-of-six' period and the second lockdown period, although it was higher for students than for staff. Our results suggest university staff and students responded to national guidance by altering their social contacts. Most contacts during the second lockdown were household contacts.
The response in staff and students was similar, suggesting that students can adhere to social distancing guidance while at university. The number of contacts recorded for both staff and students were much lower than those recorded by previous surveys in the UK conducted before the COVID-19 pandemic. A framework for conceptualizing leadership in conservation Seth A. Webb, Brett Bruyere, Matt Halladay, Sarah Walker Journal: Oryx , First View Published online by Cambridge University Press: 21 June 2021, pp. 1-7 Conservation challenges occur in complex social-ecological systems that require scientists and practitioners to recognize and embrace that humans are active agents within these systems. This interdependence of the social and ecological components of systems necessitates effective leadership to address and solve conservation problems successfully. Although conservation practitioners increasingly recognize leadership as critical to achieve conservation goals, clarity about the term leadership remains elusive in terms of specific strategies and behaviours. Our objective in this review of conservation leadership scholarship was to build on prior literature to conceptualize and define the behavioural leadership strategies that lead to successful conservation outcomes. Following an initial review of more than 1,200 peer-reviewed publications, we conducted a systematic review of 59 articles utilizing an inductive analysis approach and identified a set of five leadership domains that contribute to positive conservation outcomes: (1) stakeholder engagement, (2) trust, (3) vision, (4) individual champion, and (5) excellence in internal attributes. Each domain is defined by 2–4 behaviours that we consider leadership practices. To sustain meaningful progress toward global conservation of biodiversity, conservation scientists and practitioners must embrace and invest in leadership as an integral component of solving our collective conservation challenges. The Origins of Asset Management from 1700 to 1960: Towering Investors By Nigel Edward Morecroft. Palgrave Studies in the History of Finance, 1st ed. 2017. Sarah Walker Journal: Journal of Pension Economics & Finance , First View Prevalence and 1-year incidence of HIV-associated neurocognitive disorder (HAND) in adults aged ≥50 years attending standard HIV clinical care in Kilimanjaro, Tanzania Aidan Flatt, Tom Gentry, Johanna Kellett-Wright, Patrick Eaton, Marcella Joseph, Sarah Urasa, William Howlett, Marieke Dekker, Aloyce Kisoli, Jane Rogathe, Lindsay Henderson, Thomas Lewis, Jessica Thornton, Judith McCartney, Vanessa Yarwood, Charlotte Irwin, Elizabeta B. Mukaetova-Ladinska, Rufus Akinyemi, William K. Gray, Richard W. Walker, Catherine L. Dotchin, Andrew-Leon S. Quaker, Philip C. Makupa, Stella-Maria Paddick Journal: International Psychogeriatrics , First View HIV-associated neurocognitive disorders (HANDs) are prevalent in older people living with HIV (PLWH) worldwide. HAND prevalence and incidence studies of the newly emergent population of combination antiretroviral therapy (cART)-treated older PLWH in sub-Saharan Africa are currently lacking. We aimed to estimate HAND prevalence and incidence using robust measures in stable, cART-treated older adults under long-term follow-up in Tanzania and report cognitive comorbidities. A systematic sample of consenting HIV-positive adults aged ≥50 years attending routine clinical care at an HIV Care and Treatment Centre during March–May 2016 and followed up March–May 2017. 
HAND was assessed by a consensus panel using Frascati criteria, based on a detailed locally normed low-literacy neuropsychological battery, structured neuropsychiatric clinical assessment, and collateral history. Demographic and etiological factors were recorded by self-report and clinical records. In this cohort (n = 253, 72.3% female, median age 57), HAND prevalence was 47.0% (95% CI 40.9–53.2, n = 119) despite well-managed HIV disease (Mn CD4 516 (98-1719), 95.5% on cART). Of these, 64 (25.3%) had asymptomatic neurocognitive impairment, 46 (18.2%) mild neurocognitive disorder, and 9 (3.6%) HIV-associated dementia. One-year incidence was high (37.2%, 95% CI 25.9 to 51.8), but some reversibility (17.6%, 95% CI 10.0–28.6, n = 16) was observed. HAND appears highly prevalent in older PLWH in this setting, where the demographic profile differs markedly from that of high-income cohorts, and comorbidities are frequent. Incidence and reversibility also appear high. Future studies should focus on etiologies and potentially reversible factors in this setting. Remnant radio galaxies discovered in a multi-frequency survey GAMA Legacy ATCA Southern Survey Benjamin Quici, Natasha Hurley-Walker, Nicholas Seymour, Ross J. Turner, Stanislav S. Shabala, Minh Huynh, H. Andernach, Anna D. Kapińska, Jordan D. Collier, Melanie Johnston-Hollitt, Sarah V. White, Isabella Prandoni, Timothy J. Galvin, Thomas Franzen, C. H. Ishwara-Chandra, Sabine Bellstedt, Steven J. Tingay, Bryan M. Gaensler, Andrew O'Brien, Johnathan Rogers, Kate Chow, Simon Driver, Aaron Robotham Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021 Published online by Cambridge University Press: 09 February 2021, e008 The remnant phase of a radio galaxy begins when the jets launched from an active galactic nucleus are switched off. To study the fraction of radio galaxies in a remnant phase, we take advantage of an $8.31$ deg$^2$ subregion of the GAMA 23 field, which comprises surveys covering the frequency range 0.1–9 GHz. We present a sample of 104 radio galaxies compiled from observations conducted by the Murchison Widefield Array (216 MHz), the Australian Square Kilometre Array Pathfinder (887 MHz), and the Australia Telescope Compact Array (5.5 GHz). We adopt an 'absent radio core' criterion to identify 10 radio galaxies showing no evidence for an active nucleus. We classify these as new candidate remnant radio galaxies. Seven of these objects still display compact emitting regions within the lobes at 5.5 GHz; at this frequency the emission is short-lived, implying a recent jet switch off. On the other hand, only three show evidence of aged lobe plasma through the presence of an ultra-steep spectrum ($\alpha<-1.2$) and a diffuse, low surface brightness radio morphology. The predominant fraction of young remnants is consistent with a rapid fading during the remnant phase. Within our sample of radio galaxies, our observations constrain the remnant fraction to $4\%\lesssim f_{\mathrm{rem}} \lesssim 10\%$; the lower limit comes from the limiting case in which all remnant candidates with hotspots are simply active radio galaxies with faint, undetected radio cores. Finally, we model the synchrotron spectrum arising from a hotspot to show that hotspots can persist for 5–10 Myr at 5.5 GHz after the jets switch off; radio emission arising from such hotspots can therefore be expected in an appreciable fraction of genuine remnants. Does the maturation of early sleep patterns predict language ability at school entry? A Born in Bradford study Victoria C. P. KNOWLAND, Sam BERENS, M.
Gareth GASKELL, Sarah A. WALKER, Lisa-Marie HENDERSON Journal: Journal of Child Language / Volume 49 / Issue 1 / January 2022 Published online by Cambridge University Press: 03 February 2021, pp. 1-23 Children's vocabulary ability at school entry is highly variable and predictive of later language and literacy outcomes. Sleep is potentially useful in understanding and explaining that variability, with sleep patterns being predictive of global trajectories of language acquisition. Here, we looked to replicate and extend these findings. Data from 354 children (without English as an additional language) in the Born in Bradford study were analysed, describing the mean intercepts and linear trends in parent-reported day-time and night-time sleep duration over five time points between 6 and 36 months-of-age. The mean difference between night-time and day-time sleep was predictive of receptive vocabulary at age five, with more night-time sleep relative to day-time sleep predicting better language. An exploratory analysis suggested that socioeconomic status was predictive of vocabulary outcomes, with sleep patterns partially mediating this relationship. We suggest that the consolidation of sleep patterns acts as a driver of early language development. Do existing real-world data sources generate suitable evidence for the HTA of medical devices in Europe? Mapping and critical appraisal Benedetta Pongiglione, Aleksandra Torbica, Hedwig Blommestein, Saskia de Groot, Oriana Ciani, Sarah Walker, Florian Dams, Rudolf Blankart, Meilin Mollenkamp, Sándor Kovács, Rosanna Tarricone, Mike Drummond Journal: International Journal of Technology Assessment in Health Care / Volume 37 / Issue 1 / 2021 Published online by Cambridge University Press: 26 April 2021, e62 Technological and computational advancements offer new tools for the collection and analysis of real-world data (RWD). Considering the substantial effort and resources devoted to collecting RWD, a greater return would be achieved if real-world evidence (RWE) was effectively used to support Health Technology Assessment (HTA) and decision making on medical technologies. A useful question is: To what extent are RWD suitable for generating RWE? We mapped existing RWD sources in Europe for three case studies: hip and knee arthroplasty, transcatheter aortic valve implantation (TAVI) and mitral valve repair (TMVR), and robotic surgery procedures. We provided a comprehensive assessment of their content and appropriateness for conducting the HTA of medical devices. The identification of RWD sources was performed combining a systematic search on PubMed with gray literature scoping, covering fifteen European countries. We identified seventy-one RWD sources on arthroplasties; ninety-five on TAVI and TMVR; and seventy-seven on robotic procedures. The number, content, and integrity of the sources varied dramatically across countries. Most sources included at least one health outcome (97.5%), with mortality and rehospitalization/reoperation the most common; 80% of sources included resource outcomes, with length of stay the most common, and comparators were available in almost 70% of sources. RWD sources bear the potential for the HTA of medical devices. The main challenges are data accessibility, a lack of standardization of health and economic outcomes, and inadequate comparators. These findings are crucial to enabling the incorporation of RWD into decision making and represent a readily available tool for getting acquainted with existing information sources. 
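To make the sort of appraisal reported in the abstract above concrete, the short sketch below tallies how many sources in a registry offer health outcomes, resource outcomes and comparators. It is a minimal illustration only: the field names and the toy records are invented for the example and are not data from the study.

```python
# Minimal sketch: tallying attributes of real-world data (RWD) sources.
# The toy registry stands in for the 243 sources mapped in the study
# (71 arthroplasty, 95 TAVI/TMVR, 77 robotic surgery); all records are invented.
from collections import Counter

toy_registry = [
    {"case_study": "arthroplasty", "health_outcome": True,  "resource_outcome": True,  "comparator": True},
    {"case_study": "TAVI/TMVR",    "health_outcome": True,  "resource_outcome": False, "comparator": True},
    {"case_study": "robotic",      "health_outcome": True,  "resource_outcome": True,  "comparator": False},
    {"case_study": "arthroplasty", "health_outcome": False, "resource_outcome": True,  "comparator": True},
]

n_sources = len(toy_registry)
for attribute in ("health_outcome", "resource_outcome", "comparator"):
    share = 100 * sum(src[attribute] for src in toy_registry) / n_sources
    print(f"{attribute}: {share:.1f}% of sources")

# Number of identified sources per case study
print(Counter(src["case_study"] for src in toy_registry))
```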
Efficacy of the Zero Suicide framework in reducing recurrent suicide attempts: cross-sectional and time-to-recurrent-event analyses Nicolas J. C. Stapelberg, Jerneja Sveticic, Ian Hughes, Alice Almeida-Crasto, Taralina Gaee-Atefi, Neeraj Gill, Diana Grice, Ravikumar Krishnaiah, Luke Lindsay, Carla Patist, Heidy Van Engelen, Sarah Walker, Matthew Welch, Sabine Woerwag-Mehta, Kathryn Turner Journal: The British Journal of Psychiatry / Volume 219 / Issue 2 / August 2021 The Zero Suicide framework is a system-wide approach to prevent suicides in health services. It has been implemented worldwide but has a poor evidence-base of effectiveness. To evaluate the effectiveness of the Zero Suicide framework, implemented in a clinical suicide prevention pathway (SPP) by a large public mental health service in Australia, in reducing repeated suicide attempts after an index attempt. A total of 604 persons with 737 suicide attempt presentations were identified between 1 July and 31 December 2017. Relative risk for a subsequent suicide attempt within various time periods was calculated using cross-sectional analysis. Subsequently, a 10-year suicide attempt history (2009–2018) for the cohort was used in time-to-recurrent-event analyses. Placement on the SPP reduced risk for a repeated suicide attempt within 7 days (RR = 0.29; 95% CI 0.11–0.75), 14 days (RR = 0.38; 95% CI 0.18–0.78), 30 days (RR = 0.55; 95% CI 0.33–0.94) and 90 days (RR = 0.62; 95% CI 0.41–0.95). Time-to-recurrent event analysis showed that SPP placement extended time to re-presentation (HR = 0.65; 95% CI 0.57–0.67). A diagnosis of personality disorder (HR = 2.70; 95% CI 2.03–3.58), previous suicide attempt (HR = 1.78; 95% CI 1.46–2.17) and Indigenous status (HR = 1.46; 95% CI 0.98–2.25) increased the hazard for re-presentation, whereas older age decreased it (HR = 0.92; 95% CI 0.86–0.98). The effect of the SPP was similar across all groups, reducing the risk of re-presentation to about 65% of that seen in those not placed on the SPP. This paper demonstrates a reduction in repeated suicide attempts after an index attempt and a longer time to a subsequent attempt for those receiving multilevel care based on the Zero Suicide framework. Reduction of Central-Line–Associated Bloodstream Infections in a Spinal Cord Injury Unit Stephanie L. Baer, Amy Halcyon Larsh, Annalise Prunier, Victoria Thurmond, Donna Goins, Nancy Hickox, Mary Gardenhire, Tiffany Walker, Sarah Bernal, Maryea Nowacki, Lenora Griffin, Heather Hunter-Watson Journal: Infection Control & Hospital Epidemiology / Volume 41 / Issue S1 / October 2020 Published online by Cambridge University Press: 02 November 2020, p. s370 Background: Central-line–associated bloodstream infections (CLABSIs) are a complication of indwelling central venous catheters, which increase morbidity, mortality, and cost to patients. Objective: Due to increased rates in a spinal cord injury unit (SCIU), a performance improvement project was started to reduce CLABSI in the patient population. Methods: To reduce the incidence of CLABSI, a prevention bundle was adopted, and a peer-surveillance tool was developed to monitor compliance with the bundle. Staff were trained to monitor their peers and submit weekly surveillance. Audits were conducted by the clinical nurse leader with accuracy feedback. Bundle peer-surveillance was implemented in February of 2018 with data being fed back to leadership, peer monitors, and stakeholders. 
Gaps in compliance were addressed with peer-to-peer education, changes in documentation requirements, and meetings to improve communication and reduce line days. In addition, the use of an antiseptic-impregnated disc for vascular accesses was implemented for dressing changes. Further quality improvement cycles during the first 2 quarters of fiscal year 2019 included service-wide education reinforcement, identification of variance in practice, and reporting to staff and stakeholders. Results: CLABSI bundle compliance increased from 67% to 98% between February and October 2018. The weekly audit reporting accuracy improved from 33% to 100% during the same period. Bundle compliance was sustained through the fourth quarter of 2019 at 98%, and audit accuracy was 99%. The initial CLABSI rates in the quarter prior to the intervention were 6.10 infections per 1,000 line days for 1 of the 3 SCIUs and 2.68 infections per 1,000 line days for the service overall. After the action plan was initiated, no CLABSIs occurred for the next 3 quarters in all SCIUs despite unchanged use of central lines (5,726 line days in 2018). The improvement was sustained, and the line days decreased slightly for 2019, with a fiscal year rate of 0.61 per 1,000 line days (ie, 3 CLABSIs in 4,927 central-line days). Conclusions: The incidence of CLABSI in the SCIU was reduced by an intensive surveillance intervention to perform accurate peer monitoring of bundle compliance with weekly feedback, communication, and education strategies, improvement of the documentation, and the use of antiseptic-impregnated discs for dressings. Despite the complexity of the patient population requiring long-term central lines, the CLABSI rate was greatly impacted by evidence-based interventions coupled with reinforcement of adherence to the bundle. Funding: None Disclosures: None The GLEAM 4-Jy (G4Jy) Sample: I. Definition and the catalogue Sarah V. White, Thomas M. O. Franzen, Chris J. Riseley, O. Ivy Wong, Anna D. Kapińska, Natasha Hurley-Walker, Joseph R. Callingham, Kshitij Thorat, Chen Wu, Paul Hancock, Richard W. Hunstead, Nick Seymour, Jesse Swan, Randall Wayth, John Morgan, Rajan Chhetri, Carole Jackson, Stuart Weston, Martin Bell, Bi-Qing For, B. M. Gaensler, Melanie Johnston-Hollitt, André Offringa, Lister Staveley-Smith Published online by Cambridge University Press: 01 June 2020, e018 The Murchison Widefield Array (MWA) has observed the entire southern sky (Declination, $\delta < 30^{\circ}$) at low radio frequencies, over the range 72–231 MHz. These observations constitute the GaLactic and Extragalactic All-sky MWA (GLEAM) Survey, and we use the extragalactic catalogue (EGC; Galactic latitude $|b| > 10^{\circ}$) to define the GLEAM 4-Jy (G4Jy) Sample. This is a complete sample of the 'brightest' radio sources ($S_{\textrm{151\,MHz}}>4\,\text{Jy}$), the majority of which are active galactic nuclei with powerful radio jets. Crucially, low-frequency observations allow the selection of such sources in an orientation-independent way (i.e. minimising the bias caused by Doppler boosting, inherent in high-frequency surveys). We then use higher-resolution radio images, and information at other wavelengths, to morphologically classify the brightest components in GLEAM. We also conduct cross-checks against the literature and perform internal matching, in order to improve sample completeness (which is estimated to be $>95.5\%$).
This results in a catalogue of 1863 sources, making the G4Jy Sample over 10 times larger than that of the revised Third Cambridge Catalogue of Radio Sources (3CRR; $S_{\textrm{178\,MHz}}>10.9\,\text{Jy}$ ). Of these G4Jy sources, 78 are resolved by the MWA (Phase-I) synthesised beam ( $\sim2$ arcmin at 200MHz), and we label 67% of the sample as 'single', 26% as 'double', 4% as 'triple', and 3% as having 'complex' morphology at $\sim1\,\text{GHz}$ (45 arcsec resolution). We characterise the spectral behaviour of these objects in the radio and find that the median spectral index is $\alpha=-0.740 \pm 0.012$ between 151 and 843MHz, and $\alpha=-0.786 \pm 0.006$ between 151MHz and 1400MHz (assuming a power-law description, $S_{\nu} \propto \nu^{\alpha}$ ), compared to $\alpha=-0.829 \pm 0.006$ within the GLEAM band. Alongside this, our value-added catalogue provides mid-infrared source associations (subject to 6" resolution at 3.4 $\mu$ m) for the radio emission, as identified through visual inspection and thorough checks against the literature. As such, the G4Jy Sample can be used as a reliable training set for cross-identification via machine-learning algorithms. We also estimate the angular size of the sources, based on their associated components at $\sim1\,\text{GHz}$ , and perform a flux density comparison for 67 G4Jy sources that overlap with 3CRR. Analysis of multi-wavelength data, and spectral curvature between 72MHz and 20GHz, will be presented in subsequent papers, and details for accessing all G4Jy overlays are provided at https://github.com/svw26/G4Jy. The GLEAM 4-Jy (G4Jy) Sample: II. Host galaxy identification for individual sources Sarah V. White, Thomas M. O. Franzen, Chris J. Riseley, O. Ivy Wong, Anna D. Kapińska, Natasha Hurley-Walker, Joseph R. Callingham, Kshitij Thorat, Chen Wu, Paul Hancock, Richard W. Hunstead, Nick Seymour, Jesse Swan, Randall Wayth, John Morgan, Rajan Chhetri, Carole Jackson, Stuart Weston, Martin Bell, B. M. Gaensler, Melanie Johnston–Hollitt, André Offringa, Lister Staveley–Smith The entire southern sky (Declination, $\delta< 30^{\circ}$ ) has been observed using the Murchison Widefield Array (MWA), which provides radio imaging of $\sim$ 2 arcmin resolution at low frequencies (72–231 MHz). This is the GaLactic and Extragalactic All-sky MWA (GLEAM) Survey, and we have previously used a combination of visual inspection, cross-checks against the literature, and internal matching to identify the 'brightest' radio-sources ( $S_{\mathrm{151\,MHz}}>4$ Jy) in the extragalactic catalogue (Galactic latitude, $|b| >10^{\circ}$ ). We refer to these 1 863 sources as the GLEAM 4-Jy (G4Jy) Sample, and use radio images (of ${\leq}45$ arcsec resolution), and multi-wavelength information, to assess their morphology and identify the galaxy that is hosting the radio emission (where appropriate). Details of how to access all of the overlays used for this work are available at https://github.com/svw26/G4Jy. Alongside this we conduct further checks against the literature, which we document here for individual sources. Whilst the vast majority of the G4Jy Sample are active galactic nuclei with powerful radio-jets, we highlight that it also contains a nebula, two nearby, star-forming galaxies, a cluster relic, and a cluster halo. There are also three extended sources for which we are unable to infer the mechanism that gives rise to the low-frequency emission. 
In the G4Jy catalogue we provide mid-infrared identifications for 86% of the sources, and flag the remainder as: having an uncertain identification (129 sources), having a faint/uncharacterised mid-infrared host (126 sources), or it being inappropriate to specify a host (2 sources). For the subset of 129 sources, there is ambiguity concerning candidate host-galaxies, and this includes four sources (B0424–728, B0703–451, 3C 198, and 3C 403.1) where we question the existing identification. Chapter 23 - Fetal Growth Restriction: Placental Basis and Implications for Clinical Practice from Fetal Growth and Well-being By John Kingdom, Melissa Walker, Sascha Drewlo, Sarah Keating Edited by Mark D. Kilby, University of Birmingham, Anthony Johnson, Dick Oepkes Book: Fetal Therapy Advances in obstetrical ultrasound technology, combined with newer magnetic resonance imaging (MRI) methods, cell-free fetal DNA testing in maternal blood, and comprehensive molecular testing of the fetus, have greatly improved prenatal diagnostic capabilities in the context of fetal growth restriction (FGR) as shown in Chapter 24. Increased understanding and use of these resources means the likelihood of recognizing a fetal basis for FGR before birth, and managing it accordingly, will increase. The presumption of a placental basis for FGR dominates everyday clinical practice, yet paradoxically at present the application of current knowledge of what constitutes true 'placental insufficiency' has not translated into improved maternal care and perinatal outcomes. As an example, 33% of 650 women recruited to the landmark DIGITAT (Disproportionate Intrauterine Growth Intervention Trial at Term) trial had no postnatal evidence of FGR (defined as birth weight <10th percentile) [1]. Since obstetricians manage suspected FGR prior to delivery, they fear a risk of antepartum stillbirth and deploy frequent short-term tests of fetal well-being (biophysical profile, Doppler ultrasound, and non-stress tests), even via hospital admission, in the absence of any objective placental diagnosis. Fortunately, recent advances in the understanding of the placental basis of FGR have led to much-improved precision in both screening for the disease [2, 3] and in the prenatal diagnosis of the placental basis of FGR [4]. This chapter is designed to equip obstetricians, midwives and maternal–fetal medicine sub-specialists with key concepts in placental development and pathology that directly contribute to the care of women with suspected FGR pregnancies. Nothing About Us, without Us." How Community-Based Participatory Research Methods Were Adapted in an Indigenous End-of-Life Study Using Previously Collected Data—ERRATUM Sarah Funnell, Peter Tanuseputro, Angeline Letendre, Lisa Bourque Bearskin, Jennifer Walker Journal: Canadian Journal on Aging / La Revue canadienne du vieillissement / Volume 39 / Issue 2 / June 2020 Published online by Cambridge University Press: 02 December 2019, p. 330 "Nothing About Us, without Us." How Community-Based Participatory Research Methods Were Adapted in an Indigenous End-of-Life Study Using Previously Collected Data Indigenous health research in Canada has a chequered past and has been identified as problematic and lacking in appropriate collaboration with Indigenous people. The Tri-Council Policy Statement on Ethical Conduct for Research Involving Humans, Chapter 9 describes ethical conduct of research regarding First Nations, Inuit, and Métis Peoples. 
First Nations Ownership, Control, Access, and Possession (OCAP®) Principles highlight the necessity of Indigenous engagement and governance. To ensure that the aims and activities of the research being developed are in full and meaningful partnership with Indigenous peoples and communities, community-based participatory research (CBPR) methods provide a process in which full engagement is possible. Research utilizing secondary data sets, such as routinely collected health administrative data, should no longer be excluded from this approach. Our aim was to describe how our research team of academic researchers and a national Indigenous health organization adapted CBPR methods in a research project using previously collected data to examine end-of-life health care service delivery gaps for Indigenous people in Ontario. We describe the process of how we developed our research partnership and how grounding principles and Indigenous ways of knowing guided our work together. Through the adaptation of CBPR methods, our research partnership illustrates a process of engagement that can guide others hoping to conduct Indigenous health research using previously collected data. We also present a transparent research agreement negotiated equally by a national Indigenous health organization and research scientists, which can also be used as a framework for others wishing to establish similar research partnerships. Ensuring that Indigenous perspectives are central to and reflected in the research process is essential when using health administrative data. Zooarchaeological Database Preservation, Multiscalar Data Integration, and the Collaboration of the Eastern Archaic Faunal Working Group Sarah W. Neusius, Bonnie W. Styles, Tanya M. Peres, Renee B. Walker, George M. Crothers, Beverley A. Smith, Mona L. Colburn Journal: Advances in Archaeological Practice / Volume 7 / Issue 4 / November 2019 Data preservation, reuse, and synthesis are important goals in contemporary archaeological research that have been addressed by the recent collaboration of the Eastern Archaic Faunal Working Group (EAFWG). We used the Digital Archaeological Record (tDAR) to preserve 60 significant legacy faunal databases from 23 Archaic period archaeological sites located in several contiguous subregions of the interior North American Eastern Woodlands. In order to resolve the problem of synthesizing non-standardized databases, we used the ontology and integration tools available in tDAR to explore comparability and combine datasets so that our research questions about aquatic resource use during the Archaic could be addressed at multiple scales. The challenges of making digital databases accessible for reuse, including the addition of metadata, and of linking disparate data in queryable datasets are significant but worth the effort. Our experience provides one example of how collaborative research may productively resolve problems in making legacy data accessible and usable for synthetic archaeological research. Reporting sexual harassment: The role of psychological safety climate Sarah Singletary Walker, Enrica N. Ruggs, Regina M. Taylor, M. Lance Frazier Journal: Industrial and Organizational Psychology / Volume 12 / Issue 1 / March 2019 Conceptualization of depression amongst older adults in rural Tanzania: a qualitative study Kate Howorth, Stella-Maria Paddick, Jane Rogathi, Richard Walker, William Gray, Lloyd L. 
Oates, Damas Andrea, Ssenku Safic, Sarah Urasa, Irene Haule, Catherine Dotchin Journal: International Psychogeriatrics / Volume 31 / Issue 10 / October 2019 Depression in older people is likely to become a growing global health problem with aging populations. Significant cultural variation exists in beliefs about depression (terminology, symptomatology, and treatments) but data from sub-Saharan Africa are minimal. Low-resource interventions for depression have been effective in low-income settings but cannot be utilized without accurate diagnosis. This study aimed to achieve a shared understanding of depression in Tanzania in older people. Using a qualitative design, focus groups were conducted with participants aged 60 and over. Participants from rural villages of Kilimanjaro, Tanzania, were selected via randomized sampling using census data. Topic guides were developed including locally developed case vignettes. Transcripts were translated into English from Swahili and thematic analysis conducted. Ten focus groups were held with 81 participants. Three main themes were developed: a) conceptualization of depression by older people and differentiation from other related conditions ("too many thoughts," cognitive symptoms, affective and biological symptoms, wish to die, somatic symptoms, and its difference to other concepts); b) the causes of depression (inability to work, loss of physical strength and independence, lack of resources, family difficulties, chronic disease); c) management of depression (love and comfort, advice, spiritual support, providing help, medical help). This research expands our understanding of how depression presents in older Tanzanians and provides information about lay beliefs regarding causes and management options. This may allow development of culturally specific screening tools for depression that, in turn, increase diagnosis rates, support accurate diagnosis, improve service use, and reduce stigma. Tariffs and Trees: The Effects of the Austro-Hungarian Customs Union on Specialization and Land-Use Change Jennifer Alix-Garcia, Sarah Walker, Volker Radeloff, Jacek Kozak Journal: The Journal of Economic History / Volume 78 / Issue 4 / December 2018 Published online by Cambridge University Press: 02 October 2018, pp. 1142-1178 This article examines the impact of the 1850 Austro-Hungarian customs union on production land-use outcomes. Using newly digitized data from the Second Military Survey of the Habsburg Monarchy, we apply a spatial discontinuity design to estimate the impact of trade liberalization on land use. We find that the customs union increased cropland area by 8 percent per year in Hungary between 1850 and 1855, while forestland area decreased by 6 percent. We provide suggestive evidence that this result is not confounded by the emancipation of the serfs, population growth, or technological change in agriculture. Canadian Normative Data for Minimal Assessment of Cognitive Function in Multiple Sclerosis – CORRIGENDUM Lisa A.S. Walker, David Marino, Jason A. Berard, Anthony Feinstein, Sarah A. Morrow, Denis Cousineau Journal: Canadian Journal of Neurological Sciences / Volume 45 / Issue 5 / September 2018 Published online by Cambridge University Press: 23 August 2018, p. 
604 Identification of an immune modulation locus utilising a bovine mammary gland infection challenge model Mathew D Littlejohn, Sally-Anne Turner, Caroline G Walker, Sarah D Berry, Kathryn Tiplady, Ric G Sherlock, Greg Sutherland, Simon Swift, Dorian Garrick, S Jane Lacy-Hulbert, Scott McDougall, Richard J Spelman, Russell G Snell, J Eric Hillerton Journal: Journal of Dairy Research / Volume 85 / Issue 2 / May 2018 Inflammation of the mammary gland following bacterial infection, commonly known as mastitis, affects all mammalian species. Although the aetiology and epidemiology of mastitis in the dairy cow are well described, the genetic factors mediating resistance to mammary gland infection are not well known, due in part to the difficulty in obtaining robust phenotypic information from sufficiently large numbers of individuals. To address this problem, an experimental mammary gland infection challenge was undertaken, using a Friesian-Jersey cross-breed F2 herd. A total of 604 animals received an intramammary infusion of Streptococcus uberis in one gland, and the clinical response over 13 milkings was used for linkage mapping and genome-wide association analysis. A quantitative trait locus (QTL) was detected on bovine chromosome 11 for clinical mastitis status using micro-satellite and Affymetrix 10 K SNP markers, and exome and genome sequence data from the six F1 sires of the experimental animals were then used to examine this region in more detail. A total of 485 sequence variants were typed in the QTL interval, and association mapping using these and an additional 37 986 genome-wide markers from the Illumina SNP50 bovine SNP panel revealed association with markers encompassing the interleukin-1 gene cluster locus. This study highlights a region on bovine chromosome 11, consistent with earlier studies, as conferring resistance to experimentally induced mammary gland infection, and newly prioritises the IL1 gene cluster for further analysis in genetic resistance to mastitis.
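As a rough illustration of the association-mapping step described in the abstract above, the sketch below applies a chi-square test to allele counts for a single marker, split by clinical mastitis status. The counts are hypothetical, and the real analysis relied on dedicated linkage and genome-wide association software rather than this simplified allelic test.

```python
# Minimal sketch of an allelic association test for one SNP (hypothetical counts).
# Rows: mastitis cases vs. controls; columns: counts of allele A vs. allele B.
from scipy.stats import chi2_contingency

allele_counts = [
    [180, 120],  # cases:    allele A, allele B
    [150, 154],  # controls: allele A, allele B
]

chi2, p_value, dof, expected = chi2_contingency(allele_counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3g}")

# In a genome-wide scan, a test like this would be repeated for every marker
# (e.g. the ~38,000 SNP50 markers mentioned above) and followed by a
# multiple-testing correction such as Bonferroni.
```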
BIMSA-YMSC Tsinghua Number Theory Seminar Speaker: Vesselin Dimitrov (Georgia Institute of Technology) Organizer: Hansheng Diao, Yueke Hu, Emmanuel Lecouturier, Cezar Lupu Time: Weekly/Biweekly Venue: Zoom ID: 293 812 9202 Passcode: BIMSA Note: Location & Time can change depending on the speaker's availability. This is a research seminar on topics related to number theory and its applications, which can broadly include related areas of interest such as analytic and algebraic number theory, algebra, combinatorics, algebraic and arithmetic geometry, cryptography, representation theory, etc. The speakers are also encouraged to make their talks more accessible for graduate-level students. For more information, please refer to: http://www.bimsa.cn/newsinfo/647938.html. Upcoming Talks: Title: Arithmetic holonomy bounds and their applications Speaker: Vesselin Dimitrov (Institute for Advanced Study in Princeton, Georgia Institute of Technology) Time: 20:00-21:30 Beijing time, Jan 17, 2023 Zoom ID: 293 812 9202 Passcode: BIMSA On the heels of the proof of the Unbounded Denominators conjecture (previously presented in this seminar by Yunqing Tang), we discuss an upgraded and refined form of our main technical tool in this area, the "arithmetic holonomicity theorem," of which we will detail a proof based on Bost's slopes method. Our treatment will lead us to a new alternative argument for the unbounded denominators theorem on the Fourier expansions of noncongruence modular forms. We will then conclude by explaining how the same arithmetic holonomicity theorem also leads to a proof of the irrationality of all products of two logarithms $\log(1+1/n)\log(1+1/m)$ for arbitrary integer pairs $(n,m)$ with $|1-m/n| < c$, where $c > 0$ is a positive absolute constant. This is a joint work with Frank Calegari and Yunqing Tang. Past Talks: Title: Finite Euler products and the Riemann Hypothesis Speaker: Steve M. Gonek (University of Rochester) Venue: BIMSA 1118 We investigate approximations of the Riemann zeta function by truncations of its Dirichlet series and Euler product, and then construct a parameterized family of non-analytic approximations to the zeta function. Apart from a few possible exceptions near the real axis, each function in the family satisfies a Riemann Hypothesis. When the parameter is not too large, the functions have roughly the same number of zeros as the zeta function, their zeros are all simple, and they repel. In fact, if the Riemann hypothesis is true, the zeros of these functions converge to those of the zeta function as the parameter increases, and between zeros of the zeta function the functions in the family tend to twice the zeta function. They may therefore be regarded as models of the Riemann zeta function. The structure of the functions explains the simplicity and repulsion of their zeros when the parameter is small. One might therefore hope to gain insight from them into the mechanism responsible for the corresponding properties of the zeros of the zeta function. Title: Bounds for standard L-functions Speaker: Paul Nelson (Aarhus University) Time: 15:30-16:30 Beijing time, Dec 13, 2022 We consider the standard L-function attached to a cuspidal automorphic representation of a general linear group. We present a proof of a subconvex bound in the t-aspect. More generally, we address the spectral aspect in the case of uniform parameter growth.
These results are the subject of the third paper linked below, building on the first two. Title: A Hardy space characterization of the zero-free region of the Riemann zeta function Speaker: Dongsheng Wu (BIMSA) In this talk, I will first introduce an equivalent statement of the Riemann Hypothesis in the framework of Hardy spaces in right half-planes. Then I will give a characterization of the zero-free region of the Riemann zeta function in this framework. I will explain the proof and discuss some related topics. This talk is based on a joint work with Fei Wei. Title: A proof of Kudla-Rapoport conjecture for Kramer models at ramified primes Speaker: Qiao He (University of Wisconsin-Madison) Time: Tues., 10:30-11:30am Beijing time, Nov 29, 2022 In this talk, I will first talk about the Kudla-Rapoport conjecture, which suggests a precise identity between arithmetic intersection numbers of special cycles on Rapoport-Zink space and derived local densities of hermitian forms. Then I will discuss how to modify the original conjecture over ramified primes and how to prove the modified conjecture. On the geometric side, we completely avoid explicit calculation of intersection numbers and the use of Tate's conjecture. On the analytic side, the key input is a surprisingly simple formula for the derived primitive local density. This talk is based on joint work with Chao Li, Yousheng Shi and Tonghai Yang. Title: Generalized Paley Graphs, Finite Field Hypergeometric Functions and Modular Forms Speaker: Dermot McCarthy (Texas Tech University) Time: 10:30-11:30 Beijing time, Nov 22, 2022 In 1955, Greenwood and Gleason proved that the two-color diagonal Ramsey number $R(4,4)$ equals 18. Key to their proof was constructing a self-complementary graph of order 17 which does not contain a complete subgraph of order four. This graph is one in the family of graphs now known as Paley graphs. In the 1980s, Evans, Pulham and Sheehan provided a simple closed formula for the number of complete subgraphs of order four of Paley graphs of prime order. Since then, generalized Paley graphs have been introduced. In this talk, we will discuss our recent work on extending the result of Evans, Pulham and Sheehan to generalized Paley graphs, using finite field hypergeometric functions. We also examine connections between our results and both multicolor diagonal Ramsey numbers and Fourier coefficients of modular forms. This is joint work with Madeline Locus Dawsey (UT Tyler) and Mason Springfield (Texas Tech University). (A short computational check of the order-17 Paley graph property mentioned above is sketched after the talk listings below.) Title: Quantitative weak approximation of rational points on quadrics Speaker: Zhizhong Huang (AMSS) Venue: W11, Ningzhai, Tsinghua University The classical Hasse-Minkowski theorem states that rational points on quadrics (if non-empty) satisfy weak approximation. We explain how Heath-Brown's delta circle method allows one to obtain a quantitative and effective version of this theorem, namely counting rational points of bounded height on quadrics satisfying prescribed local conditions with optimal error terms. We then discuss applications in intrinsic Diophantine approximation on quadrics. This is based on joint work in progress with M. Kaesberg, D. Schindler, A. Shut. Title: Equidistribution in Stochastic Dynamical Systems Speaker: Bella Tobin (Oregon State University) In arithmetic dynamics, one typically studies the behavior and arithmetic properties of a rational map under iteration.
Instead of iterating a single rational map, we will consider a countable family of rational maps, iterated according to some probability measure. We call such a system a stochastic dynamical system. As such a family can be infinite and may not be defined over a single number field, we introduce the concept of a generalized adelic measure, generalizing previous notions introduced by Favre and Rivera-Letelier and Mavraki and Ye. Generalized adelic measures are defined over the measure space of places of an algebraic closure of the rational numbers using the framework established by Allcock and Vaaler. This turns heights from sums into integrals. We prove an equidistribution result for generalized adelic measures, and in turn prove an equidistribution theorem for random backwards orbits for stochastic dynamical systems. This talk will include some background in arithmetic dynamics and will be suitable for graduate students. Title: Slopes of modular forms and ghost conjecture Speaker: Bin Zhao (Capital Normal University) Time: 16:00-17:00 Beijing time, Nov 1, 2022 In 2016, Bergdall and Pollack raised a conjecture towards the computation of the p-adic slopes of Hecke cuspidal eigenforms whose associated p-adic Galois representations satisfy the assumption that their mod p reductions become reducible when restricted to the p-decomposition group. In this talk, I will report the joint work with Ruochuan Liu, Nha Truong and Liang Xiao to prove this conjecture under mild assumptions. I will start with the statement of this conjecture and the intuition behind it. Then I will explain some strategies of our proof. If time permits, I will mention some arithmetic applications of this conjecture. Title: On $G$-isoshtukas over function fields. Speaker: Wansu Kim Time: 15:00-16:00 Beijing time, Oct 25, 2022 Let $F$ be a global function field, and let $G$ be a connected reductive group over $F$. In this talk, we will introduce the notion of $G$-isoshtukas, and discuss a classification result analogous to Kottwitz' classification of local and global $B(G)$. If $G=\mathrm{GL}_n$ then $\mathrm{GL}_n$-isoshtukas are nothing but $\varphi$-spaces of rank $n$ (which naturally arise as an isogeny class of rank-$n$ Drinfeld shtukas), and our classification result for $\mathrm{GL}_n$-isoshtukas can be read off from Drinfeld's classification of $\varphi$-spaces. This is a joint work with Paul Hamacher. Title: Counting polynomials with a prescribed Galois group Speaker: Vlad Matei (Simion Stoilow Institute of Mathematics of the Romanian Academy) Time: 15:30-16:30 Beijing time, Oct 18, 2022 (updated) An old problem, dating back to Van der Waerden, asks about counting irreducible degree $n$ polynomials with coefficients in the box $[-H,H]$ and prescribed Galois group. Van der Waerden was the first to show that $H^n+O(H^{n-\delta})$ of these polynomials have Galois group $S_n$, and he conjectured that the error term can be improved to $o(H^{n-1})$. Recently, Bhargava almost proved the van der Waerden conjecture, showing that there are $O(H^{n-1+\varepsilon})$ non-$S_n$ extensions, while Chow and Dietmann showed that there are $O(H^{n-1.017})$ non-$S_n$, non-$A_n$ extensions for $n\geq 3$ and $n\neq 7,8,10$. In joint work with Lior Bary-Soroker and Or Ben-Porath, we use a result of Hilbert to prove a lower bound for the case of $G=A_n$, and upper and lower bounds for $C_2$ wreath $S_{n/2}$.
The proof for $A_n$ can be viewed, on the geometric side, as constructing a morphism $\varphi$ from $A^{n/2}$ into the variety $z^2=\Delta(f)$, where each $\varphi_i$ is a quadratic form. For the upper bound for $C_2$ wreath $S_{n/2}$ we improve on the monic version of Widmer's result on counting polynomials with an imprimitive Galois group. We also pose some open problems/conjectures. Title: Multizeta for function fields Speaker: Dinesh Thakur (University of Rochester) We will discuss multizeta values for the function field case, explain various analogies and contrasts with the rational number field case, and discuss recent developments and open questions. Title: The plectic conjecture over local fields Speaker: Siyan Daniel Li-Huerta Time: 10:00-11:00 Beijing time, Sep 27, 2022 Room: BIMSA 1118 Affiliation: Harvard University Host: Hansheng Diao The étale cohomology of varieties over Q enjoys a Galois action. For Hilbert modular varieties, Nekovář-Scholl observed that this Galois action on the level of cohomology extends to a much larger profinite group: the plectic group. Motivated by applications to higher-rank Euler systems, they conjectured that this extension holds even on the level of complexes, as well as for more general Shimura varieties. We present a proof of the analog of this conjecture for local Shimura varieties. Consequently, we obtain results for the basic locus of global Shimura varieties, after restricting to a decomposition group. The proof crucially uses a mixed-characteristic version of fusion due to Fargues–Scholze. Title: The Tate conjecture over finite fields for varieties with $h^{2,0}=1$ Speaker: Ziquan Yang The past decade has witnessed great progress on the Tate conjecture for varieties with Hodge number $h^{2,0}=1$. Charles, Madapusi-Pera and Maulik completely settled the conjecture for K3 surfaces over finite fields, and Moonen proved the Mumford-Tate (and hence also Tate) conjecture for more or less arbitrary $h^{2,0}=1$ varieties in characteristic $0$. In this talk, I will explain that the Tate conjecture is true for mod $p$ reductions of complex projective $h^{2,0}=1$ varieties when $p\gg 0$, under a mild assumption on moduli. By refining this general result, we prove that in characteristic $p\geq 5$ the BSD conjecture holds true for a height $1$ elliptic curve $\mathcal{E}$ over a function field of genus $1$, as long as $\mathcal{E}$ is subject to the generic condition that all singular fibers in its minimal compactification are irreducible. We also prove the Tate conjecture over finite fields for a class of surfaces of general type and a class of Fano varieties. The overall philosophy is that the connection between the Tate conjecture over finite fields and the Lefschetz $(1, 1)$-theorem over $\mathbb{C}$ is very robust for $h^{2,0}=1$ varieties, and works well beyond the hyperkähler world. This is a joint work with Paul Hamacher and Xiaolei Zhao. Title: Elementary proofs of Zagier's formula for multiple zeta values and its odd variant Speaker: Li Lai (Tsinghua University) Time: 16:00-17:00 Beijing time, Jul 12, 2022 (updated) In 2012, Zagier proved a formula which expresses the multiple zeta values \[ H(a, b)=\zeta(\underbrace{2,2, \ldots, 2}_{a}, 3, \underbrace{2,2, \ldots, 2}_{b}) \] as explicit $\mathbb{Q}$-linear combinations of products $\pi^{2m}\zeta(2n+1)$ with $2m+2n+1=2a+2b+3$. Recently, Murakami proved an odd variant of Zagier's formula for the multiple $t$-values \[ T(a, b)=t(\underbrace{2,2, \ldots, 2}_{a}, 3, \underbrace{2,2, \ldots, 2}_{b}). \]
In this talk, we will give new and parallel proofs of these two formulas. Our proofs are elementary in the sense that they only involve the Taylor series of powers of the arcsine function and certain trigonometric integrals. Thus, these formulas become more transparent from the view of analysis. This is a joint work with Cezar Lupu and Derek Orr. Title: Spectrum of p-adic differential equations Speaker: Tinhinane Amina Azzouz (BIMSA) Time: 16:00-17:00 Beijing time, Jun 14, 2022 Abstract: In the ultrametric setting, linear differential equations present phenomena that do not appear over the complex field. Indeed, the solutions of such equations may fail to converge everywhere, even without the presence of poles. This leads to a non-trivial notion of the radius of convergence, and its knowledge permits us to obtain several interesting pieces of information about the equation. Notably, it controls the finite dimensionality of the de Rham cohomology. In practice, the radius of convergence is really hard to compute and it represents one of the most complicated features in the theory of p-adic differential equations. The radius of convergence can be expressed as the spectral norm of a specific operator, and a natural notion that refines it is the entire spectrum of that operator, in the sense of Berkovich. In our previous works, we introduced this invariant and computed the spectrum of differential equations over a power series field and in the p-adic case with constant coefficients. In this talk we will discuss our latest results about the shape of this spectrum for any linear differential equation, the strong link between the spectrum and all the radii of convergence, and notably a decomposition theorem provided by the spectrum. Title: Reciprocity, non-vanishing, and subconvexity of central L-values Speaker: Subhajit Jana (MPIM) Time: 13:30-15:00 Beijing time, May 26, 2022 Zoom ID: 844 745 8596 Passcode: 568789 Abstract: A reciprocity formula usually relates certain moments of two different families of L-functions which apparently have no connections between them. The first such formula was due to Motohashi, who related a fourth moment of Riemann zeta values on the central line with a cubic moment of certain automorphic central L-values for GL(2). In this talk, we describe some instances of reciprocity formulas both in low and high rank groups and give certain applications to subconvexity and non-vanishing of central L-values. These are joint works with Nunes and Blomer--Nelson. Title: Duals of linearized Reed-Solomon codes Speaker: Xavier Caruso (CNRS, Université de Bordeaux) Organiser: Emmanuel Lecouturier (BIMSA) Time: 16:00-17:00 Friday, 2022/1/7 Zoom: 638 227 8222 PW: BIMSA Error-correcting codes are a basic primitive which provides robust tools against noise in transmission. From the theoretical perspective, they are usually founded on beautiful properties of some mathematical objects. For example, one of the oldest constructions of codes is due to Reed and Solomon and takes advantage of the fact that the number of roots of a polynomial cannot exceed its degree. During the last decades, new problems in coding theory have emerged (e.g. secure network transmission or distributed storage) and new families of codes have been proposed. In this perspective, Martínez-Peñas has recently introduced a linearized version of Reed-Solomon codes which, roughly speaking, is obtained by replacing classical polynomials by a noncommutative version of them called Ore polynomials.
In this talk, I will revisit Martínez-Peñas' construction and give a new description of the duals of linearized Reed-Solomon codes. This will lead us to explore the fascinating world of noncommutative polynomials and notably develop a theory of residues for rational differential forms in this context. Title: Explicit realization of elements of the Tate-Shafarevich group constructed from Kolyvagin classes Speaker: Lazar Radicevic (Max Planck Institute, Bonn) Time: 16:00-17:00 Wednesday, 2021/12/15 Zoom: 3885289728 PW: BIMSA We consider the Kolyvagin cohomology classes associated to an elliptic curve E defined over ℚ from a computational point of view. We explain how to go from a model of a class as an element of (E(L)/pE(L))^Gal(L/ℚ), where p is prime and L is a dihedral extension of ℚ of degree 2p, to a geometric model as a genus one curve embedded in ℙ^(p−1). We adapt the existing methods to compute Heegner points to our situation, and explicitly compute them as elements of E(L). Finally, we compute explicit equations for several genus one curves that represent non-trivial elements of the p-torsion part of the Tate-Shafarevich group of E, for p≤11, and hence are counterexamples to the Hasse principle. Title: A modular construction of unramified p-extensions of $\mathbb{Q}(N^{1/p})$ Speaker: Jacky Lang (Philadelphia) Organizer: Emmanuel Lecouturier (BIMSA) Time: 9:00-10:00, Nov. 19, 2021 Venue: BIMSA 1118 Zoom ID: 849 963 1368 Password: YMSC In Mazur's seminal work on the Eisenstein ideal, he showed that when N and p > 3 are primes, there is a weight 2 cusp form of level N congruent to the unique weight 2 Eisenstein series of level N if and only if N = 1 mod p. Calegari--Emerton, Merel, Lecouturier, and Wake--Wang-Erickson have work that relates these cuspidal-Eisenstein congruences to the p-part of the class group of $\mathbb{Q}(N^{1/p})$. Calegari observed that when N = -1 mod p, one can use Galois cohomology and some ideas of Wake--Wang-Erickson to show that p divides the class group of $\mathbb{Q}(N^{1/p})$. He asked whether there is a way to directly construct the relevant degree p everywhere unramified extension of $\mathbb{Q}(N^{1/p})$ in this case. After discussing some of this background, I will report on work with Preston Wake in which we give a positive answer to this question using cuspidal-Eisenstein congruences at prime-square level. Title: The unbounded denominators conjecture Speaker: Yunqing Tang (Princeton University) Time: 9:30-10:30, Oct. 29, 2021 The unbounded denominators conjecture, first raised by Atkin and Swinnerton-Dyer, asserts that a modular form for a finite index subgroup of SL_2(Z) whose Fourier coefficients have bounded denominators must be a modular form for some congruence subgroup. In this talk, we will give a sketch of the proof of this conjecture based on a new arithmetic algebraization theorem. (Joint work with Frank Calegari and Vesselin Dimitrov.) Title: Eisenstein congruences and Euler systems Speaker: Oscar Rivero Salgado (University of Warwick) Time: 16:00-17:00, Oct. 22, 2021 Zoom ID: 388 528 9728 Password: BIMSA Let f be a cuspidal eigenform of weight two, and let p be a prime at which f is congruent to an Eisenstein series. Beilinson constructed a class arising from the cup-product of two Siegel units and proved a relationship with the first derivative of the L-series of f at the near central point s=0.
I will motivate the study of congruences between modular forms at the level of cohomology classes, and will report on joint work with Victor Rotger where we prove two congruence formulas relating the Beilinson class to the arithmetic of circular units. The proofs make use of Galois properties satisfied by various integral lattices and exploit Perrin-Riou's, Coleman's and Kato's work on the Euler systems of circular units and Beilinson-Kato elements and, most crucially, the work of Fukaya-Kato around Sharifi's conjectures. Title: Modular regulator with Rogers-Zudilin method Speaker: Weijia Wang (ENS Lyon) Time: 2020-7-14, 16:00 – 17:00 Abstract: Let Y(N) be the modular curve of level N and E(N) be the universal elliptic curve over Y(N). Beilinson (1986) defined the Eisenstein symbol in the motivic cohomology of E^k(N), and the work of Deninger–Scholl (1989) shows that the Petersson inner product of its regulator gives us special L-values. In this talk I will present how to relate the modular regulator to L-values of quasi-modular forms by using Lanphier's formula and the Rogers–Zudilin method. https://zoom.us/j/91653446007?pwd=QUFEUTZramJNeGpBdjVSWUV6cmpBZz09 Password: 8Ma4ed Title: Projective bundle theorem in MW-motives Speaker: Nanjun Yang (YMSC, Tsinghua) Time: 2020-7-2, 10:00 – 11:00 Abstract: We present a version of the projective bundle theorem in MW-motives (resp. Chow-Witt rings), which says that $\widetilde{CH}^*(\mathbb{P}(E))$ is determined by $\widetilde{CH}^*(X)$ and $\widetilde{CH}^*(X\times\mathbb{P}^2)$ for smooth quasi-projective schemes $X$ and vector bundles $E$ over $X$ with odd rank. If the rank of $E$ is even, the theorem is still true under a new kind of orientability, which we call projective orientability. As an application, we compute the MW-motives of blow-ups over smooth centers. (arXiv 2006.11774) ZOOM https://zoom.us/j/91653446007?pwd=QUFEUTZramJNeGpBdjVSWUV6cmpBZz09 Title: Elliptic cocycle for GLN(Z) and Hecke operators Speaker: Hao Zhang (Sorbonne Université) Abstract: A classical result of Eichler, Shimura and Manin asserts that the map that assigns to a cusp form f its period polynomial r_f is a Hecke equivariant map. We propose a generalization of this result to a setting where r_f is replaced by a family of rational functions of N variables equipped with the action of GLN(Z). For this purpose, we develop a theory of Hecke operators for the elliptic cocycle recently introduced by Charollois. In particular, when f is an eigenform, the corresponding rational function is also an eigenvector with respect to the Hecke operators for GLN. Finally, we give some examples for Eisenstein series and the Ramanujan Delta function.
Article | Open | Published: 23 January 2018 Epsin and Sla2 form assemblies through phospholipid interfaces Maria M. Garcia-Alai1 na1, Johannes Heidemann2 na1, Michal Skruzny3, Anna Gieras1,4, Haydyn D. T. Mertens1, Dmitri I. Svergun1, Marko Kaksonen5, Charlotte Uetrecht (ORCID: orcid.org/0000-0002-1991-7922)2,6 & Rob Meijers (ORCID: orcid.org/0000-0003-2872-6279)1 Nature Communications volume 9, Article number: 328 (2018) Supramolecular assembly In clathrin-mediated endocytosis, adapter proteins assemble together with clathrin through interactions with specific lipids on the plasma membrane. However, the precise mechanism of adapter protein assembly at the cell membrane is still unknown. Here, we show that the membrane-proximal domains ENTH of epsin and ANTH of Sla2 form complexes through phosphatidylinositol 4,5-bisphosphate (PIP2) lipid interfaces. Native mass spectrometry reveals how ENTH and ANTH domains form assemblies by sharing PIP2 molecules. Furthermore, crystal structures of the epsin Ent2 ENTH domain from S. cerevisiae in complex with PIP2 and the Sla2 ANTH domain from C. thermophilum illustrate how allosteric phospholipid binding occurs. A comparison with human ENTH and ANTH domains reveals that only the human ENTH domain can form a stable hexameric core in the presence of PIP2, which could explain functional differences between fungal and human epsins. We propose a general phospholipid-driven multifaceted assembly mechanism tolerating different adapter protein compositions to induce endocytosis. Clathrin-mediated endocytosis is essential for protein retrieval during neurotransmission and receptor recycling. It is also involved in viral entry and the uptake of nutrients and hormones. During endocytosis, a cargo-containing vesicle is formed through the invagination of a plasma membrane patch, in a process that involves proteins and specific lipids. Accumulation of the adapter proteins at the plasma membrane initiates the endocytic event and contributes to membrane bending1. The adapters recruit clathrin and associate with the actin cytoskeleton to accomplish vesicle budding2. Finally, the vesicle coated with clathrin and adapter proteins is detached from the plasma membrane by dynamin3. The size and shape of the vesicle are determined by the interplay of adapter proteins and clathrin4. Clathrin does not bind to the plasma membrane directly, but to a range of adapter proteins that interact with lipids on the plasma membrane. Many adapter proteins, such as epsin5, AP1806, and AP-27, contain positively charged patches that bind to the head groups of specific phospholipids. This binding is strong enough so that the adapter proteins remain attached to the membrane, and can undergo conformational changes to bind cargo7. Adapter proteins containing an epsin N-terminal homology (ENTH) domain5,8 and some adapter proteins containing an AP180 N-terminal homology (ANTH) domain that belong to the subfamily of clathrin assembly lymphoid myeloid leukemia (CALM) proteins9 have an additional membrane-binding mode, inserting an N-terminal helix into the plasma membrane. It is proposed that there is a concerted mechanism where binding to the head group of phosphatidylinositol 4,5-bisphosphate (PIP2) triggers the N-terminal portion of these adapter proteins to fold into an α helix. The insertion of the α helix into the membrane contributes to its curvature1.
It is unclear whether the clathrin-associated adapter proteins engage clathrin individually at the cell membrane, or whether they assemble in larger structures to drive membrane curvature and clathrin recruitment10. It has also been proposed that membrane curvature is caused simply by a protein crowding mechanism, where the formation of the clathrin coat crowds adapter proteins together, leading to membrane curvature11. Critically, the combination of membrane insertions and spherical scaffolds that curve the membrane have been observed10, but their interactions are not understood. There are indications that adapter proteins themselves form larger oligomers, preceding the formation of the clathrin coat. Complex formation has been observed between the ENTH domain of epsin and the ANTH domain of Sla2, the yeast homolog of human huntingtin interacting protein 1 related (Hip1R)12. The formation of this complex depends on the presence of PIP2 and has been observed in Saccharomyces cerevisiae. A pulldown experiment with full-length epsin-1 and Hip1R from Mus musculus suggested the same complex forms in mammals13. Endocytosis is stalled in S. cerevisiae when the complex between the ENTH domain of epsin Ent1 and the ANTH domain of Sla2 is disrupted by mutagenesis12. Electron cryo-tomography studies on helical assemblies of epsin Ent1 ENTH and Sla2 ANTH domains obtained from giant unilamellar vesicles (GUV)s revealed that the co-assembly of ENTH/ANTH occurs14. The ENTH/ANTH assembly may help organize the clathrin coat and could contribute to membrane curvature during the initial stages of endocytosis14. It is of particular interest that epsin and Sla2 form a membrane-bound scaffold, because these adapter proteins have also been associated with actin recruitment to the coated pit. Actin polymerization is known to provide mechanical force for vesicle formation from membranes under tension15. The adapter assembly would harness singular weak interactions together to form an anchor that can withstand the strong forces generated by the actin cytoskeleton. In this paper, we reveal the assembly process of epsin ENTH and Sla2/Hip1R ANTH domains through PIP2 interfaces by native MS. An X-ray crystal structure of the ENTH domain of epsin Ent2 from S. cerevisiae in complex with PIP2 reveals an allosteric mechanism underlying the assembly. The crystal structure of the Sla2 ANTH domain from C. thermophilum reveals how this subfamily of ANTH domains evolved to engage ENTH domains in assembly. We determine by native MS that some elements of the assembly process are evolutionarily conserved between the fungal and human adapter proteins. However, human epsin ENTH can autonomously form homo-oligomers, whereas fungal ENTH domains need the presence of Sla2 ANTH domains to create stable assemblies. Together, these data suggest that the ENTH/ANTH complex concentrates PIP2 locally on the plasma membrane to facilitate the formation of adapter complexes with cargo, clathrin, or other adapter proteins through PIP2-dependent interfaces. Cooperative binding of PIP2 to the ENTH domain of epsin To investigate how a lipid-dependent protein complex is formed between epsin ENTH domain and the Sla2/Hip1R ANTH domain, we first determined the binding between the PIP2 lipid (diC8-PIP2) and ENTH domains from S. cerevisiae alone using native MS16,17. The Ent1 and Ent2 ENTH domains (ENTH1 and ENTH2) from S. cerevisiae as well as an ENTH domain of homologous epsin from C. thermophilum bind up to two PIP2 molecules per domain (Fig. 1a, b). 
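The corrected intensities behind Fig. 1b rely on a co-sprayed reference protein to estimate how much PIP2 attaches unspecifically during electrospray (see Methods). The short sketch below illustrates one simple way such a correction can be implemented; the peak intensities, the sequential-deconvolution scheme, and all variable names are illustrative assumptions and not the exact procedure or data of the study.

```python
import numpy as np

# Hypothetical, illustrative peak intensities (arbitrary units, summed over charge states).
# Reference protein (e.g. cytochrome c) carrying 0, 1, 2 unspecifically attached PIP2:
ref = np.array([100.0, 12.0, 1.0])
# Protein of interest (ENTH) carrying 0, 1, 2, 3 PIP2 (specific + unspecific):
obs = np.array([55.0, 30.0, 14.0, 1.0])

q = ref / ref.sum()   # unspecific adduct distribution estimated from the reference protein

# Treat the observed distribution as the convolution of a specific distribution p
# with the unspecific distribution q, and recover p by forward substitution.
p = np.zeros_like(obs)
for n in range(len(obs)):
    overlap = sum(p[k] * q[n - k] for k in range(n) if n - k < len(q))
    p[n] = max((obs[n] - overlap) / q[0], 0.0)

p /= p[0]             # normalize to the unbound species, as in Fig. 1b
print("corrected relative intensities for 0-3 bound PIP2:", p.round(2))
```

With these made-up inputs the apparent three-PIP2 peak vanishes after correction, mirroring the behaviour described in the Fig. 1b caption.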
Based on a direct MS approach18,19, macroscopic dissociation constants were determined revealing dependence of the two PIP2-binding sites by positive cooperativity20 (Fig. 1c and Supplementary 11). Results were comparable for the MS optimized ammonium acetate concentration of 300 mM and the more physiological ionic strength of 160 mM ammonium acetate (Supplementary Table 1). Native MS of ENTH/PIP2 complexes suggests an allosteric binding mechanism. a For analyzing lipid binding to ENTH domains, proteins were measured in presence of PIP2 and cytochrome c as reference. Raw spectra show free cytochrome c and unspecific attachment of 1 PIP2 (gray), while ENTH domains (here C. thermophilum) bind 0–3 PIP2. b Signal intensities from MS were summed over all charge states (back) and corrected for unspecific PIP2 clustering based on the ratio of bound/unbound reference protein (front). Data of at least three independent measurements were normalized to the corrected signal of unbound ENTH and the averages of the relative signal intensities and their standard deviations were plotted. The signal for ENTH with three PIP2 observed in raw spectra disappears after correction. c Schematic illustration of microscopic and macroscopic dissociation constants of two PIP2 molecules (orange) binding independently to ENTH (blue). For the first binding event of PIP2, two pathways with the microscopic dissociation constants kd,1 and kd,2 are available, leading to one apparent species of ENTH+PIP2. Combined, they account for the macroscopic dissociation constant KD,1. The second macroscopic dissociation constant KD,2 describes PIP2 binding to the thus far unoccupied binding site that yields the product ENTH+2PIP2. Again, this binding event can be partitioned into two pathways with the microscopic dissociation constants kd,1 and kd,2. If kd,1 and kd,2 are unaltered in the first and second binding event, binding sites are independent. Binding sites are represented by rounded rectangles A crystal structure was determined of a complex between the ENTH domain of Ent2 (ENTH2) from S. cerevisiae and the PIP2 phospholipid to a resolution of 3.4 Å (Table 1). The structure shows two ENTH2 domains sandwiched around one PIP2 molecule (Fig. 2a and Supplementary Fig. 1). The inositol head group of PIP2 is bound to the previously identified phosphatidylinositol-binding pocket of one of the ENTH2 molecules5. The N-terminal region of this ENTH2 molecule is folded into an α helix that corresponds to the helix (α0) that has been observed in the ENTH domain of rat epsin-1 in complex with the phosphoinositol (IP3) group5, and this copy is labeled ENTH2α0. In the second ENTH2 molecule, the N-terminal α0 is unfolded (ENTH2Noα0), and the electron density suggests the N-terminal residues are extended. A superimposition of the ENTH1/IP3 domain complex structure of rat epsin-15 onto the ENTH2/PIP2 domain crystal structure from S. cerevisiae results in an RMSD of 1.1 Å for 147 residues showing distinct orientations of the α0 helix (Fig. 2b). In the ENTH1/IP3 structure, the α0 helix folds back onto the phosphoinositol group, so that Arg 8 and Lys 11 interact with an inositol head group. In the ENTH2/PIP2 structure, the α0 helix points away from the ENTH2 domain and there are no interactions with the phosphoinositol group. Instead, the evolutionarily conserved residue Tyr 16 of the ENTH2α0 molecule forms direct interactions with the inositol head group of PIP2. 
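As a concrete illustration of the binding model sketched in Fig. 1c, the snippet below derives macroscopic dissociation constants from hypothetical corrected peak intensities (in the spirit of the direct-MS approach of refs. 18,19) and then tests whether any pair of independent microscopic site constants could reproduce them. The intensities and the free-ligand concentration are assumptions chosen for illustration only, not the measured values.

```python
import math

# Hypothetical corrected relative intensities for ENTH with 0, 1, 2 bound PIP2
# and an assumed free PIP2 concentration (illustrative values only).
I0, I1, I2 = 1.00, 0.45, 0.20      # relative signal of P, P·PIP2, P·(PIP2)2
pip2_free = 40e-6                  # M, assumed free ligand concentration

# Macroscopic (apparent) dissociation constants:
KD1 = pip2_free * I0 / I1          # P + L  <-> PL
KD2 = pip2_free * I1 / I2          # PL + L <-> PL2

# For two independent sites with microscopic constants kd1, kd2 (Fig. 1c):
#   KD1 = kd1*kd2/(kd1+kd2)  and  KD2 = kd1+kd2,
# so kd1, kd2 must be the roots of x^2 - KD2*x + KD1*KD2 = 0.
disc = KD2**2 - 4 * KD1 * KD2
if disc >= 0:
    kd1 = (KD2 + math.sqrt(disc)) / 2
    kd2 = (KD2 - math.sqrt(disc)) / 2
    print(f"independent sites possible: kd1 = {kd1*1e6:.1f} uM, kd2 = {kd2*1e6:.1f} uM")
else:
    # No real microscopic constants reproduce the macroscopic ones:
    # the sites cannot be independent, consistent with positive cooperativity.
    print("no independent-site solution (KD2 < 4*KD1): cooperative binding")
```

With the hypothetical numbers chosen here the two macroscopic constants come out in the same order of magnitude, which, as argued in the text and Methods, is the fingerprint of positive cooperativity.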
Table 1 Data collection and refinement statistics Full size table Crystal structure of the ENTH2/PIP2 complex reveals an allosteric-binding mechanism. a Ribbon diagram of the ENTH2α0 (cyan) and ENTH2Noα0 dimer (blue) building block. N-terminal regions are colored (ENTH2α0, yellow; ENTH2Noα0, orange). PIP2 sitting in the interface between ENTH2 molecules is shown as spheres. b Superposition of the ENTH/IP3 epsin-1 complex (dark blue) and the ENTH2/PIP2 complex (yellow). The N-terminal α0 is colored (ENTH2/PIP2, yellow; ENTH/IP3, orange), the inositol head groups are shown as sticks. c Superimposition of the ENTH2/PIP2 dimer (blue/cyan) and the ENTH1 dimer (green/palegreen) shown as a ribbon diagram. The α0 helix of ENTH1 is oriented similarly to α0 of ENTH2α0. d Surface presentation of the ENTH2/PIP2 complex showing that unfolded N-termini of ENTH2 are in plane with the lipid tail of the PIP2 molecules (two ENTH2α0/ENTH2Noα0 building blocks, cyan/blue and magenta/violet; PIP2, spheres; N-termini for empty ENTH2, yellow and orange; Tyr 16, Arg 24, Arg 62 and His 72 of ENTH2Noα0 with empty PIP2-binding pocket, red) As a result of α0 reorientation pointing away from the ENTH2α0 molecule, the second ENTH2Noα0 molecule can form interactions with the phosphatidylinositol head group. Interestingly, the ENTH2Noα0 domain is oriented toward the PIP2 molecule to involve residue Thr 104 on α6 in the ENTH2 dimer interface. This residue is crucial for the formation of the ENTH/ANTH complex12,14,21. Thr 104 of ENTH2Noα0 is in the periphery of the PIP2-binding site, and sits opposite Tyr 68 and Lys 69 of the ENTH2α0 molecule. Thr 104 is conserved in epsin ENTH domains and is essential for ENTH function, though independent from the canonical PIP2-binding site22,23. Native MS on a Thr104Glu mutant of ENTH1 from S. cerevisiae shows a reduction of PIP2 binding, where the binding of two PIP2 molecules is barely observed (Fig. 1b), indicating impairment of the binding site. To analyze the contribution of the PIP2 molecule to complex formation, we calculated the buried surface area contributions for each molecule with PISA24. The total solvent accessible area of the phosphatidylinositol head group of PIP2 is 488 Å2. Most of the head group is buried by the ENTH2α0 molecule (306 Å2, or 63% of the total available surface area of the PIP2 molecule), whereas the ENTH2Noα0 molecule covers 95 Å2, or 20% of the available surface area. The buried solvent area between ENTH2α0 and ENTH2Noα0 is 1820 Å2, with almost equal contributions from ENTH2α0 (955 Å2) and ENTH2Noα0 (865 Å2). However, most of the buried area (1020 Å2) contributed to the dimer interface comes from the α0 helix of the ENTHα0 domain, which is repositioned by the PIP2 molecule to facilitate dimer formation. The effect of PIP2 binding is therefore twofold; it attaches the ENTH domain to the membrane and displaces the α0 helix, so it can form a dimer with a second ENTH domain. As a further indication that the ENTH2 dimer, which sandwiched PIP2, is functionally relevant, a similar dimer interface is observed in a crystal structure of the epsin Ent1 ENTH domain (ENTH1) from S. cerevisiae at 2.9 Å resolution (Fig. 2c). Here, a 2-methyl-2,4-pentanediol (MPD) molecule from the crystallization solution has caused a displacement of the α0 helix to allow dimer formation between ENTH1 domains. 
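For readers who want to retrace the interface bookkeeping quoted above, the following minimal snippet reproduces the stated percentages from the PISA buried-area values; the numbers are those given in the text and the script is only a convenience.

```python
# Buried-surface bookkeeping for the ENTH2/PIP2 dimer, using the values quoted above (Å²).
pip2_head_total = 488.0          # solvent-accessible area of the PIP2 head group
buried_by_alpha0_copy = 306.0    # covered by the ENTH2 molecule with folded α0
buried_by_empty_copy = 95.0      # covered by the ENTH2 molecule with unfolded N-terminus

print(f"ENTH2(α0):   {buried_by_alpha0_copy / pip2_head_total:.1%} of the head group buried")
print(f"ENTH2(no α0): {buried_by_empty_copy / pip2_head_total:.1%} of the head group buried")

# Dimer interface: the two per-molecule contributions and the share from the α0 helix.
dimer_interface = 955.0 + 865.0  # = 1820 Å², matching the quoted total
print(f"α0 helix supplies {1020.0 / dimer_interface:.0%} of the {dimer_interface:.0f} Å² interface")
```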
In the ENTH2/PIP2 structure, two α0 helices from symmetry-related ENTH2α0 molecules stack against each other to create a homotetramer containing two PIP2 molecules in the same plane, ready to insert into the membrane (Fig. 2d). In this planar orientation, four ENTH molecules line up so they could associate with the plasma membrane, whereas the C-termini are pointing away from the membrane. The α0 helices of the ENTH2α0 domains are not positioned to insert into the membrane. However, the unfolded N-terminal regions of ENTH2Noα0 as well as the neighboring positively charged PIP2-binding patch consisting of residues Lys 14, Arg 24, Arg 72, and His 72 are lying in the same plane as the lipid tail of PIP2 being aligned with the cell membrane. The N-terminus is therefore ideally positioned to bind another PIP2 molecule on the plasma membrane. We thus conclude that the PIP2/ENTH2 crystal structure shows an intermediate state, where the binding of one PIP2 dimerizes two ENTH2 domains and prepares the "empty" ENTH domain to bind another PIP2. This allosteric switch between PIP2 binding, dimerization, and the formation of α0 could potentially create larger epsin clusters on the cell membrane. The Sla2 ANTH domain evolved to bind the ENTH/PIP2 complex Native MS was also used to investigate PIP2 binding to several ANTH domains, showing two PIP2 binding sites for all wild-type proteins (Fig. 3a and Supplementary Table 1). There is a striking difference in PIP2 binding between human CALM and the fungal Sla2 ANTH domains. The macroscopic dissociation constants for CALM suggest higher affinity for the first PIP2-binding site, and a lack of cooperative binding (Supplementary Table 1). Mutagenesis of the canonical PIP2-binding site on the Sla2 ANTH molecule from S. cerevisiae revealed the presence of a specific secondary binding site that has so far not been identified. Implementing the dissociation constant of the mutant into a mathematical model (Fig. 1c) revealed dependence and positive cooperativity of the two binding sites. In general, PIP2 dissociation constants are higher for Sla2 ANTH domains than for ENTH domains from S. cerevisiae showing KDs of 100–250 µM (Supplementary Table 1). Evolutionarily conserved features in Hip1R subfamily of ANTH domains. a Native MS was used to investigate PIP2 binding to several members of the ANTH family: ANTHSla2, ANTHSla2,mut (four residues to Ala in the canonical PIP2-binding site, see Constructs) and CALM from indicated species. Signal intensities from mass spectra of at least three independent measurements were summed over all charge states, normalized to the corrected signal of the unbound protein and the average of the relative signal intensities and their standard deviations were plotted (back). After correction for unspecific clustering, relative signals of unbound and PIP2-bound ANTH domains are obtained (front). Signal for proteins with three attached PIP2 molecules observed in raw spectra disappear after correction. b Composite model of the ENTH and ANTHSla2 complex based on the low-resolution EM structure14 and the high-resolution X-ray crystal structures presented here (ENTH2/PIP2, yellow, and superimposed ANTH domains of Sla2 (violet), AP180 (cyan), and CALM (pale cyan). Several elements contributing to the interface are highlighted, including the inserted α helix in the ANTHSla2 domain, the proximate Arg 37 of ANTHSla2 and Thr 104 from ENTH, and the location of PIP2 from the secondary site of the ENTH2/PIP2 crystal structure. 
c Growth defects of Sla2(1–360) strains mutated in Sla2/Hip1R ANTH-specific features. Tenfold serial dilutions of sla2Δ strains expressing Sla2(1–360) wt, Sla2(1–360) NHL with α8–α9 loop replaced by residues NHL as occurring in Yap1802, Sla2(1–360) dYL deletion of conserved Tyr 252 and Leu 253 and Sla2(1–360) R29A were incubated on SC-Ura plates at 30, 34, and 37 °C for 2 days To understand why the ANTH domain of Sla2/Hip1R is specifically suited to bind the ENTH domain of epsins through the lipid interface, we determined the crystal structure of the ANTH domain of homologous Sla2 from C. thermophilum to 1.8 Å resolution (Table 1). The overall ANTH fold consists of helices α1–α11 and is conserved between CALM and Sla2/Hip1R subfamilies (Fig. 3b). A structure-based sequence alignment of CALM subfamily members CALM and AP1806 reveals two hitherto hidden, systematic and evolutionarily conserved features (Fig. 3b and Supplementary Fig. 2). Through insertion of a conserved Tyr 252 and a hydrophobic residue, an extra α helix (α12) uniquely occurs in Sla2 ANTH domains, providing rigidity. A loop connecting helices α8 and α9 has a five-residue extension in Sla2, although with non-conserved sequence. Based on the low-resolution electron microscopy tomogram of the Ent1 ENTH/ANTH Sla2 complex from S. cerevisiae, these insertions point at the epsin interface14. Moreover, deleting the Sla2/Hip1R-specific α12 helix impaired the function of the Sla2 ANTH domain similarly to mutating Arg 29, which is essential for the ENTH/ANTH complex formation (Fig. 3c)14. In addition, the CALM ANTH domain has an N-terminal helix that is affecting membrane curvature9, and that is similar to α0 observed in epsins. Based on the C. thermophilum Sla2 ANTH crystal structure, ANTH domains of the Sla2/Hip1R subfamily do not contain an N-terminal α helix. This is further confirmed by circular dichroism experiments performed on several ANTH and ENTH domains (Supplementary Fig. 3). Sla2/Hip1R ANTH domains display a loss of secondary structure or aggregation at high PIP2 concentrations (Supplementary Fig. 3c, f). To address whether this is caused by PIP2 itself, or the presence of an amphipathic environment, we repeated the circular dichroism measurements in the presence of submicellar and micellar concentrations of the detergent n-Dodecyl-beta-Maltoside (DDM) with and without PIP2 (Supplementary Fig. 3g, h). Human Hip1R ANTH domain shows a loss of signal with DDM (Supplementary Fig. 3h). The protein seems to undergo unfolding in the presence of an amphipathic environment, which would not be expected for a protein that contains a helix inducible by lipid binding. In contrast, CALM displays a clear increase in signal in the presence of DDM and PIP2 compared to the protein in buffer alone or with PIP2. In addition, when only the detergent above micellar concentrations is added, we further observe an increase in the signal. This experiment shows that it is actually the presence of the micellar environment and not only the phospholipid per se that is inducing/stabilizing helical structures in the protein and is in agreement with what has been previously reported for CALM that contains an α0 amphipathic helix9. We therefore conclude that N-terminal amphipathic helices are not characteristic of either ANTH subfamily, but are only present in a subset of CALM adapter proteins. 
Ordered assembly formation of fungal ENTH and Sla2 ANTH To determine how universal PIP2-dependent interactions between ENTH and ANTH domains are, we analyzed PIP2-driven complex formation for a selection of ENTH and ANTH domains by native MS. We tested complex formation between the S. cerevisiae ENTH2 domain and the ANTH domain of Sla2 to verify the assembly relevant to the ENTH2/PIP2 crystal structure (Fig. 4). Indeed, an assembly with a stoichiometry of 8:8:24 ± 3 for ENTH2/ANTHSla2/PIP2 is formed, similar to the ENTH1 complex and ENTH/ANTHSla2 complex of C. thermophilum (Fig. 4 and Supplementary Table 2). When equal amounts of S. cerevisiae ENTH1 and ENTH2 are mixed together with ANTHSla2 and measured by native MS, a complex containing equal amounts of ENTH1 and ENTH2 is formed, showing no preference for either isoform and highlighting the possibility to recruit different epsins into a complex with Sla2 (Supplementary Fig. 4a). Furthermore, complex formation was impaired for the ENTH1 Thr 104 mutant (Supplementary Fig. 6), confirming previous ITC measurements14. ENTH/ANTH/PIP2 complex formation in fungi 6:6:~18 and 8:8:~24 ENTH/ANTH/PIP2 complexes are the observed stoichiometries in native MS measurements. A cartoon of the most prominent complex is shown with ANTH in green and ENTH in blue. Complexes from C. thermophilum (green) and S. cerevisiae, containing ENTH1 (orange) or ENTH2 (red), show the same stoichiometries, signal ratios and dissociation pathways in collision-induced dissociation (CID) MS/MS. Here, the dissociation of the +40 charged 8:8:25 ± 3 ENTH1/ANTHSla2/PIP2 complex into partially unfolded ANTHSla2 (top left, cartoon shows green ANTH domain) and a residual 8:7:23 ± 1 complex (top right, showing also a cartoon of the remaining complex) is depicted. The annotation shows the stoichiometry (ENTH:ANTH:average PIP2 number), charge state of the main peaks, and average experimental masses. Ranges of PIP2 numbers, statistical errors, and an average FWHM value rating the MS resolution of all complexes can be found in Supplementary Table 2. ANTH dissociation in CID MS/MS measurements of 6:6:18 ± 1 ENTH2/ANTHSla2/PIP2 complexes from S. cerevisiae is presented in Supplementary Fig. 4b Although the hetero 16-mer of the fungal ENTH/ANTHSla2 complex is most prominent, a smaller oligomer of ~330 kDa is also observed (Fig. 4) consisting of six ENTH and six ANTH domains (ENTH1/2/ANTHSla2/PIP2 is 6:6:19 ± 2). Further information on complex topology was obtained in collision-induced dissociation (CID) MS/MS, in which proteins unfold partially until small proteins from the periphery are ejected. Here, hetero 12-mers and 16-mers show identical dissociation pathways. The larger ANTH domain is ejected from the complexes while all PIP2 and ENTH molecules remain bound to the complexes, indicating localization of ANTH in the periphery around an ENTH/PIP2 core (Fig. 3a and Supplementary Fig. 4). It seems that the ENTH/ANTH/PIP2 mixtures form transient complexes that can alter in size. To understand the relationship between the two oligomeric states, oligomeric composition of a mixture of ENTH, ANTHSla2 (both S. cerevisiae), and PIP2 was monitored by native MS over time. The changing signal intensities revealed a maturation process enriching the larger hetero 16-mer (Fig. 5a). Amounts of hetero 12-mer and 16-mer complexes are approximately equal 1 min after mixing. Within 2 min, the 12-mer signal fades, and the 16-mer gets predominant. 
These results and the equilibrium ratio of 12-mers and 16-mers show that the hetero 16-mer is the more stable ENTH/ANTH/PIP2 complex, suggesting that the 12-mer is an assembly intermediate. Furthermore, complex assembly is reversible upon depletion and subsequent replenishing of PIP2 (Supplementary Fig. 4c). Dynamics of ENTH/ANTH/PIP2 complex formation in fungi and human assemblies. a Time course of ENTH2/ANTHSla2/PIP2 (S. cerevisiae) complex formation. Components were mixed, injected into the electrospray capillary and the spectra monitored over time. Relative signal intensities for 6:6 (green), 8:8 (dark blue), and dimers of 8:8 complexes (light blue) were determined and plotted against time (n = 3). The signal of the 6:6 complex drops within 2 min after mixing the complex components, while the signal of the 8:8 complex increases, suggesting a transition between these forms. The signal of the 8:8 dimer remains constant, ruling out aggregation effects. Average data of three independent measurements, error bars (standard deviation) are shown for data points with N = 3 b Native MS shows oligomerization of human epsin-1 ENTH and Hip1R ANTH in various stoichiometries, ranging from 6:0 to 8:8 with 6:6 being the main species. c Human epsin-1 ENTH domain forms hexamers (6:0) with at least six PIP2 molecules also in absence of Hip1R Human ENTH forms a hexameric core upon PIP2 binding It was previously reported that full-length Hip1R can only be pulled down by murine epsin ENTH in the presence of PIP2, suggesting that the PIP2-dependent formation of ENTH/ANTH complexes could be conserved in mammals13. To verify evolutionary conservation of ENTH/ANTH/PIP2 complexes, a mixture of human epsin-1 ENTH domain with human Hip1R ANTH domain and PIP2 was analyzed by native MS, showing less homogeneity with a larger variety of oligomers (Fig. 5b). However, the hetero 16-mers observed with S. cerevisiae domains were also present in samples with human domains. The majority of human ENTH/ANTH complexes consist of a combination of six epsin ENTH molecules and a range of ANTH Hip1R molecules. Contrary to the fungal samples, the ENTH hexamer is already observed in the presence of PIP2 alone (Fig. 5c). A minimum of six PIP2 molecules is required for human ENTH hexamer formation. The appearance of the hexameric ENTH core in the human homolog suggests higher stability compared to fungal homologs. To confirm existence of the hexameric human ENTH core with PIP2 in solution and to gain further insight into its structure, small-angle X-ray scattering (SAXS) was performed on human ENTH (Supplementary Table 3). In the presence of PIP2, a uniform hexameric species is observed (Fig. 6a and Supplementary Table 3). Using the ENTH2/PIP2 dimeric interface from the S. cerevisiae ENTH2/PIP2 crystal structure as a rigid body, and representing the C-terminal parts missing in the crystal structure as dummy residue chains, a model for the hexamer was constructed yielding an excellent fit to the experimental data (Fig. 6b and Supplementary Fig. 4d). Three ENTH domains are oriented with α0 helices in plane of the cell membrane, in accordance with previous observations5,8,25. For the three other ENTH domains, the PIP2 molecule is still oriented in plane with the cell membrane (Fig. 6c), but α0 is pointing away, available to interact with additional ENTH domains to form larger oligomers. 
Similarly, the interface involving Thr 104 is alternatively used for hexamer formation as well as further oligomerization through phospholipid interfaces. Human ENTH forms a thermally stable hexamer with a predicted membrane-binding interface. a SAXS modeling of human ENTH in the presence of PIP2. SAXS data recorded for human ENTH in solution are shown (gray circles) along with a fit to the rigid-body model refined against the SAXS data with P3 symmetry using CORAL (orange solid line). Experimental errors are from counting statistics on the Pilatus 2M detector and propagated through the data reduction process as standard errors in the scattering intensities. The χ2 for the fit is 1.05. The inset shows the dimensionless Kratky plot representation of the SAXS data and the same fit. b Backbone and surface representation of the SAXS refined rigid body model. C-terminal (Ct) residues not observed in the crystal structure of the tandem domains are modeled as dummy residues by CORAL. P3 symmetry was enforced and the tandem ENTH domains with bound PIP2 used as rigid bodies. c 90° rotation of the model, demonstrating that the bound PIP2 molecules are all located on one side of the protein, consistent with a membrane-binding interface. The Thr 104 residues involved in ENTH/ENTH homodimerization are colored pink, whereas the Thr 104 residues present on the surface of the hexamer are colored green. d Hydrodynamic radius as a function of temperature measured by DLS (ENTH, S. cerevisiae, orange; ENTH/ANTH complex, S. cerevisiae, green; hexameric ENTH core, H. sapiens, blue) The potential secondary PIP2-binding sites are also available for docking of ENTH or ANTH domains bringing in their own PIP2 molecule to make a larger assembly. Characterization of phospholipid-dependent complex formation We used thermal denaturation aggregation assays by dynamic light scattering (DLS), to investigate stability differences between fungal and human ENTH/ANTH/PIP2 complexes. ENTH1 from S. cerevisiae in the presence of PIP2 shows a relatively low mid-aggregation temperature (Tagg) of 33 °C (Fig. 6d), which increases to 37 °C when ANTHSla2 is added. In contrast, the hexameric human ENTH core shows a Tagg of 45 °C in the presence of PIP2, which is more stable than the assembled fungal ENTH/ANTH complex. The optimal body temperature for H. sapiens is 37 °C, and the thermostability assay indicates that human ENTH domain alone will form a stable complex at that temperature, in contrast to ENTH domains from S. cerevisiae. To test whether PIP2 binding is the driving force for complex formation, rather than direct protein–protein interactions, we looked for cross-species complexes from C. thermophilum and S. cerevisiae by native MS, ITC, and DLS (Supplementary Fig. 5). Indeed, such complexes can be formed and are surprisingly robust as measured by ITC (KD = 450 nM ± 59 nM) (Supplementary Fig. 5c, d), despite low sequence identity. We also investigated whether the ANTHSla2 domains from S. cerevisiae and C. thermophilum bind to the human epsin ENTH core. The human-fungi cross-species assemblies show degenerated formation of PIP2-dependent complexes, with the 6:6 complex as the largest assembly observed by native MS (Supplementary Fig. 5b). These cross-species measurements confirm that human ENTH forms a stable hexameric core in the presence of PIP2, which can be decorated by ANTH domains through PIP2 interfaces. 
However, the fungal ANTH cannot open up the hexameric core to create larger assemblies as in the human ENTH/ANTH system. Clathrin-mediated endocytosis involves a diverse set of adapter proteins that help to curve the cell membrane and to recruit clathrin and cargo. The initial stages of vesicle coat assembly involve many adapter proteins and may involve many alternative assemblies26. Epsin and the phospholipid PIP2 are essential for clathrin-mediated endocytosis22,27, and we show how assemblies are formed between the ENTH domain of epsin and the ANTH domain of Sla2 in fungi. The ENTH and ANTH domains both contain two PIP2-binding sites, as determined by native MS. It has to be noted that the mature ENTH/ANTH 8:8 complexes contain only 24 ± 3 PIP2 molecules, which is much less than 32 PIP2 expected based on the number of available PIP2-binding sites. This indicates that PIP2 is shared between domains, and acts as a glue to bring these adapter proteins together. Recently, a similar gluing mechanism has been discovered using native MS for specific lipids that strengthen the oligomerization of membrane proteins28. This gluing mechanism is illustrated further by the X-ray crystal structure of the ENTH2 domain from S. cerevisiae in complex with PIP2, where one PIP2 molecule is sandwiched between two ENTH domains. The two ENTH2 domains glued together are structurally distinct. The ENTH2 domain that captured the PIP2 molecule in the canonical-binding pocket has an N-terminal region that is folded into an α helix. In contrast, the ENTH2 domain that associates to the other side of the PIP2 molecule through an interface that includes residue Thr 104 has an unfolded N-terminus. This ENTH2 domain could capture another PIP2 molecule on the cell membrane, which in turn would fold the N-terminal region into an α helix. This structure therefore not only confirms the presence of an allosteric switch upon PIP2 binding, but also shows how this binding event can trigger further oligomerization of ENTH domains at the cell surface. It is intriguing that this oligomerization process could include both ENTH1 and ENTH2 domains, and that the ANTHSla2 domain from S. cerevisiae does not show preference for either ENTH domain. This indicates that both epsins Ent1 and Ent2 can contribute to the assembly of the epsin/Sla2 complex, and qualifies the redundancy that is observed between epsin family members. So far, it seems the formation of the PIP2-driven complex between ENTH and ANTH domains is limited to Sla2/Hip1R ANTH domains12,13,21. Attempts to form PIP2-driven assemblies between ENTH domains and AP180/CALM subfamily members by native MS were unsuccessful. A crystal structure of the Sla2 ANTH domain from C. thermophilum reveals that Hip1R-related ANTH domains have two evolutionarily conserved insertions that contribute to the epsin/Hip1R interface to enable PIP2-dependent ENTH/ANTHSla2 complex formation. Especially the insertion of an α helix around Tyr 252 provides surface complementarity in the ENTH/ANTHSla2-binding interface, compared to the structures of the ANTH domains of AP180/CALM proteins. Superimposition of the ENTH domain with the secondary PIP2-binding site shows how the PIP2 molecule fits into the ENTH/ANTHSla2 interface close to the PIP2-binding pocket on the ANTHSla2 molecule (Fig. 3b). The residues Thr 104 from ENTH and Arg 37 from C. thermophilum ANTHSla2 are facing each other and have been shown by mutagenesis to be essential for the formation of the ENTH/ANTHSla2 complex14. 
In addition, residue Arg 37 can toggle its side chain between the canonical PIP2-binding site of ANTH domains, and the ENTH/ANTHSla2 interface. The biophysical experiments presented here have revealed that there is a difference in epsin and Sla2/Hip1R ENTH/ANTH complex assembly between fungal and human proteins. There is a general sequence of events revealed by native MS (Fig. 7). After PIP2 binding, ENTH domains form a core that is decorated on the outside by ANTH domains. Many different assemblies may occur, but in fungi the ENTH core alone is not stable enough to be maintained. The ENTH and ANTH domains cluster to yield a homogeneous 8:8 complex (Fig. 7a). In humans, the ENTH hexamer core is stable, but once it is decorated with ANTH, many different clusters may form (Fig. 7b). Our in vitro data are in nice agreement with cell biological studies on epsins and Sla2/Hip1R proteins in yeast12, amoebae21,23, and mammals13. In S. cerevisiae, Sla2 ANTH is needed for the stable binding of epsin ENTH to the endocytic site and both proteins are essential for actin-dependent endocytosis12,14. In contrast, mammalian epsin ENTH binds to the endocytic site in the absence of Hip1R13 and is necessary for Hip1R recruitment to the endocytic site. Consequently, an absence of epsin has more impact on endocytosis than the lack of Hip1R, suggesting epsin ENTH is crucial in PIP2-driven adapter protein complex formation in mammalian endocytosis. The independent formation of higher oligomers of the human ENTH domain alone was also observed in giant unilamellar vesicles by fluorescence microscopy25. Schematic models of PIP2 binding initiating clustering of ENTH and ANTH domains. a In S. cerevisiae and C. thermophilum ENTH (blue) and ANTH (green) domains bind PIP2 (orange) in the membrane and cluster to hetero 12-mers with the stoichiometry 6:6 that consequently can be transformed to more stable hetero 16-mers with the stoichiometry 8:8. b Human ENTH domains (yellow) bind PIP2 (orange) and cluster to homo 6-mers. ANTH domains (red) bind in different stoichiometries. 6:6 hetero 12-mers are the most abundant species, but a transition to larger complexes, up to hetero 16-mers can be observed. Symbols assigning the different complex stoichiometries are chosen as in the mass spectrum of Fig. 5b. Dashed arrows indicate that not all complex stoichiometries are represented in this model Autonomous assembly of human ENTH domains driven by PIP2 creates a scaffold that could contribute to membrane bending and scission29. Since the interactions between the ENTH domains are dominated by PIP2 interfaces, different epsins can contribute to the scaffolding, explaining why only a complete knockout of all epsins shows a phenotype that disrupts CME13. We propose that mammalian ENTH domains not only curve the membrane by the insertion of an amphipathic helix, but also can form a membrane scaffold when PIP2 is concentrated at an endocytic site. This proposal would also give further credence to molecular crowding mechanisms that are aided by the accumulation of phospholipid-dependent assemblies of adapter proteins30. Further studies on formation of complexes between PIP2-binding adapter proteins will hopefully provide insights into seeding and maturation processes that help to dedicate vesicles to certain endocytic or exocytic pathways. Phosphatidylinositol-4,5-biphosphate diC8 (PIP2) was purchased from Echelon. n-dodecyl β-d-maltoside (DDM) was purchased from Anatrace. 
Constructs and plasmids The coding sequence for the ANTH and ENTH homologs from Chaetomium thermophilum were amplified from an RNA library kindly provided by Peer Bork's laboratory (see constructs in Supplementary Table 4); EMBL Heidelberg31 using the QIAGEN OneStep RT-PCR Kit. The reverse transcription was done at 50 °C for 30 min and the second round PCR was performed in the presence of solution Q and a touchdown procedure was introduced starting at 72 °C (see primers in Supplementary Table 5). The DNA coding sequence was cloned in pETM-11/LIC vector (EMBL). Human CALM coding sequence has been purchased from Addgene (Plasmid #27691) and the ANTH domain was cloned in pETM-11/LIC vector for expression with an N-terminal 6xHis-tag. Sequence coding for ANTH domain of human Hip1R (aa 1–300) was PCR amplified from human cDNA and cloned into the NcoI-XhoI sites of pETM30 (EMBL). Sequence coding for ENTH domain of human EPN1 (aa 1–158) was PCR amplified from human cDNA (with the additional codon for glycine after START codon) and cloned into the BamHI-XhoI sites of pETM30. Sequence coding for ANTH domain of Yap1802 (aa 1–272) was PCR amplified from yeast gDNA and cloned into the NcoI-XhoI sites of pETM30. Protein production and purification Recombinant human Hip1R ANTH, human ENTH and the yeast Yap1802 ANTH domains were expressed in E. coli BL21 DE3 (Novagen) as GST fusion proteins containing an N-terminal His-tag followed by a TEV (Hisx6-TEV) cleavage site. Recombinant human ANTH domain from CALM and C. thermophilum ANTH and ENTH domains were expressed in E. coli BL21 DE3 pLYS (Novagen) as N-terminal Hisx6-TEV-tagged proteins. 4 L flasks containing 800 ml cultures in LB media were grown at 37 °C until an optical density (OD = 600) of 1.0. After induction with 0.5 mM IPTG, the cultures were moved to an incubator at 20 °C and harvested after 4 h. Protein expressing cells were harvested by centrifugation (3500 x g for 30 min at 4 °C). The cell pellet was lysed by sonication in the presence of 1 mg/ml DNase (400 K units) in 10 mM Tris-HCl pH 7.5. Lysed cell extract was centrifuged (45,000 x g, 45 min at 4 °C) and supernatant containing His-tagged proteins were purified by nickel-nitrilotriacetic acid (Ni-NTA) purification (Qiagen). Protein was eluted in a final elution buffer of 20 mM Tris pH 8.0, 300 mM NaCl, 250 mM imidazole. Excess of TEV protease was added to the imidazole eluted fractions for cleavage of the Hisx6-GST and Hisx6 tags (1 mg/ml TEV per liter of culture). Digestion was performed by dialysis at 4 °C overnight against 5 L of 20 mM Tris pH 8.0, 250 mM NaCl and 1 mM DTT. To remove the tags, the dialyzed fractions were subjected to a second Ni-NTA and the flow-through was concentrated to 5 mg/ml to be injected in a size exclusion chromatography (SEC). SEC was performed using an Äkta liquid chromatography system (Amersham Biosciences) and S75 10/300 GL (Tricorn) column (GE Healthcare) in 20 mM Tris-HCl pH 8.0 and 250 mM NaCl. ITC was performed at 25 °C using MicroCal VP-ITC calorimeter (GE Healthcare). An aliquot of 100 μM ENTH was titrated into 10 μM ANTH contained in the cell. ITC buffers: 20 mM Tris-HCl pH 8.0 (for S. cerevisiae ANTH) or 20 mM sodium phosphate pH 8.0 (for ANTHCh) with 250 mM NaCl supplemented with 200 μM PIP2. Dynamic light scattering Measurements have been performed using DynaPro Nanostar (serial no. 323-DPN, Wyatt Technology Corporation). Data have been processed using Dynamics v.7 software. 30 µM protein and PIP2 were mixed and incubated overnight at 4 °C. 
Samples were filtered through 0.22 µm centrifugal filters (Millipore). The acquisition time was 3 s with a total of 20 acquisitions. Crystal structure determination The ENTH2 domain from S. cerevisiae in complex with PIP2 was prepared with a protein concentration of 6 mg/ml pre-mixed with 200 μM PIP2 and incubated at 4 °C for 2 h. The sample was filtered through 0.22 µm centrifugal filters (Millipore). Protein crystals were obtained by vapor diffusion in a hanging drop setup using Limbro Plates (Hampton Research). For the crystallization drop, 1 μl of the complex was mixed with 1 μl of a mother liquor containing 0.1 M MES, pH 6.5, 0.1 M NaCl, 1.45 M ammonium sulfate. To obtain a complete data set to 3.35 Å resolution, X-ray data from three cryo-cooled crystals soaked in paraffin oil were combined (see Table 1). The ENTH2 domain from S. cerevisiae was crystallized with a protein concentration of 8 mg/ml in a buffer containing 10% (v/v) isopropanol, 20% PEG 4000 and 0.1 M Hepes pH 7.5. The ENTH1 domain from S. cerevisiae was crystallized with a protein concentration of 11 mg/ml in a buffer containing 0.2 M CaCl2, 25% v/v MPD and 0.1 M Tris pH 8.5. The ENTH1 and ENTH2 domain structures were solved by molecular replacement using PHASER32, with search models of the corresponding domains present in the PDB (Rattus norvegicus ENTH code 1EDU and S. cerevisiae ENTH2 code 4GZC). The structure was refined with REFMAC533 and manually rebuilt with Coot34, and the statistics are reported in Table 1. The ENTH2/PIP2 structure was solved by molecular replacement using the high-resolution structure of ENTH2 determined at 1.8 Å resolution (Table 1). The structure was refined with Phenix35 and manually rebuilt with Coot, resulting in a final Rfactor of 27.9% (Rfree = 30.3%). The stereochemistry was checked with Molprobity36, indicating good overall geometry with only 1.8% of the residues in disallowed regions of the Ramachandran plot. Structure diagrams were prepared with Pymol (The PyMOL Molecular Graphics System, Version 1.7.x, Schrödinger, LLC) and Chimera37. The Sla2 ANTH domain of C. thermophilum (ANTHCh) was crystallized with 0.2 M potassium bromide, 0.1 M Tris pH 7.5, 8% (w/v) PEG 20 K, 8% (w/v) PEG 550 MME. A single, cryo-cooled crystal of ANTHCh diffracted to 1.8 Å at the MASSIF beamline38 and belonged to space group P21 (Table 1). The structure was solved by molecular replacement using a structure prediction from the BAKER-ROSETTA SERVER through the Protein Structure Prediction Center CASP1139 as a search model. A total of 150 in silico designed models that were based on the amino acid sequence of Hip1R were provided by CASP11. One model designed by the Baker-Rosetta team gave a marginal solution, placing two molecules in the asymmetric unit of the P21 crystal form. The CASP model was derived from the CALM crystal structure (PDBID 3ZYM), but it had an RMSD of 2.0 Å for 219 residues when superimposed on the original model. Molecular replacement and model building were performed using PHASER32 and ARPWARP40. Subsequent manual inspection using Coot34 and refinement with REFMAC533 resulted in a refined structure with an Rfactor of 16.7% (Rfree = 22%). The stereochemistry was checked with Molprobity36, indicating good overall geometry with only 0.5% of the residues in disallowed regions of the Ramachandran plot. Human ENTH hexamer was assembled at 80 µM protein concentration in the presence of 20 mM Tris-HCl pH 8.0, 250 mM NaCl and 0.2 mM PIP2 and incubated overnight at 4 °C. 
As a control, 80 µM human ENTH with no PIP2 added was used. Synchrotron radiation X-ray scattering data were collected (EMBL P12, PETRA III, DESY, Germany)41 (Supplementary Table 3) with a PILATUS 2 M pixel detector (DECTRIS, Switzerland) (20 × 0.05 s frames). Solutions of human ENTH (0.4–1.5 mg/ml) were measured through a capillary (20 °C). The sample-to-detector distance was 3.1 m, covering a range of momentum transfer 0.01 ≤ s ≤ 0.46 Å−1 (s = 4π sinθ/λ). Frame comparison showed no detectable radiation damage. Data from the detector were normalized, averaged, buffer subtracted, and placed on an absolute scale relative to water, according to standard procedures. All data manipulations were performed using PRIMUSqt and the ATSAS software package42. The forward scattering I(0) and the radius of gyration Rg were determined from Guinier analysis: I(s) = I(0)exp(−(sRg)²/3). The indirect Fourier transform method was applied using the program GNOM43 to obtain the distance distribution function p(r) and the maximum particle dimension Dmax. Molecular masses (MMs) of solutes were estimated from the SAXS data in three ways: by comparing the extrapolated forward scattering with that of a reference solution of glucose isomerase (Hampton), from the hydrated particle (Porod) volume Vp (with molecular mass estimated as 0.588 times Vp), and from the excluded solvent volume Vex obtained from DAMMIF44 through ab initio modeling. Rigid body modeling was performed using the program CORAL42. Native mass spectrometry Prior to native MS analysis45, purified proteins were buffer exchanged to 300 mM ammonium acetate (PN 431311, 99.99% purity, Sigma-Aldrich) and 1 mM DL-dithiothreitol (PN 43815, 99.5% purity, Sigma-Aldrich), pH 8.0, via centrifugal filter units (Vivaspin 500, MWCO 5000 and 10000, Sartorius) at 13,000 x g and 4 °C. Complexes were assembled after buffer exchange by mixing 10 µM ENTH, 10 µM ANTH and 60 µM PIP2 (Phosphatidylinositol-4,5-bisphosphate diC8, Echelon). Thus, soluble PIP2 was used at a submicellar concentration that is in the range of its physiological concentration in the cell46. ESI capillaries were produced in house. For this, borosilicate capillaries (1.2 OD, 0.68 ID, World Precision Instruments) were processed in a two-step program with a micropipette puller (P-1000, Sutter Instruments) using a squared box filament (2.5 × 2.5 mm, Sutter Instruments) and subsequently gold-coated using a sputter coater (Q150R, 40 mA, 200 s, tooling factor of 2.3 and end bleed vacuum of 8 × 10−2 mbar argon, Quorum Technologies). Native MS on protein complexes was performed on a QToF 2 (Waters and MS Vision) modified for high mass experiments47 with a nanoESI source in positive ion mode. The gas pressures were 10 mbar in the source region and 1.1 × 10−2 mbar xenon (purity 5.0) in the collision cell48,49. Mass spectra were recorded with applied voltages for capillary, cone, and collision of 1.35 kV, 120–150 V, and 30–60 V, respectively, optimized for good resolution and minimal complex dissociation. Complexes were analyzed in MS/MS by ramping collision voltages from 10 to 400 V in order to eject protein subunits. Raw data were calibrated with 25 mg/ml cesium iodide spectra of the same day with the software MassLynx (Waters). MassLynx and Massign50 were used to assign peak series to protein species. Lipid binding was studied on an LCT ESI-ToF system (Waters and MS Vision)47 with direct infusion using the gentlest ionization possible (capillary 1.4 kV, cone 100–120 V, extraction cone 0 V, 6.5 mbar source pressure).
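As a rough illustration of two of the SAXS estimates described above, the sketch below performs a Guinier fit on simulated low-angle data and applies the MM ≈ 0.588 × Vp rule of thumb. The simulated intensities, noise level, and Porod volume are hypothetical and are not the experimental values reported in this work.

```python
import numpy as np

# Simulated low-angle SAXS data following the Guinier law quoted above:
# I(s) = I(0) * exp(-(s*Rg)^2 / 3). All values are hypothetical.
rng = np.random.default_rng(0)
Rg_true, I0_true = 38.0, 1.0                    # Å, arbitrary intensity units
s = np.linspace(0.01, 1.3 / Rg_true, 40)        # restrict to s*Rg < ~1.3 (Guinier regime)
I = I0_true * np.exp(-(s * Rg_true) ** 2 / 3) * (1 + 0.01 * rng.standard_normal(s.size))

# Guinier fit: ln I(s) is linear in s^2 with slope -Rg^2/3 and intercept ln I(0).
slope, intercept = np.polyfit(s ** 2, np.log(I), 1)
Rg_fit = np.sqrt(-3 * slope)
I0_fit = np.exp(intercept)
print(f"Guinier fit: Rg = {Rg_fit:.1f} Å, I(0) = {I0_fit:.3f}")

# Rule-of-thumb molecular mass from the Porod volume, as used above (MM ≈ 0.588 * Vp).
Vp = 190e3                                      # Å^3, hypothetical hydrated particle volume
print(f"MM estimate from Porod volume: {0.588 * Vp / 1e3:.0f} kDa")
```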
Samples were prepared by adding 60 µM PIP2 (Phosphatidylinositol-4,5-biphosphate diC8, Echelon) to the protein of interest (10 µM). For ENTH domains, cytochrome c from equine heart (PN 129021, Sigma-Aldrich) and for ANTH domains, carbonic anhydrase isoenzyme II from bovine erythrocytes (PN C2522, Sigma-Aldrich) was used as reference protein to test for unspecific clustering as described18. If there was signal overlap of the ENTH or ANTH domain with signal of the PIP2-bound reference protein, this charge state was excluded from the calculation of the ratio of unspecific binding. As an approximation the unspecific binding that was determined using the remaining charge states of the reference protein was subtracted from overlapping signals and the residual signals of the overlapping peaks were considered for further analysis. Relative peak intensities were used to determine the ratio of lipid-bound control protein to non-bound control protein. This ratio was used to correct peak intensities and visualize the data using GraphPad Prism (GraphPad Software). Cooperativity of the two binding sites was assessed by reviewing the mathematical relation of microscopic (binding site) and macroscopic (apparent) dissociation constants of independent binding sites (see Fig. 1c). As shown in20 these are related by: $$K_{\rm{D,1}} = \frac{{k_{\rm{d,1}} \times k_{\rm{d,2}}}}{{k_{\rm{d,1}} + k_{\rm{d,2}}}}$$ $$K_{\rm{D,2}} = k_{\rm{d,1}} + k_{\rm{d,2}}$$ A lack of microscopic constants kd,1 and kd,2 fulfilling the required condition for macroscopic constants is an indication for dependence of PIP2-binding sites. Macroscopic dissociation constants in the same order of magnitude for both binding events suggest positive cooperativity. All errors given for native MS data refer to the standard deviation and were based on at least three independent measurements. Yeast strain and plasmids Yeast strain MKY0764 (MATa, his3Δ200, leu2-3,112, ura3-52, lys2-801, sla2:natNT2) was transformed by pRS416-based centromeric plasmid12 expressing indicated variants of Sla2 aa 1–360 fragments51 (generated by overlapping PCR with mutagenic primers). The expression levels of protein constructs were assessed by immunoblotting with anti-HA antibodies (Covance MMS-101R) at 1:1000 dilution. The secondary antibody used was anti-mouse HRP conjugate (BIO-RAD, 1706516) at 1:3000 dilution. The atomic coordinates for the ENTH2/PIPI2 complex (PDBID 5ON7), the ENTH1 structure (PDBID 5ONF), and the ENTH2 structure (PDBID 6ENR) from the S. cerevisiae as well as the Sla2 structure from C. thermophilum (PDBID 5OO7) are deposited at the Protein Data Bank. Other data are available from the corresponding authors upon reasonable request. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. McMahon, H. T. & Boucrot, E. Membrane curvature at a glance. J. Cell Sci. 128, 1065–1070 (2015). Merrifield, C. J. & Kaksonen, M. Endocytic accessory factors and regulation of clathrin-mediated endocytosis. Cold Spring Harb. Perspect. Biol. 6, a016733 (2014). Ferguson, S. M. & De Camilli, P. Dynamin, a membrane-remodelling GTPase. Nat. Rev. Mol. Cell Biol. 13, 75–88 (2012). Kirchhausen, T., Owen, D. & Harrison, S. C. Molecular structure, function, and dynamics of clathrin-mediated membrane traffic. Cold Spring Harb. Perspect. Biol. 6, a016725 (2014). Ford, M. G. J. et al. Curvature of clathrin-coated pits driven by epsin. Nature 419, 361–366 (2002). Ford, M. G. et al. 
Simultaneous binding of PtdIns(4,5)P2 and clathrin by AP180 in the nucleation of clathrin lattices on membranes. Science 291, 1051–1055 (2001). Jackson, L. P. et al. A large-scale conformational change couples membrane recruitment to cargo binding in the AP2 clathrin adaptor complex. Cell 141, 1220–1229 (2010). Itoh, T. et al. Role of the ENTH domain in phosphatidylinositol-4,5-bisphosphate binding and endocytosis. Science 291, 1047–1051 (2001). Miller, S. E. et al. CALM regulates clathrin-coated vesicle size and maturation by directly sensing and driving membrane curvature. Dev. Cell 33, 163–175 (2015). Kozlov, M. M. et al. Mechanisms shaping cell membranes. Curr. Opin. Cell Biol. 29, 53–60 (2014). Stachowiak, J. C. et al. Membrane bending by protein-protein crowding. Nat. Cell Biol. 14, 944–949 (2012). Skruzny, M. et al. Molecular basis for coupling the plasma membrane to the actin cytoskeleton during clathrin-mediated endocytosis. Proc. Natl Acad. Sci. USA 109, E2533–E2542 (2012). Messa, M. et al. Epsin deficiency impairs endocytosis by stalling the actin-dependent invagination of endocytic clathrin-coated pits. eLife 3, e03311 (2014). Skruzny, M. et al. An organized co-assembly of clathrin adaptors is essential for endocytosis. Dev. Cell 33, 150–162 (2015). Boulant, S., Kural, C., Zeeh, J.-C., Ubelmann, F. & Kirchhausen, T. Actin dynamics counteract membrane tension during clathrin-mediated endocytosis. Nat. Cell Biol. 13, 1124–1131 (2011). Erba, E. B. & Zenobi, R. Mass spectrometric studies of dissociation constants of noncovalent complexes. Annu Rep. Prog. Chem. C Phys. Chem. 107, 199–228 (2011). Leney, A. C. & Heck, A. J. R. Native mass spectrometry: what is in the name? J. Am. Soc. Mass Spectrom. 28, 5–13 (2017). Sun, J., Kitova, E. N., Wang, W. & Klassen, J. S. Method for distinguishing specific from nonspecific protein-ligand complexes in nanoelectrospray ionization mass spectrometry. Anal. Chem. 78, 3010–3018 (2006). Kitova, E. N., El-Hawiet, A., Schnier, P. D. & Klassen, J. S. Reliable determinations of protein–ligand interactions by direct esi-ms measurements. are we there yet? J. Am. Soc. Mass Spectrom. 23, 431–441 (2012). Klotz, I. M. Ligand–receptor interactions: facts and fantasies. Q. Rev. Biophys. 18, 227 (1985). Brady, R. J., Damer, C. K., Heuser, J. E. & O'Halloran, T. J. Regulation of Hip1r by epsin controls the temporal and spatial coupling of actin filaments to clathrin-coated pits. J. Cell Sci. 123, 3652–3661 (2010). Aguilar, R. C. et al. Epsin N-terminal homology domains perform an essential function regulating Cdc42 through binding Cdc42 GTPase-activating proteins. Proc. Natl Acad. Sci. USA 103, 4116–4121 (2006). Brady, R. J., Wen, Y. & O'Halloran, T. J. The ENTH and C-terminal domains of Dictyostelium epsin cooperate to regulate the dynamic interaction with clathrin-coated pits. J. Cell Sci. 121, 3433–3444 (2008). Krissinel, E. Stock-based detection of protein oligomeric states in jsPISA. Nucleic Acids Res. 43, W314–W319 (2015). Yoon, Y. et al. Molecular basis of the potent membrane-remodeling activity of the epsin 1 N-terminal homology domain. J. Biol. Chem. 285, 531–540 (2010). Cocucci, E., Aguet, F., Boulant, S. & Kirchhausen, T. The first five seconds in the life of a clathrin-coated pit. Cell 150, 495–507 (2012). Raucher, D. et al. Phosphatidylinositol 4,5-Bisphosphate functions as a second messenger that regulates Cytoskeleton–Plasma membrane adhesion. Cell 100, 221–228 (2000). Gupta, K. et al. 
The role of interfacial lipids in stabilizing membrane protein oligomers. Nature 541, 421–424 (2017). Boucrot, E. et al. Membrane fission is promoted by insertion of amphipathic helices and is restricted by crescent BAR domains. Cell 149, 124–136 (2012). Busch, D. J. et al. Intrinsically disordered proteins drive membrane curvature. Nat. Commun. 6, 7875 (2015). Bock, T. et al. An integrated approach for genome annotation of the eukaryotic thermophile chaetomium thermophilum. Nucleic Acids Res. 42, 13525–13533 (2014). Bunkóczi, G. et al. Phaser.MRage: automated molecular replacement. Acta Crystallogr. D Biol. Crystallogr. 69, 2276–2286 (2013). Winn, M. D., Murshudov, G. N. & Papiz, M. Z. Macromolecular TLS refinement in REFMAC at moderate resolutions. Methods Enzymol. 374, 300–321 (2003). Emsley, P., Lohkamp, B., Scott, W. G. & Cowtan, K. Features and development of Coot. Acta Crystallogr. D Biol. Crystallogr. 66, 486–501 (2010). Afonine, P. V. et al. Towards automated crystallographic structure refinement with phenix.refine. Acta Crystallogr. D Biol. Crystallogr. 68, 352–367 (2012). Chen, V. B. et al. MolProbity: all-atom structure validation for macromolecular crystallography. Acta Crystallogr. D Biol. Crystallogr. 66, 12–21 (2010). Pettersen, E. F. et al. UCSF Chimera--a visualization system for exploratory research and analysis. J. Comput. Chem. 25, 1605–1612 (2004). Nurizzo, D. et al. RoboDiff: combining a sample changer and goniometer for highly automated macromolecular crystallography experiments. Acta Crystallogr. D Struct. Biol. 72, 966–975 (2016). Kryshtafovych, A., Krysko, O., Daniluk, P., Dmytriv, Z. & Fidelis, K. Protein structure prediction center in CASP8. Proteins 77, 5–9 (2009). Suppl 9. Langer, G. G., Hazledine, S., Wiegels, T., Carolan, C. & Lamzin, V. S. Visual automated macromolecular model building. Acta Crystallogr. D Biol. Crystallogr. 69, 635–641 (2013). Blanchet, C. E. et al. Versatile sample environments and automation for biological solution X-ray scattering experiments at the P12 beamline (PETRA III, DESY). J. Appl. Crystallogr. 48, 431–443 (2015). Petoukhov, M. V. et al. New developments in the ATSAS program package for small-angle scattering data analysis. J. Appl. Crystallogr. 45, 342–350 (2012). Semenyuk, A. V. & Svergun, D. I. GNOM – a program package for small-angle scattering data processing. J. Appl. Crystallogr. 24, 537–540 (1991). Franke, D. & Svergun, D. I. DAMMIF, a program for rapid ab-initio shape determination in small-angle scattering. J. Appl. Crystallogr. 42, 342–346 (2009). Lorenzen, K. & van Duijn, E. in Current Protocols in Protein Science (eds Coligan, J. E., Dunn, B. M., Speicher, D. W. & Wingfield, P. T.) 17.12.1–17.12.17 (John Wiley & Sons, Inc., New York, 2010). Gamper, N. & Shapiro, M. S. Target-specific PIP 2 signalling: how might it work?: Target-specific PIP 2 signalling. J. Physiol. 582, 967–975 (2007). van den Heuvel, R. H. H. et al. Improving the performance of a quadrupole time-of-flight instrument for macromolecular mass spectrometry. Anal. Chem. 78, 7473–7483 (2006). Lorenzen, K., Versluis, C., van Duijn, E., van den Heuvel, R. H. H. & Heck, A. J. R. Optimizing macromolecular tandem mass spectrometry of large non-covalent complexes using heavy collision gases. Int. J. Mass Spectrom. 268, 198–206 (2007). Tahallah, N., Pinkse, M., Maier, C. S. & Heck, A. J. R. The effect of the source pressure on the abundance of ions of noncovalent protein assemblies in an electrospray ionization orthogonal time-of-flight instrument. 
Rapid Commun. Mass Spectrom. 15, 596–601 (2001). Morgner, N. & Robinson, C. V. Massign: an assignment strategy for maximizing information from the mass spectra of heterogeneous protein assemblies. Anal. Chem. 84, 2939–2948 (2012). Sun, Y., Carroll, S., Kaksonen, M., Toshima, J. Y. & Drubin, D. G. PtdIns(4,5)P2 turnover is required for multiple stages during clathrin- and actin-dependent endocytic internalization. J. Cell Biol. 177, 355–367 (2007). We thank Rob Smock, Daniel Lietha, and Michel Koch for useful comments on the manuscript. We acknowledge the staff of the EMBL beamlines P12 and P14 as well as the MASSIF beamline at the ESRF, and the sample preparation and characterization (SPC) facility of EMBL at PETRA III (DESY, Hamburg) for assistance. M.G.-A. and A.G. were supported by the EMBL Interdisciplinary Postdoc Program (EIPOD) under Marie Curie COFUND actions. The Heinrich Pette Institute, Leibniz Institute for Experimental Virology is supported by the Free and Hanseatic City of Hamburg and the Federal Ministry of Health. J.H. and C.U. are funded by the Leibniz Association through SAW-2014-HPI-4 grant. M.K. is funded by the Swiss National Science Foundation (31003A_163267). Maria M. Garcia-Alai and Johannes Heidemann contributed equally to this work. European Molecular Biology Laboratory (EMBL), Hamburg Outstation, Notkestrasse 85, 22607, Hamburg, Germany Maria M. Garcia-Alai, Anna Gieras, Haydyn D. T. Mertens, Dmitri I. Svergun & Rob Meijers Heinrich Pette Institute, Leibniz Institute for Experimental Virology, Martinistrasse 52, 20251, Hamburg, Germany Johannes Heidemann & Charlotte Uetrecht LOEWE Center for Synthetic Microbiology (SYNMIKRO), Max Planck Institute for Terrestrial Microbiology, 35043, Marburg, Germany Michal Skruzny University Medical Center Hamburg – Eppendorf, Martinistrasse 52, 20246, Hamburg, Germany Anna Gieras Department of Biochemistry and NCCR Chemical Biology, University of Geneva, Quai Ernest-Ansermet 30, 1211, Geneva 4, Switzerland Marko Kaksonen European XFEL GmbH, Holzkoppel 4, 22869, Schenefeld, Germany Charlotte Uetrecht R.M. conceived the work together with C.U. and M.K. M.G.-A. produced proteins and samples for crystallization, SAXS, and native mass spectrometry, determined the Sla2 ANTH crystal structure, designed and performed biophysical experiments, and interpreted data. J.H. performed mass spectrometry experiments and interpreted the data together with C.U. M.S. performed growth experiments in S. cerevisiae. A.G. produced proteins for native mass spectrometry and the ENTH1 and ENTH2/PIP2 crystals. H.D.T.M. performed SAXS analysis and interpreted the data together with D.I.S. R.M. wrote the manuscript with input from all authors. Correspondence to Charlotte Uetrecht or Rob Meijers.
November 2011, 10(6): 1687-1706. doi: 10.3934/cpaa.2011.10.1687 Self-adjoint, globally defined Hamiltonian operators for systems with boundaries Nuno Costa Dias 1, Andrea Posilicano 2, and João Nuno Prata 1, Universidade Lusófona de Humanidades e Tecnologias, Av. Campo Grande 376, 1749-024 Lisboa, Portugal Dipartimento di Scienze Fisiche e Matematiche, Università dell'Insubria, via valleggio 11, I-22100 Como, Italy Received March 2010 Revised February 2011 Published May 2011 For a general self-adjoint Hamiltonian operator $H_0$ on the Hilbert space $L^2(R^d)$, we determine the set of all self-adjoint Hamiltonians $H$ on $L^2(R^d)$ that dynamically confine the system to an open set $\Omega \subset R^d$ while reproducing the action of $H_0$ on an appropriate operator domain. In the case $H_0=-\Delta +V$ we construct these Hamiltonians explicitly, showing that they can be written in the form $H=H_0+ B$, where $B$ is a singular boundary potential and $H$ is self-adjoint on its maximal domain. An application to the deformation quantization of one-dimensional systems with boundaries is also presented. Keywords: Quantum systems with boundaries, deformation quantization, self-adjoint extensions. Mathematics Subject Classification: Primary: 81Q10, 47B25; Secondary: 81S3. Citation: Nuno Costa Dias, Andrea Posilicano, João Nuno Prata. Self-adjoint, globally defined Hamiltonian operators for systems with boundaries. Communications on Pure & Applied Analysis, 2011, 10 (6) : 1687-1706. doi: 10.3934/cpaa.2011.10.1687
What is the typical temperature of an airliner's hull during flight? Some high-speed military aircraft like the SR-71 had real heating problems, but airliners also travel almost at the speed of sound, use most of their fuel to make up for frictional losses, so I would presume that their hulls heat up. They are also cooled by the airflow, but at what temperature does equilibrium set in during cruise? I remember that airliners don't seem to be very hot when you touch them after landing, but they have had time to cool down in the slow winds during descent. aerodynamics airliner temperature Rodrigo de Azevedo yippy_yayyippy_yay $\begingroup$ use most of their fuel to make up for frictional losses - what gave you this idea? Since efficiency is < 50%, most of the fuel is simply burnt for no return. The majority of what is left is burnt to overcome drag which is a consequence of creating lift. I don't have a number, but the overall amount of fuel used to overcome friction is going to be a small fraction. $\endgroup$ – Simon Feb 27 '16 at 9:38 There are two primary factors that affect the skin temperature of an aircraft in flight: the air temperature, and the speed of the aircraft. The air temperature where airliners cruise is relatively cold, around -54 °C at 35,000 feet. As a body like an aircraft moves through air, it compresses the air, which causes the air temperature to rise. The maximum temperature rise will be if the air is completely stopped, such as at a leading edge. This is called the total air temperature, and the amount that the temperature rises is called the ram rise. Using a simple formula to find the ram rise: $$RR = \frac{V^2}{87^2}$$ … where $RR$ is in Kelvin, and $V$ is the true airspeed in knots. Using a typical airliner cruising speed of 500 knots gives a temperature of 33 degrees. This brings the total air temperature to -22 °C, which is still quite cold. At places other than the leading edge, the temperature rise will be less. This is why cargo holds will need heaters to be safe for live animals, even being insulated and pressurized. Airliners just don't fly fast enough to produce a significant amount of heating. On the other hand, the SR-71 could fly at over 1,910 kts, which gives a ram rise of 482 °C. The air doesn't get much colder as you climb to the altitudes where the SR-71 flew, so this gives a total air temperature of over 400 °C. Speed makes a huge difference. fooot♦fooot $\begingroup$ Where does the 87 come from in that equation? $\endgroup$ – Holloway Dec 18 '15 at 11:35 $\begingroup$ @Holloway If you follow the linked reference you'll see it is an empirical approximation (for the specific system we're discussing) lumping in the heat capacity and recovery factors in the analytical equation. The = should probably be an ≈ in this instance. $\endgroup$ – J... Dec 18 '15 at 11:55 $\begingroup$ @J... Thanks, I imagined it was a mix of constants but wasn't sure which. $\endgroup$ – Holloway Dec 18 '15 at 11:56 $\begingroup$ alternately, you can use the isentropic flow relations and get $T \approx T_\infty(1+\frac{v^2}{531.6\cdot T_\infty})$, where $T_\infty$ is the air temperature in Kelvins and $v$ is the speed in knots $\endgroup$ – costrom Dec 18 '15 at 16:15 $\begingroup$ This answer calculates temperature rise due to compression, however it does not address the friction between the air and the surface of the aircraft. I'm assuming the frictional heating is negligible, but someone might want to elaborate on it. 
$\endgroup$ – Ian Dec 18 '15 at 17:24 Local air temperature In fast aircraft, the maximum heating is at the stagnation point. Here the kinetic energy of the flow is completely converted into pressure, which heats the air and, consequently, the structure. Due to the low local speed and the high pressure at and near the stagnation point, the rate of heat transfer is high, too, adding to the heat load. The formula for the stagnation point temperature $T_s$ of an ideal gas of the temperature $T_{\infty}$ hitting an object with the Mach number Ma is $$T_s = T_{\infty} + T_{\infty}\cdot\frac{(\kappa-1)\cdot Ma^2}{2}$$ For air the ratio of specific heats $\kappa$ is 1.4. The tip of the fuselage nose of an airliner flying at Mach 0.85 will see air temperature to rise by 14.45%. If the air at altitude has a temperature of 220°K (-53.15°C), the air temperature at the stagnation point will be 251.8°K (-21.36°C). But past the stagnation point the air will accelerate and become faster than flight speed. Now pressure and, consequently, temperature need to drop sufficiently to encourage the flow to stay attached and follow the curvature of the forward fuselage. This acceleration will cool the air, so the flow right above the windshield will be cooler than the ambient air. Along the cylindrical portion of the fuselage, we find roughly flight speed again, but now friction will change the temperature close to the wall. Again the kinetic energy is converted, but the heating is caused by friction. See the boundary layer plots below: Frictional and thermal boundary layer (picture source) The temperature close to the wall is now called recovery temperature and is different from the stagnation point temperature because there is a small speed component normal to the surface which carries away some of the heat. The air temperature depends on the ratio between viscous diffusion and thermal diffusion, which is expressed by the Prandtl number Pr. If Pr>1, the air temperature at the wall is higher than the stagnation temperature and for Pr<1, it is colder. The Prandtl number of air is 0.72, so the air surrounding the fuselage is slightly colder than the stagnation temperature. Fuselage temperature The fuselage temperature is determined by the equilibrium between thermal conductivity, radiation and convection. Conductivity: Here it is important how much the internal temperature of the fuselage can heat the skin. Cabin temperature is likely around 20°C, so some heating can be expected. However, since most airliners have isolation mats between the outer skin and the internal wall panels, conductivity from the inside is not dominant and will likely raise the skin temperature by a few degrees or less. The low heat conductivity of air (0.0204 W per m² and Kelvin) means that the heating from the inside dominates conductivity. Radiation: Since the top of the fuselage is pointing into space, its far-field radiation budget is negative at night and where it points away from the sun, so radiation will cool it. The lower fuselage, however, is facing either the ground or clouds below, which both are likely hotter than the ambient air. Radiation will not cool it much and is more likely to heat it up. The part of the fuselage in direct sunlight will be significantly hotter again, depending on its color. Convection: This is the dominant factor due to the high speed of the air around the fuselage. 
Here the air and the fuselage exchange heat by near-field radiation, and since the layer of air is replenished quickly and continuously, the air temperature is impressed on the fuselage. I did not go to the effort of calculating the end result, but tried to list the main contributors and their magnitude. In general, the fuselage temperature is slightly below the stagnation temperature, and a dark fuselage in bright sunlight or one with little insulation and a hot interior will be several degrees hotter than the stagnation temperature. Peter Kämpf $\begingroup$ You distinguish between the kinetic energy of the flow converted into pressure at the stagnation point and 'friction' caused by the airflow against the hull - isn't the first friction too? In both cases, the body of the aircraft transforms the uniform speed of air molecules into heat. $\endgroup$ – yippy_yay Dec 19 '15 at 12:11 $\begingroup$ @yippy_yay: No, in the first case it is the pressure rise which heats the flow reversibly, and friction heating is irreversible and isobaric. $\endgroup$ – Peter Kämpf Dec 19 '15 at 12:22 $\begingroup$ Okay, but once the heat inside the stagnation point flows into the hull, the process is irreversible. You would still distinguish between friction and this process? $\endgroup$ – yippy_yay Dec 19 '15 at 12:29 $\begingroup$ @yippy_yay: Yes, because there is no friction involved. Compression heat occurs in an ideal gas, too. $\endgroup$ – Peter Kämpf Dec 19 '15 at 16:34
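For anyone who wants to plug in their own numbers, here is a quick numerical check of the two formulas quoted in the answers above (the empirical ram-rise rule of thumb and the ideal-gas stagnation-temperature relation). This is only an illustration: the function names and the default κ = 1.4 for air are choices made here, not part of the original posts.

```python
# Quick numerical check of the formulas quoted above (illustrative only).

def ram_rise_kelvin(tas_knots):
    """Empirical ram rise RR = V^2 / 87^2, with V the true airspeed in knots."""
    return (tas_knots / 87.0) ** 2

def stagnation_temp_kelvin(t_ambient_kelvin, mach, kappa=1.4):
    """Ideal-gas stagnation temperature T_s = T * (1 + (kappa - 1)/2 * Ma^2)."""
    return t_ambient_kelvin * (1.0 + 0.5 * (kappa - 1.0) * mach ** 2)

# Airliner: ~500 kt TAS, Mach 0.85, ambient 220 K (about -53 degC)
print(ram_rise_kelvin(500))                        # ~33 K, as in the first answer
print(stagnation_temp_kelvin(220, 0.85) - 273.15)  # ~ -21.4 degC at the stagnation point

# SR-71: ~1910 kt TAS
print(ram_rise_kelvin(1910))                       # ~482 K
```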
Proof that $\bigcap_{a\in A} \bigg(\, \bigcup_{b\in B} F_{a,b} \, \bigg) = \bigcup_{f\in ^AB} \bigg(\, \bigcap_{a \in A} F_{a,f(a)}\,\bigg) $ I found an interesting exercise of other amazing book (Jech's) that I'm not totally sure about how to do it. Prove the following form of the distributive law: $$\bigcap_{a\in A} \bigg(\, \bigcup_{b\in B} F_{a,b} \, \bigg) = \bigcup_{f\in ^AB} \bigg(\, \bigcap_{a \in A} F_{a,f(a)}\,\bigg) $$ Assuming that $F_{a,b_1} \cap F_{a,b_2} = \emptyset $ for each $a\in A$ and $b_1,b_2\in B$ and $b_1\not=b_2$. First of all, I'm not completely sure about what $ F_{a,b}$ really means. I suppose that is the range of a family with a domain $A \times B$. And second, I cannot figure out how to do the converse (assuming that the first part it is correct, I used as normal in this kind of proof element chasing) ($\Rightarrow$) ... ($\Leftarrow$) Suppose that $z\in \bigcup_{f\in ^AB} \big(\, \bigcap_{a \in A} F_{a,f(a)}\,\big)$; then there is $f\in \,^AB$ such that $z\in \bigcap_{a \in A} F_{a,f(a)}\, $. Let $a \in A$ be arbitrary. Claim 1 $\, \bigcap_{a \in A} F_{a,f(a)}\subseteq F_{a,f(a)} \subseteq \bigcup_{b\in B} F_{a,b}$. Proof of the Claim 1: For the first inclusion: Suppose $z\in\bigcap_{a \in A} F_{a,f(a)}$; then for each $a\in A$ we have that $z\in F_{a,f(a)}$. Then $\bigcap_{a \in A} F_{a,f(a)}\subseteq F_{a,f(a)}$. For the second inclusion: Now suppose $z\in F_{a,f(a)}$. Clearly $f(a)\in B$ because $f\in ^AB$. Then there is some $b\in B$ such that $z\in F_{a,b}$ and hence that $z\in \bigcup_{b\in B} F_{a,b}$. $\square$ Since $a$ was arbitrary it follows that $z\in \bigcup_{b\in B} F_{a,b}$ (by claim 1) for each $a\in A$ and hence that $z\in \bigcap_{a\in A} \big(\, \bigcup_{b\in B} F_{a,b} \, \big)$ as desired. I really, really would very much appreciate some help with that. Thanks in advance as usual. elementary-set-theory Jose AntonioJose Antonio $\begingroup$ How can you possibly have proved anything if you are not sure what $F_{a,b}$ means? $\endgroup$ – Mariano Suárez-Álvarez Aug 20 '13 at 4:29 $\begingroup$ I suppose that is the family $\left\{ F_{a,b}\right\}$ with domain $A\times B$. But to be honest I'm not completely sure. I assumed that. $\endgroup$ – Jose Antonio Aug 20 '13 at 4:35 Your understanding of $F_{a,b}$ is correct. Your statement of Claim $1$ is a bit flawed, because you're using the letter $a$ both as an index variable in the intersection and as a specific element of $A$. You should instead let $a_0\in A$ be arbitrary, and state the claim as follows: Claim $\mathbf1$ : $\bigcap_{a\in A}F_{a,f(a)}\subseteq F_{a_0,f(a_0)}\subseteq \bigcup_{b\in B}F_{a_0,b}$. Your argument for the first inclusion is then just the observation that an intersection of sets is contained in any one of those sets: if $z\in\bigcap_{a\in A}F_{a,f(a)}$, then $z\in F_{a,f(a)}$ for each $a\in A$, and in particular $z\in F_{a_0,f(a_0)}$. For the second inclusion you will of course then start with $z\in F_{a_0,f(a_0)}$, and the rest of your argument proceeds with only very minor changes: $f(a_0)\in B$, so $F_{a_0,f(a_0)}\subseteq\bigcup_{b\in B}F_{a_0,b}$. Now you can argue that since $a_0\in A$ was arbitrary, for each $a\in A$ we have $z\in\bigcup_{b\in B}F_{a,b}$, and therefore $z\in\bigcap_{a\in A}\bigcup_{b\in B}F_{a,b}$, as desired. In other words, your argument is basically correct, but you've allowed yourself to confuse generic elements with specific elements; a trivial rewriting takes care of the problem. 
For the other direction, suppose that $z\in\bigcap_{a\in A}\bigcup_{b\in B}F_{a,b}$. Let $a\in A$ be arbitrary; then $z\in\bigcup_{b\in B}F_{a,b}$, so there is a $b\in B$ such that $z\in F_{a,b}$. The disjointness condition on the sets ensures that this $b$ is unique, so $\{\langle a,b\rangle\in A\times B:z\in F_{a,b}\}$ is actually a function from $A$ to $B$; call this function $f_z$. Clearly $z\in\bigcap_{a\in A}F_{a,f_z(a)}\subseteq\bigcup_{f\in{}^BA}\bigcap_{a\in A}F_{a,f(a)}$, and you're done. Brian M. ScottBrian M. Scott $\begingroup$ I'll try to pay more attention to the use of generic elements with specific elements. But this $\{\langle a,b\rangle\in A\times B:z\in F_{a,b}\}$ it was breaking my head. Thank you so much :) $\endgroup$ – Jose Antonio Aug 20 '13 at 4:56 $\begingroup$ @Jose: Yes, finding a nice way to express that function is a little tricky. You're welcome! $\endgroup$ – Brian M. Scott Aug 20 '13 at 5:03 Like in many set theory proofs, it helps to translate from the set level to the element level, and then use ordinary logic. In this case that means that we are asked to prove for any $\;z\;$ that $$ (0) \;\;\; \langle \forall a :: \langle \exists b :: z \in F(a,b) \rangle \rangle \;\equiv\; \langle \exists f : f \in A \to B : \langle \forall a :: z \in F(a,f(a)) \rangle \rangle $$ (Throughout this answer implicitly $\;a \in A\;$ and $\;b,b_1,b_2 \in B\;$, and $\;f\;$ is a function.) It seems necessary to prove both directions separately. The first thing I tried is start from the most complex side of $(0)$, and push the $\;\exists f\;$ inward as much as possible: for any $\;z\;$, \begin{align} & \langle \exists f : f \in A \to B : \langle \forall a :: z \in F(a,f(a)) \rangle \rangle \\ \Rightarrow & \;\;\;\;\;\text{"logic: $\;\exists\forall \Rightarrow \forall\exists\;$"} \\ & \langle \forall a :: \langle \exists f : f \in A \to B : z \in F(a,f(a)) \rangle \rangle \\ \equiv & \;\;\;\;\;\text{"introduce abbreviation with one-point rule -- suggested by left hand side of $(0)$"} \\ & \langle \forall a :: \langle \exists f : f \in A \to B : \langle \exists b : b = f(a) : z \in F(a,b) \rangle \rangle \rangle \\ \equiv & \;\;\;\;\;\text{"move $\;\exists f\;$ to the only part which uses $\;f\;$"} \\ & \langle \forall a :: \langle \exists b : \langle \exists f : f \in A \to B : b = f(a) \rangle : z \in F(a,b) \rangle \rangle \\ \Rightarrow & \;\;\;\;\;\text{"weaken range of $\;\exists b\;$ to $\;\text{true}\;$"} \\ & \langle \forall a :: \langle \exists b :: z \in F(a,b) \rangle \rangle \\ \end{align} This proves the backward direction of $(0)$. For the forward direction we assume $$ (1) \;\;\; \langle \forall a :: \langle \exists b :: z \in F(a,b) \rangle \rangle $$ and we must construct a function $\;f\ \in A \to B\;$ such that $\;\langle \forall a :: z \in F(a,f(a)) \rangle\;$. We calculate for any function $\;f\;$ and any $\;z\;$ \begin{align} & \langle \forall a :: z \in F(a,f(a)) \rangle \\ \equiv & \;\;\;\;\;\text{"rewrite using one-point rule -- allows definition of function application"} \\ & \langle \forall a,b : b = f(a) : z \in F(a,b) \rangle \\ \equiv & \;\;\;\;\;\text{"definition of function application"} \\ & \langle \forall a,b : (a,b) \in f : z \in F(a,b) \rangle \\ \Leftarrow & \;\;\;\;\;\text{"the simplest possible choice for $\;f\;$"} \\ & f = \{a,b : z \in F(a,b) : (a,b)\} \end{align} (Notation. The last line describes the set which, for each $\;a,b\;$ such that $\;z \in F(a,b)\;$, contains the pair $\;(a,b)\;$.) 
Now we only have to prove that this relation is indeed a function from $\;A \to B\;$. It clearly is a subset of $\;A \times B\;$, since $\;a \in A\;$ and $\;b \in B\;$ (implicitly). And it is a function, since for any $\;z\;$ \begin{align} & \{a,b : z \in F(a,b) : (a,b)\}\text{ is a function} \\ \equiv & \;\;\;\;\;\text{"definition of what it means to be a function"} \\ & \langle \forall a :: \langle \exists! b :: z \in F(a,b) \rangle \rangle \\ \equiv & \;\;\;\;\;\text{"split $\;\exists! b\;$ into $\;\exists b\;$ and $\;! b\;$; $\;\land\;$ distributes over $\;\forall\;$"} \\ & \langle \forall a :: \langle \exists b :: z \in F(a,b) \rangle \rangle \;\land\; \langle \forall a :: \langle ! b :: z \in F(a,b) \rangle \rangle \\ \equiv & \;\;\;\;\;\text{"left part using $(1)$; right part using $(2)$ below"} \\ & \text{true} \end{align} (Notation. Here $\;! b\;$ means "there exists at most one $\;b\;$".) The right part of the last step follows directly from the proviso, which (translated to the element level) says that $$ \langle \forall a, b_1, b_2 : b_1 \not= b_2 : \lnot(z \in F(a,b_1) \land z \in F(a,b_2)) \rangle $$ or equivalently $$ (2) \;\;\; \langle \forall a :: \langle ! b :: z \in F(a,b) \rangle \rangle $$ for any $\;z\;$. This completes the proof. Marnix Klooster
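As a complement to the proofs above, one can sanity-check the identity by brute force on small finite instances. The following Python snippet (set sizes and names are arbitrary choices made here) builds random families $F_{a,b}$ satisfying the disjointness proviso and verifies that both sides coincide:

```python
# Brute-force check of the distributive law on small finite instances.
# This is a sanity check, not a proof; all names and sizes are arbitrary.
from itertools import product
import random

def check_once(rng):
    A = range(3)                 # index set A
    B = range(3)                 # index set B
    universe = range(8)
    # Build F[a][b] with F[a][b1] and F[a][b2] disjoint for b1 != b2:
    # every element of the universe goes into at most one F[a][b] per a.
    F = {a: {b: set() for b in B} for a in A}
    for a in A:
        for z in universe:
            b = rng.randrange(len(B) + 1)   # len(B) means "in no F[a][b]"
            if b < len(B):
                F[a][b].add(z)

    # Left-hand side: intersection over a of the union over b.
    lhs = set(universe)
    for a in A:
        lhs &= set.union(*F[a].values())

    # Right-hand side: union over all f : A -> B of the intersection of F[a][f(a)].
    rhs = set()
    for f in product(B, repeat=len(A)):     # f[a] plays the role of f(a)
        inter = set(universe)
        for a in A:
            inter &= F[a][f[a]]
        rhs |= inter
    return lhs == rhs

rng = random.Random(0)
print(all(check_once(rng) for _ in range(1000)))   # True
```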
EURASIP Journal on Wireless Communications and Networking A state metrics compressed decoding technique for energy-efficient turbo decoder Ming Zhan1, Zhibo Pang ORCID: orcid.org/0000-0002-7474-42942, Ming Xiao3 & Hong Wen4 EURASIP Journal on Wireless Communications and Networking volume 2018, Article number: 152 (2018) Cite this article In the energy resource-constrained wireless applications, turbo codes are frequently employed to guarantee reliable data communication. To both reduce the power dissipation of the turbo decoder and the probability of data frame retransmission in the physical layer, memory capacity reduced near optimal turbo decoder is of special importance from the perspective of practical implementation. In this regard, a state metrics compressed decoding technique is proposed. By inserting two modules in the conventional turbo decoding architecture, a smaller quantization scheme can be applied to the compressed state metrics. Furthermore, structure of the inserted modules is described in detail. We demonstrate that one or two rounds of compression/decompression are performed in most cases during the iterative decoding process. At the cost of limited dummy decoding complexity, the state metrics cache (SMC) capacity is reduced by 53.75%. Although the proposed technique is a lossy compression strategy, the introduced errors only have tiny negative influence on the decoding performance as compared with the optimal Log-MAP algorithm. In recent years, turbo codes have been adopted as the channel coding scheme by some advanced communication standards [1, 2]. To improve the reliability in wireless data transmission, this family of error correction code also steps into some energy resource-constrained wireless applications [3]. In some scenarios, such as wireless sensor networks (WSNs), the sensor nodes only have limited weight of batteries, almost 80% of the overall power dissipation in the nodes is accounted for wireless data communication, and lifetime of the sensor nodes is dominated by the power dissipation of the communication module [4–6]. To reduce the transmission power and to decrease the number of data frame retransmission in the sensor nodes as much as possible, the research of turbo decoder with low power consumption and near optimal bit error rate (BER) is a key topic. However, in the engineering implementation of turbo code decoder, the maximum a posteriori (MAP) decoding algorithm is iteratively performed; the decoder requires large capacity memory and frequent memory accessing, which lead to high power dissipation of the turbo decoder [7]. Therefore, conventional turbo decoder is not suitable for power resource limited WSNs scenarios. Consequently, the energy issue of turbo decoder has become a bottleneck constraint that should be seriously concerned. To address this deficiency of the conventional turbo decoder, researchers have proposed different decoding architectures. These techniques include stopping the iteration under certain criteria [8], replacing the memory accessing with reverse calculation [9], and recently reducing the memory capacity of state metrics cache (SMC). Among these techniques, the memory-reduced decoding scheme decrease the overall power dissipation by a larger margin. Moreover, since a lower memory capacity is required, this technique is effective for the design of turbo decoder with smaller chip area. 
According to this strategy, the traceback decoding scheme stores different metrics and sign bits in the SMC; the SMC size is 20% reduced [10]. In [11], the Walsh-Hadamard transform is introduced to represent the forward state metrics with a smaller word width of the SMC memory. Cost of this simplification is the increased dummy decoding complexity. In order to further reduce the SMC capacity and maintain a low dummy decoding complexity, this paper proposes to insert two modules in the turbo decoding architecture: in the compression module, the forward state metrics are iteratively compressed to be the metrics with smaller values. In the decompression module, the compressed metrics are used to estimate the forward state metrics. Furthermore, only simple operations such as addition, shifting, and compare are applied in the compression and decompression modules. Theoretically, there are errors in this proposed technique. But simulation results still show that the introduced errors have little impact on the decoding performance, as compared with the optimal decoding algorithm. The rest of this paper is organized as follows. Section 2 gives a brief introduction of the MAP decoding algorithm and the derived variants. Section 3 addresses the proposed technique in detail, which include the compression and the decompression modules. In Section 4, the introduced dummy decoding complexity, the SMC capacity, and the BER performance of the proposed technique are discussed with clear analysis. At last, this paper is concluded in Section 5. Turbo decoding algorithm To simplify the decoding complexity, the MAP algorithm in logarithmic domain (Log-MAP) and its derivatives are widely used [12]. For the single binary convolutional turbo code that was defined in the LTE-Advanced standard [1], by assuming the encoded sequence is transmitted through an additive white Gaussian noise (AWGN) channel, the Log-MAP decoding algorithm is shown by Eq. (1). 
$$ \tilde{\gamma }_{k}^{\left(z \right)}\left({{s}_{{{j}_{1}},k-1}},{{s}_{{{j}_{2}},k}} \right)=\frac{{{L}_{c}}}{2}\left(x_{k}^{s}y_{k}^{s}+x_{k}^{p}y_{k}^{p} \right)+\Lambda_{apr,k}^{\left(z \right)}\left({{u}_{k}} \right) $$ $$ {{\tilde{\alpha }}_{k}}\left({{s}_{{{j}_{2}},k}} \right)=\underset{{{s}_{{{j}_{1}},k-1}}}{{ma{{x}^{*}}}}\,\left[ {{{\tilde{\alpha }}}_{k-1}}\left({{s}_{{{j}_{1}},k-1}} \right)+\tilde{\gamma }_{k}^{\left(z \right)}\left({{s}_{{{j}_{1}},k-1}},{{s}_{{{j}_{2}},k}} \right) \right] $$ $$ {{\tilde{\beta }}_{k}}\left({{s}_{{{j}_{1}},k}} \right)=\underset{{{s}_{{{j}_{2}},k+1}}}{{ma{{x}^{*}}}}\,\left[ {{{\tilde{\beta }}}_{k+1}}\left({{s}_{{{j}_{2}},k+1}} \right)+\tilde{\gamma }_{k+1}^{\left(z \right)}\left({{s}_{{{j}_{1}},k}},{{s}_{{{j}_{2}},k+1}} \right) \right] $$ $$ {{} \begin{aligned} & \Lambda_{apo,k}^{\left(z \right)}\left({{u}_{k}} \right)= \\ & \ \ \ \underset{\left({{u}_{k}}=z \right)}{{ma{{x}^{*}}}}\,\left[ {{{\tilde{\alpha }}}_{k-1}}\left({{s}_{{{j}_{1}},k-1}} \right)+\tilde{\gamma }_{k}^{\left(z \right)}\left({{s}_{{{j}_{1}},k-1}},{{s}_{{{j}_{2}},k}} \right)+{{{\tilde{\beta }}}_{k}}\left({{s}_{{{j}_{2}},k}} \right) \right] \\ & -\underset{\left({{u}_{k}}=0 \right)}{{ma{{x}^{*}}}}\,\left[ {{{\tilde{\alpha }}}_{k-1}}\left({{s}_{{{j}_{1}},k-1}} \right)+\tilde{\gamma }_{k}^{\left(z \right)}\left({{s}_{{{j}_{1}},k-1}},{{s}_{{{j}_{2}},k}} \right)+{{{\tilde{\beta }}}_{k}}\left({{s}_{{{j}_{2}},k}} \right) \right] \\ \end{aligned}} $$ (1d) $$ \begin{aligned} & \Lambda_{ex,k}^{\left(z \right)}\left({{u}_{k}} \right)=\Lambda_{apo,k}^{\left(z \right)}\left({{u}_{k}} \right)-\Lambda_{apr,k}^{\left(z \right)}\left({{u}_{k}} \right)-\Lambda_{in,k}^{\left(z \right)}\left({{u}_{k}} \right) \\ & \left\{ \begin{array}{ll} & \Lambda_{in,k}^{\left(0 \right)}\left({{u}_{k}} \right)=0 \\ & \Lambda_{in,k}^{\left(1 \right)}\left({{u}_{k}} \right)={{L}_{c}}y_{k}^{s} \\ \end{array} \right. \\ \end{aligned} $$ (1e) In Eq. (1), z belongs to {0, 1}, L c =2/σ2 (σ2 is the noise variance of the AWGN channel), k is the decoding time slot, \(x_{k}^{s}\) and \(\ x_{k}^{p}\) are the transmitted codewords, \(y_{k}^{s}\) and \(\ y_{k}^{p}\) are the received codewords, where s and p denote the systematic and parity bits. j∈{0,⋯,7} is the index of the state metrics, sj,k is the jth state at the decoding time slot k, \(\tilde {\gamma }_{k}^{\left (z \right)}\) is the branch metric, \({{\tilde {\alpha }}_{k}}\) is the forward state metric, and \({{\tilde {\beta }}_{k}}\) is the backward state metric. For u k =z, \(\Lambda _{apr,k}^{\left (z \right)}\left ({{u}_{k}} \right)\), \(\Lambda _{apo,k}^{\left (z \right)}\left ({{u}_{k}} \right)\) and \(\Lambda _{ex,k}^{\left (z \right)}\left ({{u}_{k}} \right)\) are the a priori log-likelihood ratio (LLR), the a posteriori LLR and the extrinsic information, respectively. Note that the max∗ operator in Eq. (1) is defined and simplified as follows [13]: $$ {{} \begin{aligned} & max*\left({{x}_{1}},{{x}_{2}} \right)=\ln \left(\exp \left({{x}_{1}} \right)+\exp \left({{x}_{2}} \right) \right) \\ & \approx \min \left\{ {{x}_{1}},{{x}_{2}} \right\}+\max \left\{ {{x}_{1}}-{{x}_{2}},0.75\left({{x}_{1}}-{{x}_{2}} \right)+0.625 \right\} \\ \end{aligned}} $$ For a max∗ operator with more than two operands, Eq. (2) can be recursively applied. However, this recursion processing is not necessary in practical. By using Eq. (3), the decoding complexity can be significantly reduced, which is shown as follows [13]. 
$$ {{} \begin{aligned} & {{\max }^{*}}\left({{x}_{1}},{{x}_{2}},\cdots,{{x}_{n}} \right)\approx {{\max }^{*}}\left({{y}_{1}},{{y}_{2}} \right) \\ & \approx \min \left\{ {{y}_{1}},{{y}_{2}} \right\}+\max \left\{ {{y}_{1}}-{{y}_{2}},0.75\left({{y}_{1}}-{{y}_{2}} \right)+0.625 \right\} \\ \end{aligned}} $$ where y1 and y2 are the maximum two variables among {x1,x2,⋯,x n }. In this research, Eq. (3) is adopted by Eq. (1) to calculate the forward state metrics \({{\tilde {\alpha }}_{k}}\), the backward state metrics \({{\tilde {\beta }}_{k}}\), and the a posteriori LLR \(\Lambda _{apo,k}^{\left (z \right)}\left ({{u}_{k}} \right)\). Method of proposed compression/decompression technique Compression of the state metrics In the hardware implementation of turbo decoder, the state metrics are stored in the last in and first out (LIFO) SMC. Existing researches have shown that the (10,3) quantization scheme is suitable for getting satisfactory BER performance (10 is the total bits, 3 is the fractional bits) [9, 10]. To reduce the SMC capacity, we propose to compress the state metrics and to employ a (5,3) quantization scheme in this research. Seen from Eq. (1b), for each decoding time slot k, there are eight forward state metrics \({{\tilde {\alpha }}_{k}}\left ({{s}_{{{j}_{2}},k}} \right),{{j}_{2}}\in \left \{ 0,\cdots,7 \right \}\). To facilitate the compression of these metrics, Eq. (4) is used for normalization. Since the decoding algorithm is performed in the logarithmic domain, when the same value is subtracted from the eight forward state metrics at time slot k, value of the a posteriori LLR \(\Lambda _{apo,k+1}^{\left (z \right)}\left ({{u}_{k+1}} \right)\) is not affected by replacing \(\phantom {\dot {i}\!}{{\tilde {\alpha }}_{k}}\left ({{s}_{{{j}_{2}},k}} \right)\) with \({{{\alpha }'}_{k}}\left ({{s}_{{{j}_{2}},k}} \right)\phantom {\dot {i}\!}\). $$ {{{\alpha }'}_{k}}\left({{s}_{{{j}_{2}},k}} \right)={{\tilde{\alpha }}_{k}}\left({{s}_{{{j}_{2}},k}} \right)-{{\tilde{\alpha }}_{k}}\left({{s}_{0,k}} \right) $$ Subsequently, \({{{\alpha }'}_{k}}\left ({{s}_{{{j}_{2}},k}} \right),{{j}_{2}}\in \left \{ 1,\cdots,7 \right \}\phantom {\dot {i}\!}\) are recursively compressed by using Eq. (5) and noted that the value of α′ k (s0,k) is zero as implied by Eq. (4). $$ {{{{\alpha }'}}_{k}}\left({{s}_{{{j}_{2}},k}} \right)=\frac{{{{{\alpha }'}}_{k}}\left({{s}_{{{j}_{2}},k}} \right)-{{{{\alpha }'}}_{k}}\left({{s}_{{{j}_{2}}-1,k}} \right)}{4} $$ In Eq. (5), 1/4 is the compression coefficient, and this division operation can be realized by using one 2-bits right shifting in hardware implementation. Considering that the values of \({{{\alpha }'}_{k}}\left ({{s}_{{{j}_{2}},k}} \right),{{j}_{2}}\in \left \{ 1,\cdots,7 \right \}\phantom {\dot {i}\!}\) may be positive or negative, when the (5,3) quantization scheme is adopted, the most significant bit represents the sign bit, while the rest bits represent the absolute value of the compressed forward state metrics. However, when \(\phantom {\dot {i}\!}\left | {{{{\alpha }'}}_{k}}\left ({{s}_{{{j}_{2}},k}} \right) \right |\) is larger than 1.875, the (5,3) scheme is not sufficient to quantize the compressed metrics. 
Therefore, a compare unit is employed to decide whether the next round of iterative compression should be performed: (i) if \(\max \left (\left | {{{{\alpha }'}}_{k}}\left ({{s}_{{{j}_{2}},k}} \right) \right | \right)>1.875\phantom {\dot {i}\!}\), \(\phantom {\dot {i}\!}{{{\alpha }'}_{k}}\left ({{s}_{{{j}_{2}},k}} \right)\) are feedback to the compression module, where Eq. (4) is applied for the next round of compression; (ii) if \(\phantom {\dot {i}\!}\max \left (\left | {{{{\alpha }'}}_{k}}\left ({{s}_{{{j}_{2}},k}} \right) \right | \right)\le 1.875, {{{\alpha }'}_{k}}\left ({{s}_{{{j}_{2}},k}} \right)\) are output and then are stored in the LIFO SMC. Since \(\phantom {\dot {i}\!}{{{\alpha }'}_{k}}\left ({{s}_{{{j}_{2}},k}} \right)\) are 10 bits quantized, and 7 bits are assigned for the integer part, at most 4 times of iterative compression is enough to guarantee \(\max \left (\left | {{{{\alpha }'}}_{k}}\left ({{s}_{{{j}_{2}},k}} \right) \right | \right)\) is no more than 1.875. So, the number of iterative compression times I k is an important parameter for the decompression, where 00, 01, 10, and 11 in binary denote the number of iterative compression times of 1, 2, 3, and 4 in decimal, respectively. As a result, when the compression procedure is finished, the number of iterative compression times and the compressed state metrics will be stored in the SMC. Furthermore, since α′ k (s0,k) equals to zero for each decoding time slot k, it is not necessary to store this metric in the SMC. For clear illustration, the word structure of the SMC is shown by Fig. 1 as below. Word structure of the state metrics cache Decompression of the state metrics In the backward direction, I k and \(\phantom {\dot {i}\!}{{{\alpha }'}_{k}}\left ({{s}_{{{j}_{2}},k}} \right)\) are read out from the LIFO SMC to estimate their original values, which is performed in the decompression module. Noted that α′ k (s0,k)=0, we propose to decompress \({{{\alpha }'}_{k}}\left ({{s}_{{{j}_{2}},k}} \right),\ {{j}_{2}}\in \left \{7,\cdots,1\right \}\phantom {\dot {i}\!}\) by using Eq. (6). $$ {{{{\alpha }'}}_{k}}\left({{s}_{{{j}_{2}},k}} \right)=4{{{{\alpha }'}}_{k}}\left({{s}_{{{j}_{2}},k}} \right)+{{{{\alpha }'}}_{k}}\left({{s}_{{{j}_{2}}-1,k}} \right),{{j}_{2}}\in \left\{7,\cdots,1 \right\} $$ Equation (6) is the inverse calculation of Eq. (5), and I k is used to decide how many times Eq. (6) should be recursively performed. For example, if I k =10 in binary, Eq. (6) is 3 times recursively performed, and then the decompressed state metrics are output to the a posteriori LLR calculation module. It should be noted that, \(\phantom {\dot {i}\!}{{{\alpha }'}_{k}}\left ({{s}_{{{j}_{2}},k}} \right)\) in Eq. (4) are 10 bits quantized, when they are compressed by using Eq. (5), they will be 5 bits stored in the SMC. Considering the finite word length effect, when these metrics are used for decompression, errors will be introduced during the decompression procedure, which have negative effect on the decoding performance. As it can be seen from the simulation results in Fig. 6, the BER is slightly lost as compared with the Log-MAP algorithm. State metrics compression based decoding architecture Based on the above described compression and decompression processes, two modules are inserted into the conventional decoding architecture. Assuming N denotes the decoding window length, the proposed decoding architecture is presented in Fig. 2, while Fig. 3 is the corresponded timing chart. 
State metrics compressed turbo decoding architecture Timing chart of the proposed decoding architecture As shown in Fig. 2, in the forward direction, \(\tilde {\gamma }_{k}^{(z)}\) are calculated in the branch metrics unit (BMU), and then are input to the forward recursion module, where \({{\tilde {\alpha }}_{k}}\left ({{s}_{{{j}_{2}},k}} \right)\) are recursively calculated. Instead of been stored in the LIFO SMC, \({{\tilde {\alpha }}_{k}}\left ({{s}_{{{j}_{2}},k}} \right)\) are input to the compression module. In this module, the output control unit (OCU) is applied to enable the compression, and the compare unit (CU) provides the trigger signal. At first, the buffer is initialized as \({{{\alpha }'}_{k}}\left ({{s}_{{{j}_{2}},k}} \right)\phantom {\dot {i}\!}\), one adder and one 2-bits right shifting unit (RSU) are used to perform the compression. The compressed metrics are feedback to the adder, the buffer and the compare unit (CU). If \(\phantom {\dot {i}\!}\max \left (\left | {{{{\alpha }'}}_{k}}\left ({{s}_{{{j}_{2}},k}} \right) \right | \right)>1.875\) is true, the OCU is triggered to enable the next round of compression, while the addition counter unit (ACU) increases I k by 1 in decimal. Otherwise, the compression procedure is completed. Subsequently, the metrics in the buffer and the counting result in the ACU will be stored in the LIFO SMC. For the decompression module in the backward direction, I k is input to the subtraction counter unit (SCU), by which the OCU is triggered to enable the decompression. Note that a delay unit (DU) is applied to adjust the time slot, and one 2-bits left shifting unit (LSU) is used to realize the multiply operation. As presented in Fig. 2, two modules are embedded in the conventional turbo decoder architecture. To demonstrate the effectiveness of this technique, a set of MATLAB scripts is developed for verification, as the improved decoding algorithm detailed in Section 2 is used to construct the state metrics compressed decoding architecture. Build on this software platform, the introduced dummy decoding complexity, the SMC capacity, and the BER performance are discussed in this section. Dummy decoding complexity For the compression module, Eq. (4) shows seven addition operations are performed before the compression. When the OCU is enabled, one adder and one 2-bits RSU are employed to calculate \({{{\alpha }'}_{k}}\left ({{s}_{{{j}_{2}},k}} \right),\ {{j}_{2}}\in \left \{ 1,\cdots,7 \right \}\phantom {\dot {i}\!}\) recursively. Subsequently, seven compare operations are performed in the CU to decide whether the next round of compression should be enabled, and the ACU should increase I k by 1 or not. Similarly, in the decompression module, the SCU decides how many times the decompression procedure should be performed. By Eq. (6) and Fig. 2, the I k in OCU is used to enable the decompression module, in which one DU, one 2-bits LSU and one addition operation are performed to estimate the state metrics for each round of decompression. Therefore, I k is the key parameter to analyze the dummy decoding complexity of the proposed technique. In Section 3, the described compression procedure shows I k is a variable that depend on if \(\max \left (\left | {{{{\alpha }'}}_{k}}\left ({{s}_{{{j}_{2}},k}} \right) \right | \right)>1.875\phantom {\dot {i}\!}\) is true, which means the difference between \({{{\alpha }'}_{k}}\left ({{s}_{{{j}_{2}},k}} \right),\ {{j}_{2}}\in \left \{ 1,\cdots,7 \right \}\phantom {\dot {i}\!}\) is the most important factor. 
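To make this dependence concrete, the sketch below models the compression of Eqs. (4) and (5) and the decompression of Eq. (6) in plain floating-point Python. The variable names, the use of floating point instead of the (10,3)/(5,3) fixed-point words, and the stopping logic are illustrative assumptions made here, not the authors' MATLAB implementation:

```python
# Illustrative floating-point model of the iterative compression (Eqs. (4)-(5))
# and decompression (Eq. (6)) of the forward state metrics.  Names and the use
# of floats instead of fixed-point words are assumptions of this sketch.

THRESHOLD = 1.875     # largest magnitude representable by the (5,3) scheme
MAX_ROUNDS = 4        # at most four rounds are needed (7 integer bits)

def compress(alpha):
    """alpha: the 8 forward state metrics of one trellis step.
    Returns (I_k, the 7 compressed metrics of states 1..7)."""
    a = [x - alpha[0] for x in alpha]          # Eq. (4): normalise, a[0] becomes 0
    rounds = 0
    while rounds < MAX_ROUNDS:
        for j in range(1, 8):                  # Eq. (5): in-place difference,
            a[j] = (a[j] - a[j - 1]) / 4.0     # the /4 is a 2-bit right shift
        rounds += 1
        if max(abs(x) for x in a[1:]) <= THRESHOLD:
            break
    return rounds, a[1:]                       # a[0] is always 0 and is not stored

def decompress(i_k, stored):
    """Inverse of compress(): rebuilds the normalised metrics from the SMC word."""
    a = [0.0] + list(stored)
    for _ in range(i_k):
        for j in range(7, 0, -1):              # Eq. (6): in-place multiply by 4
            a[j] = 4.0 * a[j] + a[j - 1]       # (2-bit left shift) plus addition
    return a

# Round-trip example with arbitrary metric values.
alpha = [12.0, 3.5, -7.25, 0.125, 9.0, -2.5, 4.75, 1.0]
i_k, stored = compress(alpha)
restored = decompress(i_k, stored)
print(i_k)                                               # 2 rounds for this input
print(all(abs(r - (x - alpha[0])) < 1e-9 for r, x in zip(restored, alpha)))  # True
```

In this toy example the spread of the input metrics forces a second round of compression (I k = 2); with fixed-point arithmetic the stored values would additionally be rounded to the (5,3) grid, which is the source of the small errors mentioned above.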
As a result, I_k may take different values at the same signal-to-noise ratio (SNR) but different iteration numbers, or at the same iteration number but different SNRs. To quantify this, we define \(D_{I_k}\) as the number of occurrences of I_k = 1, 2, 3, and 4, respectively. The corresponding percentages \(E_{I_k}\) are calculated as:
$$ E_{I_k} = \frac{D_{I_k}}{\sum_{I_k=1}^{4} D_{I_k}} \times 100 $$
Assuming a frame length of 1440, the statistical distribution of \(E_{I_k}\) at the same SNR but different iteration numbers is presented in Fig. 4. Although SNR = 0.9 dB is a special case, the results illustrated in Fig. 4 still represent a general tendency: as the iterations proceed, more rounds of compression/decompression are performed.
Fig. 4 Statistical distribution of \(E_{I_k}\) with SNR = 0.9 dB for different iteration numbers
Similarly, the statistical distribution of \(E_{I_k}\) after 8 iterations but at different SNRs is presented in Fig. 5. As the SNR increases, the differences among the forward state metrics \(\tilde{\alpha}_{k}(s_{j_{2},k}),\ j_{2} \in \{0,\cdots,7\}\), become larger, i.e., the probability that I_k takes a larger value increases accordingly.
Fig. 5 Statistical distribution of \(E_{I_k}\) with eight iterations for different SNRs
Fig. 6 BER performance comparison
SMC capacity
At the cost of the dummy decoding complexity incurred in the compression and decompression modules, the compressed state metrics can be quantized to 5 bits. As presented in Fig. 1, one I_k (represented with 2 bits) and seven compressed metrics must be stored in the SMC for each time slot (1×2 + 7×5 = 37 bits). Compared with the conventional decoding architecture, where the quantization scheme is (10,3) and eight state metrics are stored in the SMC (8×10 = 80 bits), the proposed technique reduces the SMC capacity by 53.75%. The SMC organization and capacity comparison are summarized in Table 1.
Table 1 Comparison of SMC organization and capacity
BER simulation
To investigate the decoding performance of the proposed technique, the simulation environment is set as follows: for the turbo code defined in the LTE-Advanced standard [1], two frame lengths (800 and 1440 bits) are used for demonstration, the code rate is 1/3, and the encoded sequences are transmitted over an AWGN channel. Three decoding algorithms are compared: the optimal Log-MAP algorithm, the state metrics compressed decoding technique (implementing the near-optimal decoding algorithm detailed in Section 2), and the maximum Log-MAP (Max-Log-MAP) algorithm. For each frame, the decoding algorithm is performed iteratively 8 times. Moreover, the quantization schemes in Table 2 are adopted [9–11], and δ is the scaling factor [14].
Table 2 Quantization schemes adopted in the simulation
As seen from Fig. 6, the Log-MAP algorithm achieves the best BER performance, about 0.2 dB better than the Max-Log-MAP algorithm, but it only slightly outperforms the state metrics compressed decoding technique.
Thanks to the simplified max∗ operator in Section 2, although the inserted compression and decompression modules introduce some errors, the resulting BER loss is limited: about 0.05 dB at a BER of \(10^{-2}\), and it becomes smaller as the SNR increases.
Conclusions
In energy-constrained applications where turbo codes are adopted for data transmission, a memory-reduced turbo decoder is an effective way to lower the overall power dissipation. In this paper, a state metrics compressed turbo decoding technique has been proposed. It has been shown that, by inserting one compression module and one decompression module into the conventional decoding architecture, a quantization scheme with a smaller SMC word length can be applied, and the SMC capacity is reduced by 53.75%. The dummy decoding complexity in the compression and decompression modules has been analyzed in detail. In most cases, whether at different iteration numbers with the same SNR or at different SNRs with the same iteration number, only one or two rounds of compression/decompression are performed. For the turbo code defined in the LTE-Advanced standard, the proposed decoding technique clearly outperforms the Max-Log-MAP algorithm in BER performance and is only slightly degraded compared with the optimal Log-MAP algorithm.
Abbreviations
ACU: Addition counter unit
AWGN: Additive white Gaussian noise
BER: Bit error rate
BMU: Branch metrics unit
CU: Compare unit
DU: Delay unit
Log-MAP: The MAP algorithm in the logarithmic domain
LLR: Log-likelihood ratio
LIFO: Last in, first out
LSU: Left shifting unit
MAP: Maximum a posteriori
Max-Log-MAP: Maximum Log-MAP
OCU: Output control unit
RSU: Right shifting unit
SMC: State metrics cache
SCU: Subtraction counter unit
SNR: Signal-to-noise ratio
WSNs: Wireless sensor networks
References
3rd Generation Partnership Project, Multiplexing and channel coding, TS 36.212 version 11.1.0 Release 11, 2013. [Online]. Available: http://www.etsi.org/deliver/etsi_ts/136200_136299/136212/11.01.00_60/ts_136212v110100p.pdf.
Digital Video Broadcasting (DVB), Second Generation DVB Interactive Satellite System (DVB-RCS2); Part 2: Lower Layers for Satellite Standard, ETSI EN 301 545-2 V1.2.1, 2014. [Online]. Available: http://www.etsi.org/deliver/etsi_en/301500_301599/30154502/01.02.01_60/en_30154502v010201p.pdf.
FB Matthew, L Liang, GM Robert, et al., 20 years of turbo coding and energy-aware design guidelines for energy-constrained wireless applications. IEEE Communication Surveys & Tutorials 18(1), 8–28 (2016).
J Haghighat, H Behroozi, DV Plant, in IEEE 19th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). Joint decoding and data fusion in wireless sensor networks using turbo codes (IEEE, Cannes, 2008), pp. 1–5. https://ieeexplore.ieee.org/document/4699729/.
H Cam, in IEEE International Conference on Communications (ICC). Multiple-input turbo code for joint data aggregation, source and channel coding in wireless sensor networks (IEEE, Istanbul, 2006), pp. 3530–3535. https://ieeexplore.ieee.org/document/4025020/.
YH Yitbarek, K Yu, J Åkerberg, M Gidlund, M Björkman, in 2014 IEEE International Conference on Industrial Technology (ICIT). Implementation and evaluation of error control schemes in industrial wireless sensor networks (IEEE, Busan, 2014), pp. 730–735. https://ieeexplore.ieee.org/document/6895022/.
L Li, GM Robert, BM Al-Hashimi, L Hanzo, A low-complexity turbo decoder architecture for energy-efficient wireless sensor networks. IEEE Trans. VLSI Syst. 21(1), 14–22 (2013).
CH Lin, CC Wei, Efficient window-based stopping technique for double-binary turbo decoding. IEEE Commun. Lett. 17(1), 169–172 (2013).
DS Lee, IC Park, Low-power Log-MAP decoding based on reduced metric memory access. IEEE Trans. Circ. Syst. I: Regular Papers 53(6), 1244–1253 (2006).
CH Lin, CY Chen, AY Wu, Low-power memory-reduced traceback MAP decoding for double-binary convolutional turbo decoder. IEEE Trans. Circ. Syst. I: Regular Papers 56(5), 1005–1016 (2009).
M Martina, G Masera, State metric compression techniques for turbo decoder architectures. IEEE Trans. Circ. Syst. 58(5), 1119–1128 (2011).
M Martina, S Papaharalabos, PT Mathiopoulos, G Masera, Simplified Log-MAP algorithm for very low-complexity turbo decoder hardware architectures. IEEE Trans. Instrum. Meas. 63(3), 531–537 (2014).
M Zhan, J Wu, ZZ Zhang, et al., Low-complexity error correction for ISO/IEC/IEEE 21451-5 sensor and actuator networks. IEEE Sensors J. 15(5), 2622–2630 (2015).
J Vogt, A Finger, Improving the Max-Log-MAP turbo decoder. Electron. Lett. 36(23), 1937–1938 (2000).
The authors thank the researchers from the School of Electronics and Information Engineering, Southwest University, for their helpful insights on the improvement of this manuscript. This work was supported in part by the National Natural Science Foundation of China under Grant 61671390, in part by the China Postdoctoral Science Foundation under Grant 2015M570776, and in part by the Fundamental Research Fund for Central Universities (Southwest University) under Grant SWU113044.
Ming Zhan: The Key Laboratory of Networks and Cloud Computing Security of University of Chongqing, College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
Zhibo Pang: ABB Corporate Research, Forskargränd 7, Västerås SE-721 78, Sweden
Ming Xiao: School of Electrical Engineering, KTH Royal Institute of Technology, Stockholm SE-100 44, Sweden
Hong Wen: National Key Laboratory of Science and Technology on Communications, University of Electronic Science and Technology of China, Chengdu 611731, China
MZ proposed the state metrics compression/decompression technique for the turbo decoder architecture design and carried out the analysis demonstrating the effectiveness of this idea, including the comparison of dummy computational complexity, SMC capacity, and BER performance. From the perspective of practical engineering, ZP gave direction on the wireless sensor networks in which the power-efficient turbo decoder is to be applied. MX provided instruction on how to perform the compression/decompression procedures, together with the computational complexity analysis. HW recommended the near-optimal MAP decoding algorithm and the quantization scheme for simulation, which form the basis of this research work. All authors read and approved the final manuscript.
Correspondence to Zhibo Pang.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Zhan, M., Pang, Z., Xiao, M. et al. A state metrics compressed decoding technique for energy-efficient turbo decoder. J. Wireless Com. Network 2018, 152 (2018). doi:10.1186/s13638-018-1153-y
Keywords: Turbo decoder; State metrics
Applied Network Science | Research | Open Access | Published: 24 June 2019
When can overambitious seeding cost you?
Shankar Iyer (ORCID: orcid.org/0000-0002-0007-7097) & Lada A. Adamic
Applied Network Science, volume 4, Article number: 38 (2019)
In the classic "influence-maximization" (IM) problem, people influence one another to adopt a product and the goal is to identify people to "seed" with the product so as to maximize long-term adoption. Many influence-maximization models suggest that, if the number of people who can be seeded is unconstrained, then it is optimal to seed everyone at the start of the IM process. In a recent paper, we argued that this is not necessarily the case for social products that people use to communicate with their friends (Iyer and Adamic, The costs of overambitious seeding of social products. In: International Workshop on Complex Networks and Their Applications, pp. 273–286, 2018). Through simulations of a model in which people repeatedly use such a product and update their rate of subsequent usage depending upon their satisfaction, we showed that overambitious seeding can result in people adopting in suboptimal contexts, having bad experiences, and abandoning the product before more favorable contexts for adoption arise. Here, we extend that earlier work by showing that the costs of overambitious seeding also appear in more traditional threshold models of collective behavior, once the possibility of permanent abandonment of the product is introduced. We further demonstrate that these costs can be mitigated by using conservative seeding approaches besides those that we explored in the earlier paper. Synthesizing these results with other recent work in this area, we identify general principles for when overambitious seeding can be of concern in the deployment of social products. The study of how new ideas and products spread through networks dates back decades, to early studies from the 1950s and 1960s of the adoption of health innovations (Coleman et al. 1957; 1959; 1966) and to the development of general models of product adoption by Rogers and Bass (Rogers 1962; Bass 1969). An important milestone was the formulation of "influence maximization" (IM) as an algorithmic problem by Domingos and Richardson (2001). In IM, a product developer typically has limited resources (e.g., an advertising budget) with which to give or market a product to potential adopters. The developer assumes that adoption of the product spreads through the social network of potential adopters through some peer-influence process. Then, the challenge is to decide which people to "seed" with the product in order to maximize long-term adoption. Since its formulation by Domingos and Richardson, the influence maximization problem has found applications across diverse domains, from traditional applications in marketing (Hinz et al. 2011), to the spreading of health information (Yadav et al. 2018; Wilder et al. 2018), to the diffusion of microfinance programs in villages (Banerjee et al. 2013). Influence maximization has been theoretically studied under a variety of peer-influence models. One classic IM model is the independent-cascade model, in which friends of new adopters also adopt with some probability (Goldenberg et al. 2001). So-called linear threshold models comprise another class, in which people will adopt if sufficiently many of their friends adopt (Granovetter 1978; Schelling 2006).
Depending upon the specific threshold model, an individual's adoption decision can depend upon a minimum number of friends adopting or upon a minimum percentage of friends adopting (Watts 2002). Soon after the formulation of IM by Domingos and Richardson, Kempe, Kleinberg, and Tardos demonstrated both that IM is NP-hard under the usual independent cascade and threshold models and that there are nevertheless simple greedy algorithms for selecting the seeds with strong performance guarantees (Kempe et al. 2003). Their work has inspired a large literature around developing even better heuristic algorithms for IM. A recent review of state-of-the-art algorithms can be found in Li et al. (2018). In this paper, we revisit a question that we previously explored in Iyer and Adamic (2018): if there is no budgetary constraint on seeding, is it optimal to seed everyone at the start of the IM process? Despite the general hardness of IM, the traditional independent cascade and threshold models all agree that the answer to this question is "yes." Does that property of these simplified models provide reasonable guidance for real product-deployment scenarios? There are several reasons why it may not, including costs associated with people rejecting the product, downstream word-of-mouth effects, and so-called "non-conformism" effects, where people are inclined to adopt less popular products. We review prior work on each of these pathways to "overexposure" in the "Related work" section below. Our main focus here, however, is on a distinct pathway to overexposure, which we demonstrated in a recent paper (Iyer and Adamic 2018): when the product under consideration is one that allows people to communicate with their friends (i.e., a "social" product), if people adopt too early, then they may begin using the product in contexts where insufficiently many of their friends are using it. This can lead to abandonment of the product prior to the emergence of a more favorable context for adoption. In our earlier paper, we showed that a more conservative seeding strategy can often help avoid these premature abandonments of the product and lead to greater long-term usage. Crucially, we showed that this remains true even in the absence of a budgetary constraint on initial seeding: even if a product developer can simply hand the product to everybody, it may be preferable not to do so. The present paper extends our previous work in various ways. In Iyer and Adamic (2018), we demonstrated the "costs" of overambitious seeding in a model of repeated product usage, where people gain access to a social product and then either use the product or abstain in a sequence of time steps. Here, we show that these "costs" also appear in more traditional threshold models, once the possibility of permanent abandonment of the product (also referred to as "churn") is introduced. Furthermore, through simulations on networks with a clear community structure, our earlier work showed that seeding approaches that focus on one of the clusters can often outperform approaches that seed the entire network. In this paper, we show that there are conservative seeding approaches that do not rely on clear-cut community structure, but which still lead to greater longer-term adoption than universal seeding. After demonstrating the robustness of our previous results in these two different ways, we then attempt to abstract away general principles for when product developers ought to factor these considerations into their product deployment decisions. 
The rest of this paper is structured as follows. The "Related work" section places our work in the context of prior research on overexposure in IM. With this context in place, in the "Models of social-product usage" section, we introduce the repeated-usage and threshold models that we study in this paper. Next, in the "Toy examples" section, we study each of these models on certain, very special network structures, developing intuitions for why overambitious seeding can be problematic in both models. In practice, of course, we will want to see how the models behave on more realistic network structures, and to that end, in the "Networks used in simulations" section we introduce the real-world network structures that we use in our numerical simulations. The "Simulation results: cluster-based seeding" and the "Simulation results: k-core seeding" sections then report our simulation results, showing how two different conservative seeding approaches can outperform universal seeding. In the "Discussion: When is overambitious seeding costly?" section, we extract some general principles for when overambitious seeding can be costly before concluding in the "Conclusion" section by reviewing our findings and pointing out opportunities for extensions. In this section, we review research on overexposure and overambitious seeding in influence maximization. Our goal here will be to examine the implications of various previously explored models for the fundamental question articulated above: in the absence of a budgetary constraint on the seeding process, is it optimal to seed everyone immediately? This survey of prior research helps distill the reasons why it is interesting that, in each of the models studied in this paper, the answer to this question is often "no." In Kempe et al. (2003), the authors showed that the classic independent cascade and linear threshold models obey a monotonicity property, where a subset of a cohort of initial adopters cannot lead to higher long-term adoption than the entire cohort. Furthermore, they generalized these models to a larger class of so-called "triggering" models, in which each subset of a person i's neighbors is associated with a probability of i adopting, and showed that triggering models also exhibit monotonicity (Kempe et al. 2003). If this monotonicity property holds, and if each person accepts the seed independently, then the optimal approach in the unbudgeted case clearly involves seeding everybody: an unseeded individual's probability of adoption in the seed round is 0, and by monotonicity, it would be preferable if that probability was non-zero. Models of overexposure generally try to show that there are plausible assumptions about real-world IM settings that can violate monotonicity. One path to overexposure involves introducing some type of negative payout for rejection of the product. A recent example of this can be found in Abebe et al. (2018), in which the authors study a diffusion process where there are positive payouts for adopters and negative payouts for rejecters. If someone adopts, that person will refer the product to his or her friends, which could result in further adoptions or rejections. In this model, there can be circumstances where it is detrimental to seed an individual i, because the costs of the product being exposed to i's friends may outweigh the benefits of i's adoption. 
In the unbudgeted case, the optimal strategy still does not generally involve seeding everyone, because that would expose the product to many rejecters, leading to potentially avoidable negative payouts (Abebe et al. 2018). The results of Abebe et al. echo empirical findings such as the so-called Groupon effect, where exposure to a larger audience can have unintended negative effects (e.g., upon Yelp ratings) (Byers et al. 2012). Other authors have studied overexposure effects arising from more direct negative externalities of adoption, such as "negative word-of-mouth" (Kiesling et al. 2012). Empirical research actually suggests that dissatisfied adopters spread their perspective more often than satisfied adopters, sharing their negative sentiment with up to ten friends (Anderson 1998). Cui et al. recently reported results for a model where satisfied adopters can enhance the probability of subsequent adoption by their friends, while dissatisfied adopters can reduce that probability (Cui et al. 2018). In such a setting, it may be preferable to seed people who are likely to spread positive word-of-mouth and avoid seeding others. "Non-conformist" or "hipster" effects comprise yet another class of negative externalities. "Hipsters" in these models refrain from adopting products that are too popular and/or abandon products if they become too popular (Alkemade and Castaldi 2005). Although not strictly framed as an IM study, the recent work of Juul and Porter shows how the presence of hipsters can have dramatic effects upon the long-term adoption of two competing products. Indeed, in some cases, the product that begins the process with no adopters at all ends up accounting for the majority in the steady state (Juul and Porter 2019). Kempe, Kleinberg, and Tardos referred to models in which adopters can revert to the non-adopting state as "non-progressive" models, to contrast with "progressive" models where people can only transition into the adopting state. If we want to consider the product experiences of people after they make their initial adoption decision, then some form of non-progressive model is appealing. Kempe et al. showed that the simplest non-progressive extensions of monotonic triggering models (e.g., where people abandon the product if enough of their friends do) inherit the monotonicity property. This is because these models can be mapped to their progressive counterparts on a temporal network in which people are represented by a node in each temporal layer, and there are links between each person i at time t and their friends at time t−1Footnote 1 (Kempe et al. 2003). This argument for monotonicity does not work if the original progressive model is itself non-monotonic, or if people abandon the product permanently after a fixed number of adoptions. In Iyer and Adamic (2018), we previously argued that it can be detrimental to seed everyone at the start of an unbudgeted IM process in a certain type of non-progressive model, even in the absence of the mechanisms studied in the previous literature surveyed above. Our model was motivated by "social products" that are used by friends to communicate with one another, and it considered the product experiences of people after adoption instead of focusing exclusively upon the binary adoption / rejection process. 
A key point of our earlier paper was that taking into account these product experiences naturally leads to the emergence of costs of overambitious seeding, even in the absence of negative payouts for rejection, negative word-of-mouth, and non-conformism effects. However, we made this point in a model that is rather structurally different from classic models of IM (Iyer and Adamic 2018). Here, we show that the same mechanisms can lead to costs of overambitious seeding in traditional threshold models, once the possibility of permanent churn is included.
Models of social-product usage
In this section, we introduce two models of how people adopt, use, and abandon social products. First, we review the model of repeated product usage that was proposed in Iyer and Adamic (2018), which considers the gradual impact of individual product experiences upon people's subsequent behavior. Then, we propose a modification of the traditional threshold model as a "coarser-grained" model of long-term adoption decisions. Both models are intended to describe the choices of people embedded in an undirected social network. Each node i represents a person who can potentially use or adopt the product. Each edge ij represents a friendship tie between two people.
Repeated-Usage Model: Our repeated-usage model proceeds in a sequence of time steps, beginning with t=0. At any time t, a person i can either have access to the product or not. People can only use the product in time step t if they have received access by that time. If a person i has access, then i uses the product in that time step with probability pi(t) and abstains otherwise. At the time ti when i initially gets access, pi(ti) is initialized to a value p0. We associate a threshold si with each person. If i uses the product in time step t, then si is the number of friends of i who also need to use the product at time t for i to be satisfied. Then, i adjusts his or her probability of subsequent usage up or down as follows:
$$ p_{i}(t + 1) = \left\{\begin{array}{ll} p_{i}(t) + \delta & \text{if more than } s_{i} \text{ friends use in time step } t \\ p_{i}(t) & \text{if exactly } s_{i} \text{ friends use in time step } t \\ p_{i}(t) - \delta & \text{if fewer than } s_{i} \text{ friends use in time step } t \end{array}\right. $$
We allow pi(t) to grow to 1 or drop to 0. While pi(t)=1 is not necessarily a permanent state, pi(t)=0 is permanent, because it guarantees that the person will no longer have any product experiences, and consequently, will have no opportunities to increment their usage. In this model, in situations where we do not give access to everyone at time t=0, we need some protocol for implementing the gradual expansion of access. As in Iyer and Adamic (2018), we expand access to a new person when they have had at least two friends using the product in each of five consecutive time steps. This is one example of a more conservative seeding strategy than universal seeding at t=0. Other variants of this rule can certainly be considered and may even lead to better long-term outcomes, but this choice suffices to demonstrate our main results.
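To make these dynamics concrete, the following minimal Python sketch implements a single time step of the repeated-usage model, including the conservative access-expansion rule just described. All function and variable names are our own illustrative choices, not the simulation code used for the results reported below.

```python
import random

def repeated_usage_step(graph, p, s, access, streak, p0=0.05, delta=0.005):
    """One time step of the repeated-usage model.

    graph  : dict person -> set of friends (undirected ties)
    p      : dict of usage probabilities p_i(t)
    s      : dict of satisfaction thresholds s_i
    access : set of people who currently have access
    streak : dict counting consecutive steps with >= 2 active friends
    """
    active = {i for i in access if random.random() < p[i]}
    # Rate-of-usage adjustment, Eq. (1), clipped to [0, 1]; p_i = 0 is absorbing.
    for i in active:
        active_friends = len(graph[i] & active)
        if active_friends > s[i]:
            p[i] = min(1.0, p[i] + delta)
        elif active_friends < s[i]:
            p[i] = max(0.0, p[i] - delta)
    # Gradual access expansion: grant access once at least two friends have
    # been active in each of five consecutive time steps.
    for i in set(graph) - access:
        streak[i] = streak.get(i, 0) + 1 if len(graph[i] & active) >= 2 else 0
        if streak[i] >= 5:
            access.add(i)
            p[i] = p0
    return active
```

A full simulation then simply seeds an initial access set, initializes p_i = p0 for its members, and iterates this step (10000 times in the experiments reported later).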
Threshold Model with Churn: Various types of threshold models have been proposed in a number of contexts, from models of percolation in statistical physics (Adler 1991) to collective models of social behavior (Granovetter 1978; Watts 2002). When these models are used to study the decisions of people situated in a social network, the general idea is that people are able to take on one of two states, which we can refer to as "adopting" and "non-adopting." Certain people begin in the adopting state (e.g., through the outcome of a seeding process). Then, others may adopt if sufficiently many of their friends are in the adopting state. The adoption rule may be formulated in terms of the absolute number of friends, or alternatively, it may be formulated in terms of a percentage of friends. Generally, the adoption rule is iteratively applied until no more people would adopt. In "non-progressive" threshold models, people can also transition out of the adopting state. For example, as an outcome of the seeding process, some people can find themselves in a situation where insufficiently many of their friends are adopting. In these circumstances, they may transition back to the non-adopting state. This can, in turn, leave others in a situation where they have too few adopting friends, leading to more defections. These transitions into the non-adopting state will, in general, co-occur and compete with transitions into the adopting state over time. If people are willing to adopt the product an arbitrary number of times, then the non-progressive model can be mapped to a progressive model and is monotonic in the size of the original seed set (Kempe et al. 2003). However, if people permanently churn after a fixed number of adoptions, then the non-progressive model is not necessarily monotonic. The threshold model that we study here proceeds as follows:
Seeding Stage: At time t=0, certain people within a social network are offered the product, which they adopt with acceptance probability pa.
State Updates: At each subsequent time step t=1,2,3,…, people update their states in two successive waves, which continue until the process converges:
Adoption Round: People who are not currently adopting look at the states of their friends after the previous churn round (at time step t−1) and adopt if at least si of their friends are adopting.
Churn Round: People who are currently adopting look at the states of their friends after the previous adoption round (at time step t) and churn if fewer than si of their friends are adopting.
Constraints on State Changes: The state updates described above are constrained by the following two rules:
One-Time-Step Commitment: People who adopt in time step t's adoption round do not immediately churn in time step t's churn round. There is a rate limit to these state changes because we are modeling long-term changes in people's attitudes towards the product.
Single Adoption per Person: People give the product only one chance before churning permanently. A short code sketch of one update step of this model is given below.
Comparing the Two Models: In Iyer and Adamic (2018), we motivated the repeated-usage model through the following assumptions about social product usage:
Need for social support: A person's satisfaction with a product experience depends upon how many of their friends are using it.
Rate-of-usage adjustments: When people gain access to the product, they begin using it at a low rate p0 and then gradually ramp their rate of usage up or down depending upon whether they are satisfied with their experiences.
Possibility of permanent churn: If people have enough unsatisfying product experiences, they churn permanently and are unwilling to try the product again.
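As promised above, here is a minimal sketch (again in Python, with our own illustrative names) of the seeding stage and one adoption-plus-churn time step of the threshold model as we read its update schedule; in a simulation, the time step would simply be repeated until no state changes occur.

```python
import random

def seed(graph, seed_set, p_a):
    """Seeding stage: each offered person adopts independently with probability p_a."""
    return {i: ('adopting' if i in seed_set and random.random() < p_a else 'never')
            for i in graph}

def threshold_time_step(graph, s, state):
    """One time step: a synchronous adoption round, then a synchronous churn round."""
    adopting = {i for i in graph if state[i] == 'adopting'}
    # Adoption round: people who have never adopted join if at least s_i of
    # their friends were adopting after the previous churn round.
    newly = {i for i in graph
             if state[i] == 'never' and len(graph[i] & adopting) >= s[i]}
    for i in newly:
        state[i] = 'adopting'
    adopting |= newly
    # Churn round: current adopters churn if fewer than s_i friends are adopting
    # now; this step's new adopters are exempt (one-time-step commitment), and
    # 'churned' is absorbing (single adoption per person).
    for i in adopting - newly:
        if len(graph[i] & adopting) < s[i]:
            state[i] = 'churned'
    return state
```

Repeating threshold_time_step until the state stops changing yields the steady-state adopting fraction reported in the simulation sections below.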
Our threshold model also clearly satisfies the "need for social support" assumption and, like the repeated-usage model, encodes this property through the parameters si. Moreover, the threshold model satisfies the "possibility of permanent churn" assumption. The threshold model does not, however, incorporate gradual "rate-of-usage adjustments" but, rather, binary state changes between adoption and non-adoption. This is in keeping with its being a temporally coarse-grained model of adoption decisions. In Iyer and Adamic (2018), we also emphasized that the repeated-usage model excludes: a budgetary constraint on seeding, rejection of the seed, and negative word-of-mouth or non-conformism effects. Our threshold model also excludes budgetary constraints, negative word-of-mouth, and non-conformism effects. However, when pa<1, we do allow rejection of the seed. Since there are no negative externalities to adoption in our model, there can be no costs to overambitious seeding if the seeding process is universally successful. We will show, however, that costs naturally emerge if the success of the seeding process is stochastic. Still, there is no direct cost to someone rejecting the seed, so the path to overambitious seeding here is distinct from the one explored, for example, in Abebe et al. (2018). Comparing the roles of the parameters p0 and pa in the repeated-usage and threshold models respectively, we can refine our fundamental question for each context. In the repeated-usage model, by fixing a low p0, we pose the question: is it optimal to seed everyone at time t=0 if every seeded person adopts, but subsequently uses at a low rate? Meanwhile, by fixing a low pa in the threshold model, we pose the question: is it optimal to seed everyone at time t=0 if seeding succeeds only at a low rate? The simulation results of the "Simulation results: cluster-based seeding" and "Simulation results: k-core seeding" sections show that the answer to both of these questions is often "no."
Toy examples
Before proceeding to the simulation results, we dedicate this section to analytical investigation of our models on certain, very special network structures. These "toy" examples illustrate the mechanisms through which overambitious seeding can reduce long-term adoption. Then, the simulation results of subsequent sections show that these mechanisms are relevant in more general contexts.
Repeated-Usage Model: Suppose we run the repeated-usage model on a network with a very strong core-periphery structure (Borgatti and Everett 2000). In particular, consider a situation where the "core" consists of a complete N-graph (i.e., N people who are all friends with the N−1 others) and the "periphery" consists of N people, each of whom is friends with one person in the core. An example of such a network is shown in Fig. 1a.
Fig. 1 (a) An example of the type of network used in the toy-example calculation for the repeated-usage model, with a complete N-graph for the "core" and N people in the periphery. (b) An example of the type of network used in the toy-example calculation for the threshold model, with N people each in core, intermediate, and periphery layers. The dark blue line connecting the core indicates that it is a complete N-graph. (c) Bounds on asymptotic adoption fractions under two seeding strategies in the repeated-usage model. Computations are for a network of the type shown in panel a, but with N=50. (d) Average steady-state adoption fractions under two seeding strategies in the threshold model. Computations are for a network of the type shown in panel b, but with N=50.
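The two toy geometries in Fig. 1a-b are straightforward to generate programmatically. The sketch below uses networkx purely as an illustration (the paper does not specify an implementation), with our own node-labelling conventions: in the core-periphery network each peripheral person is simply paired with one core member, and in the three-layer network each outer person is tied to two uniformly chosen people one layer further in.

```python
import random
import networkx as nx

def core_periphery(N):
    """Fig. 1a: a complete core of N people plus N peripheral people,
    each attached to a single core member."""
    g = nx.complete_graph(N)            # core nodes 0 .. N-1
    for i in range(N):
        g.add_edge(i, N + i)            # peripheral node N+i hangs off core node i
    return g

def three_layer(N):
    """Fig. 1b: complete core, plus intermediate and peripheral layers of N
    people, each with two random friends in the layer just inside their own."""
    g = nx.complete_graph(N)            # core nodes 0 .. N-1
    for i in range(N):                  # intermediate nodes N .. 2N-1
        for j in random.sample(range(N), 2):
            g.add_edge(N + i, j)
    for i in range(N):                  # peripheral nodes 2N .. 3N-1
        for j in random.sample(range(N, 2 * N), 2):
            g.add_edge(2 * N + i, j)
    return g
```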
As usual in the repeated-usage model, rates of usage pi(t) are initialized to p0 when people receive access. However, for the present purposes, suppose that the update rule for pi(t) is a much simpler variant of the one proposed in Eq. (1):
$$ p_{i}(t + 1) = \left\{\begin{array}{ll} 0 & \text{if no friends use in time step } t \\ 1 & \text{if at least 1 friend uses in time step } t \end{array}\right. $$
The simplified update rule (2) has certain important corollaries:
Satisfaction is reciprocal: Because everyone only needs one active friend to be satisfied (i.e., to raise their rate of usage to 1), if a person i is satisfied with a product experience, then so are all of i's active friends.
The state where pi(t)=1 is permanent: This follows from item 1, because any i with pi(t)=1 has at least one friend j with pj(t)=1. This implies that all of i's subsequent product experiences will be satisfying.
Churn is only possible on the first product experience: This follows from item 2, because if a person i does not churn on the first product experience, then that person ends up in the state where pi(t)=1.
When coupled with the special network structure of Fig. 1a, there is another important implication: once one person in the core has had a satisfying experience, then everyone who is subsequently active in the core will be satisfied. We now consider the case where we only give access to the core at time t=0. The probability that at least two people are active in the core in the first time step is:
$$ 1 - (1-p_{0})^{N} - N p_{0}(1-p_{0})^{N-1} $$
If this occurs, then all of the active people will be satisfied with their experience and update their rates of usage to 1. Then, in the ensuing time steps, others in the core will try out the product, have satisfying experiences, and update their rates of usage to 1 as well. This process will take some time, since each person's initial choice to be active is independent and will take $\frac{1}{p_{0}}$ time steps on average. However, if we wait until everyone in the core is consistently active, we can then grant access to the periphery in circumstances where all people in the periphery are guaranteed to have satisfying experiences. Thus, Eq. (3) is a lower bound on the probability with which we can end up with all 2N people adopting. This actually gives a very conservative underestimate of the average adoption fraction, but the bound implied by Eq. (3) is sufficient to demonstrate our main point. To see why, let us now consider the case where we grant access to everyone at time t=0. Here, we can lower bound the probability that a person i in the periphery will churn. In particular, we can bound it by the sum over all times T of the probability that i is first active in time step T and that i's friend in the core is not active at all up to and including time step T. This gives:
$$ \sum_{T=0}^{\infty}(1-p_{0})^{2T+1}p_{0}=\frac{p_{0}(1-p_{0})}{1-(1-p_{0})^{2}} $$
Then, we can upper bound the average long-term adoption fraction by assuming that everyone except this fraction adopts:
$$ 1 - \frac{p_{0}(1-p_{0})}{2\left(1-(1-p_{0})^{2}\right)} $$
We compare Eqs. (3) to (5) in Fig. 1c. This shows that a lower bound on the adoption under seeding the core beats an upper bound on the adoption under seeding everyone over a broad range of values of p0.
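For readers who want to reproduce the comparison in Fig. 1c, the two closed-form expressions can be evaluated directly; the short sketch below does this for N = 50, the value used in the figure. The range of p0 and the plotting details are our own choices.

```python
import numpy as np
import matplotlib.pyplot as plt

N = 50                                   # core (and periphery) size, as in Fig. 1c
p0 = np.linspace(0.01, 0.5, 200)         # avoid p0 = 0, where Eq. (5) is 0/0

# Eq. (3): lower bound on asymptotic adoption when only the core is seeded.
seed_core = 1 - (1 - p0) ** N - N * p0 * (1 - p0) ** (N - 1)

# Eq. (5): upper bound on asymptotic adoption when everyone is seeded at t = 0.
seed_all = 1 - p0 * (1 - p0) / (2 * (1 - (1 - p0) ** 2))

plt.plot(p0, seed_core, label="seed core only (lower bound, Eq. (3))")
plt.plot(p0, seed_all, label="seed everyone (upper bound, Eq. (5))")
plt.xlabel("$p_0$")
plt.ylabel("asymptotic adoption fraction")
plt.legend()
plt.show()
```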
The reasoning above exposes why this is the case: by granting access to the periphery too early, we expose people in the periphery to the product before they are likely to be satisfied with their product experiences. Furthermore, except at very low p0, this premature exposure of the periphery confers little benefit to the core, which is sufficiently dense to produce satisfying experiences all on its own. It is better to wait until the core is activated and only then to grant access to the periphery. Threshold Model with Churn: Next, we consider running our threshold model on the network shown in Fig. 1b. This is a network with a dense "core" of N people who are all connected to one another and who are represented by the blue nodes. There is an "intermediate" layer of N people who are each friends with two randomly chosen people in the core; the people in this "intermediate" layer are represented by the green nodes. Finally, there is a "periphery" of N people who are friends with two randomly chosen people in the intermediate layer and who are represented by red nodes. We will consider a case where si=2 for all people in the network and where pa can vary. We will then ask whether it is better, in terms of asymptotic adoption, to seed everyone or to only seed the core. First, we consider the case where we only seed the core. The probability that fewer than two people in the core adopt under seeding is given by: $$ (1 - p_{a})^{N} + {Np}_{a}(1-p_{a})^{N-1} $$ With this probability, adoption dies out completely. On the other hand, if at least 2 people adopt under seeding, then by time t=1, the entire core will adopt. Potentially, some people in the intermediate layer will as well, if they happen to have two friends in the core who adopted during the seeding round. By time t=2, the entire intermediate layer will adopt, because every person in the intermediate layer has two adopting friends in the universally adopting core. Potentially, some people in the periphery will adopt as well, if they happen to have two intermediate layer friends who adopted by time t=1. Finally, by time t=3, the entire periphery will adopt as well, because every person in the periphery has two adopting friends in the universally adopting intermediate layer. Thus, as long as two people in the core adopt during the seeding round, the entire network eventually adopts. This means that the average final adoption fraction is: $$ 1 - (1 - p_{a})^{N} - {Np}_{a}(1-p_{a})^{N-1} $$ Next, we turn to the case where we seed everyone. The probability that at least two people in the core adopt under seeding remains the same. If that happens, then no one in the intermediate layer will churn, including those who happened to adopt under seeding. This is because, as of the adoptions that occur at t=1, every person in the intermediate layer will have 2 adopting friends in the core. However, people in the periphery who adopt under seeding can churn. For a t=0 adopter in the periphery to not churn, one of the following must be true by the first churn round: Their two friends in the intermediate layer adopted under seeding. One of their friends in the intermediate layer adopted under seeding, and the other had two friends in the core who adopted under seeding. Neither of their friends in the intermediate layer adopted under seeding, but both had two friends in the core who adopted under seeding. 
In principle, in case 3, the four friends-of-friends in the core need not all be distinct; however, as N grows very large, we can ignore this possibility. Then, the probability that a person in the periphery churns is approximately:
$$ p_{a}\left[1 - p_{a}^{2} - 2(1-p_{a})p_{a}^{3} - (1-p_{a})^{2}p_{a}^{4}\right] $$
Hence, in the case where at least two people in the core adopt upon seeding, we can expect the final fraction of adopters to approximately be:
$$ \frac{2}{3} + \frac{1}{3}\left[1 - p_{a} + p_{a}^{3} + 2(1-p_{a})p_{a}^{4} + (1-p_{a})^{2}p_{a}^{5}\right] $$
When fewer than two people in the core adopt upon seeding everyone, it is still possible for long-term adoption to be sustained if sufficiently many people adopt in the intermediate and periphery layers. This exemplifies how seeding everyone can sometimes be beneficial, especially at small values of pa. Nevertheless, when $p_{a} \gg \frac{2}{N}$, Eq. (9) will still be a good approximation to the average adoption. We compare Eqs. (7) and (9) in Fig. 1d. Here too, we see that it is preferable to seed only the core over a large range of pa. Yet again, the costs of overambitious seeding originate in the premature exposure of the people in the periphery to the product. These people's abandonment of the product is avoidable under a more conservative seeding strategy that focuses on the densest part of the network. Comparing Fig. 1d to c, we see that the costs of overambitious seeding are maximized at some intermediate value of pa for the threshold model, while these costs get bigger as p0 gets lower in the repeated-usage model. This is due to pa playing a dual role in the threshold model, determining both the proportion of the population that is exposed early and the average social support that population can expect. We will return to this point in the "Discussion: When is overambitious seeding costly?" section, when we discuss general settings in which overambitious seeding can be especially problematic.
Networks used in simulations
In Iyer and Adamic (2018), to argue that overambitious seeding can be problematic on real-world networks, we ran simulations of the repeated-usage model on portions of the Facebook friendship graph, known as Social Hash (SH) clusters. The SH clustering was originally developed to enable faster data retrieval by physically collocating data for people who communicate frequently. Thus, many (but not all) of a person's frequently contacted friends belong to the same SH cluster (Shalita et al. 2016; Kabiljo et al. 2017). This property is well matched to the type of cluster-level approaches that we tested previously (Iyer and Adamic 2018), and the same is true here. Therefore, in this paper as well, we will report simulation results modeled on de-identified SH clusters containing US Facebook users who visited in a 28-day period. When we discuss cluster-based seeding of the repeated-usage model (in the "Simulation results: cluster-based seeding" section), we report data from simulations on SH clusters computed on 2018-04-29. The properties of the SH clusters used in these simulations can be found in Iyer and Adamic (2018). All other simulations were performed on SH clusters computed for active US users who visited in the 28 days leading up to 2019-01-27. As in Iyer and Adamic (2018), we also select three-cluster networks such that each cluster has average out-of-cluster degree ⟨koc⟩ ≥ 1.
Tables 1 and 2 report statistics of the distributions of the within-cluster degree kic and out-of-cluster degree koc for the various SH clusters and three-cluster networks. These tables show that there is considerable structural diversity amongst these clusters and networks.
Table 1 Statistics of individual SH clusters
Table 2 Statistics of networks composed of three SH clusters from Table 1
Figure 2 shows an example of a three-cluster SH network. This is the network qrs from Table 2.
Fig. 2 Empirical three-cluster Social-Hash network qrs. See the text of the "Networks used in simulations" section and Tables 1 and 2 for details about the Social-Hash clusters. In each cluster, we highlight one person in red; we color that person's within-cluster links yellow and out-of-cluster links light blue.
Simulation results: cluster-based seeding
Previously (Iyer and Adamic 2018), we demonstrated the costs of overambitious seeding in the repeated-usage model by showing that seeding a single cluster can lead to greater long-term adoption than seeding all three clusters in a variety of SH networks. In this section, we recap the results of Iyer and Adamic (2018) for the repeated-usage model and then move on to show that the same phenomenon can be observed in the threshold model as well, albeit in a quantitatively weaker form.
Repeated-Usage Model: Figure 3 shows simulation results for the repeated-usage model on three-cluster SH networks. These are the three-cluster networks that we introduced in Iyer and Adamic (2018), and they can be distinguished from the newer clusters used in subsequent sections by the use of uppercase letter labels. In these simulations, we fix si=2 for all people, vary p0, and ask which of the following strategies leads to the most long-term adoption: seed the cluster with the highest median within-cluster degree kic; seed the two clusters with the highest median within-cluster degrees kic; or seed all three clusters. We report the average adoption in the last 100 time steps of 10000 time step simulations. Simulation results from Iyer and Adamic (2018) showed that 10000 time steps are generally sufficient for adoption to reach its asymptotic value.
Fig. 3 Comparison of cluster-based seeding approaches in the repeated-usage model. Different rows correspond to different three-cluster networks from Table 2 of Iyer and Adamic (2018). The left-hand column shows the average asymptotic percentage of the population with access; the right-hand column shows the average asymptotic percentage that is active. Legends are shared by the left and right panels in each row. The parameters si=2 and δ=0.005 in these simulations. Each data point is an average over 50 simulations. In this and all subsequent plots, we include 95% confidence intervals, but they are sometimes smaller than the plot line.
The left-hand panels of Fig.
3 show that, when the single-cluster seeding policies win, it is often despite the fact that there are people who are never given access. We put forward an argument for why seeding a single-cluster is so often preferable in Iyer and Adamic (2018), which goes as follows: at early times in the repeated-usage model, we are faced with a fundamental tradeoff. There are costs to seeding a cluster, because by assumption, the rate of initial activity p0 is low. Therefore, some people will adopt in unfavorable contexts, meaning that they will typically be unsatisfied by how many of their friends are active when they are. This will result in some permanent churn. On the other hand, there are also costs to not seeding a cluster: in particular, people in other clusters lose out on the social support of people in the unseeded cluster. In Iyer and Adamic (2018), through simulations on synthetic networks, we demonstrated multiple regimes where this tradeoff plays out in different ways. When p0 is very low, activity is not sustained in the long-term under any seeding strategy. As p0 is tuned up from this regime, we initially enter a regime where the combined early activity in all three clusters is sufficient to sustain long-term activity (i.e., the universal seeding policy wins). At higher values of p0, two clusters can sustain long-term activity in isolation, and it is "costly" in terms of asymptotic activity to seed the third. In other words, seeding the third cluster at time t=0 results in churn that could have been avoided by waiting and granting access to the third cluster under more favorable circumstances, when people in the two seeded clusters are active at very high rates. Finally, if p0 is sufficiently high, a single cluster can sustain long-term activity in isolation, and it is costly to seed any more at t=0. When studying this model on empirical networks, we typically only observe the final regime, because of the inherent heterogeneity in the within-cluster degree distribution. If we seed a single cluster, there is usually some subnetwork of that cluster (e.g., perhaps involving the highest-degree people) where long-term activity can build up in isolation. Then, the activity in that subnetwork is usually sufficient to bootstrap the spreading of favorable contexts for adoption through the rest of the three-cluster network. Note that the other two regimes (where it is preferable to seed 2 or 3 clusters) presumably still exist; we just do not observe them in Fig. 3 because they occur in a very narrow and low range of p0. Furthermore, idiosyncrasies of graph topologies in empirical networks can produce cases like FGH, where at high p0 we reenter a regime where seeding all three clusters is preferable. Despite these anomalies, seeding one cluster very generally beats seeding three when the repeated-usage model is simulated on real-world networks. Threshold Model with Churn: In the case of the threshold model, simulations are efficient enough that we can simulate every possible cluster-based seeding strategy for each of the five three-cluster networks. We simulate a case where si=5 for all people. The final adoption curves vs. seed acceptance probability pa are shown in Fig. 4. Comparison of cluster-based seeding approaches in the threshold model. Different rows correspond to different three-cluster networks from Table 2. 
The left-hand column shows the final percentage of the network adopting under various cluster-based seeding strategies; the right-hand column shows the difference in the number adopting under seeding the single cluster with the highest median degree vs. all three clusters. The parameter si=5 in these simulations. Each data point is an average over 100 simulations The clearest case here is network nop. At the lowest values of pa, the strategy of seeding all clusters leads to the highest asymptotic activity. This is for the same reasons that we discussed in the case of the repeated-usage model: there is a tradeoff at early times between exposing people to the product prematurely and missing out on the social support that these people could provide to others. Asymptotic activity first develops when the combined early activity in all three clusters is sufficient to sustain long-term usage. As pa increases though, we observe a small regime (around pa=0.08) where seeding two clusters is optimal. Finally, we enter a regime where seeding just one cluster (cluster p) beats out all other strategies in terms of asymptotic activity. The other three-cluster networks show similar effects, although the "costs" of overambitious seeding are quantitatively much smaller. For example, in the case of network klm, it is clear that the key ingredient in maximizing long-term adoption is to seed cluster k. Seeding the other clusters is, at best, superfluous throughout the simulated range and incurs some small costs as pa grows. In the cases of networks qrs and wxy, a similar story holds for cluster q and y respectively. For network tuv, seeding either cluster t or cluster u is sufficient, and seeding v is superfluous. In all cases, the zoomed-in views on the right-hand side of Fig. 4 show that, at high pa, a single-cluster-seeding strategy performs best, although as noted above, the "costs" of other seeding strategies are often very small. Simulation results: k-core seeding We now turn our attention to a different type of seeding that can lead to better long-term outcomes than universal seeding, even when there is no obvious cluster-structure to leverage. Specifically, we will consider seeding, at t=0, only the k-core of the network under consideration. Here, the k-core is defined as usual: it is the subnetwork that remains after repeatedly removing people with degree less than k and all friendship edges incident to these people. The k-core, if it exists, thus corresponds to a dense subnetwork of the original network. Such a seeding approach has been motivated by much of the argumentation above. In particular, it is motivated by the toy examples for both models, where seeding a dense core of the network can be preferable to seeding the entire network. Repeated-Usage Model: Figure 5 compares seeding the 10-core of various SH clusters to seeding the entire cluster at time t=0 in the repeated-usage model. As in "Simulation results: cluster-based seeding" section, we set all si=2 here, meaning that everyone needs two active friends to be satisfied during a product experience. We vary p0 and check which strategy (10-core seeding or universal) wins out in the long-time limit. The asymptotic access and activity values plotted in Fig. 5 are again averages over the last 100 time steps of 10000 time-step simulations. Comparison of 10-core and universal seeding in the repeated-usage model. Different rows correspond to different SH clusters from Table 1. 
The left-hand column shows the average asymptotic percentage of the population with access; the right-hand column shows the average asymptotic percentage that is active. Legends are shared by the left and right panels in each row. The parameters si=2 and δ=0.005 in these simulations. Each data point is an average over 50 simulations The simulation results of Fig. 5 show that 10-core seeding leads to more long-term adoption than universal seeding over large ranges of the low p0 regime for five different SH clusters. The costs of universal seeding, as compared to the 10-core strategy, are often very large. Our interpretation of these results, echoing our analysis of the toy examples of the "Toy examples" section, is that it is preferable to allow activity to build up in the core before expanding access to the periphery. This is because people in the core, by virtue of having more friends overall, are much more likely to have satisfying experiences when rates of activity are low. Meanwhile, people in the periphery, because they depend on the usage of a few friends in order to have satisfying experiences, are more likely to have positive product experiences if they receive access after activity has built up in the core. Threshold Model with Churn: We now study k-core seeding in the threshold model. We will again set si=5 in these simulations, meaning that each person needs five active friends to become or remain active. We will compare the strategy of seeding the 10-core of each SH cluster to seeding the entire cluster. The simulation results in Fig. 6 generally show three different regimes of behavior. At very low pa, neither approach leads to substantial long-term adoption. As pa grows, there is a regime where universal seeding outperforms 10-core seeding. At still higher pa, 10-core seeding generally wins out. In several cases, 10-core seeding wins by a few percentage points in terms of the total cluster size (clusters a, e, and h). In others, the benefits of 10-core seeding are smaller, but still statistically robust (clusters b, d, f, i, and j). Comparison of 10-core and universal seeding in the threshold model. Different rows correspond to different SH clusters from Table 1. The left-hand column shows the final percentage of the network adopting under various cluster-based seeding strategies; the right-hand column shows the difference in the number adopting under seeding the 10-core vs. the entire cluster. The parameter si=5 in these simulations. Each data point is an average over 100 simulations Again, the tradeoff here is similar to what we have observed previously: in the regime where universal seeding outperforms 10-core seeding, the benefits of early activity in the periphery for the core outweigh the costs to the periphery. Generally though, at high enough pa, the tradeoff flips, with the costs to the periphery outweighing benefits to the core. Thus, the more conservative seeding strategy (i.e., 10-core seeding) prevails. Discussion: When is overambitious seeding costly? As we noted above in the "Models of social-product usage" section, in the repeated-usage model, the question of overambitious seeding amounts to: is it beneficial to seed everyone if everyone whom you seed will accept, but will use at a low rate? On the other hand, in the threshold model, the question is: is it beneficial to seed everyone if only some of those people will accept? 
Our simulation results show that the answer to both of these questions can be "no" and that various conservative seeding strategies can do substantially better. In this section, we will attempt to abstract from these observations some general principles around when overambitious seeding should be a cause for concern. In both cases, context is the key factor in explaining why overambitious seeding is costly. If seeded individuals adopt in contexts where insufficiently many of their friends are adopting or where their friends are not using sufficiently often, they may churn. Meanwhile, if these same individuals are not seeded, better contexts may emerge at later times for them to begin using the product. Thus, one rule-of-thumb for when to worry about overambitious seeding is the following: overambitious seeding can be costly whenever seeding results in contextually-unaware adoption choices (e.g., people adopting uniformly at random, people using at a rate that's independent of their friends' rates) but where continued usage crucially depends on context. Note, however, that the effects of overambitious seeding are much more pronounced in the repeated-usage model than in the threshold model. To understand why this is the case, we should note one important distinction between the parameter pa in the threshold model and the parameter p0 in the repeated-usage model. In the threshold model, the parameter pa influences both whether a person adopts the product at all and how much social support that person can expect at early times. When pa is low, a person can expect little social support, but it is also less likely that it matters, since the person is less likely to adopt in the first place. When pa is higher, a person is more likely to "accept" the product, but is also more likely to experience social contexts that favor continued usage. This restricts the range of pa where overambitious seeding is likely to be relevant. It also restricts the magnitude of the effect because, typically, the people who incur the costs of overambitious seeding are those who adopt during the seeding round (i.e., in a context-unaware way); pa constrains this proportion of the population. We can contrast this with the role of p0 in the repeated-usage model. Here, p0 determines how much social support a person can expect at early times and also determines the time scale over which a person will choose to have his or her initial product experiences. Meanwhile, this parameter does not determine whether the person has product experiences at all. At low values of p0, a large proportion of people can still end up having bad experiences and churning. Hence, there is both a wider range of p0 where overambitious seeding can be relevant and the proportion of the population that can be "lost" due to a bad seeding strategy is large. This suggests another principle around when we should be especially wary of the costs of overambitious seeding: the problem can be especially severe when people's initial decisions to adopt the product are less correlated with the amount of social support that they can receive at early times. It is interesting to also consider recent related work by Sela et al. in this context. These authors study product adoption through an SIR model, where a person adopts (transitioning from the S to the I state) either in a budgeted seeding round or because they subsequently have enough adopting friends. 
After adopting, a person transitions from the influential (I) to non-influential (R) state after a fixed amount of time. When there is a seeding budget b and people are prioritized for seeding by eigenvector centrality, Sela et al. find that the final adoption rate is non-monotonic in the budget b. They call this phenomenon the "flip anomaly" (Sela et al. 2016). The "flip anomaly" of Sela et al. also admits a contextual explanation along the lines of those that we have proposed above: if a seeded person is the only adopting friend in a non-seeded person's local network, then the non-seeded person may not adopt before the seeded person becomes non-influential. If better contexts for the non-seeded person's adoption emerge later on, the now non-influential friend has no opportunity to contribute to that adoption (Sela et al. 2016). There are two interesting points of comparison between the model of Sela et al. and those that we have studied here. First, Sela et al. note that their "flip anomaly" must reverse as the budget grows, because adoption is universal in their model (Sela et al. 2016). This is also true of other models with similar properties that have recently been reviewed by Centola (2018). Meanwhile, our results show how analogues of the flip anomaly of Sela et al. can still persist with no seeding budget. Indeed, based on the arguments in this paper, we conjecture that the "flip anomaly" would persist in the unbudgeted case of the model of Sela et al. if adoption under seeding was probabilistic rather than universal. A perhaps more interesting distinction is that Sela et al. show how overambitious seeding can be costly even in the absence of churn, because someone in the R state of their SIR model is still interpreted as an adopter. This shows that the "possibility of permanent churn" assumption that we encoded into both of the models studied in this paper is not strictly necessary for overambitious seeding to be a problem. Instead, we can make a more general conjecture: overambitious seeding can be costly whenever it results in premature exhaustion of opportunities for further spreading that would better be delayed to later in the spreading process. In this paper, we have revisited a question that we originally posed in Iyer and Adamic (2018): suppose a product developer wants to introduce a new social product to a population of potential adopters and is unconstrained by any seeding budget. In this case, should the developer give the product to everyone immediately (as implied by many classic influence-maximization models), or should the developer adopt a more conservative approach? We have extended the results of Iyer and Adamic (2018) in various ways: We have shown that overambitious seeding is not just a concern in the repeated-usage model of Iyer and Adamic (2018) but can be a problem in more traditional threshold models as well, once the possibility of churn is introduced. We have studied both types of models analytically on certain simplified network structures and thereby developed intuitions for why overambitious seeding can be costly. We have explored k-core seeding as an alternative to cluster-based seeding, showing that the results of our earlier work are not tied to the cluster-based approach; there are multiple conservative seeding strategies that can outperform seeding everyone at once. 
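For readers who want to experiment with the k-core seeding strategy evaluated above, the following minimal sketch shows how a k-core can be extracted and how one synchronous threshold update can be computed. It is written in Python with the networkx library on a synthetic graph, with function names and parameter values of my own choosing; it is only a rough illustration, since the paper's own simulation code and network data are not public, and it omits the probabilistic adoption (pa) and permanent-churn mechanics of the full model.

```python
import networkx as nx

# Synthetic stand-in for a friendship network (the SH clusters themselves are not public).
G = nx.barabasi_albert_graph(n=500, m=4, seed=0)

k = 3  # the paper seeds the 10-core; a smaller k suits this sparser toy graph
core = nx.k_core(G, k=k)               # what remains after repeatedly pruning nodes of degree < k
core_seeds = set(core.nodes())         # "k-core seeding": give initial access only to the core
universal_seeds = set(G.nodes())       # "universal seeding": give initial access to everyone

def threshold_step(G, active, s=5):
    """One synchronous threshold update: a person is active next step iff
    at least s of their friends are currently active (churn is ignored here)."""
    return {v for v in G if sum(u in active for u in G[v]) >= s}

print(len(core_seeds), "of", G.number_of_nodes(), "people are in the", k, "-core")
print(len(threshold_step(G, core_seeds)), "people active after one update from core seeding")
```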
Drawing upon simulation results, we have proposed some general principles around when the possibility of overambitious seeding ought to be considered: Overambitious seeding is a concern whenever early adoption can result in the premature exhaustion of a resource for future spreading that would be better delayed to a more favorable context for that spreading. Overambitious seeding is especially a concern when people's initial decisions to adopt the product are less correlated with the amount of social support that they can receive at early times. We emphasize again that the models considered here exclude other pathways to overexposure in the influence maximization problem, including negative word-of-mouth, direct costs to rejection of the seed, and hipster effects. We have excluded these effects to make the case that overambitious seeding can be detrimental in the context of social products, even if none of these factors are at play. Of course, all of these alternative mechanisms are important in real-world settings, and together with the mechanism discussed in this paper, they make the case that product developers should not always expend all of the marketing resources at their disposal. There are many possible interesting extensions of this work. For example, we have always assumed a homogeneous value of p0 and pa across the whole population of potential adopters. The costs of overambitious seeding will vary if this assumption of homogeneity is relaxed. If people with many friends have higher values of p0 or pa and the friendship network is assortative by degree, presumably seeding everyone would produce an outcome that is more similar to just seeding the core, diminishing the costs of overambitious seeding. On the other hand, if p0 and pa are negatively correlated with degree, that could exacerbate the costs of overambitious seeding and make the considerations of this paper more important. Another consideration that would mitigate the costs of overambitious seeding in the threshold model would be to allow multiple adoptions per person (i.e., if a person is willing to adopt m times, where m>1). This is because adoptions after the first would happen in contextually aware (and thus, favorable) circumstances, because the person has enough adopting neighbors to adopt. In such a setting, it would be interesting to ask if enriching the model with other aspects of real-world complexity (e.g., some amount of spontaneous churn, some within-person variance over time in social expectations for the product) might reintroduce the costs of overambitious seeding, or fundamentally change the tradeoffs considered here in some unforeseen way. Here and in Iyer and Adamic (2018), we have always taken the perspective that what matters in influence maximization is adoption in the long-time limit. However, it is possible to consider scenarios where there are time constraints, and the goal is to maximize adoption within a fixed time (Chen et al. 2012). This too could fundamentally change the tradeoffs discussed here, perhaps shifting them in favor of seeding less conservatively. On the other hand, a very interesting recent line of work in the influence maximization literature considers other target outcomes besides maximum adoption (Matakos and Gionis 2018; Aslay et al. 2018; Tsang et al. 2019; Chen et al. 2019; Pasumarthi et al. 2015; Loukides and Gwadera 2018). As one example, Matakos and Gionis (2018) and Aslay et al. (2018) consider maximizing the diversity of information shared in a social network. 
Are "overambitious seeding" considerations relevant in such a setting, or is seeding as widely as possible beneficial for promoting diversity? This seems like a fruitful question to pursue, given the findings of this paper for the more traditional influence-maximization problem. The network datasets used in the simulation studies in this paper are not publicly available. However, the simulation code can be made available upon request. If such a mapping holds and the non-progressive model is monotonic, it still may make sense to employ a gradual seeding approach in a budgeted scenario. Indeed, Jankowski et al. have recently explored the benefits of gradually seeding parts of the network that have not been activated by previous seeding rounds, instead of seeding all at once and possibly wasting resources on parts of the network that would have adopted anyway (Jankowski et al. 2017). Note, however, that the notion of "wasting" seeding resources in that work depends upon the existence of a budget. If it is t=1, then people look at the states of their friends after the seed round. We have not added in possible scenarios where, for instance, no one in the core is active in the first time step but two are active in the second, where one person in the core churns in each of the first two time steps but two people are active in the third, etc. In each of these cases, a very large fraction of the population can nevertheless be active asymptotically. Here, we are neglecting situations where i's friend in the core already churned due to having an unsatisfying experience before time T. This is a small effect as N gets large because of the low likelihood of having bad experiences in the core. Meanwhile, it takes each person $\frac {1}{p_{0}}$ time steps on average to be active at all, so if p0 is small, it is quite likely that a specific individual in the core is inactive at an early time T. This is the effect that we capture in Eq. (4). Note that Eq. (9) is a poor approximation to the adoption fraction when $p_{a} \approx \frac {2}{N}$ or lower for at least three reasons. First, we need to incorporate corrections to (8) arising from the fact that we condition on at least two people in the core adopting under seeding in that calculation. Second, to produce a good estimate in this regime, we cannot simply neglect the case of fewer than two core adopters. Third, a good approximation in this regime must approach zero adoption as pa goes to zero; Eq. (9) does not exhibit this behavior. On average, each person in each cluster has at least one out-of-cluster friend. Table 1 reports structural properties for 23 SH clusters. Clusters a-j were sampled for the purpose of comparing k-core seeding and universal seeding in the two models. We did not end up reporting results for clusters c and g in Figs. 5 and 6 because these clusters do not have a 10-core, so they are excluded from Table 1. In Fig. 5, we also did not run simulations for clusters b, d, and j because the repeated-usage model is expensive to simulate, and these clusters ended up being too large. Clusters k-y were sampled for the purpose of comparing cluster-based seeding approaches in the threshold model. These clusters form parts of three-cluster networks whose properties are reported in Table 2. The size of the three-cluster network can differ slightly from the size of the three clusters individually because, in both cases, we exclude people with zero degree, who would inevitably churn under our model.
In a small percentage of cases, a person who has no within-cluster friends may still have friends in another cluster when three clusters are considered together. Abebe, R, Adamic LA, Kleinberg JM (2018) Mitigating overexposure in viral marketing In: Proceedings of the 32nd Conference on Artificial Intelligence.. AAAI. Adler, J (1991) Bootstrap percolation. Phys A Stat Mech Appl 171(3):453–470. Alkemade, F, Castaldi C (2005) Strategies for the diffusion of innovations on social networks. Comput Econ 25(1-2):3–23. Anderson, EW (1998) Customer satisfaction and word of mouth. J Serv Res 1(1):5–17. Aslay, C, Matakos A, Galbrun E, Gionis A (2018) Maximizing the diversity of exposure in a social network In: 2018 IEEE International Conference on Data Mining (ICDM), 863–868.. IEEE. Banerjee, A, Chandrasekhar AG, Duflo E, Jackson MO (2013) The diffusion of microfinance. Science 341(6144):1236498. Bass, FM (1969) A new product growth for model consumer durables. Manag Sci 15(5):215–227. Borgatti, SP, Everett MG (2000) Models of core/periphery structures. Soc Netw 21(4):375–395. Byers, JW, Mitzenmacher M, Zervas G (2012) The groupon effect on yelp ratings: a root cause analysis In: Proceedings of the 13th ACM Conference on Electronic Commerce, 248–265.. ACM. Centola, D (2018) How Behavior Spreads: The Science of Complex Contagions, vol. 3. Princeton University Press. Chen, H, Loukides G, Fan J, Chan H (2019) Limiting the influence to vulnerable users in social networks: A ratio perspective In: International Conference on Advanced Information Networking and Applications, 1106–1122.. Springer. Chen, W, Lu W, Zhang N (2012) Time-critical influence maximization in social networks with time-delayed diffusion process In: Twenty-Sixth AAAI Conference on Artificial Intelligence. Coleman, J, Katz E, Menzel H (1957) The diffusion of an innovation among physicians. Sociometry 20(4):253–270. Coleman, J, Menzel H, Katz E (1959) Social processes in physicians' adoption of a new drug. J Chronic Dis 9(1):1–19. Coleman, JS, Katz E, Menzel H (1966) Medical Innovation: A Diffusion Study. Bobbs-Merrill Co. Cui, F, Hu H-h, Cui W-t, Xie Y (2018) Seeding strategies for new product launch: The role of negative word-of-mouth. PLoS ONE 13(11):0206736. Domingos, P, Richardson M (2001) Mining the network value of customers In: Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 57–66.. ACM. Goldenberg, J, Libai B, Muller E (2001) Talk of the network: A complex systems look at the underlying process of word-of-mouth. Mark Lett 12(3):211–223. Granovetter, M (1978) Threshold models of collective behavior. Am J Sociol 83(6):1420–1443. Hinz, O, Skiera B, Barrot C, Becker JU (2011) Seeding strategies for viral marketing: An empirical comparison. J Mark 75(6):55–71. Iyer, S, Adamic LA (2018) The costs of overambitious seeding of social products In: International Workshop on Complex Networks and Their Applications, 273–286.. Springer. Jankowski, J, Bródka P, Kazienko P, Szymanski BK, Michalski R, Kajdanowicz T (2017) Balancing speed and coverage by sequential seeding in complex networks. Sci Rep 7(1):891. Juul, JS, Porter MA (2019) Hipsters on networks: How a minority group of individuals can lead to an antiestablishment majority. Phys Rev E 99(2):022313. Kabiljo, I, Karrer B, Pundir M, Pupyrev S, Shalita A (2017) Social hash partitioner: a scalable distributed hypergraph partitioner. Proc VLDB Endowment 10(11):1418–1429. 
Kempe, D, Kleinberg J, Tardos É (2003) Maximizing the spread of influence through a social network In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 137–146. ACM. Kiesling, E, Günther M, Stummer C, Wakolbinger LM (2012) Agent-based simulation of innovation diffusion: a review. CEJOR 20(2):183–230. Li, Y, Fan J, Wang Y, Tan K-L (2018) Influence maximization on social graphs: A survey. IEEE Trans Knowl Data Eng 30(10):1852–1872. Loukides, G, Gwadera R (2018) Preventing the diffusion of information to vulnerable users while preserving pagerank. Int J Data Sci Analytics 5(1):19–39. Matakos, A, Gionis A (2018) Tell me something my friends do not know: Diversity maximization in social networks In: 2018 IEEE International Conference on Data Mining (ICDM), 327–336. IEEE. Pasumarthi, R, Narayanam R, Ravindran B (2015) Near optimal strategies for targeted marketing in social networks In: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, 1679–1680. International Foundation for Autonomous Agents and Multiagent Systems. Rogers, EM (1962) Diffusion of Innovations. Free Press of Glencoe. Schelling, TC (2006) Micromotives and Macrobehavior. WW Norton & Company. Sela, A, Shmueli E, Goldenberg D, Ben-Gal I (2016) Why spending more might get you less, dynamic selection of influencers in social networks In: Science of Electrical Engineering (ICSEE), IEEE International Conference on The, 1–4. IEEE. Shalita, A, Karrer B, Kabiljo I, Sharma A, Presta A, Adcock A, Kllapi H, Stumm M (2016) Social hash: An assignment framework for optimizing distributed systems operations on social networks In: NSDI, 455–468. Tsang, A, Wilder B, Rice E, Tambe M, Zick Y (2019) Group-fairness in influence maximization. arXiv preprint arXiv:1903.00967. Watts, DJ (2002) A simple model of global cascades on random networks. Proc Natl Acad Sci 99(9):5766–5771. Wilder, B, Onasch-Vera L, Hudson J, Luna J, Wilson N, Petering R, Woo D, Tambe M, Rice E (2018) End-to-end influence maximization in the field In: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, 1414–1422. International Foundation for Autonomous Agents and Multiagent Systems. Yadav, A, Wilder B, Rice E, Petering R, Craddock J, Yoshioka-Maxwell A, Hemler M, Onasch-Vera L, Tambe M, Woo D (2018) Bridging the gap between theory and practice in influence maximization: Raising awareness about hiv among homeless youth In: IJCAI, 5399–5403. We thank Udi Weinsberg, Israel Nir, and Jarosław Jankowski for helpful discussions, Shuyang Lin for development of the original simulation infrastructure, and Justin Cheng for reviewing code. Core Data Science, Facebook, 1 Hacker Way, Menlo Park, California, USA Shankar Iyer & Lada A. Adamic SI formulated the argument for costs of overambitious seeding in the context of the threshold model, developed and ran the simulations, and wrote the manuscript. LAA proposed the repeated-usage model as a setting where the costs of overambitious seeding might be more pronounced and contributed to the composition of Iyer and Adamic (2018). Both authors read and approved the final manuscript. The authors are employed by Facebook and used Facebook computing resources for this research. Correspondence to Shankar Iyer.
https://doi.org/10.1007/s41109-019-0146-z
Keywords: Influence maximization; Diffusion on networks
Special Issue of the 7th International Conference on Complex Networks and Their Applications
Using Fraction Notation: Addition, Subtraction, Multiplication & Division Instructor: Jeff Calareso Jeff teaches high school English, math and other subjects. He has a master's degree in writing and literature. In mathematics, a fraction is a number that is not whole, and mathematical equations that contain fractions can be challenging to understand and solve. Learn how to use fraction notation to make it easier to perform addition, subtraction, multiplication, and division in problems with fractions. Fraction Notation The term fraction notation just means a fraction written as a/b. We call the number above the line the numerator. The one below the line is the denominator. If it rains five days in a week, well, that's a dreary week. In fraction notation, we'd say it rained 5/7 days. The denominator represents the total number of days in the week. The numerator is the part of the whole, or the number of days it rained. What if we're in a Beatles song, and it rains eight days a week? Our fraction would be 8/7. That's called an improper fraction. It also breaks the calendar. But it's still a fraction written in fraction notation. In this lesson, we're going to learn how to do all the fun things you might want to do with fractions: addition, subtraction, multiplication and division. Whoa. That's a lot. But don't worry. We'll start simple and build from there. You might think we'd start with addition, which is so often the simplest operation. But with fraction notation, multiplication is actually the easiest. When we multiply fractions, a/b * c/d = ac/bd. In other words, 2/3 * 5/7 equals 2 * 5 over 3 * 7. That's 10/21. Let's see that in action. Let's say there's 1/2 of a pie just sitting on the kitchen counter, begging to be eaten. You decide to eat 1/3 of what's there. That's 1/2 * 1/3. We just multiply the numerators, 1 * 1, to get 1. Then we multiply the denominators, 2 * 3, to get 6. How much of the pie did you eat? 1/6. As you can see, there were originally 6 pieces, so your 1/3 of 1/2 is 1/6 of the original pie. Let's tackle division next. When we divide fractions, (a/b) / (c/d) = a/b * d/c. Wait, what? When we divide fractions, we take the reciprocal of the second fraction, and then multiply them together. In other words, flip the second fraction upside down, then multiply. So, 2/3 divided by 5/7 equals 2/3 * 7/5. That's 14/15. Should we see it in action? Ok. Let's say you're working off that pie by running a half marathon. But you only had a little pie, so you're running as part of a 4-person relay team. What fraction of a marathon are you running? That's 1/2, or half the marathon, divided by 4 people, or 4/1. To figure out (1/2) / (4/1), we take the reciprocal of 4/1. Again, just flip it upside down, like how your stomach feels if you go running too soon after eating pie. So 4/1 becomes 1/4. Then multiply 1/2 * 1/4. That's 1/8. So you'll run 1/8 of a full marathon. That's not bad! Ok, time to talk addition. When we add fractions, we find a common denominator. Then add the numerators. We can't add 1/2 and 1/4, but we can add 2/4 and 1/4, which is 3/4. Let's think about what this means. Let's say you have a box of 12 doughnuts. You eat one, or 1/12, of the doughnuts. Your friend eats 1/3 of the doughnuts. How do you compare 1/12 and 1/3? It's like your friend is trying to hide how many doughnuts he ate.
Not cool. You need to figure out what 1/3 is in terms of the 12 doughnuts. That's what we mean by the common denominator. Remember that the denominator represents the whole, while the numerator is the part. If your doughnut-loving friend eats 1/3 of the doughnuts, how many out of 12 is that? To find the common denominator, you can multiply 1/3 * 4/4. Why? Because 3 * 4 is 12. And it's ok to multiply a fraction by some version of 1, which is what 4/4 is. That gets us 4/12. So your friend ate 4 doughnuts. Oh, man, that's a lot. I hope there's still a chocolate-frosted one left. If we want to know how many doughnuts were eaten, we'd be adding 1/12 and 1/3. To add these fractions, we find the common denominator, 12 - so it's 1/12 + 4/12 - and then we add the numerators: 1 + 4 = 5. So, 5 out of 12 doughnuts were eaten. To subtract fractions, we also find a common denominator, and then we just subtract the numerators. Let's try this out. What if you and your friend have a falling out over what you now refer to as 'the doughnut incident.' You walked to the store to get those doughnuts, even though your friend lives closer. You live 3/4 of a mile from the store and he lives 1/8 of a mile from the store. How much closer is he? This is a classic fraction subtraction problem. What is 3/4 minus 1/8? We need a common denominator. That will be 8. Let's multiply 3/4 * 2/2 to get 6/8. We can work with 6/8 - 1/8. That's 5/8. So your doughnut-hogging friend is 5/8 of a mile closer to the store. To summarize, we learned about using fraction notation to perform basic operations. To multiply, we just multiply the numerators, then multiply the denominators. With division, we first flip the second fraction. This flipped fraction is called the reciprocal. Then we multiply them together. When adding or subtracting, we need to find common denominators. Then we add or subtract the numerators. At the end of this lesson, you should understand how to add, subtract, multiply and divide fractions.
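The rules in this lesson can be checked quickly with a short Python sketch using the standard fractions module (this code is an illustration added here, not part of the original lesson):

```python
from fractions import Fraction

print(Fraction(1, 2) * Fraction(1, 3))   # 1/6  -> the pie example
print(Fraction(1, 2) / Fraction(4, 1))   # 1/8  -> the relay example
print(Fraction(1, 12) + Fraction(1, 3))  # 5/12 -> the doughnuts eaten in total
print(Fraction(3, 4) - Fraction(1, 8))   # 5/8  -> how much closer your friend lives
```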
Artificial Intelligence Blog Maximizing a function over integers (Part 2) December 26, 2020 in Math, Optimization by hundalhh | Permalink In part 1, we defined the term strictly concave and introduced the theorem that if $f$ is strictly concave and $f(x)=f(x+1)$, then the integer(s) that maximize $f$ are exactly the set $\{\mathrm{floor}(x+1), \mathrm{ceil}(x)\}$ which is usually a set containing only one number $\mathrm{round}(x+1/2)$. Parabolas The simplest concave functions are parabolas with the vertex at the top. All parabolas can be described by three real numbers $a$, $b$, and $c$, and the formula $y=a x^2 + b x + c$. For strictly concave parabolas $a<0$. For example, if $a=-2$, $b=3$, and $c=5$, then the function $y= f(x) = -2x^2 + 3 x +5$ is the parabola shown below. If the vertex (shown above in blue) is above the x-axis, then the x-coordinate of the vertex can be computed by finding the points where the concave parabola crosses the x-axis (shown above in orange). In our case, the parabola $y=f(x)$ crosses the x-axis when $0=y=f(x)$, or when $$\begin{aligned}0&= -2x^2 + 3 x +5\\0&=-(2x^2-3x-5)\\0&=-(2x-5)(x+1).\end{aligned}$$ If $0=\alpha\cdot \beta$, then either $\alpha=0$ or $\beta=0$. So, the parabola crosses the x-axis when either $0=2x-5$ or $0=x+1$ which means that it crosses when $x=5/2$ or $x=-1$. Parabolas are symmetric about the vertical line going through the vertex, so the x-coordinate of the vertex is half way between the x-coordinates of the crossing points (a.k.a. the roots of $f(x)$). $$\begin{aligned} x_\mathrm{vertex} &= \frac{x_\mathrm{crossing1} + x_\mathrm{crossing2}}2\\&= \frac{ -1 +5/2}2\\&=\frac{3/2}2\\&=3/4.\end{aligned}$$ So the x-coordinate of the vertex is $x=3/4$. This is also the real number that maximizes $f(w)$ over all real numbers $w$. In general, the x-coordinates of the crossing points can be found with the quadratic formula $$x=\frac{-b\pm\sqrt{b^2-4ac}}{2 a}.$$ The $\pm$ sign means that one crossing can be found by replacing $\pm$ with + and the other can be found by replacing the $\pm$ with -. But, if you have one number that is $x_1=\alpha+\beta$ and another that is $x_2=\alpha-\beta$, then the average of the two numbers is just $\alpha$. In other words, if the two values of $x$ are $x=\alpha\pm\beta$, then the average value is just $\alpha$. So, to compute the average x-coordinate of the crossing points, all we have to do is remove the $\pm$ sign and whatever it is applied to from the quadratic formula. $$ x_\mathrm{vertex} = \frac{-b}{2 a}.$$ Theorem #2 Informally, if $y=f(x)$ is a parabola, $f(x) = a x^2 + b x + c$, and $a<0$, then the integer(s) that maximize $f(x)$ are the set $\{\mathrm{floor}(x+1/2), \mathrm{ceil}(x-1/2)\}$ where $x=\frac{-b}{2 a}$. If $x$ is an integer plus 1/2 (e.g. 2.5, 7.5, …), this set has two elements $x+1/2$ and $x-1/2$. If $x$ is not an integer plus 1/2, the set has only one element $\mathrm{round}(x)$ and that is the integer that produces the highest possible value of $f(z)$ among all integers $z$. Example 1: If $f(x)= -2x^2 + 3 x +5$, then $a=-2$, $b=3$, and $c=5$.
The value of $x$ in the theorem is the same as the x-coordinate of the vertex $$x=\frac{-b}{2a} = \frac{-3}{2\cdot(-2)}= \frac{-3}{-4}=3/4.$$ The integer(s) that maximize $f(z)$ among all integers $z$ are the set $$\begin{aligned}\{\mathrm{floor}(x+1/2), \mathrm{ceil}(x-1/2)\}&= \{\mathrm{floor}(3/4+1/2), \mathrm{ceil}(3/4-1/2)\}\\&= \{\mathrm{floor}(5/4), \mathrm{ceil}(1/4)\}\\&=\{1\}.\end{aligned}$$ Example 2: If we lower the parabola from example 1 by a little bit setting $f(x)= -2x^2 + 3 x +\sqrt{17}$, then $a=-2$, $b=3$, and $c=\sqrt{17}$. The value of $x$ in the theorem is the same as the x-coordinate of the vertex $x=\frac{-b}{2a} =3/4.$ The result does not depend on $c$, so as in example 1, the integer that maximizes $f(z)$ among all integers $z$ is $z=\mathrm{round}(3/4)=1$. Example 3: $f(x)= -x^2+x = (1-x)x$, then $a=-1$, $b=1$, and $c=0$. The value of $x$ in the theorem is the same as the x-coordinate of the vertex $$x=\frac{-b}{2a} =\frac{-1}{2\cdot (-1)}=1/2.$$ The integer(s) that maximize $f(z)$ among all integers $z$ are the set $$\begin{aligned}\{\mathrm{floor}(x+1/2), \mathrm{ceil}(x-1/2)\}&= \{\mathrm{floor}(1/2+1/2), \mathrm{ceil}(1/2-1/2)\}\\&= \{\mathrm{floor}(1), \mathrm{ceil}(0)\}\\&=\{1,0\}.\end{aligned}$$ The integers that maximize $f(z)$ among all integers $z$ are 0 and 1. If we look at the graph of $f(x)$ below, we can see that the graph crosses the x-axis at $x=0$ and $x=1$. So, $f(0)=f(1)=0$. All other integer inputs produce negative values. The spirit of the proof It turns out that Theorem 2 can be proven from Theorem 1. Recall that in Theorem 1, we wanted to find the value of $x$ where $f(x)=f(x+1)$. If we found that value, then the integer(s) that maximize $f$ are exactly the set $\{\mathrm{floor}(x+1), \mathrm{ceil}(x)\}$. In Theorem 2, $f(x) = a x^2 + b x + c$. So if $f(x)=f(x+1)$, then $$\begin{aligned}a x^2 + b x + c &= a (x+1)^2 + b (x+1) + c\\a x^2 + b x &= a (x+1)^2 + b (x+1)\\ a x^2 + b x &= a (x^2+2x+1)+ b x+b\\ a x^2 &= a x^2+2ax+a+ b \\ 0 &= 2ax+a+ b \\ -a-b &= 2ax \\ \frac{-a-b}{2a} &= x\\ \frac{-b}{2a}-\frac12 &= x.\end{aligned}$$ Thus the integers that maximize $f(x)$ must be $$\begin{aligned} \{\mathrm{floor}(x+1), \mathrm{ceil}(x)\ \}&= \{\mathrm{floor}(\frac{-b}{2a}-\frac12+1), \mathrm{ceil}(\frac{-b}{2a}-\frac12)\} \\ &= \{\mathrm{floor}(\frac{-b}{2a}+\frac12), \mathrm{ceil}(\frac{-b}{2a}-\frac12)\}.\end{aligned}$$ Why did I want to know this I was looking at several games where the optimal strategy depended on finding an integer $t$ that maximized $f(t)$. In the game Slay the Spire, I wanted to maximize the amount of poison damage that I was going to do with the cards "Noxious Fumes" and "Catalyst". If I played the "Catalyst" on turn $t$ and the combat ended on turn $T$, then the poison damage done was $$\frac{(t+1)t}{2} -1 + (T-t)t$$ where $ \frac{(t+1)t}{2}-1$ (note the triangular number) was the damage done by "Noxious Fumes" and $ (T-t)t$ was the additional damage done by playing the Catalyst. I wanted to maximize $f(t) = (T-t)t = -t^2 + T t$. Using Theorem 2, $a=-1$, $b=T$, and $c=0$. The Theorem says that the maximum damage occurs when you play the "Catalyst" on round $t$ where $t$ is contained in the set $\{\mathrm{floor}(x+1/2), \mathrm{ceil}(x-1/2)\}$ with $x=\frac{-b}{2 a}$. So $x=\frac{-T}{2\cdot(-1)}=T/2$. The best time to play the Catalyst was around half way through the combat. The Catalyst should be played on round $T/2$ if $T$ is even.
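A quick brute-force check of this rule (my own Python sketch, maximizing the same $f(t)=(T-t)t$ used in the post) confirms the result for small values of $T$:

```python
def catalyst_bonus(t, T):
    # Extra damage from playing the Catalyst on round t of a T-round combat,
    # modelled as in the post: f(t) = (T - t) * t.
    return (T - t) * t

def best_rounds(T):
    values = [catalyst_bonus(t, T) for t in range(T + 1)]
    best = max(values)
    return [t for t, v in enumerate(values) if v == best]

for T in range(2, 12):
    print(T, best_rounds(T))  # even T -> [T/2]; odd T -> [(T-1)/2, (T+1)/2]
```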
If $T$ is odd, then the best round to play the "Catalyst" was for $t$ in the set $$\{\mathrm{floor}(T/2+1/2), \mathrm{ceil}(T/2-1/2)\}= \{\frac{T+1}{2}, \frac{T-1}{2} \}.$$ So the first two theorems formalized rules of thumb that I thought were true. If you can find an $x$ where $f(x)=f(x+1)$, then the optimal integer(s) is $z=\mathrm{round}(x+1/2)$ with a special round that provides two answers if $x$ is an integer, and if $f(x)=a x^2 + b x + c$ (a parabola), then the optimal integer(s) is $z=\mathrm{round}(x)$ where $x=\frac{-b}{2 a}$ which is the $x$ that maximizes $f(w)$ over all real numbers $w$ (i.e. the x-coordinate of the vertex). If you want to see a more formal mathematical writeup, click here. Maximizing a function over integers (Part 1) I proved a few simple but interesting theorems about integers that maximize a function $f$ when $f$ is a strictly concave function that maps real numbers to real numbers. For example, the real number that maximizes $f(x)=x(1-x)$ is $x=1/2$, but among all the integers, the maximum possible value of the function is 0. And that maximum is achieved twice with integers 0 and 1, $f(0)=0=f(1)$. A function is strictly concave if you can pick any two points on its graph, draw a line between them, and the curve $y=f(x)$ between the two points lies entirely above the line between the two points. Informally stated, the first theorem is that if you can find a real number $x$ such that $f(x)=f(x+1)$, then the integer(s) that maximize $f$ are the set $\{\mathrm{floor}(x+1), \mathrm{ceil}(x)\}$ where "floor" just means round down and "ceil" means round up. That set usually contains only one element which is $\mathrm{round}(x+1/2)$, but if $x$ is an integer, then it will contain two consecutive integers $x$ and $x+1$. For example, if $f(x) = 4-4x^2$, then the value of $x$ that satisfies $f(x)=f(x+1)$ is $x=-1/2$ because $f(-1/2)=3=f(1/2)$. So the theorem says that any integer that maximizes $f$ must be in the set $$\begin{aligned}\{\mathrm{floor}(x+1), \mathrm{ceil}(x)\} &= \{\mathrm{floor}(-1/2+1), \mathrm{ceil}(-1/2)\}\\&= \{\mathrm{floor}(1/2), \mathrm{ceil}(-1/2)\} \\&= \{0\}.\end{aligned}$$ The integer that does the best is 0 which is also the real number that maximizes $f$. Here is another example. If $f(x) = \sin(x)$, then the value of $x$ that satisfies $f(x)=f(x+1)$ is $x=1.0708$ because $f(1.0708)=0.877583=f(2.0708)$. So the theorem says that any integer that maximizes $f$ must be in the set $$\begin{aligned}\{\mathrm{floor}(x+1), \mathrm{ceil}(x)\} &= \{\mathrm{floor}(1.0708+1), \mathrm{ceil}(1.0708)\}\\&= \{\mathrm{floor}(2.0708), \mathrm{ceil}(1.0708)\} \\&= \{2\}.\end{aligned}$$ The integer that does the best is 2. So, that was the first theorem. In part 2, we state a second theorem about finding the integer which maximizes a quadratic with some examples and one application, the game "Slay the Spire". Richardson Extrapolation April 4, 2017 in General ML, Optimization by hundalhh | Permalink Dr Xu at Penn State introduced me to Richardson Extrapolation way back in the early 90's. We used it to increase the speed of convergence of finite element analysis algorithms for partial differential equations, but it can be used for many numerical methods.
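As a small illustration of the idea (my own sketch, not from the original post), one Richardson extrapolation step applied to a central-difference derivative cancels the leading O(h²) error term and leaves an O(h⁴) estimate:

```python
import math

def dfdx(f, x, h):
    # Central difference: error is O(h^2).
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    # One Richardson step for an O(h^2) method: (4*A(h/2) - A(h)) / 3 has error O(h^4).
    return (4 * dfdx(f, x, h / 2) - dfdx(f, x, h)) / 3

for h in (0.1, 0.05, 0.025):
    plain = abs(dfdx(math.sin, 1.0, h) - math.cos(1.0))
    extrap = abs(richardson(math.sin, 1.0, h) - math.cos(1.0))
    print(f"h={h}: plain error {plain:.2e}, extrapolated error {extrap:.2e}")
```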
https://en.wikipedia.org/wiki/Richardson_extrapolation "Multi-stage Markov Chain Monte Carlo Methods for Porous Media Flows" June 26, 2014 in Graphical Models, Optimization, PDEs by hundalhh | Permalink I was doing a little bit of research on Multi-stage Markov Chains for a friend when I ran across this nice set of slides on Multi-stage Markov Chains for diffusion through porous media (i.e. oil or water flowing through the ground) by Pereira and Rahunanthan. The authors first review the multi-scale equations for diffusion under pressure (see Darcy's law) and mixed finite element analysis. Then, interestingly, they introduce a Bayesian Markov chain Monte Carlo method (MCMC) to get approximate solutions to the differential equation. They use Multi-Stage Hastings-Metropolis and prefetching (to parallelize the computation) to improve the speed of MCMC. The slides also display the results of their algorithm applied to aquifer contamination and oil recovery. It's really cool how methods used for probabilistic graphical models can be used to approximate the solutions to partial differential equations. I'm hoping to get back to 2048 in a few weeks. Turns out that it takes a long time to write code, make the code look nice, run the code, analyze the results, and then put together a blog post. It's much easier and quicker to read papers and summarize them or to try to explain things you already know. Have a great weekend. – Hein "Randomized Numerical Linear Algebra (RandNLA): Theory and Practice" May 16, 2013 in General ML, Optimization by hundalhh | 2 comments Nuit Blanche has a nice summary of the FOCS 2012 53rd Annual IEEE Symposium on Foundations of Computer Science Workshop on "Randomized Numerical Linear Algebra (RandNLA): Theory and Practice". Faster random algorithms for QR, SVD, Eigenvalues, Least Squares, … using random projections and other techniques were discussed. "Stochastic Superoptimization" and "Programming by Optimization" April 12, 2013 in Multi-Armed Bandit Problem, Optimization, Programming by hundalhh | Permalink John Regehr writes this post about a compiler speed optimization technique called "Stochastic Superoptimization". "Stochastic Superoptimization" systematically searches for algorithmic improvement in code using machine learning algorithms similar to multi-armed bandit strategies. It appears to be related to "Programming by Optimization". "Stochastic Superoptimization" is more like a very good optimization flag on a compiler. "Programming by Optimization" is constructing the program in such a fashion that design options are exposed and easily manipulable by an optimization program trying to maximize some performance metric. The "Programming by Optimization" community seems to mostly use BOA, the Bayesian optimization algorithm (see [1], [2]). I am hoping to read and write more about both of these ideas later. "Algorithm Portfolio Design: Theory vs. Practice" March 20, 2013 in Optimization by hundalhh | Permalink In "Algorithm Portfolio Design: Theory vs. Practice", Gomes and Selman (2013) study the use of a portfolio of stochastic search algorithms to solve computationally hard search problems. Here are some interesting quotes from the paper: "Our studies reveal that in many cases the performance of a single algorithm dominates all others, on the problem class under consideration."
"Given the diversity in performance profiles among algorithms, various approaches have been developed to combine different algorithms to take into account the computational resource constraints and to optimize the overall performance. These considerations led to the development of anytime algorithms (Dean and Boddy 1988), decision theoretic metareasoning and related approaches (Horvitz and Zilberstein 1996; Russell and Norvig 1995), and algorithm portfolio design (Huberman et al. 1997)." "In addition, we also show that a good strategy for designing a portfolio is to combine many short runs of the same algorithm. The effectiveness of such portfolios explains the common practice of "restarts" for stochastic procedures, where the same algorithm is run repeatedly with different initial seeds for the random number generator. (For related work on the effectiveness of restarts, see e.g., Aldous and Vazirani 1994; Ertel 1991; Selman and Kirkpatrick 1996.)" "Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained" March 14, 2013 in Multi-Armed Bandit Problem, Optimization, Statistics by hundalhh | Permalink "…we often joke that our job, as the team that builds the experimentation platform, is to tell our clients that their new baby is ugly, …" Andrew Gelman at Statistical Modeling, Causal Inference, and Social Science pointed me towards the paper "Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained" by Ron Kohavi, Alex Deng, Brian Frasca, Roger Longbotham, Toby Walker, and Ya Xu all of whom seem to be affiliated with Microsoft. The paper itself recounted five online statistical experiments mostly done at Microsoft that had informative counter-intuitive results: Overall Evaluation Criteria for Bing Click Tracking Initial Effects Experiment Length Carry Over Effects. The main lessons learned were: Be careful what you wish for. – Short term effects may be diametrically opposed to long-term effects. Specifically, a high number clicks or queries per session could be indicative of a bug rather than success. It's important to choose the right metric. The authors ended up focusing on "sessions per user" as a metric as opposed to "queries per month" partly due to a bug which increased (in the short-term) queries and revenues while degrading the user's experience. Initial results are strongly affected by "Primacy and Novelty". – In the beginning, experienced users may click on a new option just because it is new, not because it's good. On the other hand, experienced users may be initially slowed by a new format even if the new format is "better". If reality is constantly changing, the experiment length may not improve the accuracy of the experiment. The underlying behavior of the users may change every month. A short-term experiment may only capture a short-term behavior. Rather than running the experiment for years, the best option may be to run several short-term experiments and adapt the website to the changing behavior as soon as the new behavior is observed. If the same user is presented with the same experiment repeatedly, her reaction to the experiment is a function of the number of times she has been exposed to the experiment. This effect must be considered when interpreting experimental results. The Poisson Distribution should not be used to model clicks. They preferred Negative Binomial. The paper is easy to read, well written, and rather informative. It is especially good for web analytics and for anyone new to experimental statistics. 
I found the references below to be especially interesting: "Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society" by Manzi (book) "Web Analytics: An Hour per Day" by Kaushik (book) "Controlled experiments on the web: survey and practical guide" by Kohavi, Longbotham, Sommerfield, and Henne (2009) "Seven Pitfalls to Avoid when Running Controlled Experiments on the Web" by Crook, Frasca, Kohavi, Longbotham (2009) "Linear Bandits in High Dimension and Recommendation Systems" March 4, 2013 in Multi-Armed Bandit Problem, Optimization by hundalhh | Permalink Thanks to Nuit Blanche for pointing me towards the presentation by Andrea Montanari "Collaborative Filtering: Models and Algorithms" and the associated Deshpande and Montanari paper "Linear Bandits in High Dimension and Recommendation Systems" (2012). In the presentation, Montanari reviews Spectral, Gradient Descent, Stochastic Gradient Descent, Convex Relaxation, and Linear Bandit methods for approximating the standard linear model for recommendation systems and some accuracy guarantees. Assuming the $j$th movie has features $v_{j1}, v_{j2}, \ldots, v_{jr}$, then the $i$th viewer gives the rating $R_{ij} = \langle u_i, v_j \rangle +\epsilon_{ij}$ where $u_i$ is an $r$ dimensional vector representing the preferences of the $i$th viewer and $\epsilon_{ij}$ is Gaussian noise. The paper introduces a new Linear Bandit method, Smooth Explore, better suited for recommendation systems. Their method is motivated by the three objectives: Constant-optimal cumulative reward, Constant-optimal regret, and Approximate monotonicity (rewards approximately increase with time). Smooth Explore estimates the user preference vectors with a regularized least squares regression. Proofs of optimality and numerical results are provided. "The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo" February 6, 2013 in Graphical Models, Optimization, Statistics by hundalhh | Permalink In "The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo", Hoffman and Gelman present an improvement of the Markov Chain Monte Carlo and the Hamiltonian Monte Carlo methods. Here's the abstract: Hamiltonian Monte Carlo (HMC) is a Markov chain Monte Carlo (MCMC) algorithm that avoids the random walk behavior and sensitivity to correlated parameters that plague many MCMC methods by taking a series of steps informed by first-order gradient information. These features allow it to converge to high-dimensional target distributions much more quickly than simpler methods such as random walk Metropolis or Gibbs sampling. However, HMC's performance is highly sensitive to two user-specified parameters: a step size and a desired number of steps L. In particular, if L is too small then the algorithm exhibits undesirable random walk behavior, while if L is too large the algorithm wastes computation. We introduce the No-U-Turn Sampler (NUTS), an extension to HMC that eliminates the need to set a number of steps L. NUTS uses a recursive algorithm to build a set of likely candidate points that spans a wide swath of the target distribution, stopping automatically when it starts to double back and retrace its steps. Empirically, NUTS performs at least as efficiently as and sometimes more efficiently than a well tuned standard HMC method, without requiring user intervention or costly tuning runs. We also derive a method for adapting the step size parameter $\epsilon$ on the fly based on primal-dual averaging.
NUTS can thus be used with no hand-tuning at all. NUTS is also suitable for applications such as BUGS-style automatic inference engines that require efficient "turnkey" sampling algorithms.
Methodology Article Intervention in prediction measure: a new approach to assessing variable importance for random forests Irene Epifanio BMC Bioinformatics 2017 18:230 Random forests are a popular method in many fields since they can be successfully applied to complex data, with a small sample size, complex interactions and correlations, mixed type predictors, etc. Furthermore, they provide variable importance measures that aid qualitative interpretation and also the selection of relevant predictors. However, most of these measures rely on the choice of a performance measure. But measures of prediction performance are not unique or there is not even a clear definition, as in the case of multivariate response random forests. A new alternative importance measure, called Intervention in Prediction Measure, is investigated. It depends on the structure of the trees, without depending on performance measures. It is compared with other well-known variable importance measures in different contexts, such as a classification problem with variables of different types, another classification problem with correlated predictor variables, and problems with multivariate responses and predictors of different types. Several simulation studies are carried out, showing the new measure to be very competitive. In addition, it is applied in two well-known bioinformatics applications previously used in other papers. Improvements in performance are also provided for these applications by the use of this new measure. This new measure is expressed as a percentage, which makes it attractive in terms of interpretability. It can be used with new observations. It can be defined globally, for each class (in a classification problem) and case-wise. It can easily be computed for any kind of response, including multivariate responses. Furthermore, it can be used with any algorithm employed to grow each individual tree. It can be used in place of (or in addition to) other variable importance measures. Keywords: Random forest; Variable importance measure; Multivariate response; Feature selection; Conditional inference trees. High-dimensional problems, those that involve so-called p>n data [1], are of great importance in many areas of computational biology. Predicting problems in which the number of features or variables p is much larger than the number of samples or observations n is a statistical challenge. In addition, many bioinformatics data sets contain highly correlated variables with complex interactions, and they may also contain variables that are irrelevant to the prediction. Furthermore, data sets may contain data of a mixed type, i.e. categorical (with a different number of categories) and numerical, not only as predictors but also as outputs or responses. Decision trees are a nonparametric and highly nonlinear method that can be used successfully with that kind of challenging data. Furthermore, they are robust to outliers in the input space, invariant to monotone transformations of numerical predictors, and can also handle missing values. Thanks to these properties, decision trees have become a very popular tool in bioinformatics and data mining problems in general. However, the predictive power of decision trees is their Achilles heel. A bagging strategy can be considered to improve their individual performance. A random forest (RF) is an ensemble of a large collection of trees [2]. There are several types of RFs according to the type of response.
If the response is categorical, we refer to RF classification. If the response is continuous, we refer to RF regression. If the responses are right censored survival data, we refer to Random Survival Forests. Multivariate RFs refer to RFs with multiple responses [3]. RFs with only one response are applied to many different problems in bioinformatics [4, 5]. However, the number of studies with multivariate RFs is much smaller [6]. Besides good performance, another advantage of RFs is that they require little tuning. Another property of RFs that makes them attractive is that they return variable importance measures (VIMs). These VIMs can be used to rank variables and identify those which most influence prediction. This favors interpretability. Predictors are not usually equally relevant. In fact, often only a few of them have a substantial influence on the response, i.e. the rest of them are irrelevant and could have been excluded from the analysis. It is often useful to learn the contribution or importance of explanatory variables in the response [1]. The most widely used VIMs for RFs, such as the Gini VIM (GVIM), permutation VIM (PVIM) and conditional permutation VIM (CPVIM) [7], rely on the choice of a performance measure. However, measures of prediction performance are not unique [8]. Some examples for classification are misclassification cost, Brier score, sensitivity and specificity measures (binary problems), etc., while some examples for regression problems are mean squared error, mean absolute error, etc. In the case of unbalanced data, i.e. data where response class sizes differ considerably, the area under the curve (AUC) is suggested by [9] instead of the common error rate. There is no clear appropriate performance measure for survival data [10], less so in the case of multivariate response and even less so if the responses are of different types. To solve this issue, an approach for selecting variables that depends on the structure of the trees, without depending on performance measures, was proposed by [10]. They proposed an algorithm based on the minimal depth (MD) statistic, i.e. based on the idea that variables that tend to split close to the root node should have more importance in prediction. By removing the dependence on performance measures, the arrangement of the trees gains strength, as in the case of splitting rules. Recently, the author has proposed a new alternative importance measure in RFs called Intervention in Prediction Measure (IPM) [11] in an industrial application. IPM is also based on the structure of the trees, like MD. Therefore, it is independent of any prediction performance measure. IPM only depends on the forest and tree parameter settings. However, unlike MD, IPM is a case-based measure. Note also that IPM is expressed as a percentage, which makes it attractive in terms of interpretability. IPM can be used with new observations that were not used in the RF construction, without needing to know the response, unlike other VIMs. IPM can be defined globally, for each class (in a classification problem) and locally. In addition, IPM can easily be computed for any kind of response, including multivariate responses. IPM can be used with any algorithm employed to grow each individual tree, from the Classification And Regression Trees (CART) algorithm developed by [12] to Conditional Inference Trees (CIT) [13]. 
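The exact IPM definition is given later in the paper, together with an R implementation in its Additional file 2; the following Python/scikit-learn sketch is only my rough, single-tree illustration of the underlying idea, namely counting which predictors actually intervene in each case's prediction path. The data set, parameter values and variable names here are arbitrary choices, not those of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=6, n_informative=3, random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

node_feature = tree.tree_.feature      # feature split on at each node (negative for leaves)
paths = tree.decision_path(X)          # sparse indicator matrix: cases x nodes visited

counts = np.zeros(X.shape[1])
for i in range(X.shape[0]):
    nodes = paths.indices[paths.indptr[i]:paths.indptr[i + 1]]
    for node in nodes:
        f = node_feature[node]
        if f >= 0:                     # skip leaf nodes
            counts[f] += 1

ipm_like = 100 * counts / counts.sum() # percentage of prediction-path splits using each variable
print(ipm_like.round(1))
```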
The objective of this work is to compare the new IPM with other well-known VIMs in different contexts, such as a classification problem with variables of different types, another classification problem with correlated predictor variables, and problems with multivariate responses and predictors of different types. Several simulation studies were carried out to show the competitiveness of IPM. Furthermore, the objective is also to stress the advantages of using IPM in bioinformatics. Consequently, the use of IPM is also illustrated in two well-known bioinformatics applications previously employed in other papers [14–16]. Although the majority of the data used here are not p>n, they could be relevant to this kind of scenarios, but this is something to be explored in the future. As mentioned previously, trees are a nonlinear regression procedure. Trees are grown by binary recursive partitioning. The broad idea behind binary recursive partitioning is to iteratively choose one of the predictors and the binary split in this variable in order to ultimately fit a constant model in each cell of the resulting partition, which constitutes the prediction. Two known problems with such models are overfitting and a selection bias towards predictors with many possible splits or missing values [15]. To solve these problems, [13] proposed a conditional inference framework (CIT) for recursive partitioning, which is also applicable to multivariate response variables, and it will be used with IPM. As outlined above, trees are a low-bias but high-variance technique, which makes them especially suited for bagging [1]. Growing an ensemble of trees significantly increases the accuracy. The term random forest was coined by [2] for techniques where random vectors that control the growth of each tree in the ensemble are generated. This randomness comes from randomly choosing a group of mtry (mtry <<p) predictors to split on at each node and bootstrapping a sample from the training set. The non-selected cases are called out-of-bag (OOB). Here, RFs based on CART (CART-RF) and CIT (CIT-RF) are considered since the VIMs reviewed later are based on these. Both are implemented in R [17]. Breiman's RF algorithm [2] is implemented in the R package randomForest [18, 19] and also in the R package randomForestSRC [20–22], while CIT-RF can be found in the R package party [23–25]. Multivariate RFs can be computed by the R package randomForestSRC and the R package party, but not by the R package randomForest. In the R package randomForestSRC, for multivariate regression responses, a composite normalized mean-squared error splitting rule is used; for multivariate classification responses, a composite normalized Gini index splitting rule is used; and when both regression and classification responses are detected, a multivariate normalized composite split rule of mean-squared error and Gini index splitting is invoked. Regardless of the specific RF implementation, VIMs can be computed, which are a helpful tool for data interpretation and feature selection. VIMs can be used to obtain a ranking of the predictors according to their association with the response. In the following section, the most used VIMs are briefly reviewed and our IPM proposal is introduced. Random forest variable importance measures and variable selection procedures The most popular VIMs based on RF include GVIM, PVIM and CPVIM. GVIM and PVIM are derived from CART-RF and can be computed with the R package randomForest [18, 19]. 
PVIM can also be derived from CIT-RF and obtained with the R package party [23–25]. CPVIM is based on CIT-RF, and can be calculated using the R package party. A very popular stepwise procedure for variable selection using PVIM is the one proposed by [26] (varSelRF), which is available from the R package varSelRF [27, 28]. The procedure for variable selection based on the tree-based concept termed MD proposed by [10] (varSelMD) is available from the R package randomForestSRC [20–22]. The results of the chosen variables for variable selection methods can be interesting, although the objective of variable selection methods is not really returning the importance of variables, but returning a set of variables that are subject to a certain objective, such as preserving accuracy in [26]. The underlying rationale is that the accuracy of prediction will not change if irrelevant predictors are removed, while it drops if relevant ones are removed. Table 1 gives an overview of the methods. Summary of some characteristics of VIMs GVIM PVIM (CART-RF) PVIM (CIT-RF) CPVIM varSelRF varSelMD Main references [13, 15] [11] and this manuscript Key characteristic Node impurity Accuracy after variable permutation Alternative of PVIM; Conditional permutation Backward elimination Variable selection based on MD Variables intervening in prediction RF-based CART-RF CIT-RF CART-RF or CIT-RF Handling of response Univariate All (multivariate included) Main R implementation randomForest [18, 19] randomForest [18, 19] randomForestSRC [20–22] varSelRF [27, 28] randomForestSRC [20–22] Additional file 2 Casewise importance GVIM is based on the node impurity measure for node splitting. The node impurity is measured by the Gini index for classification and by the residual sum of squares for regression. The importance of a variable is defined as the total decrease in node impurities from splitting on the variable, averaged over all trees. It is a global measure for each variable; it is not defined locally or by class (for classification problems). When there are different types of variables, GVIM is strongly biased in favor of continuous variables and variables with many categories (the statistical reasons for this are well explained by [15]). PVIM The importance of variable k is measured by averaging over all trees the decrease in accuracy between the prediction error for the OOB data of each tree and the same after permuting that predictor variable. The PVIM derived from CIT-RF is referred to as PVIM-CIT-RF. Although CIT-RF can fit multivariate responses, PVIM cannot be computed (as previously discussed, there is no clear appropriate performance measure for multivariate responses). The PVIM derived from CART-RF is referred to as PVIM-CART-RF, which is scaled (normalized by the standard deviation of the difference) by default in the randomForest function from the R package randomForest [18, 19]. The problems of this scaled measure are explored by [29]. According to [30], PVIM is often very consistent with GVIM. PVIM can be computed for each class, and it can also be computed casewise. The local or casewise variable importance is the increase in percent of times a case i is OOB and misclassified when the variable k is permuted. This option is available from the R package randomForest [18, 19], but not from the R package party [23–25]. An alternative version of PVIM to correct bias for correlated variables is CPVIM, which uses a conditional permutation scheme [25]. 
According to [31], CPVIM would be more appropriate if the objective is to identify a set of truly influential predictors without considering the correlated effects. Otherwise, PVIM would be preferable, as correlations are an inherent mutual property of predictors. In CPVIM, the variable importance of a predictor is computed conditionally on the values of other associated/correlated predictor variables, i.e. possible confounders are taken into account, unlike PVIM. The concept of confounding is well illustrated with a simple example considered in [32] (see [32] for a more extensive explanation): a classification problem for assessing fetal health during pregnancy. Let Y be the response with two possible values (Y=0 if the diagnosis is incorrect and Y=1 otherwise). Let us consider the following predictors: X 1, which assesses the quality of ultrasound devices in the hospital, X 2, which assesses whether the hospital staff are trained to use them and interpret the images and X 3, which assesses the cleanliness of hospital floors. Note that X 2 is related to Y and X 3, which are linked to the hospital's quality standards. If X 2 was not taken into account in the analysis, a strong association between Y and X 3 would probably be found, i.e. X 2 would act as a confounder. In the multiple regression model, if X 2 was included as predictor in the model, the questionable influence of X 3 would disappear. This is the underlying rationale for CPVIM: conditionally on X 2, X 3 does not have any effect on Y. Díaz-Uriarte and Alvarez de Andrés [26] presented a backward elimination procedure using RF for selecting genes from microarray data. This procedure only applies to RF classification. They examine all forests that result from iteratively eliminating a fraction (0.2 by default) of the least important predictors used in the previous iteration. They use the unscaled version of PVIM-CART-RF. After fitting all forests, they examine the OOB error rates from all the fitted random forests. The OOB error rate is an unbiased estimate of the test set error [30]. They select the solution with the smallest number of genes whose error rate is within 1 (by default) standard error of the minimum error rate of all forests. MD assesses the predictiveness of a variable by its depth relative to the root node of a tree. A smaller value corresponds to a more predictive variable [33]. Specifically, the MD of a predictor variable v is the shortest distance from the root of the tree to the root of the closest maximal subtree of v; and a maximal subtree for v is the largest subtree whose root node is split using v, i.e. no other parent node of the subtree is split using v. MD can be computed for any kind of RF, including multivariate RF. A high-dimensional variable selection method based on the MD concept was introduced by [10]. It uses all data and all variables simultaneously. Variables with an average MD for the forest that exceeds the mean MD threshold are classified as noisy and are removed from the final model. Intervention in prediction measure (IPM) IPM was proposed in [11], where an RF with two responses (an ordered factor and a numeric variable) was used for child garment size matching. IPM is a case-wise technique, i.e. IPM can be computed for each case, whether new or used in the training set. This is a different perspective for addressing the problem of importance variables. The IPM of a new case, i.e. one not used to grow the forest and whose true response does not need to be known, is computed as follows. 
The new case is put down each of the ntree trees in the forest. For each tree, the case goes from the root node to a leaf through a series of nodes. The variable split in these nodes is recorded. The percentage of times a variable is selected along the case's way from the root to the terminal node is calculated for each tree. Note that we do not count the percentage of times a split occurred on variable k in tree t, but only the variables that intervened in the prediction of the case. The IPM for this new case is obtained by averaging those percentages over the ntree trees. Therefore, for IPM computation it is only necessary to know the structure of the trees forming the forest; the response is not necessary. The IPM for a case in the training set is calculated by considering and averaging over only the trees where the case belongs to the OOB set. Once the casewise IPMs are estimated, the IPM can be computed for each class (in the case of RF-classification) and globally, averaging over the cases in each class or all the cases, respectively. Since it is a case-wise technique, it is also possible to estimate the IPM for subsets of data, with no need to regrow the forest for those subsets. An anonymous reviewer raised the question of using in-sample observations in the IPM estimation. In fact, the complete sample could be used, which would increase the sample size. This is a matter for future study. Although IPM is not based on prediction, i.e. it does not need the responses for its computation once the RF is built, the responses of in-sample observations were effectively used in the construction of the trees. So brand new and unused data (OOB observations) were preferred for IPM estimation, in order to ensure generalization. In Additional file 1, there is an example using all samples. The new IPM and all of the code to reproduce the results are available in Additional file 2. Comparison studies The performance of IPM in relation to the other well-established VIMs is compared in several scenarios, with simulated and real data. Two different kinds of responses are analyzed with both simulated and real data, specifically RF-classification and Multivariate RF are considered in order to cover the broadest possible spectrum of responses. The importance of the variables is known a priori with simulated data, as we know the model which generated the data. In this way, we can reliably analyze the successes in the ranking and variable selection for each method, and also the stability of the results, as different data sets are generated for each model. For RF-classification, the simulation models are analogous to those considered in previous works. For Multivariate RFs, simulation models are designed starting from scratch in order to analyze their performance under different situations. Analyses are also conducted on real data, which have previously been analyzed in the literature in order to supply additional evidence based on realistic bioinformatics data structures that usually incorporate complex interdependencies. Once importance values are computed, predictors can be ranked in decreasing order of importance, i.e. the most important variable appears in first place. For some methods there are ties (two variables are equally important). In such cases, the average ranking is used for those variables. All the computations are made in R [17]. The packages and parameters used are detailed for each study. Simulated data Categorical response: Scenarios 1 and 2 Two classification problems are simulated. 
In both cases, a binary response Y has to be predicted from a set of predictors. In the first scenario, the simulation design was similar to that used in [15], where predictors varied in their scale and number of categories. The first predictor X 1 was continuous, the other predictors from X 2 to X 5 were categorical with a different number of categories. Only predictor X 2 intervened in the generation of the response Y, i.e. only X 2 was important, the other variables were uninformative, i.e. noise. This should be reflected in the VIM results. The simulation design of Scenario 1 appears in Table 2. The number of cases (predictors and response) generated in each data set was 120. A total of 100 data sets were generated, so the stability of the results could also be assessed. Simulation design for Scenario 1 Y|X 2=0 N(0,1) B(1,0.5) DU(1/4) DU(1/10) B(1,0.5 - rel) B(1,0.5 + rel) The variables are sampled independently from the following distributions. N(0,1) stands for the standard normal distribution. B(1, π) stands for the Binomial distribution with n = 1, i.e the Bernoulli distribution, and probability π. DU(1/n) stands for the Discrete Uniform distribution with values 1, …, n. The relevance parameter rel indicates the degree of dependence between Y and X 2, and is set at 0.1, which is not very high The parameter settings for RFs were as follows. CART-RF was computed with bootstrap sampling without replacement, with n t r e e=50 as in [15], and two values for mtry: 2 (sqrt(p) the default value in [18]) and 5 (equal to p). GVIM, PVIM and IPM were computed for CART-RF. CIT-RF was computed with the settings suggested for the construction of an unbiased RF in [15], again with ntree =50 and mtry equal to 2 and 5. PVIM, CPVIM, and IPM were computed for CIT-RF. varSelRF [27] was used with the default parameters (ntree =5000). varSelMD [20] was used with the default parameters (ntree =1000 and m t r y=2) and also with mtry =5. The simulation design for the second scenario was inspired by the set-up in [25, 31, 34]. The binary response Y was modeled by means of a logistic model: $$P(Y = 1 | X = x) = \frac{e^{x^{T}\beta}}{1 + e^{x^{T}\beta}} $$ where the coefficients β were: β=(5,5,2,0,−5,−5,−2,0,0,0,0,0) T . The twelve predictors followed a multivariate normal distribution with mean vector μ=0 and covariance matrix Σ, with σ j,j =1 (all variables had unit variance), \(\sigma _{j,j'} = 0.9\phantom {\dot {i}\!}\) for j ≠ j ′≤ 4 (the first four variables were block-correlated) and the other variables were independent with \(\phantom {\dot {i}\!}\sigma _{j,j'} = 0\). The behavior of VIMs under predictor correlation could be studied with this model. As before, 120 observations were generated for 100 data sets. The parameter settings for RFs were as follows. CART-RF was computed with bootstrap sampling without replacement, with ntree =500 as in [25], and two values for mtry: 3 (sqrt(p) the default value in [18]) and 12 (equal to p). GVIM, PVIM and IPM were computed for CART-RF. CIT-RF was computed with the settings suggested for the construction of an unbiased RF in [15], again with ntree =500 and mtry equal to 3 and 12. PVIM, CPVIM, and IPM were computed for CIT. varSelRF was used with the default parameters (ntree =5000). varSelMD was used with the default parameters (ntree =1000 and mtry =3) and also with mtry =12. Multivariate responses: Scenarios 3 and 4 Again two scenarios were simulated. 
The design of the simulated data was inspired by the type of variable composition of the real problem with multivariate responses that would be analyzed. In this problem, responses were continuous and there were continuous and categorical predictors. The configuration of the third and fourth scenarios were quite similar. Table 3 reports the predictor distributions, which were identical in both scenarios. Table 4 reports the response distributions, two continuous responses per scenario. Of the 7 predictors, only two were involved in the response simulation: the binary X 1 and the continuous X 2. However, in the fourth scenario X 2 only participated in the response generation when X 1=0. This arrangement was laid out in this way to analyze the ability of the methods to detect this situation. The rest of the predictors did not take part in the response generation, but X 5 was very highly correlated with X 2. In addition, the noise predictors X 6 (continuous) and X 7 (categorical) were created by randomly permuting the values of X 2 and X 1, respectively, as in [10]. The other irrelevant predictors, X 3 and X 4 were continuous with different distributions. As before, 120 observations in each scenario were generated for 100 data sets. Simulation design of predictors for Scenario 3 and 4 Unif(4,6) X 2 + N(0,0.15) P(X 2) The variables are sampled independently from the following distributions. B(1, π) stands for the Binomial distribution with n = 1, i.e the Bernoulli distribution, and probability π. N(μ,σ) stands for the normal distribution with mean μ and standard deviation σ. Unif(a,b) stands for the continuous uniform distribution on the interval [a,b]. P(X) stands for random permutation of the values generated in the variable X Simulation design of responses for Scenario 3 and 4 Y 1|X 1=0 2 + 2 ·X 2 + N(0,0.1) 2 + 3 ·X 2 + N(0,0.15) 4 + N(0,0.2) The variables are sampled independently from the following distributions. N(μ,σ) stands for the normal distribution with mean μ and standard deviation σ With multivariate responses, only two VIMs could be computed. varSelMD was used with the default parameters (ntree =1000 and mtry =3) and also with mtry =7. IPM was computed for CIT-RF with the settings suggested for the construction of an unbiased RF in [15], with ntree =1000 and m t r y=7. Real data: Application to C-to-U conversion data and application to nutrigenomic study Two well-known real data sets were analyzed. The first was a binary classification problem, where the predictors were of different types. In the second, the response was multivariate and the predictors were continuous and categorical, as in the first set. The first data set was the Arabidopsis thaliana, Brassica napus, and Oryza sativa data from [14, 15], which can be downloaded from Additional file 2. It applies to C-to-U conversion data. RNA editing is the process whereby RNA is modified from the sequence of the corresponding DNA template [14, 15]. For example, cytidine-to-uridine (C-to-U) conversion is usual in plant mitochondria. Although the mechanisms of this conversion are not known, it seems that the neighboring nucleotides are important. Therefore, the data set is formed by 876 cases, each of them with the following recorded variables (one response and 43 predictors): Edit, with two values (edited or not edited at the site of interest). This is the binary response variable. 
The 40 nucleotides at positions –20 to 20 (named with those numbers), relative to the edited site, with 4 categories; the codon position, cp, which is categorical with 4 categories; the estimated folding energy, fe, which is continuous, and the difference in estimated folding energy (dfe) between pre-edited and edited sequences, which is continuous. The second data set derives from a nutrigenomic study [16] and is available in the R package randomForestSRC [20] and Additional file 2. The study examines the effects of 5 dietary treatments on 21 liver lipids and 120 hepatic gene expressions in wild-type and PPAR-alpha deficient mice. Therefore, the continuous responses are the lipid expressions (21 variables), while the predictors are the continuous gene expressions (120 variables), the diet (categorical with 5 categories), and the genotype (categorical with 2 categories). The number of observations is 40. According to [16], in vivo studies were conducted under European Union guidelines for the use and care of laboratory animals and were approved by their institutional ethics committee. Figure 1 shows the ranking distribution of X 2 for VIMs applied to Scenario 1. This information is also displayed in table form in Additional file 1: Table S1. The results for other sample sizes are shown in Additional file 1: Figures S1 and S2. In MD, the ranking according to minimal depth returned by the variable selection method varSelMD is shown. In theory, as X 2 was the only relevant predictor, X 2 should be in first place (the most important). The other uninformative variables should be in any place from 2nd to 5th, i.e. on average 3.5. The method which best identifies X 2 as the most important predictor is IPM from CIT-RF with mtry =5, for which X 2 was detected as the most important on 69% of occasions. The second best method is PVIM-CIT-RF with mtry =5, although it only identified X 2 as the most important predictor on 54% of occasions. It is not surprising that the methods based on CART-RF do not obtain good results due to the nature of the problem, since there are different types of predictors and different numbers of categories. In this situation, CIT provides a better alternative to CART, as is well explained in [15]. This statement is also corroborated by the results shown in Fig. 2, where the average rankings for each variable are shown. This information is also displayed in table form in Additional file 1: Table S2. The results for other sample sizes are shown in Additional file 1: Figures S3 and S4. Note that GVIM, MD and IPM from CART-RF selected X 2 erroneously most times as the least important predictor, and X 5, which is irrelevant, as the most important one. IPM from CIT-RF with mtry =5 was the method with the lowest average ranking for X 2, i.e. that which gave the highest importance, in average terms, for X 2. As regards the variable selection methods varSelRF and varSelMD, they have to be analyzed differently, as predictors are not ranked. The percentage of times that X 2 belonged to the final selected model of varSelRF was 66%, despite selecting two variables on 82 occasions, three variables on 15 occasions and four variables on 3 occasions. Remember that only X 2 was relevant. Note that IPM from CIT-RF with mtry =5 detected X 2 as the most important predictor on 69% of occasions, and X 2 was among the two most important predictors on 84% of occasions (much greater than 66%). The results for varSelMD were very poor with both mtry values. 
The method varSelMD with mtry =2 selected four predictors on 20% of occasions and five predictors, i.e. all the predictors, the remaining 80% of the times. It selects X 2 on 80% of occasions, precisely when all the variables were chosen. In other words, it selected the four non-relevant variables and left X 2 out of the model on 20% of occasions, and the remaining 80% of the times it did not make any selection, as it chose all the variables, including those which were irrelevant. The method varSelMD with mtry =5 selected all the predictors on 24% of occasions, four predictors 72% and three predictors 4%. X 2 was among those selected on only 26% of occasions (when all the variable were selected on 24% of occasions). Ranking distribution of X 2 for VIMs in Scenario 1. Barplots with the ranking distribution (in percentage) of X 2. The darker the bar, the greater the importance of X 2 for that method Average ranking of variables for VIMs in Scenario 1. Barplots with the average ranking of variables for VIMs. The lower the bar corresponding to X 2, the greater the importance of X 2 for that method IPM values are also easy to interpret, since they are positive and add one. The average IPM (from CIT-RF with mtry =5) values of cases in the 100 data sets for each variable were: 0.18 (X 1), 0.31 (X 2), 0.18 (X 3), 0.17 (X 4) and 0.16 (X 5). So X 2 was the most important, whereas it gave more or less the same importance to the other variables. An issue for further research is to determine from which threshold (maybe depending on the number of variables) a predictor can be considered irrelevant. IPM can also be computed in class-specific terms, as PVIM-CART-RF. (They can also be computed casewise, but we omit those results in the interests of brevity). As an illustrative example, results from a data set are examined. In [11] we showed two problems for which the results of IPM by class were more consistent with that expected than those of PVIM-CART-RF by class, and this is also the case with the current problem. Table 5 shows the importance measures by group and globally. The IPM rankings seem to be more consistent at a glance than those for PVIM-CART-RF. For instance, the ranking by PVIM for class 1 gave X 1 as the most important predictor, whereas X 1 was the fourth (the penultimate) most important predictor for class 0. We computed Kendall's coefficient W [35] to assess the concordance. Kendall's coefficient W is an index of inter-rater reliability of ordinal data [36]. Kendall's W ranges from 0 (no agreement) to 1 (complete agreement). Kendall's W for the ranking of PVIM CART-RF (mtry =2) for class 0 and 1 was 0.5, whereas for IPM CIT-RF (mtry =5) it was 0.95. We repeated this procedure for each of the 100 data sets, and the average Kendall's W were 0.71 and 0.96 for PVIM-CART-RF (m t r y=2) and IPM CIT-RF (mtry =5), respectively. Therefore, the agreement between the class rankings for IPM was very high. Note that in this case, the importance of predictors followed the same pattern for each response class as reflected by the IPM results, but it could be different in other cases. This has great potential in applied research, as explained in [32, 37]: for example, different predictors may be informative with different cancer subtypes. Analysis by class of a data set in Scenario 1 The first column is the name of the variables. The two following columns correspond to the PVIM ranking (CART-RF, mtry = 2) for each class, whereas the third column is the same but calculated globally (labeled as G). 
The last three columns contain the ranking of the IPM values (CIT-RF, mtry = 5) first by group and the last column computed globally (labeled as G) According to the model generation, the most important variables were X 1, X 2, X 5 and X 6, which were equally important. The following variables in terms of importance were X 3 and X 7, which were also equally important. The other variables were irrelevant. However, there was a correlation pattern between variables X 1, X 2, X 3 and X 4, which were highly correlated, but they were uncorrelated to the other variables. Each VIM can be affected by this in different ways. Theoretically, if we rank the 12 variables by importance (from the most to the least important), the true ranking of each variable should be: 2.5, 2.5, 5.5, 9.5, 2.5, 2.5, 5.5, 9.5, 9.5, 9.5, 9.5, 9.5. Note that X 1, X 2, X 5 and X 6 should be in any of the first four positions, and 2.5 is the mean of 1, 2, 3 and 4. Analogously, X 3 and X 7 should be in 5th or 6th position, and 5.5 is the mean of these two values. Similarly, for the other variables, the mean of the 7th, 8th, 9th, 10th, 11th and 12th positions is 9.5. Figure 3 shows the (flipped) average ranking (from the 100 data sets) for each method with mtry =3 and m t r y=12. The results for other sample sizes are shown in Additional file 1: Figures S5 and S6. As in [25], correlated predictors were given high importance with small mtry values, although X 3 was not so important, even when X 4 was irrelevant. Note that with m t r y=3, only MD gave higher importance (least ranking) to X 5 and X 6 than to the irrelevant X 4. This is due to the high correlation with truly important variables X 1 and X 2. This effect is mitigated when mtry is increased. For mtry =12, the ranking profiles were more like the true one. The closest ranking profile to the true one was that given by IPM CIT-RF. The profiles of MD and IPM CART-RF were near to IPM CIT-RF, and all these methods are based on the tree structure. The IPM CIT-RF ranking for the most important (equally) variables was around 3 for all of them. For other methods there was more variation among the rankings of these four variables. For example, for CPVIM the average ranking for X 1 and X 2 was around 2.5, but it increased to around 4 for X 5 and X 6, despite being equally important. For the second equally important variables (X 3 and X 7), the IPM CIT-RF ranking gave a value of 5.5 for X 3 (equal to the true one) and 7.5 for X 7. The other methods gave more importance (lower ranking) to X 3 and less importance to X 7 (higher ranking), i.e. the other methods were further away from the true ranking. As regards the irrelevant variables, the IPM CIT-RF ranking gave a value of 6.8 for X 4 and around 9 for the other uninformative variables. The X 4 ranking for other methods was lower, i.e. they erroneously placed more importance on the uninformative variable X 4. Therefore, IPM CIT-RF with m t r y=12 better identified the true pattern of importance. VIM rankings in Scenario 2. Average ranking in reverse order (a high number refers to high importance) for each VIM, for mtry =3 and mtry =12. The code of each VIM appears in the figure legend. As the y axis of the figure is flipped, 12 indicates a very important variable, whereas 1 a low important predictor. Theoretically, the true representation would be in this figure: 10.5, 10.5, 7.5, 3.5, 10.5, 10.5, 7.5, 3.5, 3.5, 3.5, 3.5, 3.5 The variable selection methods do not rank the variables, so the analysis can only be based on the selections. 
The distributions of the selections can be seen in Table 6. The number of variables selected by varSelRF varied from 2 to 8, although the most frequent numbers were 5 (on 31 occasions) and 6 (on 44 occasions). Note that the uninformative variable X 4 was selected more times than the most important variables X 5 and X 6. As regards varSelMD, the results were good and were in line with MD. For m t r y=3, it selected 3 to 6 variables: four variables on 26 occasions, five variables on 51 occasions and six variables on 20 occasions. It detected the most important variables, although X 4 was incorrectly selected 62% of the times, and X 7 was only selected 3% of the times. The same happened for varSelMD with m t r y=12, although this time the number of X 4 selections was lower (30%) and the number of X 7 selections was higher (21%). The number of variables selected by varSelMD with mtry =12 ranged from 3 to 6, although it usually selected 4 (40%) or 5 (44%) variables. Remember that varSelRF takes into account the error rate for selecting the solution. This could be the reason why X 4 is frequently selected by this method, because X 4 is highly correlated with other variables in the model generation. This could also be the reason why the results for varSelRF in Scenario 1 were not as bad as for other methods based on CART-RF. Distribution (in percentage) of selections for variable selection methods in Scenario 2 varSelMD (mtry = 3) varSelMD (mtry = 12) IPM for a new case IPM can be computed for a new case using either CIT-RF or CART-RF. Let us consider one of the data sets. A new artificial case with a value of zero in all the variables, i.e. the population mean of the generating model, can be built. According to the model construction, the importance pattern should be as discussed above. Figure 4 shows the IPM with mtry =12 for CIT-RF and CART-RF for this new case. Note that the other methods do not allow importance values to be computed for new cases. The four most important variables match the generating model. Variable X 3 was the following most important. Variable X 7 was as important as X 3, but as before, its importance was underestimated. The importance of the other variables was negligible, although IPM CART-RF attached some importance to X 4. Therefore, RF not only predicts the response of unlabeled samples (with unknown status), but also the importance of variables for each of those samples can be obtained with IPM. As discussed previously, local importance could reveal variables that are important for a subset of samples of the same class, which could be masked from global importance values [37]. This has potential and it is something to be explored further in the future, as only few studies have used local importances as yet [38, 39]. However, this should be approached with caution, as local importances could be noisier since they are based on smaller sample sizes. IPM for a new case in Scenario 2. IPM values with mtry =12 for CIT-RF (dark grey) and CART-RF (light grey) Application to C-to-U conversion data First of all, let us analyze the VIMs. The composition of this data is similar to the data structure of Scenario 1 (predictors of different types), so only the results for the best methods in Scenario 1 are shown here. PVIM was computed with CIT-RF as in Scenario 1, with n t r e e=50 and m t r y=3 as in [15]. CPVIM could not be computed for this data set (not even if the threshold was changed) due to the high storage needs. 
But, as shown in [25], with increasing mtry values the unconditional importance resembled the behavior of the conditional importance, so PVIM-CIT-RF with mtry =43 was also considered. IPM-CIT-RF with mtry =43 was also computed. RFs are a randomized method. Therefore, the RF was computed 100 times from different seeds to gain stability in VIMs values. The VIM values from the 100 replications were averaged, and are displayed by barplots in Fig. 5. The results are similar to those in [15], although with some slight differences. As in [14, 15], position -1 was very important, followed by position 1, which was not detected in [14]. Note that GVIM, the method with the worst results in Scenario 1, was used in [14]. In [15], fe and dfe were somewhat important (only fe in [14]), but according to the results with PVIM (mtry =43) and IPM, cp (more than fe) and fe were somewhat important, but not dfe. Furthermore, according to IPM, there were also two somewhat important variables: positions -13 and 13. VIM barplots for the C-to-U conversion data. The first row corresponds to PVIM-CIT with mtry = 3, the second row to mtry = 43, while the bottom row to IPM-CIT-RF values with mtry = 43. In all cases, values are averaged over 100 replications. The variable names appear at bottom of the barplots In this real case, we do not know the true importance pattern of the variables. So, besides the variable importance study, let us also analyze the prediction accuracy in this data set. The same scheme as in [15] was considered. The original data were split into a training and test set with a size ratio of 2:1. This procedure was repeated 100 times. Each time the following operations were carried out. A RF-CART with bootstrap sampling without replacement, with ntree =50 and mtry =3 as in [15] was grown using the training set, and observations in the test set were predicted with this RF. The same procedure was performed with a CIT-RF with the settings suggested for the construction of an unbiased RF in [15], again with ntree =50 and mtry =3, as in [15]. These two procedures were used in [15]. On this occasion, the variable selection varSelRF was considered (varSelMD was not considered due to the poor results obtained in Scenario 1). An RF-CART was built as before, but using only the variables selected by varSelRF, and predictions in the test set were calculated. This is a kind of regularization, which attempts to erase noise (uninformative predictors) to improve the prediction. The same strategy was employed to exploit the best VIMs in Scenario 1. The idea was as follows. The same VIMs that appeared before in Fig. 5 were considered. The predictors were ranked according to these VIMs. The m=10 most important predictors were selected, and used to grow a CIT-RF with the settings suggested for the construction of an unbiased RF in [15], again with n t r e e=50 and m t r y=3. The test set was predicted by this RF. The value m=10 (a round number) was an intermediate value, not too small or too high, in view of the importance patterns displayed in Fig. 5 (there were not many important variables among the 43 predictors). The tuning parameter m should be further investigated, but the results are reported here without refining the value of m. The mean and standard deviation of the misclassification rates over the 100 runs appear in Table 7. Mean and standard deviation of misclassification rates in C-to-U conversion data RF-CART RF-CIT R-PVIM (mtry = 3) R-PVIM (mtry = 43) R- IPM Std. 
deviation Results produced by using the regularization procedure derived by VIMs are preceded by an R The results obtained are similar those that appear in [15]. The greatest successes were achieved by methods conducted with CIT, as expected due to the data composition with predictors of different types. Furthermore, the regularization strategy using VIMs reported the best performance. In particular, the best results were achieved by using PVIM and IPM with CIT-RF and m t r y=43 to select the most important predictors before refitting the RF-CIT. In fact, both methods significantly (p-values of Student's t-test for paired samples are well below 0.05) improved on the results given by RF-CART, CIT-RF and varSelRF. Scenarios 3 and 4 Tables 8 and 9 show the average ranking (from the 100 data sets) for each method. The results for other sample sizes are shown in Additional file 1: Tables S3, S4, S5 and S6. For scenario 3 and 4, variables X 1 and X 2 participated in the response generation, so both predictors were important. However, their importance level differs in each scenario. In scenario 4, X 2 only intervened in the response generation when X 1=0, so intuitively it should have had less importance than in scenario 3. As X 1 and X 2 participated in the response generation in a different way, and they are also variables of different types, it is difficult to judge their relative importance theoretically. In any case, X 1 and X 2 should rank in the first or second positions in scenario 3, while the other irrelevant variables should rank in 5th position (the mean of positions 3, 4, 5, 6, and 7). Average ranking of variables for VIMs in Scenario 3 MD (mtry = 3) IPM (CIT-RF, mtry = 7) In scenario 3, IPM considered X 2 as the most important variable, followed by X 1 in all the runs. The average IPM values were 32% for X 1 and 68% for X 2, and near zero for the other variables. The average rank for the other variables was around 5, except for X 5 (the variable that was highly correlated with X 2), with an average ranking of 3.4. Nevertheless, MD considered mostly X 1 as the most important variable and in second position X 2, although not in all the runs. For MD with m t r y=3, X 5 was the most important on 8 occasions, and the second most important on 24 occasions. For MD with m t r y=7, X 5 is the most important on 4 occasions, and the second most important on 8 occasions. Furthermore, MD always ranked X 7 in last position (presumably because of its categorical nature), when it was no less important than the other uninformative variables. In scenario 4, MD with mtry =7 considered X 1 as the most important variable, followed by X 2 in all the runs. On the other hand, MD with mtry =3 considered X 1 as the most important in all the runs, as well as IPM. However, X 2 was the second most important in all runs, except on 28 occasions for MD with mtry =3 and 6 occasions for IPM, where X 2 was considered the third. The average IPM values were 40% for X 1, 26% for X 2 and around 7% for the other variables. However, as IPM is defined casewise, IPM can be also computed according to the group values of X 1. Note that this kind of information supplied by IPM about importance in subgroups of data could only be available for MD if the RF was regrown with that subset of data. For samples with X 1=0, the average IPM values were 47% for X 1 and 53% for X 2. Remember that when X 1=0, the variable X 2 intervened in the generation of the responses. 
For samples with X 1=1 (X 2 did not intervene in the generation of the responses), the average IPM values were 35% for X 1, 7% for X 2, 13% for X 3, 13% for X 4, 6% for X 5, 13% for X 6 and 12% for X 7. Note that when X 1=1, neither of the variables intervened in the model generation, so all the variables were equally unimportant. The selection frequency with CIT should be similar [15]. The sum of IPM of the two correlated variables X 2 and X 5 was 13%. Note also that this situation, where neither of the predictors is related with responses, is not expected (nor desirable) in practice. Application to a nutrigenomic study Let us first analyze the VIMs. As the response is multivariate, only MD and IPM-CIT-RF can be computed. This problem is placed in a high-dimensional setting: it deals with large p (122) and small n (40). As explained in [10], as p increases, the tree becomes overwhelmed with variables, so trees will be too shallow. If we compute varSelMD and IPM-CIT-RF with mtry =122 and n t r e e=1000, only the variable diet is selected in both cases. This solution could be viewed as a 'degenerate' solution. Then, the default mtry value in function rfsrc from R package randomForestSRC [20] is used. In this case mtry =p/3 (rounded up), i.e. mtry =41. A total of 34 variables were selected by varSelMD, diet being the least deep and genotype being the fourth least deep. However, except for diet, the depth values were not very different. To provide stability, this procedure was repeated 100 times with different seeds. A total of 44 variables were selected in some of the replicates. Half of these, 22 variables, were selected in all the replicates and 27 of them were selected on more than 75% of occasions. In particular, these were the following 27 predictors (the number of times they were selected over the 100 replicates is given in brackets): ACAT2 (100), ACBP (100), ACC2 (100), ACOTH (100), apoC3 (100), BSEP (89), CAR1 (100), CYP2c29 (100), CYP3A11 (100), CYP4A10 (100), CYP4A14 (100), diet (100), G6Pase (96), genotype (100), GK (77), GSTpi2 (100), HPNCL (100), Lpin (100), Lpin1 (100), Lpin2 (97), Ntcp (100), PLTP (100), PMDCI (100), S14 (100), SPI1.1 (100), SR.BI (99), and THIOL (100). For the 100 replicates of IPM-CIT-RF with m t r y=41, the ranking of the first variables was very stable: diet was the first in all replicates, and genotype the second. The average ranking of the first ten ranked predictors together with the standard deviation for the 100 replicates can be seen in Table 10. Although not suggested by the varSelMD results, the IPM values indicate that four variables accounted for most relevance (nearly 70%). In particular, these are the averaged IPM values for the four variables in brackets: diet (33.7%), genotype (19.5%), PMDCI (7.5%) and THIOL (6.2%). The barplot of these IPM values can be seen in Fig. 6. The seventh and eight most important predictors according to IPM, BIEN and AOX were not selected in any of the replications of varSelMD. Barplot of averaged IPM values for nutrigenomic study. The ranking of the 10 predictors with the largest IPM values appears at the bottom. Their names can be found in Table 10 Average ranking of the first 10 ranked variables in the nutrigenomic study for IPM (SD in brackets) PMDCI THIOL CYP3A11 AOX 12.4 (5.2) Let us analyze the prediction performance. Prediction error was calculated using OOB data. As the responses were continuous, performance was measured in terms of mean-squared-error. 
The prediction error for each of the 21 responses was standardized (dividing by the variance as in the R package randomForestSRC [20]) for proper comparison. The prediction error using the function rfsrc from R package randomForestSRC with the default values was computed, which is referred to as rfsrc. varSelMD was applied as before, and the function rfsrc was again used for prediction afterwards, but the selected variables were used as input instead all the variables. This procedure is referred as varSelMD. Finally, instead of varSelMD, IPM-CIT-RF was applied as before, and the 10 most important variables were selected, for prediction. This procedure was referred as IPM. The standardized errors for each response were averaged over 100 independent experiments, and their results are summarized in Table 11 and displayed in Fig. 7. The lower prediction errors from IPM can be clearly observed. Furthermore, by pooling the samples for each variable and using a Student's t-test for paired samples with α=0.05, IPM significantly improves varSelMD, which in turn improves rfsrc. Boxplots of standardized prediction errors for nutrigenomic study. Distributions of standardized prediction errors for each variable, for methods rfsrc (in blue), varSelMD (in black) and IPM (in red) from 100 replications Mean and standard deviation of standardized prediction errors in the nutrigenomic study rfsrc What are the advantages and limitations of IPM? One of the advantages is the out-performance of IPM in the previous comparisons. In addition, its case-wise constitution should also be highlighted. IPM can be defined globally, for each class (for RF-classification) and locally, as well as for new cases (even if we do not know their true response). Only PVIM can also be computed locally, by class and globally, but except for IPM, none of the other methods are able to estimate a variable's influence in a new case without knowing its true response. Furthermore, IPM can be computed for subsets of data, with no need to regrow the forest for those subsets. Furthermore, IPM is independent of any prediction error, like MD. This is very advantageous as it is not always known how the prediction error can be clearly measured [33], and VIMs (rankings of important variables) can be different if different prediction error measures are employed. Furthermore, as IPM is only based on the tree structures, it can be applied to all kind of forests, regardless of the outcome, from RF-classification to Multivariate RFs. [33] indicates another possible advantage of MD due to the fact that it is not linked to any prediction error, which is also the case of IPM. Although RFs are excellent in prediction for high dimensions, prediction performance breaks down when the number of noisy predictors increases, as it overwhelms the trees. As a consequence, it is difficult to select variables effectively, and methods that are based on prediction error, such as PVIM, may be more susceptible to these effects than methods based on tree structure. As long as a predictor v repeatedly splits across the forest, MD or IPM have a good chance of identifying v, even in the presence of many noisy predictors. As regards computational complexity, IPM basically depends on exploring the trees for each observation, so the most computational burden part is growing the trees. In addition, IPM is conceptually simple and its interpretation is very accessible for everyone as it is expressed in percentages. 
The influence of sample size on the performance of IPM is investigated in the simulated scenarios. Results are shown for sample sizes of n=50 and n=500 in Additional file 1. As indicated by [40], the data sets are usually high dimensional, i. e. with a small sample size relative to the dimension, and RFs with the largest trees are optimal in such studies. However, this is not the case with n=500, where there are few variables with a high number of observation. In such situations, it is desirable to have the terminal node size go up with the sample size [40]. In those cases, the maximum depth (maxdepth) of the trees in RFs may regulate overfitting [41]. As IPM results are based only on the tree structure, it is fundamental to grow trees that are not overfit, otherwise noise is introduced, which can distort the results. The drawbacks of IPM are common to any other rank-based VIM, in the sense that caution is needed when interpreting any linear ranking because it is possible that multiple sets of weak predictive variables may be jointly predictive. A new VIM for RFs, IPM, has been introduced and assessed in different scenarios, within both simulated and real frameworks. IPM can be used in place of (or in addition to) other VIMs. The advantages and limitations of IPM have been highlighted in the previous Section. There also some questions that deserve further research, such as the choice of mtry or maxdepth (for a high n) in RFs for IPM computation. As the objective of IPM is not prediction, but to indicate the contribution of variables to prediction, high mtrys with CIT-RF have given very good performance in the simulation studies carried out. However, when p >n, mtry should be reduced. Besides the qualitative information provided by IPM for understanding problems, if we want to use that information for predicting, we have to select a threshold for selecting the variables for regrowing the RF. In the problems, a round fixed number of the 10 variables with the highest IPM values was selected for predicting purposes, with promising results. An open question would be to explore the selection of this number and its relationship with the distribution of IPM values (possibly selecting variables with IPM values above a certain threshold), together with the number of observations and predictors. A detailed study should be made with scenarios covering p >n cases, such as scenarios with a few relevant variables and many irrelevant variables and scenarios with many slightly relevant variables. Another open question it is to try to perform a theoretical study of IPM, as in [10] for MD. Note that, according to [42], it seems very difficult to carry out a detailed theoretical study of PVIM, but IPM is not a randomization procedure like PVIM. CPVIM: Conditional permutation VIM Classification and Regression Trees CIT: CART-RF: Random forest based on CART CIT-RF: Random forest based on CIT Cytidine-to-Uridine (C-to-U); GVIM: Gini VIM IPM: Intervention in prediction measure MD: Minimal depth OOB: Out-of-bag PVIM: Permutation VIM PVIM-CIT-RF: PVIM derived from CIT-RF PVIM-CART-RF: PVIM derived from CART-RF RF: Residual Sum of Squares (RSS); varSelRF: Variable selection procedure proposed by [26] varSelMD: Variable selection procedure using MD proposed by [10] VIM: The author would like to thank the Editor and two anonymous reviewers for their very constructive suggestions, which have led to improvements in the manuscript. 
This work has been partially supported by Grant DPI2013- 47279-C2-1- R from the Spanish Ministerio de Economía y Competitividad. The funders played no role in the design or conclusions of this study. The R code and all data simulated or analyzed in this work are included in Additional file 2. IE is the only author. The author declares that she has no competing interests. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Additional file 1 Supplementary file. This file shows the results of VIMs in the simulated scenarios for sample sizes of n=50 and n=500. Figure S1: Ranking distribution (in percentage) of X 2 for VIMs in Scenario 1 with n = 50. Figure S2: Ranking distribution (in percentage) of X 2 for VIMs in Scenario 1 with n=500. Figure S3: Average ranking of variables for VIMs in Scenario 1 with n=50. Figure S4: Average ranking of variables for VIMs in Scenario 1 with n=500. Table S1: Ranking distribution (in percentage) of X 2 for VIMs in Scenario 1 with n = 120. Table S2: Average ranking of variables for VIMs in Scenario 1 with n=120. Figure S5: Average ranking for each VIM in Scenario 2, for mtry =3 and mtry =12, with n=50. Figure S6: Average ranking for each VIM in Scenario 2, for mtry =3 and mtry =12, with n=500. Table S3: Average ranking of variables for VIMs in Scenario 3, with n=50. Table S4: Average ranking of variables for VIMs in Scenario 3, with n=500. Table S5: Average ranking of variables for VIMs in Scenario 4, with n=50. Table S6: Average ranking of variables for VIMs in Scenario 4, with n=500. (PDF 166 kb) Additional file 2 R source code. This is a compressed (.zip) file with data and R codes for reproducing the results. There is a file called Readme.txt that explains the contents of six files. Two files contain the two real data sets. The R code for computing IPM for RF-CART and CIT-RF is available in one file. The R code for each of the simulations is available in another file, while the other two files contain the R codes for each application to real data. (ZIP 43 kb) Departament de Matemàtiques and Institut de Matemàtiques i Aplicacions de Castelló, Universitat Jaume I, Campus del Riu Sec, Castelló, 12071, Spain Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning. Data Mining, Inference and Prediction, 2nd ed. New York: Springer-Verlag; 2009.Google Scholar Breiman L. Random forests. Mach Learn. 2001; 45(1):5–32.View ArticleGoogle Scholar Segal M, Xiao Y. Multivariate random forests. Wiley Interdiscip Rev Data Min Knowl Discov. 2011; 1(1):80–7.View ArticleGoogle Scholar Boulesteix AL, Janitza S, Kruppa J, König IR. Overview of random forest methodology and practical guidance with emphasis on computational biology and bioinformatics. Wiley Interdiscip Rev Data Min Knowl Discov. 2012; 2(6):493–507.View ArticleGoogle Scholar Chen X, Ishwaran H. Random forests for genomic data analysis. Genomics. 2012; 99(6):323–9. doi:10.1016/j.ygeno.2012.04.003.View ArticlePubMedPubMed CentralGoogle Scholar Xiao Y, Segal M. 
On memo-viability of fractional equations with the Caputo derivative
Ewa Girejko1, Dorota Mozyrska1 & Małgorzata Wyrwas1
Advances in Difference Equations volume 2015, Article number: 58 (2015)
In this paper viability results for nonlinear fractional differential equations with the Caputo derivative are proved. We give a necessary condition for fractional viability of a locally closed set with respect to a nonlinear function. A specific sufficient condition is also provided.
Fractional calculus deals with derivatives and integrals of arbitrary order; it is a field of mathematics that grows out of the traditional definitions of the integral and derivative operators of calculus. Hence, fractional differential equations are generalizations of ordinary differential equations to equations of arbitrary order. In recent years, fractional differential equations have been investigated by many authors [1–6]. However, the problem of viability of fractional differential equations, which consists in finding at least one solution to the equation starting and staying in a constrained set, has not been well developed so far. The classical viability theory has its origin in the Nagumo theorem [7]; it is widely exploited, starting from ordinary differential equations and reaching differential inclusions based on set-valued maps, with a wide range of applications [8–10]. In this paper we continue the subject of the Nagumo theorem for a fractional differential equation with the Caputo derivative. In [11] we showed a sufficient condition for solutions to be viable with respect to a constrained set. In the present paper a necessary condition of viability of a fractional differential equation with the Caputo derivative is proved. It is not trivial to show such a necessary condition directly. Thus we adapt the idea that we used in [12] for a fractional differential equation with the Riemann-Liouville derivative, namely the initialization problem, which leads to a modification of the problem: we consider viability of solutions in the memory domain. Then we employ the formula that relates these two types of fractional derivatives. This idea allows us to prove the necessity part of the Nagumo theorem using classical tools such as the Bouligand cone and contingent vectors. The paper is organized as follows. In Section 2 we gather preliminary definitions, notations and some results. Section 3 includes the initialization problem. We formulate the inner value problem with the Caputo derivative on the basis of a similar problem with the Riemann-Liouville derivative. The key result is given in Proposition 7. The last section concerns the viability problem. Theorem 11 and its corollaries give necessary conditions of viability of a fractional differential equation with the Caputo derivative. Then an illustrative example is provided. Finally, Theorem 17 gives a basis for formulating a sufficient condition of viability for an equation involving the Caputo derivative that is, however, slightly different from the one that we formulated in [11]. We finish the paper with a block scheme that shows relations among the viability conditions presented in the paper.
Preliminaries
In this section we make a review of notations, definitions, and some preliminary facts, which are useful for the paper.
We recall definitions of fractional integrals of arbitrary order, the Caputo and Riemann-Liouville derivatives of order \(q\in(0,1)\), and a description of special functions in the fractional calculus. ([5, 13, 14]) Let \(\varphi\in L_{1} ([0,t_{1}],\mathbb{R} )\). The integral $$ \bigl(I^{q}_{0+}\varphi\bigr) (t)=\frac{1}{\Gamma(q)} \int_{0}^{t} \varphi(s) (t-s)^{q-1}\,ds , \quad 0< t\leq t_{1}, $$ where Γ is the gamma function and \(q>0\), is called the left-sided fractional Riemann-Liouville integral of order q. Additionally we define \(I^{0}_{0+}:=\pmb{I}\) (identity operator). Note that \(I_{0+}^{q}f(t)=(f*\varphi_{q})(t)\), where \(\varphi_{q}(t)=\frac{t^{q-1}}{\Gamma(q)}\) for \(t>0\), \(\varphi_{q}(t)=0\) for \(t\leq0\), and \(\varphi_{q}\rightarrow\delta(t)\) as \(q\rightarrow0\), with δ the delta Dirac pseudo function. Moreover, fractional integration has the following property: $$ I^{q}_{0+} \bigl(I^{p}_{0+} \varphi \bigr)=I^{q+p}_{0+}\varphi,\quad q\geq0 , p\geq0 . $$ The best known fractional derivatives are the Riemann-Liouville and the Caputo ones. ([5, 13]) Let φ be defined on the interval \([0,t_{1}]\) and n be the natural number satisfying \(n=\lfloor q \rfloor+1\) with \(\lfloor q \rfloor\) denoting the integer part of q. The left-sided Riemann-Liouville derivative of order q and the lower limit 0 is defined through the following: $$ \bigl(D^{q}_{0+}\varphi \bigr) (t)= \frac{1}{\Gamma(n-q)} \biggl(\frac {d}{dt} \biggr)^{n} \int _{0}^{t} \varphi(s) (t-s)^{n-q-1}\,ds . $$ The left-sided Caputo derivative of order q and the lower limit 0 is defined through the following: $$\bigl({}^{C}D^{q}_{0+}\varphi \bigr) (t)=\frac{1}{\Gamma(n-q)} \int_{0}^{t} \varphi^{(n)}(s) (t-s)^{n-q-1}\,ds . $$ If \(q\in(0,1)\), then the left-sided Riemann-Liouville fractional derivative of order q takes the form $$ \bigl(D^{q}_{0+}\varphi \bigr) (t)=\frac{1}{\Gamma(1-q)} \frac{d}{dt} \int_{0}^{t} \varphi(s) (t-s)^{-q}\,ds= \frac{d}{dt} \bigl( \bigl(I^{1-q}_{0+} \varphi \bigr) (t) \bigr) , $$ and the left-sided Caputo fractional derivative of order q takes the form $$ \bigl({}^{C}D^{q}_{0+}\varphi \bigr) (t)= \frac{1}{\Gamma(1-q)} \int_{0}^{t} \varphi^{\prime}(s) (t-s)^{-q}\,ds= \biggl(I^{1-q}_{0+} \frac{d}{ds}\bigl(\varphi(s)\bigr) \biggr) (t) . $$ If \(q\in(0,1]\), then the following comparison formula of the Caputo and Riemann-Liouville derivatives holds. $$ \bigl({}^{C}D^{q}_{0+}\varphi \bigr) (t)= \bigl(D^{q}_{0+} \varphi \bigr) (t)-\frac{t^{-q}}{\Gamma(1-q)} \varphi\bigl(0^{+}\bigr) , $$ where \(\varphi(0^{+})=\lim_{t\rightarrow0^{+}}\varphi(t)\). From [13], Lemmas 2.4 and 2.5, we have the following properties. Proposition 5 If \(q>0\), then \(D^{q}_{0+} (I^{q}_{0+}\varphi )(t) =\varphi(t)\) for any \(\varphi\in L_{1}(0,t_{1})\), while $$\bigl(I^{q}_{0+}D^{q}_{0+}\varphi \bigr) (t)=\varphi(t) $$ is satisfied for \(\varphi\in I^{q}_{0+}(L_{1}(0,t_{1}))\) with $$I^{q}_{0+}\bigl(L_{1}(0,t_{1})\bigr)= \bigl\{ \varphi(t): \varphi(t)= \bigl(I^{q}_{0+}\psi \bigr) (t), \psi\in L_{1}(0,t_{1})\bigr\} . $$ For \(q\in(0,1]\) we have $$\bigl(I^{q}_{0+}D^{q}_{0+}\varphi \bigr) (t)=\varphi(t) - \frac{t^{q-1}}{\Gamma(q)} \bigl(I^{1-q}_{0+}\varphi \bigr) (t)\Big| _{t=0} . $$ The following formulas are useful: $$ I^{q}_{0+}t^{p}=\frac{\Gamma(p+1)}{\Gamma(p+q+1)}t^{p+q} \quad\mbox{and}\quad D^{q}_{0+}t^{p}=\frac{\Gamma(p+1)}{\Gamma(p-q+1)}t^{p-q}, $$ in particular, $$ I^{q}_{0+}1=\frac{t^{q}}{\Gamma(q+1)},\qquad D^{q}_{0+} 1=\frac {t^{-q}}{\Gamma(1-q)},\qquad {}^{C}D^{q}_{0+} 1=0 . 
$$ The inner value problem Let us consider the fractional differential equations with the Caputo fractional derivative $$ \bigl({}^{C}{D}^{q}_{0+} x \bigr) (t)=f\bigl(t,x(t)\bigr),\quad 0< q < 1, t\in(0,T] , $$ with \(x:(0,T]\to\mathbb{R}^{n}\), satisfying the inner condition $$ x(t_{0})=x_{0} \in\mathbb{R}^{n} , $$ where \(t_{0}\in(0,T)\). By (2) equation (3) can be rewritten with the Riemann-Liouville fractional derivative as follows: $$ \bigl({D}^{q}_{0+} x \bigr) (t)=f\bigl(t,x(t) \bigr)+\frac{t^{-q}}{\Gamma (1-q)}x\bigl(0^{+}\bigr) , $$ where for \(0< q<1\), \(t\in(0,T]\). Let us define the function \(g:(0,T]\times \mathbb {R}^{n}\rightarrow \mathbb {R}^{n}\) as \(g(t,x(t)):=f(t,x(t))+\frac{t^{-q}}{\Gamma(1-q)}x(0^{+})\). Then we get $$ \bigl({D}^{q}_{0+} x \bigr) (t)=g\bigl(t,x(t) \bigr),\quad t\in(0,T] . $$ Assume that (6) satisfies the boundary inner condition $$ x(t_{0})=x_{0} \in\mathbb{R}^{n},\quad 0< t_{0}<T . $$ From [13] we know the form of the Volterra fractional integral for \(q\in(0,1)\) $$ x(t)= \bigl(I^{q}_{0+}g\bigl(s,x(s)\bigr) \bigr) (t)+ \biggl(\frac{t_{0}}{t} \biggr)^{1-q}\cdot \bigl(x_{0}- \bigl(I^{q}_{0+}g\bigl(s,x(s)\bigr) \bigr) (t_{0}) \bigr) . $$ $$\begin{aligned} x(t)={}& \bigl(I^{q}_{0+}f\bigl(s,x(s)\bigr) \bigr) (t)+ \biggl(\frac{t_{0}}{t} \biggr)^{1-q}\cdot \bigl(x_{0}- \bigl(I^{q}_{0+}f\bigl(s,x(s)\bigr) \bigr) (t_{0}) \bigr) \\ &{}+x\bigl(0^{+}\bigr) \biggl(1- \biggl(\frac{t_{0}}{t} \biggr)^{1-q} \biggr) . \end{aligned}$$ Since for \(x(0^{+})=0\) we have \(f\equiv g\), we look for the solutions of (6) in the set of functions \(x(\cdot)\) such that \(x(0^{+})\neq0\). Using the fractional integral \(I^{1-q}_{0+}\) for both sides of (9) and applying formula (1), we get $$\begin{aligned} \bigl(I^{1-q}_{0+}x \bigr) (t)={}& \bigl(I^{1}_{0+}f\bigl(s,x(s)\bigr) \bigr) (t) \\ &{}+ \biggl(I^{1-q}_{0+} \biggl(\frac{t_{0}}{s} \biggr)^{1-q} \biggr) (t)\cdot \bigl(x_{0}- \bigl(I^{q}_{0+}f\bigl(s,x(s)\bigr) \bigr) (t_{0}) \bigr) \\ &{}+x\bigl(0^{+}\bigr) \biggl(I^{1-q}_{0+} \biggl(1- \biggl( \frac{t_{0}}{s} \biggr)^{1-q} \biggr) \biggr) (t) . \end{aligned}$$ Moreover, as \((I^{1-q}_{0+} (\frac{t_{0}}{s} )^{1-q} )(t)=t_{0}^{1-q}\frac{\Gamma(q)}{\Gamma(1-q+q)}t^{q+1-q-1}=\Gamma(q)t_{0}^{1-q}\), then from (10) we have that $$\begin{aligned} \bigl(I^{1-q}_{0+}x \bigr) (t)={}&\int_{0}^{t}f\bigl(s,x(s)\bigr)\,ds +\Gamma(q)t_{0}^{1-q} \bigl(x_{0}- \bigl(I^{q}_{0+}f\bigl(s,x(s)\bigr) \bigr) (t_{0}) \bigr) \\ &{}+x\bigl(0^{+}\bigr) \biggl(\frac{t^{1-q}}{\Gamma(2-q)}-\Gamma(q)t_{0}^{1-q} \biggr) . \end{aligned}$$ Let us put \(m(t_{0},t):= (I^{1-q}_{0+}x )(t)\), then \(\frac {d}{dt} m(t_{0},t) =g(t,x(t))\), \(\frac{d}{dt} m(t_{0},t)|_{t=t_{0}} = ({D}^{q}_{0+} x )(t_{0})=g(t_{0}, x(t_{0})) \). In formulas (9)-(11) we have the value \(x(0^{+})\) as an additional parameter. As we are going to consider the initialization process and its viability in the memory domain, it is important to consider assumptions connected with the memory function \(m(t_{0},\cdot)\). We claim that \(\lim_{t\rightarrow0^{+}}m(t_{0},t)=0\). Further, we observe that taking the limit of (11) with t tending to zero one gets the following: $$ \bigl(I^{q}_{0^{+}}f\bigl(s,x(s)\bigr) \bigr) (t_{0})=x(t_{0})-x\bigl(0^{+}\bigr) . $$ Calculating values of \(f(s,x(s))\) for \(s\in(0,t_{0}]\) one finds some difficulties in predicting values of the memory in the light of viability, thus we consider the modification of the problem that leads to the main results. 
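Not part of the original paper, but perhaps useful for orientation: the memory function \(m(t_{0},t)=(I^{1-q}_{0+}x)(t)\) used above is an ordinary weighted integral of the trajectory, so it can be evaluated numerically. The sketch below (assuming SciPy is available; the function names are illustrative, not from the paper) evaluates the left-sided Riemann-Liouville integral by quadrature and, as a sanity check, compares it with the closed form \(I^{1-q}_{0+}1=t^{1-q}/\Gamma(2-q)\) that follows from (7) for a constant function.

from scipy.special import gamma
from scipy.integrate import quad

def rl_integral(x, t, q):
    # (I^q_{0+} x)(t) = (1 / Gamma(q)) * integral_0^t x(s) (t - s)^(q - 1) ds;
    # quad's 'alg' weight integrates x(s) * (s - 0)^0 * (t - s)^(q - 1),
    # which handles the integrable singularity at s = t.
    val, _ = quad(x, 0.0, t, weight='alg', wvar=(0.0, q - 1.0))
    return val / gamma(q)

def memory(x, t, q):
    # m(t0, t) = (I^{1-q}_{0+} x)(t); the dependence on t0 enters only through
    # the trajectory x, while the integral itself depends on the upper limit t.
    return rl_integral(x, t, 1.0 - q)

q, t = 0.4, 1.5
print(memory(lambda s: 1.0, t, q))       # quadrature value
print(t ** (1.0 - q) / gamma(2.0 - q))   # closed form; the two should agree

For a non-constant trajectory the same call applies unchanged; only the closed-form comparison is specific to the constant case.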
Let us consider a new inner value problem $$ {D}^{q}_{0+} x(t)=\widetilde{g}\bigl(t,x(t) \bigr),\quad 0< q < 1, t\in (0,T] , $$ satisfying the inner condition $$ x(t_{0})=x_{0} \in\mathbb{R}^{n},\quad 0< t_{0}<T , $$ $$ \widetilde{g}\bigl(t,x(t)\bigr)= \left \{ \begin{array}{@{}l@{\quad}l} g(t_{0},x_{0}), & t\in[0,t_{0}), \\ g(t,x(t)), & t \geq t_{0}. \end{array} \right . $$ Function \(\widetilde{g}\) is still continuous if g is continuous. Let us denote by \(\widetilde{x}(\cdot)\) the solution of the inner value problem (13)-(14), and let \(\widetilde{m}(t_{0},t):= (I^{1-q}_{0+}\widetilde{x} )(t)\). For \(t\in(0,t_{0}]\) we have the formula of the solution: $$\widetilde{x}(t)=\frac{t^{q}-t_{0}\cdot t^{q-1}}{\Gamma(q+1)} \biggl(f(t_{0},x_{0})+ \frac{t_{0}^{-q}}{\Gamma(1-q)}x\bigl(0^{+}\bigr) \biggr)+ \biggl(\frac {t_{0}}{t} \biggr)^{1-q}\cdot x_{0} $$ $$\widetilde{m}(t_{0},t)= \biggl(t-\frac{t_{0}}{q} \biggr)\cdot \biggl(f(t_{0},x_{0})+\frac{t_{0}^{-q}}{\Gamma(1-q)}x\bigl(0^{+}\bigr) \biggr)+\Gamma (q)t_{0}^{1-q}x_{0} . $$ Then \((D^{q}_{0+}\widetilde{x} )(t)=g(t_{0},x_{0})\), \(\widetilde {x}(t_{0})=x_{0}\) and $$ \widetilde{m}(t_{0},t_{0})= \frac{q-1}{q}t_{0} f(t_{0},x_{0})+\Gamma (q)t_{0}^{1-q}x_{0}+\frac{q-1}{q\Gamma(1-q)}t_{0}^{1-q}x \bigl(0^{+}\bigr) . $$ Observe that \(\lim_{t_{0}\rightarrow0^{+}}\widetilde{m}(t_{0},t_{0})=0\), \(\lim_{q\rightarrow1}\widetilde{m}(t_{0},t_{0})=x_{0} \). The steps presented for the new function \(\widetilde{g}\) are useful if we need to know the value of \(m_{0}:=m(t_{0},t_{0})= (I^{1-q}_{0+}x )(t_{0})\). Let f be bounded on \([0,T]\). For any \(\varepsilon>0\), there is \(0< t_{0}<T\) such that $$\bigl\Vert \widetilde{m}(t_{0},t)-m(t_{0},t) \bigr\Vert \leq\varepsilon,\quad t\in[t_{0},T] . $$ Let us take \(\varepsilon>0\) and \(\Vert f(t,x(t))\Vert \leq M\) for \(t\in[0,T]\). Then for \(0< t_{0}\leq t<T\) we have \(\Vert g(t,x(t))\Vert =\Vert f(t,x(t))+\frac{t^{-q}}{\Gamma(1-q)}x(0^{+})\Vert \leq \Vert f(t,x(t))\Vert +\frac{t^{-q}}{\Gamma(1-q)}\Vert x(0^{+})\Vert \leq M+\frac{t_{0}^{-q}}{\Gamma(1-q)}\Vert x(0^{+})\Vert \) and $$\begin{aligned} \bigl\Vert \widetilde{m}(t_{0},t)-m(t_{0},t) \bigr\Vert ={}& \biggl\| \int_{0}^{t_{0}} \bigl(g(t_{0},x_{0})-g\bigl(s,x(s)\bigr) \bigr)\,ds \\ &{}-\Gamma(q)t_{0}^{1-q} \bigl(I^{q}_{0+}g(t_{0},x_{0}) \bigr) (t_{0}) +\Gamma(q)t_{0}^{1-q} \bigl( I^{q}_{0+}g\bigl(s,x(s)\bigr) \bigr) (t_{0}) \biggr\| \\ \leq{}& \bigl\Vert t_{0}g(t_{0},x_{0})- \Gamma(q)t_{0}^{1-q} \bigl(I^{q}_{0+}g(t_{0},x_{0}) \bigr) (t_{0}) \bigr\Vert \\ &{}+\biggl\Vert \Gamma(q)t_{0}^{1-q} \bigl( I^{q}_{0+}g\bigl(s,x(s)\bigr) \bigr) (t_{0})-\int _{0}^{t_{0}}g\bigl(s,x(s)\bigr)\,ds\biggr\Vert \\ ={}& t_{0}\frac{1-q}{q}\bigl\Vert g(t_{0},x_{0}) \bigr\Vert \\ &{}+\biggl\Vert \Gamma (q)t_{0}^{1-q} \bigl( I^{q}_{0+}g\bigl(s,x(s)\bigr) \bigr) (t_{0})-\int _{0}^{t_{0}}g\bigl(s,x(s)\bigr)\,ds\biggr\Vert \\ \leq{}& t_{0}\frac{1-q}{q} \biggl(M+\frac{t_{0}^{-q}}{\Gamma(1-q)}\bigl\Vert x\bigl(0^{+}\bigr)\bigr\Vert \biggr) \\ &{} + \biggl(M+\frac{t_{0}^{-q}}{\Gamma(1-q)} \bigl\Vert x\bigl(0^{+}\bigr)\bigr\Vert \biggr) \cdot \biggl\vert \Gamma(q)t_{0}^{1-q}\frac{t_{0}^{q}}{\Gamma(1+q)}-t_{0} \biggr\vert \\ ={}& 2t_{0}\frac{1-q}{q} \biggl(M+\frac{t_{0}^{-q}}{\Gamma(1-q)}\bigl\Vert x \bigl(0^{+}\bigr)\bigr\Vert \biggr) . 
\end{aligned}$$ Then for $$t_{0}\leq\min \biggl(\frac{q\varepsilon}{4(1-q)M}, \biggl[\frac{q\Gamma (1-q)\varepsilon}{4(1-q)\Vert x(0^{+})\Vert } \biggr]^{\frac {1}{1-q}} \biggr) $$ $$\bigl\Vert \widetilde{m}(t_{0},t)-m(t_{0},t)\bigr\Vert \leq\frac {2(1-q)M}{q}t_{0}+\frac{2(1-q)\Vert x(0^{+})\Vert }{q\Gamma (1-q)}t_{0}^{1-q} \leq\varepsilon . $$ Viability problem Before going to viability terms, we set some notations. For \(\varepsilon>0\), by ε-neighborhood of a set \(K\subset \mathbb{R}^{n}\) we mean the following: $$K^{\varepsilon}:=\bigl\{ x\in\mathbb{R}^{n}:\operatorname{dist}(x,K)< \varepsilon\bigr\} . $$ Let us define the distance between two sets \(A\subset\mathbb{R}^{n}\) and \(B\subset\mathbb{R}^{n}\) as \(\triangle(A,B):=\sup\{\operatorname{dist}(q,B):q\in A\}\). Note that \(\triangle(A,B)\) is not the usual symmetric distance between two sets. Indeed, if \(A\subset B\) then \(\triangle(A,B)=0\) while \(\triangle(B,A)\neq0\). By definitions of m, \(\widetilde{m}\) and Proposition 7 the following proposition is obvious. Let f be bounded on \([0,T]\). Let us take \(\varepsilon>0\) and \(t_{0}\in (0,T]\) such that $$\bigl\| \widetilde{m}(t_{0},t)-m(t_{0},t)\bigr\| < \varepsilon. $$ $$\triangle\bigl(\operatorname{Graph}\bigl(m(t_{0},\cdot)\bigr),[t_{0},T] \times K\bigr)=0\quad\Leftrightarrow\quad \triangle \bigl(\operatorname{Graph}\bigl(\widetilde{m}(t_{0}, \cdot)\bigr),[t_{0},T]\times K\bigr)< \varepsilon, $$ which we can write equivalently $$m(t_{0},t)\in K\quad\Leftrightarrow\quad\widetilde{m}(t_{0},t)\in K^{\varepsilon}\quad \textit{for all } t\in[t_{0},T]. $$ Similarly as for the ordinary differential equations (see [16]), one can define the viability of a subset with respect to the fractional differential equation (3). Let us denote by I an open interval in ℝ. Let \(K\subset\mathbb{R}^{n}\) be a nonempty locally closed set and \(f: I\times K\to\mathbb{R}^{n}\). The subset K is fractionally memo-viable with respect to f if for any \((t_{0},x_{0})\in I\times K\) equation (3) has at least one solution \(x:[t_{0},T]\to \mathbb{R}^{n}\) satisfying \(m(t_{0},t)\in K\) for \(t\in[t_{0},T]\), where \(t_{0}>0\). The idea of viability of fractional differential equations can be expressed using the concept of tangent cone. There are many notions of tangency of a vector to a set (see, for example, [16], Section 2.3). We will follow the concept of contingent vectors (see [9]). Let us recall that for \(K \subset\mathbb{R}^{n}\) and \(x_{0}\in K\) one can define the vector tangent to the set K as follows. Definition 10 The vector \(\eta\in\mathbb{R}^{n}\) is contingent to the set K at the point \(x_{0}\) if $$ \liminf_{h\downarrow0} \frac{1}{h} \operatorname{dist} (x_{0}+ {h}\eta; K )=0 . $$ The set of all vectors that are contingent to the set K at point \(x_{0}\) is a closed cone, see [16], Proposition 2.3.1. This cone, denoted by \(\mathcal{T}_{K} (x_{0})\), is called contingent cone (Bouligand cone) to the set K at \(x_{0}\in K\). From [16], Proposition 2.3.2, we know that \(\eta\in\mathcal{T}_{K} (x_{0})\) if and only if for every \(\varepsilon>0\) there exist \(h\in(0, \varepsilon)\) and \(p_{h} \in B(0, \varepsilon)\) such that \(x_{0} +{h}(\eta+p_{h})\in K\), where \(B(0, \varepsilon)\) denotes the closed ball in \(\mathbb{R}^{n}\) centered at 0 and of radius \(\varepsilon>0\). Theorem 11 Let \(K\subset\mathbb{R}^{n}\) be nonempty and \(f: I\times K \to\mathbb {R}^{n}\). 
If the subset K is fractionally memo-viable with respect to f, then \(g(t_{0},x_{0})\in\mathcal{T}_{K}(m_{0})\), where \(x_{0}=x(t_{0})\) and \(m_{0}=m(t_{0},t_{0})= (I^{1-q}_{0+}x )(t_{0})\). Let \((t_{0}, m_{0})\in I\times K\) and K be fractionally viable of order q with respect to f. Then there is \(T\in I\), \(T>t_{0}\), and a function \(x: [t_{0},T]\rightarrow K\) satisfying \((I^{1-q}_{0+}x )(t_{0})=m_{0}\) and \(D^{q}_{0+}x(t)=f(t,x(t))\) for every \(t\in[t_{0},T]\). Moreover we have $$\begin{aligned} &\lim_{h\downarrow0}\frac{1}{h}\bigl\Vert \bigl(I^{1-q}_{0+}x \bigr) (t_{0})+hg(t_{0},x_{0})-I^{1-q}_{0+}x(t_{0}+h) \bigr\Vert \\ &\quad=\lim_{h\downarrow0}\biggl\Vert g (t_{0},x_{0} )- \frac{ (I^{1-q}_{0+}x )(t_{0}+h)- (I^{1-q}_{0+}x )(t_{0})}{h}\biggr\Vert \\ &\quad= \bigl\Vert g (t_{0},x_{0} )-D^{q}_{0+}x(t_{0}) \bigr\Vert =0 . \end{aligned}$$ The above calculation shows that, for every \((t_{0},m_{0})\in I\times K\), \(g(t_{0},x_{0})\in\mathcal{T}_{K}(m_{0})\) and the proof is complete. □ Note that it is difficult to check the condition \(g(t_{0},x_{0})\in\mathcal {T}_{K}(m_{0})\) in Theorem 11. The above theorem is still true for the initial inner value problem with the right-hand side of \(\widetilde{g}\) given by (15). Let $$ \widetilde{f}\bigl(t,x(t)\bigr)= \left \{ \begin{array}{@{}l@{\quad}l} f(t_{0},x_{0}), & t\in[0,t_{0}), \\ f(t,x(t)), & t \geq t_{0}. \end{array} \right . $$ Then the following is true. Corollary 12 Let \(K\subset K^{\varepsilon}\subset\mathbb{R}^{n}\) and \(f: I\times K\to \mathbb{R}^{n}\). If the subset K is fractionally memo-viable with respect to f, then \(g(t_{0},x_{0})\in\mathcal{T}_{K^{\varepsilon}}(\widetilde{m}_{0})\). From Theorem 11 we get the following weaker result. Let \(K\subset\mathbb{R}^{n}\) be nonempty and \(f: I\times K\to\mathbb {R}^{n}\). If the subset K is fractionally memo-viable with respect to f, then \(f(t_{0},x_{0})\in\mathcal{T}_{K}(m_{0})\) or \(x(0^{+})\in\mathcal {T}_{K}(m_{0})\), where \(x_{0}=x(t_{0})\) and \(m_{0}=m(t_{0},t_{0})= (I^{1-q}_{0+}x )(t_{0})\). Let us consider a one-dimensional problem with the set \(K=\mathbb {R}_{+}\cup\{0\}\) and \(f(t,x(t))=c=\operatorname{const}\). Then \(g(t,x(t))=c+\frac{t^{-q}}{\Gamma(1-q)}x(0^{+})\) for \(0< t<T\) and $$\widetilde{g}\bigl(t,x(t)\bigr)= \begin{cases} c+\frac{t_{0}^{-q}}{\Gamma(1-q)}x(0^{+}) & \mbox{for }0< t<t_{0},\\ c+\frac{t^{-q}}{\Gamma(1-q)}x(0^{+}) & \mbox{for }t_{0}\leqslant t<T. \end{cases} $$ Moreover, for \(x_{0}=x(t_{0})\), we have $$x(t)=\frac{c}{\Gamma(1+q)}t^{q} \biggl(1-\frac{t_{0}}{t} \biggr)+x_{0} \biggl(\frac {t_{0}}{t} \biggr)^{1-q}+x \bigl(0^{+}\bigr) \biggl[1- \biggl(\frac{t_{0}}{t} \biggr)^{1-q} \biggr] $$ and \(m_{0}=m (t_{0},t_{0} )=ct_{0} (1-\frac{1}{q} )+t_{0}^{1-q}\Gamma(q) [x_{0}+x(0^{+}) (\frac{1}{\Gamma(q)\Gamma(2-q)}-1 ) ] \). Let us take \(m_{0}\geqslant0\), then $$\mathcal{T}_{K}(m_{0})= \begin{cases} \mathbb{R} & \mbox{for }m_{0}>0,\\ {[}0,+\infty ) & \mbox{for }m_{0}=0. \end{cases} $$ We will show that if \(g(t_{0},x_{0})\notin\mathcal{T}_{K}(m_{0})\), then K is not memo-viable. Since for \(m_{0}>0\) we have \(\mathcal{T}_{K}(m_{0})=\mathbb {R}\), it is nothing to show then. Let \(m_{0}=0\), then \(g(t_{0},x_{0})<0\), i.e., \(c+\frac {t_{0}^{-q}}{\Gamma(1-q)}x(0^{+})<0\). Let us notice that the term \(\frac {t_{0}^{-q}}{\Gamma(1-q)}\) is positive. Let \(x(0^{+})=0\), then obviously \(c<0\), and we get $$x(t)=c\frac{t^{q}}{\Gamma(1+q)} \biggl(1-q\frac{t_{0}}{t} \biggr)< 0 \quad\mbox{if only } t_{0}\leq t. 
$$ Integrating the last term, on the basis of monotonicity of the integral one gets
$$m(t_{0},t)= \bigl(I_{0^{+}}^{1-q}x \bigr) (t)=c(t-t_{0})< 0 . $$
Now let \(x(0^{+})>0\); since \(g(t_{0},x_{0})<0\), which means that \(c<\frac{-t_{0}^{-q}}{\Gamma(1-q)}x(0^{+})\), we again have \(c<0\). Therefore one can show the following:
$$m(t_{0},t)=ct+x\bigl(0^{+}\bigr) \biggl(1-q\frac{t_{0}}{t} \biggr)t_{0}^{1-q}< 0 $$
provided \(t>\frac{x(0^{+})}{c} (\frac{1}{\Gamma(q)\Gamma(2-q)}-1 )t_{0}^{1-q}\). Finally, let us assume \(x(0^{+})<0\); since \(g(t_{0},x_{0})<0\), we get \(c<\frac{-t_{0}^{-q}}{\Gamma(1-q)}x(0^{+})\). Again one can show that \(m(t_{0},t)<0\), if only \(t_{0}< t< (1-\frac{1}{\Gamma(q)\Gamma(2-q)} )\Gamma (1-q)t_{0} \). Since \(m(t_{0},t)<0\) in each case, it follows that K is not memo-viable with respect to f.
Finally, in order to prove the sufficient condition of viability, we need the following definition and proposition.
Definition 15 Let \(K\subset\mathbb{R}^{n}\) be nonempty and \(f: I\times K\to\mathbb {R}^{n}\). The subset K is fractionally viable with respect to f if for any \((t_{0},x_{0})\in I\times K\) equation (3) has at least one solution \(x:[t_{0},T]\to K\) satisfying \(x(t_{0})=x_{0}\). Such a solution is called viable with respect to f.
Proposition 16 Let \(K\subset \mathbb {R}^{n}\) be a nonempty and locally closed set, and let \(f:I\times K\rightarrow \mathbb {R}^{n}\) be a vector-valued continuous function. If \(f(t_{0},x_{0})\in\mathcal{T}_{K}(x_{0})\) for every \((t_{0}, x_{0})\in I\times K\), then K is fractionally viable with respect to f.
Let us re-scale and shift the set \(K^{\varepsilon}\) in such a way that elements of this set are again from the domain of f, namely
$$ \tilde{K}^{\varepsilon}=\frac{t_{0}^{q-1}}{\Gamma(q)}K^{\varepsilon}+ \frac {1-q}{\Gamma(q+1)}t_{0}^{q} f(t_{0},x_{0})+ \frac{1-q}{\Gamma(q+1)\Gamma(1-q)}x\bigl(0^{+}\bigr). $$
To formulate the next theorem, which is a middle step towards a certain sufficient condition of viability, we use the notation introduced in (19).
Theorem 17 Let \(K\subset K^{\varepsilon}\subset\mathbb{R}^{n}\) be nonempty and \(f: I\times K \to\mathbb{R}^{n}\). If the subset K is fractionally memo-viable with respect to f, then \(g(t_{0},x_{0})\in\mathcal{T}_{\tilde {K}^{\varepsilon}}(x_{0})\), where \(x_{0}=x(t_{0})\).
Let \((t_{0}, m_{0})\in I\times K\) and let K be fractionally memo-viable with respect to f. Then Corollary 12 gives \(g(t_{0},x_{0})\in\mathcal {T}_{K^{\varepsilon}}(\tilde{m}_{0})\). The latter means that
$$\liminf_{h\downarrow0}\frac{1}{h}\operatorname{dist}\bigl(\tilde {m}_{0}+hg(t_{0},x_{0}),K^{\varepsilon}\bigr)=0 . $$
By formula (16) we can rewrite the above equation in the following way:
$$\min_{y\in K^{\varepsilon}}\frac{1}{h}\biggl\vert \frac{q-1}{q}t_{0} f(t_{0},x_{0})+ \Gamma(q)t_{0}^{1-q}x_{0}+\frac{q-1}{q\Gamma (1-q)}t_{0}^{1-q}x \bigl(0^{+}\bigr)+ hg(t_{0},x_{0})-y\biggr\vert \longrightarrow0 .
$$
Let \(\tilde{K}^{\varepsilon}\) be as it is given in (19), i.e.,
$$\tilde{K}^{\varepsilon}=\frac{t_{0}^{q-1}}{\Gamma(q)}K^{\varepsilon}+ \frac{1-q}{\Gamma(q+1)}t_{0}^{q} f(t_{0},x_{0})+ \frac{1-q}{\Gamma(q+1)\Gamma (1-q)}x\bigl(0^{+}\bigr) , $$
then for \(\tilde{y}=\frac{q-1}{\Gamma(q+1)}t_{0}^{q} f(t_{0},x_{0})+\frac {q-1}{\Gamma(q+1)\Gamma(1-q)}x(0^{+})- \frac{t_{0}^{q-1}}{\Gamma(q)}y\) and \(\tilde{h}=\frac{\Gamma(q)t_{0}^{1-q}}{h}\), we get \(\tilde{y}\in\tilde {K}^{\varepsilon}\) and \(\tilde{h}\rightarrow0\) when \(h\rightarrow0\), thus
$$\min_{\tilde{y}\in\tilde{K}^{\varepsilon}}\frac{1}{\tilde{h}}\bigl\vert x_{0}+ \tilde{h}g(t_{0},x_{0})-y\bigr\vert \longrightarrow0, $$
while \(\tilde{h}\rightarrow0\). Therefore we get \(g(t_{0},x_{0})\in\mathcal {T}_{\tilde{K}^{\varepsilon}}(x_{0})\). □
The next corollary gives, in fact, a sufficient condition of viability for an equation involving the Caputo derivative that is, however, slightly different from the one that we formulated in [11].
Let \(K\subset K^{\varepsilon}\subset\mathbb{R}^{n}\) be nonempty and \(f: I\times\mathbb{R}^{n} \to\mathbb{R}^{n}\). Let \(\tilde{K}^{\varepsilon}=\frac {t_{0}^{q-1}}{\Gamma(q)}K^{\varepsilon}+\frac{1-q}{\Gamma(q+1)}t_{0}^{q} f(t_{0},x_{0})+\frac{1-q}{\Gamma(q+1)\Gamma(1-q)}x(0^{+})\). If \(g(t_{0},x_{0})\in \mathcal{T}_{\tilde{K}^{\varepsilon}}(x_{0})\), where \(x_{0}=x(t_{0})\), then \(\tilde{K}^{\varepsilon}\) is viable with respect to g.
The thesis is a simple consequence of Proposition 16 and Theorem 17. □
As a conclusion we propose a block scheme (see Figure 1) that shows relations among viability conditions for sets K, \(K^{\varepsilon}\) and \(\tilde{K}^{\varepsilon}\), where \(K\subset K^{\varepsilon}\subset\mathbb {R}^{n}\) and \(\tilde{K}^{\varepsilon}=\frac{t_{0}^{q-1}}{\Gamma (q)}K^{\varepsilon}+\frac{1-q}{\Gamma(q+1)}t_{0}^{q} f(t_{0},x_{0})+\frac {1-q}{\Gamma(q+1)\Gamma(1-q)}x(0^{+})\), for f and g being the right-hand sides of equations (3) and (6), respectively.
Figure 1. Block scheme concerning the viability conditions.
References
1. Almeida, R, Torres, DFM: Necessary and sufficient conditions for the fractional calculus of variations with Caputo derivatives. Commun. Nonlinear Sci. Numer. Simul. 16(3), 1490-1500 (2011)
2. Abdeljawad, T, Baleanu, D: Fractional differences and integration by parts. J. Comput. Anal. Appl. 13(3), 574-582 (2011)
3. Bai, Z, Lü, H: Positive solutions for boundary value problems of nonlinear fractional differential equations. J. Math. Anal. Appl. 311(2), 495-505 (2005)
4. Zhang, S: Existence of a positive solution for some class of nonlinear fractional differential equations. J. Math. Anal. Appl. 278, 136-148 (2003)
5. Podlubny, I: Fractional Differential Equations. Mathematics in Sciences and Engineering, vol. 198. Academic Press, San Diego (1999)
6. Miller, KS, Ross, B: Fractional difference calculus. In: Proceedings of the International Symposium on Univalent Functions, Fractional Calculus and Their Applications, Nihon University, Kōriyama, Japan, pp. 139-152 (1988)
7. Nagumo, N: Über die Lage der Integralkurven gewöhnlicher Differentialgleichungen. Proc. Phys. Math. Soc. Jpn. 24, 551-559 (1942)
8. Aubin, JP: Viability Theory. Birkhäuser, Berlin (1991)
9. Aubin, JP, Frankowska, H: Set-Valued Analysis. Birkhäuser Boston, Boston (1990)
10. Aubin, JP, Bayen, AM, Saint-Pierre, P: Viability Theory, New Directions.
Springer, Berlin (2011)
11. Girejko, E, Mozyrska, D, Wyrwas, M: A sufficient condition of viability for fractional differential equations with the Caputo derivative. J. Math. Anal. Appl. 381, 146-154 (2011)
12. Mozyrska, D, Girejko, E, Wyrwas, M: A necessary condition of viability for fractional differential equations with initialization. Comput. Math. Appl. 62(9), 3642-3647 (2011)
13. Kilbas, AA, Srivastava, HM, Trujillo, JJ: Theory and Applications of Fractional Differential Equations. North-Holland Mathematics Studies, vol. 204. Elsevier, Amsterdam (2006)
14. Samko, SG, Kilbas, AA, Marichev, OI: Fractional Integrals and Derivatives: Theory and Applications. Gordon & Breach, Yverdon (1993)
15. Benchora, M, Hamani, S, Ntouyas, SK: Boundary value problems for differential equations with fractional order. Surv. Math. Appl. 3, 1-12 (2008)
16. Cañada, A, Drábek, P, Fonda, A: Handbook of Differential Equations: Ordinary Differential Equations. Elsevier, Amsterdam (2005)
The authors are very grateful to anonymous reviewers for valuable suggestions and comments, which improved the quality of the paper. The work was supported by Bialystok University of Technology grant S/WI/2/2011.
Department of Mathematics, Faculty of Computer Science, Bialystok University of Technology, Wiejska 45A, Białystok, 15-351, Poland: Ewa Girejko, Dorota Mozyrska & Małgorzata Wyrwas. Correspondence to Małgorzata Wyrwas. The authors have contributed to this work on an equal basis. All authors read and approved the final manuscript.
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Girejko, E., Mozyrska, D. & Wyrwas, M. On memo-viability of fractional equations with the Caputo derivative. Adv Differ Equ 2015, 58 (2015). https://doi.org/10.1186/s13662-015-0403-0. Accepted: 03 February 2015.
Keywords: fractional derivative, fractional differential equation.
Robust observer-based control for discrete-time semi-Markov jump systems with actuator saturation
Yueyuan Zhang 1, Yanyan Yin 2 and Fei Liu 2
doi: 10.3934/jimo.2020105
1. School of Mechanical and Electric Engineering, Soochow University, Suzhou 215131, China
2. Department of Mathematics and Statistics, Curtin University, GPO Box U1987, Perth, WA 6845, Australia, and Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Institute of Automation, Jiangnan University, Wuxi 214122, China
* Corresponding author: Yueyuan Zhang
Received August 2019, Revised December 2019, Published June 2020
Fund Project: This work was partially supported by The National Natural Science Foundation of China (61773183)
This paper investigates the control synthesis for discrete-time semi-Markov jump systems with nonlinear input. Observer-based controllers are designed to achieve better performance and robustness. The nonlinear input caused by actuator saturation is represented through a convex hull of linear controllers. Moreover, elapsed-time- and mode-dependent Lyapunov functions are investigated, and sufficient conditions are derived to guarantee the $ H_\infty $ performance index. The largest domain of attraction is estimated in the presence of saturation in the system. Finally, a numerical example is used to verify the effectiveness and feasibility of the developed strategy.
Keywords: Discrete-time semi-Markov jump system, H∞ performance index, nonlinear input.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
Citation: Yueyuan Zhang, Yanyan Yin, Fei Liu. Robust observer-based control for discrete-time semi-Markov jump systems with actuator saturation. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2020105
Figure 1. The response of the jump mode.
Figure 2. The response of the system state.
Figure 3. The estimation of the domain of attraction for different modes.
Table 1. The error $ \sigma $ for different maximum sojourn times $ T^i_{max} $ in different modes.
$ T^1_{max} $ | $ T^2_{max} $ | $ T^3_{max} $ | error $ \sigma $
3 | 4 | 3 | ln(0.032) = -3.442
4 | 4 | 3 | ln(0.1048) = -2.2557
10 | 10 | 10 | ln(1) = 0
Sure, those with a mental illness may very well need a little more monitoring to make sure they take their medications, but will those suffering from a condition with hallmark symptoms of paranoia and anxiety be helped by consuming a technology that quite literally puts a tracking device inside their body? For patients hearing voices telling them that they're being watched, a monitoring device may be a hard pill to swallow. Regarding other methods of cognitive enhancement, little systematic research has been done on their prevalence among healthy people for the purpose of cognitive enhancement. One exploratory survey found evidence of modafinil use by people seeking cognitive enhancement (Maher, 2008), and anecdotal reports of this can be found online (e.g., Arrington, 2008; Madrigal, 2008). Whereas TMS requires expensive equipment, tDCS can be implemented with inexpensive and widely available materials, and online chatter indicates that some are experimenting with this method. I took the pill at 11 PM the evening of (technically, the day before); that day was a little low on sleep than usual, since I had woken up an hour or half-hour early. I didn't yawn at all during the movie (merely mediocre to my eyes with some questionable parts)22. It worked much the same as it did the previous time - as I walked around at 5 AM or so, I felt perfectly alert. I made good use of the hours and wrote up my memories of ICON 2011. Furthermore, there is no certain way to know whether you'll have an adverse reaction to a particular substance, even if it's natural. This risk is heightened when stacking multiple substances because substances can have synergistic effects, meaning one substance can heighten the effects of another. However, using nootropic stacks that are known to have been frequently used can reduce the chances of any negative side effects. As it happens, these are areas I am distinctly lacking in. When I first began reading about testosterone I had no particular reason to think it might be an issue for me, but it increasingly sounded plausible, an aunt independently suggested I might be deficient, a biological uncle turned out to be severely deficient with levels around 90 ng/dl (where the normal range for 20-49yo males is 249-839), and finally my blood test in August 2013 revealed that my actual level was 305 ng/dl; inasmuch as I was 25 and not 49, this is a tad low. The greatly increased variance, but only somewhat increased mean, is consistent with nicotine operating on me with an inverted U-curve for dosage/performance (or the Yerkes-Dodson law): on good days, 1mg nicotine is too much and degrades performance (perhaps I am overstimulated and find it hard to focus on something as boring as n-back) while on bad days, nicotine is just right and improves n-back performance. The ethics of cognitive enhancement have been extensively debated in the academic literature (e.g., Bostrom & Sandberg, 2009; Farah et al., 2004; Greely et al., 2008; Mehlman, 2004; Sahakian & Morein-Zamir, 2007). We do not attempt to review this aspect of the problem here. Rather, we attempt to provide a firmer empirical basis for these discussions. Despite the widespread interest in the topic and its growing public health implications, there remains much researchers do not know about the use of prescription stimulants for cognitive enhancement. 
Amphetamines have a long track record as smart drugs, from the workaholic mathematician Paul Erdös, who relied on them to get through 19-hour maths binges, to the writer Graham Greene, who used them to write two books at once. More recently, there are plenty of anecdotal accounts in magazines about their widespread use in certain industries, such as journalism, the arts and finance. The smart pill that FDA approved is called Abilify MyCite. This tiny pill has a drug and an ingestible sensor. The sensor gets activated when it comes into contact with stomach fluid to detect when the pill has been taken. The data is then transmitted to a wearable patch that eventually conveys the information to a paired smartphone app. Doctors and caregivers, with the patient's consent, can then access the data via a web portal. There is much to be appreciated in a brain supplement like BrainPill (never mind the confusion that may stem from the generic-sounding name) that combines tried-and-tested ingredients in a single one-a-day formulation. The consistency in claims and what users see in real life is an exemplary one, which convinces us to rate this powerhouse as the second on this review list. Feeding one's brain with nootropics and related supplements entails due diligence in research and seeking the highest quality, and we think BrainPill is up to task. Learn More... I was contacted by the Longecity user lostfalco, and read through some of his writings on the topic. I had never heard of LLLT before, but the mitochondria mechanism didn't sound impossible (although I wondered whether it made sense at a quantity level14151617), and there was at least some research backing it; more importantly, lostfalco had discovered that devices for LLLT could be obtained as cheap as $15. (Clearly no one will be getting rich off LLLT or affiliate revenue any time soon.) Nor could I think of any way the LLLT could be easily harmful: there were no drugs involved, physical contact was unnecessary, power output was too low to directly damage through heating, and if it had no LLLT-style effect but some sort of circadian effect through hitting photoreceptors, using it in the morning wouldn't seem to interfere with sleep. Bacopa Monnieri is probably one of the safest and most effective memory and mood enhancer nootropic available today with the least side-effects. In some humans, a majorly extended use of Bacopa Monnieri can result in nausea. One of the primary products of AlternaScript is Optimind, a nootropic supplement which mostly constitutes of Bacopa Monnieri as one of the main ingredients. One of the most obscure -racetams around, coluracetam (Smarter Nootropics, Ceretropic, Isochroma) acts in a different way from piracetam - piracetam apparently attacks the breakdown of acetylcholine while coluracetam instead increases how much choline can be turned into useful acetylcholine. This apparently is a unique mechanism. A crazy Longecity user, ScienceGuy ponied up $16,000 (!) for a custom synthesis of 500g; he was experimenting with 10-80mg sublingual doses (the ranges in the original anti-depressive trials) and reported a laundry list of effects (as does Isochroma): primarily that it was anxiolytic and increased work stamina. Unfortunately for my stack, he claims it combines poorly with piracetam. He offered free 2g samples for regulars to test his claims. I asked & received some. Given the size of the literature just reviewed, it is surprising that so many basic questions remain open. 
Although d-AMP and MPH appear to enhance retention of recently learned information and, in at least some individuals, also enhance working memory and cognitive control, there remains great uncertainty regarding the size and robustness of these effects and their dependence on dosage, individual differences, and specifics of the task. The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), popularity in college as a study drug, and reportedly moves by its manufacture to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall was the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So \frac{200 - 0}{\ln 1.05} \times 0.50 \times 0.25 = 512! The experiment probably used up no more than an hour or two total. The evidence? Found helpful in reducing bodily twitching in myoclonus epilepsy, a rare disorder, but otherwise little studied. Mixed evidence from a study published in 1991 suggests it may improve memory in subjects with cognitive impairment. A meta-analysis published in 2010 that reviewed studies of piracetam and other racetam drugs found that piracetam was somewhat helpful in improving cognition in people who had suffered a stroke or brain injury; the drugs' effectiveness in treating depression and reducing anxiety was more significant. Methylphenidate, commonly known as Ritalin, is a stimulant first synthesised in the 1940s. More accurately, it's a psychostimulant - often prescribed for ADHD - that is intended as a drug to help focus and concentration. It also reduces fatigue and (potentially) enhances cognition. Similar to Modafinil, Ritalin is believed to reduce dissipation of dopamine to help focus. Ritalin is a Class B drug in the UK, and possession without a prescription can result in a 5 year prison sentence. Please note: Side Effects Possible. See this article for more on Ritalin. It is not because of the few thousand francs which would have to be spent to put a roof [!] over the third-class carriages or to upholster the third-class seats that some company or other has open carriages with wooden benches. What the company is trying to do is to prevent the passengers who can pay the second class fare from traveling third class; it hits the poor, not because it wants to hurt them, but to frighten the rich. 
And it is again for the same reason that the companies, having proved almost cruel to the third-class passengers and mean to the second-class ones, become lavish in dealing with first-class passengers. Having refused the poor what is necessary, they give the rich what is superfluous.
The demands of university studies, career, and family responsibilities leave people feeling stretched to the limit. Extreme stress actually interferes with optimal memory, focus, and performance. The discovery of nootropics and vitamins that make you smarter has provided a solution to help college students perform better in their classes and professionals become more productive and efficient at work.
Phenotropil is an over-the-counter supplement similar in structure to Piracetam (and Noopept). This synthetic smart drug has been used to treat stroke, epilepsy and trauma recovery. A 2005 research paper also demonstrated that patients diagnosed with natural lesions or brain tumours see improvements in cognition. Phenylpiracetam intake can also result in minimised feelings of anxiety and depression. This is one of the more powerful unscheduled Nootropics available.
With just 16 predictions, I can't simply bin the predictions and say yep, that looks good. Instead, we can treat each prediction as equivalent to a bet and see what my winnings (or losses) were; the standard such proper scoring rule is the logarithmic rule, which is pretty simple: you earn the logarithm of the probability if you were right, and the logarithm of the negation if you were wrong; he who racks up the fewest negative points wins. We feed in a list and get back a number; a minimal sketch of such a scoring function appears at the end of this passage.
The FDA has approved the first smart pill for use in the United States. Called Abilify MyCite, the pill contains a drug and an ingestible sensor that is activated when it comes into contact with stomach fluid to detect when the pill has been taken. The pill then transmits this data to a wearable patch that subsequently transfers the information to an app on a paired smartphone. From that point, with a patient's consent, the data can be accessed by the patient's doctors or caregivers via a web portal.
The goal of this article has been to synthesize what is known about the use of prescription stimulants for cognitive enhancement and what is known about the cognitive effects of these drugs. We have eschewed discussion of ethical issues in favor of simply trying to get the facts straight. Although ethical issues cannot be decided on the basis of facts alone, neither can they be decided without relevant facts. Personal and societal values will dictate whether success through sheer effort is as good as success with pharmacologic help, whether the freedom to alter one's own brain chemistry is more important than the right to compete on a level playing field at school and work, and how much risk of dependence is too much risk. Yet these positions cannot be translated into ethical decisions in the real world without considerable empirical knowledge. Do the drugs actually improve cognition? Under what circumstances and for whom? Who will be using them and for what purposes? What are the mental and physical health risks for frequent cognitive-enhancement users? For occasional users?
So it's no surprise that as soon as medical science develops a treatment for a disease, we often ask if it couldn't perhaps make a healthy person even healthier.
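Two of the calculations described above are easy to make concrete: the value-of-information estimate quoted earlier for the Adderall trial (the $200 annual benefit discounted as a perpetuity at 5%, multiplied by the 50% prior and the 25% information-quality penalty), and the logarithmic scoring rule for grading the 16 predictions. The following is a minimal sketch in Python; the function names and the toy forecasts are illustrative, not taken from the original write-up.

    import math

    def value_of_information(annual_benefit, discount_rate, p_useful, info_quality):
        # Perpetuity-style net present value of the annual benefit, discounted
        # via ln(1 + r), then down-weighted by the prior probability that the
        # drug is useful and by the quality of the informal experiment.
        npv = annual_benefit / math.log(1 + discount_rate)   # 200 / ln(1.05) ~ 4100
        return npv * p_useful * info_quality

    def log_score(predictions):
        # Sum of log-scores for (probability, came_true) pairs: log(p) when the
        # prediction came true, log(1 - p) when it did not. Totals are negative;
        # the least negative total wins.
        return sum(math.log(p if came_true else 1.0 - p) for p, came_true in predictions)

    # The figures quoted in the text: $200/year, 5% discount, 50% prior, 25% quality.
    print(round(value_of_information(200, 0.05, 0.50, 0.25)))              # -> 512

    # Toy forecasts: 90% and 70% predictions that came true, a 40% one that did not.
    print(round(log_score([(0.9, True), (0.7, True), (0.4, False)]), 3))   # -> -0.973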
Take Viagra, for example: developed to help men who couldn't get erections, it's now used by many who function perfectly well without a pill but who hope it will make them exceptionally virile. Organizations, and even entire countries, are struggling with "always working" cultures. Germany and France have adopted rules to stop employees from reading and responding to email after work hours. Several companies have explored banning after-hours email; when one Italian company banned all email for one week, stress levels dropped among employees. This is not a great surprise: A Gallup study found that among those who frequently check email after working hours, about half report having a lot of stress. If stimulants truly enhance cognition but do so to only a small degree, this raises the question of whether small effects are of practical use in the real world. Under some circumstances, the answer would undoubtedly be yes. Success in academic and occupational competitions often hinges on the difference between being at the top or merely near the top. A scholarship or a promotion that can go to only one person will not benefit the runner-up at all. Hence, even a small edge in the competition can be important. In addition, while the laboratory research reviewed here is of interest concerning the effects of stimulant drugs on specific cognitive processes, it does not tell us about the effects on cognition in the real world. How do these drugs affect academic performance when used by students? How do they affect the total knowledge and understanding that students take with them from a course? How do they affect various aspects of occupational performance? Similar questions have been addressed in relation to students and workers with ADHD (Barbaresi, Katusic, Colligan, Weaver, & Jacobsen, 2007; Halmøy, Fasmer, Gillberg, & Haavik, 2009; see also Advokat, 2010) but have yet to be addressed in the context of cognitive enhancement of normal individuals. These are the most popular nootropics available at the moment. Most of them are the tried-and-tested and the benefits you derive from them are notable (e.g. Guarana). Others are still being researched and there haven't been many human studies on these components (e.g. Piracetam). As always, it's about what works for you and everyone has a unique way of responding to different nootropics. Despite some positive findings, a lot of studies find no effects of enhancers in healthy subjects. For instance, although some studies suggest moderate enhancing effects in well-rested subjects, modafinil mostly shows enhancing effects in cases of sleep deprivation. A recent study by Martha Farah and colleagues found that Adderall (mixed amphetamine salts) had only small effects on cognition but users believed that their performance was enhanced when compared to placebo. Some cognitive enhancers, such as donepezil and galantamine, are prescribed for elderly patients with impaired reasoning and memory deficits caused by various forms of dementia, including Alzheimer disease, Parkinson disease with dementia, dementia with Lewy bodies, and vascular dementia. Children and young adults with attention-deficit/hyperactivity disorder (ADHD) are often treated with the cognitive enhancers Ritalin (methylphenidate) or Adderall (mixed amphetamine salts). Persons diagnosed with narcolepsy find relief from sudden attacks of sleep through wake-promoting agents such as Provigil (modafinil). 
Generally speaking, cognitive enhancers improve working and episodic (event-specific) memory, attention, vigilance, and overall wakefulness but act through different brain systems and neurotransmitters to exert their enhancing effects. A "smart pill" is a drug that increases the cognitive ability of anyone taking it, whether the user is cognitively impaired or normal. The Romanian neuroscientist Corneliu Giurgea is often credited with first proposing, in the 1960s, that smart pills should be developed to increase the intelligence of the general population (see Giurgea, 1984). He is quoted as saying, "Man is not going to wait passively for millions of years before evolution offers him a better brain" (Gazzaniga, 2005, p. 71). In their best-selling book, Smart Drugs and Nutrients, Dean and Morgenthaler (1990) reviewed a large number of substances that have been used by healthy individuals with the goal of increasing cognitive ability. These include synthetic and natural products that affect neurotransmitter levels, neurogenesis, and blood flow to the brain. Although many of these substances have their adherents, none have become widely used. Caffeine and nicotine may be exceptions to this generalization, as one motivation among many for their use is cognitive enhancement (Julien, 2001). Theanine can also be combined with caffeine as both of them work in synergy to increase memory, reaction time, mental endurance, and memory. The best part about Theanine is that it is one of the safest nootropics and is readily available in the form of capsules. A natural option would be to use an excellent green tea brand which constitutes of tea grown in the shade because then Theanine would be abundantly present in it. The experiment then is straightforward: cut up a fresh piece of gum, randomly select from it and an equivalent dry piece of gum, and do 5 rounds of dual n-back to test attention/energy & WM. (If it turns out to be placebo, I'll immediately use the remaining active dose: no sense in wasting gum, and this will test whether nigh-daily use renders nicotine gum useless, similar to how caffeine may be useless if taken daily. If there's 3 pieces of active gum left, then I wrap it very tightly in Saran wrap which is sticky and air-tight.) The dose will be 1mg or 1/4 a gum. I cut up a dozen pieces into 4 pieces for 48 doses and set them out to dry. Per the previous power analyses, 48 groups of DNB rounds likely will be enough for detecting small-medium effects (partly since we will be only looking at one metric - average % right per 5 rounds - with no need for multiple correction). Analysis will be one-tailed, since we're looking for whether there is a clear performance improvement and hence a reason to keep using nicotine gum (rather than whether nicotine gum might be harmful). Adaptogens are plant-derived chemicals whose activity helps the body maintain or regain homeostasis (equilibrium between the body's metabolic processes). Almost without exception, adaptogens are available over-the-counter as dietary supplements, not controlled drugs. Well-known adaptogens include Ginseng, Kava Kava, Passion Flower, St. Johns Wort, and Gotu Kola. Many of these traditional remedies border on being "folk wisdom," and have been in use for hundreds or thousands of years, and are used to treat everything from anxiety and mild depression to low libido. 
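As a rough illustration of the one-tailed analysis planned for the gum experiment described above (a single metric, average % correct per 5 rounds of dual n-back, compared between active and placebo doses), here is a minimal sketch. The scores are simulated stand-ins rather than real data, the group sizes are illustrative, and scipy 1.6 or later is assumed for the one-sided option.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical stand-in scores: average % correct per 5-round dual n-back block,
    # for blinded active-gum blocks and dried-placebo blocks (sizes illustrative).
    active  = rng.normal(62, 8, size=24)
    placebo = rng.normal(60, 8, size=24)

    # One-tailed Welch t-test: does the active gum improve performance?
    t, p = stats.ttest_ind(active, placebo, equal_var=False, alternative="greater")
    print(f"t = {t:.2f}, one-tailed p = {p:.3f}")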
While these smart drugs work in a many different ways (their commonality is their resultant function within the body, not their chemical makeup), it can generally be said that the cognitive boost users receive is mostly a result of fixing an imbalance in people with poor diets, body toxicity, or other metabolic problems, rather than directly promoting the growth of new brain cells or neural connections. The soft gels are very small; one needs to be a bit careful - Vitamin D is fat-soluble and overdose starts in the range of 70,000 IU35, so it would take at least 14 pills, and it's unclear where problems start with chronic use. Vitamin D, like many supplements, follows a U-shaped response curve (see also Melamed et al 2008 and Durup et al 2012) - too much can be quite as bad as too little. Too little, though, is likely very bad. The previously cited studies with high acute doses worked out to <1,000 IU a day, so they may reassure us about the risks of a large acute dose but not tell us much about smaller chronic doses; the mortality increases due to too-high blood levels begin at ~140nmol/l and reading anecdotes online suggest that 5k IU daily doses tend to put people well below that (around 70-100nmol/l). I probably should get a blood test to be sure, but I have something of a needle phobia. Results: Women with high caffeine intakes had significantly higher rates of bone loss at the spine than did those with low intakes (−1.90 ± 0.97% compared with 1.19 ± 1.08%; P = 0.038). When the data were analyzed according to VDR genotype and caffeine intake, women with the tt genotype had significantly (P = 0.054) higher rates of bone loss at the spine (−8.14 ± 2.62%) than did women with the TT genotype (−0.34 ± 1.42%) when their caffeine intake was >300 mg/d…In 1994, Morrison et al (22) first reported an association between vitamin D receptor gene (VDR) polymorphism and BMD of the spine and hip in adults. After this initial report, the relation between VDR polymorphism and BMD, bone turnover, and bone loss has been extensively evaluated. The results of some studies support an association between VDR polymorphism and BMD (23-,25), whereas other studies showed no evidence for this association (26,27)…At baseline, no significant differences existed in serum parathyroid hormone, serum 25-hydroxyvitamin D, serum osteocalcin, and urinary N-telopeptide between the low- and high-caffeine groups (Table 1⇑). In the longitudinal study, the percentage of change in serum parathyroid hormone concentrations was significantly lower in the high-caffeine group than in the low-caffeine group (Table 2⇑). However, no significant differences existed in the percentage of change in serum 25-hydroxyvitamin D MPH was developed more recently and marketed primarily for ADHD, although it is sometimes prescribed off label or used nonmedically to increase alertness, energy, or concentration in conditions other than ADHD. Both MPH and AMP are on the list of substances banned from sports competitions by the World Anti-Doping Agency (Docherty, 2008). Both also have the potential for abuse and dependence, which detracts from their usefulness and is the reason for their classification as Schedule II controlled substances. Although the risk of developing dependence on these drugs is believed to be low for individuals taking them for ADHD, the Schedule II classification indicates that these drugs have a high potential for abuse and that abuse may lead to severe dependence. Creatine is a substance that's produced in the human body. 
It is initially produced in the kidneys, and the process is completed in the liver. It is then stored in the brain tissues and muscles, to support the energy demands of a human body. Athletes and bodybuilders use creatine supplements to relieve fatigue and increase the recovery of the muscle tissues affected by vigorous physical activities. Apart from helping the tissues to recover faster, creatine also helps in enhancing the mental functions in sleep-deprived adults, and it also improves the performance of difficult cognitive tasks. The effect? 3 or 4 weeks later, I'm not sure. When I began putting all of my nootropic powders into pill-form, I put half a lithium pill in each, and nevertheless ran out of lithium fairly quickly (3kg of piracetam makes for >4000 OO-size pills); those capsules were buried at the bottom of the bucket under lithium-less pills. So I suddenly went cold-turkey on lithium. Reflecting on the past 2 weeks, I seem to have been less optimistic and productive, with items now lingering on my To-Do list which I didn't expect to. An effect? Possibly. REPUTATION: We were blown away by the top-notch reputation that Thrive Naturals has in the industry. From the consumers we interviewed, we found that this company has a legion of loyal brand advocates. Their customers frequently told us that they found Thrive Naturals easy to communicate with, and quick to process and deliver their orders. The company has an amazing track record of customer service and prides itself on its Risk-Free No Questions Asked 1-Year Money Back Guarantee. As an online advocate for consumer rights, we were happy to see that they have no hidden fees nor ongoing monthly billing programs that many others try to trap consumers into. I have personally found that with respect to the NOOTROPIC effect(s) of all the RACETAMS, whilst I have experienced improvements in concentration and working capacity / productivity, I have never experienced a noticeable ongoing improvement in memory. COLURACETAM is the only RACETAM that I have taken wherein I noticed an improvement in MEMORY, both with regards to SHORT-TERM and MEDIUM-TERM MEMORY. To put matters into perspective, the memory improvement has been mild, yet still significant; whereas I have experienced no such improvement at all with the other RACETAMS. Many people quickly become overwhelmed by the volume of information and number of products on the market. Because each website claims its product is the best and most effective, it is easy to feel confused and unable to decide. Smart Pill Guide is a resource for reliable information and independent reviews of various supplements for brain enhancement. This formula presents a relatively high price and one bottle of 60 tables, at the recommended dosage of two tablets per day with a meal, a bottle provides a month's supply. The secure online purchase is available on the manufacturer's site as well as at several online retailers. Although no free trials or money back guarantees are available at this time, the manufacturer provides free shipping if the desired order exceeds a certain amount. With time different online retailers could offer some advantages depending on the amount purchased, so an online research is advised before purchase, as to assess the market and find the best solution. A 100mg dose of caffeine (half of a No-Doz or one cup of strong coffee) with 200mg of L-theanine is what the nootropics subreddit recommends in their beginner's FAQ, and many nootropic sellers, like Peak Nootropics, suggest the same. 
In my own experiments, I used a pre-packaged combination from Nootrobox called Go Cubes. They're essentially chewable coffee cubes (not as gross as it sounds) filled with that same beginner dose of caffeine, L-theanine, as well as a few B vitamins thrown into the mix. After eating an entire box of them (12 separate servings—not all at once), I can say eating them made me feel more alert and energetic, but less jittery than my usual three cups of coffee every day. I noticed enough of a difference in the past two weeks that I'll be looking into getting some L-theanine supplements to take with my daily coffee. But notice that most of the cost imbalance is coming from the estimate of the benefit of IQ - if it quadrupled to a defensible $8000, that would be close to the experiment cost! So in a way, what this VoI calculation tells us is that what is most valuable right now is not that iodine might possibly increase IQ, but getting a better grip on how much any IQ intervention is worth. Dosage is apparently 5-10mg a day. (Prices can be better elsewhere; selegiline is popular for treating dogs with senile dementia, where those 60x5mg will cost $2 rather than $3531. One needs a veterinarian's prescription to purchase from pet-oriented online pharmacies, though.) I ordered it & modafinil from Nubrain.com at $35 for 60x5mg; Nubrain delayed and eventually canceled my order - and my enthusiasm. Between that and realizing how much of a premium I was paying for Nubrain's deprenyl, I'm tabling deprenyl along with nicotine & modafinil for now. Which is too bad, because I had even ordered 20g of PEA from Smart Powders to try out with the deprenyl. (My later attempt to order some off the Silk Road also failed when the seller canceled the order.) Low-tech methods of cognitive enhancement include many components of what has traditionally been viewed as a healthy lifestyle, such as exercise, good nutrition, adequate sleep, and stress management. These low-tech methods nevertheless belong in a discussion of brain enhancement because, in addition to benefiting cognitive performance, their effects on brain function have been demonstrated (Almeida et al., 2002; Boonstra, Stins, Daffertshofer, & Beek, 2007; Hillman, Erickson, & Kramer, 2008; Lutz, Slagter, Dunne, & Davidson, 2008; Van Dongen, Maislin, Mullington, & Dinges, 2003). Popular among computer programmers, oxiracetam, another racetam, has been shown to be effective in recovery from neurological trauma and improvement to long-term memory. It is believed to effective in improving attention span, memory, learning capacity, focus, sensory perception, and logical thinking. It also acts as a stimulant, increasing mental energy, alertness, and motivation. Another classic approach to the assessment of working memory is the span task, in which a series of items is presented to the subject for repetition, transcription, or recognition. The longest series that can be reproduced accurately is called the forward span and is a measure of working memory capacity. The ability to reproduce the series in reverse order is tested in backward span tasks and is a more stringent test of working memory capacity and perhaps other working memory functions as well. The digit span task from the Wechsler (1981) IQ test was used in four studies of stimulant effects on working memory. 
One study showed that d-AMP increased digit span (de Wit et al., 2002), and three found no effects of d-AMP or MPH (Oken, Kishiyama, & Salinsky, 1995; Schmedtje, Oman, Letz, & Baker, 1988; Silber, Croft, Papafotiou, & Stough, 2006). A spatial span task, in which subjects must retain and reproduce the order in which boxes in a scattered spatial arrangement change color, was used by Elliott et al. (1997) to assess the effects of MPH on working memory. For subjects in the group receiving placebo first, MPH increased spatial span. However, for the subjects who received MPH first, there was a nonsignificant opposite trend. The group difference in drug effect is not easily explained. The authors noted that the subjects in the first group performed at an overall lower level, and so, this may be another manifestation of the trend for a larger enhancement effect for less able subjects. (I was more than a little nonplussed when the mushroom seller included a little pamphlet educating one about how papaya leaves can cure cancer, and how I'm shortening my life by decades by not eating many raw fruits & vegetables. There were some studies cited, but usually for points disconnected from any actual curing or longevity-inducing results.) I can test fish oil for mood, since the other claimed benefits like anti-schizophrenia are too hard to test. The medical student trial (Kiecolt-Glaser et al 2011) did not see changes until visit 3, after 3 weeks of supplementation. (Visit 1, 3 weeks, visit 2, supplementation started for 3 weeks, visit 3, supplementation continued 3 weeks, visit 4 etc.) There were no tests in between the test starting week 1 and starting week 3, so I can't pin it down any further. This suggests randomizing in 2 or 3 week blocks. (For an explanation of blocking, see the footnote in the Zeo page.) My first dose on 1 March 2017, at the recommended 0.5ml/1.5mg was miserable, as I felt like I had the flu and had to nap for several hours before I felt well again, requiring 6h to return to normal; after waiting a month, I tried again, but after a week of daily dosing in May, I noticed no benefits; I tried increasing to 3x1.5mg but this immediately caused another afternoon crash/nap on 18 May. So I scrapped my cytisine. Oh well. The Smart Pills Technology are primarily utilized for dairy products, soft drinks, and water catering in diverse shapes and sizes to various consumers. The rising preference for easy-to-carry liquid foods is expected to boost the demand for these packaging cartons, thereby, fueling the market growth. The changing lifestyle of people coupled with the convenience of utilizing carton packaging is projected to propel the market. In addition, Smart Pills Technology have an edge over the glass and plastic packaging, in terms of environmental-friendliness and recyclability of the material, which mitigates the wastage and reduces the product cost. Thus, the aforementioned factors are expected to drive the Smart Pills Technology market growth over the projected period. "I enjoyed this book. It was full of practical information. It was easy to understand. I implemented some of the ideas in the book and they have made a positive impact for me. Not only is this book a wealth of knowledge it helps you think outside the box and piece together other ideas to research and helps you understand more about TBI and the way food might help you mitigate symptoms." Stimulants are drugs that accelerate the central nervous system (CNS) activity. 
They have the power to make us feel more awake, alert and focused, providing us with a needed energy boost. Unfortunately, this class encompasses a wide range of drugs, some which are known solely for their side-effects and addictive properties. This is the reason why many steer away from any stimulants, when in fact some greatly benefit our cognitive functioning and can help treat some brain-related impairments and health issues.
Smart pills are defined as drugs or prescription medication used to treat certain mental disorders, from milder ones such as brain fog, to some more severe like ADHD. They are often referred to as 'nootropics' but even though the two terms are often used interchangeably, smart pills and nootropics represent two different types of cognitive enhancers.
Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods
Gilles Blanchard and Oleksandr Zadorozhnyi
Bernoulli, Volume 25, Number 4B (2019), 3421-3458
We obtain a Bernstein-type inequality for sums of Banach-valued random variables satisfying a weak dependence assumption of general type and under certain smoothness assumptions of the underlying Banach norm. We use this inequality in order to investigate in the asymptotical regime the error upper bounds for the broad family of spectral regularization methods for reproducing kernel decision rules, when trained on a sample coming from a $\tau$-mixing process.
Received: January 2018. Revised: October 2018. First available in Project Euclid: 25 September 2019.
doi:10.3150/18-BEJ1095 https://projecteuclid.org/euclid.bj/1569398772
Keywords: Banach-valued process, Bernstein inequality, concentration, spectral regularization, weak dependence
Citation: Blanchard, Gilles; Zadorozhnyi, Oleksandr. Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods. Bernoulli 25 (2019), no. 4B, 3421-3458. doi:10.3150/18-BEJ1095.
Enduring the great recession: Economic integration in the European Union
Lauren Peritz, Ryan Weldzius, Ronald Rogowski & Thomas Flaherty
The Review of International Organizations (2021)
Scholars have long feared that regional economic specialization, fostered by freer trade, would make the European Union vulnerable to economic downturn. The most acute concerns have been over the adoption of the common currency: by adopting the euro, countries renounce their ability to meet an asymmetric shock with independent revaluations of their currencies. We systematically test the prediction that regional specialization increases vulnerability to economic downturn using a novel dataset that covers all of the EU's subnational regions and major sectors of the economy between 2000 and 2013. We find that, contrary to conventional wisdom, the most specialized regions actually fared better during the 2008-09 global financial crisis. Specialized regions performed worse only in states that remained outside the Eurozone. The heightened vulnerability of non-Eurozone states cannot be attributed to fiscal or social policy failures. Rather, our results suggest the common currency may have helped Eurozone members share risk. This bodes well for the resiliency of the EU, even as it navigates another economic downturn from the asymmetric impact of the novel coronavirus.
Even before the COVID-19 crisis and the rancorous negotiations for an EU-wide relief package, it was clear that the European project—the goal of an "ever closer union"—was facing trouble. Brexit, the recurrent southern debt crises, a chain of banking insolvencies, the imperiled Euro, and increasing support for anti-European Union political parties have raised questions about whether the European project's design is fundamentally flawed.
In its briefest form, the case for the prosecution runs as follows. From its inception, the European Union (EU) has taken as fundamental the principle of free trade in goods, services, and factors of production (labor and capital) among its member states. As in any system of free trade, but especially in one that trades so intensively as the EU, governments fear competitive devaluations of other states' currencies. This is a notoriously simple but highly effective form of import protection and export promotion. In the view of the states that had repeatedly fallen victim to competitive devaluations (chief among them Germany), efforts to obviate that threat through semi-fixed exchange rates had repeatedly failed. The only ironclad commitment against competitive devaluations was the adoption of a common currency (cf. Frieden 2015, p. 150). But, as we know from the theory of optimum currency areas (OCAs), separate states can sustain a common currency only to the extent that they experience symmetric business-cycle shocks, i.e., slump and boom together, or can compensate asymmetric shocks through automatic transfers. Hence, the whole project of European unification could easily collapse from the pressures of asymmetric shocks, unless Brussels received the mandate and means to transfer resources from the booming to the stricken economies.
A lot hinges on Europe's immunity, or at least resilience, to asymmetric economic shocks. Yet the very success of economic integration may paradoxically have made Europe more vulnerable to such dislocations.
Almost three decades ago, Krugman (1991) speculated that dismantling the remaining barriers to trade within the EU would lead to increased agglomeration of production and greater dissimilarity of member-state production profiles. As Krugman put it, perhaps "eventually Europe will look like America, with a similar degree of localization and specialization," i.e., concentration of specific sectors in a few places (ibid., p. 79). More presciently, he also conjectured that such specialization, precisely by exposing different regions and countries to greater risk of asymmetric shocks, might well make the EU even less of an optimal currency area and endanger the project of monetary union that had then been set in train. Whether greater regional specialization entails greater asymmetry in business cycles has, of course, been hotly debated. Examining twenty countries' bilateral trade and business cycles over thirty years, Frankel and Rose (1998) found that greater intensity of trade led to more symmetric business cycles. Willett et al. (2010) find that this relationship holds after the adoption of the euro in 1999, by comparing business cycle correlations in the decade before euro adoption (1980-1990) and the years immediately following adoption (1999-2005). On the other hand Kalemli-Ozcan et al. (2001) found, among OECD countries and US states, that greater specialization was indeed associated with more asymmetric business cycles. Underscoring the limitations of their findings, they concluded that "which effect will dominate in the European Monetary Union remains an open empirical question" (Kalemli-Ozcan et al. 2001, pp. 110). We address exactly this empirical question, considering the consequences of regional specialization—both between and within countries—during an economic downturn. The global financial crisis of 2008-09 constitutes an important test of European integration because it highlights susceptibility to a regionally-asymmetric shock. As the worst slump in aggregate demand, and hence the steepest decline in many industries' production, since the 1930s, the global financial crisis should have revealed any disparities in regions' differential vulnerability.Footnote 1 Leveraging newly-released data from Eurostat and a constellation of national data sources, we address empirical questions about specialization and asymmetry. Specifically, we ask at the level of Eurostat's 266 NUTS-2 regions, two questions: (a) Were Europe's most specialized regions hit hardest in the crisis? And (b) did membership in the Eurozone magnify the ill-effects of the crisis? Our answers are, in both cases, no. (a) Europe's most specialized regions were not, in fact, hit hardest by the 2008-09 economic shock. The EU's regions displayed steady levels of specialization between 2002 and 2008, the years immediately preceding the crisis.Footnote 2 By the point the crisis hit, the extent of specialization was a given. In more than half of EU states, the most specialized regions actually experienced a less severe decline in economic activity than their more diversified counterparts. In only six of the member states was greater regional specialization associated with a more severe downturn; all six countries are outside the Eurozone.Footnote 3 (b) Eurozone membership was actually associated with less severe economic contractions in the wake of the global financial crisis. Krugman's fears about asymmetric shocks destabilizing the EU, informed by OCA theory, Footnote 4 simply do not bear out, at least in the short-term. 
Instead, our findings suggest that Frankel and Rose were correct: a common currency pools risk amongst its members, insuring those areas most susceptible to economic downturn. The implications are significant. Whether the EU is a stable arrangement is a crucial and timely question, particularly given the asymmetric fallout from COVID-19. Our findings point toward a positive answer and, more broadly, suggest that this form of deep interstate cooperation can be quite resilient to sudden economic shocks. The crisis of 2008-09 dealt the least severe blow to Europe's most specialized regions. This clear finding lends some confidence that freer trade is making Europe less, not more, susceptible to asymmetric shocks. If, instead, the crisis had impacted regions asymmetrically, it would be bad news indeed. Had specialization necessarily rendered regions more vulnerable, an asymmetric downturn would pose a profound political challenge both for the EU and for the governments of member states. One political solution would then have been to slow or reverse specialization, e.g., through a more aggressive use of European regional development funds to promote diversification of less developed regions and dispersal of otherwise regionally concentrated high-tech sectors. That would however, as economists have long recognized, come with a cost to economic growth in the EU as a whole, since regionally concentrated industries are usually more efficient (Ciccone 2002). Our results indicate that these concessions are fortunately unwarranted. Deep economic integration, and the regional specialization it fosters, appears more sustainable than Krugman (1991) and others predicted. Economic integration in the European Union Scholars and architects of the European Union alike foresaw significant risks from deep economic integration and, in particular, a shared currency. The concern was that an economic downturn would hit asymmetrically, affecting countries differently and, worse still, delivering uneven punches to different regions within countries. Asymmetric shocks are concerning because Europe faces the simultaneous need for, but problematic possibility of, sustaining an optimum currency area. Asymmetric shocks would lead governments to disagree amongst themselves over the appropriate form and scope of an EU response: some might advocate an aggressive and others a modest collective response, each with the potential to stoke nationalist sentiments within countries (e.g., Bechtel et al. 2014). At the same time, those governments would be under tremendous pressure to decide which geographical regions, industries, or sectors within their domestic purview should receive assistance. This concern grew out of a fundamental quandary in the project of European unification. Central to the economic integration of Europe, countries elected to remove all barriers to trade among them. By allowing free movement of goods across borders, states could ostensibly reap the benefits of specialization and boost economic growth. There is strong evidence that national specialization of production profiles grew markedly between 1968 and 1990 (Amiti 1998). Regional specialization within countries also increased in tandem with the "free-ness" of trade within the bloc during the 1990s (Brakman et al. 2006). Yet trade could not become fully free, nor its full efficiency gains achieved, so long as European governments maintained independent currencies—or so it was widely believed. 
Governments that maintained their own currencies would always be tempted, particularly in a downturn or before an impending election, to boost exports and stimulate their economies through timely devaluations. Currency manipulation would have similar effects as might have been achieved by the production subsidies that the EU treaties clearly forbade. The brief and ill-fated experiment of a supposedly irrevocable peg, the Exchange Rate Mechanism, had amply confirmed these fears (Obstfeld 1996). In adopting the euro, the EU aimed to deepen and stabilize the internal market, removing the risk that member states would engage in competitive devaluation to gain market advantage. Countries gain from a shared currency, according to the theory of optimum currency areas (OCAs), to the extent that they trade with one another, but lose to the extent that their business cycles are uncorrelated—i.e., that they experience "asymmetric economic shocks." Clearly, asymmetric shocks are crucial for understanding the political feasibility of monetary unions. We highlight the most commonly recognized threat to asymmetric shocks in currency unions: specialization. In what has become the conventional wisdom, Krugman (1991) argued that deeper economic integration would increase regional specialization—both between countries and between local labor markets within countries—which would expose regions to a greater risk of asymmetric shocks due to industry-specific downturns. When most specialization occurs between countries, asymmetric shocks will fracture interests among them. We would expect this when member states significantly differ in factor endowments, which induces factor-based trade and therefore country-level specialization. If, however, member countries share similar factor endowments, then most trade will be intra-industry with specialization patterns developing between local labor markets within countries. For example, each member country develops its own manufacturing hub and its own agricultural regions to satisfy consumer love of variety. Italy will export to the rest of Europe cars manufactured in the Emilia-Romagna region and wines from the hills of Tuscany and Abruzzo, while at the same time importing German cars from Munich and French wine from Bourdeaux. Consistent with this, most countries exhibit significant variation in specialization between subnational regions, which drives geographic heterogeneity in economic outcomes such as unemployment, real wages, productivity, and innovation (Enrico 2011; Autor et al. 2016). To the extent that this type of specialization dominates a monetary union, asymmetric shocks will likely develop within members rather than between them. Asymmetric shocks within member states pose a significant threat to monetary unions. Asymmetries between local labor markets fracture domestic support for monetary integration along geographic lines. In the event of an economic crisis, more exposed specialized local labor markets will favor a more aggressive monetary policy than more diversified labor markets in the same country would prefer. This variant of asymmetric shocks can therefore tear apart member states from within. Consequently, the politics of monetary integration would look similar to the domestic politics of trade and immigration, emphasizing how anti-globalization sentiment reflects specialization patterns between local labor markets (Colantone and Stanig 2018a, b; Georgiadou et al. 2018). In reality, specialization varies both within and between countries. 
This produces the potential for asymmetric shocks to split monetary coalitions into country-groups and local labor market groups. In recognition of the potential importance of subnational regions in interstate politics, our analysis asks how specialization between European labor markets contributes to asymmetric shocks within and between countries. This leads us to our first empirical expectation: the more specialized a member state's regions, the more exposed it will be to an economic shock and the worse will be its subsequent economic performance. In other words, the financial crisis exposed the European Union to asymmetric shocks among member states and within them. Our analysis section describes in detail how a regional unit of analysis can be used to measure both types of asymmetry. The alternative hypothesis is that specialization does not induce asymmetric shocks. That is, specialized regions fare no worse than their less-specialized counterparts (or even fare better). We might expect this for three reasons: (i) sufficient, and ideally automatic, fiscal transfers; (ii) ample labor mobility between regions and countries; or (iii) a pooling of risk among members. First, fiscal transfers allow members to maintain symmetric business cycles by injecting stimulus into the areas experiencing the largest contractions. Economists have long warned that in the absence of a fiscal union—common taxation and policies to absorb regional disturbances—Europe would be on thin ice (Eichengreen 1990; Sala-i Martin and Sachs 1991; Bayoumi and Masson 1995). Brussels, whose revenues are less than 2 per cent of EU GDP, simply lacks the means to carry out significant interregional transfers. This is abundantly clear in the amounts allocated in the EU Cohesion fund, which aims to reduce disparities between member states by distributing funds to members whose gross national income is below 90 percent of the EU average. During the 2008-09 global financial crisis, the fund allocated approximately 57.8 billion euros, which amounts to just 0.036 per cent of the 1.6 trillion euros lost during the same time period.Footnote 5 More than a decade after the global financial crisis, there is hope for a deeper fiscal union with the July 2020 agreement for a 750 billion euro rescue package to help depressed areas hit hard by the novel coronavirus. But with only 390 billion taking the form of grants due to intransigence of the so-called "Frugal Four"—Austria, Denmark, Netherlands, Sweden—uncertainty remains for a systemic overhaul of EU-wide fiscal policy. Second, if fiscal transfers are lacking, the relocation of labor from low growth to high growth countries may allow for more symmetric business cycles. However, significant labor market frictions in the EU limit worker migration, thereby allowing differences in growth and unemployment to persist. Language barriers in the EU represent one such friction. Indeed, a survey among European labor market experts found that the single European labor market had not reached its potential due to these language barriers, but also due to the lack of EU harmonization of professional qualifications and social security systems (Krause et al. 2017). Comparative studies of labor mobility in the US and EU consistently show less mobility within the EU, although this gap narrows with the EU enlargement (Decressin et al. 1994; Bentivogli and Pagano 1999). There is little evidence of wage convergence between countries in the EU (Naz et al. 2017). Most important to our paper, Arpaia et al. 
(2016) and Jauer et al. (2019), and Basso et al. (2018) confirm that labor mobility in the EU is not high enough to absorb an economic downturn fully. The best estimates suggest that migration absorbed about one-quarter of the 2008-09 shock within a year. However, the results are driven largely by migration of recent EU accession country citizens. With limited fiscal transfers and insufficient migration to stymie asymmetric shocks, we might still expect specialized regions to fare no worse than their less-specialized counterparts due to risk sharing—either public or private—amongst regions. Risks are shared publicly—and so far within states, not among them—by automatic fiscal transfers, deposit insurance, and implicit guarantees. Risks are shared privately—both within and among states—through portfolio adjustments—e.g. by major banks and institutional investors that hedge against any regionally-specific shock—as well as through inter-regional trade, where external demand replaces the depressed local demand (Schelkle 2017). We contend that this private risk-sharing among member states substitutes for the absence of fiscal transfers (i.e., public risk-sharing) and labor mobility during a crisis. As Frankel and Rose (1998) suggested, OCAs may indeed be "endogenous." That is, as regional trade and specialization increase, so does risk-sharing, and with it, symmetric business cycles. As the financial crisis spread from its origins in the United States, few people anticipated how rapidly and comprehensively it would permeate Europe's economy. By the end of 2009, real GDP across Europe dropped by four per cent, the sharpest contraction in its history and the deepest recession since the 1930's (European Commission 2009). It displayed exactly the features that critics of the Eurozone feared. First, the crisis was unforeseen and sudden: it was, unlike the subsequent sovereign debt crisis, a predominantly exogenous shock.Footnote 6 Following the US stock market plunge, the downturn rapidly spread through Europe's financial sector and then the economy at large, precipitating a significant downturn in production. Second, the shock had a varied effect across regions in the EU, as well as sectors within the economy. Third, the downturn was too severe for the member states to mobilize a rapid coordinated response. Indeed the insufficiency of a coordinated response became evident in the following years. EU bureaucracy undertook greater economic policy surveillance and supervision of the financial sector (e.g., Hodson 2011;Bauer and Becker 2014) but the scale of reforms was limited. Moreover, the European Central Bank, with pressure from the conservative Bundesbank, was reluctant to engage in the unorthodox monetary policies that buoyed economies abroad. Thus the suddenness, asymmetry, and intractability of the financial crisis were precisely the perfect storm that risked unraveling the economic and monetary union. This leads to our second empirical expectation: If the Eurozone heightened member states' vulnerability to asymmetric shocks, then, all else equal, Eurozone membership should be associated with an especially severe contraction in production following a shock. Conversely, if members of the Eurozone fared equally well or better than their non-Eurozone counterparts, then it suggests a two-tiered system is a durable arrangement. We now turn to explaining the data we employ to analyze our two empirical expectations. 
Data and empirical model

To assess the European Union's vulnerability to asymmetric economic shocks, we assembled an extensive data set. Our main dependent variable is the economic performance—measured as the one-year change in gross value added (GVA)—from each sub-national region within the EU between 2000 and 2013: \(\varDelta \log (\text {GVA})_{t\text { to }t+1}\).Footnote 7 Sub-national regions are measured at the NUTS-2 level reported by Eurostat, and we distinguish among seven sectors of production. Here, as in calculating the other variables, we separated or combined industries in order to merge across data sources and fill in more complete data from national sources. Some member states joined the EU after our sample commences; for these countries, the first year of observation is the year of admission.Footnote 8 Thus our underlying data set consists of an imbalanced panel covering 19 countries,Footnote 9 266 regions, and seven sectors for each year from 2000 (or year of accession) through 2013, for a total of 23,156 observations. See the Appendix for details.

The global financial crisis was felt across the EU unevenly. Figure 1 displays the economic performance for each region. It is measured as the annual percent change in GVA from 2008 to 2009 and illustrates the geographical variation in the crisis across the EU at the level of NUTS-2 regions. Some regions, especially in the EU periphery, experienced drastic contractions in their local economies, of up to 23 per cent, while others weathered the crisis relatively unscathed. Six regions experienced minute growth during this period.Footnote 10 Likewise, there was a significant range in country-wide contractions. For instance, France's country-wide decline in GVA was approximately 2.8 per cent, while the UK suffered dramatically in that same time frame with a 14.5 per cent contraction. Clearly, the financial crisis had major negative repercussions for some countries' overall economies but not others. Despite significant cross-national and cross-regional discrepancies, the time trends in GVA for regions in the most and least specialized countries are strikingly similar in the pre-crisis period.

Figure 1: EU Map of the % Change in GVA between 2008 and 2009, by Quantile

We select this time-frame to focus on the short-run repercussions of the 2008-09 crisis. We do so for good reason. The 2008 to 2009 period brought a distinctly severe contraction in GVA across the vast majority of EU regions, as shown in Fig. 2. Production in the EU fell by nearly 4.5%, the most severe decline in the history of the institution. There was a slight rebound in the ensuing year and then an approximate return to previous growth rates.

Figure 2: Distribution of annual change in GVA for regions. The boxplot displays yearly distributions of the one-year percentage change in GVA for NUTS-2 regions. Boxes cover 25th to 75th percentiles with medians shown. The most significant economic contraction occurred from 2008 to 2009.

We end our sample with the 2012–2013 change in GVA, but also examine abbreviated samples to ensure we are not confounding the financial crisis with the sovereign debt crisis, a related but distinct economic event that subsequently shook the EU. Figure 2 clearly demonstrates that the 2012–2013 change in GVA was similar to trends in the immediate years preceding the crisis. This dispels concern that our analysis is confounded by a second downturn from the sovereign debt crisis; the 2008-09 shock was uniquely severe.
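As a concrete illustration of how this dependent variable is built (a minimal sketch, not the authors' replication code), the snippet below computes the one-year change in log GVA within each NUTS-2 region; the column names and toy numbers are assumptions made only for this example.

```python
import numpy as np
import pandas as pd

# Sketch: one-year change in log gross value added, Delta log(GVA)_{t to t+1},
# computed within each NUTS-2 region. Column names are illustrative only.
def one_year_log_change(panel: pd.DataFrame) -> pd.DataFrame:
    panel = panel.sort_values(["region", "year"]).copy()
    panel["log_gva"] = np.log(panel["gva"])
    # pair year t with year t+1 within each region
    panel["dlog_gva"] = panel.groupby("region")["log_gva"].shift(-1) - panel["log_gva"]
    return panel

# Toy data for two regions around the 2008-09 contraction
toy = pd.DataFrame({
    "region": ["DE21"] * 3 + ["UKC1"] * 3,
    "year":   [2007, 2008, 2009] * 2,
    "gva":    [100.0, 102.0, 97.0, 80.0, 81.0, 70.0],
})
print(one_year_log_change(toy)[["region", "year", "dlog_gva"]])
```

The value attached to year t is the growth from t to t+1, so the explanatory variables measured in t (including the 2008 shock indicator) line up with the subsequent change in output.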
Our goal is to test whether specialized regions were more susceptible to an EU-wide economic shock than their more diversified counterparts. We define an asymmetric economic shock by the interaction between regional specialization and the sudden 2008 economic shock. The two constituent parts of the interaction term represent the degree of asymmetry, measured as the extent to which regions are economically specialized (typically in different sectors), and the shock, which occurred suddenly and at approximately the same time for each region. To the extent that some regions were quite specialized and others were not, they would be expected to be asymmetrically vulnerable to the shock. The first component of the interaction term and our most salient explanatory variable, specialization, measures inter-sectoral heterogeneity within each region. We derive specialization from the same region- and sector-level data on gross value added that inform our dependent variable. We construct an absolute Gini index of specialization to measure the inter-sectoral heterogeneity within regions, which is calculated by the standard Gini equation: $$ \textsc{Specialization}_{jt}=\frac{2{\sum\limits_{i=1}^{n}iy_{ijt}}}{{n\sum\limits_{i=1}^{n}y_{ijt}}}-\frac{n+1}{n}, $$ where yijt represents region j's value added (GVA) in sector i (i = 1,…,n, the sectors ranked in ascending order) in year t as a share of total value added in that given region and year. Specifically: $$ y_{ijt}=\frac{\text{GVA}_{ijt}}{{\sum\limits_{i=1}^{n}\text{GVA}_{ijt}}}. $$ Our measure summarizes the sectoral variation and produces a panel data set consisting of 3,308 region-year observations. The absolute Gini index remains very stable over our sample period 2000–2013, increasing by merely half a per cent with no interruption from the 2008-09 crisis. Variation among regions also remained steady. Footnote 11 The steadiness of specialization in the period under study is helpful for our research design because it allows us to isolate the potential impact of a sudden asymmetric shock. The absolute Gini index is only one of several possible ways to measure specialization. As a robustness check, we also calculate a "relative Gini index" to measure the inter-sectoral heterogeneity between regions in the EU (Dixon et al. 1987) and we find it to produce nearly identical results. Nonetheless, following Aiginger and Rossi-Hansberg (2006), we prefer to use the absolute Gini over the relative Gini index because the latter is unstable for small countries.Footnote 12 Our second explanatory variable, shock, is an indicator that equals one in the year 2008 and zero otherwise. As clearly depicted in Fig. 2, 2008 brought a uniquely severe decline in production.Footnote 13 The acute drop in demand confirms that the crisis quickly spread throughout the economy and was by no means limited to the financial sector. In robustness checks, we alternatively use a continuous shock variable measured as the regional deviation in GVA from its long-term trend.Footnote 14 This accounts for the possibility that the crisis deepened over the subsequent years. Figure 3 displays geographical variation in specialization at the outset of the crisis in 2008. The economic "core"—e.g., UK, France, Germany, and Benelux—tends to exhibit high specialization: regional GVA originates from a few sectors. By contrast, "peripheral" countries—e.g., Spain, Portugal, Greece, Ireland, Romania, and Bulgaria—are more diversified. 
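The two explanatory variables can be computed directly from the expressions above. The sketch below is an illustration only, assuming a region-sector-year panel with columns region, year, sector, and gva (illustrative names, not the authors' variable names).

```python
import numpy as np
import pandas as pd

def absolute_gini(sector_gva) -> float:
    """Absolute Gini index of sectoral specialization for one region-year,
    following the equation in the text (shares ranked in ascending order)."""
    y = np.sort(np.asarray(sector_gva, dtype=float))   # ascending order
    y = y / y.sum()                                     # shares y_ijt of regional GVA
    n = len(y)
    ranks = np.arange(1, n + 1)                         # i = 1, ..., n
    return 2.0 * np.sum(ranks * y) / n - (n + 1) / n

def explanatory_vars(panel: pd.DataFrame) -> pd.DataFrame:
    """Region-year specialization plus the binary 2008 shock indicator."""
    spec = (panel.groupby(["region", "year"])["gva"]
                 .apply(lambda g: absolute_gini(g.values))
                 .rename("specialization")
                 .reset_index())
    spec["shock"] = (spec["year"] == 2008).astype(int)
    return spec

# Sanity checks: an evenly diversified region vs. a fully specialized one
print(absolute_gini([1, 1, 1, 1, 1, 1, 1]))   # ~0.0
print(absolute_gini([0, 0, 0, 0, 0, 0, 7]))   # (n - 1) / n = 6/7
```

With seven sectors, the index runs from 0 (GVA spread evenly across sectors) to 6/7 (all GVA in a single sector), which matches the magnitudes of the sample means reported in the footnotes.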
Specialization varies substantially across countries and regions within those countries, but it varies little over the years in our sample.

Figure 3: EU Map of specialization in 2008, by Quantile

In summary, we test whether the most specialized regions suffered disproportionately from the 2008 economic shock. If specialization produced greater asymmetry in the downturn, we would expect to observe either that \(\varDelta \log\)(GVA) correlated negatively with the interaction of specialization and shock, or that its variance increased with that interaction. To evaluate our predictions, we model a linear relationship between the financial crisis and specialization on the one hand, and subsequent economic performance on the other. The modeling challenge is to capture the data structure of regions nested within countries, each observed over time. A multilevel (hierarchical) model is the perfect tool for this task. It allows us to estimate both "pooled" and country-specific effects. The "pooled" portion gives the mean relationship over all NUTS-2 regions, regardless of the country in which they are located. The "unpooled" or country-specific effects give the mean relationship for the regions within each country, through the use of country random intercepts and coefficients. For instance, Spain's estimated effect, based on annual observations of its 19 regions, will differ from Germany's estimated effect, which is based on its 37 regions observed over time. Our baseline model is: $$ \begin{array}{@{}rcl@{}} \varDelta\log(\text{GVA})_{t\text{ to }t+1}\!& = &\! \beta_{1}(\text{Specialization}_{t}\times\text{Shock}) + \beta_{2}\text{Specialization}_{t} + \beta_{3}\text{Shock} \\ && + \beta_{n}\text{Controls}_{t} + \mu_{0[k]} + \mu_{1[k]}(\text{Specialization}_{t}\times\text{Shock}) + \eta_{t[k]} \end{array} $$ where \(\mu_{0[k]}\) allows countries \(k \in \{1, 2, \ldots, 19\}\) to have different intercepts. The random intercepts account for unobserved country-level factors. Parameters \(\mu_{1[k]}\) model country heterogeneity by allowing slopes to vary by country. Because the multilevel model captures the structure of time-series NUTS-2 panels nested within countries, it allows us to examine how countries experienced strain from asymmetric shocks. Although our reliance on observational data makes it impossible to identify causal effects, we do use a one-year lag to mitigate simple endogeneity. In summary, the interaction term captures the joint effect of the recessionary shock and specialization in 2008—i.e., an asymmetric shock—on GVA growth from 2008 to 2009 for all regions (pooled) and by country (unpooled).

The multilevel model is especially helpful in capturing the two types of asymmetric shocks. First is the asymmetry between regions within a country. Each country coefficient estimates how specialization within a country during the crisis affects subsequent economic output. Second is the asymmetry across the entirety of the European Union. The pooled coefficient estimates how specialization across the EU during the crisis affects subsequent output. If EU-wide asymmetry is what matters most, then we should observe a negative pooled coefficient and the country-specific coefficients would not be significant. If within-country asymmetry is more important, then we should observe an insignificant pooled coefficient and many statistically significant country-specific coefficients. Variation between countries is captured by the heterogeneity of the country-specific coefficients.
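A comparable specification can be sketched with statsmodels' mixed-effects estimator. This is only an illustration of the model structure, not the authors' implementation: synthetic data stand in for the actual panel, most controls are omitted, and all names and simulated magnitudes are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the baseline multilevel model: country random intercepts plus
# country random slopes on the Specialization x Shock interaction.
rng = np.random.default_rng(0)
rows = []
for c in range(12):                                  # countries k
    mu1_k = rng.normal(0.0, 0.02)                    # country-specific slope deviation
    for r in range(10):                              # NUTS-2 regions per country
        spec = rng.uniform(0.25, 0.55)               # specialization (roughly stable over time)
        for year in range(2000, 2014):
            shock = int(year == 2008)
            dlog_gva = (0.02 - 0.05 * shock
                        + (-0.01 + mu1_k) * spec * shock
                        + rng.normal(0.0, 0.01))
            rows.append((f"C{c:02d}", f"C{c:02d}R{r:02d}", year, spec, shock, dlog_gva))

df = pd.DataFrame(rows, columns=["country", "region", "year",
                                 "specialization", "shock", "dlog_gva"])
df["spec_x_shock"] = df["specialization"] * df["shock"]

model = smf.mixedlm(
    "dlog_gva ~ spec_x_shock + specialization + shock",
    data=df,
    groups=df["country"],            # regions nested in countries
    re_formula="~ spec_x_shock",     # random intercept and random slope by country
)
result = model.fit(reml=True)
print(result.summary())
# Fixed coefficient on spec_x_shock  -> the 'pooled' asymmetric-shock effect.
# result.random_effects[country]     -> country-specific deviations (the 'unpooled' part).
```

The pooled effect corresponds to the fixed coefficient on the interaction, while the per-country random effects are what the figures below summarize for each member state.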
To improve model estimates, we control for factors that make regions and countries more likely to specialize, and thus more likely to experience asymmetric shocks. We also account for factors that insulate a region from the ill-effects of a shock. Membership in the Eurozone is thought to reinforce a country's economic specialization. Eurozone members have accepted Maastricht "convergence criteria" including constraints on their fiscal policy. At the same time, the coordination of policies mandated by Eurozone membership likely affects economic performance beyond what might be predicted by specialization alone—most notably their shared monetary policy as set by the European Central Bank. We control for Eurozone status using an indicator that varies by country and year and determine whether membership status is associated with worse economic performance.Footnote 15 We include basic predictors for economic output: the population density and per-employee productivity measured at the NUTS-2 region level. One might expect a more densely-populated region to have a stronger urban core with a mobile, adaptable, and highly-productive labor force and hence be better able to withstand the shock; while a region with lower pre-shock labor productivity (i.e., lower GVA per employee) might be more vulnerable to firm turnover (Melitz and Ottaviano 2008). Another key control is trade with other member states of the EU, measured as the given country's exports to other EU members as a portion of national GDP.Footnote 16 Trade integration is sometimes seen as stimulant to private risk-sharing (Frankel and Rose 1998; Kalemli-Ozcan et al. 2005). The more firms rely on markets outside their immediate regions, and the more value chains span different member states within the EU, the less sensitive any single region may be to an economic shock. Although intra-EU exports vary cross-nationally, they remain stable over time for core EU members. Only among the newer members has trade intensified.Footnote 17 We expect that—all else equal—countries with more intra-EU exports experience less severe asymmetric shocks. A positive coefficient on trade would lend support for the idea that private risk-sharing through trade integration reduces shock asymmetry. Our second set of controls accounts for social and fiscal policies that might make countries more or less vulnerable to an economic shock. We account for a country's core government spending each year as a percentage of its GDP,Footnote 18 public expenditures on social transfers as a percentage of GDP,Footnote 19 and the generosity of unemployment benefits measured as the wage replacement rate.Footnote 20 Countries with more generous safety nets are thought better able to absorb the negative effects of an economic downturn. Necessarily, some expansion of government spending occurred in the wake of the crisis and our third set of controls accounts for such factors. The European Commission was greatly concerned with "fiscal space," or the ability of governments to run temporary fiscal deficits without threatening their public finances or external positions (European Commission 2009). The size of the response was limited in countries with large public debts and vulnerable current account positions.Footnote 21 Thus to the extent they could, some governments quickly responded by extending assistance to banks. The European Investment Bank also participated in "bailouts" by extending loans.Footnote 22 We account for each outlay, measured as a share of GDP. 
Despite the political importance of these fiscal responses, their scale paled in comparison to efforts implemented by the United States. To the extent that the "bailout" efforts were endogenous to the depth of the shock, we expect they were modest enough in magnitude that they did little to counteract the asymmetry. Summary statistics are provided in the online appendix, which can be found on the Review of International Organizations' webpage. Effect of asymmetric shocks on total economic output Did specialization exacerbate the ill-effects of the crisis? To evaluate this central question, we examine regressions of the yearly change in GVA on our main measure of the asymmetric shock, i.e. the interaction between shock and specialization. We find that asymmetric shocks are not the culprit: specialized regions within countries fared approximately as well as their more diversified counterparts. First, we establish the basic correlations in our data by running a simple linear regression model. The first model in Table 1 does not include any of the hierarchical features; it only uses fixed effects by country. Model (1) shows that overall, regions that were most specialized when the crisis hit in 2008 experienced the least severe downturn.Footnote 23 Table 1 Regression of change in regional value added on asymmetric shock Next, we present our hierarchical model results in Table 1(2) through (6). For statistical reasons, we must restrict this analysis to the subset of EU countries with at least four NUTS-2 regions.Footnote 24 Consider the pooled estimates for the asymmetric shock. Here, we find no significant effect. Had asymmetric shocks provoked more severe declines in GVA, we would have observed a significant negative coefficient on the interaction term, Specialization×Shock. This null effect appears whether we use a binary indicator for the shock year (Table 1(2)) or a continuous measure of the shock's severity (Table 1(3)). The decline in regional GVA during this time was dramatic, with some regions experiencing as much as a 14 per cent fall in a single year, and the most substantial burden was borne by Eurozone countries. Did specialization have any bearing at all on regional economic performance? To evaluate this possibility, we expand the model by introducing a series of control variables. Table 1(4-5) includes economic and fiscal controls. Productivity per employee tends to be higher in regions dominated by high-tech manufacturing, financial services, etc. and indeed these regions that had the most to lose declined significantly. Conversely, trade integration appears to have mitigated regional economic contractions. The more export-oriented a state's economy, the more comfortably it could rely on demand outside its national borders and the better it fared. The significant positive coefficient on trade is consistent with the idea that private risk-sharing through trade integration reduces shock asymmetry. Social transfers—as a share of a country's GDP—grew steadily over the sample period (column 5), displaying little deviation from the trend during the crisis year. Social spending was associated with worse subsequent economic performance, suggesting that this form of government outlays did little to buoy hard-hit regions. Unemployment benefits, measured as the average percentage of wage replacement each claimant receives, are not a significant predictor. 
The response of national governments and the European Investment Bank (EIB) to the crisis did not seem to play a significant role in mitigating the downturn. In model (6) we demonstrate that the "bailout" efforts were not significantly associated with subsequent economic performance. In additional tests shown in the appendix, we found that the type of electoral system had little bearing on economic performance. Nor did the degree of district proportionality. One might anticipate that government ideology plays a role; leftist governments may tend to be more interventionist (or pursue a more Keynesian policy response to the crisis). Our additional tests suggest that leftist governments were associated with languid rebounds, although the direction of the causal arrow is far from clear. One might argue that the full brunt of the financial crisis was not felt immediately; that there was a delay between the shock and decline in GVA. We checked this possibility by varying the temporal lag and confirmed that the downturn did emerge within one year, as originally posited in our statistical model. Turning our focus now to the variation across countries, Fig. 4 shows the country-effects of Specialization×Shock corresponding to Table 1(3). In six of the nineteen countries we were able to examine—Poland, Romania, Sweden, Hungary, Czech Republic, and the UK—the more specialized regions experienced especially severe declines in GVA. Otherwise, specialization was either associated with better outcomes (ten countries, including such "peripheral" member states as Spain, Italy, Portugal, and Greece) or made no difference (three countries). The country-level heterogeneity is hardly trivial: the standard deviation in random effects on the interaction term markedly exceeds that of the pooled effect.Footnote 25 As we suggest below, some of the heterogeneity can be attributed to the specific sectors in which countries specialized. Random Effects of Specialization×Shock on Change in GVA from Multilevel Model. Random effects on interaction between specialization and shock, grouped by country. Estimates are based on Table 1(3) Notably, all of the countries in which greater regional specialization was associated with worse regional outcomes were ones that retained their own currencies—i.e., they were not members of the Eurozone. Table 2 presents our analysis in which we compare Eurozone member states to nonmembers during the Great Recession. We use country random intercepts and our various control measures.Footnote 26 While membership in the monetary union was associated with slower economic growth over the whole period, it predicts better (or, at least, less bad) outcomes. This is consistent with the idea that use of a common currency encourages the development of risk-sharing mechanisms that can cushion the shock of a downturn. Table 2 Multilevel regression of change in GDP on asymmetric shock and Eurozone×shock It might still be the case that more specialized regions, even though they generally survived the shock better, exhibited a greater diversity of responses: the classic "rust belt vs. sun belt" asymmetry. We turn now to that possibility, asking (a) to what extent more diversified regions diverged from EU-wide trends, and (b) how much they diverged from the average response within their own country? While it may seem counterintuitive, we conjecture from the Frankel-Rose perspective that deviations will have been greater within than across countries. 
Within countries, governments construct extensive mechanisms for public risk-sharing and regions can specialize without much private sharing of risk. Across countries, the overall EU architecture of governance is far too weakFootnote 27 to permit any reliance on public risk-sharing; hence regions can only specialize to the extent that private parties hedge against EU-wide risk. Our logic here is similar to that propounded by Estevez-Abe et al. (2001) with respect to firm- and sector-specific production. Individual specialization is rational where generous welfare states insure against obsolescence of specific skills, and in less generous regimes individuals will insure against that risk by developing more general skills. Analogously, we expect that regions will tend to specialize within countries only to the extent that their domestic governments provide mechanisms of public risk-sharing, i.e. state aid to regions encountering hard times, and that extensive public risk-sharing will reduce incentives of firms to privately share risk by diversifying production and regional reliance. The difference will manifest during a sharp economic contraction. Under reliable public risk-sharing, specialized individuals or regions will exhibit highly varying responses; where risks are shared only privately, the response even of specialized individuals or regions will vary less. We conducted a series of additional statistical tests, presented in the Appendix. We find that relative to EU performance, specialized regions tended on average to endure a less severe economic contraction. During non-shock years, these specialized regions differed more from EU-wide trends, providing further support that these regions tend to be more productive than less specialized regions. Perhaps these results are not only about specialization in general, but rather, the sector in which a region specializes, in particular. Accordingly, we address the role of sectoral specialization in surviving a global economic downturn. Sectoral effects Our results thus far have shown that asymmetric shocks were far from the hazard they were purported to be. While regional specialization per se was associated on average, and in most countries, with better outcomes, the sector in which a region specialized also mattered. We suggest here that sectoral effects explain a great deal of the regional variation across, and even more within, countries. Table 3 displays the EU-wide loss in value-added in each of the seven sectors we consider. Clearly agriculture, manufacturing, and construction suffered most, the public sector, public utilities, and mining and quarrying least. This EU-wide measure, however, fails to capture the impact at the regional level, not least because some sectors constituted so small (agriculture), or so regionally uniform (public administration), a share of total production. Hence we replicate the multivariate regression of the preceding analysis, but include, instead of the overall Gini of specialization, the shares of regional production in sector i in year t. Table 3 Average regional decline in sector value added, 2008 to 2009 We evaluate the economic performance of the regions as function of specialization in each of the sectors. We run a series of regressions where our outcome is again the percentage change in GVA and the explanatory variables are specialization in each sector. Because sector shares are compositional, we exclude sector A (agriculture) as the reference category. 
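A minimal sketch of this sectoral specification is given below, anticipating the estimation details described in the next paragraph (country random intercepts, a Eurozone indicator, and year-by-year fits). The sector labels, column names, and use of statsmodels are assumptions for illustration, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

SECTORS = ["A", "BDE", "C", "F", "GJ", "KN", "OU"]   # seven broad sectors (illustrative labels)

def sector_share_table(panel: pd.DataFrame) -> pd.DataFrame:
    """Pivot a region-sector-year GVA panel into one row per region-year
    with a share_<sector> column for each of the seven sectors."""
    wide = panel.pivot_table(index=["country", "region", "year"],
                             columns="sector", values="gva").reset_index()
    total = wide[SECTORS].sum(axis=1)
    for s in SECTORS:
        wide[f"share_{s}"] = wide[s] / total
    return wide

def yearly_sector_effects(df: pd.DataFrame, years) -> dict:
    """One random-intercept model per year; agriculture (A) is the omitted
    reference category because the shares sum to one."""
    rhs = " + ".join(f"share_{s}" for s in SECTORS if s != "A")
    results = {}
    for year in years:
        sub = df[df["year"] == year].dropna()
        model = smf.mixedlm(f"dlog_gva ~ {rhs} + eurozone", data=sub,
                            groups=sub["country"])
        results[year] = model.fit().params
    return results
```

Dropping one sector avoids perfect collinearity among the compositional shares, so each remaining coefficient is read relative to specialization in agriculture.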
Country random intercepts are included, as is a dummy variable indicating Eurozone membership. Instead of using an interaction between regional specialization (here, in a specific sector) and the shock dummy, we disaggregate the analysis by year. In other words, we evaluate whether sectoral specialization predicted a particularly severe downturn during the 2008-09 crisis as compared to performance in years before and after the shock.Footnote 28 This enables a fine-grained comparison across sectors in which regions specialize.Footnote 29 Figure 5 shows the estimated effect of specialization in each sector by year with 95% confidence intervals.Footnote 30 There are two key points conveyed in this figure. First, the year in which the global financial crisis struck—2008—reveals a distinctive pattern. That year brought a marked downturn for most sectors, as demonstrated by the significantly negative coefficients on specialization in 2008 and positive or insignificant coefficients in other years. The second key point is that the sectors in which a region specializes matter a great deal. Again, we emphasize that the crisis quickly extended beyond its origins in the financial sector across the larger economy. Regions specialized in manufacturing (sector C) and vehicles and transportation (G-J) were hardest hit, while those whose value-added came chiefly from finance, insurance, real estate, legal services, or accounting (K-N) weathered the storm better than most. Regions with extensive public sectors (O-U) emerged relatively unscathed. Eurozone members experienced slightly better outcomes in 2008, again suggesting that the shared currency and accompanying standards may have offered some protection.

Figure 5: Estimated Effect of Sector Specialization on Change in GVA by Year. Estimated sector effects from model fitted with country-random intercepts. Agriculture (A) is the omitted category. Years are 2006-2011. Point estimates and 95% confidence intervals shown. Across most sectors, 2008 is an outlier year. Sector codes: B,D,E mining, electricity, etc.; C manufacturing; F construction; G-J transportation, telecommunication, etc.; K-N finance, real estate, legal, etc.; O-U public administration, health care, education, etc.

The sectoral differences align well with the cross-country heterogeneity that we have observed. In the UK, for example, the manufacturing regions of northern England were devastated to an extent that outweighed the relatively robust performance of central London's financial and legal services. Here, specialization overall was associated with worse outcomes. In Austria, Belgium, and France, on the other hand, many of the most specialized regions focused heavily on the relatively unscathed public sector or on even more resilient financial, legal, or private administrative services: specialization per se was associated in these countries with better outcomes. Finally, we explore the Frankel-Rose argument that members of a currency union pool risk as a private form of consumption smoothing, which we think correctly explains our two sets of results on specialization and Eurozone membership.

Private risk-sharing in the Eurozone

The Frankel-Rose risk-sharing explanation can be helpfully introduced with a metaphor. In times of economic trouble, risk-averse households will seek to spread economic uncertainty in various ways in order to smooth consumption—e.g., through credit or insurance mechanisms (Morduch 1995).
Indeed, there is evidence of exactly this behavior by US households in response to decades of wage stagnation (Blyth 2013) and the financial crisis of 2008 (Mian and Sufi 2016). Similarly, countries (or regions) have an incentive to rely on credit to smooth consumption during an economic downturn. But, as noted previously, credit availability in the EU was entirely inadequate due to the lack of a fiscal union. The absence of independent counter-cyclic monetary policy within the Eurozone left these countries at the mercy of the European Central Bank, which followed orthodoxy during the 2008 crisis.Footnote 31 There was some effort to enable credit transfers between national central banks during the downturn through the TARGET2 mechanism (Schelkle 2017). But the benefits of this monetary credit were largely squandered by national politicians who were less than enthusiastic to enact internal fiscal adjustments (Tornell 2013). Thus, like households with no credit line or insurance to speak of, Eurozone countries (and regions) were left with few options. Countries that select into Eurozone membership do so, in large part, on the basis of trade ties. The more a country trades with the Eurozone, the greater the potential gains to adopting the shared currency. If membership is granted, trade ties increase further as the shared currency lowers the costs to cross-border transactions. Eurozone membership should therefore correlate with high trade integration. Crucially, this trade makes the business cycles of member states more symmetric—they rise and fall together—when inter-regional trade accounts for most trade. Similar to households' use of credit or insurance mechanisms to smooth consumption in times of crisis, regions within the Eurozone have an implicit risk-sharing mechanism via inter-regional trade. For example, if demand for German autos produced in Oberbayern (Upper Bavaria) diminishes in that region (or in Germany), demand from other regions within the Eurozone should buoy the Bavarian auto sector. Inter-regional trade may therefore serve an important risk-sharing function that pools the threat of sector-specific downturns in highly specialized regions (cf. Frankel and Rose1998). We explore this risk-sharing mechanism using data from Thissen et al. (2018) and EUREGIO. The authors estimate regional input-output tables at the NUTS-2 level between 2000 and 2010. Using their estimates for both intermediate goods and final demand, we find that regions within the Eurozone have much stronger trade links with other regions in the currency area, than they do with regions outside the currency area. Table 4 demonstrates that inter-regional trade within the Eurozone in the years before the global financial crisis (2000-2007) accounted for 51 per cent of all Eurozone regions' trade on average, while the remaining 49 per cent is with countries who do not share the same currency.Footnote 32 Conversely, EU regions outside of the Eurozone trade much more with other regions/countries outside of the Eurozone (58 per cent) than they do with Eurozone member states (42 per cent). As trade tends to reflect medium- to long-term structural trends, it is not surprising that these percentages remained stable between 2008 and 2009. Table 4 Private risk-sharing: inter-regional trade as % of total trade, average 2000-2007 These rough calculations suggest that Eurozone trade links offer regions an insurance policy through inter-regional trade. 
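The Table 4-style shares can be sketched as follows, assuming a region-to-region flow table in long format roughly in the spirit of the EUREGIO data. The column names, the 2007 membership list, and the restriction to outgoing cross-border flows are simplifying assumptions, so the snippet illustrates the calculation rather than reproducing it.

```python
import pandas as pd

# Illustrative Eurozone membership as of 2007 (pre-crisis), used only for this sketch
EUROZONE_2007 = {"AT", "BE", "DE", "ES", "FI", "FR", "GR", "IE", "IT",
                 "LU", "NL", "PT", "SI"}

def eurozone_trade_shares(flows: pd.DataFrame,
                          eurozone: set = EUROZONE_2007) -> pd.DataFrame:
    """Share of each region's cross-border (extra-country) trade whose partner
    is inside vs. outside the Eurozone; only outgoing flows are counted here."""
    ext = flows[flows["origin_country"] != flows["dest_country"]].copy()
    ext["partner_in_ez"] = ext["dest_country"].isin(eurozone)
    sums = ext.groupby(["origin_region", "partner_in_ez"])["value"].sum()
    shares = sums / sums.groupby(level="origin_region").transform("sum")
    table = shares.unstack(fill_value=0.0)
    return table.rename(columns={True: "share_eurozone", False: "share_non_eurozone"})

# Toy example: one Bavarian region trading with Italy, France, and the UK
toy = pd.DataFrame({
    "origin_region":  ["DE21"] * 3,
    "origin_country": ["DE"] * 3,
    "dest_region":    ["ITC4", "FR10", "UKI1"],
    "dest_country":   ["IT", "FR", "UK"],
    "value":          [60.0, 40.0, 50.0],
})
print(eurozone_trade_shares(toy))   # Eurozone share = 100/150, about 0.67
```

Averaging such shares over regions and over the 2000-2007 period gives numbers of the kind reported in Table 4.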
When demand declines in a home region, demand from another region within the currency area can smooth income losses. Given the scope of this short research note, we are not able to fully test the risk-sharing mechanism, leaving this empirical venture to future work. However, we do believe this provides suggestive evidence of private risk-sharing at work within the Eurozone, supporting the Frankel-Rose hypothesis of endogenous optimum currency areas. Discussion: Specialization and EU stability Our results suggest that both the euro project and the EU more generally are Janus-faced. While there can be no doubt that, at the country level, the euro often served as a "golden straitjacket" that forced internal devaluation on such member states as Greece, Ireland, and Portugal—and, as we see from the consistently negative coefficient on Eurozone membership in Table 1, on regions within the Eurozone generally—it worked also, as the Maastricht framers intended, to deepen interregional ties and to cushion specialized regions against adverse economic shocks. Countries that maintained their own currencies likely did so, at least in part, because they anticipated that their future shocks would deviate more from the central tendency of the EU. In the severe shock of 2008-09, they could indeed adjust more rapidly by devaluing externally. At the same time, their very flexibility may have created a degree of moral hazard, encouraging highly specialized regions to rely on implicit mechanisms of public risk-sharing that, in the event, proved inadequate. Specialized regions within the Eurozone, and those who invested in them, had to hedge against the possibility of asymmetric shocks; and, on the evidence presented here, did so with some success. We see that although both the EU and the Eurozone have fostered greater regional specialization, those more specialized regions have not suffered the most in the aftermath of the global financial crisis. In the EU as a whole, the most specialized regions actually survived the crisis much better than did more diversified regional economies. Only in countries outside the Eurozone—notably in the UK, Sweden, the Czech Republic, Hungary, Romania, and Poland—did the most specialized regions experience a significantly sharper downturn. Nor can this have owed to any failure of those member states to intervene fiscally to meet the crisis. Most responded more aggressively than did the EU on average, and in almost all cases more aggressively than did the comparatively slothful European Central Bank. To be sure, manufacturing suffered the worst downturn of all sectors; but over-specialization in manufacturing cannot explain the negative relationship between specialization and outcomes in the non-Eurozone states. Some Eurozone states' regions were equally specialized in manufacturing, yet their specialization was associated with nothing like the downturns that emerged in non-Eurozone states, perhaps most sharply and ominously in the United Kingdom.Footnote 33 While our findings are by no means conclusive, they tilt the balance toward support for the Frankel-Rose hypothesis. The sharing of a common currency seems endogenously to have brought the Eurozone closer to being an optimum currency area, one in which risk-sharing has allowed the most specialized regions to survive an adverse shock better, and with less variance, than ones with more diverse economies. 
This risk-sharing is almost certainly private—through inter-regional trade, where external demand replaces depressed local demand—because public mechanisms remain relatively weak. Within the Eurozone, inter-regional trade actually increased as a percentage of total trade during the 2008-09 economic shock, providing further evidence of this private risk-sharing mechanism. As global demand waned, demand from within the currency union insured against even greater losses. Revisiting our example from earlier on trade in wine and cars, although there were severe drops in wine sales in Europe during the 2008-09 economic shock,Footnote 34 the northern Italian manufacturing sector continued exporting vehicle parts to the southern Germany manufacturing sector, while southern Germany continued exporting finished vehicles to northern Italy.Footnote 35 This interdependence between regions and industries within the Eurozone provides an insurance policy against economic catastrophe.Footnote 36 Notably, our empirical findings appear consistent with certain predictions of new-new trade theory (e.g. Melitz 2003). If the very specialized regions had the most productive ("superstar") firms—which tend to export/import several goods and services to/from several countries and regions—this could explain why specialized regions were hit less hard during the crisis. This mechanism would lend further support to the idea that private risk-sharing helped to provide a form of insurance because as local (regional) demand dropped, external demand filled the void, most powerfully for specialized regions. Future research could tease out this mechanism. Our results bode surprisingly well, despite current anxieties, for the future of the European Union as it encounters economic shocks. The EU's novel approach to deep economic integration—a two tiered arrangement that demands governments uniformly eliminate barriers to trade while allowing governments to elect whether or not to retain autonomy over monetary policy—may very well prove to be a sustainable compromise. It was precisely within member states of the Eurozone that specialized regions experienced a more symmetric downturn; it was precisely in many of the non-Eurozone members of the EU that specialized regions experienced worse, and more asymmetric, shocks. The former had to rely on a slow and meager response from the European Central Bank; the latter governments enjoyed greater freedom to move quickly to buffer the shock. As Willett et al. (2010) aptly stated: "The euro has proved neither to be the disaster that the strongest critics predicted nor the rose garden envisioned by some of its strongest supporters" (p. 868). Far from tearing the EU apart, the 2008-09 crisis illuminated the extent to which the EU is incentive compatible: as member states have committed to more encompassing European obligations and their markets have become more integrated, they seem to have gained resiliency. We emphasize here that we focus on how the 2008-09 slump in aggregate demand affected production across the EU rather than narrowly on the financial aspects. We do not examine the subsequent sovereign debt crisis due to the many endogenous features that prevent a clean estimation strategy. See the online appendix, available on the Review of International Organizations' webpage. These countries include the Czech Republic, Hungary, Poland, Romania, Sweden, and the United Kingdom. For more on the theory of optimum currency areas, see Mundell (1961), Frieden et al. (1998), Alesina et al. 
(2002), and Jonung and Drea (2009). The Cohesion fund allocated 213 billion euros between 2000-2006; 347 billion euros between 2007-2013; and 450 billion euros between 2014-2020. The financial crisis, which resulted from US regulatory failures and decisions by individual banks and pension funds, quickly spilled over to Europe. By contrast, the sovereign debt crisis, which started in the Eurozone 18 months after the financial crisis hit, was the direct consequence of centralized policy-making by the ECB and was therefore far more endogenous. As confirmed by Tooze (2018, p. 6), "The historical narrative seemed to neatly arrange itself with a European crisis following an American crisis, each with its own distinct economic and political logic." The first difference using the natural logarithm, \(\log (\text {GVA})_{t+1} - \log (\text {GVA})_{t}\), approximates a year-over-year percentage change. Cyprus, Estonia, Hungary, Latvia, Lithuania, Malta, Slovakia and Slovenia were officially admitted in 2004. Bulgaria and Romania joined in 2007. Croatia joined in the final year of our sample, 2013. For reasons discussed below, we must restrict our analysis to countries with at least four NUTS-2 regions, thus omitting from our analysis Croatia, Cyprus, Estonia, Ireland, Lithuania, Luxembourg, Latvia, Malta, and Slovenia. These regions were located within Belgium, Greece, Finland, France, Netherlands and Slovakia. The mean Gini in 2000 was 0.366; in 2013 it was 0.386. Over the same period, the median displayed only a minor increase from 0.365 to 0.384, while the standard deviation remained steady at 0.066. The relative Gini index was developed as a "bootstrapping" technique by Dixon et al. (1987), modified by Damgaard and Weiner (2000), and critiqued by Palan (2010). The relative and absolute measures are inversely correlated (− 0.22). Aiginger and Rossi-Hansberg (2006) argue that Gini indices, in general, are known to be skewed by the shares in the middle of the distribution and recommend measuring the share of the largest three industries in a region. Accordingly, we calculated the sector share and find that it correlates well with the absolute measure (+ 0.96). The decision to classify 2008 as the shock year is further validated by time trends in trade, which reveal a significant drop in demand, as we discuss in the Appendix. The long-term trend in regional GVA is obtained using the Hodrick-Prescott (HP) filter. Several countries became EMU members in the middle of our sample: Slovenia (2007), Cyprus (2008), Slovakia (2009), and Estonia (2011). We account for their status during and after the crisis. While related studies have examined bilateral and intra-industry trade data, national aggregate trade flows are sufficiently granular for our purposes. In robustness checks, we control for duration of EU membership. Core government spending is the amount spent by the national government net of interest and transfer payments. Social transfers are social transfers in kind, which are part of the discretionary, acyclical spending netted out of core government spending. Unemployment benefits are measured as the initial net rate of wage replacement for an average-income earner. This variable is preferable to unemployment benefit spending, which is endogenous to the downturn. Additionally, countries with high specialization in the public sector require large core government expenditures that may limit the fiscal space available to respond to surprise downturns.
In robustness checks, we controlled for public sector specialization. The EIB lends to private firms for the purposes of European integration, private and financial sector development, infrastructure development, energy security, and environmental sustainability. During the crisis, most loans to EU member states were in energy (Belgium, Czech Republic, Finland, France, Italy, Luxembourg, Netherlands, Poland, Romania), manufacturing (Germany, Hungary, Italy, Portugal), construction (Austria, Latvia, Lithuania, Poland, UK), and the financial sector (Bulgaria, Czech Republic, Greece, Hungary, Ireland, Poland, Slovakia). Standard errors are clustered by NUTS-2 region to account for the panel structure of the data. The restricted sample consists of: Austria, Belgium, Bulgaria, Czech Republic, Germany, Denmark, Greece, Spain, Finland, France, Hungary, Italy, Netherlands, Poland, Portugal, Romania, Sweden, Slovakia, and the United Kingdom. Omitted are Cyprus, Estonia, Croatia, Ireland, Lithuania, Luxembourg, Latvia and Slovenia because they have three or fewer NUTS-2 regions. Results are robust if we vary the threshold for number of NUTS-2 regions. Using both 2008 and 2009 as the shock years yields similar effects. The standard deviation in random effects on the interaction term is 0.039; that of the pooled effect is 0.026. We do not use random slopes in this table to avoid over-fitting problems. Recall that the EU budget amounts to less than 1 per cent of EU GDP, while the average member state controls over 40 per cent of its GDP. Our modeling approach is more transparent than fitting a single multilevel model where the unit of analysis is the sector-year-region and regions are nested within countries. As before, we restrict the sample to countries with at least four NUTS-2 regions. For simplicity, we only present coefficients for years 2006 through 2011; years before and after display similar patterns. It was not until the 2012 Euro crisis that the ECB used unorthodox policies to supply credit. These calculations do not account for trade within a country across its own internal NUTS-2 regions, or intra-regional trade. Rather, we focus here on extra-country trade, which is the type of trade suggested in the Frankel-Rose hypothesis. Willett et al. (2010) report that intra-Eurozone trade as a percentage of GDP grew from 25 per cent in the mid-1990s to over 40 per cent by 2000, further illustrating the importance of this trade for the overall economy of the Eurozone. On the relationship between regional economic decline and support for Brexit, see Colantone and Stanig (2018a). At the same time, U.S. imports of European wine remained strong, thus weakening the blow to the French and Italian wine industries. Data obtained from https://oec.world/ on August 4, 2020. Our findings may also lend insight into the languid attempts to implement a European Unemployment Benefits Scheme (EUBS). Asymmetric shocks may not have generated as much tension across the EU as feared; the financial crisis of 2008-09 thus did not contribute to demand for the inter-state transfers an EUBS would provide.
Aiginger, K., & Rossi-Hansberg, E. (2006). Specialization and concentration: A note on theory and evidence. Empirica, 33(4), 255–266.
Alesina, A., Barro, R.J., & Tenreyro, S. (2002). Optimal currency areas. National Bureau of Economic Research Working Paper 9072.
Amiti, M. (1998). New trade theories and industrial location in the EU: A survey of evidence. Oxford Review of Economic Policy, 14, 45–53.
Arpaia, A., Kiss, A., Palvolgyi, B., & Turrini, A. (2016). Labour mobility and labour market adjustment in the EU. IZA Journal of Migration, 5(1), 21.
Autor, D.H., Dorn, D., & Hanson, G.H. (2016). The China shock: Learning from labor-market adjustment to large changes in trade. Annual Review of Economics, 8, 205–240.
Basso, G., D'Amuri, F., & Peri, G. (2018). Immigrants, labor market dynamics and adjustment to shocks in the Euro Area. NBER Working Paper 25091, National Bureau of Economic Research.
Bauer, M.W., & Becker, S. (2014). The unexpected winner of the crisis: The European Commission's strengthened role in economic governance. Journal of European Integration, 36(3), 213–229.
Bayoumi, T., & Masson, P.R. (1995). Fiscal flows in the United States and Canada: Lessons for monetary union in Europe. European Economic Review, 39(2), 253–274.
Bechtel, M.M., Hainmueller, J., & Margalit, Y. (2014). Preferences for international redistribution: The divide over the eurozone bailouts. American Journal of Political Science, 58(4), 835–856.
Bentivogli, C., & Pagano, P. (1999). Regional disparities and labour mobility: The euro-11 versus the USA. Labour, 13(3), 737–760.
Blyth, M. (2013). Austerity: The history of a dangerous idea. Oxford: Oxford University Press.
Brakman, S., Garretsen, H., & Schramm, M. (2006). Putting new economic geography to the test: Free-ness of trade and agglomeration in the EU regions. Regional Science and Urban Economics, 36, 613–635.
Ciccone, A. (2002). Agglomeration effects in Europe. European Economic Review, 46, 213–227.
Colantone, I., & Stanig, P. (2018a). Global competition and Brexit. American Political Science Review, 112(2), 201–218.
Colantone, I., & Stanig, P. (2018b). The trade origins of economic nationalism: Import competition and voting behavior in Western Europe. American Journal of Political Science, 62(4), 936–953.
Damgaard, C., & Weiner, J. (2000). Describing inequality in plant size or fecundity. Ecology, 81, 1139–1142.
Decressin, J., Fatas, A., et al. (1994). Regional labor market dynamics in Europe.
Dixon, P.M., Weiner, J., Mitchell-Olds, T., & Woodley, R. (1987). Bootstrapping the Gini coefficient of inequality. Ecology, 68, 1548–1551.
Eichengreen, B. (1990). One money for Europe? Lessons from the US currency union. Economic Policy, 5(10), 117–187.
Enrico, M. (2011). Local labor markets. In Handbook of labor economics (Vol. 4, pp. 1237–1313). Elsevier.
Estevez-Abe, M., Iversen, T., Soskice, D., et al. (2001). Social protection and the formation of skills: A reinterpretation of the welfare state. In Varieties of capitalism: The institutional foundations of comparative advantage (Vol. 145).
European Commission. (2009). Economic crisis in Europe: Causes, consequences and responses. http://ec.europa.eu/economy_finance/publications/pages/publication15887_en.pdf.
Frankel, J.A., & Rose, A.K. (1998). The endogeneity of the optimum currency area criteria. The Economic Journal, 108(449), 1009–1025.
Frieden, J.A. (2015). Currency politics: The political economy of exchange rate policy. Princeton: Princeton University Press.
Frieden, J., Gros, D., & Jones, E. (1998). The new political economy of EMU. Lanham, MD, and Oxford, England: Rowman and Littlefield Publishers.
Georgiadou, V., Rori, L., & Roumanias, C. (2018). Mapping the European far right in the 21st century: A meso-level analysis. Electoral Studies, 54, 103–115.
Hodson, D. (2011). Governing the euro area in good times and bad. Oxford: Oxford University Press.
Jauer, J., Liebig, T., Martin, J.P., & Puhani, P.A. (2019). Migration as an adjustment mechanism in the crisis? A comparison of Europe and the United States 2006–2016. Journal of Population Economics, 32(1), 1–22.
Jonung, L., & Drea, E. (2009). The Euro: It Can't Happen. It's a Bad Idea. It Won't Last: US Economists on the EMU, 1989–2002. Economic Papers 395, Directorate-General for Economic and Financial Affairs, European Commission.
Kalemli-Ozcan, S., Sørensen, B.E., & Yosha, O. (2001). Economic integration, industrial specialization, and the asymmetry of macroeconomic fluctuations. Journal of International Economics, 55(1), 107–137.
Kalemli-Ozcan, S., Sørensen, B.E., Yosha, O., Huizinga, H., & Jonung, L. (2005). Asymmetric shocks and risk-sharing in a monetary union: Updated evidence and policy implications for Europe. In The internationalization of asset ownership in Europe (pp. 173–204). Cambridge University Press.
Krause, A., Rinne, U., & Zimmermann, K.F. (2017). European labor market integration: What the experts think. International Journal of Manpower.
Krugman, P. (1991). Geography and trade (Gaston Eyskens lecture series). Leuven, Belgium: Leuven University Press; Cambridge, MA, and London: MIT Press.
Melitz, M.J. (2003). The impact of trade on intra-industry reallocations and aggregate industry productivity. Econometrica, 71(6), 1695–1725.
Melitz, M.J., & Ottaviano, G.I.P. (2008). Market size, trade, and productivity. The Review of Economic Studies, 75(1), 295–316.
Mian, A., & Sufi, A. (2016). Who bears the cost of recessions? The role of house prices and household debt. In Handbook of macroeconomics (Vol. 2, pp. 255–296). Elsevier.
Morduch, J. (1995). Income smoothing and consumption smoothing. Journal of Economic Perspectives, 9(3), 103–114.
Mundell, R.A. (1961). A theory of optimum currency areas. American Economic Review, 51, 657–665.
Naz, A., Ahmad, N., & Naveed, A. (2017). Wage convergence across European regions: Do international borders matter? Journal of Economic Integration, 35–64.
Obstfeld, M. (1996). Models of currency crises with self-fulfilling features. European Economic Review, 40(3–5), 1037–1047.
Palan, N. (2010). Measurement of specialization: The choice of indices. FIW Working Paper no. 62, Forschungsschwerpunkt Internationale Wirtschaft, Austrian Federal Ministry of Science, Research, and Economy. http://www.fiw.ac.at, accessed 5 September 2014.
Sala-i Martin, X., & Sachs, J. (1991). Fiscal federalism and optimum currency areas: Evidence for Europe from the United States. Technical report, National Bureau of Economic Research.
Schelkle, W. (2017). The political economy of monetary solidarity: Understanding the Euro experiment. Oxford: Oxford University Press.
Thissen, M., Lankhuizen, M., van Oort, F., Los, B., & Diodato, D. (2018). EUREGIO: The construction of a global IO database with regional detail for Europe for 2000–2010.
Tooze, A. (2018). Crashed: How a decade of financial crises changed the world. Penguin.
Tornell, A. (2013). The tragedy of the commons in the Eurozone and Target2.
Willett, T.D., Permpoon, O., & Wihlborg, C. (2010). Endogenous OCA analysis and the early euro experience. The World Economy, 33(7), 851–872.
We thank Friederike Kelle, Axel Dreher, three anonymous referees, and the participants at the IC3JM workshop (2014), the Center for European Studies conference (2016), and the International Political Economy Society conference (2016) for helpful comments. Flaherty acknowledges that this material is based in part upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 2018241622. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Rogowski is grateful for research funding from New York University, Abu Dhabi. Weldzius gratefully acknowledges support from Washington University in St. Louis and the Niehaus Center for Globalization and Governance at Princeton University. We alone remain responsible for any errors or omissions. Department of Political Science, University of California, Davis, Davis, CA, USA Lauren Peritz Department of Political Science, Villanova University, Villanova, PA, USA Ryan Weldzius Department of Political Science, University of California, Los Angeles, Los Angeles, CA, USA Ronald Rogowski Department of Political Science, University of California, San Diego, San Diego, CA, USA Thomas Flaherty Correspondence to Lauren Peritz. Below is the link to the electronic supplementary material. (ZIP 5.34 MB) Peritz, L., Weldzius, R., Rogowski, R. et al. Enduring the great recession: Economic integration in the European Union. Rev Int Organ (2021). https://doi.org/10.1007/s11558-020-09410-0 Asymmetric shock Optimum currency area
Quasi-Local Energy-Momentum and Angular Momentum in General Relativity
Part of a collection: Mathematical Relativity
László B. Szabados1
Living Reviews in Relativity volume 12, Article number: 4 (2009). The original version of this article was published on 01 December 2004.
The present status of the quasi-local mass, energy-momentum and angular-momentum constructions in general relativity is reviewed. First, the general ideas, concepts, and strategies, as well as the necessary tools to construct and analyze the quasi-local quantities, are recalled. Then, the various specific constructions and their properties (both successes and deficiencies) are discussed. Finally, some of the (actual and potential) applications of the quasi-local concepts and specific constructions are briefly mentioned.
Over the last 35 years, one of the greatest achievements in classical general relativity has certainly been the proof of the positivity of the total gravitational energy, both at spatial and null infinity. It is precisely its positivity that makes this notion not only important (because of its theoretical significance), but also a useful tool in the everyday practice of working relativists. This success inspired the more ambitious claim to associate energy (or rather energy-momentum and, ultimately, angular momentum as well) to extended, but finite, spacetime domains, i.e., at the quasi-local level. Obviously, the quasi-local quantities could provide a more detailed characterization of the states of the gravitational 'field' than the global ones, so they (together with more general quasi-local observables) would be interesting in their own right. Moreover, finding an appropriate notion of energy-momentum and angular momentum would be important from the point of view of applications as well. For example, they may play a central role in the proof of the full Penrose inequality (as they have already played in the proof of the Riemannian version of this inequality). The correct, ultimate formulation of black hole thermodynamics should probably be based on quasi-locally defined internal energy, entropy, angular momentum, etc. In numerical calculations, conserved quantities (or at least those for which balance equations can be derived) are used to control the errors. However, in such calculations all the domains are finite, i.e., quasi-local. Therefore, a solid theoretical foundation of the quasi-local conserved quantities is needed.
However, contrary to the high expectations of the 1980s, finding an appropriate quasi-local notion of energy-momentum has proven to be surprisingly difficult. Nowadays, the state of the art is typically postmodern: although there are several promising and useful suggestions, we not only have no ultimate, generally accepted expression for the energy-momentum and especially for the angular momentum, but there is not even a consensus in the relativity community on general questions (for example, what do we mean by energy-momentum? just a general expression containing arbitrary functions, or rather a definite one, free of any ambiguities, even of additive constants), or on the list of the criteria of reasonableness of such expressions. The various suggestions are based on different philosophies/approaches and give different results in the same situation. Apparently, the ideas and successes of one construction have very little influence on other constructions. The aim of the present paper is, therefore, twofold.
First, to collect and review the various specific suggestions, and, second, to stimulate the interaction between the different approaches by clarifying the general, potentially-common points, issues and questions. Thus, we wanted not only to write a 'who-did-what' review, but to concentrate on the understanding of the basic questions (such as why should the gravitational energy-momentum and angular momentum, or, more generally, any observable of the gravitational 'field', be necessarily quasi-local) and ideas behind the various specific constructions. Consequently, one third of the present review is devoted to these general questions. We review the specific constructions and their properties only in the second part, and in the third part we discuss very briefly some (potential) applications of the quasi-local quantities. Although this paper is at heart a review of known and published results, we believe that it contains several new elements, observations, suggestions etc. Surprisingly enough, most of the ideas and concepts that appear in connection with the gravitational energy-momentum and angular momentum can be introduced in (and hence can be understood from) the theory of matter fields in Minkowski spacetime. Thus, in Section 2.1, we review the Belinfante-Rosenfeld procedure that we will apply to gravity in Section 3, introduce the notion of quasi-local energy-momentum and angular momentum of the matter fields and discuss their properties. The philosophy of quasi-locality in general relativity will be demonstrated in Minkowski spacetime where the energy-momentum and angular momentum of the matter fields are treated quasi-locally. Then we turn to the difficulties of gravitational energy-momentum and angular momentum, and we clarify why the gravitational observables should necessarily be quasi-local. The tools needed to construct and analyze the quasi-local quantities are reviewed in the fourth section. This closes the first (general) part of the review (Sections 2–4). The second part is devoted to the discussion of the specific constructions (Sections 5–12). Since most of the suggestions are constructions, they cannot be given as a short mathematical definition. Moreover, there are important physical ideas behind them, without which the constructions may appear ad hoc. Thus, we always try to explain these physical pictures, the motivations and interpretations. Although the present paper is intended to be a nontechnical review, the explicit mathematical definitions of the various specific constructions will always be given, while the properties and applications are usually summarized only. Sometimes we give a review of technical aspects as well, without which it would be difficult to understand even some of the conceptual issues. The list of references connected with this second part is intended to be complete. We apologize to all those whose results were accidentally left out. The list of the (actual and potential) applications of the quasi-local quantities, discussed in Section 13, is far from being complete, and might be a bit subjective. Here we consider the calculation of gravitational energy transfer, applications to black hole physics and cosmology, and a quasi-local characterization of the pp-wave metrics. We close this paper with a discussion of the successes and deficiencies of the general and (potentially) viable constructions. 
In contrast to the positivistic style of Sections 5–12, Section 14 (as well as the choice of subject matter of Sections 2–4) reflects our own personal interest and view of the subject. The theory of quasi-local observables in general relativity is far from being complete. The most important open problem is still the trivial one: 'Find quasi-local energy-momentum and angular momentum expressions satisfying the points of the lists of Section 4.3'. Several specific open questions in connection with the specific definitions are raised both in the corresponding sections and in Section 14; these are simple enough to be worked out by graduate students. On the other hand, applying them to solve physical/geometrical problems (e.g., to some mentioned in Section 13) would be a real achievement. In the present paper we adopt the abstract index formalism. The signature of the spacetime metric gab is −2, and the curvature Ricci tensors and curvature scalar of the covariant derivative ∇a are defined by (\(({\nabla _c}{\nabla _d} - {\nabla _d}{\nabla _c}){X^a}: = - {R^a}_{bcd}{X^b},{R_{bd}}: = {R^a}_{bad}\) and \(R: = {R_{bd}}{g^{bd}}\), respectively. Hence, Einstein's equations take the form \({G_{ab}} + \lambda {g_{ab}}: = {R_{ab}} - {1 \over 2}R{g_{ab}} + \lambda {g_{ab}} = - 8\pi G{T_{ab}}\), where G is Newton's gravitational constant and λ is the cosmological constant (and the speed of light is c =1). However, apart from special cases stated explicitly, the cosmological constant will be assumed to be vanishing, and in Sections 3.1.1, 13.3 and 13.4 we use the traditional cgs system. Energy-Momentum and Angular Momentum of Matter Fields Energy-momentum and angular-momentum density of matter fields The symmetric energy-momentum tensor It is a widely accepted view that the canonical energy-momentum and spin tensors are well defined and have relevance only in flat spacetime, and, hence, are usually underestimated and abandoned. However, it is only the analog of these canonical quantities that can be associated with gravity itself. Thus, we first introduce these quantities for the matter fields in a general curved spacetime. To specify the state of the matter fields operationally, two kinds of devices are needed: the first measures the value of the fields, while the other measures the spatio-temporal location of the first. Correspondingly, the fields on the manifold M of events can be grouped into two sharply-distinguished classes. The first contains the matter field variables, e.g., finitely many (r, s)-type tensor fields \({\Phi _N}_{{b_1} \ldots {b_{\mathcal S}}}^{{a_1} \ldots {a_r}}\), whilst the second contains the fields specifying the spacetime geometry, i.e., the metric gab in Einstein's theory. Suppose that the dynamics of the matter fields is governed by Hamilton's principle specified by a Lagrangian \({L_{\rm{m}}} = {L_{\rm{m}}}({g^{ab}},{\Phi _N},{\nabla _e}{\Phi _N}, \ldots, {\nabla _{{e_1} \ldots}}{\nabla _{{e_k}}}{\Phi _N})\). If Im[gab, ΦN] is the action functional, i.e., the volume integral of Lm on some open domain D with compact closure, then the equations of motion are $$E_{\;\;\;a \ldots}^{Nb \ldots}: = {1 \over {\sqrt {\vert g \vert}}}{{\delta {I_{\rm{m}}}} \over {\delta {\Phi _{N_{b \ldots}^{a \ldots}}}}} = \sum\limits_{n = 0}^k {{{(-)}^n}{\nabla _{{e_n}}} \ldots {\nabla _{{e_1}}}\left({{{\partial {L_{\rm{m}}}} \over {\partial \left({{\nabla _{{e_1}}} \ldots {\nabla _{{e_n}}}{\Phi _{N_{b \ldots}^{a \ldots}}}} \right)}}} \right) =} 0,$$ the Euler-Lagrange equations. 
(Here, of course, \(\delta {I_{\rm{m}}}/\delta {\Phi _N}_{b \ldots}^{a \ldots}\) denotes the formal variational derivative of Im with respect to the field variable \({\Phi _N}_{b \ldots}^{a \ldots}\).) The symmetric (or dynamical) energy-momentum tensor is defined (and is given explicitly) by $${T_{ab}}: = {1 \over {\sqrt {\vert g\vert}}}{{\delta {I_{\rm{m}}}} \over {\delta {g^{ab}}}} = 2{{\partial {L_{\rm{m}}}} \over {\partial {g^{ab}}}} - {L_{\rm{m}}}{g_{ab}} + {1 \over 2}{\nabla ^e}({\sigma _{abe}} + {\sigma _{bae}} - {\sigma _{aeb}} - {\sigma _{bea}} - {\sigma _{eab}} - {\sigma _{eba}}),$$ where we introduced the canonical spin tensor $${\sigma ^{ea}}_b: = \sum\limits_{n = 1}^k {\sum\limits_{i = 1}^n {{{(-)}^i}\delta _{{e_i}}^e{\nabla _{{e_{i - 1}}}} \ldots {\nabla _{{e_1}}}\left({{{\partial {L_{\rm{m}}}} \over {\partial ({\nabla _{{e_1}}} \ldots {\nabla _{{e_n}}}{\Phi _N}_{d \ldots}^{c \ldots})}}} \right)}} \;\Delta _{b\,{e_{i + 1}} \ldots {e_n}\,d \ldots\,h \ldots}^{a\,c \ldots\,{f_{i + 1}} \ldots {f_n}\,g \ldots}\;{\nabla _{{f_{i + 1}}}} \ldots {\nabla _{{f_n}}}{\Phi _N}_{g \ldots}^{h \ldots}.$$ (The terminology will be justified in Section 2.2.) Here \(\Delta _{b{d_1} \ldots {d_q}{h_1} \ldots {h_p}}^{a{c_1} \ldots {c_p}{g_1} \ldots {g_q}}\) is the (p + q + 1, p + q + 1)-type invariant tensor, built from the Kronecker deltas, appearing naturally in the expression of the Lie derivative of the (p, q)-type tensor fields in terms of the torsion free covariant derivatives: \({{-\!\!\!\! L}}_{\rm{K}}\Phi _{d \ldots}^{c \ldots} = {\nabla _{\rm{K}}}\Phi _{d \ldots}^{c \ldots} - {\nabla _a}{K^b}\Delta _{bd \ldots h \ldots}^{ac \ldots g \ldots}\Phi _{g \ldots}^{h \ldots}\). (For the general idea behind the derivation of Tab and Eq. (2.2), see, e.g., Section 3 of [240].) The canonical Noether current Suppose that the Lagrangian is weakly diffeomorphism invariant in the sense that, for any vector field Ka and the corresponding local one-parameter family of diffeomorphisms ϕt, one has $$(\phi _t^{\ast} {L_{\rm{m}}})({g^{ab}},{\Phi _N},{\nabla _e}{\Phi _N}, \ldots) - {L_{\rm{m}}}(\phi _t^{\ast} {g^{ab}},\phi _t^{\ast} {\Phi _N},\phi _t^{\ast} {\nabla _e}{\Phi _N}, \ldots) = {\nabla _e}B_t^e,$$ for some one-parameter family of vector fields \(B_t^e = B_t^e({g^{ab}},{\Phi _N}, \ldots)\). (Lm is called diffeomorphism invariant if \({\nabla _e}B_t^e = 0\), e.g., when Lm is a scalar.) Let Ka be any smooth vector field on M. Then, calculating the divergence ∇a(LmKa) to determine the rate of change of the action functional Im along the integral curves of Ka, by a tedious but straightforward computation, one can derive the Noether identity: \({E^N}_{a \ldots}^{b \ldots}{{-\!\!\!\! L}_{\rm{K}}}{\Phi _N}_{b \ldots}^{a \ldots} + {1 \over 2}{T_{ab}}{{-\!\!\!\! L}_{\rm{K}}}{g^{ab}} + {\nabla _e}{C^e}[{\bf{K}}] = 0\), 
where ŁK denotes the Lie derivative along Ka, and Ce[K], the Noether current, is given explicitly by $${C^e}[{\bf{K}}] = {\dot B^e} + {\theta ^{ea}}{K_a} + \left({{\sigma ^{e[ab]}} + {\sigma ^{a[be]}} + {\sigma ^{b[ae]}}} \right){\nabla _a}{K_b}.$$ Here Ḃe is the derivative of \(B_t^e\) with respect to t at t = 0, which may depend on Ka and its derivatives, and \({\theta ^a}_b\), the canonical energy-momentum tensor, is defined by $${\theta ^a}_b: = - {L_{\rm{m}}}\delta _b^a - \sum\limits_{n = 1}^k {\sum\limits_{i = 1}^n {{{(-)}^i}\delta _{{e_i}}^a{\nabla _{{e_{i - 1}}}} \ldots {\nabla _{{e_1}}}\left({{{\partial {L_{\rm{m}}}} \over {\partial ({\nabla _{{e_1}}} \ldots {\nabla _{{e_n}}}{\Phi _{N_{d \ldots}^{c \ldots}}})}}} \right)}} {\nabla _b}{\nabla _{{e_{i + 1}}}} \ldots {\nabla _{{e_n}}}{\Phi _N}_{d \ldots}^{c \ldots}.$$ Note that, apart from the term Ḃe, the current Ce[K] does not depend on higher than the first derivative of Ka, and the canonical energy-momentum and spin tensors could be introduced as the coefficients of Ka and its first derivative, respectively, in Ce[K]. (For the original introduction of these concepts, see [73, 74, 438]. If the torsion \({\Theta ^c}_{ab}\) is not vanishing, then in the Noether identity there is a further term, \({1 \over 2}{S^{ab}}_c{{-\!\!\!\! L}_{\rm{K}}}{\Theta ^c}_{ab}\), where the dynamic spin tensor \({S^{ab}}_c\) is defined by \(\sqrt {\vert g\vert} {S^{ab}}_c: = 2\delta {I_{\rm{m}}}/\delta {\Theta ^c}_{ab}\), and the Noether current has a slightly different structure [259, 260].) Obviously, Ce[K] is not uniquely determined by the Noether identity, because that contains only its divergence, and any identically-conserved current may be added to it. In fact, \(B_t^e\) may be chosen to be an arbitrary nonzero (but divergence free) vector field, even for diffeomorphism-invariant Lagrangians. Thus, to be more precise, if Ḃe = 0, then we call the specific combination (2.3) the canonical Noether current. Other choices for the Noether current may contain higher derivatives of Ka, as well (see, e.g., [304]), but there is a specific one containing Ka algebraically (see points 3 and 4 below). However, Ca[K] is sensitive to total divergences added to the Lagrangian, and, if the matter fields have gauge freedom (e.g., if the matter is a Maxwell or Yang-Mills field), then in general it is not gauge invariant, even if the Lagrangian is. On the other hand, Tab is gauge invariant and is independent of total divergences added to Lm because it is the variational derivative of the gauge invariant action with respect to the metric. Provided the field equations are satisfied, the Noether identity implies [73, 74, 438, 259, 260] that \({\nabla _a}{T^{ab}} = 0\), \({T^{ab}} = {\theta ^{ab}} + {\nabla _c}({\sigma ^{c[ab]}} + {\sigma ^{a[bc]}} + {\sigma ^{b[ac]}})\), and \({C^a}[{\bf{K}}] = {T^{ab}}{K_b} + {\nabla _c}(({\sigma ^{a[cb]}} - {\sigma ^{c[ab]}} - {\sigma ^{b[ac]}}){K_b})\), where the second term on the right is an identically-conserved (i.e., divergence-free) current, and Ca[K] is conserved if Ka is a Killing vector. Hence, TabKb is also conserved and can equally be considered as a Noether current. (For a formally different, but essentially equivalent, introduction of the Noether current and identity, see [536, 287, 191].) The interpretation of the conserved currents, Ca[K] and TabKb, depends on the nature of the Killing vector, Ka. 
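Before specializing to the Killing vectors of Minkowski spacetime, it may help to record the simplest illustrative case (a standard textbook example, added here only for orientation and not part of the original text). For a single real scalar field with the first-order Lagrangian \({L_{\rm{m}}} = {1 \over 2}{g^{ab}}{\nabla _a}\phi {\nabla _b}\phi - V(\phi)\), Eqs. (2.4) and (2.2) give a vanishing canonical spin tensor and $${\theta ^a}_b = {\nabla ^a}\phi \,{\nabla _b}\phi - \delta _b^a{L_{\rm{m}}},$$ while the symmetric energy-momentum tensor coincides with the canonical one, \({T^a}_b = {\theta ^a}_b\); for Ḃe = 0 the canonical Noether current then reduces to Ce[K] = TebKb, so the distinction between Ca[K] and TabKb becomes relevant only for matter with a nonvanishing spin tensor. 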
In Minkowski spacetime the ten-dimensional Lie algebra K of the Killing vectors is well known to split into the semidirect sum of a four-dimensional commutative ideal, T, and the quotient K/T, where the latter is isomorphic to so(1, 3). The ideal T is spanned by the constant Killing vectors, in which a constant orthonormal frame field \(\{E_{\underline a}^a\} {\rm{on}}\,M{\rm{,}}\,\underline a = 0, \ldots, 3\), forms a basis. (Thus, the underlined Roman indices \(\underline a, \underline b\), … are concrete, name indices.) By \({g_{ab}}E_{\underline a}^aE_{\underline b}^b: = {\eta _{\underline a \underline b}}: = {\rm{diag(1, - 1, - 1, - 1)}}\) the ideal T inherits a natural Lorentzian vector space structure. Having chosen an origin o ∈ M, the quotient K/T can be identified as the Lie algebra Ro of the boost-rotation Killing vectors that vanish at o. Thus, K has a '4 + 6' decomposition into translations and boost rotations, where the translations are canonically defined but the boost-rotations depend on the choice of the origin o ∈ M. In the coordinate system \(\{{x^{\underline a}}\}\) adapted to \(\{E_{\underline a}^a\}\) (i.e., for which the one-form basis dual to \(\{E_{\underline a}^a\}\) has the form \(\vartheta _a^{\underline a} = {\nabla _a}{x^{\underline a}})\), the general form of the Killing vectors (or rather one-forms) is \({K_a} = {T_{\underline a}}\vartheta _a^{\underline a} + {M_{\underline a \underline b}}({x^{\underline a}}\vartheta _a^{\underline b} - {x^{\underline b}}\vartheta _a^{\underline a})\) for some constants \({T_{\underline a}}\) and \({M_{\underline a \underline b}} = - {M_{\underline b \underline a}}\). Then, the corresponding canonical Noether current is \({C^e}[{\bf{K}}] = E_{\underline e}^e({\theta ^{\underline e \underline a}}{T_{\underline a}} - ({\theta ^{\underline e \underline a}}{x^{\underline b}} - {\theta ^{\underline e \underline b}}{x^{\underline a}} - 2{\sigma ^{\underline e [\underline a \underline {b]}}}){M_{\underline a \underline b}})\), and the coefficients of the translation and the boost-rotation parameters \({T_{\underline a}}\) and \({M_{\underline a \underline b}}\) are interpreted as the density of the energy-momentum and of the sum of the orbital and spin angular momenta, respectively. Since, however, the difference Ca[K] − TabKb is identically conserved and TabKb has more advantageous properties, it is TabKb, that is used to represent the energy-momentum and angular-momentum density of the matter fields. Since in de Sitter and anti-de Sitter spacetimes the (ten-dimensional) Lie algebra of the Killing vector fields, so(1, 4) and so(2, 3), respectively, are semisimple, there is no such natural notion of translations, and hence no natural '4 + 6' decomposition of the ten conserved currents into energy-momentum and (relativistic) angular momentum density. Quasi-local energy-momentum and angular momentum of the matter fields In Section 3 we will see that well-defined (i.e., gauge-invariant) energy-momentum and angular-momentum density cannot be associated with the gravitational 'field', and if we do not want to talk only about global gravitational energy-momentum and angular momentum, then these quantities must be assigned to extended, but finite, spacetime domains. In the light of modern quantum-field-theory investigations, it has become clear that all physical observables should be associated with extended but finite spacetime domains [232, 231]. 
Thus, observables are always associated with open subsets of spacetime, whose closure is compact, i.e., they are quasi-local. Quantities associated with spacetime points or with the whole spacetime are not observable in this sense. In particular, global quantities, such as the total energy or electric charge, should be considered as the limit of quasi-locally-defined quantities. Thus, the idea of quasi-locality is not new in physics. Although in classical nongravitational physics this is not obligatory, we adopt this view in talking about energy-momentum and angular momentum even of classical matter fields in Minkowski spacetime. Originally, the introduction of these quasi-local quantities was motivated by the analogous gravitational quasi-local quantities [488, 492]. Since, however, many of the basic concepts and ideas behind the various gravitational quasi-local energy-momentum and angular momentum definitions can be understood from the analogous nongravitational quantities in Minkowski spacetime, we devote Section 2.2 to the discussion of them and their properties. The definition of quasi-local quantities To define the quasi-local conserved quantities in Minkowski spacetime, first observe that, for any Killing vector Ka ∈ K, the 3-form ωabc:= KeTef εfabc is closed, and hence, by the triviality of the third de Rham cohomology group, H3(ℝ4) = 0, it is exact: for some 2-form ∪[K]ab we have \({K_e}{T^{ef}}{\varepsilon _{fabc}} = 3{\nabla _{[a}} \cup {[{\bf{K}}]_{bc]}}\). \({\vee ^{cd}}: = - {1 \over 2} \cup {[{\bf{K}}]_{ab}}{\varepsilon ^{abcd}}\) may be called a 'superpotential' for the conserved current 3-form ωabc. (However, note that while the superpotential for the gravitational energy-momentum expressions of Section 3 is a local function of the general field variables, the existence of this 'superpotential' is a consequence of the field equations and the Killing nature of the vector field Ka. The existence of globally-defined superpotentials that are local functions of the field variables can be proven even without using the Poincaré lemma [535].) If \(\tilde \cup {[{\bf{K}}]_{ab}}\) is (the dual of) another superpotential for the same current ωabc, then by \({\nabla _{[a}}(\cup {[{\bf{K}}]_{bc]}} - \tilde \cup {[{\bf{K}}]_{bc]}}) = 0\) and H2(ℝ4) = 0 the dual superpotential is unique up to the addition of an exact 2-form. If, therefore, \({\mathcal S}\) is any closed orientable spacelike two-surface in the Minkowski spacetime then the integral of ∪[K]ab on \({\mathcal S}\) is free from this ambiguity. Thus, if Σ is any smooth compact spacelike hypersurface with smooth two-boundary \({\mathcal S}\), then $${Q_{\mathcal S}}[{\bf{K}}]: = {\textstyle{1 \over 2}}\oint\nolimits_{\mathcal S} {\cup {{[{\bf{K}}]}_{ab}}} = \int\nolimits_\Sigma {{K_e}{T^{ef}}{\textstyle{1 \over {3!}}}{\varepsilon _{f\;abc}}}$$ depends only on \({\mathcal S}\). Hence, it is independent of the actual Cauchy surface Σ of the domain of dependence D(Σ) because all the spacelike Cauchy surfaces for D(Σ) have the same common boundary \({\mathcal S}\). Thus, \({Q_{\mathcal S}}[{\bf{K}}]\) can equivalently be interpreted as being associated with the whole domain of dependence D(Σ), and, hence, it is quasi-local in the sense of [232, 231] above. 
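As an elementary illustration (added here only as a consistency check, not part of the original text), let Ka be the unit timelike translation ta of a flat foliation and let Σ be a t = const. ball bounded by \({\mathcal S}\). Pulling the current 3-form back to Σ, the defining integral above becomes $${Q_{\mathcal S}}[{\bf{t}}] = \int\nolimits_\Sigma {{T_{ab}}{t^a}{t^b}\,{\rm{d}}\Sigma},$$ the familiar total energy of the matter fields inside \({\mathcal S}\); rotational and boost Killing vectors similarly reproduce the usual angular momentum and center-of-mass integrals, so \({Q_{\mathcal S}}[{\bf{K}}]\) is a genuine quasi-local extension of the standard global quantities. 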
It defines the linear maps \({P_{\mathcal S}}:{\rm{T}} \rightarrow {\rm{{\mathbb R}}}\), and \({J_{\mathcal S}}:{{\rm{R}}_o} \rightarrow {\rm{\mathbb R}}\,{\rm{by}}\,{{\rm{Q}}_{\mathcal S}}[{\bf{K}}] =: {T_{\underline a}}P_{\mathcal S}^{\underline a} + {M_{\underline a \underline b}}J_{\mathcal S}^{\underline a \underline b}\) i.e., they are elements of the corresponding dual spaces. Under Lorentz rotations of the Cartesian coordinates \(P_{\mathcal S}^{\underline a}\) and \(J_{\mathcal S}^{\underline a \underline b}\) transform as a Lorentz vector and anti-symmetric tensor, respectively. Under the translation \({x^{\underline a}} \mapsto {a^{\underline a}} + {\eta ^{\underline a}}\) of the origin \(P_{\mathcal S}^{\underline a}\) is unchanged, but \(J_{\mathcal S}^{\underline a \underline b}\) transforms as \(J_{\mathcal S}^{\underline a \underline b} \mapsto J_{\mathcal S}^{\underline a \underline b} + 2{\eta ^{[\underline a}}P_{\mathcal S}^{\underline b ]}\). Thus, \(P_{\mathcal S}^{\underline a}\) and \(J_{\mathcal S}^{\underline a \underline b}\) may be interpreted as the quasi-local energy-momentum and angular momentum of the matter fields associated with the spacelike two-surface \({\mathcal S}\), or, equivalently, to D(Σ). Then the quasi-local mass and Pauli-Lubanski spin are defined, respectively, by the usual formulae \(m_{\mathcal S}^2: = {\eta _{\underline a \underline b}}P_{\mathcal S}^{\underline a}P_{\mathcal S}^{\underline b}\) and \(S_{\mathcal S}^{\underline a}: = {1 \over 2}{\varepsilon ^{\underline a}}_{\underline b \underline c \underline d}P_{\mathcal S}^{\underline b}J_{\mathcal S}^{\underline c \underline d}\). (If m2 ≠ 0, then the dimensionally-correct definition of the Pauli-Lubanski spin is \({1 \over m}S_{\mathcal S}^{\underline a}\).) As a consequence of the definitions, \({\eta _{\underline a \underline b}}P_{\mathcal S}^{\underline a}S_{\mathcal S}^b = 0\) holds, i.e., if \(P_{\mathcal S}^{\underline a}\) is timelike then \(S_{\mathcal S}^{\underline a}\) is spacelike or zero, but if \(P_{\mathcal S}^{\underline a}\) is null (i.e., \(m_{\mathcal S}^2 = 0\)) then \(S_{\mathcal S}^{\underline a}\) is spacelike or proportional to \(P_{\mathcal S}^{\underline a}\). Obviously we can form the flux integral of the current Tabξb on the hypersurface even if ξa is not a Killing vector, even in general curved spacetime: $${E_\Sigma}[{\xi ^a}]: = \int\nolimits_\Sigma {{\xi _e}{T^{e\;f}}{\textstyle{1 \over {3!}}}{\varepsilon _{f\;abc}}}.$$ then, however, the integral EΣ[ξa] does depend on the hypersurface, because it is not connected with the spacetime symmetries. In particular, the vector field ξa can be chosen to be the unit timelike normal ta of Σ. Since the component μ:= Tabtatb of the energy-momentum tensor is interpreted as the energy-density of the matter fields seen by the local observer ta, it would be legitimate to interpret the corresponding integral EΣ[ta] as 'the quasi-local energy of the matter fields seen by the fleet of observers being at rest with respect to Σ'. Thus, EΣ[ta] defines a different concept of the quasi-local energy: While that based on \({Q_{\mathcal S}}[{\bf{K}}]\) is linked to some absolute element, namely to the translational Killing symmetries of the spacetime, and the constant timelike vector fields can be interpreted as the observers 'measuring' this energy, EΣ[ta] is completely independent of any absolute element of the spacetime and is based exclusively on the arbitrarily chosen fleet of observers. 
Thus, while \(P_{\mathcal S}^{\underline a}\) is independent of the actual normal ta of \({\mathcal S}\), EΣ[ξa] (for non-Killing ξa) depends on ta intrinsically and is a genuine three-hypersurface rather than a two-surface integral. If \(P_b^{\underline a}: = \delta _b^a - {t^a}{t_b}\), the orthogonal projection to Σ, then the part \({j^a}: = P_b^a{T^{bc}}{t_c}\) of the energy-momentum tensor is interpreted as the momentum density seen by the observer ta. Hence, $$({t_a}{T^{ab}})({t_c}{T^{cd}}){g_{bd}} = {\mu ^2} + {h_{ab}}{j^a}{j^b} = {\mu ^2} - \vert {j^a}{\vert ^2}$$ is the square of the mass density of the matter fields, where hab is the spatial metric in the plane orthogonal to ta. If Tab satisfies the dominant energy condition (i.e., TabVb is a future directed nonspacelike vector for any future directed nonspacelike vector Va, see, e.g., [240]), then this is non-negative, and hence, $${M_\Sigma}: = \int\nolimits_\Sigma {\sqrt {{\mu ^2} - \vert {j^e}{\vert ^2}} {\textstyle{1 \over {3!}}}{t^f}{\varepsilon _{f\;abc}}}$$ can also be interpreted as the quasi-local mass of the matter fields seen by the fleet of observers being at rest with respect to Σ, even in general curved spacetime. However, although in Minkowski spacetime EΣ[K] for the four translational Killing vectors gives the four components of the energy-momentum \(P_{\mathcal S}^{\underline a}\), the mass MΣ is different from \({m_{\mathcal S}}\). In fact, while \({m_{\mathcal S}}\) is defined as the Lorentzian norm of \(P_{\mathcal S}^{\underline a}\) with respect to the metric on the space of the translations, in the definition of MΣ the norm of the current Tabtb is first taken with respect to the pointwise physical metric of the space-time, and then its integral is taken. Nevertheless, because of more advantageous properties (see Section 2.2.3), we prefer to represent the quasi-local energy(-momentum and angular momentum) of the matter fields in the form \({Q_{\mathcal S}}[{\bf{K}}]\) instead of EΣ[ξa]. Thus, even if there is a gauge-invariant and unambiguously-defined energy-momentum density of the matter fields, it is not a priori clear how the various quasi-local quantities should be introduced. We will see in the second part of this review that there are specific suggestions for the gravitational quasi-local energy that are analogous to \(P_{\mathcal S}^0\), others to EΣ[ta], and some to MΣ. Hamiltonian introduction of the quasi-local quantities In the standard Hamiltonian formulation of the dynamics of the classical matter fields on a given (not necessarily flat) spacetime (see, e.g., [283, 558] and references therein) the configuration and momentum variables, ϕA and πA, respectively, are fields on a connected three-manifold Σ, which is interpreted as the typical leaf of a foliation Σt of the spacetime. The foliation can be characterized on Σ by a function N, called the lapse. The evolution of the states in the spacetime is described with respect to a vector field Ka = Nta + Na ('evolution vector field' or 'general time axis'), where ta is the future-directed unit normal to the leaves of the foliation and Na is some vector field, called the shift, being tangent to the leaves. 
If the matter fields have gauge freedom, then the dynamics of the system is constrained: Physical states can be only those that are on the constraint surface, specified by the vanishing of certain functions Ci = Ci(ϕA, DeϕA,…, πA, DeπA,…), i = 1,…, n, of the canonical variables and their derivatives up to some finite order, where De is the covariant derivative operator in Σ. Then the time evolution of the states in the phase space is governed by the Hamiltonian, which has the form $$H\;[{\bf{K}}] = \int\nolimits_\Sigma {(\mu N + {j_a}{N^a} + {C_i}{N^i} + {D_a}{Z^a})} \;d\Sigma.$$ Here dΣ is the induced volume element, the coefficients μ and ja are local functions of the canonical variables and their derivatives up to some finite order, the Nis are functions on Σ, and Za is a local function of the canonical variables and is a linear function of the lapse, the shift, the functions Ni, and their derivatives up to some finite order. The part CiNi of the Hamiltonian generates gauge motions in the phase space, and the functions Ni are interpreted as the freely specifiable 'gauge generators'. However, if we want to recover the field equations for ϕA (which are partial differential equations on the spacetime with smooth coefficients for the smooth field ϕA) on the phase space as the Hamilton equations and not some of their distributional generalizations, then the functional differentiability of H[K] must be required in the strong sense of [534].Footnote 1 Nevertheless, the functional differentiability (and, in the asymptotically flat case, also the existence) of H[K] requires some boundary conditions on the field variables, and may yield restrictions on the form of Za. It may happen that, for a given Za, only too restrictive boundary conditions would be able to ensure the functional differentiability of the Hamiltonian, and, hence, the 'quasi-local phase space' defined with these boundary conditions would contain only very few (or no) solutions of the field equations. In this case, Za should be modified. In fact, the boundary conditions are connected to the nature of the physical situations considered. For example, in electrodynamics different boundary conditions must be imposed if the boundary is to represent a conducting or an insulating surface. Unfortunately, no universal principle or 'canonical' way of finding the 'correct' boundary term and the boundary conditions is known. In the asymptotically flat case, the value of the Hamiltonian on the constraint surface defines the total energy-momentum and angular momentum, depending on the nature of Ka, in which the total divergence DaZa corresponds to the ambiguity of the superpotential 2-form ⋃[K]ab: An identically-conserved quantity can always be added to the Hamiltonian (provided its functional differentiability is preserved). The energy density and the momentum density of the matter fields can be recovered as the functional derivative of H[K] with respect to the lapse N and the shift Na, respectively. In principle, the whole analysis can be repeated quasi-locally too. However, apart from the promising achievements of [13, 14, 442] for the Klein-Gordon, Maxwell, and the Yang-Mills-Higgs fields, as far as we know, such a systematic quasi-local Hamiltonian analysis of the matter fields is still lacking. Properties of the quasi-local quantities Suppose that the matter fields satisfy the dominant energy condition. 
Then EΣ[ξa] is also non-negative for any nonspacelike ξa, and, obviously, EΣ[ta] is zero precisely when Tab = 0 on Σ, and hence, by the conservation laws (see, e.g., page 94 of [240]), on the whole domain of dependence D(Σ). Obviously, MΣ = 0 if and only if \({L^a}: = {T^{ab}}{t_b}\) is null on Σ. Then, by the dominant energy condition it is a future-pointing vector field on Σ, and LaTab = 0 holds. Therefore, Tab on Σ has a null eigenvector with zero eigenvalue, i.e., its algebraic type on Σ is pure radiation. The properties of the quasi-local quantities based on \({Q_{\mathcal S}}[{\bf{K}}]\) in Minkowski spacetime are, however, more interesting. Namely, assuming that the dominant energy condition is satisfied, one can prove [488, 492] that (1) \(P_{\mathcal S}^{\underline a}\) is a future directed nonspacelike vector, \(m_{\mathcal S}^2 \geq 0\); (2) \(P_{\mathcal S}^{\underline a} = 0\) if and only if Tab = 0 on D(Σ); (3) \(m_{\mathcal S}^2 = 0\) if and only if the algebraic type of the matter on D(Σ) is pure radiation, i.e., TabLb = 0 holds for some constant null vector La. Then Tab = τLaLb for some non-negative function τ. In this case \(P_{\mathcal S}^{\underline a} = e{L^{\underline a}}\), where \({L^{\underline a}}: = {L^a}\vartheta _a^{\underline a}\); and (4) for \(m_{\mathcal S}^2 = 0\) the angular momentum has the form \(J_{\mathcal S}^{\underline a \underline b} = {e^{\underline a}}{L^{\underline b}} - {e^{\underline b}}{L^{\underline a}}\), where \({e^{\underline a}}: = \int\nolimits_\Sigma {{x^{\underline a}}} \tau {L^a}{1 \over {3!}}{\varepsilon _{abcd}}\). Thus, in particular, the Pauli-Lubanski spin is zero. Therefore, the vanishing of the quasi-local energy-momentum characterizes the 'vacuum state' of the classical matter fields completely, and the vanishing of the quasi-local mass is equivalent to special configurations representing pure radiation. Since EΣ[ta] and MΣ are integrals of functions on a hypersurface, they are obviously additive, e.g., for any two hypersurfaces Σ1 and Σ2 (having common points at most on their boundaries \({{\mathcal S}_1}\) and \({{\mathcal S}_2}\)) one has \({E_{{\Sigma _1} \cup {\Sigma _2}}}[{t^a}] = {E_{{\Sigma _1}}}[{t^a}] + {E_{{\Sigma _2}}}[{t^a}]\). On the other hand, the additivity of \(P_{\mathcal S}^{\underline a}\) is a slightly more delicate problem. Namely, \(P_{{{\mathcal S}_1}}^{\underline a}\) and \(P_{{{\mathcal S}_2}}^{\underline a}\) are elements of the dual space of the translations, and hence, we can add them and, as in the previous case, we obtain additivity. However, this additivity comes from the absolute parallelism of the Minkowski spacetime: The quasi-local energy-momenta of the different two-surfaces belong to one and the same vector space. If there were no natural connection between the Killing vectors on different two-surfaces, then the energy-momenta would belong to different vector spaces, and they could not be added. We will see that the quasi-local quantities discussed in Sections 7, 8, and 9 belong to vector spaces dual to their own 'quasi-Killing vectors', and there is no natural way of adding the energy-momenta of different surfaces. Global energy-momenta and angular momenta If Σ extends either to spatial or future null infinity, then, as is well known, the existence of the limit of the quasi-local energy-momentum can be ensured by slightly faster than \({\mathcal O}({r^{- 3}})\) (for example by \({\mathcal O}({r^{- 4}})\)) falloff of the energy-momentum tensor, where r is any spatial radial distance. 
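The role of this rate can be seen from a crude power count (an illustrative estimate, not taken from the original argument): on a large flat slice the contribution of the shell between radii r and 2r to \(\int\nolimits_\Sigma {{K_e}{T^{ef}}{\textstyle{1 \over {3!}}}{\varepsilon _{f\;abc}}}\) with a translational Ka is of order \(\vert T\vert \,{r^3}\), so for \({T_{ab}} = {\mathcal O}({r^{- 3 - \epsilon}})\) with \(\epsilon > 0\) the tail contributes only \({\mathcal O}({r^{- \epsilon}})\) and the limit exists, while a borderline \({\mathcal O}({r^{- 3}})\) decay can already produce a logarithmically divergent integral. 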
However, the finiteness of the angular momentum and center-of-mass is not ensured by the \({\mathcal O}({r^{- 4}})\) falloff. Since the typical falloff of Tab — for the electromagnetic field, for example — is \({\mathcal O}({r^{- 4}})\), we may not impose faster than this, because otherwise we would exclude the electromagnetic field from our investigations. Thus, in addition to the \({\mathcal O}({r^{- 4}})\) falloff, six global integral conditions for the leading terms of Tab must be imposed. At spatial infinity these integral conditions can be ensured by explicit parity conditions, and one can show that the 'conservation equations' Tab;b = 0 (as evolution equations for the energy density and momentum density) preserve these falloff and parity conditions [497]. Although quasi-locally the vanishing of the mass does not imply the vanishing of the matter fields themselves (the matter fields must be pure radiative field configurations with plane wave fronts), the vanishing of the total mass alone does imply the vanishing of the fields. In fact, by the vanishing of the mass, the fields must be plane waves, furthermore, by \({T_{ab}} = {\mathcal O}({r^{- 4}})\), they must be asymptotically vanishing at the same time. However, a plane-wave configuration can be asymptotically vanishing only if it is vanishing. Quasi-local radiative modes and a classical version of the holography for matter fields By the results of Section 2.2.4, the vanishing of the quasi-local mass, associated with a closed spacelike two-surface \({\mathcal S}\), implies that the matter must be pure radiation on a four-dimensional globally hyperbolic domain D(Σ). Thus, \({m_{\mathcal S}} = 0\) characterizes 'simple', 'elementary' states of the matter fields. In the present section we review how these states on D(Σ) can be characterized completely by data on the two-surface \({\mathcal S}\), and how these states can be used to formulate a classical version of the holographic principle. For the (real or complex) linear massless scalar field ϕ and the Yang-Mills fields, represented by the symmetric spinor fields \(\phi _{AB}^\alpha, \alpha = 1, \ldots, N\), where N is the dimension of the gauge group, the vanishing of the quasi-local mass is equivalent [498] to plane waves and the pp-wave solutions of Coleman [152], respectively. Then, the condition TabLb = 0 implies that these fields are completely determined on the whole D(Σ) by their value on \({\mathcal S}\) (in which case the spinor fields \(\phi _{AB}^\alpha\) are necessarily null: \(\phi _{AB}^\alpha = {\phi ^\alpha}{O_A}{O_B}\), whereϕα are complex functions and OA is a constant spinor field such that La = OAOA′). Similarly, the null linear zero-rest-mass fields ϕAB…E = ϕOAOB … OE on D(Σ) with any spin and constant spinor OA are completely determined by their value on \({\mathcal S}\). Technically, these results are based on the unique complex analytic structure of the u = const. two-surfaces foliating Σ, where La = ∇au, and, by the field equations, the complex functions ϕ and ϕα turn out to be antiholomorphic [492]. Assuming, for the sake of simplicity, that \({\mathcal S}\) is future and past convex in the sense of Section 4.1.3 below, the independent boundary data for such a pure radiative solution consist of a constant spinor field on \({\mathcal S}\) and a real function with one, and another with two, variables. Therefore, the pure radiative modes on D(Σ) can be characterized completely by appropriate data (the holographic data) on the 'screen' \({\mathcal S}\). 
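A minimal example of such a radiative configuration (included only as an illustration) is the massless scalar plane wave \(\phi (x) = f({L_a}{x^a})\) with La constant and null: then \({\nabla _a}\phi = f{\prime}\,{L_a}\) and $${T_{ab}} = {\nabla _a}\phi {\nabla _b}\phi - {1 \over 2}{g_{ab}}{\nabla ^c}\phi {\nabla _c}\phi = {(f{\prime})^2}{L_a}{L_b},$$ which is precisely of the pure radiation form TabLb = 0 discussed above; and since ϕ depends only on the single retarded coordinate u = Laxa, its values on \({\mathcal S}\) already determine it on D(Σ). 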
These 'quasi-local radiative modes' can be used to map any continuous spinor field on D(Σ) to a collection of holographic data. Indeed, the special radiative solutions of the form ϕOA (with fixed constant-spinor field OA), together with their complex conjugate, define a dense subspace in the space of all continuous spinor fields on Σ. Thus, every such spinor field can be expanded by the special radiative solutions, and hence, can also be represented by the corresponding family of holographic data. Therefore, if we fix a foliation of D(Σ) by spacelike Cauchy surfaces Σt, then every spinor field on D(Σ) can also be represented on \({\mathcal S}\) by a time-dependent family of holographic data, as well [498]. This fact may be a specific manifestation in classical nongravitational physics of the holographic principle (see Section 13.4.2). On the Energy-Momentum and Angular Momentum of Gravitating Systems On the gravitational energy-momentum and angular momentum density: The difficulties The root of the difficulties: Gravitational energy in Newton's theory In Newton's theory the gravitational field is represented by a single scalar field ϕ on the flat 3-space Σ ≈ ℝ3 satisfying the Poisson equation −habDaDbϕ = 4πGρ. (Here hab is the flat (negative definite) metric, Da is the corresponding Levi-Civita covariant derivative operator and ρ is the (non-negative) mass density of the matter source.) Hence, the mass of the source contained in some finite three-volume D ⊂ Σ can be expressed as the flux integral of the gravitational field strength on the boundary \({\mathcal S}: = \partial D\): $${m_D} = {1 \over {4\pi G}}\oint\nolimits_{\mathcal S} {{\upsilon ^a}({D_a}\phi)\;d{\mathcal S}},$$ where va is the outward-directed unit normal to \({\mathcal S}\). If \({\mathcal S}\) is deformed in Σ through a source-free region, then the mass does not change. Thus, the rest mass of the source is analogous to charge in electrostatics. Following the analogy with electrostatics, we can introduce the energy density and the spatial stress of the gravitational field, respectively, by $$U: = {1 \over {8\pi G}}{h^{cd}}({D_c}\phi)\,({D_d}\phi), \qquad {\Sigma _{ab}}: = {1 \over {4\pi G}}\left({({D_a}\phi)\,({D_b}\phi) - {1 \over 2}{h_{ab}}{h^{cd}}({D_c}\phi)\,({D_d}\phi)} \right).$$ Note that since gravitation is always attractive, U is a binding energy, and hence it is negative definite. However, by the Galileo-Eötvös experiment, i.e., the principle of equivalence, there is an ambiguity in the gravitational force: It is determined only up to an additive constant covector field ae, and hence by an appropriate transformation Deϕ ↦ Deϕ + ae the gravitational force Deϕ at a given point p ∈ Σ can be made zero. Thus, at this point both the gravitational energy density and the spatial stress have been made vanishing. On the other hand, they can be made vanishing on an open subset U ⊂ Σ only if the tidal force, DaDbϕ, is vanishing on U. Therefore, the gravitational energy and the spatial stress cannot be localized to a point, i.e., they suffer from the ambiguity in the gravitational force above. In a relativistically corrected Newtonian theory both the internal energy density u of the (matter) source and the energy density U of the gravitational field itself contribute to the source of gravity. 
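To get a feel for the size of this (negative) gravitational contribution, the total field energy \(\int_\Sigma U \,{\rm d}\Sigma\) can be evaluated for a homogeneous ball of mass M and radius R; the following short symbolic computation (an illustrative sketch in Python/SymPy, with freely chosen variable names, not part of the original text) reproduces the familiar binding energy \(- 3G{M^2}/(5R)\), the very combination that reappears in the mass parameter m quoted below.

import sympy as sp

G, M, R, r = sp.symbols('G M R r', positive=True)

# Newtonian field strength |D phi| of a homogeneous ball: G*M*r/R**3 inside, G*M/r**2 outside
g_in = G*M*r/R**3
g_out = G*M/r**2

# |U| = |D phi|^2 / (8 pi G), integrated over flat 3-space with the volume element 4 pi r^2 dr
E_in = sp.integrate(g_in**2 / (8*sp.pi*G) * 4*sp.pi*r**2, (r, 0, R))
E_out = sp.integrate(g_out**2 / (8*sp.pi*G) * 4*sp.pi*r**2, (r, R, sp.oo))

print(sp.simplify(E_in + E_out))   # 3*G*M**2/(5*R); since U < 0, the total field energy is -3*G*M**2/(5*R)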
Thus (in the traditional units, when c is the speed of light) the corrected field equation could be expected to be the genuinely non-linear equation $$- {h^{ab}}{D_a}{D_b}\phi = 4\pi G\left({\rho + {1 \over {{c^2}}}\left({u + U} \right)} \right).$$ (Note that, together with additional corrections, this equation with the correct sign of U can be recovered from Einstein's equations applied to static configurations [199] in the first post-Newtonian approximation. Note, however, that the theory defined by (3.3) and the usual formula for the force density, is internally inconsistent [221]. A thorough analysis of this theory, and in particular its inconsistency, is given by Giulini [221].) Therefore, by (3.3) $${E_D}: = \int\nolimits_D {({c^2}\rho + u + U){\rm{d}}\Sigma} = {{{c^2}} \over {4\pi G}}\oint\nolimits_{\mathcal S} {{\upsilon ^a}({D_a}\phi)\;\,d{\mathcal S}},$$ i.e., now it is the energy of the source plus gravity system in the domain D that can be rewritten into the form of a two-surface integral on the boundary of the domain D. Note that the gravitational energy reduces the source term in (3.3) (and hence the energy ED also), and, more importantly, the quasi-local energy ED of the source + gravity system is free of the ambiguity that is present in the gravitational energy density. This in itself already justifies the introduction and use of the quasi-local concept of energy in the study of gravitating systems. By the negative definiteness of U, outside the source the quasi-local energy ED is a decreasing set function, i.e., if D1 ⊂ D2 and D2 − D1 is source free, then \({E_{{D_2}}} \leq {E_{{D_1}}}\). In particular, for a 2-sphere of radius r surrounding a localized spherically symmetric homogeneous source with negligible internal energy, the quasi-local energy is \({E_{{D_r}}} = {{{c^4}} \over G}{\rm{m}}(1 + {1 \over 2}{{\rm{m}} \over r}) + O({r^{- 2}})\), where the mass parameter is \({\rm{m: =}}{{GM} \over {{c^2}}}(1 - {3 \over 5}{{GM} \over {{c^2}R}}) + O({c^{- 6}})\) and M is the rest mass and R is the radius of the source. For a more detailed discussion of the energy in the (relativistically corrected) Newtonian theory, see [199]. The root of the difficulties: Gravitational energy-momentum in Einstein's theory The action Im for the matter fields is a functional of both kinds of fields, thus one can take the variational derivatives both with respect to \({\Phi _N}_{b \ldots}^{a \ldots}\) and \({g^{ab}}\). The former give the field equations, while the latter define the symmetric energy-momentum tensor. Moreover, gab provides a metrical geometric background, in particular a covariant derivative, for carrying out the analysis of the matter fields. The gravitational action Ig is, on the other hand, a functional of the metric alone, and its variational derivative with respect to gab yields the gravitational field equations. The lack of any further geometric background for describing the dynamics of gab can be traced back to the principle of equivalence [36] (i.e., the Galileo-Eötvös experiment), and introduces a huge gauge freedom in the dynamics of gab because that should be formulated on a bare manifold: The physical spacetime is not simply a manifold M endowed with a Lorentzian metric gab, but the isomorphism class of such pairs, where (M, gab) and (M, ϕ*gab) are considered to be equivalent for any diffeomorphism ϕ of M onto itself.Footnote 2 Thus, we do not have, even in principle, any gravitational analog of the symmetric energy-momentum tensor of the matter fields. 
In fact, by its very definition, Tab is the source density for gravity, like the current \(J_A^a: = \delta {I_p}/\delta A_a^A\) in Yang-Mills theories (defined by the variational derivative of the action functional of the particles, e.g., of the fermions, interacting with a Yang-Mills field \(A_a^A\)), rather than energy-momentum. The latter is represented by the Noether currents associated with special spacetime displacements. Thus, in spite of the intimate relation between Tab and the Noether currents, the proper interpretation of Tab is only the source density for gravity, and hence it is not the symmetric energy-momentum tensor whose gravitational counterpart must be searched for. In particular, the Bel-Robinson tensor \({T_{abcd}}: = {\psi _{ABCD}}{{\bar \psi}_{{A{\prime}}{B{\prime}}{C{\prime}}{D{\prime}}}}\), given in terms of the Weyl spinor (and its generalizations introduced by Senovilla [449, 448]), being a quadratic expression of the curvature (and its derivatives), is (are) expected to represent only 'higher-order' gravitational energy-momentum. (Note that according to the original tensorial definition the Bel-Robinson tensor is one-fourth of the expression above. Our convention follows that of Penrose and Rindler [425].) In fact, the physical dimension of the Bel-Robinson 'energy-density' \({T_{abcd}}{t^a}{t^b}{t^c}{t^d}\) is cm−4, and hence (in the traditional units) there are no powers A and B such that \({c^A}{G^B}\,{T_{abcd}}{t^a}{t^b}{t^c}{t^d}\) would have energy-density dimension. As we will see, the Bel-Robinson 'energy-momentum density' \({T_{abcd}}{t^b}{t^c}{t^d}\) appears naturally in connection with the quasi-local energy-momentum and spin angular momentum expressions for small spheres only in higher-order terms. Therefore, if we want to associate energy-momentum and angular momentum with the gravity itself in a Lagrangian framework, then it is the gravitational counterpart of the canonical energy-momentum and spin tensors and the canonical Noether current built from them that should be introduced. Hence it seems natural to apply the Lagrange-Belinfante-Rosenfeld procedure, sketched in the previous Section 2.1, to gravity too [73, 74, 438, 259, 260, 486]. Pseudotensors The lack of any background geometric structure in the gravitational action yields, first, that any vector field Ka generates a symmetry of the matter-plus-gravity system. Its second consequence is the need for an auxiliary derivative operator, e.g., the Levi-Civita covariant derivative coming from an auxiliary, nondynamic background metric (see, e.g., [307, 430]), or a background (usually torsion free, but not necessarily flat) connection (see, e.g., [287]), or the partial derivative coming from a local coordinate system (see, e.g., [525]). Though the natural expectation would be that the final results be independent of these background structures, as is well known, the results do depend on them. In particular [486], for Hilbert's second-order Lagrangian LH:= R/16πG in a fixed local coordinate system {xα} and derivative operator ∂μ instead of ∇e, Eq. (2.4) gives precisely Møller's energy-momentum pseudotensor \({}_{\rm{M}}{\theta ^\alpha}_\beta\), which was defined originally through the superpotential equation \(\sqrt {\vert g \vert} (8\pi G\,{}_{\rm{M}}{\theta ^\alpha}_\beta - {G^\alpha}_\beta): = {\partial _\mu}\,{}_{\rm{M}}{\cup _\beta}^{\alpha \mu}\), where \(_{\rm{M}}{\cup _\beta}^{\alpha \mu}: = \sqrt {\vert g\vert} {g^{\alpha \rho}}{g^{\mu \omega}}({\partial _{[\omega}}{g_{\rho ]\beta}})\) is the Møller superpotential [367]. 
(For another simple and natural introduction of Møller's energy-momentum pseudotensor, see [131].) For the spin pseudotensor, Eq. (2.2) gives $$8\pi G{\sqrt {\vert g\vert} _{\rm{M}}}{\sigma ^{\mu \alpha}}_\beta = {- _{\rm{M}}}{\cup _\beta}^{\alpha \mu} + \;{\partial _\nu}\left({\sqrt {\vert g\vert} \delta _\beta ^{\left[ \mu \right.}{g^{\left. \nu \right]\alpha}}} \right),$$ which is, in fact, only pseudotensorial. Similarly, the contravariant form of these pseudotensors and the corresponding canonical Noether current are also pseudotensorial. We saw in Section 2.1.2 that a specific combination of the canonical energy-momentum and spin tensors gave the symmetric energy-momentum tensor, which is gauge invariant even if the matter fields have gauge freedom, and one might hope that the analogous combination of the energy-momentum and spin pseudotensors gives a reasonable tensorial energy-momentum density for the gravitational field. The analogous expression is, in fact, tensorial, but unfortunately it is just the negative of the Einstein tensor [486, 487].Footnote 3 Therefore, to use the pseudotensors, a 'natural' choice for a 'preferred' coordinate system would be needed. This could be interpreted as a gauge choice, or a choice for the reference configuration. A further difficulty is that the different pseudotensors may have different (potential) significance. For example, for any fixed k ∈ R Goldberg's 2kth symmetric pseudotensor \(t_{(2k)}^{\alpha \beta}\) is defined by \(2\vert g{\vert ^{k + 1}}(8\pi Gt_{(2k)}^{\alpha \beta} - {G^{\alpha \beta}}): = {\partial _\mu}{\partial _\nu}[\vert g{\vert ^{k + 1}}({g^{\alpha \beta}}{g^{\mu \nu}} - {g^{\alpha \nu}}{g^{\beta \mu}})]\) (which, for k = 0, reduces to the Landau-Lifshitz pseudotensor, the only symmetric pseudotensor which is a quadratic expression of the first derivatives of the metric) [222]. However, by Einstein's equations, this definition implies that \({\partial _\alpha}[\vert g{\vert ^{k + 1}}(t_{(2k)}^{\alpha \beta} + {T^{\alpha \beta}})] = 0\). Hence what is (coordinate-)divergence-free (i.e., 'pseudo-conserved') cannot be interpreted as the sum of the gravitational and matter energy-momentum densities. Indeed, the latter is |g|1/2 Tαβ, while the second term in the divergence equation has an extra weight |g|k+1/2. Thus, there is only one pseudotensor in this series, which satisfies the 'conservation law' with the correct weight. In particular, the Landau-Lifshitz pseudotensor also has this defect. On the other hand, the pseudotensors coming from some action (the 'canonical pseudotensors') appear to be free of this kind of difficulty (see also [486, 487]). Excellent classical reviews on these (and several other) pseudotensors are [525, 77, 15, 223], and for some recent ones (using background geometric structures) see, e.g., [186, 187, 102, 211, 212, 304, 430]. A particularly useful and comprehensive recent review with many applications and an extended bibliography is that of Petrov [428]. We return to the discussion of pseudotensors in Sections 3.3.1, 4.2.2 and 11.3.5. Strategies to avoid pseudotensors I: Background metrics/connections One way of avoiding the use of pseudotensorial quantities is to introduce an explicit background connection [287] or background metric [437, 305, 310, 307, 306, 429, 184]. (The superpotential of Katz, Bičák, and Lynden-Bell [306] has been rediscovered recently by Chen and Nester [137] in a completely different way. We return to a discussion of the approach of Chen and Nester in Section 11.3.2.) 
The advantage of this approach would be that we could use the background not only to derive the canonical energy-momentum and spin tensors, but to define the vector fields Ka as the symmetry generators of the background. Then, the resulting Noether currents are, without doubt, tensorial. However, they depend explicitly on the choice of the background connection or metric not only through Ka: The canonical energy-momentum and spin tensors themselves are explicitly background-dependent. Thus, again, the resulting expressions would have to be supplemented by a 'natural' choice for the background, and the main question is how to find such a 'natural' reference configuration from the infinitely many possibilities. A particularly interesting special bimetric approach was suggested in [407] (see also [408]), in which the background (flat) metric is also fixed by using Synge's world function. Strategies to avoid pseudotensors II: The tetrad formalism In the tetrad formulation of general relativity, the gab-orthonormal frame fields \(\{E_{\underline a}^a\}, \underline a = 0, \ldots, 3\), are chosen to be the gravitational field variables [533, 314]. Re-expressing the Hilbert Lagrangian (i.e., the curvature scalar) in terms of the tetrad field and its partial derivatives in some local coordinate system, one can calculate the canonical energy-momentum and spin by Eqs. (2.4) and (2.2), respectively. Not surprisingly at all, we recover the pseudotensorial quantities that we obtained in the metric formulation above. However, as realized by Møller [368], the use of the tetrad fields as the field variables instead of the metric makes it possible to introduce a first-order, scalar Lagrangian for Einstein's field equations: If \(\gamma _{\underline e \underline b}^{\underline a}: = E_{\underline e}^e\gamma _{e\underline b}^{\underline a}: = E_{\underline e}^e\vartheta _a^{\underline a}{\nabla _e}E_{\underline b}^a\), the Ricci rotation coefficients, then Møller's tetrad Lagrangian is $$L: = {1 \over {16\pi G}}\left[ {R - 2{\nabla _a}(E_{\underline a}^a{\eta ^{\underline a \underline b}}\gamma _{\underline c \underline b}^{\underline c})} \right] = {1 \over {16\pi G}}\left({E_{\underline a}^aE_{\underline b}^b - E_{\underline a}^bE_{\underline b}^a} \right)\gamma _{a\underline c}^{\underline a}\gamma _b^{\underline c \underline b}.$$ (Here \(\left\{{\vartheta _a^{\underline a}} \right\}\) is the one-form basis dual to \(\left\{{E_{\underline a}^a} \right\}\).) Although L depends on the actual tetrad field \(\left\{{E_{\underline a}^a} \right\}\), it is weakly O(1, 3)-invariant. Møller's Lagrangian has a nice uniqueness property [412]: Any first-order scalar Lagrangian built from the tetrad fields, whose Euler-Lagrange equations are the Einstein equations, is Møller's Lagrangian. (Using Dirac spinor variables Nester and Tung found a first-order spinor Lagrangian [392], which turned out to be equivalent to Møller's Lagrangian [530]. Another first-order spinor Lagrangian, based on the use of the two-component spinors and the anti-self-dual connection, was suggested by Tung and Jacobson [529]. Both Lagrangians yield a well-defined Hamiltonian, reproducing the standard ADM energy-momentum in asymptotically flat spacetimes.) The canonical energy-momentum \({\theta ^\alpha}_\beta\) derived from Eq. 
(3.5) using the components of the tetrad fields in some coordinate system as the field variables is still pseudotensorial, but, as Møller realized, it has a tensorial superpotential: $${\vee _b}^{ae}: = 2\left({- \gamma _{\underline b \underline c}^{\underline a}{\eta ^{\underline c \underline e}} + \gamma _{\underline d \underline c}^{\underline d}{\eta ^{\underline c \underline s}}\left({\delta _{\underline b}^{\underline a}\delta _{\underline s}^{\underline e} - \delta _{\underline s}^{\underline a}\delta _{\underline b}^{\underline e}} \right)} \right)\;\vartheta _b^{\underline b}E_{\underline a}^aE_{\underline e}^e = {\vee _b}^{[ae]}.$$ The canonical spin turns out to be essentially \({\vee _b}^{ae}\), i.e., a tensor. The tensorial nature of the superpotential makes it possible to introduce a canonical energy-momentum tensor for the gravitational 'field'. Then, the corresponding canonical Noether current Ca[K] will also be tensorial and satisfies $$8\pi G{C^a}[{\bf{K}}] = {G^{ab}}{K_b} + {\textstyle{1 \over 2}}{\nabla _c}({K^b}{\vee _b}^{ac}).$$ Therefore, the canonical Noether current derived from Møller's tetrad Lagrangian is independent of the background structure (i.e., the coordinate system) that we used to do the calculations (see also [486]). However, Ca[K] depends on the actual tetrad field, and hence, a preferred class of frame fields, i.e., an O(1, 3)-gauge reduction, is needed. Thus, the explicit background dependence of the final result of other approaches has been transformed into an internal O(1, 3)-gauge dependence. It is important to realize that this difficulty always appears in connection with the gravitational energy-momentum and angular momentum, at least in disguise. In particular, the Hamiltonian approach in itself does not yield a well defined energy-momentum density for the gravitational 'field' (see, e.g., [379, 353]). Thus in the tetrad approach the canonical Noether current should be supplemented by a gauge condition for the tetrad field. Such a gauge condition could be some spacetime version of Nester's gauge conditions (in the form of certain partial differential equations) for the orthonormal frames of Riemannian manifolds [378, 381]. (For the existence and the potential obstruction to the existence of the solutions to this gauge condition on spacelike hypersurfaces, see [384, 196].) Furthermore, since Ca[K] + TabKb is conserved for any vector field Ka, in the absence of the familiar Killing symmetries of the Minkowski spacetime it is not trivial to define the 'translations' and 'rotations', and hence the energy-momentum and angular momentum. To make them well defined, additional ideas would be needed. For recent reviews of the tetrad formalism of general relativity, including an extended bibliography, see, e.g., [486, 487, 403, 286]. In general, the frame field \(\{E_{\underline a}^a\}\) is defined only on an open subset U ⊂ M. If the domain of the frame field can be extended to the whole M, then M is called parallelizable. For time and space-orientable spacetimes this is equivalent to the existence of a spinor structure [206], which is known to be equivalent to the vanishing of the second Stiefel-Whitney class of M [364], a global topological condition on M. 
The discussion of how Møller's superpotential \({\vee _e}^{ab}\) is related to the Nester-Witten 2-form, by means of which an alternative form of the ADM energy-momentum is given and by means of which several quasi-local energy-momentum expressions are defined, is given in Section 3.2.1 and in the first paragraphs of Section 8.

Strategies to avoid pseudotensors III: Higher derivative currents

Giving up the paradigm that the Noether current should depend only on the vector field Ka and its first derivative — i.e., if we allow a term Ḃa to be present in the Noether current (2.3), even if the Lagrangian is diffeomorphism invariant — one naturally arrives at Komar's tensorial superpotential \({}_{\rm K}{\vee}[{\bf{K}}]^{ab}: = {\nabla ^{[a}}{K^{b]}}\) and the corresponding Noether current \({C^a}[{\bf{K}}]: = {G^a}_b{K^b} + {\nabla _b}{\nabla ^{[a}}{K^{b]}}\) [322] (see also [77]). Although its independence of any background structure (viz. its tensorial nature) and its uniqueness property (see Komar [322] quoting Sachs) are especially attractive, the vector field Ka is still to be determined. A new suggestion for the approximate spacetime symmetries that can, in principle, be used in Komar's expression, both near a point and a world line, is given in [235]. This is a generalization of the affine collineations (including the homotheties and the Killing symmetries). We continue the discussion of the Komar expression in Sections 3.2.2, 3.2.3, 4.3.1 and 12.1, and of the approximate spacetime symmetries in Section 11.1.

On the global energy-momentum and angular momentum of gravitating systems: The successes

As is well known, in spite of the difficulties with the notion of the gravitational energy-momentum density discussed above, reasonable total energy-momentum and angular momentum can be associated with the whole spacetime, provided it is asymptotically flat. In the present section we recall the various forms of them. As we will see, most of the quasi-local constructions are simply 'quasi-localizations' of the total quantities. Obviously, the technique used in the 'quasi-localization' does depend on the actual form of the total quantities, yielding mathematically-inequivalent definitions for the quasi-local quantities. We return to the discussion of the tools needed in the quasi-localization procedures in Sections 4.2 and 4.3. Classical, excellent reviews of global energy-momentum and angular momentum are [208, 223, 28, 393, 553, 426], and a recent review of conformal infinity (with special emphasis on its applicability in numerical relativity) is [195]. Reviews of the positive energy proofs from the early 1980s are [273, 427].

Spatial infinity: Energy-momentum

There are several mathematically-inequivalent definitions of asymptotic flatness at spatial infinity [208, 475, 37, 65, 200]. The traditional definition is based on the existence of a certain asymptotically flat spacelike hypersurface. Here we adopt this definition, which is probably the weakest one in the sense that the spacetimes that are asymptotically flat in the sense of any reasonable definition are asymptotically flat in the traditional sense as well.
A spacelike hypersurface Σ will be called k-asymptotically flat if for some compact set K ⊂ Σ the complement Σ − K is diffeomorphic to ℝ3 minus a solid ball, and there exists a (negative definite) metric 0hab on Σ, which is flat on Σ − K, such that the components of the difference of the physical and the background metrics, hij − 0hij, and of the extrinsic curvature χij in the 0hij-Cartesian coordinate system {xk} fall off as r−k and r−k−1, respectively, for some k > 0 and r2:= δijxixj [433, 64]. These conditions make it possible to introduce the notion of asymptotic spacetime Killing vectors, and to speak about asymptotic translations and asymptotic boost rotations. Σ − K together with the metric and extrinsic curvature is called the asymptotic end of Σ. In a more general definition of asymptotic flatness Σ is allowed to have finitely many such ends. As is well known, finite and well-defined ADM energy-momentum [23, 25, 24, 26] can be associated with any k-asymptotically flat spacelike hypersurface, if \(k > {1 \over 2}\), by taking the value on the constraint surface of the Hamiltonian H[Ka], given, for example, in [433, 64], with the asymptotic translations Ka (see [144, 52, 399, 145]). In its standard form, this is the r → ∞ limit of a two-surface integral of the first derivatives of the induced three-metric hab and of the extrinsic curvature χab for spheres \({\mathcal S_r}\) of large coordinate radius r. Explicitly: $$E = {1 \over {16\pi G}}\;\underset {r \rightarrow \infty}{\lim} \oint\nolimits_{{{\mathcal S}_r}} {{\upsilon ^a}\left({{}_0{D_c}{h_{da}} - {}_0{D_a}{h_{cd}}} \right){}_0{h^{cd}}\,{\rm{d}}{{\mathcal S}_r}},$$ $${P^{\bf{i}}} = - {1 \over {8\pi G}} \underset {r \rightarrow \infty}{\lim} \oint\nolimits_{{{\mathcal S}_r}} {{\upsilon ^a}({\chi _a}^b - \chi \delta _a^b)\,{}_0{D_b}{x^{\bf{i}}}\,{\rm{d}}{{\mathcal S}_r}},$$ where 0De is the Levi-Civita derivative operator determined by 0hab, and va is the outward pointing unit normal to \({{\mathcal S}_r}\) and tangent to Σ. The ADM energy-momentum, \({P^{\underline a}} = (E,{P^{\bf{i}}})\), is an element of the space dual to the space of the asymptotic translations, and transforms as a Lorentzian four-vector with respect to asymptotic Lorentz transformations of the asymptotic Cartesian coordinates. The traditional ADM approach to the introduction of the conserved quantities and the Hamiltonian analysis of general relativity is based on the 3+1 decomposition of the fields and the spacetime. Thus, it is not a priori clear that the energy and spatial momentum form a Lorentz vector (and the spatial angular momentum and center-of-mass, discussed below, form an antisymmetric tensor). One has to check a posteriori that the conserved quantities obtained in the 3 + 1 form are, in fact, Lorentz-covariant. To obtain manifestly Lorentz-covariant quantities one should not do the 3 + 1 decomposition. Such a manifestly Lorentz-covariant Hamiltonian analysis was suggested first by Nester [377], and he was able to recover the ADM energy-momentum in a natural way (see Section 11.3). Another form of the ADM energy-momentum is based on Møller's tetrad superpotential [223]: Taking the flux integral of the current Ca[K] + TabKb on the spacelike hypersurface Σ, by Eq. (3.7) the flux can be rewritten as the r → ∞ limit of the two-surface integral of Møller's superpotential on spheres of large r with the asymptotic translations Ka.
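A quick way to see that Eq. (3.8) produces the expected value is to evaluate it for the Schwarzschild solution in isotropic coordinates, for which the spatial metric is conformally flat, \(h_{ij} = (1 + GM/2r)^4\,\delta_{ij}\). Here we use the equivalent Cartesian form of the ADM energy, \(E = {1 \over {16\pi G}}\lim_{r \rightarrow \infty}\oint_{{\mathcal S}_r}(\partial_jh_{ij} - \partial_ih_{jj})\,{\rm d}S^i\), which is what (3.8) reduces to for a flat background in Cartesian coordinates (up to the overall sign conventions adopted above). To leading order \(h_{ij} - \delta_{ij} \simeq (2GM/r)\,\delta_{ij}\), so $$\partial_jh_{ij} - \partial_ih_{jj} \simeq -2GM{{x_i} \over {r^3}} + 6GM{{x_i} \over {r^3}} = 4GM{{x_i} \over {r^3}}, \qquad E = {1 \over {16\pi G}}\underset {r \rightarrow \infty}{\lim}\oint\nolimits_{{{\mathcal S}_r}} 4GM{{x_ix^i} \over {r^4}}\,r^2\,{\rm d}\Omega = {1 \over {16\pi G}}\,4GM \cdot 4\pi = M,$$ i.e., the two-surface integral at spatial infinity indeed recovers the mass parameter of the solution.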
Choosing the tetrad field \(E_{\underline a}^a\) to be adapted to the spacelike hypersurface and assuming that the frame \(E_{\underline a}^a\) tends to a constant Cartesian one as r−k, the integral reproduces the ADM energy-momentum. The same expression can be obtained by following the familiar Hamiltonian analysis using the tetrad variables too: By the standard scenario one can construct the basic Hamiltonian [379]. This Hamiltonian, evaluated on the constraints, turns out to be precisely the flux integral of Ca[K] + TabKb on Σ. A particularly interesting and useful expression for the ADM energy-momentum is possible if the tetrad field is considered to be a frame field built from a normalized spinor dyad \(\{\lambda _A^{\underline A}\}, \underline A = 0,1\), on Σ, which is asymptotically constant (see Section 4.2.3). (Thus, underlined capital Roman indices are concrete name spinor indices.) Then, for the components of the ADM energy-momentum in the constant spinor basis at infinity, Møller's expression yields the limit of $${P^{\underline A \underline {B{\prime}}}} = {1 \over {4\pi G}}\oint\nolimits_{\mathcal S} {{{\rm{i}} \over 2}} \left({\overline \lambda _{A{\prime}}^{\underline {B{\prime}}}{\nabla _{BB{\prime}}}\lambda _A^{\underline A} - \overline \lambda _{B{\prime}}^{\underline {B{\prime}}}{\nabla _{AA{\prime}}}\lambda _B^{\underline A}} \right),$$ as the two-surface \({\mathcal S}\) is blown up to approach infinity. In fact, to recover the ADM energy-momentum in the form (3.10), the spinor fields \(\lambda _A^{\underline A}\) need not be required to form a normalized spinor dyad; it is enough that they form an asymptotically constant normalized dyad, and we have to use the fact that the generator vector field Ka has asymptotically constant components \({K^{\underline A {{\underline A}{\prime}}}}\) in the asymptotically constant frame field \(\lambda _{\underline A}^A\bar \lambda _{\underline {{A{\prime}}}}^{{A{\prime}}}\). Thus \({K^a} = {K^{\underline A {{\underline A}{\prime}}}}\lambda _{\underline A}^A\bar \lambda _{\underline {A{\prime}}}^{{A{\prime}}}\) can be interpreted as an asymptotic translation. The complex-valued 2-form in the integrand of Eq. (3.10) will be denoted by \(u{({\lambda ^{\underline A}},{{\bar \lambda}^{\underline {{B{\prime}}}}})_{ab}}\), and is called the Nester-Witten 2-form. This is 'essentially Hermitian' and connected with Komar's superpotential, too. In fact, for any two spinor fields αA and βA one has $$u{\left({\alpha, \overline \beta} \right)_{ab}} + \overline {u{{(\beta, \overline \alpha)}_{ab}}} = - {\rm{i}}{\nabla _{\left[ a \right.}}{X_{\left. b \right]}},$$ $$u{\left({\alpha, \bar \beta} \right)_{ab}} - \overline {u{{(\beta, \bar \alpha)}_{ab}}} = {\textstyle{1 \over 2}}{\nabla _c}{X_d}{\varepsilon ^{cd}}_{ab} + {\rm{i}}\left({{\varepsilon _{A{\prime}B{\prime}}}{\alpha _{\left(A \right.}}{\nabla _{\left. B \right)C{\prime}}}{{\overline \beta}^{C{\prime}}} - {\varepsilon _{A\,B}}{{\overline \beta}_{{{\left(A{\prime}\right.}}}}{\nabla _{\left. {B{\prime}} \right)C}}{\alpha ^C}} \right),$$ where \({X_a}: = {\alpha _A}{{\bar \beta}_{{A{\prime}}}}\) and the overline denotes complex conjugation. Thus, apart from the terms in Eq.
(3.12) involving ∇A′AαA and \({\nabla _{A{A{\prime}}}}{{\bar \beta}^{{A{\prime}}}}\), the Nester-Witten 2-form \(u{(\alpha, \bar \beta)_{ab}}\) is just \(- {{\rm{i}} \over 2}({\nabla _{[a}}{X_{b]}} + {\rm{i}}{\nabla _{[c}}{X_{d]}}{1 \over 2}{\varepsilon ^{cd}}_{ab})\), i.e., the anti-self-dual part of the curl of \(- {{\rm{i}} \over 2}{X_a}\) (The original expressions by Witten and Nester were given using Dirac, rather than two-component Weyl, spinors [559, 376]. The 2-form \(u{(\alpha, \bar \beta)_{ab}}\) in the present form using the two-component spinors probably appeared first in [276].) Although many interesting and original proofs of the positivity of the ADM energy are known even in the presence of black holes [444, 445, 559, 376, 273, 427, 300], the simplest and most transparent ones are probably those based on the use of two-component spinors: If the dominant energy condition is satisfied on the k-asymptotically flat spacelike hypersurface Σ, where \(k > {1 \over 2}\), then the ADM energy-momentum is future pointing and nonspacelike (i.e., the Lorentzian length of the energy-momentum vector, the ADM mass, is non-negative), and is null if and only if the domain of dependence D(Σ) of Σ is flat [276, 434, 217, 436, 88]. Its proof may be based on the Sparling equation [476, 175, 426, 358]: $${\nabla _{\left[ a \right.}}u{(\lambda, \overline \mu)_{\left. {bc} \right]}} = - {1 \over 2}{\lambda _E}{\overline \mu _{E{\prime}}}{G^{e\;f}}{1 \over {3!}}{\varepsilon _{f\;abc}} + \Gamma {(\lambda, \overline \mu)_{abc}}.$$ The significance of this equation is that, in the exterior derivative of the Nester-Witten 2-form, the second derivatives of the metric appear only through the Einstein tensor, thus its structure is similar to that of the superpotential equations in Lagrangian field theory, and \(\Gamma {(\lambda, \mu)_{abc}}\), known as the Sparling 3-form, is a homogeneous quadratic expression of the first derivatives of the spinor fields. If the spinor fields λA and μA solve the Witten equation on a spacelike hypersurface Σ, then the pullback of \(\Gamma {(\lambda, \bar \mu)_{abc}}\) to Σ is positive definite. This theorem has been extended and refined in various ways, in particular by allowing inner boundaries of Σ that represent future marginally trapped surfaces in black holes [217, 273, 427, 268]. The ADM energy-momentum can also be written as the two-sphere integral of certain parts of the conformally rescaled spacetime curvature [28, 29, 43]. This expression is a special case of the more general 'Riemann tensor conserved quantities' (see [223]): If \({\mathcal S}\) is any closed spacelike two-surface with area element \(d{\mathcal S}\), then for any tensor fields ωab = ω[ab] and μab = μ[ab] one can form the integral $${I_{\mathcal S}}[\omega, \mu ]: = \oint\nolimits_{\mathcal S} {{\omega ^{ab}}{R_{abcd}}{\mu ^{cd}}d{\mathcal S}}.$$ Since the falloff of the curvature tensor near spatial infinity is r−k−2, the integral \({I_{\mathcal S}}[\omega, \mu ]\) at spatial infinity gives finite value when ωabμcd blows up like rk as r → ∞. In particular, for the 1/r falloff, this condition can be satisfied by \({\omega ^{ab}}{\mu ^{cd}} = \sqrt {{\rm{Area(}}{\mathcal S}{\rm{)}}} {{\hat \omega}^{ab}}{{\hat \mu}^{cd}}\), where Area(\(({\mathcal S})\)) is the area of \({\mathcal S}\) and the hatted tensor fields are \({\mathcal O}(1)\). 
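The step from Eqs. (3.11) and (3.12) to the expression quoted at the beginning of this paragraph is just linear algebra: taking half the sum of the two equations and dropping the terms containing \({\nabla _{{A{\prime}}A}}{\alpha ^A}\) and \({\nabla _{A{A{\prime}}}}{{\bar \beta}^{{A{\prime}}}}\), one finds $$u{(\alpha, \bar \beta)_{ab}} \simeq {\textstyle{1 \over 2}}\left({- {\rm{i}}{\nabla _{[a}}{X_{b]}} + {\textstyle{1 \over 2}}{\nabla _c}{X_d}\,{\varepsilon ^{cd}}_{ab}} \right) = - {{\rm{i}} \over 2}\left({{\nabla _{[a}}{X_{b]}} + {\rm{i}}{\nabla _{[c}}{X_{d]}}{\textstyle{1 \over 2}}{\varepsilon ^{cd}}_{ab}} \right),$$ i.e., precisely the anti-self-dual part of the curl of \(- {{\rm{i}} \over 2}{X_a}\), as stated above.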
If the spacetime is stationary, then the ADM energy can be recovered as the r → ∞ limit of the two-sphere integral of (twice) Komar's superpotential with the Killing vector Ka of stationarity [223] (see also [60]), as well. (See also the remark following Eq. (3.15) below.) On the other hand, if the spacetime is not stationary then, without additional restriction on the asymptotic time translation, the Komar expression does not reproduce the ADM energy. However, by Eqs. (3.11) and (3.12) such an additional restriction might be that Ka should be a constant combination of four future-pointing null vector fields of the form \({\alpha ^A}{{\bar \alpha}^{{A{\prime}}}}\), where the spinor fields αA are required to satisfy the Weyl neutrino equation ∇A′AαA = 0. This expression for the ADM energy-momentum has been used to give an alternative, 'four-dimensional' proof of the positivity of the ADM energy [276]. (For a more detailed recent review of the various forms of the ADM energy and linear momentum, see, e.g., [293].) In a stationary spacetime the notion of the mechanical energy with respect to the world lines of stationary observers (i.e., the integral curves of the timelike Killing field) can be introduced in a natural way, and then (by definition) the total (ADM) energy is written as the sum of the mechanical energy and the gravitational energy. Then the latter is shown to be negative for certain classes of systems [308, 348]. The notion of asymptotic flatness at spatial infinity is generalized in [398]; here the background flat metric 0hab on Σ − K is allowed to have a nonzero deficit angle α at infinity, i.e., the corresponding line element in spherical polar coordinates takes the form \(- {\rm{d}}{r^2} - {r^2}(1 - \alpha)({\rm{d}}{\theta ^2} + {\sin ^2}\theta \,{\rm{d}}{\phi ^2})\). Then, a canonical analysis of the minimally-coupled Einstein-Higgs field is carried out on such a background, and, following a Regge-Teitelboim-type argumentation, an ADM-type total energy is introduced. It is shown that for appropriately chosen α this energy is finite for the global monopole solution, though the standard ADM energy is infinite.

Spatial infinity: Angular momentum

The value of the Hamiltonian of Beig and Ó Murchadha [64], together with the appropriately-defined asymptotic rotation-boost Killing vectors [497], defines the spatial angular momentum and center-of-mass, provided k ≥ 1 and, in addition to the familiar falloff conditions, certain global integral conditions are also satisfied. These integral conditions can be ensured by the explicit parity conditions of Regge and Teitelboim [433] on the leading nontrivial parts of the metric hab and extrinsic curvature χab: The components in the Cartesian coordinates {xi} of the former must be even and the components of the latter must be odd parity functions of xi/r (see also [64]). Thus, in what follows we assume that k = 1. Then the value of the Beig-Ó Murchadha Hamiltonian parametrized by the asymptotic rotation Killing vectors is the spatial angular momentum of Regge and Teitelboim [433], while that parametrized by the asymptotic boost Killing vectors deviates from the center-of-mass of Beig and Ó Murchadha [64] by a term, which is the spatial momentum times the coordinate time. (As Beig and Ó Murchadha pointed out [64], the center-of-mass term of the Hamiltonian of Regge and Teitelboim is not finite on the whole phase space.)
The spatial angular momentum and the new center-of-mass form an anti-symmetric Lorentz four-tensor, which transforms in the correct way under the four-translation of the origin of the asymptotically Cartesian coordinate system, and is conserved by the evolution equations [497]. The center-of-mass of Beig and Ó Murchadha was re-expressed recently [57] as the r → ∞ limit of two-surface integrals of the curvature in the form (3.14) with ωabμcd proportional to the lapse N times qacqbd − qadqbc, where qab is the induced two-metric on \({\mathcal S}\) (see Section 4.1.1). The geometric notion of center-of-mass introduced by Huisken and Yau [280] is another form of the Beig-Ó Murchadha center-of-mass [156]. The Ashtekar-Hansen definition for the angular momentum is introduced in their specific conformal model of spatial infinity as a certain two-surface integral near infinity. However, their angular momentum expression is finite and unambiguously defined only if the magnetic part of the spacetime curvature tensor (with respect to the Ω = const. timelike level hypersurfaces of the conformal factor) falls off faster than it would fall off in metrics with 1/r falloff (but no global integral, e.g., a parity condition had to be imposed) [37, 28]. If the spacetime admits a Killing vector of axisymmetry, then the usual interpretation of the corresponding Komar integral is the appropriate component of the angular momentum (see, e.g., [534]). However, the value of the Komar integral (with the usual normalization) is twice the expected angular momentum. In particular, if the Komar integral is normalized such that for the Killing field of stationarity in the Kerr solution the integral is m/G, for the Killing vector of axisymmetry it is 2ma/G instead of the expected ma/G ('factor-of-two anomaly') [305]. We return to the discussion of the Komar integral in Sections 4.3.1 and 12.1. Null infinity: Energy-momentum The study of the gravitational radiation of isolated sources led Bondi to the observation that the two-sphere integral of a certain expansion coefficient m(u, θ, ϕ) of the line element of a radiative spacetime in an asymptotically-retarded spherical coordinate system (u, r, θ, ϕ) behaves as the energy of the system at the retarded time u. Indeed, this energy is not constant in time, but decreases with u, showing that gravitational radiation carries away positive energy ('Bondi's mass-loss') [91, 92]. The set of transformations leaving the asymptotic form of the metric invariant was identified as a group, currently known as the Bondi-Metzner-Sachs (or BMS) group, having a structure very similar to that of the Poincaré group [440]. The only difference is that while the Poincaré group is a semidirect product of the Lorentz group and a four dimensional commutative group (of translations), the BMS group is the semidirect product of the Lorentz group and an infinite-dimensional commutative group, called the group of the supertranslations. A four-parameter subgroup in the latter can be identified in a natural way as the group of the translations. This makes it possible to compare the Bondi-Sachs four-momenta defined on different cuts of scri, and to calculate the energy-momentum carried away by the gravitational radiation in an unambiguous way. (For further discussion of the flux, see the fourth paragraph of Section 3.2.4.) At the same time the study of asymptotic solutions of the field equations led Newman and Unti to another concept of energy at null infinity [394]. 
However, this energy (currently known as the Newman-Unti energy) does not seem to have the same significance as the Bondi (or Bondi-Sachs [426] or Trautman-Bondi [147, 148, 146]) energy, because its monotonicity can be proven only between special, e.g., stationary, states. The Bondi energy, which is the time component of a Lorentz vector, the Bondi-Sachs energy-momentum, has a remarkable uniqueness property [147, 148]. Without additional conditions on Ka, Komar's expression does not reproduce the Bondi-Sachs energy-momentum in nonstationary spacetimes either [557, 223]: For the 'obvious' choice for Ka, (twice) Komar's expression yields the Newman-Unti energy. This anomalous behavior in the radiative regime could be corrected in at least two ways. The first is by modifying the Komar integral according to $${L_{\mathcal S}}[{\bf{K}}]: = {1 \over {8\pi G}}\oint\nolimits_{\mathcal S} {\left({{\nabla ^{\left[ c \right.}}{K^{\left. d \right]}} + \alpha {\nabla _e}{K^{e}}\,{}^ \bot {\varepsilon ^{cd}}} \right){1 \over 2}{\varepsilon _{cdab}}},$$ where ⊥εcd is the area 2-form on the Lorentzian two-planes orthogonal to \({\mathcal S}\) (see Section 4.1.1) and α is some real constant. For α = 1 the integral \({L_{\mathcal S}}[{\bf{K}}]\), suggested by Winicour and Tamburino, is called the linkage [557]. (N.B.: The flux integral of the sum \({C^a}[{\bf{K}}] + {T^a}_b{K^b}\) of Komar's gravitational and the matter's currents on some compact spacelike hypersurface Σ with boundary \({\mathcal S}\) is \({1 \over {16\pi G}}\oint {_{\mathcal S}} {\nabla ^{[a}}{K^{b]}}{1 \over 2}{\varepsilon _{abcd}}\), which is half of \({L_{\mathcal S}}[{\bf{K}}]\) evaluated with α = 0.) In addition, to define physical quantities by linkages associated to a cut of the null infinity one should prescribe how the two-surface \({\mathcal S}\) tends to the cut and how the vector field Ka should be propagated from the spacetime to null infinity into a BMS generator [557, 553]. The other way is to consider the original Komar integral (i.e., α = 0) on the cut of infinity in the conformally-rescaled spacetime while requiring that Ka be divergence-free [210]. For such asymptotic BMS translations both prescriptions give the correct expression for the Bondi-Sachs energy-momentum. The Bondi-Sachs energy-momentum can also be expressed by the integral of the Nester-Witten 2-form [285, 342, 343, 276]. However, in nonstationary spacetimes the spinor fields that are asymptotically constant at null infinity are vanishing [106]. Thus, the spinor fields in the Nester-Witten 2-form must satisfy a weaker boundary condition at infinity such that the spinor fields themselves are the spinor constituents of the BMS translations. The first such condition, suggested by Bramson [106], was to require the spinor fields to be the solutions of the asymptotic twistor equation (see Section 4.2.4). One can impose several such inequivalent conditions, and all of these, based only on the linear first-order differential operators coming from the two natural connections on the cuts (see Section 4.1.2), are determined in [496]. The Bondi-Sachs energy-momentum has a Hamiltonian interpretation as well. Although the fields on a spacelike hypersurface extending to null rather than spatial infinity do not form a closed system, a suitable generalization of the standard Hamiltonian analysis could be developed [146] and used to recover the Bondi-Sachs energy-momentum.
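For orientation it may help to recall the Komar integrals in one common textbook normalization (e.g., Wald's conventions with G = c = 1, which need not coincide with the conventions used in this review): for the stationary and axial Killing fields ξa and ϕa of a stationary, axisymmetric spacetime one takes $$M = - {1 \over {8\pi}}\oint\nolimits_{\mathcal S} {{\varepsilon _{abcd}}{\nabla ^c}{\xi ^d}}, \qquad\qquad J = {1 \over {16\pi}}\oint\nolimits_{\mathcal S} {{\varepsilon _{abcd}}{\nabla ^c}{\phi ^d}},$$ so that the Kerr solution yields M = m and J = ma only because the angular momentum integral carries an extra factor of \({1 \over 2}\) relative to the mass integral; with a single common normalization one obtains 2ma instead of ma, which is the 'factor-of-two anomaly' referred to above.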
Similar to the ADM case, the simplest proofs of the positivity of the Bondi energy [446] are probably those that are based on the Nester-Witten 2-form [285] and, in particular, the use of two-component spinors [342, 343, 276, 274, 436]: The Bondi-Sachs mass (i.e., the Lorentzian length of the Bondi-Sachs energy-momentum) of a cut of future null infinity is non-negative if there is a spacelike hypersurface Σ intersecting null infinity in the given cut such that the dominant energy condition is satisfied on Σ, and the mass is zero iff the domain of dependence D(Σ) of Σ is flat. Converting the integral of the Nester-Witten 2-form into a (positive definite) 3-dimensional integral on Σ, a strictly positive lower bound can be given for both the ADM and Bondi-Sachs masses. Although total energy-momentum (or mass) in the form of a two-surface integral cannot be introduced in closed universes (i.e., when Σ is compact with no boundary), a non-negative quantity m, based on this positive definite expression, can be associated with Σ. If the matter fields satisfy the dominant energy condition, then m = 0 if and only if the spacetime is flat and topologically Σ is a 3-torus; moreover, its vanishing is equivalent to the existence of non-trivial solutions of Witten's gauge condition. This m turned out to be recoverable as the first eigenvalue of the square of the Sen-Witten operator. It is the usefulness and the applicability of this m in practice which tell us whether this is a reasonable notion of total mass of closed universes or not [503].

Null infinity: Angular momentum

At null infinity there is a generally accepted definition of angular momentum only in stationary or axisymmetric spacetimes, but not in general radiative spacetimes, for which there are various, mathematically inequivalent suggestions (see Section 4.2.4). Here we review only some of those total angular momentum definitions that can be 'quasi-localized' or connected somehow to quasi-local expressions, i.e., those that can be considered as the null-infinity limit of some quasi-local expression. We will continue their discussion in the main part of the review, namely in Sections 7.2, 11.1 and 9. In their classic paper Bergmann and Thomson [78] raise the idea that while the gravitational energy-momentum is connected with the spacetime diffeomorphisms, the angular momentum should be connected with its intrinsic O(1, 3) symmetry. Thus, the angular momentum should be analogous to the spin. Based on the tetrad formalism of general relativity and following the prescription of constructing the Noether currents in Yang-Mills theories, Bramson suggested a superpotential for the six conserved currents corresponding to the internal Lorentz-symmetry [107, 108, 109]. (For another derivation of this superpotential from Møller's Lagrangian (3.5) see [496].) If \(\{\lambda _A^{\underline A}\}, \underline A = 0,1\), is a normalized spinor dyad corresponding to the orthonormal frame in Eq. (3.5), then the integral of the spinor form of the anti-self-dual part of this superpotential on a closed orientable two-surface \({\mathcal S}\) is $$J_{\mathcal S}^{\underline A \underline B}: = {1 \over {8\pi G}}\oint\nolimits_{\mathcal S} {- {\rm{i}}\lambda _{\left(A \right.}^{\underline A}\lambda _{\left. B \right)}^{\underline B}{\varepsilon _{A{\prime}B{\prime}}}},$$ where εA′B′ is the symplectic metric on the bundle of primed spinors.
We will denote its integrand by \(w{({\lambda ^{\underline A}},{\lambda ^{\underline B}})_{ab}}\), and we call it the Bramson superpotential. To define angular momentum on a given cut of the null infinity by the formula (3.16), we should consider its limit when \({\mathcal S}\) tends to the cut in question and we should specify the spinor dyad, at least asymptotically. Bramson's suggestion for the spinor fields was to take the solutions of the asymptotic twistor equation [106]. He showed that this definition yields a well-defined expression. For stationary spacetimes this reduces to the generally accepted formula (4.15), and the corresponding Pauli-Lubanski spin, constructed from \({\varepsilon ^{\underline {A{\prime}} \underline {B{\prime}}}}{J^{\underline A \underline B}} + {\varepsilon ^{\underline A \underline B}}{{\bar J}^{\underline {A{\prime}} \underline {B{\prime}}}}\) and the Bondi-Sachs energy-momentum \({P^{\underline A \underline {{A{\prime}}}}}\) (given, for example, in the Newman-Penrose formalism by Eq. (4.14)), is invariant with respect to supertranslations of the cut ('active supertranslations'). Note that since Bramson's expression is based on the solutions of a system of partial differential equations on the cut in question, it is independent of the parametrization of the BMS vector fields. Hence, in particular, it is invariant with respect to the supertranslations of the origin cut ('passive supertranslations'). Therefore, Bramson's global angular momentum behaves like the spin part of the total angular momentum. For a suggestion based on Bramson's superpotential at the quasi-local level, but using a different prescription for the spinor dyad, see Section 9. The construction based on the Winicour-Tamburino linkage (3.15) can be associated with any BMS vector field [557, 337, 45]. In the special case of translations it reproduces the Bondi-Sachs energy-momentum. The quantities that it defines for the proper supertranslations are called the super-momenta. For the boost-rotation vector fields they can be interpreted as angular momentum. However, in addition to the factor-of-two anomaly, this notion of angular momentum contains a huge ambiguity ('supertranslation ambiguity'): The actual form of both the boost-rotation Killing vector fields of Minkowski spacetime and the boost-rotation BMS vector fields at future null infinity depends on the choice of origin, a point in Minkowski spacetime and a cut of null infinity, respectively. However, while the set of the origins of Minkowski spacetime is parametrized by four numbers, the set of the origins at null infinity requires a smooth function of the form \(u:{S^2} \rightarrow {\rm{\mathbb R}}\). Consequently, while the corresponding angular momentum in the Minkowski spacetime has the familiar origin-dependence (containing four parameters), the analogous transformation of the angular momentum defined by using the boost-rotation BMS vector fields depends on an arbitrary smooth real-valued function on the two-sphere. This makes the angular momentum defined at null infinity by the boost-rotation BMS vector fields ambiguous unless a natural selection rule for the origins, making them form a four-parameter family of cuts, is found.
Motivated by Penrose's idea that the 'conserved' quantities at null infinity should be searched for in the form of a charge integral of the curvature (which will be discussed in detail in Section 7), a general expression \({Q_{\mathcal S}}[{K^a}]\), associated with any BMS generator Ka and any cut \({\mathcal S}\) of scri, was introduced [174]. For real Ka this is real; it is vanishing in Minkowski spacetime; it reproduces the Bondi-Sachs energy-momentum for BMS translations; it yields nontrivial results for proper supertranslations; and for BMS rotations the resulting expressions can be interpreted as angular momentum. It was shown in [453, 173] that the difference \({Q_{{{\mathcal S}{\prime}}}}[{K^a}] - {Q_{{{\mathcal S}{{\prime\prime}}}}}[{K^a}]\) for any two cuts \({{\mathcal S}{\prime}}\) and \({{\mathcal S}{{\prime\prime}}}\) can be written as the integral of some local function on the subset of scri bounded by the cuts \({{\mathcal S}{\prime}}\) and \({{\mathcal S}{{\prime\prime}}}\), and this is precisely the flux integral of [44]. Unfortunately, however, the angular momentum introduced in this way still suffers from the same supertranslation ambiguity. A possible resolution of this difficulty could be the suggestion by Dain and Moreschi [169] in the charge integral approach to angular momentum of Moreschi [369, 370]. Their basic idea is that the requirement of the vanishing of the supermomenta (i.e., the quantities corresponding to the proper supertranslations) singles out a four-real-parameter family of cuts, called nice cuts, by means of which the BMS group can be reduced to a Poincaré subgroup that yields a well-defined notion of angular momentum. For further discussion of certain other angular momentum expressions, especially from the points of view of numerical calculations, see also [204]. Another promising approach might be that of Chruściel, Jezierski, and Kijowski [146], which is based on a Hamiltonian analysis of general relativity on asymptotically hyperboloidal spacelike hypersurfaces. They chose the six BMS vector fields tangent to the intersection of the spacelike hypersurface and null infinity as the generators of their angular momentum. Since the motions that their angular momentum generators define leave the domain of integration fixed, and apparently there is no Lorentzian four-space of origins, they appear to be the generators with respect to some fixed 'center-of-the-cut', and the corresponding angular momentum appears to be the intrinsic angular momentum. In addition to the supertranslation ambiguity in the definition of angular momentum, there could be another potential ambiguity, even if the angular momentum is well defined on every cut of future null infinity. In fact, if, for example, the definition of the angular momentum is based on the solutions of some linear partial differential equation on the cut (such as Bramson's definition, or the ones discussed in Sections 7 and 9), then in general there is no canonical isomorphism between the spaces of the solutions on different cuts, even if the solution spaces, as abstract vector spaces, are isomorphic. Therefore, the angular momenta on two different cuts belong to different vector spaces, and, without any natural correspondence between the solution spaces on the different cuts, it is meaningless to speak about the difference of the angular momenta. 
Thus, we cannot say anything about, e.g., the angular momentum carried away by gravitational radiation between two retarded time instants represented by two different cuts. One possible resolution of this difficulty was suggested by Helfer [264]. He followed the twistorial approach presented in Section 7 and used a special bijective map between the two-surface twistor spaces on different cuts. His map is based on the special structures available only at null infinity. Though this map is nonlinear, it is shown that the angular momenta on the different cuts can indeed be compared. Another suggestion for (only) the spatial angular momentum was given in [501]. This is based on the quasi-local Hamiltonian analysis that is discussed in Section 11.1, and the use of the divergence-free vector fields built from the eigenspinors with the smallest eigenvalue of the two-surface Dirac operators. The angular momenta, defined in these ways on different cuts, can also be compared. We give a slightly more detailed discussion of them in Sections 7.2 and 11.1, respectively. The main idea behind the recent definition of the total angular momentum at future null infinity of Kozameh, Newman and Silva-Ortigoza, suggested in [325, 326], is analogous to finding the center-of-charge (i.e., the time-dependent position vector with respect to which the electric dipole moment is vanishing) in flat-space electromagnetism: By requiring that the dipole part of an appropriate null rotated Weyl tensor component \(\psi _1^0\) be vanishing, a preferred set of origins, namely a (complex) center-of-mass line can be found in the four-complex-dimensional solution space of the good-cut equation (the H-space). Then the asymptotic Bianchi identities take the form of conservation equations, and certain terms in these can (in the given approximation) be identified with angular momentum. The resulting expression is just Eq. (4.15), to which all the other reasonable angular momentum expressions are expected to reduce in stationary spacetimes. A slightly more detailed discussion of the necessary technical background is given in Section 4.2.4. The necessity of quasi-locality for observables in general relativity Nonlocality of the gravitational energy-momentum and angular momentum One reaction to the nontensorial nature of the gravitational energy-momentum density expressions was to consider the whole problem ill defined and the gravitational energy-momentum meaningless. However, the successes discussed in Section 3.2 show that the global gravitational energy-momenta and angular momenta are useful notions, and hence, it could also be useful to introduce them even if the spacetime is not asymptotically flat. Furthermore, the nontensorial nature of an object does not imply that it is meaningless. For example, the Christoffel symbols are not tensorial, but they do have geometric, and hence physical content, namely the linear connection. Indeed, the connection is a nonlocal geometric object, connecting the fibers of the vector bundle over different points of the base manifold. Hence, any expression of the connection coefficients, in particular the gravitational energy-momentum or angular momentum, must also be nonlocal. In fact, although the connection coefficients at a given point can be taken to zero by an appropriate coordinate/gauge transformation, they cannot be transformed to zero on an open domain unless the connection is flat. 
Furthermore, the superpotentials of many of the classical pseudotensors (e.g., of the Einstein, Bergmann, Møller's tetrad, Landau-Lifshitz pseudotensors), being linear in the connection coefficients, can be recovered as the pullback to the spacetime manifold of various forms of a single geometric object on the linear frame bundle, namely of the Nester-Witten 2-form, along various local cross sections [192, 358, 486, 487], and the expressions of the pseudotensors by their superpotentials are the pullbacks of the Sparling equation [476, 175, 358]. In addition, Chang, Nester, and Chen [131] found a natural quasi-local Hamiltonian interpretation of each of the pseudotensorial expressions in the metric formulation of the theory (see Section 11.3.5). Therefore, the pseudotensors appear to have been 'rehabilitated', and the gravitational energy-momentum and angular momentum are necessarily associated with extended subsets of the spacetime. This fact is a particular consequence of a more general phenomenon [76, 439, 284]: Since (in the absence of any non-dynamical geometric background) the physical spacetime is the isomorphism class of the pairs (M, gab) (instead of a single such pair), it is meaningless to speak about the 'value of a scalar or vector field at a point p ∈ M'. What could have meaning are the quantities associated with curves (the length of a curve, or the holonomy along a closed curve), two-surfaces (e.g., the area of a closed two-surface), etc., determined by some body or physical fields. In addition, as Torre showed [523] (see also [524]), in spatially-closed vacuum spacetimes there can be no nontrivial observable, built as spatial integrals of local functions of the canonical variables and their finitely many derivatives. Thus, if we want to associate energy-momentum and angular momentum not only to the whole (necessarily asymptotically flat) spacetime, then these quantities must be associated with extended but finite subsets of the spacetime, i.e., must be quasi-local. The results of Friedrich and Nagy [202] show that under appropriate boundary conditions the initial boundary value problem for the vacuum Einstein equations, written into a first-order symmetric hyperbolic form, has a unique solution. Thus, there is a solid mathematical basis for the investigations of the evolution of subsystems of the universe, and hence, it is natural to ask about the observables, and in particular the conserved quantities, of their dynamics.

Domains for quasi-local quantities

The quasi-local quantities (usually the integral of some local expression of the field variables) are associated with a certain type of subset of spacetime. In four dimensions there are three natural candidates: (1) the globally hyperbolic domains D ⊂ M with compact closure, (2) the compact spacelike (in fact, acausal) hypersurfaces Σ with boundary (interpreted as Cauchy surfaces for globally hyperbolic domains D), and (3) the closed, orientable spacelike two-surfaces \({\mathcal S}\) (interpreted as the boundary ∂Σ of Cauchy surfaces for globally hyperbolic domains). A typical example of type 3 is any charge integral expression: The quasi-local quantity is the integral of some superpotential 2-form built from the data given on the two-surface, as in Eq. (3.10), or the expression \({Q_{\mathcal S}}[{\bf{K}}]\) for the matter fields given by (2.5).
An example of type 2 might be the integral of the Bel-Robinson 'momentum' on the hypersurface Σ: $${E_\Sigma}[{\xi ^a}]: = \int\nolimits_\Sigma {{\xi ^d}{T_{de\,f\,g}}{t^e}{t^f}{\textstyle{1 \over {3!}}}{\varepsilon ^g}_{abc}}.$$ This quantity is analogous to the integral EΣ[ξa] for the matter fields given by Eq. (2.6) (though, by the remarks on the Bel-Robinson 'energy' in Section 3.1.2, its physical dimension cannot be that of energy). If ξa is a future-pointing nonspacelike vector then EΣ[ξa] ≥ 0. Obviously, if such a quantity were independent of the actual hypersurface Σ, then it could also be rewritten as a charge integral on the boundary ∂Σ. The gravitational Hamiltonian provides an interesting example for the mixture of type 2 and 3 expressions, because the form of the Hamiltonian is the three-surface integral of the constraints on Σ and a charge integral on its boundary ∂Σ, and thus, if the constraints are satisfied then the Hamiltonian reduces to a charge integral. Finally, an example of type 1 might be $${E_D}: = \inf \;\{{E_\Sigma}[{\bf{t}}]\vert \Sigma \;{\rm{is}}\;{\rm{a}}\;{\rm{Cauchy}}\;{\rm{surface}}\;{\rm{for}}\;D\},$$ the infimum of the 'quasi-local Bel-Robinson energies', where the infimum is taken on the set of all the Cauchy surfaces Σ for D with given boundary ∂Σ. (The infimum always exists because the Bel-Robinson 'energy density' Tabcdtatbtctd is non-negative.) Quasi-locality in any of these three senses is compatible with the quasi-locality of Haag and Kastler [231, 232]. The specific quasi-local energy-momentum constructions provide further examples both for charge-integral-type expressions and for those based on spacelike hypersurfaces.

Strategies to construct quasi-local quantities

There are two natural ways of finding the quasi-local energy-momentum and angular momentum. The first is to follow some systematic procedure, while the second is the 'quasi-localization' of the global energy-momentum and angular momentum expressions. One of the two systematic procedures could be called the Lagrangian approach: The quasi-local quantities are integrals of some superpotential derived from the Lagrangian via a Noether-type analysis. The advantage of this approach could be its manifest Lorentz-covariance. On the other hand, since the Noether current is determined only through the Noether identity, which contains only the divergence of the current itself, the Noether current and its superpotential are not uniquely determined. In addition (as in any approach), a gauge reduction (for example in the form of a background metric or reference configuration) and a choice for the 'translations' and 'boost-rotations' should be made. The other systematic procedure might be called the Hamiltonian approach: At the end of a fully quasi-local (covariant or not) Hamiltonian analysis we would have a Hamiltonian, and its value on the constraint surface in the phase space yields the expected quantities. Here one of the main ideas is that of Regge and Teitelboim [433], that the Hamiltonian must reproduce the correct field equations as the flows of the Hamiltonian vector fields, and hence, in particular, the correct Hamiltonian must be functionally differentiable with respect to the canonical variables. This differentiability may restrict the possible 'translations' and 'boost-rotations' too.
Another idea is the expectation, based on the study of the quasi-local Hamiltonian dynamics of a single scalar field, that the boundary terms appearing in the calculation of the Poisson brackets of two Hamiltonians (the 'Poisson boundary terms'), represent the infinitesimal flow of energy-momentum and angular momentum between the physical system and the rest of the universe [502]. Therefore, these boundary terms must be gauge invariant in every sense. This requirement restricts the potential boundary terms in the Hamiltonian as well as the boundary conditions for the canonical variables and the lapse and shift. However, if we are not interested in the structure of the quasi-local phase space, then, as a short cut, we can use the Hamilton-Jacobi method to define the quasi-local quantities. The resulting expression is a two-surface integral. Nevertheless, just as in the Lagrangian approach, this general expression is not uniquely determined, because the action can be modified by adding an (almost freely chosen) boundary term to it. Furthermore, the 'translations' and 'boost-rotations' are still to be specified. On the other hand, at least from a pragmatic point of view, the most natural strategy to introduce the quasi-local quantities would be some 'quasi-localization' of those expressions that gave the global energy-momentum and angular momentum of asymptotically flat spacetimes. Therefore, respecting both strategies, it is also legitimate to consider the Winicour-Tamburino-type (linkage) integrals and the charge integrals of the curvature. Since the global energy-momentum and angular momentum of asymptotically flat spacetimes can be written as two-surface integrals at infinity (and, as we saw in Section 3.1.1 that the mass of the source in Newtonian theory, and as we will see in Section 7.1.1 that both the energy-momentum and angular momentum of the source in the linearized Einstein theory can also be written as two-surface integrals), the two-surface observables can be expected to have special significance. Thus, to summarize, if we want to define reasonable quasi-local energy-momentum and angular momentum as two-surface observables, then three things must be specified: an appropriate general two-surface integral (e.g., in the Lagrangian approaches the integral of a superpotential 2-form, or in the Hamiltonian approaches a boundary term together with the boundary conditions for the canonical variables), a gauge choice (in the form of a distinguished coordinate system in the pseudotensorial approaches, or a background metric/connection in the background field approaches or a distinguished tetrad field in the tetrad approach), and a definition for the 'quasi-symmetries' of the two-surface (i.e., the 'generator vector fields' of the quasi-local quantities in the Lagrangian, and the lapse and the shift in the Hamiltonian approaches, respectively, which, in the case of timelike 'generator vector fields', can also be interpreted as a fleet of observers on the two-surface). In certain approaches the definition of the 'quasi-symmetries' is linked to the gauge choice, for example by using the Killing symmetries of the flat background metric. Tools to Construct and Analyze Quasi-Local Quantities Having accepted that the gravitational energy-momentum and angular momentum should be introduced at the quasi-local level, we next need to discuss the special tools and concepts that are needed in practice to construct (or even to understand) the various special quasi-local expressions. 
Thus, first, in Section 4.1 we review the geometry of closed spacelike two-surfaces, with special emphasis on two-surface data. Then, in Sections 4.2 and 4.3, we discuss the special situations where there is a more-or-less generally accepted 'standard' definition for the energy-momentum (or at least for the mass) and angular momentum. In these situations any reasonable quasi-local quantity should reduce to them. The geometry of spacelike two-surfaces The first systematic study of the geometry of spacelike two-surfaces from the point of view of quasi-local quantities is probably due to Tod [514, 519]. Essentially, his approach is based on the Geroch-Held-Penrose (GHP) formalism [209]. Although this is a very effective and flexible formalism [209, 425, 426, 277, 479], its form is not spacetime covariant. Since in many cases the covariance of a formalism itself already gives some hint as to how to treat and solve the problem at hand, we concentrate here mainly on a spacetime-covariant description of the geometry of the spacelike two-surfaces, developed gradually in [489, 491, 492, 493, 198, 500]. The emphasis will be on the geometric structures rather than the technicalities. In the last paragraph, we comment on certain objects appearing in connection with families of spacelike two-surfaces. Our standard differential geometric reference is [318, 319]. The Lorentzian vector bundle The restriction \({{\rm{V}}^a}({\mathcal S})\) to the closed, orientable spacelike two-surface \({\mathcal S}\) of the tangent bundle TM of the spacetime has a unique decomposition to the gab-orthogonal sum of the tangent bundle TS of \({\mathcal S}\) and the bundle of the normals, denoted by NS. Then, all the geometric structures of the spacetime (metric, connection, curvature) can be decomposed in this way. If ta and va are timelike and spacelike unit normals, respectively, being orthogonal to each other, then the projections to \(T{\mathcal S}\) and \(N{\mathcal S}\) are \(\Pi _b^a: = \delta _b^a - {t^a}{t_b} + {\upsilon ^a}{\upsilon _b}\) and \(O_b^a: = \delta _b^a - \Pi _b^a\), respectively. The induced two-metric and the corresponding area 2-form on \({\mathcal S}\) will be denoted by qab = gab − tatb + vavb and εab = tcvdεcdab, respectively, while the area 2-form on the normal bundle will be ⊥εab = tavb − tbva. The bundle \({{\rm{V}}^a}({\mathcal S})\) together with the fiber metric gab and the projection \(\Pi _b^a\) will be called the Lorentzian vector bundle over \({\mathcal S}\). For the discussion of the global topological properties of the closed orientable two-manifolds, see, e.g., [10, 500]. The spacetime covariant derivative operator ∇e defines two connections on \({{\rm{V}}^a}({\mathcal S})\). The first covariant derivative, denoted by δe, is analogous to the induced (intrinsic) covariant derivative on (one-codimensional) hypersurfaces: \({\delta _e}{X^a}: = \Pi _b^a\Pi _e^f{\nabla _f}(\Pi _c^b{X^c}) + O_b^a\Pi _e^f{\nabla _f}(O_c^b{X^c})\) for any section Xa of \({{\rm{V}}^a}({\mathcal S})\). Obviously, δe annihilates both the fiber metric gab and the projection \(\Pi _b^a\). However, since for two-surfaces in four dimensions the normal is not uniquely determined, we have the 'boost gauge freedom' ta ↦ ta cosh u + va sinh u, va ↦ ta sinh u + va cosh u. The induced connection will have a nontrivial part on the normal bundle, too. The corresponding (normal part of the) connection one-form on \({\mathcal S}\) can be characterized, for example, by \({A_e}: = \Pi _e^f({\nabla _f}{t_a}){\upsilon ^a}\). 
Therefore, the connection δe can be considered as a connection on \({{\rm{V}}^a}({\mathcal S})\) coming from a connection on the O(2) ⊗ O(1, 1)-principal bundle of the gab-orthonormal frames adapted to \({\mathcal S}\). The other connection, Δe, is analogous to the Sen connection [447], and is defined simply by \({\Delta _e}{X^a}: = \Pi _e^f{\nabla _f}{X^a}\). This annihilates only the fiber metric, but not the projection. The difference of the connections Δe and δe turns out to be just the extrinsic curvature tensor: \({\Delta _e}{X^a} = {\delta _e}{X^a} + {Q^a}_{eb}{X^b} - {X^b}{Q_{be}}^a\). Here \({Q^a}_{eb}: = - \Pi _c^a{\Delta _e}\Pi _b^c = {\tau ^a}_e{t_b} - {\nu ^a}_e{\upsilon _b}\), and \({\tau _{ab}}: = \Pi _a^c\Pi _b^d{\nabla _c}{t_d}\) and \({\nu _{ab}}: = \Pi _a^c\Pi _b^d{\nabla _c}{\upsilon _d}\) are the standard (symmetric) extrinsic curvatures corresponding to the individual normals ta and va, respectively. The familiar expansion tensors of the future-pointing outgoing and ingoing null normals, la := ta + υa and \({n^a}: = {1 \over 2}({t^a} - {\upsilon ^a})\), respectively, are \({\theta _{ab}} = {Q_{abc}}{l^c}\) and \({\theta{\prime}_{ab}} = {Q_{abc}}{n^c}\), and the corresponding shear tensors σab and σ′ab are defined as their trace-free parts. Obviously, τab and νab (and hence the expansion and shear tensors θab, θ′ab, σab, and σ′ab) are boost-gauge-dependent quantities (and it is straightforward to derive their transformation from the definitions), but their combination \({Q^a}_{eb}\) is boost-gauge invariant. In particular, it defines a natural normal vector field to \({\mathcal S}\) as \({Q_b}: = {Q^a}_{ab} = \tau {t_b} - \nu {\upsilon _b} = {\theta {\prime}}{l_b} + \theta {n_b}\), where τ, ν, θ and θ′ are the relevant traces. Qa is called the mean extrinsic curvature vector of \({\mathcal S}\). If \({{\tilde Q}_a}: = {}^ \bot {\varepsilon _a}^b{Q_b} = \nu {t_a} - \tau {\upsilon _a} = - {\theta {\prime}}{l_a} + \theta {n_a}\), called the dual mean curvature vector, then the norms of Qa and \({{\tilde Q}_a}\) are \({Q_a}{Q_b}{g^{ab}} = - {{\tilde Q}_a}{{\tilde Q}_b}{g^{ab}} = {\tau ^2} - {\nu ^2} = 2\theta {\theta {\prime}}\), and they are orthogonal to each other: \({Q_a}{{\tilde Q}_b}{g^{ab}} = 0\). It is easy to show that \({\Delta _a}{{\tilde Q}^a} = 0\), i.e., \({{\tilde Q}^a}\) is the uniquely pointwise-determined direction orthogonal to the two-surface in which the expansion of the surface is vanishing. If Qa is not null, then \(\{{Q_a},{{\tilde Q}_a}\}\) defines an orthonormal frame in the normal bundle (see, e.g., [14]). If Qa is nonzero, but (e.g., future-pointing) null, then there is a uniquely determined null normal Sa to \({\mathcal S}\), such that QaSa = 1, and hence, {Qa, Sa} is a uniquely determined null frame. Therefore, the two-surface admits a natural gauge choice in the normal bundle, unless Qa is vanishing. Geometrically, Δe is a connection coming from a connection on the O(1, 3)-principal fiber bundle of the gab-orthonormal frames.
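For instance, the statements about the norms of the mean curvature vector and its dual follow at once from their null-basis form, using only \({l_a}{l^a} = {n_a}{n^a} = 0\) and \({l_a}{n^a} = 1\): $${Q_a}{Q^a} = ({\theta {\prime}}{l_a} + \theta {n_a})({\theta {\prime}}{l^a} + \theta {n^a}) = 2\theta {\theta {\prime}}, \qquad\qquad {Q_a}{{\tilde Q}^a} = ({\theta {\prime}}{l_a} + \theta {n_a})(- {\theta {\prime}}{l^a} + \theta {n^a}) = 0,$$ and similarly \({{\tilde Q}_a}{{\tilde Q}^a} = - 2\theta {\theta {\prime}}\). In particular, with the signature conventions used here (tata = 1), Qa is timelike precisely when θθ′ > 0, while \({{\tilde Q}^a}\) is timelike when θθ′ < 0, which is the 'mean convexity' condition quoted below.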
The curvature of the connections δe and Δe, respectively, are $${f^a}_{bcd} = {- ^ \bot}{\varepsilon ^a}_b({\delta _c}{A_d} - {\delta _d}{A_c}) + {\textstyle{1 \over 2}}{}^{\mathcal S}R(\Pi _c^a{q_{bd}} - \Pi _d^a{q_{bc}}),$$ $$\begin{array}{*{20}c} {{F^a}_{bcd} = {f^a}_{bcd} - {\delta _c}({Q^a}_{db} - {Q_{bd}}^a) + {\delta _d}({Q^a}_{cb} - {Q_{bc}}^a) +} \\ {\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + {Q^a}_{ce}{Q_{bd}}^e + {Q_{ec}}^a{Q^e}_{db} - {Q^a}_{de}{Q_{bc}}^e - {Q_{ed}}^a{Q^e}_{cb},} \\ \end{array}$$ where \(^{\mathcal S}R\) is the curvature scalar of the familiar intrinsic Levi-Civita connection of \(^{\mathcal S}R\). The curvature of Δe is just the pullback to \({\mathcal S}\) of the spacetime curvature 2-form: \({F^a}_{bcd} = {R^a}_{bef}\Pi _c^e\Pi _d^f\). Therefore, the well-known Gauss, Codazzi-Mainardi, and Ricci equations for the embedding of \({\mathcal S}\) in M are just the various projections of Eq. (4.2). Embeddings and convexity conditions To prove certain statements about quasi-local quantities, various forms of the convexity of \({\mathcal S}\) must be assumed. The convexity of \({\mathcal S}\) in a three-geometry is defined by the positive definiteness of its extrinsic curvature tensor. If, in addition, the three-geometry is flat, then by the Gauss equation this is equivalent to the positivity of the scalar curvature of the intrinsic metric of \({\mathcal S}\). It is this convexity condition that appears in the solution of the Weyl problem of differential geometry [397]: if \(({S^2},{q_{ab}})\) is a C4 Riemannian two-manifold with positive scalar curvature, then this can be isometrically embedded (i.e., realized as a closed convex two-surface) in the Euclidean three-space ℝ3, and this embedding is unique up to rigid motions [477]. However, there are counterexamples even to local isometric embedability, when the convexity condition, i.e., the positivity of the scalar curvature, is violated [373]. We continue the discussion of this embedding problem in Section 10.1.6. In the context of general relativity the isometric embedding of a closed orientable two-surface into the Minkowski spacetime ℝ1,3 is perhaps more interesting. However, even a naïve function counting shows that if such an embedding exists then it is not unique. An existence theorem for such an embedding, \(i:{\mathcal S} \rightarrow {{\rm{{\mathbb R}}}^{1,3}}\), (with S2 topology) was given by Wang and Yau [543], and they controlled these isometric embeddings in terms of a single function τ on the two-surface. This function is just \({x^{\underline a}}{T_{\underline a}}\), the 'time function' of the surface in the Cartesian coordinates of the Minkowski space in the direction of a constant unit timelike vector field \({T_{\underline a}}\). Interestingly enough, \(({\mathcal S},{q_{ab}})\) is not needed to have positive scalar curvature, only the sum of the scalar curvature and a positive definite expression of the derivative δeτ is required to be positive. This condition is just the requirement that the surface must have a convex 'shadow' in the direction \({T^{\underline a}}\), i.e., the scalar curvature of the projection of the two-surface \(i({\mathcal S}) \subset {{\rm{{\mathbb R}}}^{1,3}}\) to the spacelike hyperplane orthogonal to \({T^{\underline a}}\) is positive. The Laplacian δeδeτ of the 'time function' gives the mean curvature vector of \(i({\mathcal S})\) in ℝ1,3 in the direction \({T^{\underline a}}\). 
If \({\mathcal S}\) is in a Lorentzian spacetime, then the weakest convexity conditions are conditions only on the mean null curvatures: \({\mathcal S}\) will be called weakly future convex if the outgoing null normals la are expanding on \({\mathcal S}\), i.e., θ:= qabθab > 0, and weakly past convex if θ′:= qabθ′ab < 0 [519]. \({\mathcal S}\) is called mean convex [247] if θθ′ < 0 on \({\mathcal S}\), or, equivalently, if \({{\tilde Q}_a}\) is timelike. To formulate stronger convexity conditions we must consider the determinants of the null expansions \(D: = \det \Vert {\theta ^a}_b\Vert \, = \,{1 \over 2}({\theta _{ab}}{\theta _{cd}} - {\theta _{ac}}{\theta _{bd}}){q^{ab}}{q^{cd}}\) and \({D{\prime}}: = \det \Vert{\theta{\prime}^{a}}_b\Vert \, = \,{1 \over 2}({\theta{\prime}_{ab}}{\theta{\prime}_{cd}} - {\theta{\prime}_{ac}}{\theta{\prime}_{bd}}){q^{ab}}{q^{cd}}\). Note that, although the expansion tensors, and in particular the functions θ, θ′, D, and D′ are boost-gauge-dependent, their sign is gauge invariant. Then \({\mathcal S}\) will be called future convex if θ > 0 and D > 0, and past convex if θ′ < 0 and D′ > 0 [519, 492]. These are equivalent to the requirement that the two eigenvalues of \({\theta ^a}_b\) be positive and those of \({\theta{\prime}^{a}}_b\) be negative everywhere on \({\mathcal S}\), respectively. A different kind of convexity condition, based on global concepts, will be used in Section 6.1.3.

The spinor bundle

The connections δe and Δe determine connections on the pullback \({{\rm{S}}^A}({\mathcal S})\) to \({\mathcal S}\) of the bundle of unprimed spinors. The natural decomposition \({{\rm{V}}^a}({\mathcal S}) = T{\mathcal S} \oplus N{\mathcal S}\) defines a chirality on the spinor bundle \({{\rm{S}}^A}({\mathcal S})\) in the form of the spinor \({\gamma ^A}_B: = 2{t^{A{A{\prime}}}}{\upsilon _{B{A{\prime}}}}\), which is analogous to the γ5 matrix in the theory of Dirac spinors. Then, the extrinsic curvature tensor above is a simple expression of \({Q^A}_{eB}: = {1 \over 2}({\Delta _e}{\gamma ^A}_C){\gamma ^C}_B\) and \({\gamma ^A}_B\) (and their complex conjugate), and the two covariant derivatives on \({{\rm{S}}^A}({\mathcal S})\) are related to each other by \({\Delta _e}{\lambda ^A} = {\delta _e}{\lambda ^A} + {Q^A}_{eB}{\lambda ^B}\). The curvature \({F^A}_{Bcd}\) of Δe can be expressed by the curvature \({f^A}_{Bcd}\) of δe, the spinor \({Q^A}_{eB}\), and its δe-derivative. We can form the scalar invariants of the curvatures according to $$f: = {f_{abcd}}{1 \over 2}({\varepsilon ^{ab}} - {{\rm{i}}^ \bot}{\varepsilon ^{ab}})\;{\varepsilon ^{cd}} = {\rm{i}}{\gamma ^A}_B{f^B}_{Acd}{\varepsilon ^{cd}}{= ^{\mathcal S}}R - 2{\rm{i}}{\delta _c}({\varepsilon ^{cd}}{A_d}),$$ $$F: = {F_{abcd}}{1 \over 2}({\varepsilon ^{ab}} - {{\rm{i}}^ \bot}{\varepsilon ^{ab}}){\varepsilon ^{cd}} = {\rm{i}}{\gamma ^A}_B{F^B}_{Acd}{\varepsilon ^{cd}} = f + \theta \theta {\prime}- 2{\sigma {\prime}_{ea}}{\sigma ^e}_b({q^{ab}} + {\rm{i}}{\varepsilon ^{ab}}).$$ f is four times the complex Gauss curvature [425] of \({\mathcal S}\), by means of which the whole curvature \({f^A}_{Bcd}\) can be characterized: \({f^A}_{Bcd} = - {i \over 4}f{\gamma ^A}_B{\varepsilon _{cd}}\). If the spacetime is space and time orientable, at least on an open neighborhood of \({\mathcal S}\), then the normals ta and va can be chosen to be globally well defined, and hence, \(N{\mathcal S}\) is globally trivializable and the imaginary part of f is a total divergence of a globally well-defined vector field.
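A quick consistency check of these invariants (an illustration added here, using only flat-spacetime facts and the conventions fixed above): for a metric two-sphere of radius r in Minkowski spacetime one may choose the boost gauge so that \({A_e} = 0\), and the shears vanish, so that
$$f = {}^{\mathcal S}R = {2 \over {{r^2}}},\qquad \theta = {2 \over r},\quad \theta {\prime} = - {1 \over r},\qquad F = f + \theta \theta {\prime} = {2 \over {{r^2}}} - {2 \over {{r^2}}} = 0,$$
as it must be, since \({F^a}_{bcd}\) is the pullback to \({\mathcal S}\) of the (here vanishing) spacetime curvature.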
An interesting decomposition of the SO(1, 1) connection one-form Ae, i.e., the vertical part of the connection δe, was given by Liu and Yau [338]: There are real functions α and γ, unique up to additive constants, such that \({A_e} = {\varepsilon _e}^f{\delta _f}\alpha + {\delta _e}\gamma\). α is globally defined on \({\mathcal S}\), but in general γ is defined only on the local trivialization domains of \(N{\mathcal S}\) that are homeomorphic to ℝ2. It is globally defined if \({H^1}({\mathcal S}) = 0\). In this decomposition α is the boost-gauge-invariant part of Ae, while γ represents its gauge content. Since \({\delta ^e}{A_e} = {\delta ^e}{\delta _e}\gamma\), the 'Coulomb-gauge condition' \({\delta ^e}{A_e} = 0\) uniquely fixes Ae (see also Section 10.4.1). By the Gauss-Bonnet theorem one has \(\oint\nolimits_{\mathcal S} {f\,d{\mathcal S} =} \oint\nolimits_{\mathcal S} {^{\mathcal S}Rd{\mathcal S} = 8\pi (1 - g)}\), where g is the genus of \({\mathcal S}\). Thus, geometrically the connection δe is rather poor, and can be considered as a part of the 'universal structure of \({\mathcal S}\)'. On the other hand, the connection Δe is much richer, and, in particular, the invariant F carries information on the mass aspect of the gravitational 'field'. The two-surface data for charge-type quasi-local quantities (i.e., for two-surface observables) are the universal structure (i.e., the intrinsic metric qab, the projection \(\Pi _b^a\) and the connection δe) and the extrinsic curvature tensor \({Q^a}_{eb}\).

Curvature identities

The complete decomposition of ΔAA′λB into its irreducible parts gives ΔAA′λA, the Dirac-Witten operator, and \({{\mathcal T}_{E{\prime}EA}}^B{\lambda _B}: = {\Delta _{E{\prime}(E}}{\lambda _{A)}} + {1 \over 2}{\gamma _{EA}}{\gamma ^{CD}}{\Delta _{E{\prime}C}}{\lambda _D}\), the two-surface twistor operator. The former is essentially the anti-symmetric part ΔA′[AλB], the latter is the symmetric and (with respect to the complex metric γAB) trace-free part of the derivative. (The trace \({\gamma ^{AB}}{\Delta _{{A\prime}A}}{\lambda _B}\) can be shown to be the Dirac-Witten operator, too.) A Sen-Witten-type identity for these irreducible parts can be derived. Taking its integral one has $$\oint\nolimits_{\mathcal S} {{{\overline \gamma}^{A{\prime}B{\prime}}}[({\Delta _{A{\prime}A}}{\lambda ^A})({\Delta _{B{\prime}B}}{\mu ^B}) + ({{\mathcal T}_{A{\prime}CD}}^E{\lambda _E})({{\mathcal T}_{B{\prime}}}^{CDF}{\mu _F})]\;\;d{\mathcal S}} = - {\textstyle{{\rm{i}} \over 2}}\oint\nolimits_{\mathcal S} {{\lambda ^A}{\mu ^B}{F_{A\,Bcd}}},$$ where λA and μA are two arbitrary spinor fields on \({\mathcal S}\), and the right-hand side is just the charge integral of the curvature \({F^A}_{Bcd}\) on \({\mathcal S}\).

The GHP formalism

A GHP spin frame on the two-surface \({\mathcal S}\) is a normalized spinor basis \(\varepsilon _{\rm{A}}^A: = \{{o^A},\,{\iota ^A}\}, \, {\bf{A}} = 0,1\), such that the complex null vectors \({m^a}: = {o^A}{{\bar \iota}^{{A\prime}}}\) and \({{\bar m}^a}: = {\iota ^A}{{\bar o}^{{A\prime}}}\) are tangent to \({\mathcal S}\) (or, equivalently, the future-pointing null vectors la := oAōA′ and \({n^a}: = {\iota ^A}{{\bar \iota}^{{A\prime}}}\) are orthogonal to \({\mathcal S}\)). Note, however, that in general a GHP spin frame can be specified only locally, but not globally on the whole \({\mathcal S}\). This fact is connected with the nontriviality of the tangent bundle \(T{\mathcal S}\) of the two-surface.
For example, on the two-sphere every continuous tangent vector field must have a zero, and hence, in particular, the vectors ma and \({{\bar m}^a}\) cannot form a globally-defined basis on \({\mathcal S}\). Consequently, the GHP spin frame cannot be globally defined either. The only closed orientable two-surface with a globally-trivial tangent bundle is the torus. Fixing a GHP spin frame \(\{\varepsilon _{\rm{A}}^A\}\) on some open \(U \subset {\mathcal S}\), the components of the spinor and tensor fields on U will be local representatives of cross sections of appropriate complex line bundles E(p, q) of scalars of type (p, q) [209, 425]: A scalar ϕ is said to be of type (p, q) if, under the rescaling \({o^A} \mapsto \lambda {o^A}\), \({\iota ^A} \mapsto {\lambda ^{- 1}}{\iota ^A}\) of the GHP spin frame with some nowhere-vanishing complex function λ: U → ℂ, the scalar transforms as \(\phi \mapsto {\lambda ^p}{{\bar \lambda}^q}\phi\). For example, \(\rho: = {\theta _{ab}}{m^a}{{\bar m}^b} = - {1 \over 2}\theta\), \({\rho \prime}: = \theta _{ab}{\prime}{m^a}{{\bar m}^b} = - {1 \over 2}{\theta \prime}\), \(\sigma := {\theta _{ab}}{m^a}{m^b} = {\sigma _{ab}}{m^a}{m^b}\) and \({\sigma \prime}: = \theta _{ab}{\prime}{{\bar m}^a}{{\bar m}^b}\) are of type (1,1), (−1, −1), (3, −1), and (−3, 1), respectively. The components of the Weyl and Ricci spinors, \({\psi _0}: = {\psi _{ABCD}}{o^A}{o^B}{o^C}{o^D},{\psi _1}: = {\psi _{ABCD}}{o^A}{o^B}{o^C}{\iota ^D},\,{\psi _2}: = {\psi _{ABCD}}{o^A}{o^B}{\iota ^C}{\iota ^D},\, \ldots, \,{\phi _{00}}: = {\phi _{A{B\prime}}}{o^A}{{\bar o}^{{B\prime}}},\,{\phi _{01}}: = {\phi _{A{B\prime}}}{o^A}{{\bar \iota}^{{B\prime}}},\, \ldots\), etc., also have definite (p, q)-type. In particular, Λ:= R/24 has type (0, 0). A global section of E(p, q) is a collection of local cross sections {(U, ϕ), (U′, ϕ′), …} such that {U,U′,…} forms a covering of \({\mathcal S}\) and on the nonempty overlappings, e.g., on U ⋂ U′, the local sections are related to each other by \(\phi = {\psi ^p}{{\bar \psi}^q}{\phi \prime}\), where ψ: U ⋂ U′ → ℂ is the transition function between the GHP spin frames: oA = ψo′A and ιA = ψ−1ι′A. The connection δe defines a connection ðe on the line bundles E(p,q) [209, 425]. The usual edth operators, ð and ð′, are just the directional derivatives ð:= maða and \({\eth\prime}: = {{\bar m}^a}{\eth_a}\) on the domain \(U \subset {\mathcal S}\) of the GHP spin frame \(\{\varepsilon _{\bf{A}}^A\}\). These locally-defined operators yield globally-defined differential operators, denoted also by ð and ð′, on the global sections of E(p, q). It might be worth emphasizing that the GHP spin coefficients β and β′, which do not have definite (p, q)-type, play the role of the two components of the connection one-form, and are built both from the connection one-form for the intrinsic Riemannian geometry of \(({\mathcal S},\,{q_{ab}})\) and the connection one-form Ae in the normal bundle. ð and ð′ are elliptic differential operators, thus, their global properties, e.g., the dimension of their kernel, are connected with the global topology of the line bundle they act on, and, in particular, with the global topology of \({\mathcal S}\). These properties are discussed in [198] in general, and in [177, 58, 490] for spherical topology.

Irreducible parts of the derivative operators

Using the projection operators \({\pi ^{\pm A}}_B: = {1 \over 2}(\delta _B^A \pm {\gamma ^A}_B)\), the irreducible parts \({\Delta _{A{\prime}A}}{\lambda ^A}\) and \({{\mathcal T}_{E{\prime}EA}}^B{\lambda _B}\) can be decomposed further into their right-handed and left-handed parts.
In the GHP formalism these chiral irreducible parts are $$\begin{array}{*{20}c} {- {\Delta ^ -}\lambda : = \;\eth{\lambda _1} + \rho {\prime}{\lambda _0},} & {{\Delta ^ +}\lambda : = \;\eth{\prime}{\lambda _0} + \rho {\lambda _1},} \\ {{{\mathcal T}^ -}\lambda : = \;\eth{\lambda _0} + \sigma {\lambda _1},} & {- {{\mathcal T}^ +}\lambda : = \;\eth{\prime}{\lambda _1} + \sigma {\prime}{\lambda _0},} \\ \end{array}$$ where λ:= (λ0,λ1) and the spinor components are defined by λA =: λ1oA − λ0ιA. The various first-order linear differential operators acting on spinor fields, e.g., the two-surface twistor operator, the holomorphy/antiholomorphy operators or the operators whose kernel defines the asymptotic spinors of Bramson [106], are appropriate direct sums of these elementary operators. Their global properties under various circumstances are studied in [58, 490, 496].

SO(1, 1)-connection one-form versus anholonomicity

Obviously, all the structures we have considered can be introduced on the individual surfaces of one or two-parameter families of surfaces, as well. In particular [246], let the two-surface \({\mathcal S}\) be considered as the intersection \({{\mathcal N}^ +} \cap {{\mathcal N}^ -}\) of the null hypersurfaces formed, respectively, by the outgoing and the ingoing light rays orthogonal to \({\mathcal S}\), and let the spacetime (or at least a neighborhood of \({\mathcal S}\)) be foliated by two one-parameter families of smooth hypersurfaces {ν+ = const.} and {ν− = const.}, where ν±: M → ℝ, such that \({{\mathcal N}^ +} = \{{v_ +} = 0\}\) and \({{\mathcal N}^ -} = \{{v_ -} = 0\}\). One can form the two normals, n±a:= ∇aν±, which are null on \({{\mathcal N}^ +}\) and \({{\mathcal N}^ -}\), respectively. Then we can define \({\beta _{\pm e}}: = ({\Delta _e}{n_{\pm a}})n_ \mp ^a\), for which β+e + β−e = Δen2, where \({n^2}: = {g_{ab}}n_ + ^an_ - ^b\). (If n2 is chosen to be 1 on \({\mathcal S}\), then β−e = −β+e is precisely the SO(1, 1)-connection one-form Ae above.) Then the anholonomicity is defined by \({\omega _e}: = {1 \over {2{n^2}}}{[{n_ -},\,{n_ +}]^f}{q_{fe}} = {1 \over {2{n^2}}}({\beta _{+ e}} - {\beta _{- e}})\). Since ωe is invariant with respect to the rescalings ν+ ↦ exp(A)ν+ and ν− ↦ exp(B)ν− of the functions defining the foliations by those functions A, B: M → ℝ which preserve \({\nabla _{[a}}{n_{\pm b]}} = 0\), it was claimed in [246] that ωe depends only on \({\mathcal S}\). However, this implies only that ωe is invariant with respect to a restricted class of the change of the foliations, and that ωe is invariantly defined only by this class of the foliations rather than the two-surface. In fact, ωe does depend on the foliation: Starting with a different foliation defined by the functions \({{\bar v}_ +}: = \exp (\alpha){v_ +}\) and \({{\bar v}_ -}: = \exp (\beta){v_ -}\) for some α, β: M → ℝ, the corresponding anholonomicity \({{\bar \omega}_e}\) would also be invariant with respect to the restricted changes of the foliations above, but the two anholonomicities, ωe and \({{\bar \omega}_e}\), would be different: \({{\bar \omega}_e} - {\omega _e} = {1 \over 2}{\Delta _e}(\alpha - \beta)\). Therefore, the anholonomicity is a gauge-dependent quantity.

Standard situations to evaluate the quasi-local quantities

There are exact solutions to the Einstein equations and classes of special (e.g., asymptotically flat) spacetimes in which there is a commonly accepted definition of energy-momentum (or at least mass) and angular momentum.
In this section we review these situations and recall the definition of these 'standard' expressions.

Round spheres

If the spacetime is spherically symmetric, then a two-sphere, which is a transitivity surface of the rotation group, is called a round sphere. Then in a spherical coordinate system (t, r, θ, ϕ) the spacetime metric takes the form gab = diag(exp(2γ), − exp(2α), −r2, −r2 sin2 θ), where γ and α are functions of t and r. (Hence, r is called the area-coordinate.) Then, with the notation of Section 4.1, one obtains \({R_{abcd}}{\varepsilon ^{ab}}{\varepsilon ^{cd}} = {4 \over {{r^2}}}(1 - \exp (- 2\alpha))\). Based on the investigations of Misner, Sharp, and Hernandez [365, 267], Cahill and McVittie [122] found $$E(t,r): = {1 \over {8G}}{r^3}{R_{abcd}}{\varepsilon ^{ab}}{\varepsilon ^{cd}} = {r \over {2G}}(1 - {e^{- 2\alpha}})$$ to be an appropriate (and hence, suggested to be the general) notion of energy, the Misner-Sharp energy, contained in the two-sphere \({\mathcal S}: = \{t = const.,\,r = const.\}\). (For another expression of E(t, r) in terms of the norm of the Killing fields and the metric, see [577].) In particular, for the Reissner-Nordström solution GE(t, r) = m − e2/2r (which follows from the expression above if \({e^{- 2\alpha}} = 1 - 2m/r + {e^2}/{r^2}\) in geometrized units), while for the isentropic fluid solutions \(E(t,\,r) = 4\pi \int\nolimits_0^r {{r\prime^{2}}\mu (t,\,{r\prime})d{r\prime}}\), where m and e are the usual parameters of the Reissner-Nordström solution and μ is the energy density of the fluid [365, 267] (for the static solution, see, e.g., Appendix B of [240]). Using Einstein's equations, simple equations can be derived for the derivatives ∂tE(t, r) and ∂rE(t, r), and if the energy-momentum tensor satisfies the dominant energy condition, then ∂rE(t, r) > 0. Thus, E(t, r) is a monotonic function of r, provided r is the area-coordinate. Since, by spherical symmetry all the quantities with nonzero spin weight, in particular the shears σ and σ′, are vanishing and ψ2 is real, by the GHP form of Eqs. (4.3), (4.4) the energy function E(t, r) can also be written as $$E({\mathcal S}) = {1 \over G}{r^3}\left({{1 \over 4}{}^{\mathcal S}R + \rho \rho {\prime}} \right) = {1 \over G}{r^3}(- {\psi _2} + {\phi _{11}} + \Lambda) = \sqrt {{{{\rm{Area}}({\mathcal S})} \over {16\pi {G^2}}}} \left({1 + {1 \over {2\pi}}\oint\nolimits_{\mathcal S} {\rho \rho {\prime}\;d{\mathcal S}}} \right).$$ Any of these expressions is considered to be the 'standard' definition of the energy for round spheres.Footnote 4 The last of these expressions does not depend on whether r is an area-coordinate or not. \(E({\mathcal S})\) contains a contribution from the gravitational 'field' too. For example, for fluids it is not simply the volume integral of the energy density μ of the fluid, because that would be \(4\pi \int\nolimits_0^r {{r\prime^{2}}\exp (\alpha)\mu \,d{r\prime}}\). This deviation can be interpreted as the contribution of the gravitational potential energy to the total energy. Consequently, \(E({\mathcal S})\) is not a globally monotonic function of r, even if μ ≥ 0. For example, in the closed Friedmann-Robertson-Walker spacetime (where, to cover the whole three-space, r cannot be chosen to be the area-radius and \(r \in [0,\pi ]\)), \(E({\mathcal S})\) is increasing for r ∈ [0, π/2), taking its maximal value at r = π/2, and decreasing for r ∈ [π/2, π]. This example suggests a slightly more exotic spherically-symmetric spacetime. Its spacelike slice Σ will be assumed to be extrinsically flat, and its intrinsic geometry is the matching of two conformally flat metrics.
The first is a 'large' spherically-symmetric part of a t = const. hypersurface of the closed Friedmann-Robertson-Walker spacetime with the line element \(d{l^2} = \Omega _{{\rm{FRW}}}^2dl_0^2\), where \(dl_0^2\) is the line element for the flat three-space and \(\Omega _{{\rm{FRW}}}^2: = B{(1 + {{{r^2}} \over {4{T^2}}})^{- 2}}\) with positive constants B and T2, and the range of the Euclidean radial coordinate r is [0, r0], where r0 ∈ (2T, ∞). It contains a maximal two-surface at r = 2T with round-sphere mass parameter \(M: = GE(2T) = {1 \over 2}T\sqrt B\). The scalar curvature is R = 6/BT2, and hence, by the constraint parts of the Einstein equations and by the vanishing of the extrinsic curvature, the dominant energy condition is satisfied. The other metric is the metric of a piece of a t = const. hypersurface in the Schwarzschild solution with mass parameter m (see [213]): \(d{{\bar l}^2} = \Omega _S^2d\bar l_0^2\), where \(\Omega _S^2: = {(1 + {m \over {2\bar r}})^4}\) and the Euclidean radial coordinate \({\bar r}\) runs from \({{\bar r}_0}\) to ∞, where \({{\bar r}_0} \in (0,\,m/2)\). In this geometry there is a minimal surface at \(\bar r = m/2\), the scalar curvature is zero, and the round-sphere energy is \(E(\bar r) = m/G\). These two metrics can be matched to obtain a differentiable metric with a Lipschitz-continuous derivative at the two-surface of the matching (where the scalar curvature has a jump), with arbitrarily large 'internal mass' M/G and arbitrarily small ADM mass m/G. (Obviously, the two metrics can be joined smoothly, as well, by an 'intermediate' domain between them.) Since this space looks like a big spherical bubble on a nearly flat three-plane, like the capital Greek letter Ω, for later reference we will call it an '\({\Omega _{M,m}}\)-spacetime'. Spherically-symmetric spacetimes admit a special vector field, called the Kodama vector field Ka, such that KaGab is divergence free [321]. In asymptotically flat spacetimes Ka is timelike in the asymptotic region, in stationary spacetimes it reduces to the Killing symmetry of stationarity (in fact, this is hypersurface-orthogonal), but, in general, it is not a Killing vector. However, by ∇a(GabKb) = 0, the vector field Sa := GabKb has a conserved flux on a spacelike hypersurface Σ. In particular, in the coordinate system (t, r, θ, ϕ) and in the line element given in the first paragraph above Ka = exp[−(α + γ)](∂/∂t)a. If Σ is a solid ball of radius r, then the flux of Sa is precisely the standard round-sphere expression (4.7) for the two-sphere ∂Σ [375]. An interesting characterization of the dynamics of the spherically-symmetric gravitational fields can be given in terms of the energy function E(t, r) given by (4.7) (or by (4.8)) (see, e.g., [578, 352, 250]). In particular, criteria for the existence and formation of trapped surfaces and for the presence and nature of the central singularity can be given by E(t, r). Other interesting quasi-locally-defined quantities are introduced and used to study nonlinear perturbations and backreaction in a wide class of spherically-symmetric spacetimes in [483]. For other applications of E(t, r) in cosmology see, e.g., [484, 130].

Small surfaces

In the literature there are two kinds of small surfaces. The first is that of the small spheres (both in the light cone of a point and in a spacelike hypersurface), introduced first by Horowitz and Schmidt [275], and the other is the concept of small ellipsoids in a spacelike hypersurface, considered first by Woodhouse in [313].
A small sphere in the light cone is a cut of the future null cone in the spacetime by a spacelike hypersurface, and the geometry of the sphere is characterized by data at the vertex of the cone. The sphere in a hypersurface consists of those points of a given spacelike hypersurface, whose geodesic distance in the hypersurface from a given point p, the center, is a small given value, and the geometry of this sphere is characterized by data at this center. Small ellipsoids are two-surfaces in a spacelike hypersurface with a more general shape. To define the first, let p ∈ M be a point, and ta a future-directed unit timelike vector at p. Let \({{\mathcal N}_p}: = \partial {I^ +}(p)\), the 'future null cone of p in M' (i.e., the boundary of the chronological future of p). Let la be the future pointing null tangent to the null geodesic generators of \({{\mathcal N}_p}\), such that, at the vertex p, lata = 1. With this condition we fix the scale of the affine parameter r on the different generators, and hence, by requiring r(p) = 0, we fix the parametrization completely. Then, in an open neighborhood of the vertex \(p,\,{{\mathcal N}_p} - \{p\}\) is a smooth null hypersurface, and hence, for sufficiently small r, the set \({\mathcal S_r}: = \{q \in M\vert r(q) = r\}\) is a smooth spacelike two-surface and is homeomorphic to S2. \({{\mathcal S}_r}\) is called a small sphere of radius r with vertex p. Note that the condition lata = 1 fixes the boost gauge, too. Completing la to get a Newman-Penrose complex null tetrad \(\{{l^a},{n^a},{m^a},{{\bar m}^a}\}\) such that the complex null vectors ma and \({{\bar m}^a}\) are tangent to the two-surfaces \({{\mathcal S}_r}\), the components of the metric and the spin coefficients with respect to this basis can be expanded as a series in r. If, in addition, the spinor constituent oA of la = oAōA′ is required to be parallelly propagated along la, then the tetrad becomes completely fixed, yielding the vanishing of several (combinations of the) spin coefficients. Then the GHP equations can be solved with any prescribed accuracy for the expansion coefficients of the metric qab on \({{\mathcal S}_r}\), the GHP spin coefficients ρ, σ, τ, ρ′, σ′ and β, and the higher-order expansion coefficients of the curvature in terms of the lower-order curvature components at p. Hence, the expression of any quasi-local quantity \({Q_{{{\mathcal S}_r}}}\) for the small sphere \({{\mathcal S}_r}\) can be expressed as a series of r, $${Q_{{{\mathcal S}_r}}} = \oint\nolimits_{\mathcal S} {\left({{Q^{\left(0 \right)}} + r{Q^{\left(1 \right)}} + {\textstyle{1 \over 2}}{r^2}{Q^{\left(2 \right)}} + \cdots} \right)\;\;d{\mathcal S}},$$ where the expansion coefficients Q(k) are still functions of the coordinates, \((\zeta, \,\bar \zeta)\) or (θ,ϕ), on the unit sphere \({\mathcal S}\). If the quasi-local quantity Q is spacetime-covariant, then the unit sphere integrals of the expansion coefficients Q(k) must be spacetime covariant expressions of the metric and its derivatives up to some finite order at p and the 'time axis' ta. The necessary degree of the accuracy of the solution of the GHP equations depends on the nature of \({Q_{{{\mathcal S}_r}}}\) and on whether the spacetime is Ricci-flat in the neighborhood of p or not.Footnote 5 These solutions of the GHP equations, with increasing accuracy, are given in [275, 313, 118, 494].
Obviously, we can calculate the small-sphere limit of various quasi-local quantities built from the matter fields in the Minkowski spacetime, as well. In particular [494], the small-sphere expressions for the quasi-local energy-momentum and the (anti-self-dual part of the) quasi-local angular momentum of the matter fields based on \({Q_{\mathcal S}}[{\bf{K}}]\), are, respectively, $$P_{{{\mathcal S}_r}}^{\underline A \underline {B{\prime}}} = {{4\pi} \over 3}{r^3}{T^{AA{\prime}\,BB{\prime}}}{t_{AA{\prime}}}\varepsilon _B^{\underline A}\bar \varepsilon _{B{\prime}}^{\underline {B{\prime}}} + {\mathcal O}\left({{r^4}} \right),$$ $$J_{{{\mathcal S}_r}}^{\underline A \underline B} = {{4\pi} \over 3}{r^3}{T_{AA{\prime}BB{\prime}}}{t^{AA{\prime}}}\left({r{t^{B{\prime}E}}{{\textstyle\varepsilon} ^{BF}}\varepsilon _{\left(E \right.}^{\underline A}\varepsilon _{\left. F \right)}^{\underline B}} \right) + {\mathcal O}\,({r^5}),$$ where \(\{{\mathcal E}_{\underline A}^A\}, \,\underline A = 0,\,1\), is the 'Cartesian spin frame' at p and the origin of the Cartesian coordinate system is chosen to be the vertex p. Here \(K_a^{\underline A \,{{\underline B}\prime}} = {\mathcal E}_A^{\underline A}\bar {\mathcal E}_{{A\prime}}^{{{\underline B}\prime}}\) can be interpreted as the translation one-forms, while \(K_a^{\underline A \,\underline B} = r{t_{{A\prime}}}^E{\mathcal E}_{(E}^{\underline A}{\mathcal E}_{A)}^{\underline B}\) is an average on the unit sphere of the boost-rotation Killing one-forms that vanish at the vertex p. Thus, \(P_{{{\mathcal S}_r}}^{\underline A \,{{\underline B}\prime}}\) and \(J_{{{\mathcal S}_r}}^{\underline A \,\underline B}\) are the three-volume times the energy-momentum and angular momentum density with respect to p, respectively, that the observer with four-velocity ta sees at p. Interestingly enough, a simple dimensional analysis already shows the structure of the leading terms in a large class of quasi-local spacetime covariant energy-momentum and angular momentum expressions. In fact, if \({Q_{\mathcal S}}\) is any coordinate-independent quasi-local quantity built from the first derivatives ∂μgαβ of the spacetime metric, then in its expansion the difference of the power of r and the number of the derivatives in every term must be one, i.e., it must have the form $$\begin{array}{*{20}c} {{Q_{{{\mathcal S}_r}}} = {Q_2}[\partial g]\;{r^2}+{Q_3}\left[ {{\partial ^2}g,{{(\partial g)}^2}} \right]\;{r^3} + {Q_4}\left[ {{\partial ^3}g,({\partial ^2}g)\;(\partial g),{{(\partial g)}^3}} \right]\;{r^4} +} \\ {+ {Q_5}\left[ {{\partial ^4}g,({\partial ^3}g)\;(\partial g),{{({\partial ^2}g)}^2},({\partial ^2}g)\;{{(\partial g)}^2},{{(\partial g)}^4}} \right]\;{r^5} + \ldots,} \\ \end{array}$$ where Qi[A, B, …], i = 2, 3, …, are scalars. They are polynomial expressions of ta, gab and εabcd at the vertex p, and they depend linearly on the tensors that are constructed at p from \({g_{\alpha \beta}},\,{g^{\alpha \beta}}\) and linearly on the coordinate-dependent quantities A, B, …. Since there is no nontrivial tensor built from the first derivative ∂μgαβ and gαβ, the leading term is of order r3. Its coefficient \({Q_3}[{\partial ^2}g,{(\partial g)^2}]\) must be a linear expression of Rab and Cabcd, and polynomial in ta, gab and εabcd.
In particular, if \({Q_{\mathcal S}}\) is to represent energy-momentum with generator Kc at p, then the leading term must be $${Q_{{{\mathcal S}_r}}}[{\bf{K}}] = {r^3}\left[ {a\left({{G_{ab}}{t^a}{t^b}} \right){t_c} + bR{t_c} + c\left({{G_{ab}}{t^a}P_c^b} \right)} \right]{K^c} + {\mathcal O}\left({{r^4}} \right)$$ for some unspecified constants a, b, and c, where \(P_b^a: = \delta _b^a - {t^a}{t_b}\), the projection to the subspace orthogonal to ta. If, in addition to the coordinate-independence of \({Q_{\mathcal S}}\), it is Lorentz-covariant, i.e., it does not, for example, depend on the choice for a normal to \({\mathcal S}\) (e.g., in the small-sphere approximation on ta) intrinsically, then the different terms in the above expression must depend on the boost gauge of the external observer ta in the same way. Therefore, a = c, in which case the first and the third terms can in fact be written as r3 ataGabKb. Then, comparing Eq. (4.11) with Eq. (4.9), we see that a = −1/(6G), and hence the term r3 bRtaKa would have to be interpreted as the contribution of the gravitational 'field' to the quasi-local energy-momentum of the matter + gravity system. However, this contributes only to energy, but not to linear momentum in any frame defined by the observer ta, even in a general spacetime. This seems to be quite unacceptable. Furthermore, even if the matter fields satisfy the dominant energy condition, \({Q_{{{\mathcal S}_r}}}\) given by Eq. (4.11) can be negative, even for c = a, unless b = 0. Thus, in the leading r3 order in nonvacuum, any coordinate and Lorentz-covariant quasi-local energy-momentum expression which is nonspacelike and future pointing, should be proportional to the energy-momentum density of the matter fields seen by the observer ta times the Euclidean volume of the three-ball of radius r. No contribution from the gravitational 'field' is expected at this order. In fact, this result is compatible the with the principle of equivalence, and the particular results obtained in the relativistically corrected Newtonian theory (considered in Section 3.1.1) and in the weak field approximation (see Sections 4.2.5 and 7.1.1 below). Interestingly enough, even for a timelike Killing field Ke, the well known expression of Komar does not satisfy this criterion. (For further discussion of Komar's expression see also Section 12.1.) If the neighborhood of p is vacuum, then the r3-order term is vanishing, and the fourth-order term must be built from ∇eCabcd. However, the only scalar polynomial expression of ta, gab, εabcd, ∇eCabcd and the generator vector Ka, depending linearly on the latter two, is the zero tensor field. Thus, the r4-order term in vacuum is also vanishing. At the fifth order the only nonzero terms are quadratic in the various parts of the Weyl tensor, yielding $${Q_{{{\mathcal S}_r}}}[{\bf{K}}] = {r^5}\;[(a{E_{ab}}{E^{ab}} + b{H_{ab}}{H^{ab}} + c{E_{ab}}{H^{ab}}){t_c} + d{E_{ae}}{H^e}_b{\varepsilon ^{ab}}_c]\;{K^c} + {\mathcal O}\;({r^6})$$ for constants a, b, c, and d, where Eab: = Caebftetf is the electric part and \({H_{ab}}: = {\ast} {C_{aebf}}{t^e}{t^f}: = {1 \over 2}{\varepsilon _{ae}}^{cd}{C_{cdbf}}{t^e}{t^f}\) is the magnetic part of the Weyl curvature, and εabc:=εabcdtd is the induced volume 3-form. 
However, using the identities CabcdCabcd = 8(EabEab − HabHab), Cabcd * Cabcd = 16EabHab, 4Tabcdtatbtctd = EabEab + HabHab and \(2{T_{abcd}}{t^a}{t^b}{t^c}P_e^d = {E_{ab}}{H^a}_c{\varepsilon ^{bc}}_e\), we can rewrite the above formula to be $$\begin{array}{*{20}c} {{Q_{{{\mathcal S}_r}}}[{\bf{K}}] = {r^5}\;\left[ {\left({2(a + b){T_{abcd}}{t^a}{t^b}{t^c}{t^d} + {\textstyle{1 \over {16}}}(a - b){C_{abcd}}{C^{abcd}} +} \right.} \right.\quad \quad \quad \quad \quad \quad \quad \quad} \\ {\left. {\left. {+ {\textstyle{1 \over {16}}}c{C_{abcd}} {\ast} {C^{abcd}}} \right){t_e} + 2d{T_{abcd}}{t^a}{t^b}{t^c}P_e^d} \right]\;{K^e} + {\mathcal O}\;({r^6}).} \\ \end{array}$$ Again, if \({Q_{\mathcal S}}\) does not depend on ta intrinsically, then d = (a + b), in which case the first and the fourth terms together can be written into the Lorentz covariant form 2r5 dTabcdtatbtcKd. In a general expression the curvature invariants CabcdCabcd and Cabcd * Cabcd may be present. Since, however, Eab and Hab at a given point are independent, these invariants can be arbitrarily large positive or negative, and hence, for a ≠ b or c ≠ 0 the quasi-local energy-momentum could not be future pointing and nonspacelike. Therefore, in vacuum in the leading r5 order any coordinate and Lorentz-covariant quasi-local energy-momentum expression, which is nonspacelike and future pointing, must be proportional to the Bel-Robinson 'momentum' Tabcdtatbtc. Obviously, the same analysis can be repeated for any other quasi-local quantity. For the energy-momentum, \({Q_{\mathcal S}}\) has the structure \(\oint\nolimits_{\mathcal S} {\mathcal Q} ({\partial _\mu}{g_{\alpha \beta}})\,d{\mathcal S}\), for angular momentum it is \(\oint\nolimits_{\mathcal S} {\mathcal Q} ({\partial _\mu}{g_{\alpha \beta}})r\, d{\mathcal S}\), while the area of \({\mathcal S}\) is \(\oint\nolimits_{\mathcal S} {d{\mathcal S}}\). Therefore, the leading term in the expansion of the angular momentum is r4 and r6 order in nonvacuum and vacuum with the energy-momentum and the Bel-Robinson tensors, respectively, while the first nontrivial correction to the area 4πr2 is of order r4 and r6 in nonvacuum and vacuum, respectively. On the small geodesic sphere \({{\mathcal S}_r}\) of radius r in the given spacelike hypersurface Σ one can introduce the complex null tangents ma and \({{\bar m}^a}\) above, and if ta is the future-pointing unit normal of Σ and va the outward directed unit normal of \({{\mathcal S}_r}\) in Σ, then we can define la := ta + va and 2na:= ta − va. Then \(\{{l^a},{n^a},{m^a},{{\bar m}^a}\}\) is a Newman-Penrose complex null tetrad, and the relevant GHP equations can be solved for the spin coefficients in terms of the curvature components at p. The small ellipsoids are defined as follows [313]. If f is any smooth function on Σ with a nondegenerate minimum at p ∈ Σ with minimum value f(p) = 0, then, at least on an open neighborhood U of p in Σ, the level surfaces \({{\mathcal S}_r}: = \{q \in \Sigma |2f(q) = {r^2}\}\) are smooth compact two-surfaces homeomorphic to S2. Then, in the r → 0 limit, the surfaces \({{\mathcal S}_r}\) look like small nested ellipsoids centered at p. The function f is usually 'normalized' so that habDaDbf|p = −3. A slightly different framework for calculations in small regions was used in [327, 170, 235].
Instead of the Newman-Penrose (or the GHP) formalism and the spin coefficient equations, holonomic (Riemann or Fermi type normal) coordinates on an open neighborhood U of a point p ∈ M or a timelike curve γ are used, in which the metric, as well as the Christoffel symbols on U, are expressed by the coordinates on U and the components of the Riemann tensor at p or on γ. In these coordinates and the corresponding frames, the various pseudotensorial and tetrad expressions for the energy-momentum have been investigated. It has been shown that a quadratic expression of these coordinates with the Bel-Robinson tensor as their coefficient appears naturally in the local conservation law for the matter energy-momentum tensor [327]; the Bel-Robinson tensor can be recovered as some 'double gradient' of a special combination of the Einstein and the Landau-Lifshitz pseudotensors [170]; Møller's tetrad expression, as well as certain combinations of several other classical pseudotensors, yield the Bel-Robinson tensor [473, 470, 471]. In the presence of some non-dynamical (background) metric an 11-parameter family of combinations of the classical pseudotensors exists, which, in vacuum, yields the Bel-Robinson tensor [472, 474]. (For this kind of investigation see also [465, 468, 466, 467, 469]). In [235] a new kind of approximate symmetries, namely approximate affine collineations, are introduced both near a point and a world line, and used to introduce Komar-type 'conserved' currents. (For a readable text on the non-Killing type symmetries see, e.g., [233].) These symmetries turn out to yield a nontrivial gravitational contribution to the matter energy-momentum, even in the leading r3 order.

Large spheres near spatial infinity

Near spatial infinity we have the a priori 1/r and 1/r2 falloff for the three-metric hab and extrinsic curvature χab, respectively, and both the evolution equations of general relativity and the conservation equation \({T^{ab}}_{;b} = 0\) for the matter fields preserve these conditions. The spheres \({{\mathcal S}_r}\) of coordinate radius r in Σ are called large spheres if the values of r are large enough, such that the asymptotic expansions of the metric and extrinsic curvature are legitimate.Footnote 6 Introducing some coordinate system, e.g., the complex stereographic coordinates, on one sphere and then extending that to the whole Σ along the normals va of the spheres, we obtain a coordinate system \((r,\zeta, \,\bar \zeta)\) on Σ. Let \(\varepsilon _{\bf{A}}^A = \{{o^A},{\iota ^A}\}, \, {\bf{A}} = 0,\, 1\), be a GHP spinor dyad on Σ adapted to the large spheres in such a way that ma := oAῑA′ and \({{\bar m}^a} = {\iota ^A}{{\bar o}^{{A\prime}}}\) are tangent to the spheres and \({t^a} = {1 \over 2}{o^A}{{\bar o}^{{A\prime}}} + {\iota ^A}{{\bar \iota}^{{A\prime}}}\), the future directed unit normal of Σ. These conditions fix the spinor dyad completely, and, in particular, \({v^a} = {1 \over 2}{o^A}{{\bar o}^{{A\prime}}} - {\iota ^A}{{\bar \iota}^{{A\prime}}}\), the outward directed unit normal to the spheres tangent to Σ. The falloff conditions yield that the spin coefficients tend to their flat spacetime value as 1/r2 and the curvature components to zero like 1/r3. Expanding the spin coefficients and curvature components as a power series of 1/r, one can solve the field equations asymptotically (see [65, 61] for a different formalism). However, in most calculations of the large sphere limit of the quasi-local quantities, only the leading terms of the spin coefficients and curvature components appear.
Thus, it is not necessary to solve the field equations for their second or higher-order nontrivial expansion coefficients. Using the flat background metric 0hab and the corresponding derivative operator 0De we can define a spinor field 0λA to be constant if 0De0λA = 0. Obviously, the constant spinors form a two-complex-dimensional vector space. Then, by the falloff properties \({D_{e0}}{\lambda _A} = {\mathcal O}({r^{- 2}})\). Thus, we can define the asymptotically constant spinor fields to be those λA that satisfy \({D_e}{\lambda _A} = {\mathcal O}({r^{- 2}})\), where De is the intrinsic Levi-Civita derivative operator on Σ. Note that this implies that, with the notation of Eq. (4.6), all the chiral irreducible parts, \({\Delta ^ +}\lambda, \,{\Delta ^ -}\lambda, \,{{\mathcal T}^ +}\lambda\), and \({{\mathcal T}^ -}\lambda\) of the derivative of the asymptotically constant spinor field λA are \({\mathcal O}({r^{- 2}})\). Large spheres near null infinity Let the spacetime be asymptotically flat at future null infinity in the sense of Penrose [413, 414, 415, 426] (see also [208]), i.e., the physical spacetime can be conformally compactified by an appropriate boundary ℐ+. Then future null infinity ℐ+ will be a null hypersurface in the conformally rescaled spacetime. Topologically it is \({\rm{\mathbb R}} \times {S^2}\), and the conformal factor can always be chosen such that the induced metric on the compact spacelike slices of ℐ+ is the metric of the unit sphere. Fixing such a slice \({{\mathcal S}_0}\) (called 'the origin cut of ℐ+') the points of ℐ+ can be labeled by a null coordinate, namely the affine parameter u ∈ ℝ along the null geodesic generators of ℐ+ measured from \({{\mathcal S}_0}\) and, for example, the familiar complex stereographic coordinates \((\zeta, \bar \zeta) \in {S^2}\), defined first on the origin cut \({{\mathcal S}_0}\) and then extended in a natural way along the null generators to the whole ℐ+. Then any other cut \({\mathcal S}\) of ℐ+ can be specified by a function \(u = f(\zeta, \bar \zeta)\). In particular, the cuts \({{\mathcal S}_u}: = \{u = {\rm{const}}.\}\) are obtained from \({{\mathcal S}_0}\) by a pure time translation. The coordinates \((u,\zeta, \bar \zeta)\) can be extended to an open neighborhood of ℐ+ in the spacetime in the following way. Let \({{\mathcal N}_u}\) be the family of smooth outgoing null hypersurfaces in a neighborhood of ℐ+, such that they intersect the null infinity just in the cuts \({{\mathcal S}_u}\), i.e., \({{\mathcal N}_u} \cap {{\mathscr I}^ +} = {{\mathcal S}_u}\). Then let r be the affine parameter in the physical metric along the null geodesic generators of \({{\mathcal N}_u}\). Then \((u,r,\zeta, \bar \zeta)\) forms a coordinate system. The u = const., r = const. two-surfaces \({{\mathcal S}_{u,r}}\) (or simply \({{\mathcal S}_r}\) if no confusion can arise) are spacelike topological two-spheres, which are called large spheres of radius r near future null infinity. Obviously, the affine parameter r is not unique, its origin can be changed freely: \(\bar r: = r + g(u,\zeta, \bar \zeta)\) is an equally good affine parameter for any smooth g. Imposing certain additional conditions to rule out such coordinate ambiguities we arrive at a 'Bondi-type coordinate system'.Footnote 7 In many of the large-sphere calculations of the quasi-local quantities the large spheres should be assumed to be large spheres not only in a general null, but in a Bondi-type coordinate system. 
For a detailed discussion of the coordinate freedom left at the various stages in the introduction of these coordinate systems, see, for example, [394, 393, 107]. In addition to the coordinate system, we need a Newman-Penrose null tetrad, or rather a GHP spinor dyad, \(\varepsilon _{\rm{A}}^A = \{{o^A},{\iota ^A}\}, \,{\rm{A = 0,1}}\), on the hypersurfaces \({{\mathcal N}_u}\). (Thus, boldface indices are referring to the GHP spin frame.) It is natural to choose oA such that la := oAōA′ be the tangent (∂/∂r)a of the null geodesic generators of \({{\mathcal N}_u}\), and oA itself be constant along la. Newman and Unti [394] chose ιA to be parallelly propagated along la. This choice yields the vanishing of a number of spin coefficients (see, for example, the review [393]). The asymptotic solution of the Einstein-Maxwell equations as a series of 1/r in this coordinate and tetrad system is given in [394, 179, 425], where all the nonvanishing spin coefficients and metric and curvature components are listed. In this formalism the gravitational waves are represented by the u-derivative \({{\dot \sigma}^0}\) of the asymptotic shear of the null geodesic generators of the outgoing null hypersurfaces \({{\mathcal N}_u}\). From the point of view of the large sphere calculations of the quasi-local quantities, the choice of Newman and Unti for the spinor basis is not very convenient. It is more natural to adapt the GHP spin frame to the family of the large spheres of constant 'radius' r, i.e., to require ma := oAῑA′ and \({{\bar m}^a} = {\iota ^A}{{\bar o}^{{A{\prime}}}}\) to be tangents of the spheres. This can be achieved by an appropriate null rotation of the Newman-Unti basis about the spinor oA. This rotation yields a change of the spin coefficients and the metric and curvature components. As far as the present author is aware, the rotation with the highest accuracy was done for the solutions of the Einstein-Maxwell system by Shaw [455]. In contrast to the spatial-infinity case, the 'natural' definition of the asymptotically constant spinor fields yields identically zero spinors in general [106]. Nontrivial constant spinors in this sense could exist only in the absence of the outgoing gravitational radiation, i.e., when \({{\dot \sigma}^0} = 0\). In the language of Section 4.1.7, this definition would be limr→∞rΔ+λ = 0, limr→∞ rΔ−λ = 0, \({\lim\nolimits_{r \rightarrow \infty}}r{{\mathcal T}^ +}\lambda = 0\) and \({\lim\nolimits_{r \rightarrow \infty}}r{{\mathcal T}^ -}\lambda = 0\). However, as Bramson showed [106], half of these conditions can be imposed. Namely, at future null infinity \({{\mathcal C}^ +}\lambda : = ({\Delta ^ +} \oplus {{\mathcal T}^ -})\lambda = 0\) (and at past null infinity \({{\mathcal C}^ -}\lambda : = ({\Delta ^ -} \oplus {{\mathcal T}^ +})\lambda = 0)\) can always be imposed asymptotically, and has two linearly-independent solutions \(\lambda _A^{\underline A},\underline A = 0,1\), on ℐ+ (or on ℐ−, respectively). The space \({\bf{S}}_\infty ^{\underline A}\) of its solutions turns out to have a natural symplectic metric \({\varepsilon _{\underline A \underline B}}\), and we refer to \(({\bf{S}}_\infty ^{\underline A},{\varepsilon _{\underline A \underline B}})\) as future asymptotic spin space. Its elements are called asymptotic spinors, and the equations \({\lim\nolimits_{r \rightarrow \infty}}r{{\mathcal C}^ \pm}\lambda = 0\), the future/past asymptotic twistor equations. 
At ℐ+ asymptotic spinors are the spinor constituents of the BMS translations: Any such translation is of the form \({K^{\underline A {{\underline A}{\prime}}}}\lambda _{\underline A}^A\bar \lambda _{{{\underline A}{\prime}}}^{{A{\prime}}} = {K^{\underline A {{\underline A}{\prime}}}}\lambda _A^1\bar \lambda _{\underline {{A{\prime}}}}^{{1{\prime}}}{\iota ^A}{{\bar \iota}^{{A{\prime}}}}\) for some constant Hermitian matrix \({K^{\underline A {{\underline A}{\prime}}}}\). Similarly, (apart from the proper supertranslation content) the components of the anti-self-dual part of the boost-rotation BMS vector fields are \(- \sigma _{\rm{i}}^{\underline A \underline B}\lambda _{\underline A}^1\lambda _{\underline B}^1\), where \(\sigma _{\rm{i}}^{\underline A \underline B}\) are the standard SU(2) Pauli matrices (divided by \(\sqrt 2)\) [496]. Asymptotic spinors can be recovered as the elements of the kernel of several other operators built from Δ+, Δ−, \({{\mathcal T}^ +}\), and \({{\mathcal T}^ -}\), too. In the present review we use only the fact that asymptotic spinors can be introduced as antiholomorphic spinors (see also Section 8.2.1), i.e., the solutions of \({{\mathcal H}^ -}\lambda : = ({\Delta ^ -} \oplus {{\mathcal T}^ -})\lambda = 0\) (and at past null infinity as holomorphic spinors), and as special solutions of the two-surface twistor equation \({\mathcal N}\lambda : = ({{\mathcal T}^ +} \oplus {{\mathcal T}^ -})\lambda = 0\) (see also Section 7.2.1). These operators, together with others reproducing the asymptotic spinors, are discussed in [496]. The Bondi-Sachs energy-momentum given in the Newman-Penrose formalism has already become its 'standard' form. It is the unit sphere integral on the cut \({\mathcal S}\) of a combination of the leading term \(\psi _2^0\) of the Weyl spinor component \({\psi _2}\), the asymptotic shear σ0 and its u-derivative, weighted by the first four spherical harmonics (see, for example, [393, 426]): $$P_{B\,S}^{\underline A \underline {B{\prime}}} = - {1 \over {4\pi G}}\oint {\left({\psi _2^0 + {\sigma ^0}{{\dot \bar \sigma}^0}} \right)\lambda _0^{\underline A}\bar \lambda _{0{\prime}}^{\underline {B{\prime}}}\;d{\mathcal S}},$$ where \(\lambda _0^{\underline A}: = \lambda _A^{\underline A}{o^A},\underline A = 0,1\), are the oA-component of the vectors of a spin frame in the space of the asymptotic spinors. (For the various realizations of these spinors see, e.g., [496].) The minimal assumptions on the physical Ricci tensor that already ensure that the Bondi-Sachs energy-momentum and Bondi's mass-loss are well defined are determined by Tafel [505]. The expression of the Bondi-Sachs energy-momentum in terms of the conformal factor is also given there. Similarly, the various definitions for angular momentum at null infinity could be rewritten in this formalism. Although there is no generally accepted definition for angular momentum at null infinity in general spacetimes, in stationary and in axi-symmetric spacetimes there is. The former is the unit sphere integral on the cut \({\mathcal S}\) of the leading term of the Weyl spinor component \({{\bar \psi}_{{1{\prime}}}}\), weighted by appropriate (spin-weighted) spherical harmonics: $${J^{\underline A \underline B}} = {1 \over {8\pi G}}\oint {\bar \psi _1^0,\lambda _0^{\underline A}\lambda _0^{\underline B}\,d{\mathcal S}}.$$ In particular, Bramson's expression also reduces to this 'standard' expression in the absence of the outgoing gravitational radiation [109]. 
If the spacetime is axi-symmetric, then the generally accepted definition of angular momentum is that of Komar with the numerical coefficient \({1 \over {16\pi G}}\) (rather than \({1 \over {8\pi G}}\)) and α = 0 in (3.15). This view is supported by the partial results of a quasi-local canonical analysis of general relativity given in [499], too. Instead of the Bondi-type coordinates above, one can introduce other 'natural' coordinates in a neighborhood of ℐ+. Such is the one based on the outgoing asymptotically-shear-free null geodesics [27]. While the Bondi-type coordinate system is based on the null geodesic generators of the outgoing null hypersurfaces \({{\mathcal N}_u}\), and hence, in the rescaled metric these generators are orthogonal to the cuts \({{\mathcal S}_u}\), the new coordinate system is based on the use of outgoing null geodesic congruences that extend to ℐ+ but are not orthogonal to the cuts of ℐ+ (and hence, in general, they have twist). The definition of the new coordinates \((u,r,\zeta, \bar \zeta)\) is analogous to that of the Bondi-type coordinates: \((u, \zeta, \bar \zeta)\) labels the intersection point of the actual geodesic and ℐ+, while r is the affine parameter along the geodesic. The tangent \({{\tilde l}^a}\) of this null congruence is asymptotically null rotated about na: In the NP basis \(\{{l^a},{n^a},{m^a},{{\bar m}^a}\}\) above \({{\tilde l}^a} = {l^a} + b{{\bar m}^a} + \bar b{m^a} + b\bar b{n^a}\), where \(b = - L(u,\zeta, \bar \zeta)/r + {\mathcal O}({r^{- 2}})\) and \(L = L(u,\zeta, \bar \zeta)\) is a complex valued function (with spin weight one) on ℐ+. Then Aronson and Newman show in [27] that if L is chosen to satisfy \(\eth L + L\dot L = {\sigma ^0}\), then the asymptotic shear of the congruence is, in fact, of order r−3, and by an appropriate choice for the other vectors of the NP basis many spin coefficients can be made zero. In this framework it is the function L that plays a role analogous to that of σ0, and, indeed, the asymptotic solution of the field equations is given in terms of L in [27]. This L can be derived from the solution Z of the good-cut equation, which, however, is not uniquely determined, but depends on four complex parameters: \(Z = Z({Z^{\underline a}},\zeta, \bar \zeta)\). It is this freedom that is used in [325, 326] to introduce the angular momentum at future null infinity (see Section 3.2.4). Further discussion of these structures, in particular their connection with the solutions of the good-cut equation and the H-space, as well as their applications, is given in [324, 325, 326, 5].

Other special situations

In the weak field approximation of general relativity [525, 36, 534, 426, 303] the gravitational field is described by a symmetric tensor field hab on Minkowski spacetime (\(({{\mathbb R}^4},g_{ab}^0)\)), and the dynamics of the field hab is governed by the linearized Einstein equations, i.e., essentially the wave equation. Therefore, the tools and techniques of the Poincaré-invariant field theories, in particular the Noether-Belinfante-Rosenfeld procedure outlined in Section 2.1 and the ten Killing vectors of the background Minkowski spacetime, can be used to construct the conserved quantities. It turns out that the symmetric energy-momentum tensor of the field hab is essentially the second-order term in the Einstein tensor of the metric \({g_{ab}}: = g_{ab}^0 + {h_{ab}}\).
Thus, in the linear approximation the field hab does not contribute to the global energy-momentum and angular momentum of the matter + gravity system, and hence these quantities have the form (2.5) with the linearized energy-momentum tensor of the matter fields. However, as we will see in Section 7.1.1, this energy-momentum and angular momentum can be re-expressed as a charge integral of the (linearized) curvature [481, 277, 426]. pp-waves spacetimes are defined to be those that admit a constant null vector field La, and they interpreted as describing pure plane-fronted gravitational waves with parallel rays. If matter is present, then it is necessarily pure radiation with wave-vector La, i.e., TabLb = 0 holds [478]. A remarkable feature of the pp-wave metrics is that, in the usual coordinate system, the Einstein equations become a two-dimensional linear equation for a single function. In contrast to the approach adopted almost exclusively, Aichelburg [8] considered this field equation as an equation for a boundary value problem. As we will see, from the point of view of the quasi-local observables this is a particularly useful and natural standpoint. If a pp-wave spacetime admits an additional spacelike Killing vector Ka with closed S1 orbits, i.e., it is cyclically symmetric too, then La and Ka are necessarily commuting and are orthogonal to each other, because otherwise an additional timelike Killing vector would also be admitted [485]. Since the final state of stellar evolution (the neutron star or black hole state) is expected to be described by an asymptotically flat, stationary, axisymmetric spacetime, the significance of these spacetimes is obvious. It is conjectured that this final state is described by the Kerr-Newman (either outer or black hole) solution with some well-defined mass, angular momentum and electric charge parameters [534]. Thus, axisymmetric two-surfaces in these solutions may provide domains, which are general enough but for which the quasi-local quantities are still computable. According to a conjecture by Penrose [418], the (square root of the) area of the event horizon provides a lower bound for the total ADM energy. For the Kerr-Newman black hole this area is \(4\pi (2{m^2} - {e^2} + 2m\sqrt {{m^2} - {e^2} - {a^2}})\). Thus, particularly interesting two-surfaces in these spacetimes are the spacelike cross sections of the event horizon [80]. There is a well-defined notion of total energy-momentum not only in the asymptotically flat, but even in the asymptotically anti-de Sitter spacetimes as well. This is the Abbott-Deser energy [1], whose positivity has also been proven under similar conditions that we had to impose in the positivity proof of the ADM energy [220]. (In the presence of matter fields, e.g., a self-interacting scalar field, the falloff properties of the metric can be weakened such that the 'charges' defined at infinity and corresponding to the asymptotic symmetry generators remain finite [265].) The conformal technique, initiated by Penrose, is used to give a precise definition of the asymptotically anti-de Sitter spacetimes and to study their general, basic properties in [42]. A comparison and analysis of the various definitions of mass for asymptotically anti-de Sitter metrics is given in [150]. 
Extending the spinorial proof [349] of the positivity of the total energy in asymptotically anti-de Sitter spacetime, Chruściel, Maerten and Tod [149] give an upper bound for the angular momentum and center-of-mass in terms of the total mass and the cosmological constant. (Analogous investigations show that there is a similar bound at the future null infinity of asymptotically flat spacetimes with no outgoing energy flux, provided the spacetime contains a constant-mean-curvature, hyperboloidal, initial-data set on which the dominant energy condition is satisfied. In this bound the role of the cosmological constant is played by the (constant) mean curvature of the hyperboloidal spacelike hypersurface [151].) Thus, it is natural to ask whether or not a specific quasi-local energy-momentum or angular momentum expression has the correct limit for large spheres in asymptotically anti-de Sitter spacetimes. On lists of criteria of reasonableness of the quasi-local quantities In the literature there are various, more or less ad hoc, 'lists of criteria of reasonableness' of the quasi-local quantities (see, for example, [176, 143]). However, before discussing them, it seems useful to first formulate some general principles that any quasi-local quantity should satisfy. In nongravitational physics the notions of conserved quantities are connected with symmetries of the system, and they are introduced through some systematic procedure in the Lagrangian and/or Hamiltonian formalism. In general relativity the total energy-momentum and angular momentum are two-surface observables, thus, we concentrate on them even at the quasi-local level. These facts motivate our three a priori expectations: The quasi-local quantities that are two-surface observables should depend only on the two-surface data, but they cannot depend, e.g., on the way that the various geometric structures on \({\mathcal S}\) are extended off the two-surface. There seems to be no a priori reason why the two-surface would have to be restricted to spherical topology. Thus, in the ideal case, the general construction of the quasi-local energy-momentum and angular momentum should work for any closed orientable spacelike two-surface. It is desirable to derive the quasi-local energy-momentum and angular momentum as the charge integral (Lagrangian interpretation) and/or as the value of the Hamiltonian on the constraint surface in the phase space (Hamiltonian interpretation). If they are introduced in some other way, they should have a Lagrangian and/or Hamiltonian interpretation. These quantities should correspond to the 'quasi-symmetries' of the two-surface, which quasisymmetries are special spacetime vector fields on the two-surface. In particular, the quasilocal energy-momentum should be expected to be in the dual of the space of the 'quasitranslations', and the angular momentum in the dual of the space of the 'quasi-rotations'. To see that these conditions are nontrivial, let us consider the expressions based on the linkage integral (3.15). \({L_{\mathcal S}}[{\bf{K}}]\) does not satisfy the first part of our first requirement. In fact, it depends on the derivative of the normal components of Ka in the direction orthogonal to \({\mathcal S}\) for any value of the parameter α. Thus, it depends not only on the geometry of \({\mathcal S}\) and the vector field Ka given on the two-surface, but on the way in which Ka is extended off the two-surface. 
Therefore, \({L_{\mathcal S}}[{\bf{K}}]\) is 'less quasi-local' than \({A_{\mathcal S}}[\omega ]\) or \({H_{\mathcal S}}[\lambda, \bar \mu ]\) that will be introduced in Sections 7.2.1 and 7.2.2, respectively. We will see that the Hawking energy satisfies our first requirement, but not the second and the third ones. The Komar integral (i.e., half of the linkage for α = 0) has the form of the charge integral of a superpotential, \({1 \over {16\pi G}}\oint\nolimits_{\mathcal S} {{\nabla ^{[a}}{K^{b]}}{1 \over 2}{\varepsilon _{abcd}}}\), i.e., it has a Lagrangian interpretation. The corresponding conserved Komar-current was defined by \(8\pi G{C^a}[{\bf{K}}]: = {G^a}_b{K^b} + {\nabla _b}{\nabla ^{[a}}{K^{b]}}\). However, its flux integral on some compact spacelike hypersurface with boundary \({\mathcal S}: = \partial \Sigma\) cannot be a Hamiltonian on the ADM phase space in general. In fact, it is $$\begin{array}{*{20}c} {{}_KH\;[{\bf{K}}]: = \int\nolimits_\Sigma {{C^a}[{\bf{K}}]\,{t_a}\;d\Sigma} \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \,\;} \\ {= \int\nolimits_\Sigma {(cN + {c_a}{N^a})\;d\Sigma + {1 \over {8\pi G}}\oint\nolimits_{\mathcal S} {{\upsilon _a}\left({{\chi ^a}_b{N^b} - {D^a}N + {1 \over {2N}}{{\dot N}^a}} \right)\;d{\mathcal S}}.}} \\ \end{array}$$ Here c and ca are, respectively, the Hamiltonian and momentum constraints of the vacuum theory, ta is the future-directed unit normal to Σ, va is the outward-directed unit normal to \({\mathcal S}\) in Σ, and N and Na are the lapse and shift part of Ka, respectively, defined by Ka =: Nta + Na. Thus, KH[K] is a well-defined function of the configuration and velocity variables (N, Na, hab) and (Ṅ, Ṅa, ḣab), respectively. However, since the velocity Ṅa cannot be expressed by the canonical variables (see e.g. [558, 63]), KH[K] can be written as a function on the ADM phase space only if the boundary conditions at ∂Σ ensure the vanishing of the integral of vaṄa/N. Pragmatic criteria Since in certain special situations there are generally accepted definitions for the energy-momentum and angular momentum, it seems reasonable to expect that in these situations the quasi-local quantities reduce to them. One half of the pragmatic criteria is just this expectation, and the other is a list of some a priori requirements on the behavior of the quasi-local quantities. One such list for the energy-momentum and mass, based mostly on [176, 143] and the properties of the quasi-local energy-momentum of the matter fields of Section 2.2, might be the following:
1.1 The quasi-local energy-momentum \(P_{\mathcal S}^{\underline a}\) must be a future-pointing nonspacelike vector (assuming that the matter fields satisfy the dominant energy condition on some Σ for which \({\mathcal S} = \partial \Sigma\), and maybe some form of the convexity of \({\mathcal S}\) should be required) ('positivity').
1.2 \(P_{\mathcal S}^{\underline a}\) must be zero iff D(Σ) is flat, and null iff D(Σ) has a pp-wave geometry with pure radiation ('rigidity').
1.3 \(P_{\mathcal S}^{\underline a}\) must give the correct weak field limit.
1.4 \(P_{\mathcal S}^{\underline a}\) must reproduce the ADM, Bondi-Sachs and Abbott-Deser energy-momenta in the appropriate limits ('correct large-sphere behaviour'). 
1.5 For small spheres \(P_{\mathcal S}^{\underline a}\) must give the expected results ('correct small sphere behaviour'): \({4 \over 3}\pi {r^3}{T^{ab}}{t_b}\) in nonvacuum and \(k{r^5}{T^{abcd}}{t_b}{t_c}{t_d}\) in vacuum for some positive constant k and the Bel-Robinson tensor Tabcd.
1.6 For round spheres \(P_{\mathcal S}^{\underline a}\) must yield the 'standard' Misner-Sharp round-sphere expression.
1.7 For marginally trapped surfaces the quasi-local mass \({m_{\mathcal S}}\) must be the irreducible mass \(\sqrt {{\rm{Area(}}{\mathcal S}{\rm{)/(16}}\pi {G^2})}\).
For a different view on the positivity of the quasi-local energy see [391]. Item 1.7 is motivated by the expectation that the quasi-local mass associated with the apparent horizon of a black hole (i.e., the outermost marginally-trapped surface in a spacelike slice) be just the irreducible mass [176, 143]. Usually, \({m_{\mathcal S}}\) is expected to be monotonically increasing in some appropriate sense [143]. For example, if \({{\mathcal S}_1} = \partial \Sigma\) for some achronal (and hence spacelike or null) hypersurface Σ in which \({{\mathcal S}_2}\) is a spacelike closed two-surface and the dominant energy condition is satisfied on Σ, then \({m_{{{\mathcal S}_1}}} \geq {m_{{{\mathcal S}_2}}}\) seems to be a reasonable expectation [176]. (However, see also Section 4.3.3.) A further, and, in fact, a related issue is the (post) Newtonian limit of the quasi-local mass expressions. In item 1.4 we expected, in particular, that the quasi-local mass tends to the ADM mass at spatial infinity. However, near spatial infinity the radiation and the dynamics of the fields and the geometry die off rapidly. Hence, in vacuum asymptotically flat spacetimes in the asymptotic regime the gravitational 'field' approaches the Newtonian one, and hence its contribution to the total energy of the system is similar to that of the negative definite binding energy [400, 199]. Therefore, it seems natural to expect that the quasi-local mass tends to the ADM mass as a monotonically decreasing function (see also Sections 3.1.1 and 12.3.3). In contrast to the energy-momentum and angular momentum of the matter fields on the Minkowski spacetime, the additivity of the energy-momentum (and angular momentum) is not expected. In fact, if \({{\mathcal S}_1}\) and \({{\mathcal S}_2}\) are two connected two-surfaces, then, for example, the corresponding quasi-local energy-momenta would belong to different vector spaces, namely to the dual of the space of the quasi-translations of the first and second two-surface, respectively. Thus, even if we consider the disjoint union \({{\mathcal S}_1} \cup {{\mathcal S}_2}\) to surround a single physical system, we can add the energy-momentum of the first to that of the second only if there is some physically/geometrically distinguished rule defining an isomorphism between the different vector spaces of the quasi-translations. Such an isomorphism would be provided for example by some naturally-chosen globally-defined flat background. However, as we discussed in Section 3.1.2, general relativity itself does not provide any background. The use of such a background would contradict the complete diffeomorphism invariance of the theory. Nevertheless, the quasi-local mass and the length of the quasi-local Pauli-Lubanski spin of different surfaces can be compared, because they are scalar quantities. 
Similarly, any reasonable quasi-local angular momentum expression \(J_{\mathcal S}^{\underline a \underline b}\) may be expected to satisfy the following:
2.1 \(J_{\mathcal S}^{\underline a \underline b}\) must give zero for round spheres.
2.2 For two-surfaces with zero quasi-local mass, the Pauli-Lubanski spin should be proportional to the (null) energy-momentum four-vector \(P_{\mathcal S}^{\underline a}\).
2.3 \(J_{\mathcal S}^{\underline a \underline b}\) must give the correct weak field limit.
2.4 \(J_{\mathcal S}^{\underline a \underline b}\) must reproduce the generally-accepted spatial angular momentum at spatial infinity, and in stationary and in axi-symmetric spacetimes it should reduce to the 'standard' expressions at the null infinity as well ('correct large-sphere behaviour').
2.5 For small spheres the anti-self-dual part of \(J_{\mathcal S}^{\underline a \underline b}\), defined with respect to the center of the small sphere (the 'vertex' in Section 4.2.2) is expected to give \({4 \over 3}\pi {r^3}{T_{cd}}{t^c}(r{\varepsilon ^{D(A}}{t^{B){D{\prime}}}})\) in nonvacuum and \(C{r^5}{T_{cdef}}{t^c}{t^d}{t^e}(r{\varepsilon ^{F(A}}{t^{B)F{\prime}}})\) in vacuum for some constant C ('correct small sphere behaviour').
Since there is no generally accepted definition for the angular momentum at null infinity, we cannot expect anything definite there in nonstationary, non-axi-symmetric spacetimes. Similarly, there are inequivalent suggestions for the center-of-mass at spatial infinity (see Sections 3.2.2 and 3.2.4). Incompatibility of certain 'natural' expectations As Eardley noted in [176], probably no quasi-local energy definition exists, which would satisfy all of his criteria. In fact, it is easy to see that this is the case. Namely, any quasi-local energy definition, which reduces to the 'standard' expression for round spheres cannot be monotonic, as the closed Friedmann-Robertson-Walker or the ΩM,m spacetimes show explicitly. The points where the monotonicity breaks down are the extremal (maximal or minimal) surfaces, which represent an event horizon in the spacetime. Thus, one may argue that since the event horizon hides a portion of spacetime, we cannot know the details of the physical state of the matter + gravity system behind the horizon. Hence, in particular, the monotonicity of the quasi-local mass may be expected to break down at the event horizon. However, although for stationary systems (or at the moment of time symmetry of a time-symmetric system) the event horizon corresponds to an apparent horizon (or to an extremal surface, respectively), for general nonstationary systems the concepts of the event and apparent horizons deviate. Thus, it does not seem possible to formulate the causal argument of Section 4.3.2 in the hypersurface Σ. Actually, the root of the nonmonotonicity is the fact that the quasi-local energy is a two-surface observable in the sense of requirement 1 in Section 4.3.1 above. This does not mean, of course, that in certain restricted situations the monotonicity ('local monotonicity') could not be proven. This local monotonicity may be based, for example, on Lie dragging of the two-surface along some special spacetime vector field. If the quasi-local mass should, in fact, tend to the ADM mass as a monotonically decreasing function in the asymptotic region of asymptotically flat spacetimes, then neither item 1.6 nor 1.7 can be expected to hold. 
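To make the closed Friedmann-Robertson-Walker example explicit, here is a minimal sketch, assuming the standard round-sphere expression of Section 4.2.1, which for these spacetimes reduces to \(E = {{4\pi} \over 3}\mu {r^3}\) with the areal radius r and the energy density μ. At the moment of time symmetry (maximal expansion) of the closed, dust-filled universe of radius a, the Friedmann constraint gives μ = 3/(8πGa2), and the sphere of coordinate radius χ has areal radius r = a sin χ, so $$E(\chi) = {{4\pi} \over 3}\,{3 \over {8\pi G{a^2}}}\,{(a\sin \chi)^3} = {a \over {2G}}{\sin ^3}\chi .$$ This grows with χ only up to the maximal two-surface at χ = π/2 and then decreases back to zero at χ = π (even though it is monotonic in the areal radius r = a sin χ itself), so no definition that reduces to the round-sphere expression can be monotonic across the extremal surface, as claimed above. 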
In fact, if the dominant energy condition is satisfied, then the standard round-sphere Misner-Sharp energy is a monotonically increasing or constant (rather than strictly decreasing) function of the area radius r. For example, the Misner-Sharp energy in the Schwarzschild spacetime is the constant function m/G. The Schwarzschild solution provides a counterexample to item 1.7, too: Since both its ADM mass and the irreducible mass of the black hole are m/G, any quasi-local mass function of the radius r which is strictly decreasing for large r and coincides with them at infinity and on the horizon, respectively, would have to take its maximal value on some two-surface outside the horizon. However, it does not seem clear why such a geometrically, and hence physically distinguished two-surface should exist. In the literature the positivity and monotonicity requirements are sometimes confused, and there is an 'argument' that the quasi-local gravitational energy cannot be positive definite, because the total energy of the closed universes must be zero. However, this argument is based on the implicit assumption that the quasi-local energy is associated with a compact three-dimensional domain, which, together with the positive definiteness requirement, would, in fact, imply the monotonicity and a positive total energy for the closed universe. If, on the other hand, the quasi-local energy-momentum is associated with two-surfaces, then the energy may be positive definite and not monotonic. The standard round sphere energy expression (4.7) in the closed Friedmann-Robertson-Walker spacetime, or, more generally, the Dougan-Mason energy-momentum (see Section 8.2.3) are such examples. The Bartnik Mass and its Modifications The Bartnik mass The main idea One of the most natural ideas of quasi-localization of the familiar ADM mass is due to Bartnik [54, 53]. His idea is based on the positivity of the ADM energy, and, roughly, can be summarized as follows. Let Σ be a compact, connected three-manifold with connected boundary \({\mathcal S}\), and let hab be a (negative definite) metric and χab a symmetric tensor field on Σ, such that they, as an initial data set, satisfy the dominant energy condition: if 16πGμ ≔ R + χ2 − χabχab and 8πGja ≔ Db(χab − χhab), then μ ≥ (−jaja)1/2. For the sake of simplicity we denote the triple (Σ, hab, χab) by Σ. Then let us consider all the possible asymptotically flat initial data sets (\(\hat \Sigma, {{\hat h}_{ab}},{{\hat \chi}_{ab}}\)) with a single asymptotic end, denoted simply by \({\hat \Sigma}\), which satisfy the dominant energy condition, have finite ADM energy and are extensions of Σ above through its boundary \({\mathcal S}\). The set of these extensions will be denoted by \({\mathcal E}(\Sigma)\). By the positive energy theorem, \({\hat \Sigma}\) has non-negative ADM energy \({E_{{\rm{ADM}}}}(\hat \Sigma)\), which is zero precisely when \({\hat \Sigma}\) is a data set for the flat spacetime. Then we can consider the infimum of the ADM energies, inf \(\{{E_{{\rm{ADM}}}}(\hat \Sigma)\vert \hat \Sigma \; \in \;{\mathcal E}(\Sigma)\}\), where the infimum is taken on \({\mathcal E}(\Sigma)\). Obviously, by the non-negativity of the ADM energies, this infimum exists and is non-negative, and it is tempting to define the quasi-local mass of Σ by this infimum.Footnote 8 However, it is easy to see that, without further conditions on the extensions of (Σ, hab, χab), this infimum is zero. 
In fact, Σ can be extended to an asymptotically flat initial data set \({\hat \Sigma}\) with arbitrarily small ADM energy such that \({\hat \Sigma}\) contains a horizon (for example in the form of an apparent horizon) between the asymptotically flat end and Σ. In particular, in the 'ΩM,m-spacetime' discussed in Section 4.2.1 on round spheres, the spherically symmetric domain bounded by the maximal surface (with arbitrarily-large round-sphere mass M/G) has an asymptotically flat extension, the complete spacelike hypersurface of the data set for the ΩM,m-spacetime itself, with arbitrarily small ADM mass m/G. Obviously, the fact that the ADM energies of the extensions can be arbitrarily small is a consequence of the presence of a horizon hiding Σ from the outside. This led Bartnik [54, 53] to formulate his suggestion for the quasi-local mass of Σ. He concentrated on time-symmetric data sets (i.e., those for which the extrinsic curvature χab is vanishing), when the horizon appears to be a minimal surface of topology S2 in \({\hat \Sigma}\) (see, e.g., [213]), and the dominant energy condition is just the requirement of the non-negativity of the scalar curvature of the spatial metric: R ≥ 0. Thus, if \({{\mathcal E}_0}(\Sigma)\) denotes the set of asymptotically flat Riemannian geometries \(\hat \Sigma = (\hat \Sigma, {{\hat h}_{ab}})\) with non-negative scalar curvature and finite ADM energy that contain no stable minimal surface, then Bartnik's mass is $${m_{\rm{B}}}(\Sigma): = \inf \left\{{{E_{{\rm{ADM}}}}(\hat \Sigma)\vert \hat \Sigma \in {\varepsilon _0}(\Sigma)} \right\}.$$ The 'no-horizon' condition on \({\hat \Sigma}\) implies that topologically Σ is a three-ball. Furthermore, the definition of \({{\mathcal E}_0}(\Sigma)\) in its present form does not allow one to associate the Bartnik mass to those three-geometries (Σ, hab) that contain minimal surfaces inside Σ. Although formally the maximal two-surfaces inside Σ are not excluded, any asymptotically flat extension of such a Σ would contain a minimal surface. In particular, the spherically-symmetric three-geometry, with line element dl2 = − dr2 − sin2 r(dθ2 + sin2 θ dϕ2) with (θ, ϕ) ∈ S2 and r ∈ [0, r0], π/2 < r0 < π, has a maximal two-surface at r = π/2, and any of its asymptotically flat extensions necessarily contains a minimal surface of area not greater than 4π sin2 r0. Thus, the Bartnik mass (according to the original definition given in [54, 53]) cannot be associated with every compact time-symmetric data set (Σ, hab), even if Σ is topologically trivial. Since for 0 < r0 < π/2 this data set can be extended without any difficulty, this example shows that mB is associated with the three-dimensional data set Σ, and not only with the two-dimensional boundary ∂Σ. Of course, to rule out this limitation, one can modify the original definition by considering the set \({{\tilde {\mathcal E}}_0}(\mathcal S)\) of asymptotically flat Riemannian geometries \(\hat \Sigma = (\hat \Sigma, {{\hat h}_{ab}})\) (with non-negative scalar curvature, finite ADM energy and with no stable minimal surface), which contain \(({\mathcal S},{q_{ab}})\) as an isometrically-embedded Riemannian submanifold, and define \({{\tilde m}_{\rm{B}}}({\mathcal S})\) by Eq. (5.1) with \({{\tilde {\mathcal E}}_0}({\mathcal S})\) instead of \({{\mathcal E}_0}(\Sigma)\). 
Obviously, this \({{\tilde m}_{\rm{B}}}({\mathcal S})\) could be associated with a larger class of two-surfaces than the original mB(Σ) can be to compact three-manifolds, and \(0 \leq {{\tilde m}_{\rm{B}}}(\partial \Sigma) \leq {m_{\rm{B}}}(\Sigma)\) holds. In [279, 56] the set \({{\mathcal E}_0}(\Sigma)\) was allowed to include extensions \({\hat \Sigma}\) of Σ having boundaries as compact outermost horizons, when the corresponding ADM energies are still non-negative [217], and hence mB(Σ) is still well defined and non-negative. (For another description of \({{\mathcal E}_0}(\Sigma)\) allowing horizons in the extensions but excluding them between Σ and the asymptotic end, see [110] and Section 5.2 of this paper.) Bartnik suggests a definition for the quasi-local mass of a spacelike two-surface \({\mathcal S}\) (together with its induced metric and the two extrinsic curvatures), as well [54]. He considers those globally-hyperbolic spacetimes \(\hat M: = (\hat M,{{\hat g}_{ab}})\) that satisfy the dominant energy condition, admit an asymptotically flat (metrically-complete) Cauchy surface \({\hat \Sigma}\) with finite ADM energy, have no event horizon and in which \({\mathcal S}\) can be embedded with its first and second fundamental forms. Let \({{\mathcal E}_0}({\mathcal S})\) denote the set of these spacetimes. Since the ADM energy \({E_{{\rm{ADM}}}}(\hat M)\) is non-negative for any \(\hat M \in \;{{\mathcal E}_0}({\mathcal S})\) (and is zero precisely for flat \({\hat M}\)), the infimum $${m_{\rm{B}}}({\mathcal S}): = \inf \left\{{{E_{{\rm{ADM}}}}(\hat M)\vert \hat M \in {\varepsilon _0}({\mathcal S})} \right\}$$ exists and is non-negative. Although it seems plausible that mB(∂Σ) is only the 'spacetime version' of mB(Σ), without the precise form of the no-horizon conditions in \({{\mathcal E}_0}(\Sigma)\) and that in \({{\mathcal E}_0}({\mathcal S})\) they cannot be compared, even if the extrinsic curvature were allowed in the extensions \({\hat \Sigma}\) of Σ. The main properties of mB(Σ) The first immediate consequence of Eq. (5.1) is the monotonicity of the Bartnik mass. If Σ1 ⊂ Σ2, then \({{\mathcal E}_0}({\Sigma _2}) \subset {{\mathcal E}_0}({\Sigma _1})\), and hence, mB(Σ1) ≤ mB(Σ2). Obviously, by definition (5.1) one has \({m_{\rm{B}}}(\Sigma) \leq {m_{{\rm{ADM}}}}(\hat \Sigma)\) for any \(\hat \Sigma \in \;{{\mathcal E}_0}(\Sigma)\). Thus, if m is any quasi-local mass functional that is larger than mB (i.e., that assigns a non-negative real to any Σ such that m(Σ) ≥ mB(Σ) for any allowed Σ), furthermore if \(m(\Sigma) \leq {m_{{\rm{ADM}}}}(\hat \Sigma)\) for any \(\hat \Sigma \in \;{{\mathcal E}_0}(\Sigma)\), then by the definition of the infimum in Eq. (5.1) one has mB(Σ) ≥ m(Σ) − ε ≥ mB(Σ) − ε for any ε > 0. Therefore, mB is the largest mass functional satisfying \(m(\Sigma) \leq {m_{{\rm{ADM}}}}(\hat \Sigma)\) for any \(\hat \Sigma \in \;{{\mathcal E}_0}(\Sigma)\). Another interesting consequence of the definition of mB, due to Simon (see [56]), is that if \({\hat \Sigma}\) is any asymptotically flat, time-symmetric extension of Σ with non-negative scalar curvature satisfying \({m_{{\rm{ADM}}}}(\hat \Sigma) < {m_{\rm{B}}}(\Sigma)\), then there is a black hole in \({\hat \Sigma}\) in the form of a minimal surface between Σ and the infinity of \({\hat \Sigma}\). For further discussion of mB(Σ) from the point of view of black holes, as well as the relationship between the Bartnik mass and other expressions (e.g., the Hawking energy), see [460]. 
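Spelled out, the monotonicity just stated is simply the fact that enlarging the set of competing extensions can only lower (or preserve) the infimum; a minimal sketch: $${\Sigma _1} \subset {\Sigma _2}\;\; \Rightarrow \;\;{{\mathcal E}_0}({\Sigma _2}) \subset {{\mathcal E}_0}({\Sigma _1})\;\; \Rightarrow \;\;{m_{\rm{B}}}({\Sigma _1}) = \inf\limits_{\hat \Sigma \in {{\mathcal E}_0}({\Sigma _1})} {E_{{\rm{ADM}}}}(\hat \Sigma)\; \leq \;\inf\limits_{\hat \Sigma \in {{\mathcal E}_0}({\Sigma _2})} {E_{{\rm{ADM}}}}(\hat \Sigma) = {m_{\rm{B}}}({\Sigma _2}),$$ since every allowed extension of the larger body Σ2 is, in particular, an allowed extension of Σ1. 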
As we saw, the Bartnik mass is non-negative, and, obviously, if Σ is flat (and hence is a data set for flat spacetime), then mB(Σ) = 0. The converse of this statement is also true [279]: If mB(Σ) = 0, then Σ is locally flat. The Bartnik mass tends to the ADM mass [279]: If \((\hat \Sigma, {\hat h_{ab}})\) is an asymptotically flat Riemannian three-geometry with non-negative scalar curvature and finite ADM mass \({m_{{\rm{ADM}}}}(\hat \Sigma)\), and if {Σn}, n ∈ ℕ, is a sequence of solid balls of coordinate radius n in \({\hat \Sigma}\), then \({\lim\nolimits _{n \rightarrow \infty}}{m_{\rm{B}}}({\Sigma _n}) = {m_{{\rm{ADM}}}}(\hat \Sigma)\). The proof of these two results is based on the use of Hawking energy (see Section 6.1), by means of which a positive lower bound for mB(Σ) can be given near the nonflat points of Σ. In the proof of the second statement one must use the fact that Hawking energy tends to the ADM energy, which, in the time-symmetric case, is just the ADM mass. The proof that the Bartnik mass reduces to the 'standard expression' for round spheres is a nice application of the Riemannian Penrose inequality [279]. Let Σ be a spherically-symmetric Riemannian three-geometry with spherically-symmetric boundary \({\mathcal S}: = \partial \Sigma\). One can form its 'standard' round-sphere energy \(E({\mathcal S})\) (see Section 4.2.1), and take its spherically-symmetric asymptotically flat vacuum extension \({{\hat \Sigma}_{{\rm{SS}}}}\) (see [54, 56]). By the Birkhoff theorem the exterior part of \({{\hat \Sigma}_{{\rm{SS}}}}\) is a part of a t = const. hypersurface of the vacuum Schwarzschild solution, and its ADM mass is just \(E({\mathcal S})\). Then, any asymptotically flat extension \({\hat \Sigma}\) of Σ can also be considered as (a part of) an asymptotically flat time-symmetric hypersurface with minimal surface, whose area is \(16\pi {G^2}E_{{\rm{ADM}}}^2({{\hat \Sigma}_{{\rm{SS}}}})\). Thus, by the Riemannian Penrose inequality [279] \({E_{{\rm{ADM}}}}(\hat \Sigma) \geq {E_{{\rm{ADM}}}}({{\hat \Sigma}_{{\rm{SS}}}}) = E({\mathcal S})\). Therefore, the Bartnik mass of Σ is just the 'standard' round-sphere expression \(E({\mathcal S})\). The computability of the Bartnik mass Since for any given Σ the set \({\mathcal E_0}(\Sigma)\) of its extensions is a huge set, it is almost hopeless to parametrize it. Thus, by its very definition, it seems very difficult to compute the Bartnik mass for a given, specific (Σ, hab). Without some computational method the potentially useful properties of mB(Σ) would be lost from the working relativist's arsenal. Such a computational method might be based on a conjecture of Bartnik [54, 56]: The infimum in definition (5.1) of the mass mB(Σ) is realized by an extension \((\hat \Sigma, {{\hat h}_{ab}})\) of (Σ, hab) such that the exterior region, \((\hat \Sigma - \Sigma, {{\hat h}_{ab}}{\vert _{\hat \Sigma - \Sigma}})\), is static, the metric is Lipschitz-continuous across the two-surface \(\partial \Sigma \subset \hat \Sigma\), and the mean curvatures of ∂Σ of the two sides are equal. Therefore, to compute mB for a given (Σ, hab), one should find an asymptotically flat, static vacuum metric ĥab satisfying the matching conditions on ∂Σ, and then the Bartnik mass is the ADM mass of ĥab. 
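The round-sphere argument above can be condensed into a two-sided bound; a minimal sketch, assuming (as in the construction above) that \({{\hat \Sigma}_{{\rm{SS}}}}\) itself belongs to \({{\mathcal E}_0}(\Sigma)\): $$E({\mathcal S}) = {E_{{\rm{ADM}}}}({{\hat \Sigma}_{{\rm{SS}}}})\; \geq \;{m_{\rm{B}}}(\Sigma) = \inf\limits_{\hat \Sigma \in {{\mathcal E}_0}(\Sigma)} {E_{{\rm{ADM}}}}(\hat \Sigma)\; \geq \;E({\mathcal S}),$$ where the first inequality holds simply because \({{\hat \Sigma}_{{\rm{SS}}}}\) is one of the competitors in the infimum, and the second is the Riemannian Penrose inequality applied to an arbitrary allowed extension, as described above; together they force \({m_{\rm{B}}}(\Sigma) = E({\mathcal S})\). 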
As Corvino shows [154], if there is an allowed extension \({\hat \Sigma}\) of Σ for which \({m_{{\rm{ADM}}}}(\hat \Sigma) = {m_{\rm{B}}}(\Sigma)\), then the extension \(\hat \Sigma - \bar \Sigma\) is static; furthermore, if Σ1 ⊂ Σ2, mB(Σ1) = mB(Σ2) and Σ2 has an allowed extension \({\hat \Sigma}\) for which \({m_{\rm{B}}}({\Sigma _2}) = {m_{{\rm{ADM}}}}(\hat \Sigma)\), then \({\Sigma _2} - \overline {{\Sigma _1}}\) is static. Thus, the proof of Bartnik's conjecture is equivalent to the proof of the existence of such an allowed extension. The existence of such an extension is proven in [360] for geometries (Σ, hab) close enough to the Euclidean one and satisfying a certain reflection symmetry, but the general existence proof is still lacking. (For further partial existence results see [17].) Bartnik's conjecture is that (Σ, hab) determines this exterior metric uniquely [56]. He conjectures [54, 56] that a similar computation method can be found for the mass \({m_{\rm{B}}}({\mathcal S})\), defined in Eq. (5.2), as well, where the exterior metric should be stationary. This second conjecture is also supported by partial results [155]: If (Σ, hab, χab) is any compact vacuum data set, then it has an asymptotically flat vacuum extension, which is a spacelike slice of a Kerr spacetime outside a large sphere near spatial infinity. To estimate mB(Σ) one can construct admissible extensions of (Σ, hab) in the form of the metrics in quasi-spherical form [55]. If the boundary ∂Σ is a metric sphere of radius r with non-negative mean curvature k, then mB(Σ) can be estimated from above in terms of r and k. Bray's modifications Another, slightly modified definition for the quasi-local mass is suggested by Bray [110, 113]. Here we summarize his ideas. Let Σ = (Σ, hab, χab) be any asymptotically flat initial data set with finitely-many asymptotic ends and finite ADM masses, and suppose that the dominant energy condition is satisfied on Σ. Let \({\mathcal S}\) be any fixed two-surface in Σ, which encloses all the asymptotic ends except one, say the i-th (i.e., let \({\mathcal S}\) be homologous to a large sphere in the i-th asymptotic end). The outside region with respect to \({\mathcal S}\), denoted by \(O({\mathcal S})\), will be the subset of Σ containing the i-th asymptotic end and bounded by \({\mathcal S}\), while the inside region, \(I({\mathcal S})\), is the (closure of) \(\Sigma - O({\mathcal S})\). Next, Bray defines the 'extension' \({{\hat \Sigma}_{\rm{e}}}\) of \({\mathcal S}\) by replacing \(O({\mathcal S})\) by a smooth asymptotically flat end of any data set satisfying the dominant energy condition. Similarly, the 'fill-in' \({{\hat \Sigma}_{\rm{f}}}\) of \({\mathcal S}\) is obtained from Σ by replacing \(I({\mathcal S})\) by a smooth asymptotically flat end of any data set satisfying the dominant energy condition. Finally, the surface \({\mathcal S}\) will be called outer-minimizing if, for any closed two-surface \({\tilde {\mathcal S}}\) enclosing \({\mathcal S}\), one has \({\rm{Area}}({\mathcal S}) \leq {\rm{Area}}(\tilde {\mathcal S})\). Let \({\mathcal S}\) be outer-minimizing, and let \({\mathcal E}({\mathcal S})\) denote the set of extensions of \({\mathcal S}\) in which \({\mathcal S}\) is still outer-minimizing, and \({\mathcal F}({\mathcal S})\) denote the set of fill-ins of \({\mathcal S}\). 
If \({{\hat \Sigma}_{\rm{f}}} \in {\mathcal F}({\mathcal S})\) and \({A_{{{\hat \Sigma}_{\rm{f}}}}}\) denotes the infimum of the area of the two-surfaces enclosing all the ends of \({{\hat \Sigma}_{\rm{f}}}\) except the outer one, then Bray defines the outer and inner mass, \({m_{{\rm{out}}}}({\mathcal S})\) and \({m_{{\rm{in}}}}({\mathcal S})\), respectively, by $$\begin{array}{*{20}c} {{m_{{\rm{out}}}}({\mathcal S}): = \inf \left\{{{m_{{\rm{ADM}}}}({{\hat \Sigma}_e})\vert {{\hat \Sigma}_e} \in {\mathcal E} \,({\mathcal S})} \right\},} \\ {{m_{{\rm{in}}}}({\mathcal S}): = \sup \left\{{\sqrt {{{{A_{{{\hat \Sigma}_{\rm{f}}}}}} \over {16\pi {G^2}}}} \vert {{\hat \Sigma}_{\rm{f}}} \in {\mathcal F}\,({\mathcal S})} \right\}.} \\ \end{array}$$ \({m_{{\rm{out}}}}({\mathcal S})\) deviates slightly from Bartnik's mass (5.1) even if the latter would be defined for non-time-symmetric data sets, because Bartnik's 'no-horizon condition' excludes apparent horizons from the extensions, while Bray's condition is that \({\mathcal S}\) be outer-minimizing. A simple consequence of the definitions is the monotonicity of these masses: If \({{\mathcal S}_2}\) and \({{\mathcal S}_1}\) are outer-minimizing two-surfaces such that \({{\mathcal S}_2}\) encloses \({{\mathcal S}_1}\), then \({m_{{\rm{in}}}}({{\mathcal S}_2}) \geq {m_{{\rm{in}}}}({{\mathcal S}_1})\) and \({m_{{\rm{out}}}}({{\mathcal S}_2}) \geq {m_{{\rm{out}}}}({{\mathcal S}_1})\). Furthermore, if the Penrose inequality holds (for example, in a time-symmetric data set, for which the inequality has been proven), then for outer-minimizing surfaces \({m_{{\rm{out}}}}({\mathcal S}) \geq {m_{{\rm{in}}}}({\mathcal S})\) [110, 113]. Furthermore, if Σi is a sequence such that the boundaries ∂Σi shrink to a minimal surface \({\mathcal S}\), then the sequence mout(∂Σi) tends to the irreducible mass \(\sqrt {{\rm{Area}}({\mathcal S})/(16\pi {G^2})}\) [56]. Bray defines the quasi-local mass of a surface not simply to be a number, but the whole closed interval \([{m_{{\rm{in}}}}({\mathcal S}),{m_{{\rm{out}}}}({\mathcal S})]\). If \({\mathcal S}\) encloses the horizon in the Schwarzschild data set, then the inner and outer masses coincide, and Bray expects that the converse is also true: If \({m_{{\rm{in}}}}({\mathcal S}) = {m_{{\rm{out}}}}({\mathcal S})\), then \({\mathcal S}\) can be embedded into the Schwarzschild spacetime with the given two-surface data on \({\mathcal S}\) [113]. For further modification of Bartnik's original ideas, see [311]. The Hawking Energy and its Modifications The Hawking energy Studying the perturbation of the dust-filled k = −1 Friedmann-Robertson-Walker spacetimes, Hawking found that $$\begin{array}{*{20}c} {{E_{\rm{H}}}({\mathcal S}): = \sqrt {{{{\rm{Area}}({\mathcal S})} \over {16\pi {G^2}}}} \left({1 + {1 \over {2\pi}}\oint\nolimits_{\mathcal S} {\rho \rho {\prime}\;d{\mathcal S}}} \right) = \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad} \\ {= \sqrt {{{{\rm{Area}}({\mathcal S})} \over {16\pi {G^2}}}} {1 \over {4\pi}}\oint\nolimits_{\mathcal S} {(\sigma \sigma {\prime}+ \bar \sigma \bar \sigma {\prime}- {\psi _2} - {{\bar \psi}_{2{\prime}}} + 2{\phi _{11}} + 2\Lambda)\;d{\mathcal S}}} \\ \end{array}$$ behaves as an appropriate notion of energy surrounded by the spacelike topological two-sphere \({\mathcal S}\) [236]. Here we used the Gauss-Bonnet theorem and the GHP form of Eqs. (4.3) and (4.4) for F to express ρρ′ by the curvature components and the shears. Thus, Hawking energy is genuinely quasi-local. 
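As an immediate sanity check of Eq. (6.1), here is a minimal sketch. For a metric sphere of radius r in Minkowski spacetime the curvature components ψ2, φ11 and Λ all vanish and the sphere is shear-free, σ = σ′ = 0, so the second expression gives \({E_{\rm{H}}}({\mathcal S}) = 0\) at once. The first expression gives the same, since with the flat-space convergences ρ = −1/r and ρ′ = 1/(2r) (quoted in the next paragraph) $$\oint\nolimits_{\mathcal S} {\rho \rho {\prime}\;d{\mathcal S}} = \left({- {1 \over r}} \right){1 \over {2r}}\,4\pi {r^2} = - 2\pi, \qquad 1 + {1 \over {2\pi}}\left({- 2\pi} \right) = 0.$$ 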
Hawking energy has the following clear physical interpretation even in a general spacetime, and, in fact, EH can be introduced in this way. Starting with the rough idea that the mass-energy surrounded by a spacelike two-sphere \({\mathcal S}\) should be the measure of bending of the ingoing and outgoing light rays orthogonal to \({\mathcal S}\), and recalling that under a boost gauge transformation la ↦ αla, na ↦ α−1na the convergences ρ and ρ′ transform as ρ ↦ αρ and ρ′ ↦ α−1ρ′, respectively, the energy must have the form \(C + D\oint\nolimits_{\mathcal S} {\rho \rho {\prime}d{\mathcal S}}\), where the unspecified parameters C and D can be determined in some special situations. For metric two-spheres of radius r in the Minkowski spacetime, for which ρ = −1/r and ρ′ = 1/2r, we expect zero energy, thus, D = C/(2π). For the event horizon of a Schwarzschild black hole with mass parameter m, for which ρ = 0 = ρ′, we expect m/G, which can be expressed by the area of \({\mathcal S}\). Thus, \({C^2} = {\rm{Area}}({\mathcal S})/(16\pi {G^2})\), and hence, we arrive at Eq. (6.1). Hawking energy for spheres Obviously, for round spheres, EH reduces to the standard expression (4.7). This implies, in particular, that the Hawking energy is not monotonic in general, since for a Killing horizon (e.g., for a stationary event horizon) ρ = 0, the Hawking energy of its spacelike spherical cross sections \({\mathcal S}\) is \(\sqrt {{\rm{Area}}({\mathcal S})/(16\pi {G^2})}\). In particular, for the event horizon of a Kerr-Newman black hole it is just the familiar irreducible mass \(\sqrt {2{m^2} - {e^2} + 2m\sqrt {{m^2} - {e^2} - {a^2}}}/(2G)\). For more general surfaces Hawking energy is calculated numerically in [272]. For a small sphere of radius r with center p ∈ M in nonvacuum spacetimes it is \({{4\pi} \over 3}{r^3}{T_{ab}}{t^a}{t^b}\), while in vacuum it is \({2 \over {45G}}{r^5}{T_{abcd}}{t^a}{t^b}{t^c}{t^d}\), where Tab is the energy-momentum tensor and Tabcd is the Bel-Robinson tensor at p [275]. The first result shows that in the lowest order the gravitational 'field' does not have a contribution to Hawking energy, that is due exclusively to the matter fields. Thus, in vacuum the leading order of EH must be higher than r3. Then, even a simple dimensional analysis shows that the number of the derivatives of the metric in the coefficient of the rk-order term in the power series expansion of EH is (k − 1). However, there are no tensorial quantities built from the metric and its derivatives such that the total number of the derivatives involved would be three. Therefore, in vacuum, the leading term is necessarily of order r5, and its coefficient must be a quadratic expression of the curvature tensor. It is remarkable that for small spheres EH is positive definite both in nonvacuum (provided the matter fields satisfy, for example, the dominant energy condition) and vacuum. This shows, in particular, that EH should be interpreted as energy rather than as mass: For small spheres in a pp-wave spacetime EH is positive, while, as we saw for matter fields in Section 2.2.3, a mass expression could be expected to be zero. (We will see in Sections 8.2.3 and 13.5 that, for the Dougan-Mason energy-momentum, the vanishing of the mass characterizes the pp-wave metrics completely.) Using the second expression in Eq. (6.1) it is easy to see that at future null infinity EH tends to the Bondi-Sachs energy. 
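The two normalizations invoked above amount to the following elementary computation (a sketch, in the conventions just stated). For the flat metric sphere, \(\oint\nolimits_{\mathcal S} {\rho \rho {\prime}\;d{\mathcal S}} = - 2\pi\) (as in the sketch above), so the requirement of zero energy gives $$0 = C + D\left({- 2\pi} \right)\quad \Rightarrow \quad D = {C \over {2\pi}},$$ while for the Schwarzschild horizon, where ρ = ρ′ = 0 and \({\rm{Area}}({\mathcal S}) = 16\pi {m^2} = 16\pi {G^2}{(m/G)^2}\), the requirement that the energy be m/G gives \(C = \sqrt {{\rm{Area}}({\mathcal S})/(16\pi {G^2})}\). The same prefactor, evaluated on the Kerr-Newman horizon area quoted earlier, reproduces the irreducible mass stated above: $$\sqrt {{{4\pi \left({2{m^2} - {e^2} + 2m\sqrt {{m^2} - {e^2} - {a^2}}} \right)} \over {16\pi {G^2}}}} = {1 \over {2G}}\sqrt {2{m^2} - {e^2} + 2m\sqrt {{m^2} - {e^2} - {a^2}}}.$$ 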
A detailed discussion of the asymptotic properties of EH near null infinity both for radiative and stationary spacetimes is given in [455, 457]. Similarly, calculating EH for large spheres near spatial infinity in an asymptotically flat spacelike hypersurface, one can show that it tends to the ADM energy. Positivity and monotonicity properties In general, Hawking energy may be negative, even in Minkowski spacetime. Geometrically this should be clear, since for an appropriately general (e.g., concave) two-surface \({\mathcal S}\), the integral \(\oint\nolimits_{\mathcal S} {\rho {\rho \prime}\,d{\mathcal S}}\) could be less than −2π. Indeed, in flat spacetime EH is proportional to \(\oint\nolimits_{\mathcal S} {(\sigma {\sigma \prime} + \bar \sigma {{\bar \sigma}\prime})\,d{\mathcal S}}\) by the Gauss equation. For topologically-spherical two-surfaces in the t = const. spacelike hyperplane of Minkowski spacetime σσ′ is real and nonpositive, and it is zero precisely for metric spheres, while for two-surfaces in the r = const. timelike cylinder σσ′ is real and non-negative, and it is zero precisely for metric spheres.Footnote 9 If, however, \({\mathcal S}\) is 'round enough' (not to be confused with the round spheres in Section 4.2.1), which is some form of a convexity condition, then EH behaves nicely [143]: \({\mathcal S}\) will be called round enough if it is a submanifold of a spacelike hypersurface Σ, and if among the two-dimensional surfaces in Σ, which enclose the same volume as \({\mathcal S}\) does, \({\mathcal S}\) has the smallest area. It is proven by Christodoulou and Yau [143] that if \({\mathcal S}\) is round enough in a maximal spacelike slice Σ on which the energy density of the matter fields is non-negative (for example, if the dominant energy condition is satisfied), then the Hawking energy is non-negative. Although Hawking energy is not monotonic in general, it has interesting monotonicity properties for special families of two-surfaces. Hawking considered one-parameter families of spacelike two-surfaces foliating the outgoing and the ingoing null hypersurfaces, and calculated the change of EH [236]. These calculations were refined by Eardley [176]. Starting with a weakly future convex two-surface \({\mathcal S}\) and using the boost gauge freedom, he introduced a special family \({{\mathcal S}_r}\) of spacelike two-surfaces in the outgoing null hypersurface \({\mathcal N}\), where r will be the luminosity distance along the outgoing null generators. He showed that \({E_H}({{\mathcal S}_r})\) is nondecreasing with r, provided the dominant energy condition holds on \({\mathcal N}\). Similarly, for weakly past convex \({\mathcal S}\) and the analogous family of surfaces in the ingoing null hypersurface \({E_H}({{\mathcal S}_r})\) is nonincreasing. Eardley also considered a special spacelike hypersurface, filled by a family of two-surfaces, for which \({E_H}({{\mathcal S}_r})\) is nondecreasing. By relaxing the normalization condition lana = 1 for the two null normals to lana = exp(f) for some \(f:{\mathcal S} \rightarrow {\mathbb R}\), Hayward obtained a flexible enough formalism to introduce a double-null foliation (see Section 11.2 below) of a whole neighborhood of a mean convex two-surface by special mean convex two-surfaces [247]. (For the more general GHP formalism in which lana is not fixed, see [425].) 
Assuming that the dominant energy condition holds, he showed that the Hawking energy of these two-surfaces is nondecreasing in the outgoing, and nonincreasing in the ingoing direction. In contrast to the special foliations of the null hypersurfaces above, Frauendiener defined a special spacelike vector field, the inverse mean curvature vector in the spacetime [194]. If \({\mathcal S}\) is a weakly future and past convex two-surface, then qa ≔ 2Qa/(QbQb) = −[1/(2ρ)]la − [1/(2ρ′)]na is an outward-directed spacelike normal to \({\mathcal S}\). Here Qb is the trace of the extrinsic curvature tensor: \({Q_b}: = {Q^a}_{ab}\) (see Section 4.1.2). Starting with a single weakly future and past convex two-surface, Frauendiener gives an argument for the construction of a one-parameter family \({{\mathcal S}_t}\) of two-surfaces being Lie-dragged along its own inverse mean curvature vector qa. Assuming that such a family of surfaces (and hence, the vector field qa on the three-submanifold swept by \({{\mathcal S}_t}\)) exists, Frauendiener showed that the Hawking energy is nondecreasing along the vector field qa if the dominant energy condition is satisfied. This family of surfaces would be analogous to the solution of the geodesic equation, where the initial point and direction at that point specify the whole solution, at least locally. However, it is known (Frauendiener, private communication) that the corresponding flow is based on a system of parabolic equations such that it does not admit a well-posed initial value formulation.Footnote 10 Motivated by this result, Malec, Mars, and Simon [351] considered the inverse mean curvature flow of Geroch on spacelike hypersurfaces (see Section 6.2.2). They showed that if the dominant energy condition and certain additional (essentially technical) assumptions hold, then the Hawking energy is monotonic. These two results are the natural adaptations for the Hawking energy of the corresponding results known for some time for the Geroch energy, aiming to prove the Penrose inequality. (We return to this latter issue in Section 13.2, only for a very brief summary.) The necessary conditions on flows of two-surfaces on null, as well as spacelike, hypersurfaces ensuring the monotonicity of the Hawking energy are investigated in [114]. The monotonicity property of the Hawking energy under other geometric flows is discussed in [89]. For a discussion of the relationship between Hawking energy and other expressions (e.g., the Bartnik mass and the Brown-York energy), see [460]. For the first attempts to introduce quasi-local energy operators, in particular the Hawking energy operator, in loop quantum gravity, see [565]. 
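As a concrete illustration of the inverse mean curvature vector just introduced, here is a minimal sketch for a metric sphere of radius r in Minkowski spacetime, using the flat-space values ρ = −1/r and ρ′ = 1/(2r) quoted earlier and the normalization lana = 1 (signature conventions as elsewhere in this review): $${q^a} = - {1 \over {2\rho}}{l^a} - {1 \over {2\rho {\prime}}}{n^a} = {r \over 2}{l^a} - r\,{n^a},\qquad {q_a}{q^a} = 2\left({{r \over 2}} \right)\left({- r} \right){l_a}{n^a} = - {r^2},$$ so qa is indeed spacelike (of length r) and, having a positive la-component, outward directed. 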
Two generalizations Hawking defined not only energy, but spatial momentum as well, completely analogously to how the spatial components of Bondi-Sachs energy-momentum are related to Bondi energy: $$P_{\rm{H}}^{\underline a}({\mathcal S}) = \sqrt {{{{\rm{Area}}({\mathcal S})} \over {16\pi {G^2}}}} {1 \over {4\pi}}\oint\nolimits_{\mathcal S} {(\sigma \sigma {\prime}+ \bar \sigma \bar \sigma {\prime}- {\psi _2} - {{\bar \psi}_{2{\prime}}} + 2{\phi _{11}} + 2\Lambda)\,{W^{\underline a}}\,d{\mathcal S}},$$ where \({W^{\underline a}},\,\underline a = 0,\, \ldots, \,3\), are essentially the first four spherical harmonics: $$\begin{array}{*{20}c} {{W^0} = 1,} & {{W^1} = {{\zeta + \bar \zeta} \over {1 + \zeta \bar \zeta}},} & {{W^2} = {1 \over {\rm{i}}}{{\zeta - \bar \zeta} \over {1 + \zeta \bar \zeta}},} & {{W^3} = {{1 - \zeta \bar \zeta} \over {1 + \zeta \bar \zeta}}.} \\ \end{array}$$ Here ζ and \({\bar \zeta}\) are the standard complex stereographic coordinates on \({\mathcal S} \approx {S^2}\). Hawking considered the extension of the definition of \({E_H}({\mathcal S})\) to higher genus two-surfaces as well by the second expression in Eq. (6.1). Then, in the expression analogous to the first one in Eq. (6.1), the genus of \({\mathcal S}\) appears. For recent generalizations of the Hawking energy for two-surfaces foliating the stationary and dynamical untrapped hypersurfaces, see [527, 528] and Section 11.3.4. The Geroch energy Suppose that the two-surface \({\mathcal S}\) for which EH is defined is embedded in the spacelike hypersurface Σ. Let χab be the extrinsic curvature of Σ in M and kab the extrinsic curvature of \({\mathcal S}\) in Σ. (In Section 4.1.2 we denote the latter by νab.) Then 8ρρ′ = (χabqab)2 − (kabqab)2, by means of which $$\begin{array}{*{20}c} {{E_{\rm{H}}}({\mathcal S}) = \sqrt {{{{\rm{Area}}({\mathcal S})} \over {16\pi {G^2}}}} \left({1 - {1 \over {16\pi}}\oint\nolimits_{\mathcal S} {{{({k_{ab}}{q^{ab}})}^2}\;d{\mathcal S}} + {1 \over {16\pi}}\oint\nolimits_{\mathcal S} {{{({\chi _{ab}}{q^{ab}})}^2}\;d{\mathcal S}}} \right) \geq} \\ {\; \geq \sqrt {{{{\rm{Area}}({\mathcal S})} \over {16\pi {G^2}}}} \left({1 - {1 \over {16\pi}}\oint\nolimits_{\mathcal S} {{{({k_{ab}}{q^{ab}})}^2}\;d{\mathcal S}}} \right) = \quad \quad \quad \quad \quad \quad} \\ {\;\; = {1 \over {16\pi}}\sqrt {{{{\rm{Area}}({\mathcal S})} \over {16\pi {G^2}}}} \oint\nolimits_{\mathcal S} {\left({2{\,^{\mathcal S}}R - {{({k_{ab}}{q^{ab}})}^2}} \right)\;d{\mathcal S}} = :{E_{\rm{G}}}({\mathcal S}).\quad \quad} \\ \end{array}$$ In the last step we use the Gauss-Bonnet theorem for \({\mathcal S} \approx {S^2}\). \({E_G}({\mathcal S})\) is known as the Geroch energy [207]. Thus, it is not greater than the Hawking energy, and, in contrast to EH, it depends not only on the two-surface \({\mathcal S}\), but on the hypersurface Σ as well. The calculation of the small sphere limit of the Geroch energy was saved by observing [275] that, by Eq. (6.4), the difference of the Hawking and the Geroch energies is proportional to \(\sqrt {{\rm{Area}}({\mathcal S})} \times \oint\nolimits_{\mathcal S} {{{({\chi _{ab}}{q^{ab}})}^2}d{\mathcal S}}\). Since, however, χabqab — for the family of small spheres \({{\mathcal S}_r}\) — does not tend to zero in the r → 0 limit, in general, this difference is \({\mathcal O}({r^3})\). It is zero if Σ is spanned by spacelike geodesics orthogonal to ta at p. Thus, for general Σ, the Geroch energy does not give the expected \({{4\pi} \over 3}{r^3}{T_{ab}}{t^a}{t^b}\) result. 
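For orientation, a minimal consistency check of the formula just obtained: for a metric sphere of radius r in the t = const. hyperplane of Minkowski spacetime one has χab = 0, \({}^{\mathcal S}R = 2/{r^2}\) and \({({k_{ab}}{q^{ab}})^2} = 4/{r^2}\), so $${E_{\rm{G}}}({\mathcal S}) = {1 \over {16\pi}}\sqrt {{{4\pi {r^2}} \over {16\pi {G^2}}}} \oint\nolimits_{\mathcal S} {\left({{4 \over {{r^2}}} - {4 \over {{r^2}}}} \right)\;d{\mathcal S}} = 0,$$ in agreement with EH = 0 for such spheres; the inequality \({E_{\rm{H}}}({\mathcal S}) \geq {E_{\rm{G}}}({\mathcal S})\) is saturated here because the χab-term dropped in the derivation vanishes identically. 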
Similarly, in vacuum, the Geroch energy deviates from the Bel-Robinson energy in r5 order even if Σ is geodesic at p. Since \({E_H}({\mathcal S}) \geq {E_G}({\mathcal S})\) and since the Hawking energy tends to the ADM energy, the large sphere limit of \({E_G}({\mathcal S})\) in an asymptotically flat Σ cannot be greater than the ADM energy. In fact, it is also precisely the ADM energy [207]. For a definition of Geroch's energy as a quasi-local energy operator in loop quantum gravity, see [565]. Monotonicity properties The Geroch energy has interesting positivity and monotonicity properties along a special flow in Σ [207, 291]. This flow is the inverse mean curvature flow defined as follows. Let t: Σ → ℝ be a smooth function such that its level surfaces, \({{\mathcal S}_t}: = \{q \in \Sigma \left\vert {t(q) = t} \right.\}\), are homeomorphic to S2, there is a point p ∈ Σ such that the surfaces \({{\mathcal S}_t}\) are shrinking to p in the limit t → −∞, and they form a foliation of Σ − {p}. Let n be the lapse function of this foliation, i.e., if va is the outward directed unit normal to \({{\mathcal S}_t}\) in Σ, then nvaDat = 1. Denoting the integral on the right-hand side in Eq. (6.4) by Wt, we can calculate its derivative with respect to t. In general this derivative does not seem to have any remarkable properties. If, however, the foliation is chosen in a special way, namely if the lapse is just the inverse mean curvature of the foliation, n = 1/k where k ≔ kabqab, and furthermore Σ is maximal (i.e., χ = 0) and the energy density of the matter is non-negative, then, as shown by Geroch [207], \({{\dot W}_t} \geq 0\) holds. Jang and Wald [291] modified the foliation slightly, such that t ∈ [0, ∞), and the surface \({{\mathcal S}_0}\) was assumed to be future marginally trapped (i.e., ρ = 0 and ρ′ ≥ 0). Then they showed that, under the conditions above, \(\sqrt {{\rm{Area}}({{\mathcal S}_0})} {W_0} \leq \sqrt {{\rm{Area}}({{\mathcal S}_t})} {W_t}\). Since \({E_G}({{\mathcal S}_t})\) tends to the ADM energy as t → ∞, these considerations were intended to argue that the ADM energy should be non-negative (at least for maximal Σ) and not less than \(\sqrt {{\rm{Area}}({{\mathcal S}_0})/(16\pi {G^2})}\) (at least for time-symmetric Σ), respectively. Later Jang [289] showed that, if a certain quasi-linear elliptic differential equation for a function w on a hypersurface Σ admits a solution (with given asymptotic behavior), then w defines a mapping between the data set (Σ, hab, χab) on Σ and a maximal data set \((\Sigma, \,{{\bar h}_{ab}},\,{{\bar \chi}_{ab}})\) (i.e., for which \({{\bar \chi}_{ab}}{{\bar h}^{ab}} = 0\)) such that the corresponding ADM energies coincide. Then Jang shows that a slightly modified version of the Geroch energy is monotonic (and tends to the ADM energy) with respect to a new, modified version of the inverse mean curvature foliation of \((\Sigma, \,{{\bar h}_{ab}})\). The existence and the properties of the original inverse-mean-curvature foliation of (Σ, hab) above were proven and clarified by Huisken and Ilmanen [278, 279], giving the first complete proof of the Riemannian Penrose inequality, and, as proven by Schoen and Yau [444], Jang's quasi-linear elliptic equation admits a global solution. The Hayward energy We saw that EH can be nonzero, even in the Minkowski spacetime. 
This may motivate us to consider the following expression $$\begin{array}{*{20}c} {I({\mathcal S}): = \sqrt {{{{\rm{Area}}({\mathcal S})} \over {16\pi {G^2}}}} \left({1 + {1 \over {4\pi}}\oint\nolimits_{\mathcal S} {(2\rho \rho {\prime}- \sigma \sigma {\prime}- \bar \sigma \bar \sigma {\prime})\;d{\mathcal S}}} \right)} \\ {\quad \;\;\;= \sqrt {{{{\rm{Area}}({\mathcal S})} \over {16\pi {G^2}}}} {1 \over {4\pi}}\oint\nolimits_{\mathcal S} {(- {\psi _2} - {{\bar \psi}_{2{\prime}}} + 2{\phi _{11}} + 2\Lambda)\;d{\mathcal S}}.} \\ \end{array}$$ (Thus, the integrand is \({1 \over 4}(F + \bar F)\), where F is given by Eq. (4.4).) By the Gauss equation, this is zero in flat spacetime; furthermore, it is not difficult to see that its limit at spatial infinity is still the ADM energy. However, using the second expression of \(I({\mathcal S})\), one can see that its limit at the future null infinity is the Newman-Unti, rather than the Bondi-Sachs energy. In the literature there is another modification of Hawking energy, due to Hayward [248]. His suggestion is essentially \(I({\mathcal S})\) with the only difference being that the integrands of Eq. (6.5) above contain an additional term, namely the square of the anholonomicity −ωaωa (see Sections 4.1.8 and 11.2.1). However, we saw that ωa is a boost-gauge-dependent quantity, thus, the physical significance of this suggestion is questionable unless a natural boost gauge choice, e.g., in the form of a preferred foliation, is made. (Such a boost gauge might be that given by the mean extrinsic curvature vector Qa and \({{\bar Q}_a}\) discussed in Section 4.1.2.) Although the expression for the Hayward energy in terms of the GHP spin coefficients given in [81, 83] seems to be gauge invariant, this is due only to an implicit gauge choice. The correct, general GHP form of the extra term is \(- {\omega _a}{\omega ^a} = 2(\beta - {{\bar \beta}\prime})(\bar \beta - {\beta \prime})\). If, however, the GHP spinor dyad is fixed, as in the large or small sphere calculations, then \(\beta - {{\bar \beta}\prime} = \tau = - {{\bar \tau}\prime}\), and hence, the extra term is, in fact, the gauge invariant \(2\tau \bar \tau\). Taking into account that \(\tau = {\mathcal O}({r^{- 2}})\) near the future null infinity (see, e.g., [455]), it is obvious from the remark on the asymptotic behavior of \(I({\mathcal S})\) above that the Hayward energy tends to the Newman-Unti, instead of the Bondi-Sachs, energy at the future null infinity. The Hayward energy has been calculated for small spheres both in nonvacuum and vacuum [81]. In nonvacuum it gives the expected value \({{4\pi} \over 3}{r^3}{T_{ab}}{t^a}{t^b}\). However, in vacuum it is \(- {8 \over {45G}}{r^5}{T_{abcd}}{t^a}{t^b}{t^c}{t^d}\), which is negative. Penrose's Quasi-Local Energy-Momentum and Angular Momentum The construction of Penrose is based on twistor-theoretical ideas, and motivated by the linearized gravity integrals for energy-momentum and angular momentum. Since, however, twistor-theoretical ideas and basic notions are still considered 'special knowledge', the review here of the basic idea behind the Penrose construction is slightly more detailed than that of the others. The main introductory references of the field are the volumes [425, 426] by Penrose and Rindler on 'Spinors and Spacetime', especially volume 2, the very readable book by Huggett and Tod [277] and the comprehensive review article [516] by Tod. How do the twistors emerge? 
We saw in Section 3.1.1 that in the Newtonian theory of gravity the mass of the source in D can be expressed as the flux integral of the gravitational field strength on the boundary \({\mathcal S}: = \partial D\). Similarly, in the weak field (linear) approximation of general relativity on Minkowski spacetime the source of the gravitational field (i.e., the linearized energy-momentum tensor) is still analogous to charge. In fact, the total energy-momentum and angular momentum of the source can be expressed as appropriate two-surface integrals of the curvature at infinity [481]. Thus, it is natural to expect that the energy-momentum and angular momentum of the source in a finite three-volume Σ, given by Eq. (2.5), can also be expressed as the charge integral of the curvature on the two-surface \({\mathcal S}\). However, the curvature tensor can be integrated on \({\mathcal S}\) only if at least one pair of its indices is annihilated by some tensor via contraction, i.e., according to Eq. (3.14) if some ωab = ω[ab] is chosen and μab = εab. To simplify the subsequent analysis, ωab will be chosen to be anti-self-dual: ωab = εA′B′ ωAB with ωAB = ω(AB).Footnote 11 Thus, our goal is to find an appropriate spinor field ωAB on \({\mathcal S}\) such that $${Q_{\mathcal S}}[{\bf{K}}]: = \int\nolimits_\Sigma {{K_a}{T^{ab}}{1 \over {3!}}{\varepsilon _{bcde}}} = {1 \over {8\pi G}}\oint\nolimits_{\mathcal S} {{\omega ^{A\,B}}{R_{A\,Bcd}}} = :{A_{\mathcal S}}[\omega ].$$ Since the dual of the exterior derivative of the integrand on the right, and, by Einstein's equations, the dual of 8πG times the integrand on the left, respectively, are $${\varepsilon ^{ecdf}}{\nabla _e}({\omega ^{A\,B}}{R_{A\,Bcd}}) = - 2{\rm{i}}{\psi ^F}_{A\,BC}{\nabla ^{F{\prime}\left(A \right.}}{\omega ^{\left. {BC} \right)}} + 2{\phi _{A\,B\,E{\prime}}}^{F{\prime}}{\rm{i}}{\nabla ^{E{\prime}F}}{\omega ^{A\,B}} + 4\Lambda {\rm{i}}\nabla _A^{F{\prime}}{\omega ^{F\,A}},$$ $$- 8\pi G{K_a}{T^{a\,f}} = 2{\phi ^{FAF{\prime}A{\prime}}}{K_{AA{\prime}}} + 6\Lambda {K^{FF{\prime}}}.$$ expressions (7.2) and (7.3) are equal if ωAB satisfies $${\nabla ^{A{\prime}A}}{\omega ^{BC}} = - {\rm{i}}{\varepsilon ^{A\left(B \right.}}{K^{\left. C \right)A{\prime}}}.$$ This equation in its symmetrized form, \({\nabla ^{{A\prime}(A}}{\omega ^{BC)}} = 0\), is the valence 2 twistor equation, a specific example for the general twistor equation \({\nabla ^{{A\prime}(A}}{\omega ^{BC \ldots E)}} = 0\) for ωBC…E = ω(BC…E). Thus, as could be expected, ωAB depends on the Killing vector Ka, and, in fact, Ka can be recovered from ωAB as \({K^{{A\prime}A}} = {2 \over 3}{\rm{i}}\nabla _B^{{A\prime}}{\omega ^{AB}}\). Thus, ωAB plays the role of a potential for the Killing vector KA′A. However, as a consequence of Eq. (7.4), Ka is a self-dual Killing 1-form in the sense that its derivative is a self-dual (s.d.) 2-form: In fact, the general solution of Eq. (7.4) and the corresponding Killing vector are $$\begin{array}{*{20}c} {{\omega ^{A\,B}} = - {\rm{i}}{x^{AA{\prime}}}{x^{BB{\prime}}}{{\bar M}_{A{\prime}B{\prime}}} + {\rm{i}}{x^{\left(A \right.}}_{A{\prime}}{T^{\left. B \right)A{\prime}}} + {\Omega ^{A\,B}},} \\ {{K^{AA{\prime}}} = {T^{AA{\prime}}} + 2{x^{A\,B{\prime}}}\bar M_{B{\prime}}^{A{\prime}},\quad \quad \quad \quad \quad \quad \;\;\;} \\ \end{array}$$ where \({{\bar M}_{{A\prime}{B\prime}}},\,{T^{A{A\prime}}}\), and ΩAB are constant spinors, and using the notation \({x^{A{A\prime}}}: = {x^{\underline a}}\sigma _{\underline a}^{\underline A \,{{\underline A}\prime}}{\mathcal E}_{\underline A}^A\bar {\mathcal E}_{{{\underline A}\prime}}^{{A\prime}}\), where \(\{{\mathcal E}_{\underline {\rm{A}}}^{\rm{A}}\}\) is a constant spin frame (the 'Cartesian spin frame') and \(\sigma _{\underline a}^{\underline A \,{{\underline A}\prime}}\) are the standard SL(2, ℂ) Pauli matrices (divided by \(\sqrt 2\)). These yield that Ka is, in fact, self-dual, \({\nabla _{A{A\prime}}}{K_{B{B\prime}}} = {\varepsilon _{AB}}{{\bar M}_{{A\prime}{B\prime}}}\), \({T^{A{A\prime}}}\) is a translation and \({{\bar M}_{{A\prime}{B\prime}}}\) generates self-dual rotations. Then \({Q_{\mathcal S}}[{\bf{K}}] = {T_{A{A\prime}}}{P^{A{A\prime}}} + 2{{\bar M}_{{A\prime}{B\prime}}}{{\bar J}^{{A\prime}{B\prime}}}\), implying that the charges corresponding to ΩAB are vanishing, the four components of the quasi-local energy-momentum correspond to the real TAA′ s, and the spatial angular momentum and center-of-mass are combined into the three complex components of the self-dual angular momentum \({{\bar J}^{{A\prime}{B\prime}}}\), generated by \({{\bar M}_{{A\prime}{B\prime}}}\). Twistor space and the kinematical twistor Recall that the space of the contravariant valence-one twistors of Minkowski spacetime is the set of the pairs Zα ≔ (λA, πA′) of spinor fields, which solve the valence-one-twistor equation ∇A′AλB = −iεABπA′. If Zα is a solution of this equation, then Ẑα ≔ (λA, πA′ + iϒA′AλA) is a solution of the corresponding equation in the conformally-rescaled spacetime, where ϒa ≔ Ω−1∇aΩ and Ω is the conformal factor. In general, the twistor equation has only the trivial solution, but in the (conformal) Minkowski spacetime it has a four complex-parameter family of solutions. Its general solution in the Minkowski spacetime is λA = ΛA − ixAA′ πA′, where ΛA and πA′ are constant spinors. Thus, the space Tα of valence-one twistors, called the twistor space, is four-complex-dimensional, and hence, has the structure \({{\rm{T}}^\alpha} = {{\rm{S}}^A} \oplus {{{\rm{\bar S}}}_{{A\prime}}}\). Tα admits a natural Hermitian scalar product: if Wβ = (ωB, σB′) is another twistor, then \({H_{\alpha {\beta \prime}}}{Z^\alpha}{{\bar W}^{{\beta \prime}}}: = {\lambda ^A}{{\bar \sigma}_A} + {\pi _{{A\prime}}}{{\bar \omega}^{{A\prime}}}\). Its signature is (+, +, −, −), it is conformally invariant, \({H_{\alpha {\beta \prime}}}{{\hat Z}^\alpha}{{\bar \hat W}^{{\beta \prime}}} = {H_{\alpha {\beta \prime}}}{Z^\alpha}{{\bar W}^{{\beta \prime}}}\), and it is constant on Minkowski spacetime. The metric Hαβ′ defines a natural isomorphism between the complex conjugate twistor space, \({{{\rm{\bar T}}}^{{\alpha \prime}}}\), and the dual twistor space, \({{\rm{T}}_\beta}: = {{\rm{S}}_B} \oplus {{\rm{\bar S}}^{{B\prime}}}\), by \(({{\bar \lambda}^{{A\prime}}},\,{{\bar \pi}_A}) \mapsto ({{\bar \pi}_A},\,{{\bar \lambda}^{{A\prime}}})\). This makes it possible to use only twistors with unprimed indices. In particular, the complex conjugate Āα′β′ of the covariant valence-two twistor Aαβ can be represented by the conjugate twistor \({{\bar A}^{\alpha \beta}}: = {{\bar A}_{{\alpha \prime}{\beta \prime}}}{H^{\alpha {\alpha \prime}}}{H^{\beta {\beta \prime}}}\). We should mention two special, higher-valence twistors. 
We should mention two special, higher-valence twistors. The first is the infinity twistor. This and its conjugate are given explicitly by $$\begin{array}{*{20}c} {{I^{\alpha \beta}}: = \left({\begin{array}{*{20}c} {{\varepsilon ^{A\,B}}} & 0 \\ 0 & 0 \\ \end{array}} \right),} & {{I_{\alpha \beta}}: = {{\bar I}^{\alpha {\prime}\beta {\prime}}}{H_{\alpha {\prime}\alpha}}{H_{\beta {\prime}\beta}} = \left({\begin{array}{*{20}c} 0 & 0 \\ 0 & {{\varepsilon ^{A{\prime}\,B{\prime}}}} \\ \end{array}} \right)} \\ \end{array}.$$ The other is the completely anti-symmetric twistor εαβγδ, whose component ε0123 in an Hαβ′-orthonormal basis is required to be one. The only nonvanishing spinor parts of εαβγδ are those with two primed and two unprimed spinor indices: \({\varepsilon ^{A{\prime}B{\prime}}}_{CD} = {\varepsilon ^{A{\prime}B{\prime}}}{\varepsilon _{CD}},{\varepsilon ^{A{\prime}}}_B{\,^{C{\prime}}}_D = - {\varepsilon ^{A{\prime}C{\prime}}}{\varepsilon _{BD}},{\varepsilon _{AB}}^{C{\prime}D{\prime}} = {\varepsilon _{AB}}{\varepsilon ^{C{\prime}D{\prime}}}\). Thus, for any four twistors \(Z_i^\alpha = (\lambda _i^A,\,\pi _{{A\prime}}^i),\,i = 1,\, \ldots, \,4\), the determinant of the 4×4 matrix, whose i-th column is \((\lambda _i^0,\,\lambda _i^1,\,\pi _0^i,\,\pi _1^i)\), where \(\lambda _i^0,\, \ldots, \,\pi _1^i\) are the components of the spinors \(\lambda _i^A\) and \(\pi _{{A\prime}}^i\) in some spin frame, is $$\nu : = \det \left({\begin{array}{*{20}c} {\lambda _1^{\bf{0}}} & {\lambda _2^{\bf{0}}} & {\lambda _3^{\bf{0}}} & {\lambda _4^{\bf{0}}} \\ {\lambda _1^{\bf{1}}} & {\lambda _2^{\bf{1}}} & {\lambda _3^{\bf{1}}} & {\lambda _4^{\bf{1}}} \\ {\pi _{\bf{0\prime}}^1} & {\pi _{\bf{0\prime}}^2} & {\pi _{\bf{0\prime}}^3} & {\pi _{\bf{0\prime}}^4} \\ {\pi _{\bf{1\prime}}^1} & {\pi _{\bf{1\prime}}^2} & {\pi _{\bf{1\prime}}^3} & {\pi _{\bf{1\prime}}^4} \\ \end{array}} \right) = {\textstyle{1 \over 4}}{{\epsilon}^{ij}}_{kl}\lambda _i^A\lambda _j^B\pi _{A{\prime}}^k\pi _{B{\prime}}^l{\varepsilon _{A\,B}}{\varepsilon ^{A{\prime}B{\prime}}} = {\textstyle{1 \over 4}}{\varepsilon _{\alpha \beta \gamma \delta}}Z_1^\alpha Z_2^\beta Z_3^\gamma Z_4^\delta,$$ where \({\epsilon ^{ij}}_{kl}\) is the totally antisymmetric Levi-Civita symbol. Then \({I^{\alpha \beta}}\) and \({I_{\alpha \beta}}\) are dual to each other in the sense that \({I^{\alpha \beta}} = {1 \over 2}{\varepsilon ^{\alpha \beta \gamma \delta}}{I_{\gamma \delta}}\), and by the simplicity of Iαβ one has εαβγδIαβIγδ = 0. The solution ωAB of the valence-two twistor equation, given by Eq. (7.5), can always be written as a linear combination of symmetrized products \({\lambda ^{(A}}{\omega ^{B)}}\) of solutions λA and ωA of the valence-one twistor equation. ωAB uniquely defines a symmetric twistor ωαβ (see, e.g., [426]). Its spinor parts are $${\omega ^{\alpha \beta}} = \left({\begin{array}{*{20}c} {{\omega ^{A\,B}}} & {- {\textstyle{1 \over 2}}{K^A}_{B{\prime}}} \\ {- {\textstyle{1 \over 2}}{K_{A{\prime}}}^B} & {- {\rm{i}}{{\bar M}_{A{\prime}B{\prime}}}} \\ \end{array}} \right).$$ However, Eq. (7.1) can be interpreted as a ℂ-linear mapping of ωαβ into ℂ, i.e., Eq. (7.1) defines a dual twistor, the (symmetric) kinematical twistor Aαβ, which therefore has the structure $${A_{\alpha \beta}} = \left({\begin{array}{*{20}c} 0 & {{P_A}^{B{\prime}}} \\ {{P^{A{\prime}}}_B} & {2{\rm{i}}{{\bar J}^{A{\prime}B{\prime}}}} \\ \end{array}} \right).$$ Thus, the quasi-local energy-momentum and self-dual angular momentum of the source are certain spinor parts of the kinematical twistor.
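The duality and simplicity statements for the infinity twistor can be checked numerically for the block form displayed above. This is my own illustration, not part of the original text; it assumes the twistor index ordering (λ0, λ1, π0′, π1′), ε01 = +1 for the two-index spinor metric, and the ordinary four-index Levi-Civita symbol with ε0123 = +1:

```python
import numpy as np
from itertools import permutations

def levi_civita4():
    """Four-index Levi-Civita symbol, eps[0,1,2,3] = +1."""
    e = np.zeros((4, 4, 4, 4))
    for p in permutations(range(4)):
        inversions = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
        e[p] = (-1) ** inversions
    return e

eps4 = levi_civita4()
eps2 = np.array([[0., 1.], [-1., 0.]])            # eps^{AB} = eps^{A'B'}

I_up = np.zeros((4, 4)); I_up[0:2, 0:2] = eps2    # I^{alpha beta}: eps^{AB} block, rest zero
I_dn = np.zeros((4, 4)); I_dn[2:4, 2:4] = eps2    # I_{alpha beta}: eps^{A'B'} block, rest zero

# duality: I^{alpha beta} = 1/2 eps^{alpha beta gamma delta} I_{gamma delta}
assert np.allclose(0.5 * np.einsum('abgd,gd->ab', eps4, I_dn), I_up)
# simplicity: eps_{alpha beta gamma delta} I^{alpha beta} I^{gamma delta} = 0
assert abs(np.einsum('abgd,ab,gd->', eps4, I_up, I_up)) < 1e-12

print("infinity twistor: dual pair and simple")
```

We now return to the kinematical twistor Aαβ defined above.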
In contrast to the ten complex components of a general symmetric twistor, it has only ten real components as a consequence of its structure (its spinor part AAB is identically zero) and the reality of PAA′. These properties can be reformulated by the infinity twistor and the Hermitian metric as conditions on Aαβ: the vanishing of the spinor part Aab is equivalent to AαβIαγIβδ = 0 and the energy momentum is the \({A_{\alpha \beta}}{Z^\alpha}{I^{\beta \gamma}}{H_{\gamma {\gamma \prime}}}{{\bar Z}^{{\gamma \prime}}}\) part of the kinematical twistor, while the whole reality condition (ensuring both AAB = 0 and the reality of the energy-momentum) is equivalent to $${A_{\alpha \beta}}{I^{\beta \gamma}}{H_{\gamma \delta {\prime}}} = {\bar A_{\delta {\prime}\beta {\prime}}}{\bar I^{\beta {\prime}\gamma {\prime}}}{H_{\gamma {\prime}\alpha}}.$$ Using the conjugate twistors, this can be rewritten (and, in fact, usually is written) as \({A_{\alpha \beta}}{I^{\beta \gamma}} = ({H^{\gamma {\alpha \prime}}}\,{{\bar A}_{{\alpha \prime}{\beta \prime}}}{H^{{\beta \prime}\delta}})\,({H_{\delta {\delta \prime}}}{{\bar I}^{{\delta \prime}{\gamma \prime}}}{H_{{\gamma \prime}\alpha}}) = {{\bar A}^{\gamma \delta}}{I_{\delta \alpha}}\). The quasi-local mass can also be expressed by the kinematical twistor as its Hermitian norm [420] or as its determinant [510]: $${m^2} = - {P_A}^{A{\prime}}{P^A}_{A{\prime}} = - {1 \over 2}{A_{\alpha \beta}}{\bar A_{\alpha {\prime}\beta {\prime}}}{H^{\alpha \alpha {\prime}}}{H^{\beta \beta {\prime}}} = - {1 \over 2}{A_{\alpha \beta}}{\bar A^{\alpha \beta}},$$ $${m^4} = 4\det {A_{\alpha \beta}} = {\textstyle{1 \over {3!}}}{\varepsilon ^{\alpha \beta \gamma \delta}}{\varepsilon ^{\mu \nu \,\rho \sigma}}{A_{\alpha \mu}}{A_{\beta \nu}}{A_{\gamma \rho}}{A_{\delta \sigma}}.$$ Similarly, as Helfer shows [264], the various components of the Pauli-Lubanski spin vector \({S_a}: = {1 \over 2}{\varepsilon _{abcd}}{P^b}{J^{cd}}\) can also be expressed by the kinematic and infinity twistors and by certain special null twistors: if Zα = (−ixAB′ πB′, πA′) and Wα = (−ixAB′ σB′, σA′) are two different (null) twistors such that AαβZαZβ = 0 and AαβWαWβ = 0, then $${(2{P^e}{\pi _{E{\prime}}}{\bar \pi _E}{P^f}{\sigma _{F{\prime}}}{\bar \sigma _F})^{- 1}}{\bar \pi _A}{\pi _{A{\prime}}}{\bar \sigma _B}{\bar \sigma _{B{\prime}}}({S^a}{P^b} - {S^b}{P^a}) = - \Re \left({{{{A_{\alpha \beta}}{Z^\alpha}{W^\beta}} \over {{I_{\gamma \delta}}{Z^\gamma}{W^\delta}}}} \right).$$ (ℜ on the right means 'real part'.) Thus, to summarize, the various spinor parts of the kinematical twistor Aαβ are the energy-momentum and s.d. angular momentum. However, additional structures, namely the infinity twistor and the Hermitian scalar product, are needed to be able to 'isolate' its energy-momentum and angular momentum parts, and, in particular, to define the mass and express the Pauli-Lubanski spin. Furthermore, the Hermiticity condition ensuring that Aαβ has the correct number of components (ten reals) is also formulated in terms of these additional structures. The original construction for curved spacetimes Two-surface twistors and the kinematical twistor In general spacetimes, the twistor equations have only the trivial solution. Thus, to be able to associate a kinematical twistor with a closed orientable spacelike two-surface \({\mathcal S}\) in general, the conditions on the spinor field ωAB have to be relaxed. Penrose's suggestion [420, 421] is to consider ωAB in Eq. 
(7.1) to be the symmetrized product \({\lambda ^{(A}}{\omega ^{B)}}\) of spinor fields that are solutions of the 'tangential projection to \({\mathcal S}\)' of the valence-one twistor equation, the two-surface twistor equation. (The equation obtained as the 'tangential projection to \({\mathcal S}\)' of the valence-two twistor equation (7.4) would be under-determined [421].) Thus, the quasi-local quantities are searched for in the form of a charge integral of the curvature: $$\begin{array}{*{20}c} {{A_{\mathcal S}}[\lambda, \omega ]: = {{- 1} \over {8\pi G}}\oint\nolimits_{\mathcal S} {{\lambda ^A}{\omega ^B}{R_{A\,Bcd}}} \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad} \\ {= {{\rm{i}} \over {4\pi G}}\oint\nolimits_{\mathcal S} {[{\lambda ^0}{\omega ^0}({\phi _{01}} - {\psi _1}) + ({\lambda ^0}{\omega ^1} + {\lambda ^1}{\omega ^0})\,({\phi _{11}} + \Lambda - {\psi _2}) + {\lambda ^1}{\omega ^1}({\phi _{21}} - {\psi _3})]\;d{\mathcal S},}} \\ \end{array}$$ where the second expression is given in the GHP formalism with respect to some GHP spin frame adapted to the two-surface \({\mathcal S}\). Since the indices c and d on the right of the first expression are tangential to \({\mathcal S}\), this is just the charge integral of FABcd in the spinor identity (4.5) of Section 4.1.5. The two-surface twistor equation that the spinor fields should satisfy is just the covariant spinor equation \({\mathcal{T}_{E'EA}}{{\mkern 1mu} ^B}{\lambda _B} = 0\). By Eq. (4.6) its GHP form is \({\mathcal T}\lambda : = ({{\mathcal T}^ +} \oplus {{\mathcal T}^ -})\lambda = 0\), which is a first-order elliptic system, and its index is 4(1 − g), where g is the genus of \({\mathcal S}\) [58]. Thus, there are at least four (and in the generic case precisely four) linearly-independent solutions to \({\mathcal T}\lambda = 0\) on topological two-spheres. However, there are 'exceptional' two-spheres for which there exist at least five linearly independent solutions [297]. For such 'exceptional' two-spheres (and for higher-genus two-surfaces for which the twistor equation has only the trivial solution in general) the subsequent construction does not work. (The concept of quasi-local charges in Yang-Mills theory can also be introduced in an analogous way [509, 183]). The space \({\rm{T}}_{\mathcal S}^\alpha\) of the solutions to \({\mathcal T}\lambda = 0\) is called the two-surface twistor space. In fact, in the generic case this space is four-complex-dimensional, and under conformal rescaling the pair Zα = (λA, iΔA′AλA) transforms like a valence-one contravariant twistor. Zα is called a two-surface twistor determined by λA. If \({{\mathcal S}\prime}\) is another generic two-surface with the corresponding two-surface twistor space \({\rm{T}}_{{{\mathcal S}\prime}}^\alpha\), then although \({\rm{T}}_{\mathcal S}^\alpha\) and \({\rm{T}}_{{{\mathcal S}\prime}}^\alpha\) are isomorphic as vector spaces, there is no canonical isomorphism between them. The kinematical twistor Aαβ is defined to be the symmetric twistor determined by \({A_{\alpha \beta}}{Z^\alpha}{W^\beta}: = {A_{\mathcal S}}[\lambda, \,\omega ]\) for any Zα = (λA, iΔA′AλA) and Wα = (ωA, iΔA′AωA) from \({\rm{T}}_{\mathcal S}^\alpha\). Note that \({A_{\mathcal S}}[\lambda, \,\omega ]\) is constructed only from the two-surface data on \({\mathcal S}\).
The Hamiltonian interpretation of the kinematical twistor For the solutions λA and ωA of the two-surface twistor equation, the spinor identity (4.5) reduces to Tod's expression [420, 426, 516] for the kinematical twistor, making it possible to re-express \({A_{\mathcal S}}[\lambda, \,\omega ]\) by the integral of the Nester-Witten 2-form [490]. Indeed, if $${H_{\mathcal S}}[\lambda, \bar \mu ]: = {1 \over {4\pi G}}\oint\nolimits_{\mathcal S} {u{{(\lambda, \bar \mu)}_{ab}}} = - {1 \over {4\pi G}}\oint\nolimits_{\mathcal S} {{{\bar \gamma}^{A{\prime}B{\prime}}}{{\bar \mu}_{A{\prime}}}{\Delta _{B{\prime}B}}{\lambda ^B}\,d{\mathcal S}},$$ then, with the choice \({{\bar \mu}_{{A\prime}}}: = {\rm{i}}{\Delta _{{A\prime}}}^A{\omega _A}\), this gives Penrose's charge integral by Eq. (4.5): \({A_{\mathcal S}}[\lambda, \,\omega ] = {H_{\mathcal S}}[\lambda, \,\bar \mu ]\). Then, extending the spinor fields λA and ωA from \({\mathcal S}\) to a spacelike hypersurface \(\Sigma\) with boundary \({\mathcal S}\) in an arbitrary way, by the Sparling equation it is straightforward to rewrite \({A_{\mathcal S}}[\lambda, \,\omega ]\) in the form of the integral of the energy-momentum tensor of the matter fields and the Sparling form on Σ. Since such an integral of the Sparling form can be interpreted as the Hamiltonian of general relativity, this is a quick re-derivation of Mason's [357, 358] Hamiltonian interpretation of Penrose's kinematical twistor: \({A_{\mathcal S}}[\lambda, \,\omega ]\) is just the boundary term in the total Hamiltonian of the matter + gravity system, and the spinor fields λA and ωA (together with their 'projection parts' iΔA′AλA and iΔA′AωA) on \({\mathcal S}\) are interpreted as the spinor constituents of the special lapse and shift, called the 'quasi-translations' and 'quasi-rotations' of the two-surface, on the two-surface itself. The Hermitian scalar product and the infinity twistor In general, the natural pointwise Hermitian scalar product, defined by \(\left\langle {Z,\,\bar W} \right\rangle : = - {\rm{i(}}{\lambda ^A}{\Delta _{A{A\prime}}}{{\bar \omega}^{{A\prime}}} - {{\bar \omega}^{{A\prime}}}{\Delta _{A{A\prime}}}{\lambda ^A})\), is not constant on \({\mathcal S}\); thus, it does not define a Hermitian scalar product on the two-surface twistor space. As is shown in [296, 299, 514], \(\left\langle {Z,\,\bar W} \right\rangle\) is constant on \({\mathcal S}\) for any two two-surface twistors if and only if \({\mathcal S}\) can be embedded, at least locally, into some conformal Minkowski spacetime with its intrinsic metric and extrinsic curvatures. Such two-surfaces are called noncontorted, while those that cannot be embedded are called contorted. One natural candidate for the Hermitian metric could be the average of \(\left\langle {Z,\,\bar W} \right\rangle\) on \({\mathcal S}\) [420]: \({H_{\alpha {\beta \prime}}}{Z^\alpha}{{\bar W}^{{\beta \prime}}}: = {[{\rm{Area}}({\mathcal S})]^{- {1 \over 2}}}\oint\nolimits_{\mathcal S} {\left\langle {Z,\,\bar W} \right\rangle \,d{\mathcal S}}\), which reduces to \(\left\langle {Z,\,\bar W} \right\rangle\) on noncontorted two-surfaces. Interestingly enough, \(\oint\nolimits_{\mathcal S} {\left\langle {Z,\,\bar W} \right\rangle \,d{\mathcal S}}\) can also be re-expressed by the integral (7.14) of the Nester-Witten 2-form [490]. Unfortunately, however, neither this metric nor the other suggestions appearing in the literature are conformally invariant.
Thus, for contorted two-surfaces, the definition of the quasi-local mass as the norm of the kinematical twistor (cf. Eq. (7.10)) is ambiguous unless a natural Hαβ′ is found. If \({\mathcal S}\) is noncontorted, then the scalar product \(\left\langle {Z,\,\bar W} \right\rangle\) defines the totally anti-symmetric twistor εαβγδ, and for the four independent two-surface twistors \(Z_1^\alpha, \, \ldots, \,Z_4^\alpha\) the contraction \({\varepsilon _{\alpha \beta \gamma \delta}}Z_1^\alpha Z_2^\beta Z_3^\gamma Z_4^\delta\), and hence, by Eq. (7.7), the determinant ν, is constant on \({\mathcal S}\). Nevertheless, ν can be constant even for contorted two-surfaces for which \(\left\langle {Z,\,\bar W} \right\rangle\) is not. Thus, the totally anti-symmetric twistor εαβγδ can exist even for certain contorted two-surfaces. Therefore, an alternative definition of the quasi-local mass might be based on Eq. (7.11) [510]. However, although the two mass definitions are equivalent in the linearized theory, they are different invariants of the kinematical twistor even in de Sitter or anti-de Sitter spacetimes. Thus, if needed, the former notion of mass will be called the norm-mass, the latter the determinant-mass (denoted by mD). If we want to have not only the notion of the mass but its reality as well, then we should ensure the Hermiticity of the kinematical twistor. But to formulate the Hermiticity condition (7.9), one also needs the infinity twistor. However, \(- {\varepsilon ^{{A\prime}{B\prime}}}{\Delta _{{A\prime}A}}{\lambda ^A}{\Delta _{{B\prime}B}}{\omega ^B}\) is not constant on \({\mathcal S}\) even if it is noncontorted. Thus, in general, it does not define any twistor on \({\rm{T}}_{\mathcal S}^\alpha\). One might take its average on \({\mathcal S}\) (which can also be re-expressed by the integral of the Nester-Witten 2-form [490]), but the resulting twistor would not be simple. In fact, even on two-surfaces in de Sitter and anti-de Sitter spacetimes with cosmological constant λ the natural definition for Iαβ is Iαβ ≔ diag(λεAB, εA′B′) [426, 424, 510], while on round spheres in spherically-symmetric spacetimes it is \({I_{\alpha \beta}}{Z^\alpha}{W^\beta}: = {1 \over {2{r^2}}}(1 + 2{r^2}\rho {\rho {\prime}}){\varepsilon _{AB}}{\lambda ^A}{\omega ^B} - {\varepsilon ^{{A{\prime}}{B{\prime}}}}{\Delta _{{A{\prime}}A}}{\lambda ^A}{\Delta _{{B{\prime}}B}}{\omega ^B}\) [496]. Thus, no natural simple infinity twistor has been found in curved spacetime. Indeed, Helfer claims that no such infinity twistor can exist [263]: even if the spacetime is conformally flat (in which case the Hermitian metric exists) the Hermiticity condition would be fifteen algebraic equations for the (at most) twelve real components of the 'would-be' infinity twistor. Then, since the possible kinematical twistors form an open set in the space of symmetric twistors, the Hermiticity condition cannot be satisfied even for nonsimple Iαβ's. However, in contrast to the linearized gravity case, the infinity twistor should not be given once and for all on some 'universal' twistor space, but may depend on the actual gravitational field. In fact, the two-surface twistor space itself depends on the geometry of \({\mathcal S}\), and hence all its structures also.
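Before moving on, one elementary remark on the norm-mass versus determinant-mass distinction made above: for the flat-spacetime block structure of the kinematical twistor (vanishing upper-left spinor block), the quantity 4 det Aαβ factorizes through the momentum block alone and is completely insensitive to the angular-momentum block. The following quick numerical sanity check is my own illustration of this linear-algebra fact; the random 2×2 matrices simply stand in for the spinor blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = np.zeros((2, 2))
P = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))        # momentum block P_A^{B'}
J1 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)); J1 = J1 + J1.T
J2 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)); J2 = J2 + J2.T   # two different symmetric 'angular momentum' blocks

A1 = np.block([[Z, P], [P.T, J1]])      # symmetric kinematical-twistor-like matrices
A2 = np.block([[Z, P], [P.T, J2]])

assert np.isclose(np.linalg.det(A1), np.linalg.det(A2))            # det does not see the J block
assert np.isclose(4 * np.linalg.det(A1), 4 * np.linalg.det(P) ** 2)

print("4 det A depends only on the momentum block")
```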
Since in the Hermiticity condition (7.9) only the special combination \({H^\alpha}_{{\beta {\prime}}}: = {I^{\alpha \beta}}{H_{\beta {\beta {\prime}}}}\) of the infinity and metric twistors (the 'bar-hook' combination) appears, it might still be hoped that an appropriate \({H^\alpha}_{{\beta {\prime}}}\) could be found for a class of two-surfaces in a natural way [516]. However, as far as the present author is aware, no real progress has been achieved in this way. The various limits Obviously, the kinematical twistor vanishes in flat spacetime and, since the basic idea comes from linearized gravity, the construction gives the correct results in the weak field approximation. The nonrelativistic weak field approximation, i.e., the Newtonian limit, was clarified by Jeffryes [298]. He considers a one-parameter family of spacetimes with perfect fluid source, such that in the λ → 0 limit of the parameter λ, one gets a Newtonian spacetime, and, in the same limit, the two-surface \({\mathcal S}\) lies in a t = const. hypersurface of the Newtonian time t. In this limit the pointwise Hermitian scalar product is constant, and the norm-mass can be calculated. As could be expected, for the leading λ2-order term in the expansion of m as a series of λ he obtained the conserved Newtonian mass. The Newtonian energy, including the kinetic and the Newtonian potential energy, appears as a λ4-order correction. The Penrose definition for the energy-momentum and angular momentum can be applied to the cuts \({\mathcal S}\) of the future null infinity ℐ+ of an asymptotically flat spacetime [420, 426]. Then every element of the construction is built from conformally-rescaled quantities of the nonphysical spacetime. Since ℐ+ is shear-free, the two-surface twistor equations on \({\mathcal S}\) decouple, and hence, the solution space admits a natural infinity twistor Iαβ. It singles out precisely those solutions whose primary spinor parts span the asymptotic spin space of Bramson (see Section 4.2.4), and they will be the generators of the energy-momentum. Although \({\mathcal S}\) is contorted, and hence, there is no natural Hermitian scalar product, there is a twistor \({H^\alpha}_{{\beta \prime}}\) with respect to which Aαβ is Hermitian. Furthermore, the determinant ν is constant on \({\mathcal S}\), and hence it defines a volume 4-form on the two-surface twistor space [516]. The energy-momentum coming from Aαβ is just that of Bondi and Sachs. The angular momentum defined by Aαβ is, however, new. It has a number of attractive properties. First, in contrast to definitions based on the Komar expression, it does not have the 'factor-of-two anomaly' between the angular momentum and the energy-momentum. Since its definition is based on the solutions of the two-surface twistor equations (which can be interpreted as the spinor constituents of certain BMS vector fields generating boost-rotations) instead of the BMS vector fields themselves, it is free of supertranslation ambiguities. In fact, the two-surface twistor space on \({\mathcal S}\) reduces the BMS Lie algebra to one of its Poincaré subalgebras. Thus, the concept of the 'translation of the origin' is moved from null infinity to the twistor space (appearing in the form of a four-parameter family of ambiguities in the potential for the shear σ), and the angular momentum transforms just in the expected way under such a 'translation of the origin'. It is shown in [174] that Penrose's angular momentum can be considered as a supertranslation of previous definitions. 
The other way of determining the null infinity limit of the energy-momentum and angular momentum is to calculate them for large spheres from the physical data, instead of for the spheres at null infinity from the conformally-rescaled data. These calculations were done by Shaw [455, 457]. At this point it should be noted that the r → ∞ limit of Aαβ vanishes, and it is \(\sqrt {{\rm{Area}}({{\mathcal S}_r})} \,{A_{\alpha \beta}}\) that yields the energy-momentum and angular momentum at infinity (see the remarks following Eq. (3.14)). The specific radiative solution for which the Penrose mass has been calculated is that of Robinson and Trautman [510]. The two-surfaces for which the mass was calculated are the r = const. cuts of the geometrically-distinguished outgoing null hypersurfaces u = const. Tod found that, for given u, the mass m is independent of r, as could be expected because of the lack of incoming radiation. In [264] Helfer suggested a bijective nonlinear map between the two-surface twistor spaces on the different cuts of ℐ+, by means of which he got something like a 'universal twistor space'. Then he extends the kinematical twistor to this space, and in this extension the shear potential (i.e., the complex function S for which the asymptotic shear can be written as \(\sigma = {\eth ^2}S\)) appears explicitly. Using Eq. (7.12) as the definition of the intrinsic-spin angular momentum at scri, Helfer derives an explicit formula for the spin. In addition to the expected Pauli-Lubanski-type term, there is an extra term, which is proportional to the imaginary part of the shear potential. Since the twistor spaces on the different cuts of scri have been identified, the angular momentum flux can be, and has in fact been, calculated. (For an earlier attempt to calculate this flux, see [262].) The large sphere limit of the two-surface twistor space and the Penrose construction were investigated by Shaw in the Sommers [475], Ashtekar-Hansen [37], and Beig-Schmidt [65] models of spatial infinity in [451, 452, 454]. Since no gravitational radiation is present near the spatial infinity, the large spheres are (asymptotically) noncontorted, and both the Hermitian scalar product and the infinity twistor are well defined. Thus, the energy-momentum and angular momentum (and, in particular, the mass) can be calculated. In vacuum he recovered the Ashtekar-Hansen expression for the energy-momentum and angular momentum, and proved their conservation if the Weyl curvature is asymptotically purely electric. In the presence of matter the conservation of the angular momentum was investigated in [456]. The Penrose mass in asymptotically anti-de Sitter spacetimes was studied by Kelly [312]. He calculated the kinematical twistor for spacelike cuts \({\mathcal S}\) of the infinity ℐ, which is now a timelike three-manifold in the nonphysical spacetime. Since ℐ admits global three-surface twistors (see the next Section 7.2.5), \({\mathcal S}\) is noncontorted. In addition to the Hermitian scalar product, there is a natural infinity twistor, and the kinematical twistor satisfies the corresponding Hermiticity condition. The energy-momentum four-vector coming from the Penrose definition is shown to coincide with that of Ashtekar and Magnon [42]. Therefore, the energy-momentum four-vector is future pointing and timelike if there is a spacelike hypersurface extending to ℐ on which the dominant energy condition is satisfied. Consequently, m2 ≥ 0. Kelly shows that \(m_{\rm{D}}^2\) is also non-negative and in vacuum it coincides with m2.
In fact [516], m ≥ mD ≥ 0 holds. The quasi-local mass of specific two-surfaces The Penrose mass has been calculated in a large number of specific situations. Round spheres are always noncontorted [514], thus, the norm-mass can be calculated. (In fact, axisymmetric two-surfaces in spacetimes with twist-free rotational Killing vectors are noncontorted [299].) The Penrose mass for round spheres reduces to the standard energy expression discussed in Section 4.2.1 [510]. Thus, every statement given in Section 4.2.1 for round spheres is valid for the Penrose mass, and we do not repeat them. In particular, for round spheres in a t = const. slice of the Kantowski-Sachs spacetime, this mass is independent of the two-surfaces [507]. Interestingly enough, although these spheres cannot be shrunk to a point (thus, the mass cannot be interpreted as 'the three-volume integral of some mass density'), the time derivative of the Penrose mass looks like the mass conservation equation. It is, minus the pressure times the rate of change of the three-volume of a sphere in flat space with the same area as \({\mathcal S}\) [515]. In conformally-flat spacetimes [510] the two-surface twistors are just the global twistors restricted to \({\mathcal S}\), and the Hermitian scalar product is constant on \({\mathcal S}\). Thus, the norm-mass is well defined. The construction works nicely, even if global twistors exist only on a, e.g., spacelike hypersurface Σ containing \({\mathcal S}\). These are the three-surface twistors [510, 512], which are solutions of certain (overdetermined) elliptic partial-differential equations, called the three-surface twistor equations, on Σ. These equations are completely integrable (i.e., they admit the maximal number of linearly-independent solutions, namely four) if and only if Σ, with its intrinsic metric and extrinsic curvature, can be embedded, at least locally, into some conformally-flat spacetime [512]. Such hypersurfaces are called noncontorted. It might be interesting to note that the noncontorted hypersurfaces can also be characterized as the critical points of the Chern-Simons functional, built from the real Sen connection on the Lorentzian vector bundle or from the three-surface twistor connection on the twistor bundle over Σ [66, 495]. Returning to the quasi-local mass calculations, Tod showed that in vacuum the kinematical twistor for a two-surface \({\mathcal S}\) in a noncontorted Σ depends only on the homology class of \({\mathcal S}\). In particular, if \({\mathcal S}\) can be shrunk to a point, then the corresponding kinematical twistor is vanishing. Since Σ is noncontorted, \({\mathcal S}\) is also noncontorted, and hence the norm-mass is well defined. This implies that the Penrose mass in the Schwarzschild solution is the Schwarzschild mass for any noncontorted two-surface that can be deformed into a round sphere, and it is zero for those that do not go round the black hole [514]. Thus, in particular, the Penrose mass can be zero even in curved spacetimes. A particularly interesting class of noncontorted hypersurfaces is that of the conformally-flat time-symmetric initial data sets. Tod considered Wheeler's solution of the time-symmetric vacuum constraints describing n 'points at infinity' (or, in other words, n − 1 black holes) and two-surfaces in such a hypersurface [510]. 
He found that the mass is zero if \({\mathcal S}\) does not go around any black hole; it is the mass Mi of the i-th black hole if \({\mathcal S}\) links precisely the i-th black hole; it is \({M_i} + {M_j} - {M_i}{M_j}/{d_{ij}} + {\mathcal O}(1/d_{ij}^2)\) if \({\mathcal S}\) links precisely the i-th and the j-th black holes, where dij is some appropriate measure of the distance between the black holes; etc. Thus, the mass of the i-th and j-th holes as a single object is less than the sum of the individual masses, in complete agreement with our physical intuition that the potential energy of the composite system should contribute to the total energy with negative sign. Beig studied the general conformally-flat time-symmetric initial data sets describing n 'points at infinity' [62]. He found a symmetric trace-free and divergence-free tensor field Tab and, for any conformal Killing vector ξa of the data set, defined the two-surface flux integral P(ξ) of Tabξb on \({\mathcal S}\). He showed that P(ξ) is conformally invariant, depends only on the homology class of \({\mathcal S}\), and, apart from numerical coefficients, for the ten (locally-existing) conformal Killing vectors, these are just the components of the kinematical twistor derived by Tod in [510] (and discussed in the previous paragraph). In particular, Penrose's mass in Beig's approach is proportional to the length of the P's with respect to the Cartan-Killing metric of the conformal group of the hypersurface. Tod calculated the quasi-local mass for a large class of axisymmetric two-surfaces (cylinders) in various LRS Bianchi and Kantowski-Sachs cosmological models [515] and more general cylindrically-symmetric spacetimes [517]. In all these cases the two-surfaces are noncontorted, and the construction works. A technically interesting feature of these calculations is that the two-surfaces have edges, i.e., they are not smooth submanifolds. The twistor equation is solved on the three smooth pieces of the cylinder separately, and the resulting spinor fields are required to be continuous at the edges. This matching reduces the number of linearly-independent solutions to four. The projection parts of the resulting twistors, the \({\rm{i}}{\Delta _{{A\prime}A}}{\lambda ^A}{\rm{s}}\), are not continuous at the edges. It turns out that the cylinders can be classified invariantly to be hyperbolic, parabolic, or elliptic. Then the structure of the quasi-local mass expressions is not simply 'density' × 'volume', but is proportional to a 'type factor' f(L) as well, where L is the coordinate length of the cylinder. In the hyperbolic, parabolic, and elliptic cases this factor is sinh ωL/(ωL), 1, and sin ωL/(ωL), respectively, where ω is an invariant of the cylinder. The various types are interpreted as the presence of a positive, zero, or negative potential energy. In the elliptic case the mass may be zero for finite cylinders. On the other hand, for static perfect fluid spacetimes (hyperbolic case) the quasi-local mass is positive. A particularly interesting spacetime is that describing cylindrical gravitational waves, whose presence is detected by the Penrose mass. In all these cases the determinant-mass has also been calculated and found to coincide with the norm-mass. A numerical investigation of the axisymmetric Brill waves on the Schwarzschild background is presented in [87]. It was found that the quasi-local mass is positive, and it is very sensitive to the presence of the gravitational waves.
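Returning for a moment to the two-black-hole mass formula quoted at the beginning of this paragraph, the negative sign of the interaction term is easy to see with numbers. A toy evaluation of the leading-order expression (the values are illustrative only, not taken from the text; G = c = 1 units are assumed):

```python
# Two holes of the Brill-Lindquist type at large separation; leading-order Penrose mass
M1, M2, d = 1.0, 1.0, 10.0            # individual masses and a (large) separation

m_separate = M1 + M2                   # sum of the individual masses
m_linked = M1 + M2 - M1 * M2 / d       # surface linking both holes; O(1/d^2) terms dropped

binding = m_separate - m_linked        # = M1*M2/d > 0, a Newtonian-like binding energy
print(f"combined mass {m_linked:.3f} < {m_separate:.3f}; binding energy ~ {binding:.3f}")
```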
Another interesting issue is the Penrose inequality for black holes (see Section 13.2.1). Tod shows [513, 514] that for static black holes the Penrose inequality holds if the mass of the black hole is defined to be the Penrose quasi-local mass of the spacelike cross section \({\mathcal S}\) of the event horizon. The trick here is that \({\mathcal S}\) is totally geodesic and conformal to the unit sphere, and hence, it is noncontorted and the Penrose mass is well defined. Then, the Penrose inequality will be a Sobolev-type inequality for a non-negative function on the unit sphere. This inequality is tested numerically in [87]. Apart from the cuts of ℐ+ in radiative spacetimes, all the two-surfaces discussed so far were noncontorted. The spacelike cross section of the event horizon of the Kerr black hole provides a contorted two-surface [516]. Thus, although the kinematical twistor can be calculated for this, the construction in its original form cannot yield any mass expression. The original construction has to be modified. The properties of the Penrose construction that we have discussed are very remarkable and promising. However, the small surface calculations clearly show some unwanted features of the original construction [511, 313, 560], and force its modification. First, although the small spheres are contorted in general, the leading term of the pointwise Hermitian scalar product \({\lambda ^A}{\Delta _{A{A\prime}}}{{\bar \omega}^{{A\prime}}} - {{\bar \omega}^{{A\prime}}}{\Delta _{{A\prime}A}}{\lambda ^A}\) is constant for any two-surface twistors Zα = (λA,iΔA′AλA) and Wα = (ωA,iΔA′AωA) [511, 313]. Since in nonvacuum spacetimes the kinematical twistor has only the 'four-momentum part' in the leading \({\mathcal O}({r^3})\)-order with \({P_a} = {{4\pi} \over 3}{r^3}{T_{ab}}{t^b}\), the Penrose mass, calculated with the norm above, is just the expected mass in the leading \({\mathcal O}({r^3})\) order. Thus, it is positive if the dominant energy condition is satisfied. On the other hand, in vacuum the structure of the kinematical twistor is $${A_{\alpha \beta}} = \left({\begin{array}{*{20}c} {2{\rm{i}}{\lambda _{AB}}} & {{P_A}^{B{\prime}}} \\ {{P^{A{\prime}}}_B} & 0 \\ \end{array}} \right) + {\mathcal O}\,({r^6}),$$ where \({\lambda _{AB}} = {\mathcal O}({r^5})\) and \({P_{A{A\prime}}} = {2 \over {45G}}{r^5}{\psi _{ABCD}}{{\bar \chi}_{{A\prime}{B\prime}{C\prime}{D\prime}}}{t^{B{B\prime}}}{t^{C{C\prime}}}{t^{D{D\prime}}}\) with \({\chi _{ABCD}}: = {\psi _{ABCD}} - 4{{\bar \psi}_{{A\prime}{B\prime}{C\prime}{D\prime}}}{t^{{A\prime}}}{\,_A}{t^{{B\prime}}}_B{t^{{C\prime}}}_C{t^{{D\prime}}}_D\). In particular, in terms of the familiar conformal electric and magnetic parts of the curvature the leading term in the time component of the four-momentum is \({P_{A{A\prime}}}{t^{A{A\prime}}} = {1 \over {45G}}{H_{ab}}({H^{ab}} - {\rm{i}}{E^{ab}})\). Then, the corresponding norm-mass, in the leading order, can even be complex! For an \({{\mathcal S}_r}\) in the t = const. hypersurface of the Schwarzschild spacetime, this is zero (as it must be in light of the results of Section 7.2.5, because this is a noncontorted spacelike hypersurface), but for a general small two-sphere not lying in such a hypersurface, PAA′ is real and spacelike, and hence, m2 < 0. In the Kerr spacetime, PAA′ itself is complex [511, 313].
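The complexity of the vacuum small-sphere four-momentum is already visible in the quoted time component: unless HabEab = 0, the combination Hab(Hab − iEab) has a nonzero imaginary part. The following toy numerical illustration is my own; the matrices below are merely random symmetric trace-free stand-ins for the electric and magnetic parts of the Weyl curvature at the vertex, not data from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def sym_tracefree(rng):
    """Random symmetric trace-free 3x3 matrix (stand-in for E_ab or H_ab)."""
    m = rng.normal(size=(3, 3))
    m = 0.5 * (m + m.T)
    return m - np.trace(m) / 3.0 * np.eye(3)

E, H = sym_tracefree(rng), sym_tracefree(rng)

# time component of the leading vacuum four-momentum, up to the 1/(45 G) prefactor
p0 = np.einsum('ab,ab->', H, H) - 1j * np.einsum('ab,ab->', H, E)
print(f"H.H = {p0.real:.3f},  -H.E = {p0.imag:.3f}  -> generally complex unless H.E = 0")
```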
The modified constructions Independently of the results of the small-sphere calculations, Penrose claims that in the Schwarzschild spacetime the quasi-local mass expression should yield the same zero value on two-surfaces, contorted or not, which do not surround the black hole. (For the motivations and the arguments, see [422].) Thus, the original construction should be modified, and the negative results for the small spheres above strengthened this need. A much more detailed review of the various modifications is given by Tod in [516]. The 'improved' construction with the determinant A careful analysis of the roots of the difficulties lead Penrose [422, 426] (see also [511, 313, 516]) to suggest the modified definition for the kinematical twistor $${A{\prime}_{\alpha \beta}}{Z^\alpha}{W^\beta}: = {{\rm{i}} \over {8\pi G}}\oint\nolimits_{\mathcal S} {\eta \,{\lambda ^A}{\omega ^B}{R_{A\,Bcd}}},$$ where η is a constant multiple of the determinant in Eq. (7.7). Since on noncontorted two-surfaces the determinant ν is constant, for such surfaces A′αβ reduces to Aαβ, and hence, all the nice properties proven for the original construction on noncontorted two-surfaces are shared by A′αβ. The quasi-local mass calculated from Eq. (7.16) for small spheres (in fact, for small ellipsoids [313]) in vacuum is vanishing in the fifth order. Thus, apparently, the difficulties have been resolved. However, as Woodhouse pointed out, there is an essential ambiguity in the (nonvanishing, sixth-order) quasi-local mass [560]. In fact, the structure of the modified kinematical twistor has the form (7.15) with vanishing \({P^{{A\prime}}}_B\) and \({P_A}^{{B\prime}}\) but with nonvanishing λAB in the fifth order. Then, in the quasi-local mass (in the leading sixth order) there will be a term coming from the (presumably nonvanishing) sixth-order part of \({P^{{A\prime}}}_B\) and \({P_A}^{{B\prime}}\) and the constant part of the Hermitian scalar product, and the fifth-order λAB and the still ambiguous \({\mathcal O}(r)\)-order part of the Hermitian metric. Modification through Tod's expression These anomalies lead Penrose to modify A′αβ slightly [423]. This modified form is based on Tod's form of the kinematical twistor: $${A^{{\prime}{\prime}}_{\alpha \beta}}{Z^\alpha}{W^\beta}: = {1 \over {4\pi G}}\oint\nolimits_{\mathcal S} {{{\bar \gamma}^{A{\prime}B{\prime}}}[{\rm{i}}{\Delta _{A{\prime}A}}(\sqrt \eta {\lambda ^A})]\;[{\rm{i}}{\Delta _{B{\prime}B}}(\sqrt \eta {\omega ^B})]\;d{\mathcal S}}.$$ The quasi-local mass on small spheres coming from A″αβ is positive [516]. Mason's suggestions A beautiful property of the original construction was its connection with the Hamiltonian formulation of the theory [357]. Unfortunately, such a simple Hamiltonian interpretation is lacking for the modified constructions. Although the form of Eq. (7.17) is that of the integral of the Nester-Witten 2-form, and the spinor fields \(\sqrt \eta {\lambda ^A}\) and \({\rm{i}}{\Delta _{{A\prime}A}}(\sqrt \eta {\lambda ^A})\) could still be considered as the spinor constituents of the 'quasi-Killing vectors' of the two-surface \({\mathcal S}\), their structure is not so simple, because the factor η itself depends on all four of the independent solutions of the two-surface twistor equation in a rather complicated way. To have a simple Hamiltonian interpretation, Mason suggested further modifications [357, 358]. 
He considers the four solutions \(\lambda _i^A,i = 1, \ldots, 4\), of the two-surface twistor equations, and uses these solutions in the integral (7.14) of the Nester-Witten 2-form. Since \({H_{\mathcal S}}\) is a Hermitian bilinear form on the space of the spinor fields (see Section 8), he obtains 16 real quantities as the components of the 4 × 4 Hermitian matrix \({E_{ij}}: = {H_{\mathcal S}}[{\lambda _i},{{\bar \lambda}_j}]\). However, it is not clear how the four 'quasi-translations' of \({\mathcal S}\) should be found among the 16 vector fields \(\lambda _i^A\bar \lambda _j^{{A\prime}}\) (called 'quasi-conformal Killing vectors' of \({\mathcal S}\)) for which the corresponding quasi-local quantities could be considered as the components of the quasi-local energy-momentum. Nevertheless, this suggestion leads us to the next class of quasi-local quantities. Approaches Based on the Nester-Witten 2-Form We saw in Section 3.2 that both the ADM and Bondi-Sachs energy-momenta can be re-expressed by the integral of the Nester-Witten 2-form \(u{(\lambda, \bar \mu)_{ab}}\), the proof of the positivity of the ADM and Bondi—Sachs masses is relatively simple in terms of the two-component spinors. Thus, from a pragmatic point of view, it seems natural to search for the quasi-local energy-momentum in the form of the integral of the Nester-Witten 2-form. Now we will show that the integral of Møller's tetrad superpotential for the energy-momentum, coming from his tetrad Lagrangian (3.5), is just the integral of \(u{({\lambda ^{\underline A}},{\bar \lambda ^{{{\underline B}{\prime}}}})_{ab}}\), where \(\{\lambda _A^{\underline A}\}\) is a normalized spinor dyad. Hence, all the quasi-local energy-momenta based on the integral of the Nester-Witten 2-form have a natural Lagrangian interpretation in the sense that they are charge integrals of the canonical Noether current derived from Møller's first-order tetrad Lagrangian. If \({\mathcal S}\) is any closed, orientable spacelike two-surface and an open neighborhood of \({\mathcal S}\) is time and space orientable, then an open neighborhood of \({\mathcal S}\) is always a trivialization domain of both the orthonormal and the spin frame bundles [500]. Therefore, the orthonormal frame \(\{E_{\underline a}^a\}\) can be chosen to be globally defined on \({\mathcal S}\), and the integral of the dual of Møller's superpotential, \({1 \over 2}{K^e}{\vee_e}^{ab}{1 \over 2}{\varepsilon _{abcd}}\), appearing on the right-hand side of the superpotential Eq. (3.7), is well defined. 
If (ta, va) is a pair of globally-defined normals of \({\mathcal S}\) in the spacetime, then in terms of the geometric objects introduced in Section 4.1, this integral takes the form $$\begin{array}{*{20}c} {Q\,[{\bf{K}}]: = {1 \over {8\pi G}}\oint\nolimits_{\mathcal S} {{1 \over 2}{K^e}{\vee _e}^{ab}{1 \over 2}{\varepsilon _{abcd}}} \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad} \\ {= {1 \over {8\pi G}}\oint\nolimits_{\mathcal S} {{K^e}\left({- {}^ \bot {\varepsilon _{ea}}{Q_b}^{ba} - {A_e} - {}^ \bot {\varepsilon _{ea}}({\delta _b}E_{\underline b}^b){\eta ^{\underline b \underline a}}E_{\underline a}^a + {\delta _e}({t_a}E_{\underline a}^a){\eta ^{\underline a \underline b}}E_{\underline b}^b{\upsilon _b}} \right)d{\mathcal S}.}} \\ \end{array}$$ The first term on the right is just the dual mean curvature vector of \({\mathcal S}\), the second is the connection one-form on the normal bundle, while the remaining terms are explicitly SO(1, 3) gauge dependent. On the other hand, this is boost gauge invariant (the boost gauge dependence of the second term is compensated by the last one), and depends on the tetrad field and the vector field Ka given only on \({\mathcal S}\), but is independent in the way in which they are extended off the surface. As we will see, the general form of other quasi-local energy-momentum expressions show some resemblance to Eq. (8.1). Then, suppose that the orthonormal basis is built from a normalized spinor dyad, i.e., \(E_{\underline a}^a = \sigma _{\underline a}^{\underline A {{\underline B}{\prime}}}\varepsilon _{\underline A}^A\bar \varepsilon _{{{\underline B}{\prime}}}^{{A{\prime}}}\), where \(\sigma _{\underline a}^{\underline A {{\underline B}{\prime}}}\) are the SL(2, ℂ) Pauli matrices (divided by \(\sqrt 2)\)) and \(\{\varepsilon _{\underline A}^A\}, \underline A = 0,1\), is a normalized spinor basis. A straightforward calculation yields the following remarkable expression for the dual of Møller's superpotential: $${1 \over 4}\sigma _{\underline A \,\underline {B{\prime}}}^{\underline a}E_{\underline a}^e{\vee _\varepsilon}^{ab}{1 \over 2}{\varepsilon _{abcd}} = u\,{({\varepsilon _{\underline A}},{\bar \varepsilon _{\underline {B{\prime}}}})_{cd}} + \overline {u{{({\varepsilon _{\underline B}},{{\bar \varepsilon}_{\underline {A{\prime}}}})}_{cd}}},$$ where the overline denotes complex conjugation. Thus, the real part of the Nester-Witten 2-form, and hence, by Eq. (3.11), apart from an exact 2-form, the Nester-Witten 2-form itself, built from the spinors of a normalized spinor basis, is just the superpotential 2-form derived from Møller's first-order tetrad Lagrangian [500]. Next we will discuss some general properties of the integral of \(u{(\lambda, \bar \mu)_{ab}}\), where λA and μA are arbitrary spinor fields on \({\mathcal S}\). Then, in the integral \({H_{\mathcal S}}[\lambda, \bar \mu ]\), defined by Eq. (7.14), only the tangential derivative of λA appears. (μA is involved in \({H_{\mathcal S}}[\lambda, \bar \mu ]\) algebraically.) Thus, by Eq. (3.11), \({H_{\mathcal S}}:{C^\infty}({\mathcal S},{{\rm{S}}_A}) \times {C^\infty}({\mathcal S},{{\rm{S}}_A}) \rightarrow {\rm{\mathbb C}}\) is a Hermitian scalar product on the (infinite-dimensional complex) vector space of smooth spinor fields on \({\mathcal S}\). 
Thus, in particular, the spinor fields in \({H_{\mathcal S}}[\lambda, \bar \mu ]\) need be defined only on \({\mathcal S}\), and \(\overline {{H_{\mathcal S}}[\lambda, \bar \mu ]} = {H_{\mathcal S}}[\mu, \bar \lambda ]\) holds. A remarkable property of \({{H_{\mathcal S}}}\) is that if λA is a constant spinor field on \({\mathcal S}\) with respect to the covariant derivative Δe, then \({H_{\mathcal S}}[\lambda, \bar \mu ] = 0\) for any smooth spinor field μA on \({\mathcal S}\). Furthermore, if \(\lambda _A^{\underline A} = (\lambda _A^0,\lambda _A^1)\) is any pair of smooth spinor fields on \({\mathcal S}\), then for any constant SL(2, ℂ) matrix \({\Lambda _{\underline A}}^{\underline B}\) one has \({H_{\mathcal S}}[{\lambda ^{\underline C}}{\Lambda _{\underline C}}^{\underline A},{{\bar \lambda}^{\underline {{D{\prime}}}}}{{\bar \Lambda}_{\underline {{D{\prime}}}}}^{{{\underline B}{\prime}}}] = {H_{\mathcal S}}[{\lambda ^{\underline C}},{{\bar \lambda}^{{{\underline D}{\prime}}}}]{\Lambda _{\underline C}}^{\underline A}{{\bar \Lambda}_{{{\underline D}{\prime}}}}^{{{\underline B}{\prime}}}\), i.e., the integrals \({H_{\mathcal S}}[{\lambda ^{\underline A}},{{\bar \lambda}^{{{\underline B}{\prime}}}}]\) transform as the spinor components of a real Lorentz vector over the two-complex-dimensional space spanned by \(\lambda _A^0\) and \(\lambda _A^1\). Therefore, to have a well-defined quasi-local energy-momentum vector we have to specify some two-dimensional subspace \({{\bf{S}}^{\underline A}}\) of the infinite-dimensional space \({C^\infty}({\mathcal S},{{\rm{S}}_A})\) and a symplectic metric \({\varepsilon _{\underline A \underline B}}\) thereon. Thus, underlined capital Roman indices will refer to this space. The elements of this subspace would be interpreted as the spinor constituents of the 'quasi-translations' of the surface \({\mathcal S}\). Note, however, that in general the symplectic metric \({\varepsilon _{\underline A \underline B}}\) need not be related to the pointwise symplectic metric εAB on the spinor spaces, i.e., the spinor fields \(\lambda _A^0\) and \(\lambda _A^1\) that span \({{\bf{S}}^{\underline A}}\) are not expected to form a normalized spin frame on \({\mathcal S}\). Since in Møller's tetrad approach it is natural to choose the orthonormal vector basis to be a basis in which the translations have constant components (just like the constant orthonormal bases in Minkowski spacetime, which are bases in the space of translations), the spinor fields \(\lambda _A^{\underline A}\) could also be interpreted as the spinor basis that should be used to construct the orthonormal vector basis in Møller's superpotential (3.6). In this sense the choice of the subspace \({{\bf{S}}^{\underline A}}\) and the metric \({\varepsilon _{\underline A \underline B}}\) is just a gauge reduction (see Section 3.3.3). Once the spin space \({\rm{(}}{{\rm{S}}^{\underline A}},{\varepsilon _{\underline A \underline B}})\) is chosen, the quasi-local energy-momentum is defined to be \(P_{\mathcal S}^{\underline A \underline {{B{\prime}}}}: = {H_{\mathcal S}}[{\lambda ^{\underline A}},{{\bar \lambda}^{\underline {{B{\prime}}}}}]\).
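Both the Lorentz-vector behavior of the four integrals under a constant SL(2, ℂ) change of the basis and the mass formula given just below reduce, in components, to elementary statements about Hermitian 2 × 2 matrices: the double-ε contraction of such a matrix with itself is twice its determinant, and a unit-determinant change of basis preserves it. The following minimal numerical sketch is my own illustration under the standard Pauli-matrix correspondence (signature (+, −, −, −), ε01 = +1), with a constant spin frame rather than the text's two-surface spin space:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = np.array([[0., 1.], [-1., 0.]])                     # eps_{AB} = eps_{A'B'}, eps_{01} = +1
s = np.sqrt(0.5)
sigma = [s * np.eye(2), s * np.array([[0, 1], [1, 0]]),   # Pauli matrices divided by sqrt(2)
         s * np.array([[0, -1j], [1j, 0]]), s * np.array([[1, 0], [0, -1]])]

# these sqrt(2)-normalized matrices solder the spinor metrics to the Minkowski metric
eta = np.array([[np.einsum('ab,cd,ac,bd->', eps, eps, sigma[i], sigma[j])
                 for j in range(4)] for i in range(4)]).real
assert np.allclose(eta, np.diag([1., -1., -1., -1.]))

# a real four-vector P^a corresponds to a Hermitian matrix P^{AA'}
P_vec = np.array([5.0, 1.0, 2.0, 3.0])
P = sum(P_vec[a] * sigma[a] for a in range(4))
m2 = np.einsum('ab,cd,ac,bd->', eps, eps, P, P)           # = 2 det(P) = (P^0)^2 - |P|^2
assert np.isclose(m2, 2 * np.linalg.det(P))
assert np.isclose(m2.real, P_vec[0]**2 - np.sum(P_vec[1:]**2))

# a constant SL(2,C) change of the spin frame preserves Hermiticity and the norm
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Lam = M / np.sqrt(np.linalg.det(M))                       # det(Lam) = 1
P_new = Lam.T @ P @ Lam.conj()
assert np.allclose(P_new, P_new.conj().T)
assert np.isclose(np.einsum('ab,cd,ac,bd->', eps, eps, P_new, P_new), m2)
print(f"m^2 = {m2.real:.1f}, invariant under the SL(2,C) frame change")
```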
The corresponding quasi-local mass \({m_{\mathcal S}}\) is given by \(m_{\mathcal S}^2: = {\varepsilon _{\underline A \underline B}}{\varepsilon _{{{\underline A}{\prime}}{{\underline B}{\prime}}}}P_{\mathcal S}^{\underline A {{\underline A}{\prime}}}P_{\mathcal S}^{\underline B {{\underline B}{\prime}}}\). In particular, if one of the spinor fields \(\lambda _A^{\underline A}\), e.g., \(\lambda _A^0\), is constant on \({\mathcal S}\) (which means that the geometry of \({\mathcal S}\) is considerably restricted), then \(P_{\mathcal S}^{{{00}{\prime}}} = P_{\mathcal S}^{{{01}{\prime}}} = P_{\mathcal S}^{{{10}{\prime}}} = 0\), and hence, the corresponding mass \({m_{\mathcal S}}\) is zero. If both \(\lambda _A^0\) and \(\lambda _A^1\) are constant (in particular, when they are the restrictions to \({\mathcal S}\) of the two constant spinor fields in the Minkowski spacetime), then \(P_{\mathcal S}^{\underline A \underline {{B{\prime}}}}\) itself is vanishing. Therefore, to summarize, the only thing that needs to be specified is the spin space \(({{\rm{S}}^{\underline A}},{\varepsilon _{\underline A \underline B}})\), and the various suggestions for the quasi-local energy-momentum based on the integral of the Nester-Witten 2-form correspond to the various choices for this spin space. The Ludvigsen-Vickers construction Suppose that spacetime is asymptotically flat at future null infinity, and the closed spacelike two-surface \({\mathcal S}\) can be joined to future null infinity by a smooth null hypersurface \({\mathcal N}\). Let \({{\mathcal S}_\infty}: = {\mathcal N} \cap {{\mathscr I}^ +}\), the cut defined by the intersection of \({\mathcal N}\) with future null infinity. Then, the null geodesic generators of \({\mathcal N}\) define a smooth bijection between \({\mathcal S}\) and the cut \({{\mathcal S}_\infty}\) (and hence, in particular, \({\mathcal S} \approx {S^2}\)). We saw in Section 4.2.4 that on the cut \({{\mathcal S}_\infty}\) at the future null infinity we have the asymptotic spin space \((S_\infty ^{\underline A},{\varepsilon _{\underline A \underline B}})\). The suggestion of Ludvigsen and Vickers [346] for the spin space \(({{\rm{S}}^{\underline A}},{\varepsilon _{\underline A \underline B}})\) on \({\mathcal S}\) is to import the two independent solutions of the asymptotic twistor equations, i.e., the asymptotic spinors, from the future null infinity back to the two-surface along the null geodesic generators of the null hypersurface \({\mathcal N}\). Their propagation equations, given both in terms of spinors and in the GHP formalism, are $$\begin{array}{*{20}c} {{o^A}{{\bar o}^{A{\prime}}}({\nabla _{AA{\prime}}}{\lambda _B})\,{o^B} = ϸ{\lambda _0} = 0,} \\ {{\iota ^A}{{\bar o}^{A{\prime}}}({\nabla _{AA{\prime}}}{\lambda _B})\,{o^B} = ð{\prime}{\lambda _0} + \rho {\lambda _1} = 0.} \\ \end{array}$$ Here \(\varepsilon _{\rm{A}}^A = \{{o^A},{\iota ^A}\}\) is the GHP spin frame introduced in Section 4.2.4, and by Eq. (4.6) the second half of these equations is just Δ+λ = 0. It should be noted that the choice of Eqs. (8.3) and (8.4) for the propagation law of the spinors is 'natural' in the sense that in flat spacetime they reduce to the condition of parallel propagation, and Eq. (8.4) is just the appropriate part of the asymptotic twistor equation of Bramson. We call the spinor fields obtained by using Eqs. (8.3) and (8.4) the Ludvigsen-Vickers spinors on \({\mathcal S}\). Thus, given an asymptotic spinor at infinity, we propagate its zero-th component (with respect to the basis \(\varepsilon _{\rm{A}}^A\)) to \({\mathcal S}\) by Eq. (8.3). This will be the zero-th component of the Ludvigsen-Vickers spinor. Then, its first component will be determined by Eq.
(8.4), provided ρ is not vanishing on any open subset of \({\mathcal S}\). If \(\lambda _A^0\) and \(\lambda _A^1\) are Ludvigsen-Vickers spinors on \({\mathcal S}\) obtained by Eqs. (8.3) and (8.4) from two asymptotic spinors that formed a normalized spin frame, then, by considering \(\lambda _A^0\) and \(\lambda _A^1\) to be normalized in \({{\bf{S}}^{\underline A}}\), we define the symplectic metric \({\varepsilon _{\underline A \underline B}}\) on \({{\rm{S}}^{\underline A}}\) to be that with respect to which \(\lambda _A^0\) and \(\lambda _A^1\) form a normalized spin frame. Note, however, that this symplectic metric is not connected with the symplectic fiber metric εAB of the spinor bundle \({{\bf{S}}^A}({\mathcal S})\) over \({\mathcal S}\). Indeed, in general, \(\lambda _A^{\underline A}\lambda _B^{\underline B}{\varepsilon ^{AB}}\) is not constant on \({\mathcal S}\), and hence, εAB does not determine any symplectic metric on the space \({{\bf{S}}^{\underline A}}\) of the Ludvigsen-Vickers spinors. In Minkowski spacetime the two Ludvigsen-Vickers spinors are just the restriction to \({\mathcal S}\) of the two constant spinors. Remarks on the validity of the construction Before discussing the usual questions about the properties of the construction (positivity, monotonicity, the various limits, etc.), we should make some general remarks. First, it is obvious that the Ludvigsen-Vickers energy-momentum in its above form cannot be defined in a spacetime, which is not asymptotically flat at null infinity. Thus, their construction is not genuinely quasi-local, because it depends not only on the (intrinsic and extrinsic) geometry of \({\mathcal S}\), but on the global structure of the spacetime as well. In addition, the requirement of the smoothness of the null hypersurface \({\mathcal N}\) connecting the two-surface to the null infinity is a very strong restriction. In fact, for general (even for convex) two-surfaces in a general asymptotically flat spacetime, conjugate points will develop along the (outgoing) null geodesics orthogonal to the two-surface [417, 240]. Thus, either the two-surface must be near enough to the future null infinity (in the conformal picture), or the spacetime and the two-surface must be nearly spherically symmetric (or the former cannot be 'very much curved' and the latter cannot be 'very much bent'). This limitation yields that, in general, the original construction above does not have a small sphere limit. However, using the same propagation equations (8.3) and (8.4) one could define a quasi-local energy-momentum for small spheres [346, 84]. The basic idea is that there is a spin space at the vertex p of the null cone in the spacetime whose spacelike cross section is the actual two-surface, and the Ludvigsen-Vickers spinors on \({\mathcal S}\) are defined by propagating these spinors from the vertex p to \({\mathcal S}\) via Eqs. (8.3) and (8.4). This definition works in arbitrary spacetimes, but the two-surface cannot be extended to a large sphere near the null infinity, and it is still not genuinely quasi-local. Monotonicity, mass-positivity and the various limits Once the Ludvigsen-Vickers spinors are given on a spacelike two-surface \({{\mathcal S}_r}\) of constant affine parameter r in the outgoing null hypersurface \({\mathcal N}\), then they are uniquely determined on any other spacelike two-surface \({{\mathcal S}_{{r{\prime}}}}\) in \({\mathcal N}\), as well, i.e., the propagation law, Eqs. 
(8.3) and (8.4), defines a natural isomorphism between the space of the Ludvigsen-Vickers spinors on different two-surfaces of constant affine parameter in the same \({\mathcal N}\). (r need not be a Bondi-type coordinate.) This makes it possible to compare the components of the Ludvigsen-Vickers energy-momenta on different surfaces. In fact [346], if the dominant energy condition is satisfied (at least on \({\mathcal N}\)), then for any Ludvigsen-Vickers spinor λA and affine parameter values r1 ≤ r2, one has \({H_{{{\mathcal S}_{{r_1}}}}}[\lambda, \bar \lambda ] \leq {H_{{{\mathcal S}_{{r_2}}}}}[\lambda, \bar \lambda ]\), and the difference \({H_{{{\mathcal S}_{{r_2}}}}}[\lambda, \bar \lambda ] - {H_{{{\mathcal S}_{{r_1}}}}}[\lambda, \bar \lambda ] \geq 0\) can be interpreted as the energy flux of the matter and the gravitational radiation through \({\mathcal N}\) between \({{\mathcal S}_{{r_1}}}\) and \({{\mathcal S}_{{r_2}}}\). Thus, both \(P_{{{\mathcal S}_r}}^{{{00}{\prime}}}\) and \(P_{{{\mathcal S}_r}}^{{{11}{\prime}}}\) are increasing with r ('mass-gain'). A similar monotonicity property ('mass-loss') can be proven on ingoing null hypersurfaces, but then the propagation equations (8.3) and (8.4) should be replaced by ϸ′λ1 = 0 and − Δ−λ ≔ ðλ1 + ρ′λ0 = 0. Using these equations the positivity of the Ludvigsen-Vickers mass was proven in various special cases in [346]. Concerning the positivity properties of the Ludvigsen-Vickers mass and energy, first it is obvious by the remarks on the nature of the propagation equations (8.3) and (8.4) that in Minkowski spacetime the Ludvigsen-Vickers energy-momentum is vanishing. However, in the proof of the non-negativity of the Dougan-Mason energy (discussed in Section 8.2) only the λA ∈ ker Δ+ part of the propagation equations is used. Therefore, as realized by Bergqvist [79], the Ludvigsen-Vickers energy-momenta (both based on the asymptotic and the point spinors) are also future directed and nonspacelike if \({\mathcal S}\) is the boundary of some compact spacelike hypersurface Σ on which the dominant energy condition is satisfied and \({\mathcal S}\) is weakly future convex (or at least ρ ≤ 0). Similarly, the Ludvigsen-Vickers definitions share the rigidity properties proven for the Dougan-Mason energy-momentum [488]. Under the same conditions the vanishing of the energy-momentum implies the flatness of the domain of dependence D(Σ) of Σ. In the weak field approximation [346] the difference \({H_{{{\mathcal S}_{{r_2}}}}}[\lambda, \bar \lambda ] - {H_{{{\mathcal S}_{{r_1}}}}}[\lambda, \bar \lambda ]\) is just the integral of \(4\pi G{T_{ab}}{l^a}{\lambda ^B}{{\bar \lambda}^{{B{\prime}}}}\) on the portion of \({\mathcal N}\) between the two two-surfaces, where Tab is the linearized energy-momentum tensor. The increment of \({H_{{{\mathcal S}_r}}}[\lambda, \bar \lambda ]\) on \({\mathcal N}\) is due only to the flux of the matter energy-momentum. Since the Bondi-Sachs energy-momentum can be written as the integral of the Nester-Witten 2-form on the cut in question at the null infinity with the asymptotic spinors, it is natural to expect that the first version of the Ludvigsen-Vickers energy-momentum tends to that of Bondi and Sachs. It was shown in [346, 457] that this expectation is, in fact, correct. The Ludvigsen-Vickers mass was calculated for large spheres both for radiative and stationary spacetimes with r−2 and r−3 accuracy, respectively, in [455, 457].
Finally, on a small sphere of radius r in nonvacuum the second definition gives [84] the expected result (4.9), while in vacuum [84, 494] it is $$P_{{{\mathcal S}_r}}^{\underline A \underline {B{\prime}}} = {1 \over {10G}}{r^5}{T^a}_{bcd}{t^b}{t^c}{t^d}\varepsilon _A^{\underline A}\bar \varepsilon _{A{\prime}}^{\underline {B{\prime}}} + {4 \over {45G}}{r^6}{t^e}({\nabla _e}{T^a}_{bcd}){t^b}{t^c}{t^d}\varepsilon _A^{\underline A}\bar \varepsilon _{A{\prime}}^{\underline {B{\prime}}} + {\mathcal O}({r^7}).$$ Thus, its leading term is the energy-momentum of the matter fields and the Bel-Robinson momentum, respectively, seen by the observer ta at the vertex p. Therefore, assuming that the matter fields satisfy the dominant energy condition, for small spheres this is an explicit proof that the Ludvigsen-Vickers quasi-local energy-momentum is future pointing and nonspacelike. The Dougan-Mason constructions Holomorphic/antiholomorphic spinor fields The original construction of Dougan and Mason [172] was introduced on the basis of sheaf-theoretical arguments. Here we follow a slightly different, more 'pedestrian' approach, based mostly on [488, 490]. Following Dougan and Mason we define the spinor field λA to be antiholomorphic when me∇eλA = meΔeλA = 0, or holomorphic if \({\bar m^e}{\nabla _e}{\lambda _A} = {\bar m^e}{\Delta _e}{\lambda _A} = 0\). Thus, this notion of holomorphicity/antiholomorphicity is referring to the connection Δe on \({\mathcal S}\). While the notion of the holomorphicity/antiholomorphicity of a function on \({\mathcal S}\) does not depend on whether the Δe or δe operator is used, for tensor or spinor fields it does. Although the vectors ma and \({\bar m^a}\) are not uniquely determined (because their phase is not fixed), the notion of holomorphicity/antiholomorphicity is well defined, because the defining equations are homogeneous in ma and \({{\bar m}^a}\). Next, suppose that there are at least two independent solutions of \({\bar m^e}{\Delta _e}{\lambda _A} = 0\). If λA and μA are any two such solutions, then \({\bar m^e}{\Delta _e}({\lambda _A}{\mu _B}{\varepsilon ^{AB}}) = 0\), and hence by Liouville's theorem λAμBεAB is constant on \({\mathcal S}\). If this constant is not zero, then we call \({\mathcal S}\) generic; if it is zero then \({\mathcal S}\) will be called exceptional. Obviously, holomorphic λA on a generic \({\mathcal S}\) cannot have any zero, and any two holomorphic spinor fields, e.g., λA and μA, span the spin space at each point of \({\mathcal S}\) (and they can be chosen to form a normalized spinor dyad with respect to εAB on the whole of \({\mathcal S}\)). Expanding any holomorphic spinor field in this frame, the expansion coefficients turn out to be holomorphic functions, and hence, constant. Therefore, on generic two-surfaces there are precisely two independent holomorphic spinor fields. In the GHP formalism, the condition of the holomorphicity of the spinor field λA is that its components (λ0, λ1) be in the kernel of \({{\mathcal H}^ +}: = {\Delta ^ +} \oplus {{\mathcal T}^ +}\). Thus, for generic two-surfaces ker \({{\mathcal H}^ +}\) with the constant \({\varepsilon _{\underline A \underline B}}\) would be a natural candidate for the spin space \(\left({{{\bf{S}}^{\underline A}},\,{\varepsilon _{\underline A \underline B}}} \right)\) above. For exceptional two-surfaces, the kernel space ker \({{\mathcal H}^ +}\) is either two-dimensional but does not inherit a natural spin space structure, or it is higher than two dimensional.
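(As an aside, the Liouville step invoked above can be spelled out using only what has already been introduced, assuming in addition the standard fact that the connection Δe annihilates the fiber metric εAB: for any two holomorphic solutions λA and μA one has $${\bar m^e}{\Delta _e}({\lambda _A}{\mu _B}{\varepsilon ^{AB}}) = ({\bar m^e}{\Delta _e}{\lambda _A})\,{\mu _B}{\varepsilon ^{AB}} + {\lambda _A}\,({\bar m^e}{\Delta _e}{\mu _B})\,{\varepsilon ^{AB}} = 0,$$ so that λAμBεAB is a globally defined holomorphic function on the compact surface \({\mathcal S} \approx {S^2}\), and such a function is necessarily constant.)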
Similarly, the symplectic inner product of any two antiholomorphic spinor fields is also constant, so one can define generic and exceptional two-surfaces in this case as well, and on generic surfaces there are precisely two antiholomorphic spinor fields. The condition of the antiholomorphicity of λA is \(\lambda \in \ker \,{{\mathcal H}^ -}: = \ker ({\Delta ^ -} \oplus {{\mathcal T}^ -})\). Then \({{\bf{S}}^{\underline A}} = \ker \,{{\mathcal H}^ -}\) could also be a natural choice. Note that the spinor fields, whose holomorphicity/antiholomorphicity is defined, are unprimed, and these correspond to the antiholomorphicity/holomorphicity, respectively, of the primed spinor fields of Dougan and Mason. Thus, the main question is whether there exist generic two-surfaces, and if they do, whether they are 'really generic', i.e., whether most of the physically important surfaces are generic or not. The genericity of the generic two-surfaces \({{\mathcal H}^ \pm}\) are first-order elliptic differential operators on certain vector bundles over the compact two-surface \({\mathcal S}\), and their index can be calculated: \({\rm{index}}({{\mathcal H}^ \pm}) = 2(1 - g)\), where g is the genus of \({\mathcal S}\). Therefore, for \({\mathcal S} \approx {S^2}\) there are at least two linearly-independent holomorphic and at least two linearly-independent antiholomorphic spinor fields. The existence of the holomorphic/antiholomorphic spinor fields on higher-genus two-surfaces is not guaranteed by the index theorem. Similarly, the index theorem does not guarantee that \({\mathcal S} \approx {S^2}\) is generic either. If the geometry of \({\mathcal S}\) is very special, then the two holomorphic/antiholomorphic spinor fields (which are independent as solutions of \({{\mathcal H}^ \pm}\lambda = 0\)) might be proportional to each other. For example, future marginally-trapped surfaces (i.e., for which ρ = 0) are exceptional from the point of view of holomorphic spinors, and past marginally-trapped surfaces (ρ′ = 0) from the point of view of antiholomorphic spinors. Furthermore, there are surfaces with at least three linearly-independent holomorphic/antiholomorphic spinor fields. However, small generic perturbations of the geometry of an exceptional two-surface \({\mathcal S}\) with S2 topology make \({\mathcal S}\) generic. Finally, we note that several first-order differential operators can be constructed from the chiral irreducible parts Δ± and \({{\mathcal T}^ \pm}\) of Δe, given explicitly by Eq. (4.6). However, only four of them, the Dirac-Witten operator Δ ≔ Δ+ ⊕ Δ−, the twistor operator \({\mathcal T}: = {{\mathcal T}^ +} \oplus {{\mathcal T}^ -}\), and the holomorphy and antiholomorphy operators \({{\mathcal H}^ \pm}\), are elliptic (and it is this ellipticity that, together with the compactness of \({\mathcal S}\), guarantees the finite dimensionality of their kernels), and it is only \({{\mathcal H}^ \pm}\) that have a two-complex-dimensional kernel in the generic case. This purely mathematical result gives some justification for the choices of Dougan and Mason. The spinor fields \(\lambda _A^{\underline A}\) that should be used in the Nester-Witten 2-form are either holomorphic or antiholomorphic. This construction does not work for exceptional two-surfaces. Positivity properties One of the most important properties of the Dougan-Mason energy-momenta is that they are future-pointing nonspacelike vectors, i.e., the corresponding masses and energies are non-negative.
Explicitly [172], if \({\mathcal S}\) is the boundary of some compact spacelike hypersurface Σ on which the dominant energy condition holds, furthermore if \({\mathcal S}\) is weakly future convex (in fact, ρ ≤ 0 is enough), then the holomorphic Dougan-Mason energy-momentum is a future-pointing nonspacelike vector, and, analogously, the antiholomorphic energy-momentum is future pointing and nonspacelike if ρ′ ≥ 0. (For the functional analytic techniques and tools to give a complete positivity proof, see, e.g., [182].) As Bergqvist [79] stressed (and we noted in Section 8.1.3), Dougan and Mason used only the Δ+λ = 0 (and, in the antiholomorphic construction, the Δ−λ = 0) half of the 'propagation law' in their positivity proof. The other half is needed only to ensure the existence of two spinor fields. Thus, this other half might be Eq. (8.3) of the Ludvigsen-Vickers construction, or \({{\mathcal T}^ +}\lambda = 0\) in the holomorphic Dougan-Mason construction, or even \({{\mathcal T}^ +}\lambda = k\sigma '{\psi '_2}{\lambda _0}\) for some constant k, a 'deformation' of the holomorphicity considered by Bergqvist [79]. In fact, the propagation law may even be \({\bar m^a}{\Delta _a}{\lambda _B} = {\tilde f_B}^C{\lambda _C}\) for any spinor field \({\tilde f_B}^C\) satisfying \({\pi ^{-B}}_A{\tilde f_B}^C = {\tilde f_A}^B{\pi ^{+C}}_B = 0\). This ensures the positivity of the energy under the same conditions and that εAB λAμB is still constant on \({\mathcal S}\) for any two solutions λA and μA, making it possible to define the norm of the resulting energy-momentum, i.e., the mass. In the asymptotically flat spacetimes the positive energy theorems have a rigidity part as well, namely the vanishing of the energy-momentum (and, in fact, even the vanishing of the mass) implies flatness. There are analogous theorems for the Dougan-Mason energy-momenta as well [488, 490]. Namely, under the conditions of the positivity proof, (1) \(P_{\mathcal S}^{{\underline A}{{\underline B}\prime}}\) is zero iff D(Σ) is flat, which is also equivalent to the vanishing of the quasi-local energy, \({E_{\mathcal S}}: = {1 \over {\sqrt 2}}(P_{\mathcal S}^{00{\prime}} + P_{\mathcal S}^{11{\prime}}) = 0\); and (2) \(P_{\mathcal S}^{{\underline A}{{\underline B}\prime}}\) is null (i.e., the quasi-local mass is zero) iff D(Σ) is a pp-wave geometry and the matter is pure radiation. In particular [498], for a coupled Einstein-Yang-Mills system (with compact, semisimple gauge groups) the zero quasi-local mass configurations are precisely the pp-wave solutions found by Güven [230]. Therefore, in contrast to the asymptotically flat cases, the vanishing of the mass does not imply the flatness of D(Σ). Since, as we will see below, the Dougan-Mason masses tend to the ADM mass at spatial infinity, there is a seeming contradiction between the rigidity part of the positive mass theorems and result (2) above. However, this is only an apparent contradiction. In fact, according to one of the possible positive mass proofs [38], the vanishing of the ADM mass implies the existence of a constant null vector field on D(Σ), and then the flatness follows from the incompatibility of asymptotic flatness with the existence of a constant null vector field: The only asymptotically flat spacetime admitting a constant null vector field is flat spacetime.
These results show some sort of rigidity of the matter + gravity system (where the latter satisfies the dominant energy condition), even at the quasi-local level, which is much more manifest from the following equivalent form of results (1) and (2). Under the same conditions, D(Σ) is flat if and only if there exist two linearly-independent spinor fields on \({\mathcal S}\) that are constant with respect to Δe; and D(Σ) is a pp-wave geometry and the matter is pure radiation if and only if there exists a Δe-constant spinor field on \({\mathcal S}\) [490]. Thus, the full information that D(Σ) is flat/pp-wave is completely encoded, not only in the usual initial data on Σ, but in the geometry of the boundary of Σ, as well. In Section 13.5 we return to the discussion of this phenomenon, where we will see that, assuming \({\mathcal S}\) is future and past convex, the whole line element of D(Σ) (and not only the information that it is some pp-wave geometry) is determined by the two-surface data on \({\mathcal S}\). Comparing results (1) and (2) above with the properties of the quasi-local energy-momentum (and angular momentum) listed in Section 2.2.3, the similarity is obvious: \(P_{\mathcal S}^{{\underline A}{{\underline B}\prime}} = 0\) characterizes the 'quasi-local vacuum state' of general relativity, while \({m_{\mathcal S}} = 0\) is equivalent to 'pure radiative quasi-local states'. The equivalence of \({E_{\mathcal S}} = 0\) and the flatness of D(Σ) shows that curvature always yields positive energy, or, in other words, with this notion of energy no classical symmetry breaking can occur in general relativity. The 'quasi-local ground states' (defined by \({E_{\mathcal S}} = 0\)) are just the 'quasi-local vacuum states' (defined by the trivial value of the field variables on D(Σ)) [488], in contrast, for example, to the well known ϕ4 theories. Both definitions give the same standard expression for round spheres [171]. Although the limit of the Dougan-Mason masses for round spheres in Reissner-Nordström spacetime gives the correct irreducible mass of the Reissner-Nordström black hole on the horizon, the constructions do not work on the surface of bifurcation itself, because that is an exceptional two-surface. Unfortunately, without additional restrictions (e.g., the spherical symmetry of the two-surfaces in a spherically-symmetric spacetime) the mass of the exceptional two-surfaces cannot be defined in a limiting process, because, in general, the limit depends on the family of generic two-surfaces approaching the exceptional one [490]. Both definitions give the same, expected results in the weak field approximation and, for large spheres, at spatial infinity; both tend to the ADM energy-momentum [172]. (The Newtonian limit in the covariant Newtonian spacetime was studied in [564].) In nonvacuum both definitions give the same, expected expression (4.9) for small spheres; in vacuum they coincide in the r5 order with that of Ludvigsen and Vickers, but in the r6 order they differ from each other. The holomorphic definition gives Eq. (8.5), but in the analogous expression for the antiholomorphic energy-momentum, the numerical coefficient 4/(45G) is replaced by 1/(9G) [171]. The Dougan-Mason energy-momenta have also been calculated for large spheres of constant Bondi-type radial coordinate value r near future null infinity [171]. While the antiholomorphic construction tends to the Bondi-Sachs energy-momentum, the holomorphic one diverges in general.
In stationary spacetimes they coincide and both give the Bondi-Sachs energy-momentum. At past null infinity it is the holomorphic construction that reproduces the Bondi-Sachs energy-momentum, while the antiholomorphic construction diverges. We close this section with some caution and general comments on a potential gauge ambiguity in the calculation of the various limits. By the definition of the holomorphic and antiholomorphic spinor fields they are associated with the two-surface \({\mathcal S}\) only. Thus, if \({\mathcal S}{\prime}\) is another two-surface, then there is no natural isomorphism between the space of, for example, the antiholomorphic spinor fields ker \({{\mathcal H}^ -}({\mathcal S})\) on \({\mathcal S}\) and ker \({{\mathcal H}^ -}({\mathcal S}{\prime})\) on \({{\mathcal S}{\prime}}\), even if both surfaces are generic and hence isomorphisms between them exist.Footnote 12 This (apparently 'only theoretical') fact has serious pragmatic consequences. In particular, in the small or large sphere calculations we compare the energy-momenta, and hence, the holomorphic or antiholomorphic spinor fields as well, on different surfaces. For example [494], in the small-sphere approximation every spin coefficient, spinor component of the GHP dyad, and metric component in some fixed coordinate system \((\zeta, \,\bar \zeta)\) is expanded as a power series in r, as \({\lambda _{\mathbf{A}}}(r,\,\zeta, \,\bar \zeta) = {\lambda _{\mathbf{A}}}^{(0)}(\zeta, \,\bar \zeta) + r{\lambda _{\mathbf{A}}}^{(1)}(\zeta, \,\bar \zeta) + \cdots + {r^k}{\lambda _{\bf{A}}}^{(k)}(\zeta, \,\bar \zeta) + {\mathcal O}({r^{k + 1}})\). Substituting all such expansions and the asymptotic solutions of the Bianchi identities for the spin coefficients and metric functions into the differential equations defining the holomorphic/antiholomorphic spinors, we obtain a hierarchical system of differential equations for the expansion coefficients λA(0), λA(1), …, etc. It turns out that the solutions of this system of equations with the given accuracy form a 2k-complex-dimensional, rather than the expected two-complex-dimensional, space. 2(k − 1) of these 2k solutions are 'gauge' solutions, and they correspond in the approximation with given accuracy to the unspecified isomorphism between the space of the holomorphic/antiholomorphic spinor fields on surfaces of different radii. Obviously, similar 'gauge' solutions appear in the large sphere expansions, too. Therefore, without additional gauge fixing, in the expansion of a quasi-local quantity only the leading nontrivial term will be gauge-independent. In particular, the r6-order correction in Eq. (8.5) for the Dougan-Mason energy-momenta is well defined only as a consequence of a natural gauge choice.Footnote 13 Similarly, the higher-order corrections in the large sphere limit of the antiholomorphic Dougan-Mason energy-momentum are also ambiguous unless a 'natural' gauge choice is made. Such a choice is possible in stationary spacetimes. A specific construction for the Kerr spacetime Logically, this specific construction should be presented in Section 12, but the technique on which it is based justifies its placement here. By investigating the propagation law, Eqs. (8.3) and (8.4), of Ludvigsen and Vickers for the Kerr spacetimes, Bergqvist and Ludvigsen constructed a natural flat (but nonsymmetric) metric connection [85].
Writing the new covariant derivative in the form \({\tilde \nabla _{AA{\prime}}}{\lambda _B} = {\nabla _{AA{\prime}}}{\lambda _B} + {\Gamma _{AA{\prime}B}}^C{\lambda _C}\), the 'correction' term \({\Gamma _{AA\prime B}}^C\) could be given explicitly in terms of the GHP spinor dyad (adapted to the two principal null directions), the spin coefficients ρ, τ and ρ′, and the curvature component ψ2. \({\Gamma _{AA\prime B}}^C\) admits a potential [86]: \({\Gamma _{AA\prime BC}} = - {\nabla _{(C}}^{B{\prime}}{H_{B)}}_{AA{\prime}B{\prime}}\), where \({H_{ABA{\prime}B{\prime}}}: = {1 \over 2}{\rho ^{- 3}}(\rho + \bar \rho){\psi _2}{o_A}{o_B}{\bar o_{A{\prime}}}{\bar o_{B{\prime}}}\). However, this potential has the structure Hab = flalb appearing in the form of the metric \({g_{ab}} = g_{ab}^0 + f{l_a}{l_b}\) for the Kerr-Schild spacetimes, where \(g_{ab}^0\) is the flat metric. In fact, the flat connection \({\tilde \nabla _e}\) above could be introduced for general Kerr-Schild metrics [234], and the corresponding 'correction term' ΓAA′BC could be used to easily find the Lánczos potential for the Weyl curvature [18]. Since the connection \({\tilde \nabla _{AA{\prime}}}\) is flat and annihilates the spinor metric εAB, there are precisely two linearly-independent spinor fields, say \(\lambda _A^0\) and \(\lambda _A^1\), that are constant with respect to \({\tilde \nabla _{A{A\prime}}}\) and form a normalized spinor dyad. These spinor fields are asymptotically constant. Thus, it is natural to choose the spin space \(({{\mathbf{S}}^{\underline A}},\,{\varepsilon _{\underline A \underline B}})\) to be the space of the \({\tilde \nabla _a}\)-constant spinor fields, irrespectively of the two-surface \({\mathcal S}\). A remarkable property of these spinor fields is that the Nester-Witten 2-form built from them is closed: \(du({\lambda ^{\underline A}},\,{\bar \lambda ^{{{\underline B}\prime}}}) = 0\). This implies that the quasi-local energy-momentum depends only on the homology class of \({\mathcal S}\), i.e., if \({{\mathcal S}_1}\) and \({{\mathcal S}_2}\) are two-surfaces, such that they form the boundary of some hypersurface in M, then \(P_{{{\mathcal S}_1}}^{\underline A {{\underline B}\prime}} = P_{{{\mathcal S}_2}}^{\underline A {{\underline B}\prime}}\), and if \({\mathcal S}\) is the boundary of some hypersurface, then \(P_{\mathcal S}^{\underline A {{\underline B}\prime}} = 0\). In particular, for two-spheres that can be shrunk to a point, the energy-momentum is zero, but for those that can be deformed to a cut of the future null infinity, the energy-momentum is that of Bondi and Sachs. Quasi-Local Spin Angular Momentum In this section we review three specific quasi-local spin-angular-momentum constructions that are (more or less) 'quasi-localizations' of Bramson's expression at null infinity. Thus, the quasi-local spin angular momentum for the closed, orientable spacelike two-surface \({\mathcal S}\) will be sought in the form (3.16). Before considering the specific constructions themselves, we summarize the most important properties of the general expression of Eq. (3.16). Since the most detailed discussion of Eq. (3.16) is probably given in [494, 496], the subsequent discussions will be based on them. First, observe that the integral depends on the spinor dyad algebraically, thus it is enough to specify the dyad only at the points of \({\mathcal S}\). 
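As a minimal sketch of why the closedness of the Nester-Witten 2-form implies this homology invariance (the label V for the intermediate hypersurface and the suppressed orientation and normalization factors are ours, not part of the original construction), suppose \({{\mathcal S}_1}\) and \({{\mathcal S}_2}\) bound a compact hypersurface V in M. Then, by Stokes' theorem, $$P_{{{\mathcal S}_1}}^{\underline A {{\underline B}\prime}} - P_{{{\mathcal S}_2}}^{\underline A {{\underline B}\prime}} \propto \oint\nolimits_{{{\mathcal S}_1}} {u({\lambda ^{\underline A}},{{\bar \lambda}^{{{\underline B}\prime}}})} - \oint\nolimits_{{{\mathcal S}_2}} {u({\lambda ^{\underline A}},{{\bar \lambda}^{{{\underline B}\prime}}})} = \int\nolimits_V {{\rm{d}}u({\lambda ^{\underline A}},{{\bar \lambda}^{{{\underline B}\prime}}})} = 0,$$ and for a single two-surface bounding a hypersurface the same argument gives the vanishing of the boundary integral itself.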
Obviously, \(J_{\mathcal S}^{\underline A\underline B}\) transforms like a symmetric second-rank spinor under constant SL(2, ℂ) transformations of the dyad \(\{\lambda _A^{\underline A}\}\). Second, suppose that the spacetime is flat, and let \(\{\lambda _A^{\underline A}\}\) be constant. Then the corresponding one-form basis \(\{\vartheta _a^{\underline a}\}\) is the constant Cartesian one, which consists of exact one-forms. Then, since the Bramson superpotential \(w({\lambda ^{\underline A}},{\lambda ^{\underline B}})\) is the anti-self-dual part (in the name indices) of \(\vartheta _a^{\underline a}\vartheta _b^{\underline b} - \vartheta _b^{\underline a}\vartheta _a^{\underline b}\), which is also exact, for such spinor bases, Eq. (3.16) gives zero. Therefore, the integral of Bramson's superpotential (3.16) measures the nonintegrability of the one-form basis \(\vartheta _a^{{\underline A}{\underline A'}} = \lambda _A^{\underline A}\bar \lambda _{A'}^{{\underline A'}}\), i.e., \(J_{\mathcal S}^{\underline A\underline B}\) is a measure of how much the actual one-form basis is 'distorted' by the curvature relative to the constant basis of Minkowski spacetime. Thus, the only question is how to specify a spin frame on \({\mathcal S}\) to be able to interpret \(J_{\mathcal S}^{\underline A\underline B}\) as angular momentum. It seems natural to choose those spinor fields that were used in the definition of the quasi-local energy-momenta in Section 8. At first sight this may appear to be only an ad hoc idea, but, recalling that in Section 8 we interpreted the elements of the spin spaces \(({\bf{S}}^{\underline A},{\varepsilon _{\underline A\underline B}})\) as the 'spinor constituents of the quasi-translations of \({\mathcal S}\)', we can justify such a choice. Based on our experience with the superpotentials for the various conserved quantities, the quasi-local angular momentum can be expected to be the integral of something like 'superpotential' × 'quasi-rotation generator', and the 'superpotential' is some expression in the first derivative of the basic variables, actually the tetrad or spinor basis. Since, however, Bramson's superpotential is an algebraic expression of the basic variables, and the number of the derivatives in the expression for the angular momentum should be one, the angular momentum expressions based on Bramson's superpotential must contain the derivative of the 'quasi-rotations', i.e., (possibly a combination of) the 'quasi-translations'. Since, however, such an expression cannot be sensitive to the 'change of the origin', they can be expected to yield only the spin part of the angular momentum. The following two specific constructions differ from each other only in the choice for the spin space \(({\bf{S}}^{\underline A},{\varepsilon _{\underline A\underline B}})\), and correspond to the energy-momentum constructions of the previous Section 8. The third construction (valid only in the Kerr spacetimes) is based on the sum of two terms, where one is Bramson's expression, and uses the spinor fields of Section 8.3. Thus, the present section is not independent of Section 8, and, for the discussion of the choice of the spin spaces \(({\bf{S}}^{\underline A},{\varepsilon _{\underline A\underline B}})\), we refer to that. Another suggestion for the quasi-local spatial angular momentum, proposed by Liu and Yau [338], will be introduced in Section 10.4.1. 
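(Returning briefly to the flat-spacetime remark above, the exactness can be made explicit in an elementary way; this is a rewriting of the statement already made, not an extra assumption. For a constant spin frame the one-form basis is \(\vartheta _a^{\underline a} = {({\rm{d}}{x^{\underline a}})_a}\), with \({x^{\underline a}}\) the Cartesian coordinates, so $$\vartheta _a^{\underline a}\vartheta _b^{\underline b} - \vartheta _b^{\underline a}\vartheta _a^{\underline b} = {({\rm{d}}{x^{\underline a}} \wedge {\rm{d}}{x^{\underline b}})_{ab}} = {\left({{\rm{d}}({x^{\underline a}}\,{\rm{d}}{x^{\underline b}})} \right)_{ab}},$$ and hence Bramson's superpotential, being a constant linear combination of exact 2-forms, integrates to zero over the closed surface \({\mathcal S}\) by Stokes' theorem.)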
The Ludvigsen-Vickers angular momentum Under the conditions that ensured the Ludvigsen-Vickers construction for the energy-momentum would work in Section 8.1, the definition of their angular momentum is straightforward [346]. Since in Minkowski spacetime the Ludvigsen-Vickers spinors are just the restriction to \({\mathcal S}\) of the constant spinor fields, by the general remark above the Ludvigsen-Vickers spin angular momentum is zero in Minkowski spacetime. Using the asymptotic solution of the Einstein-Maxwell equations in a Bondi-type coordinate system it has been shown in [346] that the Ludvigsen-Vickers spin angular momentum tends to that of Bramson at future null infinity. For small spheres [494] in nonvacuum it reproduces precisely the expected result (4.10), and in vacuum it is $$J_{{{\mathcal S}_r}}^{\underline A \underline B} = {4 \over {45G}}{r^5}{T_{AA{\prime}BB{\prime}CC{\prime}DD{\prime}}}{t^{AA{\prime}}}{t^{BB{\prime}}}{t^{CC{\prime}}}\left({r{t^{D{\prime}E}}{\varepsilon ^{DF}}\varepsilon _{\left(E \right.}^{\underline A}\varepsilon _{\left. F \right)}^{\underline B}} \right) + {\mathcal O}({r^7}).$$ We stress that in both the vacuum and nonvacuum cases, the factor \(r{t^{D'E}}{\varepsilon ^{DF}}\;{\mathcal E}_{(E}^{\underline A}{\mathcal E}_{F)}^{\underline B}\), interpreted in Section 4.2.2 as an average of the boost-rotation Killing fields that vanish at p, emerges naturally. No (approximate) boost-rotation Killing field was put into the general formulae by hand. Holomorphic/antiholomorphic spin angular momenta Obviously, the spin-angular-momentum expressions based on the holomorphic and antiholomorphic spinor fields [492] on generic two-surfaces are genuinely quasi-local. Since, in Minkowski spacetime the restriction of the two constant spinor fields to any two-surface is constant, and hence holomorphic and antiholomorphic at the same time, both the holomorphic and antiholomorphic spin angular momenta are vanishing. Similarly, for round spheres both definitions give zero [496], as would be expected in a spherically-symmetric system. The antiholomorphic spin angular momentum has already been calculated for axisymmetric two-surfaces \({\mathcal S}\), for which the antiholomorphic Dougan-Mason energy-momentum is null, i.e., for which the corresponding quasi-local mass is zero. (As we saw in Section 8.2.3, this corresponds to a pp-wave geometry and pure radiative matter fields on D(Σ) [488, 490].) This null energy-momentum vector turned out to be an eigenvector of the anti-symmetric spin-angular-momentum tensor \(J_{\mathcal S}^{\underline A\underline B}\), which, together with the vanishing of the quasi-local mass, is equivalent to the proportionality of the (null) energy-momentum vector and the Pauli-Lubanski spin [492], where the latter is defined by $$S_{\mathcal S}^{\underline a}: = {\textstyle{1 \over 2}}{\varepsilon ^{\underline a}}_{\underline b \underline c \underline d}P_{\mathcal S}^{\underline b}J_{\mathcal S}^{\underline c \underline d}.$$ This is a known property of the zero-rest-mass fields in Poincaré invariant quantum field theories [231]. Both the holomorphic and antiholomorphic spin angular momenta were calculated for small spheres [494]. In nonvacuum the holomorphic spin angular momentum reproduces the expected result (4.10), and, apart from a minus sign, the antiholomorphic construction does also. In vacuum, both definitions give exactly Eq. (9.1). 
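(A simple consistency check on the proportionality statement above, using nothing beyond the definition (9.2): by the antisymmetry of the ε-tensor, $${P_{{\mathcal S}\,\underline a}}S_{\mathcal S}^{\underline a} = {\textstyle{1 \over 2}}{\varepsilon _{\underline a \underline b \underline c \underline d}}P_{\mathcal S}^{\underline a}P_{\mathcal S}^{\underline b}J_{\mathcal S}^{\underline c \underline d} = 0,$$ i.e., the Pauli-Lubanski spin is always orthogonal to the quasi-local energy-momentum; for a null \(P_{\mathcal S}^{\underline a}\) this is indeed compatible with the two vectors being proportional.)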
In general the antiholomorphic and the holomorphic spin angular momenta are diverging near the future null infinity of Einstein-Maxwell spacetimes as r and r2, respectively. However, the coefficient of the diverging term in the antiholomorphic expression is just the spatial part of the Bondi-Sachs energy-momentum. Thus, the antiholomorphic spin angular momentum is finite in the center-of-mass frame, and hence it seems to describe only the spin part of the gravitational field. In fact, the Pauli-Lubanski spin (9.2) built from this spin angular momentum and the antiholomorphic Dougan-Mason energy-momentum is always finite, free of the 'gauge' ambiguities discussed in Section 8.2.4, and is built only from the gravitational data, even in the presence of electromagnetic fields. In stationary spacetimes both constructions are finite and coincide with the 'standard' expression (4.15). Thus, the antiholomorphic spin angular momentum defines an intrinsic angular momentum at the future null infinity. Note that this angular momentum is free of supertranslation ambiguities, because it is defined on the given cut in terms of the solutions of elliptic differential equations. These solutions can be interpreted as the spinor constituents of certain boost-rotation BMS vector fields, but the definition of this angular momentum is not based on them [496]. The angular momentum of Bergqvist and Ludvigsen [86] for the Kerr spacetime is based on their special flat, nonsymmetric but metric, connection explained briefly in Section 8.3. But their idea is not simply the use of the two \({{\tilde \nabla}_e}\)-constant spinor fields in Bramson's superpotential. Rather, in the background of their approach there are twistor-theoretical ideas. (The twistor-theoretic aspects of the analogous flat connection for the general Kerr-Schild class are discussed in [234].) The main idea is that, while the energy-momentum is a single four-vector in the dual of the Hermitian subspace of \({{\bf{S}}^{\underline A}} \otimes {{{\bf{\bar S}}}^{\underline B{\prime}}}\), the angular momentum is not only an anti-symmetric tensor over the same space, but should depend on the 'origin', a point in a four-dimensional affine space M0 as well, and should transform in a specific way under the translation of the 'origin'. Bergqvist and Ludvigsen defined the affine space M0 to be the space of the solutions Xa of \({{\tilde \nabla}_a}{X_b} = {g_{ab}} - {H_{ab}}\), and showed that M0 is, in fact, a real, four-dimensional affine space. Then, for a given Xaa′, to each \({{\tilde \nabla}_a}\)-constant spinor field λA they associate a primed spinor field by μA′ ≔ Xa′aλA. This μA′ turns out to satisfy the modified valence-one twistor equation \({{\tilde \nabla}_{A(A{\prime}}}{\mu _{B{\prime})}} = - {H_{AA{\prime}BB{\prime}}}{\lambda ^B}\). Finally, they form the 2-form $$W\,{(X,{\lambda ^{\underline A}},{\lambda ^{\underline B}})_{ab}}: = {\rm{i}}\left[ {\lambda _A^{\underline A}{\nabla _{B\,B{\prime}}}\left({{X_{A{\prime}C}}{\varepsilon ^{CD}}\lambda _D^{\underline B}} \right) - \lambda _B^{\underline A}{\nabla _{A\,A{\prime}}}\left({{X_{B{\prime}C}}{\varepsilon ^{CD}}\lambda _D^{\underline B}} \right) + {\varepsilon _{A{\prime}B{\prime}}}\lambda _{\left(A \right.}^{\underline A}\lambda _{\left. 
B \right)}^{\underline B}} \right],$$ and define the angular momentum \(J_{\mathcal S}^{\underline A\underline B}(X)\) with respect to the origin Xa as 1/(8πG) times the integral of \(W{(X,{\lambda ^{\underline A}},{\lambda ^{\underline B}})_{ab}}\) on some closed, orientable spacelike two-surface \({\mathcal S}\). Since this Wab is closed, Δ[aWbc] = 0 (similar to the Nester-Witten 2-form in Section 8.3), the integral \(J_{\mathcal S}^{\underline A\underline B}(X)\) depends only on the homology class of \({\mathcal S}\). Under the 'translation' Xe ↦ Xe + ae of the 'origin' by a \({{\tilde \nabla}_a}\)-constant one-form ae, it transforms as \(J_{\mathcal S}^{\underline A\underline B}(\tilde X) = J_{\mathcal S}^{\underline A\underline B}(X) + {a^{(\underline A}}_{\underline B{\prime}}P_{\mathcal S}^{\underline B)\underline B{\prime}}\), where the components \({a_{\underline A\underline B{\prime}}}\) are taken with respect to the basis \(\{\lambda _A^{\underline A}\}\) in the solution space. Unfortunately, no explicit expression for the angular momentum in terms of the Kerr parameters m and a is given. The Hamilton-Jacobi Method If one is concentrating only on the introduction and study of the properties of quasi-local quantities, and is not interested in the detailed structure of the quasi-local (Hamiltonian) phase space, then perhaps the most natural way to derive the general formulae is to follow the Hamilton-Jacobi method. This was done by Brown and York in deriving their quasi-local energy expression [120, 121]. However, the Hamilton-Jacobi method in itself does not yield any specific construction. Rather, the resulting general expression is similar to a superpotential in the Lagrangian approaches, which should be completed by a choice for the reference configuration and for the generator vector field of the physical quantity (see Section 3.3.3). In fact, the 'Brown-York quasi-local energy' is not a single expression with a single well-defined prescription for the reference configuration. The same general formula with several other, mathematically-inequivalent definitions for the reference configurations are still called the 'Brown-York energy'. A slightly different general expression was used by Kijowski [315], Epp [178], Liu and Yau [338] and Wang and Yau [544]. Although the former follows a different route to derive his expression and the latter three are not connected directly to the canonical analysis (and, in particular, to the Hamilton-Jacobi method), the formalism and techniques that are used justify their presentation in this section. The present section is mainly based on the original papers [120, 121] by Brown and York. Since, however, this is the most popular approach to finding quasi-local quantities and is the subject of very active investigations, especially from the point of view of the applications in black hole physics, this section is perhaps less complete than the previous ones. The expressions of Kijowski, Epp, Liu and Yau and Wang and Yau will be treated in the formalism of Brown and York. The Brown-York expression To motivate the main idea behind the Brown-York definition [120, 121], let us first consider a classical mechanical system of n degrees of freedom with configuration manifold Q and Lagrangian L: TQ × ℝ → ℝ (i.e., the Lagrangian is assumed to be first order and may depend on time explicitly). 
For given initial and final configurations, \((q_1^a,{t_1})\) and \((q_2^a,{t_2})\), respectively, the corresponding action functional is \({I^1}[q(t)]\;: = \int\nolimits_{{t_1}}^{{t_2}} {L({q^a}(t),{{\dot q}^a}(t),t)\;dt}\), where qa(t) is a smooth curve in Q from \({q^a}({t_1}) = q_1^a\) to \({q^a}({t_2}) = q_2^a\) with tangent \({{\dot q}^a}(t)\) at t. (The pair (qa(t), t) may be called a history or world line in the 'spacetime' Q × ℝ.) Let (qa(u, t(u)), t(u)) be a smooth one-parameter deformation of this history, for which (qa(0, t(0)), t(0)) = (qa(t), t), and u ∈ (−ϵ, ϵ) for some ϵ > 0. Then, denoting the derivative with respect to the deformation parameter u at u = 0 by δ, one has the well known expression $$\delta {I^1}[q(t)] = \int\nolimits_{{t_1}}^{{t_2}} {\left({{{\partial L} \over {\partial {q^a}}} - {d \over {dt}}{{\partial L} \over {\partial {{\dot q}^a}}}} \right)} \;(\delta {q^a} - {\dot q^a}\delta t)\;dt + {{\partial L} \over {\partial {{\dot q}^a}}}\delta {q^a}\vert _{{t_1}}^{{t_2}} - \left({{{\partial L} \over {\partial {{\dot q}^a}}}{{\dot q}^a} - L} \right)\;\delta t\vert _{{t_1}}^{{t_2}}.$$ (10.1) Therefore, introducing the Hamilton-Jacobi principal function \({S^1}(q_1^a,{t_1};q_2^a,{t_2})\) as the value of the action on the solution qa(t) of the equations of motion from \((q_1^a,{t_1})\) to \((q_2^a,{t_2})\), the derivative of S1 with respect to \(q_2^a\) gives the canonical momenta \(p_a^1: = (\partial L/\partial {{\dot q}^a})\), while its derivative with respect to t2 gives minus the energy, \(- {E^1} = - (p_a^1{{\dot q}^a} - L)\), at t2. Obviously, neither the action I1 nor the principal function S1 are unique: I[q(t)] ≔ I1[q(t)] − I0[q(t)] for any I0[q(t)] of the form \(\int\nolimits_{{t_1}}^{{t_2}} {(dh/dt)\;dt} \) with arbitrary smooth function h = h(qa(t), t) is an equally good action for the same dynamics. Clearly, the subtraction term I0[q(t)] alters both the canonical momenta and the energy according to \(p_a^1 \mapsto {p_a} = p_a^1 - (\partial h/\partial {q^a})\) and E1 ↦ E = E1 + (∂h/∂t), respectively. The variation of the action and the surface stress-energy tensor The main idea of Brown and York [120, 121] is to calculate the analogous variation of an appropriate first-order action of general relativity (or of the coupled matter + gravity system) and isolate the boundary term that could be analogous to the energy above. To formulate this idea mathematically, Brown and York considered a compact spacetime domain D with topology Σ × [t1,t2] such that Σ × {t} correspond to compact spacelike hypersurfaces Σt; these form a smooth foliation of D and the two-surfaces \({{\mathcal S}_t}: = \partial {\Sigma _t}\) (corresponding to ∂Σ × {t}) form a foliation of the timelike three-boundary 3B of D. Note that this D is not a globally hyperbolic domain.Footnote 14 To ensure the compatibility of the dynamics with this boundary, the shift vector is usually chosen to be tangent to \({{\mathcal S}_t}\) on 3B. The orientation of 3B is chosen to be outward pointing, while the normals, both of \({\Sigma _1}: = {\Sigma _{{t_1}}}\) and of \({\Sigma _2}: = {\Sigma _{{t_2}}}\), are chosen to be future pointing. The metric and extrinsic curvature on Σt will be denoted, respectively, by hab and χab, and those on 3B by γab and Θab. The primary requirement of Brown and York on the action is to provide a well-defined variational principle for the Einstein theory.
This claim leads them to choose for I1 the 'trace K action' (or, in the present notation, the 'trace χ action') for general relativity [572, 573, 534], and the action for the matter fields may be included. (For minimal, nonderivative couplings, the presence of the matter fields does not alter the subsequent expressions.) However, as Geoff Hayward pointed out [243], to have a well-defined variational principle, the 'trace χ action' should in fact be completed by two two-surface integrals, one on \({{\mathcal S}_1}\) and the other on \({{\mathcal S}_2}\). Otherwise, as a consequence of the edges \({{\mathcal S}_1}\) and \({{\mathcal S}_2}\), called the 'joints' (i.e., the nonsmooth parts of the boundary ∂D), the variation of the metric at the points of the edges \({{\mathcal S}_1}\) and \({{\mathcal S}_2}\) could not be arbitrary. (See also [242, 315, 100, 119], where the 'orthogonal boundaries assumption' is also relaxed.) Let η1 and η2 be the scalar product of the outward-pointing normal of 3B and the future-pointing normal of Σ1 and of Σ2, respectively. Then, varying the spacetime metric (for the variation of the corresponding principal function S1) they obtained the following: $$\begin{array}{*{20}c} {\delta {S^1} = \int\nolimits_{{\Sigma _2}} {{1 \over {16\pi G}}\sqrt {\vert h\vert} \,({\chi ^{ab}} - \chi {h^{ab}})\;\delta {h_{ab}}{d^3}x -} \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad} \\ {- \int\nolimits_{{\Sigma _1}} {{1 \over {16\pi G}}\sqrt {\vert h\vert} \,({\chi ^{ab}} - \chi {h^{ab}})\;\delta {h_{ab}}{d^3}x -} \quad \quad \quad \quad \quad \quad \quad} \\ {- \int\nolimits_{{}^3B} {{1 \over {16\pi G}}\sqrt {\vert \gamma \vert} \,({\Theta ^{ab}} - \Theta {\gamma ^{ab}})\;\delta {\gamma _{ab}}\,{d^3}x} - \quad \quad \quad \quad \quad \quad \;\;\;} \\ {\quad - {1 \over {8\pi G}}\oint\nolimits_{{{\mathcal S}_2}} {{{\tanh}^{- 1}}{\eta _2}\delta \sqrt {\vert q\vert} {d^2}x} + {1 \over {8\pi G}}\oint\nolimits_{{{\mathcal S}_1}} {{{\tanh}^{- 1}}{\eta _1}\delta \sqrt {\vert q\vert} {d^2}x}.} \\ \end{array}$$ The first two terms together correspond to the term \(p_a^1\delta {q^a}\vert _{{t_1}}^{{t_2}}\) of Eq. (10.1), and, in fact, the familiar ADM expression for the canonical momentum \({{\tilde p}^{ab}}\) is just \({1 \over {16\pi G}}\sqrt {\vert h\vert} ({\chi ^{ab}} - \chi {h^{ab}})\). The last two terms give the effect of the presence of the nondifferentiable 'joints'. Therefore, it is the third term that should be analogous to the third term of Eq. (10.1). In fact, roughly, this is proportional to the proper time separation of the 'instants' Σ1 and Σ2, and it is reasonable to identify its coefficient as some (quasi-local) analog of the energy. However, just as in the case of the mechanical system, the action (and the corresponding principal function) is not unique, and the principal function should be written as S ≔ S1 − S0, where S0 is assumed to be an arbitrary function of the three-metric on the boundary ∂D = Σ2 ∪3B ∪ Σ1. Then $${\tau ^{ab}}: = - {2 \over {\sqrt {\vert \gamma \vert}}}{{\delta S} \over {\delta {\gamma _{ab}}}} = {1 \over {8\pi G}}({\Theta ^{ab}} - \Theta {\gamma ^{ab}}) + {2 \over {\sqrt {\vert \gamma \vert}}}{{\delta {S^0}} \over {\delta {\gamma _{ab}}}}$$ defines a symmetric tensor field on the timelike boundary 3B, and is called the surface stress-energy tensor. (Since our signature for γab on 3B is (+, −, −) rather than (−, +, +), we should define τab with the extra minus sign, according to Eq. (2.1).) 
Its divergence with respect to the connection \(^3{D_e}\) on 3B determined by γab is proportional to the part γabTbcυc of the energy-momentum tensor, and hence, in particular, τab is divergence-free in vacuum. Therefore, if (3B, γab) admits a Killing vector, say Ka, then, in vacuum $${Q_{\mathcal S}}\,[{\bf{K}}]: = \oint\nolimits_{\mathcal S} {{K_a}{\tau ^{ab}}{{\bar t}_b}\,d{\mathcal S}},$$ the flux integral of τabKb on any spacelike cross section \({\mathcal S}\) of 3B, is independent of the cross section itself, and hence, defines a conserved charge. If Ka is timelike, then the corresponding charge is called a conserved mass, while for spacelike Ka with closed orbits in \({\mathcal S}\) the charge is called angular momentum. (Here \({\mathcal S}\) is not necessarily an element of the foliation \({{\mathcal S}_t}\) of 3B, and \({{\bar t}^a}\) is the unit normal to \({\mathcal S}\) tangent to 3B.) Clearly, the trace-χ action cannot be recovered as the volume integral of some scalar Lagrangian, because it is the Hilbert action plus a boundary integral of the trace χ, and the latter depends on the location of the boundary itself. Such a Lagrangian was found by Pons [431]. This Lagrangian depends on the coordinate system adapted to the boundary of the domain D of integration. An interesting feature of this Lagrangian is that it is second order in the derivatives of the metric, but it depends only on the first time derivative. A detailed analysis of the variational principle, the boundary conditions and the conserved charges is given there. In particular, the asymptotic properties of this Lagrangian are similar to those of the ΓΓ Lagrangian of Einstein, rather than to those of the Hilbert Lagrangian. The general form of the Brown-York quasi-local energy The 3 + 1 decomposition of the spacetime metric yields a 2 + 1 decomposition of the metric γab, as well. Let N and Na be the lapse and the shift of this decomposition on 3B. Then the corresponding decomposition of τab defines the energy, momentum, and spatial-stress surface densities according to $$\varepsilon : = {t_a}{t_b}{\tau ^{ab}} = - {1 \over {8\pi G}}k + {1 \over {\sqrt {\vert q\vert}}}{{\delta {S^0}} \over {\delta N}},$$ $${j_a}: = - {q_{ab}}{t_c}{\tau ^{bc}} = {1 \over {8\pi G}}{A_a} + {1 \over {\sqrt {\vert q\vert}}}{{\delta {S^0}} \over {\delta {N^a}}},$$ $${s^{ab}}: = \Pi _c^a\Pi _d^b{\tau ^{cd}} = {1 \over {8\pi G}}\left[ {{k^{ab}} - k{q^{ab}} + {q^{ab}}{t^e}({\nabla _e}{t_f})\;{\upsilon ^f}} \right] + {2 \over {\sqrt {\vert q\vert}}}{{\delta {S^0}} \over {\delta {q_{ab}}}},$$ where qab is the spacelike two-metric, Ae is the SO(1,1) vector potential on \({{\mathcal S}_t}\), \(\Pi _b^a\) is the projection to \({{\mathcal S}_t}\) introduced in Section 4.1.2, kab is the extrinsic curvature of \({{\mathcal S}_t}\) corresponding to the normal va orthogonal to 3B, and k is its trace. The timelike boundary 3B defines a boost-gauge on the two-surfaces \({{\mathcal S}_t}\) (which coincides with that determined by the foliation Σt in the 'orthogonal boundaries' case). The gauge potential Ae is taken in this gauge. Thus, although ε and ja on \({{\mathcal S}_t}\) are built from the two-surface data (in a particular boost-gauge), the spatial surface stress depends on the part ta(∇atb)vb of the acceleration of the foliation Σt as well. Let ξa be any vector field on 3B tangent to 3B, and ξa = nta + na its 2 + 1 decomposition.
Then we can form the charge integral (10.4) for the leaves \({{\mathcal S}_t}\) of the foliation of 3B $${E_t}[{\xi ^a},{t^a}]: = \oint\nolimits_{{{\mathcal S}_t}} {{\xi _a}{\tau ^{ab}}{t_b}\,d{{\mathcal S}_t}} = \oint\nolimits_{{{\mathcal S}_t}} {(n\varepsilon - {n^a}{j_a})\;d{{\mathcal S}_t}}.$$ Obviously, in general Et[ξa, ta] is not conserved, and depends not only on the vector field ξa and the two-surface data on the particular \({{\mathcal S}_t}\), but on the boost-gauge that 3B defines on \({{\mathcal S}_t}\), i.e., on the timelike normal ta, as well. Brown and York define the general form of their quasi-local energy on \({\mathcal S}: = {{\mathcal S}_t}\) by $${E_{{\rm{BY}}}}({\mathcal S},{t^a}): = {E_t}\;[{t^a},{t^a}],$$ i.e., they link the 'quasi-time-translation' (i.e., the 'generator of the energy') to the preferred unit normal ta of \({{\mathcal S}_t}\). Since the preferred unit normals ta are usually interpreted as a fleet of observers who are at rest with respect to \({{\mathcal S}_t}\), in their spirit the Brown-York-type quasi-local energy expressions are similar to EΣ[ta] given by Eq. (2.6) for the matter fields or Eq. (3.17) for the gravitational 'field' rather than to the charges \({Q_{\mathcal S}}[{\bf{K}}]\). For vector fields ξa = na with closed integral curves in \({{\mathcal S}_t}\) the quantity Et[ξa, ta] might be interpreted as angular momentum corresponding to ξa. The quasi-local energy is still not completely determined, because the 'subtraction term' S0 in the principal function has not been specified. This term is usually interpreted as our freedom to shift the zero point of the energy. Thus, the basic idea of fixing the subtraction term is to choose a 'reference configuration', i.e., a spacetime in which we want to obtain zero quasi-local quantities Et[ξa, ta] (in particular zero quasi-local energy), and identify S0 with the S1 of the reference spacetime. Thus, by Eqs. (10.5) and (10.6) we obtain that $$\begin{array}{*{20}c} {\varepsilon = - {1 \over {8\pi G}}(k - {k^0}),} & {{j_a} = {1 \over {8\pi G}}({A_a} - A_a^0),} \\ \end{array}$$ where k0 and \(A_a^0\) are the reference values of the trace of the extrinsic curvature and SO(1, 1)-gauge potential, respectively. Note that to ensure that k0 and \(A_a^0\) really be the trace of the extrinsic curvature and SO(1, 1)-gauge potential, respectively, in the reference spacetime, they cannot depend on the lapse N and the shift Na. This can be ensured by requiring that S0 be a linear functional of them. We return to the discussion of the reference term in the various specific constructions below. For a definition of the Brown-York energy as a quasi-local energy operator in loop quantum gravity, see [565]. Further properties of the general expressions As we noted, ε, ja, and sab depend on the boost-gauge that the timelike boundary defines on \({{\mathcal S}_t}\). Lau clarified how these quantities change under a boost gauge transformation, where the new boost-gauge is defined by the timelike boundary 3B′ of another domain D′ such that the particular two-surface \({{\mathcal S}_t}\) is a leaf of the foliation of 3B′ as well [333]. If \(\{{{\bar \Sigma}_t}\}\) is another foliation of D such that \(\partial {{\bar \Sigma}_t} = {{\mathcal S}_t}\) and \({{\bar \Sigma}_t}\) is orthogonal to 3B, then the new ε′, j′a, and \(s_{ab}{\prime}\) are built from the old ε, ja, and sab and the 2 + 1 pieces on \({{\mathcal S}_t}\) of the canonical momentum \({{\bar \tilde p}^{ab}}\), defined on \({{\bar \Sigma}_t}\).
Apart from the contribution of S0, these latter quantities are $${j_ \vdash}: = {2 \over {\sqrt {\vert h\vert}}}{\upsilon _a}{\upsilon _b}{\bar \tilde p^{ab}} = {1 \over {8\pi G}}l,$$ $${\hat j_a}: = {2 \over {\sqrt {\vert h\vert}}}{q_{ab}}{\upsilon _c}{\bar \tilde p^{bc}} = {1 \over {8\pi G}}{A_a},$$ $${t_{ab}}: = {2 \over {\sqrt {\vert h\vert}}}{q_{ac}}{q_{bd}}{\bar \tilde p^{cd}} = {1 \over {8\pi G}}\;[{l_{ab}} - {q_{ab}}\;(l + {\upsilon ^e}({\nabla _e}{\upsilon _f}){t^f})],$$ where lab is the extrinsic curvature of \({{\mathcal S}_t}\) corresponding to its normal ta (we denote this by τab in Section 4.1.2), and l is its trace. (By Eq. (10.12), \({{\hat j}_a}\) is not an independent quantity; it is just ja. These quantities were originally introduced as the variational derivatives of the principal function with respect to the lapse, the shift and the two-metric of the radial foliation of Σt [333, 119], which are, in fact, essentially the components of the canonical momentum.) Thus, the required transformation formulae for ε, ja, and sab follow from the definitions and those for the extrinsic curvature and the SO(1, 1) gauge potential of Section 4.1.2. The various boost-gauge invariant quantities that can be built from ε, ja, sab, j⊢, and tab are also discussed in [333, 119]. Lau repeated the general analysis above using the tetrad (in fact, triad) variables and the Ashtekar connection on the timelike boundary, instead of the traditional ADM-type variables [331]. Here the energy and momentum surface densities are re-expressed by the superpotential \({\vee _b}^{ae}\), given by Eq. (3.6), in a frame adapted to the two-surface. (Lau called the corresponding superpotential 2-form the 'Sparling 2-form'.) However, in contrast to the usual Ashtekar variables on a spacelike hypersurface [30], the time gauge cannot be imposed globally on the boundary Ashtekar variables. In fact, while every orientable three-manifold Σ is parallelizable [410], and hence, a globally-defined orthonormal triad can be given on Σ, the only parallelizable, closed, orientable two-surface is the torus. Thus, on 3B, we cannot impose the global time gauge condition with respect to any spacelike two-surface \({\mathcal S}\) in 3B unless \({\mathcal S}\) is a torus. Similarly, the global radial gauge condition in the spacelike hypersurfaces Σt (even in a small open neighborhood of the whole two-surfaces \({{\mathcal S}_t}\) in Σt) can be imposed on a triad field only if the two-boundaries \({{\mathcal S}_t} = \partial {\Sigma _t}\) are all tori. Obviously, these gauge conditions can be imposed on every local trivialization domain of the tangent bundle \(T{{\mathcal S}_t}\) of \({{\mathcal S}_t}\). However, since in Lau's local expressions only geometrical objects (like the extrinsic curvature of the two-surface) appear, they are valid even globally (see also [332]). On the other hand, further investigations are needed to clarify whether or not the quasi-local Hamiltonian, using the Ashtekar variables in the radial-time gauge [333], is globally well defined. In general, the Brown-York quasi-local energy does not have any positivity property even if the matter fields satisfy the dominant energy condition. However, as G. Hayward pointed out [244], for the variations of the metric around the vacuum solutions that extremize the Hamiltonian, called the 'ground states', the quasi-local energy cannot decrease.
On the other hand, the interpretation of this result as a 'quasi-local dominant energy condition' depends on the choice of the time gauge above, which does not exist globally on the whole two-surface \({\mathcal S}\). Booth and Mann [100] shifted the emphasis from the foliation of the domain D to the foliation of the boundary 3B. (These investigations were extended to include charged black holes in [101], where the gauge dependence of the quasi-local quantities is also examined.) In fact, from the point of view of the quasi-local quantities defined with respect to the observers with world lines in 3B and orthogonal to \({\mathcal S}\), it is irrelevant how the spacetime domain D is foliated. In particular, the quasi-local quantities cannot depend on whether or not the leaves Σt of the foliation of D are orthogonal to 3B. As a result, Booth and Mann recovered the quasi-local charge and energy expressions of Brown and York derived in the 'orthogonal boundary' case. However, they suggested a new prescription for the definition of the reference configuration (see Section 10.1.8). Also, they calculated the quasi-local energy for round spheres in the spherically-symmetric spacetimes with respect to several moving observers, i.e., in contrast to Eq. (10.9), they did not link the generator vector field ξa to the normal ta of \({{\mathcal S}_t}\). In particular, the world lines of the observers are not integral curves of (∂/∂t) in the coordinate basis given in Section 4.2.1 on the round spheres. Using an explicit, nondynamic background metric \(g_{ab}^0\), one can construct a covariant first-order Lagrangian \(L({g_{ab}},g_{ab}^0)\) for general relativity [306], and one can use the action \({I_D}[{g_{ab}},g_{ab}^0]\) based on this Lagrangian instead of the trace χ action. Fatibene, Ferraris, Francaviglia, and Raiteri [184] clarified the relationship between the two actions, \({I_D}[{g_{ab}}]\) and \({I_D}[{g_{ab}},g_{ab}^0]\), and the corresponding quasi-local quantities. Considering the reference term S0 in the Brown-York expression as the action of the background metric \(g_{ab}^0\) (which is assumed to be a solution of the field equations), they found that the two first-order actions coincide if the spacetime metrics gab and \(g_{ab}^0\) coincide on the boundary ∂D. Using \(L({g_{ab}},g_{ab}^0)\), they construct the conserved Noether current for any vector field ξa and, by taking its flux integral, define charge integrals \({Q_{\mathcal S}}[{\xi ^a},{g_{ab}},g_{ab}^0]\) on two-surfaces \({\mathcal S}\).Footnote 15 Again, the Brown-York quasi-local quantity Et[ξa, ta] and \({Q_{{{\mathcal S}_t}}}[{\xi ^a},{g_{ab}},g_{ab}^0]\) coincide if the spacetime metrics coincide on the boundary ∂D and if ξa has some special form. Therefore, although the two approaches are basically equivalent under the boundary condition above, this boundary condition is too strong from both the point of view of the variational principle and that of the quasi-local quantities. We will see in Section 10.1.8 that even the weaker boundary condition, which requires only the induced three-metrics on 3B from gab and from \(g_{ab}^0\) to be the same, is still too strong. The Hamiltonians If we can write the action I[q(t)] of our mechanical system in the canonical form \(\int\nolimits_{{t_1}}^{{t_2}} {[{p_a}{{\dot q}^a} - H({q^a},{p_a},t)]\,dt}\), then it is straightforward to read off the Hamiltonian of the system.
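(As an elementary illustration of this last remark, and nothing more than that: for a single free particle with \(L = {1 \over 2}m{{\dot q}^2}\), substituting \(\dot q = p/m\) puts the action in the form $$I[q(t)] = \int\nolimits_{{t_1}}^{{t_2}} {\left[ {p\dot q - {{{p^2}} \over {2m}}} \right]\;dt},$$ from which one reads off the Hamiltonian \(H(q,p) = {p^2}/2m\).)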
Thus, having accepted the trace χ action as the action for general relativity, it is natural to derive the corresponding Hamiltonian in the analogous way. Following this route, Brown and York derived the Hamiltonian corresponding to the 'basic' (or nonreferenced) action I1 as well [121]. They obtained the familiar integral of the sum of the Hamiltonian and the momentum constraints, weighted by the lapse N and the shift Na, respectively, plus Et[Nta + Na, ta], given by Eq. (10.8), as a boundary term. This result is in complete agreement with the expectations, as their general quasi-local quantities can also be recovered as the value of the Hamiltonian on the constraint surface (see also [100]). This Hamiltonian was investigated further in [119]. Here all the boundary terms that appear in the variation of their Hamiltonian are determined and decomposed with respect to the two-surface ∂Σ. It is shown that the change of the Hamiltonian under a boost of Σ yields precisely the boosts of the energy and momentum surface density discussed above. Hawking, Horowitz, and Hunter also derived the Hamiltonian from the trace χ action \(I_D^1[{g_{ab}}]\) both with the orthogonal [241] and nonorthogonal boundary assumptions [242]. They allowed matter fields ΦN, whose dynamics is governed by a first-order action \(I_{{\rm{m}}D}^1[{g_{ab}},{\Phi _N}]\), to be present. However, they treated the reference configuration in a different way. In the traditional canonical analysis of the fields and the geometry based on a noncompact Σ (for example in the asymptotically flat case) one has to impose certain falloff conditions that ensure the finiteness of the action, the Hamiltonian, etc. This finiteness requirement excludes several potentially interesting field + gravity configurations from our investigations. In fact, in the asymptotically flat case we compare the actual matter + gravity configurations with the flat spacetime + vanishing matter fields configuration. Hawking and Horowitz generalized this picture by choosing a static, but otherwise arbitrary, solution \(g_{ab}^0\), \(\Phi _N^0\) of the field equations, considered the timelike boundary 3B of D to be a timelike cylinder 'near the infinity', and considered the action $${I_D}\,[{g_{ab}},{\Phi _N}]: = I_D^1\,[{g_{ab}}] + I_{{\rm{m}}D}^1\,[{g_{ab}},{\Phi _N}] - I_D^1\left[ {g_{ab}^0} \right] - I_{{\rm{m}}D}^1\,[g_{ab}^0,\Phi _N^0]$$ and those matter + gravity configurations that induce the same value on 3B as \(g_{ab}^0\) and \(\Phi _N^0\). Its limit as 3B is 'pushed out to infinity' can be finite, even if the limit of the original (i.e., nonreferenced) action is infinite. Although in the nonorthogonal boundaries case the Hamiltonian derived from the nonreferenced action contains terms coming from the 'joints', by the boundary conditions at 3B they are canceled from the referenced Hamiltonian. This latter Hamiltonian coincides with that obtained in the orthogonal boundaries case. Both the ADM and the Abbott-Deser energy can be recovered from this Hamiltonian [241], and the quasi-local energy for spheres in domains with nonorthogonal boundaries in the Schwarzschild solution is also calculated [242]. A similar Hamiltonian, including the 'joints' or 'corner' terms, was obtained by Francaviglia and Raiteri [191] for the vacuum Einstein theory (and for Einstein-Maxwell systems in [9]), using a Noether charge approach. Their formalism, using the language of jet bundles, is, however, slightly more sophisticated than that common in general relativity.
Booth and Fairhurst [95] reexamined the general form of the Brown-York energy and angular momentum from a Hamiltonian point of view.Footnote 16 Their starting point is the observation that the domain D is not isolated from its environment, thus, the quasi-local Hamiltonian cannot be time independent. Therefore, instead of the standard Hamiltonian formalism for the autonomous systems, a more general formalism, based on the extended phase space, must be used. This phase space consists of the usual bulk configuration and momentum variables \(({h_{ab}},{{\tilde p}^{ab}})\) on the typical three-manifold Σ and the time coordinate t, the space coordinates xA on the two-boundary \({\mathcal S} = \partial \Sigma\), and their conjugate momenta π and πa. The second important observation of Booth and Fairhurst is that the Brown-York boundary conditions are too restrictive. The two-metric, lapse, and shift need not be fixed, but their variations corresponding to diffeomorphisms on the boundary must be allowed. Otherwise diffeomorphisms that are not isometries of the three-metric γab on 3B cannot be generated by any Hamiltonian. Relaxing the boundary conditions appropriately, they show that there is a Hamiltonian on the extended phase space, which generates the correct equations of motions, and the quasi-local energy and angular momentum expression of Brown and York are just (minus) the momentum π conjugate to the time coordinate t. The only difference between the present and the original Brown-York expressions is the freedom in the functional form of the unspecified reference term. Because of the more restrictive boundary conditions of Brown and York, their reference term is less restricted. Choosing the same boundary conditions in both approaches, the resulting expressions coincide completely. The flat space and light cone references The quasi-local quantities introduced above become well defined only if the subtraction term S0 in the principal function is specified. The usual interpretation of a choice for S0 is the calibration of the quasi-local quantities, i.e., fixing where to take their zero value. The only restriction on S0 that we had is that it must be a functional of the metric γab on the timelike boundary 3B. To specify S0, it seems natural to expect that the principal function S be zero in Minkowski spacetime [216, 120]. Then S0 would be the integral of the trace Θ0 of the extrinsic curvature of 3B, if it were embedded in Minkowski spacetime with the given intrinsic metric γab. However, a general Lorentzian three-manifold (3B, γab) cannot be isometrically embedded, even locally, into the Minkowski spacetime. (For a detailed discussion of this embedability, see [120] and Section 10.1.8.) Another assumption on S0 might be the requirement of the vanishing of the quasi-local quantities, or of the energy and momentum surface densities, or only of the energy surface density ε, in some reference spacetime, e.g., in Minkowski or anti-de Sitter spacetime. Assuming that S0 depends on the lapse N and shift Na linearly, the functional derivatives (∂S0/∂N) and (∂S0/∂Na) depend only on the two-metric qab and on the boost-gauge that 3B defined on \({{\mathcal S}_t}\). Therefore, ε and ja take the form (10.10), and, by the requirement of the vanishing of ε in the reference spacetime it follows that k0 should be the trace of the extrinsic curvature of \({{\mathcal S}_t}\) in the reference spacetime. 
Thus, it would be natural to fix k0 as the trace of the extrinsic curvature of \({{\mathcal S}_t}\), when (\({{\mathcal S}_t}\), qab) is embedded isometrically into the reference spacetime. However, this embedding is far from unique (since, in particular, there are two independent normals of \({{\mathcal S}_t}\) in the spacetime and it would not be fixed which normal should be used to calculate k0), and hence the construction would be ambiguous. On the other hand, one could require (\({{\mathcal S}_t}\), qab) to be embedded into flat Euclidean three-space, i.e., into a spacelike hyperplane of Minkowski spacetime. This is the choice of Brown and York [120, 121]. In fact, as we already noted in Section 4.1.3, for two-surfaces with everywhere positive scalar curvature, such an embedding exists and is unique. (The order of the differentiability of the metric is reduced in [261] to C2.) A particularly interesting two-surface that cannot be isometrically embedded into the flat three-space is the event horizon of the Kerr black hole, if the angular momentum parameter a exceeds the irreducible mass (but is still not greater than the mass parameter m), i.e., if \(\sqrt 3 m < 2\vert a\vert \; < 2m\) [463]. (On the other hand, for its global isometric embedding into ℝ4, see [203].) Thus, the construction works for a large class of two-surfaces, but certainly not for every potentially interesting two-surface. The convexity condition is essential. It is known that the (local) isometric embedability of (\({\mathcal S}\), qab) into flat three-space with extrinsic curvature \(k_{ab}^0\) is equivalent to the Gauss-Codazzi-Mainardi equations \({\delta _a}({k^{0a}}_b - \delta _b^a{k^0}) = 0\) and \(^{\mathcal S}R - {({k^0})^2} + k_{ab}^0{k^{0ab}} = 0\). Here δa is the intrinsic Levi-Civita covariant derivative and \(^{\mathcal S}R\) is the corresponding curvature scalar on \({\mathcal S}\) determined by qab. Thus, for given qab and (actually the flat) embedding geometry, these are three equations for the three components of \(k_{ab}^0\), and hence, if the embedding exists, qab determines k0. Therefore, the subtraction term k0 can also be interpreted as a solution of an under-determined elliptic system, which is constrained by a nonlinear algebraic equation. In this form the definition of the reference term is technically analogous to the definition of those in Sections 7, 8, and 9, but, by the nonlinearity of the equations, in practice it is much more difficult to find the reference term k0 than the spinor fields in the constructions of Sections 7, 8, and 9. Accepting this choice for the reference configuration, the reference SO(1,1) gauge potential \(A_a^0\) will be zero in the boost-gauge in which the timelike normal of \({{\mathcal S}_t}\) in the reference Minkowski spacetime is orthogonal to the spacelike three-plane, because this normal is constant. Thus, to summarize, for convex two-surfaces, the flat space reference of Brown and York is uniquely determined, k0 is determined by this embedding, and \(A_a^0 = 0\). Then \(8\pi G{S^0} = - \int\nolimits_{{{\mathcal S}_t}} {N{k^0}} d{{\mathcal S}_t}\), from which sab can be calculated (if needed). The procedure is similar if, instead of a spacelike hyperplane of Minkowski spacetime, a spacelike hypersurface of
Sanskrit prosody

Sanskrit prosody or Chandas refers to one of the six Vedangas, or limbs of Vedic studies.[1] It is the study of poetic metres and verse in Sanskrit.[1] This field of study was central to the composition of the Vedas, the scriptural canons of Hinduism, so central that some later Hindu and Buddhist texts refer to the Vedas as Chandas.[1][2] The Chandas, as developed by the Vedic schools, included both linear and non-linear systems.[3] The system was organized around seven major metres, according to Annette Wilke and Oliver Moebus, called the "seven birds" or "seven mouths of Brihaspati", and each had its own rhythm, movements and aesthetics wherein a non-linear structure (aperiodicity) was mapped into a four-verse polymorphic linear sequence.[3] Sanskrit metres include those based on a fixed number of syllables per verse, and those based on a fixed number of morae per verse.[4] The Gayatri metre was structured with 3 verses of 8 syllables (6x4), the Usnih with 2 verses of 8 and 1 of 12 syllables (7x4), the Anustubh with 4 verses of 8 syllables (8x4), the Brihati with 2 verses of 8 followed by 1 each of 12 and 8 syllables (9x4), the Pankti with 5 verses of 8 syllables (10x4), the Tristubh with 4 verses of 11 syllables (11x4), and the Jagati metre with 4 verses of 12 syllables each (12x4).[5] In Vedic culture, the Chandas were revered for their perfection and resonance, with the Gayatri metre treated as the most refined and sacred, and one that continues to be part of modern Hindu culture as part of Yoga and hymns of meditation at sunrise.[6] Extant ancient manuscripts on Chandas include Pingala's Chandah Sutra, while an example of a medieval Sanskrit prosody manuscript is Kedara Bhatta's Vrittaratnakara.[7][note 1] The most exhaustive compilations of Sanskrit prosody describe over 600 metres.[10] This is a substantially larger repertoire than in any other metrical tradition.[11]

Etymology

The term Chanda (Sanskrit: छन्द) means "pleasing, alluring, lovely, delightful or charming", and is based on the root chad, which means "esteemed to please, to seem good, feel pleasant and/or something that nourishes, gratifies or is celebrated".[12] The term also refers to "any metrical part of the Vedas or other composition".[12]

[Image caption: Ancient Sanskrit written on hemp-based paper. Hemp fibre was commonly used in the production of paper from 200 BCE to the late 1800s.]
The hymns of Rigveda include the names of metres, which implies that the discipline of Chandas (Sanskrit prosody) emerged in the 2nd-millennium BCE.[4][note 2] The Brahmanas layer of Vedic literature, composed between 900 BCE and 700 BCE, contains a complete expression of the Chandas.[15] Panini's treatise on Sanskrit grammar distinguishes Chandas as the verses that compose the Vedas, from Bhashya (Sanskrit: भाष्य), the language used for learned discourse and scholastic discussion of the Vedas.[16] The Vedic Sanskrit texts employ fifteen metres, of which seven are common, and the most frequent are three (8-, 11- and 12-syllable lines).[17] The post-Vedic texts, such as the epics as well as other classical literature of Hinduism, deploy both linear and non-linear metres, many of which are based on syllables and others based on diligently crafted verses based on repeating numbers of morae (matra per foot).[17] About 150 treatises on Sanskrit prosody from the classical era are known, in which some 850 metres were defined and studied by the ancient and medieval Hindu scholars.[17] The ancient Chandahsutra of Pingala, also called Pingala Sutras, is the oldest Sanskrit prosody text that has survived into the modern age, and it is dated to between 600 and 200 BCE.[18][19] Like all Sutras, the Pingala text is distilled information in the form of aphorisms, and these were widely commented on through the bhashya tradition of Hinduism. Of the various commentaries, those widely studied are the three 6th century texts - Jayadevacchandas, Janashrayi-Chhandovichiti and Ratnamanjusha,[20] the 10th century commentary by Karnataka prosody scholar Halayudha, who also authored the grammatical Shastrakavya and Kavirahasya (literally, The Poet's Secret).[18] Other important historical commentaries include those by the 11th-century Yadavaprakasha and 12th-century Bhaskaracharya, as well as Jayakriti's Chandonushasana, and Chandomanjari by Gangadasa.[18][20] There is no word without meter, nor is there any meter without words. —Natya Shastra[21] Major encyclopedic and arts-related Hindu texts from the 1st and 2nd millennium CE contain sections on Chandas. For example, the chapters 328 to 335 of the Agni Purana,[22][23] chapter 15 of the Natya Shastra, chapter 104 of the Brihat Samhita, the Pramodajanaka section of the Manasollasa contain embedded treatises on Chandas.[24][25][26] Elements[edit] Nomenclature[edit] A syllable (Akshara, अक्षर), in Sanskrit prosody, is a vowel following one or more consonants, or a vowel without any.[27] The short syllable is one with short (hrasva) vowels, which are a (अ), i (इ), u (उ), ṛ (ऋ) and ḷ (ऌ). The long syllable is defined as one with long (dirgha) vowels, which are ā (आ), ī (ई), ū (ऊ), ṝ (ॠ), e (ए), ai (ऐ), o (ओ) and au (औ).[27] A stanza (padya) is defined in Sanskrit prosody as a group of four quarters (pādas).[27] Indian prosody studies developed two types of stanzas. 
Vritta stanzas are those that are crafted with a precise number of syllables, while Jati stanzas are those that are based on syllabic instants (morae, matra).[27] The Vritta[note 3] stanzas are further recognized in three forms: Samavritta, where the four quarters are similar in their embedded mathematical pattern; Ardhasamavritta, where alternate verses keep a similar syllabic structure; and Vishamavritta, where all four quarters are different.[27] A regular Vritta is defined as one where the total number of syllables in each verse is less than or equal to 26 syllables, while irregulars contain more.[27] When the metre is based on morae (matra), a short syllable is counted as one mora, and a long syllable is counted as two morae.[27]

Classification

The metres found in classical Sanskrit poetry are sometimes alternatively classified into three kinds.[29]
- Syllabic verse (akṣaravṛtta or aksharavritta): metres depend on the number of syllables in a verse, with relative freedom in the distribution of light and heavy syllables. This style is derived from older Vedic forms, and is found in the great epics, the Mahabharata and the Ramayana.
- Syllabo-quantitative verse (varṇavṛtta or varnavritta): metres depend on syllable count, but the light-heavy patterns are fixed.
- Quantitative verse (mātrāvṛtta or matravritta): metres depend on duration, where each verse-line has a fixed number of morae, usually grouped in sets of four.

Light and heavy syllables

In most of Sanskrit poetry the primary determinant of a metre is the number of syllables in a unit of verse, called the pāda ("foot" or "quarter"). Metres of the same length are distinguished by the pattern of laghu ("light") and guru ("heavy") syllables in the pāda. The rules distinguishing laghu and guru syllables are the same as those for non-metric prose, and these are specified in Vedic Shiksha texts that study the principles and structure of sound, such as the Pratishakhyas.

Metre is a veritable ship, for those who want to go, across the vast ocean of poetry. —Dandin, 7th century[32]

Some of the significant rules are:[30][31]
- A syllable is laghu only if its vowel is hrasva ("short") and followed by at most one consonant before another vowel is encountered.
- A syllable with an anusvara ('ṃ') or a visarga ('ḥ') is always guru.
- All other syllables are guru, either because the vowel is dīrgha ("long"), or because the hrasva vowel is followed by a consonant cluster.
- The hrasva vowels are the short monophthongs: 'a', 'i', 'u', 'ṛ' and 'ḷ'.
- All other vowels are dirgha: 'ā', 'ī', 'ū', 'ṝ', 'e', 'ai', 'o' and 'au'. (Note that, morphologically, the last four vowels are actually the diphthongs 'ai', 'āi', 'au' and 'āu', as the rules of sandhi in Sanskrit make clear.)[33]
- Gangadasa Pandita states that the last syllable in each pāda may be considered guru, but a guru at the end of a pāda is never counted as laghu.[note 4]

For measurement by mātrā (morae), laghu syllables count as one unit, and guru syllables as two units.[34]

Exceptions

The Hindu prosody treatises crafted exceptions to these rules based on their study of sound, and these apply in both Sanskrit and Prakrit prosody. For example, the last vowel of a verse, regardless of its natural length, may be considered short or long according to the requirement of the metre.[27] Exceptions also apply to special sounds, of the type प्र, ह्र, ब्र and क्र.[27]

Gaṇa

Gaṇa (Sanskrit, "group") is the technical term for the pattern of light and heavy syllables in a sequence of three.
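The laghu/guru weighting and the mātrā counting described above are easy to make mechanical; a minimal sketch follows before the gaṇa patterns are taken up in detail. The pattern string here is invented for illustration, and the sketch assumes the pāda has already been scanned into light/heavy marks according to the rules listed above.

```python
# Minimal sketch: 'L' marks a laghu (light) syllable, 'H' a guru (heavy) one.
# The example pattern is hypothetical; scanning real Sanskrit text into L/H
# marks would itself follow the rules listed above.

def syllable_count(weights: str) -> int:
    return len(weights)

def matra_count(weights: str) -> int:
    # laghu = 1 matra, guru = 2 matras
    return sum(1 if w == "L" else 2 for w in weights)

pada = "LHLHHLHL"                 # a hypothetical 8-syllable pada
print(syllable_count(pada))       # 8  -> the figure relevant to syllabic (varna-vritta) metres
print(matra_count(pada))          # 12 -> the figure relevant to morae-based (matra-vritta) metres
```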
The gaṇa is used in treatises on Sanskrit prosody to describe metres, according to a method first propounded in Pingala's Chandahsutra. Pingala organizes the metres using two units:[35]
- l: a "light" syllable (L), called laghu
- g: a "heavy" syllable (H), called guru

Pingala's method described any metre as a sequence of gaṇas, or triplets of syllables (trisyllabic feet), plus the excess, if any, as single units. There being eight possible patterns of light and heavy syllables in a sequence of three, Pingala associated a letter with each, allowing the metre to be described compactly as an acronym.[36] Each of these has its Greek prosody equivalent, as listed below.

The Ganas (गण, class),[37][38] giving for each the weight pattern, the prosody symbols, the spoken style, and the Greek equivalent:
- Na-gaṇa: L-L-L (u u u), da da da; tribrach
- Ma-gaṇa: H-H-H (— — —), DUM DUM DUM; molossus
- Ja-gaṇa: L-H-L (u — u), da DUM da; amphibrach
- Ra-gaṇa: H-L-H (— u —), DUM da DUM; cretic, amphimacer
- Bha-gaṇa: H-L-L (— u u), DUM da da; dactyl
- Sa-gaṇa: L-L-H (u u —), da da DUM; anapaest, antidactylus
- Ya-gaṇa: L-H-H (u — —), da DUM DUM; bacchius
- Ta-gaṇa: H-H-L (— — u), DUM DUM da; antibacchius

(For comparison, the Greek disyllabic feet are: u u pyrrhic or dibrach, u — iamb, — u trochee or choree, — — spondee.)

Pingala's order of the gaṇas, viz. m-y-r-s-t-j-bh-n, corresponds to a standard enumeration in binary, when the three syllables in each gaṇa are read right-to-left with H=0 and L=1.

A mnemonic

The word yamātārājabhānasalagāḥ (or yamātārājabhānasalagaṃ) is a mnemonic for Pingala's gaṇas, developed by ancient commentators, using the vowels "a" and "ā" for light and heavy syllables respectively with the letters of his scheme. In the form without a grammatical ending, yamātārājabhānasalagā is self-descriptive, where the structure of each gaṇa is shown by its own syllable and the two following it:[39]
- ya-gaṇa: ya-mā-tā = L-H-H
- ma-gaṇa: mā-tā-rā = H-H-H
- ta-gaṇa: tā-rā-ja = H-H-L
- ra-gaṇa: rā-ja-bhā = H-L-H
- ja-gaṇa: ja-bhā-na = L-H-L
- bha-gaṇa: bhā-na-sa = H-L-L
- na-gaṇa: na-sa-la = L-L-L
- sa-gaṇa: sa-la-gā = L-L-H

The mnemonic also encodes the light "la" and heavy "gā" unit syllables of the full scheme. The truncated version obtained by dropping the last two syllables, viz. yamātārājabhānasa, can be read cyclically (i.e., wrapping around to the front). It is an example of a De Bruijn sequence.[40]

Comparison with Greek and Latin prosody

Sanskrit prosody shares similarities with Greek and Latin prosody. For example, in all three, rhythm is determined by the amount of time needed to pronounce a syllable, and not by stress (quantitative metre).[41][42] Each eight-syllable line, for instance in the Rigveda, is approximately equivalent to the Greek iambic dimeter.[28] The sacred Gayatri metre of the Hindus consists of three such iambic dimeter lines, and this embedded metre alone is at the heart of about 25% of the entire Rigveda.[28] The gaṇas are, however, not the same as the foot in Greek prosody.
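The binary correspondence and the De Bruijn property noted above can be checked mechanically. A minimal sketch (transliteration simplified, with "aa" standing for ā; the single-character keys abbreviate the gaṇa initials, so 'b' stands for bha):

```python
# Small sketch of the two facts stated above: (1) each gana's pattern can be read
# off the mnemonic ya-maa-taa-raa-ja-bhaa-na-sa-la-gaa as its own syllable plus
# the two that follow; (2) reading a gana right-to-left with H=0 and L=1 gives
# Pingala's order m-y-r-s-t-j-bh-n as the binary numbers 0..7.

mnemonic = ["ya", "maa", "taa", "raa", "ja", "bhaa", "na", "sa", "la", "gaa"]
weights  = ["L",  "H",   "H",   "H",   "L",  "H",    "L",  "L",  "L",  "H"]   # short a = L, long aa = H

ganas = {}
for i, name in enumerate(mnemonic[:8]):          # the last two syllables (la, gaa) are the single units
    ganas[name[0]] = "".join(weights[i:i + 3])   # each gana = its own syllable plus the next two

print(ganas)
# {'y': 'LHH', 'm': 'HHH', 't': 'HHL', 'r': 'HLH', 'j': 'LHL', 'b': 'HLL', 'n': 'LLL', 's': 'LLH'}

def pingala_index(pattern: str) -> int:
    # read right-to-left, heavy = 0, light = 1
    return int("".join("0" if w == "H" else "1" for w in reversed(pattern)), 2)

print(sorted(ganas, key=lambda g: pingala_index(ganas[g])))
# ['m', 'y', 'r', 's', 't', 'j', 'b', 'n']  -- Pingala's m-y-r-s-t-j-bh-n

# The truncated mnemonic (first 8 syllables), read cyclically, contains every
# 3-syllable pattern exactly once: a De Bruijn sequence B(2, 3).
cycle = weights[:8]
windows = {"".join(cycle[i % 8] for i in range(k, k + 3)) for k in range(8)}
print(len(windows) == 8)   # True
```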
The metrical unit in Sanskrit prosody is the verse (line, pada), while in Greek prosody it is the foot.[43] Sanskrit prosody allows an elasticity similar to Latin Saturnian verse, uncustomary in Greek prosody.[43] The principles of both Sanskrit and Greek prosody probably go back to Proto-Indo-European times, because similar principles are found in the ancient Persian, Italian, Celtic, and Slavonic branches of Indo-European.[44]

The seven birds: major Sanskrit metres

The Vedic Sanskrit prosody included both linear and non-linear systems.[3] The field of Chandas was organized around seven major metres, state Annette Wilke and Oliver Moebus, called the "seven birds" or "seven mouths of Brihaspati",[note 5] and each had its own rhythm, movements and aesthetics. The system mapped a non-linear structure (aperiodicity) into a four-verse polymorphic linear sequence.[3] The seven major ancient Sanskrit metres are the three-verse, 8-syllable Gayatri, the four-verse, 8-syllable Anustubh, the four-verse, 11-syllable Tristubh, the four-verse, 12-syllable Jagati, and the mixed-pada metres named Ushnih, Brihati and Pankti.

गायत्रेण प्रति मिमीते अर्कमर्केण साम त्रैष्टुभेन वाकम् । वाकेन वाकं द्विपदा चतुष्पदाक्षरेण मिमते सप्त वाणीः ॥२४॥

With the Gayatri, he measures a song; with the song – a chant; with the Tristubh – a recited stanza; With the stanza of two feet and four feet – a hymn; with the syllable they measure the seven voices. ॥24॥
— Rigveda 1.164.24, translated by Tatyana J. Elizarenkova[46]

The major ancient metres in Sanskrit prosody,[5][47] giving for each its structure, mapped sequence,[5] number of varieties,[48] and usage:[49]
- Gayatri: 24 syllables; 3 verses of 8 syllables; mapped sequence 6x4; 11 varieties; common in Vedic texts. Example: Rigveda 7.1.1-30, 8.2.14[50]
- Ushnih: 2 verses of 8 and 1 of 12 syllables; 7x4; 8 varieties; Vedas, not common. Example: Rigveda 1.8.23-26[51]
- Anushtubh: 4 verses of 8 syllables; 8x4; 12 varieties; most frequent in post-Vedic Sanskrit metrical literature; embedded in the Bhagavad Gita, the Mahabharata, the Ramayana, the Puranas, Smritis and scientific treatises. Example: Rigveda 8.69.7-16, 10.136.7[52]
- Brihati: 2 verses of 8 followed by 1 each of 12 and 8 syllables; 9x4; 12 varieties; Vedas, rare. Example: Rigveda 5.1.36, 3.9.1-8[53]
- Pankti: 5 verses of 8 syllables; 10x4; 14 varieties; uncommon, found with Tristubh. Example: Rigveda 1.191.10-12[54]
- Tristubh: 4 verses of 11 syllables; 11x4; 22 varieties; second in frequency in post-Vedic Sanskrit metric literature, dramas, plays, parts of the Mahabharata, major 1st-millennium Kavyas. Example: Rigveda 4.50.4, 7.3.1-12[55]
- Jagati: 4 verses of 12 syllables; 12x4; 30 varieties; third most common, typically alternates with Tristubh in the same text, also found in separate cantos. Example: Rigveda 1.51.13, 9.110.4-12[56]

Other syllable-based metres

Beyond these seven metres, ancient and medieval era Sanskrit scholars developed numerous other syllable-based metres (Akshara-chandas). Examples include Atijagati (13x4, in 16 varieties), Sakkari (14x4, in 20 varieties), Atisakkari (15x4, in 18 varieties), Ashti (16x4, in 12 varieties), Atyashti (17x4, in 17 varieties), Dhriti (18x4, in 17 varieties), Atidhriti (19x4, in 13 varieties), Kriti (20x4, in 4 varieties) and so on.[57][58]

Morae-based metres

In addition to the syllable-based metres, Hindu scholars in their prosody studies developed Gana-chandas or Gana-vritta, that is, metres based on mātrās (morae, instants).[59][58][60] The metric foot in these is designed from laghu (short) morae or their equivalents. Sixteen classes of these instants-based metres are enumerated in Sanskrit prosody, and each class has sixteen sub-species.
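A quick arithmetic check of the table of the seven major metres above: for each metre, the per-verse syllable counts should sum to the "mapped sequence" value times four. A minimal sketch (the structures are taken from the table, with the Brihati read as 8+8+12+8 syllables per the description at the head of the article):

```python
# Arithmetic check of the "mapped sequence" column: syllables per verse should sum to n x 4.
metres = {
    "Gayatri":   ([8, 8, 8],         6),
    "Ushnih":    ([8, 8, 12],        7),
    "Anushtubh": ([8, 8, 8, 8],      8),
    "Brihati":   ([8, 8, 12, 8],     9),
    "Pankti":    ([8, 8, 8, 8, 8],  10),
    "Tristubh":  ([11, 11, 11, 11], 11),
    "Jagati":    ([12, 12, 12, 12], 12),
}
for name, (verses, n) in metres.items():
    total = sum(verses)
    print(f"{name:>9}: {total} syllables = {n} x 4 -> {total == n * 4}")   # all True
```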
Examples of such morae-based metres include Arya, Udgiti, Upagiti, Giti and Aryagiti.[61] This style of composition is less common than syllable-based metric texts, but it is found in important texts of Hindu philosophy, drama, lyrical works and Prakrit poetry.[17][62] The entire Samkhyakarika text of the Samkhya school of Hindu philosophy is composed in the Arya metre, as are many chapters in the mathematical treatises of Aryabhata, and some texts of Kalidasa.[61][63]

Hybrid metres

Hindu scholars also developed a hybrid class of Sanskrit metres, which combined features of the syllable-based metres and the morae-based metres.[64][58] These were called Matra-chandas. Examples of this group of metres include Vaitaliya, Matrasamaka and Gityarya.[65] The Hindu texts Kirātārjunīya and Naishadha Charita, for instance, feature complete cantos that are entirely crafted in the Vaitaliya metre.[64][66] The Hanuman Chalisa, a 40-verse hymn of praise to Hanuman, is composed in Matra-chanda.[67]

Metres as tools for literary architecture

The Vedic texts, and later Sanskrit literature, were composed in a manner where a change in metres was an embedded code to inform the reciter and audience that it marks the end of a section or chapter.[47] Each section or chapter of these texts uses identical metres, rhythmically presenting their ideas and making it easier to remember, recall and check for accuracy.[47] Similarly, the authors of Sanskrit hymns used metres as tools of literary architecture, wherein they coded a hymn's end by frequently using a verse of a metre different from that used in the hymn's body.[47] However, they never used the Gayatri metre to end a hymn or composition, possibly because it enjoyed a special level of reverence in Hindu texts.[47] In general, all metres were sacred, and the Vedic chants and hymns attribute the perfection and beauty of the metres to divine origins, referring to them as mythological characters or equivalent to gods.[47]

Use of metre to identify corrupt texts

The verse perfection in the Vedic texts, the verse Upanishads[note 6] and the Smriti texts has led some Indologists from the 19th century onwards to identify suspected portions of texts where a line or section is off the expected metre.[68][69] Some editors have controversially used this metri causa principle to emend Sanskrit verses, assuming that their creative conjectural rewriting with similar-sounding words will restore the metre.[68] This practice has been criticized, states Patrick Olivelle, because such modern corrections may change the meaning, add to the corruption, and impose the modern pronunciation of words on ancient times when the same syllable or morae may have been pronounced differently.[68][69] Large and significant changes in metre, wherein the metre of succeeding sections returns to that of earlier sections, are sometimes thought to be an indication of later interpolations and insertion of text into a Sanskrit manuscript, or that the text is a compilation of works of different authors and time periods.[70][71][72] However, some metres are easy to preserve, and a consistent metre does not mean an authentic manuscript. This practice has also been questioned when applied to certain texts such as ancient and medieval era Buddhist manuscripts, in view of the fact that variation may reflect the versatility of the author or changing styles over the author's lifetime.[73]

Texts

Chandah Sutra

When halved, (record) two. When unity (is subtracted, record) sunya.
When sunya, (multiply by) two. When halved, multiply (by) itself (squared). —Chandah Sutra 8.28-31 6th-2nd century BCE[74][75] The Chandah Sutra is also known as Chandah sastra, or Pingala Sutras after its author Pingala. It is the oldest Hindu treatise on prosody to have survived into the modern era.[18][19] This text is structured in 8 books, with a cumulative total of 310 sutras.[76] It is a collection of aphorisms predominantly focussed on the art of poetic metres, and presents some mathematics in the service of music.[74][77] Bhashyas[edit] The 11th-century bhashya on Pingala's Chandah Sutra by Ratnakarashanti, called Chandoratnakara, added new ideas to Prakrit poetry, and this was influential to prosody in Nepal, and to the Buddhist prosody culture in Tibet where the field was also known as chandas or sdeb sbyor.[45] Post-vedic poetry, epics[edit] The Anushtubh Vedic metre has been the most popular in classical and post-classical Sanskrit works.[49] It is also octosyllabic, next harmonic to Gayatri metre that is sacred to the Hindus, and it appears either in free verse or fixed syllabic form (shloka). It has a rhythm, offers flexibility and creative space, but has embedded rules such as its sixth syllable is always long, the fifth syllable is always short; often, the seventh syllable in even numbered lines of a stanza is short (iambic) as well.[49] The Anushtubh is present in Vedic texts, but its presence is minor, and Trishtubh and Gayatri metres dominate in the Rigveda for example.[78] A dominating presence of the Anushtubh metre in a text is a marker that the text is likely post-Vedic.[79] The Mahabharata, for example, features many verse metres in its chapters, but an overwhelming proportion of the stanzas, 95% are shlokas of the anustubh type, and most of the rest are tristubhs.[80] The Hindu epics and the post-Vedic classical Sanskrit poetry is typically structured as quatrains of four pādas (verses), with the metrical structure of each pāda completely specified. In some cases, pairs of pādas may be scanned together as the hemistichs of a couplet.[81] It is then normal for the pādas comprising a pair to have different structures, to complement each other aesthetically. Otherwise the four pādas of a stanza have the same structure. Chandas and mathematics[edit] The attempt to identify the most pleasing sounds and perfect compositions led ancient Indian scholars to study permutations and combinatorial methods of enumerating musical metres.[82] The Pingala Sutras includes a discussion of binary system rules to calculate permutations of Vedic metres.[77][83][84] Pingala, and more particularly the classical Sanskrit prosody period scholars, developed the art of Matrameru, which is the field of counting sequences such as 0, 1, 1, 2, 3, 5, 8 and so on (Fibonacci numbers), in their prosody studies.[77][83][85] The first five rows of the Pascal's triangle, also called the Halayudha's triangle.[86] Halayudha discusses this and more in his Sanskrit prosody bhashya on Pingala. The 10th-century Halāyudha's commentary on Pingala Sutras, developed meruprastāra, which mirrors the Pascal's triangle in the west, and now also called as the Halayudha's triangle in books on mathematics.[77][86] The 11th-century Ratnakarashanti's Chandoratnakara describes algorithms to enumerate binomial combinations of metres through pratyaya. 
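The counting rules just described are small enough to state as code. The following is a minimal sketch, not a transcription of any of the texts cited above: the function names are mine, the reading of Chandah Sutra 8.28-31 as a halving-and-squaring recipe for 2^n follows the standard interpretation quoted above, and the Fibonacci-style count for mātrā-metres is the usual Matrameru reading. The six pratyaya that formalize these procedures are listed next.

```python
# Minimal sketch of the counting ideas described above; interpretations follow the
# standard readings, and the function names are mine.

def pingala_power_of_two(n: int) -> int:
    """Chandah Sutra 8.28-31 read as an algorithm for 2**n:
    halve the exponent and square when even ("when halved, multiply by itself"),
    subtract one and double when odd ("when unity is subtracted ... multiply by two")."""
    if n == 0:
        return 1
    if n % 2 == 0:
        half = pingala_power_of_two(n // 2)
        return half * half
    return 2 * pingala_power_of_two(n - 1)

print(pingala_power_of_two(8))   # 256 possible light/heavy patterns for an 8-syllable pada

def meruprastara_row(n: int) -> list:
    """Row n of Halayudha's meruprastara (Pascal's triangle): entry k counts the
    n-syllable metres containing exactly k guru syllables."""
    row = [1]
    for _ in range(n):
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

print(meruprastara_row(8))       # [1, 8, 28, 56, 70, 56, 28, 8, 1]

def matra_meru(m: int) -> int:
    """Number of laghu/guru sequences worth m matras (laghu = 1, guru = 2):
    the Fibonacci-like recurrence of the Matrameru."""
    a, b = 1, 1                  # counts for m = 0 and m = 1
    for _ in range(m - 1):
        a, b = b, a + b
    return b if m >= 1 else 1

print([matra_meru(m) for m in range(1, 9)])   # [1, 2, 3, 5, 8, 13, 21, 34]
```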
For a given class (length), the six pratyaya were:[87] prastāra, the "table of arrangement": a procedure for enumerating (arranging in a table) all metres of the given length, naṣṭa: a procedure for finding a metre given its position in the table (without constructing the whole table), uddiṣṭa: a procedure for finding the position in the table of a given metre (without constructing the whole table), laghukriyā or lagakriyā: calculation of the number of metres in the table containing a given number of laghu (or guru) syllables, saṃkhyā: calculation of the total number of metres in the table, adhvan: calculation of the space needed to write down the prastāra table of a given class (length). Some authors also considered, for a given metre, (A) the number of guru syllables, (B) the number of laghu syllables, (C) the total number of syllables, and (D) the total number of mātras, giving expressions for each of these in terms of any two of the other three. (The basic relations being that C=A+B and D=2A+B.)[88] Influence[edit] In India[edit] Song and language Children understand song, beasts do too, and even snakes. But the sweetness of literature, does the Great God himself truly understand. —Rajatarangini[89] The Chandas are considered one of the five categories of literary knowledge in Hindu traditions. The other four, according to Sheldon Pollock, are Gunas or expression forms, Riti, Marga or the ways or styles of writing, Alankara or tropology, and Rasa, Bhava or aesthetic moods and feelings.[89] The Chandas are revered in Hindu texts for their perfection and resonance, with the Gayatri metre treated as the most refined and sacred, and one that continues to be part of modern Hindu culture as part of Yoga and hymns of meditation at sunrise.[6] Outside India[edit] Nepali language is the most popular language for chanda so far outside India. The predominant Hindu culture may be the reason for flourishing of chanda in Nepal. The first Nepali book, a translation of Hindu epic "Ramayana", was also written in classical chanda - mostly "shardulabikridit chanda".[90] The Sanskrit Chanda has influenced southeast Asian prosody and poetry, such as Thai Chan (Thai: ฉันท์).[91] Its influence, as evidenced in the 14th-century Thai texts such as the Mahachat kham luang, is thought to have come either through Cambodia or Sri Lanka.[91] Evidence of the influence of Sanskrit prosody in 6th-century Chinese literature is found in the works of Shen Yueh and his followers, probably introduced through Buddhist monks who visited India.[92] Shiksha Prosody (Latin) ^ For a review of other Sanskrit prosody texts, see Moriz Winternitz's History of Indian Literature,[8] and HD Velankar's Jayadaman.[9] ^ See, for example, Rigveda hymns 1.164, 2.4, 4.58, 5.29, 8.38, 9.102 and 9.103;[13] and 10.130[14] ^ Vritta, literally "turn", is rooted in vrit, Latin vert-ere, thereby etymologically to versus of Latin and "verse" of Indo-European languages.[28] ^ सानुस्वारश्च दीर्घश्च विसर्गी च गुरुर्भवेत् । वर्णः संयोगपूर्वश्च तथा पादान्तगोऽपि वा ॥ ^ These seven metres are also the names of the seven horses of Hindu Sun god (Aditya or Surya), mythically symbolic for removing darkness and bringing the light of knowledge.[45] These are mentioned in Surya verses of the Ashvini Shastra portion of Aitareya Brahmana. ^ Kena, Katha, Isha, Shvetashvatara and Mundaka Upanishads are examples of verse-style ancient Upanishads. ^ Jump up to:a b c James Lochtefeld (2002), "Chandas" in The Illustrated Encyclopedia of Hinduism, Vol. 
1: A-M, Rosen Publishing, ISBN 0-8239-2287-1, page 140 ^ Moriz Winternitz (1988). A History of Indian Literature: Buddhist literature and Jaina literature. Motilal Banarsidass. p. 577. ISBN 978-81-208-0265-0. ^ Jump up to:a b c d Annette Wilke & Oliver Moebus 2011, pp. 391-392 with footnotes. ^ Jump up to:a b Peter Scharf (2013). Keith Allan (ed.). The Oxford Handbook of the History of Linguistics. Oxford University Press. pp. 228–234. ISBN 978-0-19-164344-6. ^ Jump up to:a b c Annette Wilke & Oliver Moebus 2011, p. 392. ^ Jump up to:a b Annette Wilke & Oliver Moebus 2011, pp. 393-394. ^ Deo 2007, pp. 6-7 section 2.2. ^ Maurice Winternitz 1963, pp. 1-301, particularly 5-35. ^ HD Velankar (1949), Jayadāman (a collection of ancient texts on Sanskrit prosody and a classified list of Sanskrit metres with an alphabetical index), OCLC 174178314, Haritosha; HD Velankar (1949), Prosodial practice of Sanskrit poets, Journal of the Royal Asiatic Society, Volume 24-25, pages 49-92. ^ Deo 2007, pp. 3, 6 section 2.2. ^ Jump up to:a b Monier Monier-Williams (1923). A Sanskrit-English Dictionary. Oxford University Press. p. 332. ^ Origin and Development of Sanskrit Metrics, Arati Mitra (1989), The Asiatic Society, pages 4-6 with footnotes ^ William K. Mahony (1998). The Artful Universe: An Introduction to the Vedic Religious Imagination. State University of New York Press. pp. 110–111. ISBN 978-0-7914-3579-3. ^ Guy L. Beck 1995, pp. 40-41. ^ Sheldon Pollock 2006, pp. 46, 268-269. ^ Jump up to:a b c d Alex Preminger; Frank J. Warnke; O. B. Hardison Jr. (2015). Princeton Encyclopedia of Poetry and Poetics. Princeton University Press. pp. 394–395. ISBN 978-1-4008-7293-0. ^ Jump up to:a b c d Sheldon Pollock 2006, p. 370. ^ Jump up to:a b B.A. Pingle 1898, pp. 238-241. ^ Jump up to:a b Andrew Ollett (2013). Nina Mirnig; Peter-Daniel Szanto; Michael Williams (eds.). Puspika: Tracing Ancient India Through Texts and Traditions. Oxbow Books. pp. 331–334. ISBN 978-1-84217-385-5. ^ Har Dutt Sharma (1951). "Suvrttatilaka". Poona Orientalist: A Quarterly Journal Devoted to Oriental Studies. XVII: 84. ^ Rocher 1986, p. 135. ^ MN Dutt, Agni Purana Vol 2, pages 1219-1233 (Note: Dutt's manuscript has 365 chapters, and is numbered differently) ^ Sheldon Pollock 2006, pp. 184-188. ^ T. Nanjundaiya Sreekantaiya (2001). Indian Poetics. Sahitya Akademi. pp. 10–12. ISBN 978-81-260-0807-0. ^ Maurice Winternitz 1963, pp. 8–9, 31–34. ^ Jump up to:a b c d e f g h i Lakshman R Vaidya, Sanskrit Prosody - Appendix I, in Sanskrit-English Dictionary, Sagoon Press, Harvard University Archives, pages 843-856; Archive 2 ^ Jump up to:a b c A history of Sanskrit Literature, Arthur MacDonell, Oxford University Press/Appleton & Co, page 56 ^ Deo 2007, p. 5. ^ Coulson, p.21 ^ Muller & Macdonell, Appendix II ^ Maurice Winternitz 1963, p. 13. ^ Coulson, p.6 ^ Muller and Macdonell, loc.cit. ^ Pingala CS 1.9-10, in order ^ Pingala, chandaḥśāstra, 1.1-10 ^ Horace Hayman Wilson 1841, pp. 415-416. ^ Pingala CS, 1.1-8, in order ^ Coulson, p.253ff ^ Stein, Sherman K. (1963), "Yamátárájabhánasalagám", The Man-made Universe: An Introduction to the Spirit of Mathematics, pp. 110–118 . Reprinted in Wardhaugh, Benjamin, ed. (2012), A Wealth of Numbers: An Anthology of 500 Years of Popular Mathematics Writing, Princeton Univ. Press, pp. 139–144. ^ Barbara Stoler Miller (2013). Phantasies of a Love Thief: The Caurapancasika Attributed to Bilhana. Columbia University Press. pp. 2 footnote 2. ISBN 978-0-231-51544-3. ^ Alex Preminger; Frank J. Warnke; O. B. 
Hardison Jr. (2015). Princeton Encyclopedia of Poetry and Poetics. Princeton University Press. p. 498. ISBN 978-1-4008-7293-0. ^ Jump up to:a b A history of Sanskrit Literature, Arthur MacDonell, Oxford University Press/Appleton & Co, page 55 ^ Stephen Dobyns (2011). Next Word, Better Word: The Craft of Writing Poetry. Macmillan. pp. 248–249. ISBN 978-0-230-62180-0. ^ Jump up to:a b Jamgon Kongtrul Lodro Taye; Koṅ-sprul Blo-gros-mtha'-yas; Gyurme Dorje (2012). The Treasury of Knowledge: Indo-Tibetan classical learning and Buddhist phenomenology. Book six, parts one and two. Shambhala Publications. pp. 26–28. ISBN 978-1-55939-389-8. ^ Tatyana J. Elizarenkova (1995). Language and Style of the Vedic Rsis. State University of New York Press. pp. 113–114. ISBN 978-0-7914-1668-6. ^ Jump up to:a b c d e f Tatyana J. Elizarenkova (1995). Language and Style of the Vedic Rsis. State University of New York Press. pp. 111–121. ISBN 978-0-7914-1668-6. ^ Jump up to:a b c Horace Hayman Wilson 1841, pp. 418-422. ^ Arnold 1905, pp. 10, 48. ^ Arnold 1905, p. 48. ^ Arnold 1905, p. 11, 50 with note ii(a). ^ Arnold 1905, p. 48, 66 with note 110(i). ^ Arnold 1905, p. 55 with note iv, 172 with note viii. ^ Arnold 1905, pp. 48 with table 91, 13 with note 48, 279 with Mandala VII table. ^ Arnold 1905, pp. 12 with note 46, 13 with note 48, 241-242 with note 251. ^ Jump up to:a b c Hopkins 1901, p. 193. ^ Horace Hayman Wilson 1841, p. 427. ^ Andrew Ollett (2013). Nina Mirnig; Peter-Daniel Szanto; Michael Williams (eds.). Puspika: Tracing Ancient India Through Texts and Traditions. Oxbow Books. pp. 331–358. ISBN 978-1-84217-385-5. ^ Jump up to:a b Horace Hayman Wilson 1841, pp. 427-428. ^ Maurice Winternitz 1963, pp. 106-108, 135. ^ Annette Wilke & Oliver Moebus 2011, pp. 230-232 with footnotes 472-473. ^ Kālidāsa; Hank Heifetz (1990). The Origin of the Young God: Kālidāsa's Kumārasaṃbhava. Motilal Banarsidass. pp. 153–154. ISBN 978-81-208-0754-9. ^ Annette Wilke & Oliver Moebus 2011, p. 1044. ^ Jump up to:a b c Patrick Olivelle (1998). The Early Upanisads : Annotated Text and Translation. Oxford University Press. pp. xvi–xviii, xxxvii. ISBN 978-0-19-535242-9. ^ Jump up to:a b Patrick Olivelle (2008). Collected Essays: Language, Texts and Society. Firenze University Press. pp. 293–295. ISBN 978-88-8453-729-4. ^ Maurice Winternitz 1963, pp. 3-4 with footnotes. ^ Patrick Olivelle (2008). Collected Essays: Language, Texts and Society. Firenze University Press. pp. 264–265. ISBN 978-88-8453-729-4. ^ Alf Hiltebeitel (2000), Review: John Brockington, The Sanskrit Epics, Indo-Iranian Journal, Volume 43, Issue 2, pages 161-169 ^ John Brough (1954), The Language of the Buddhist Sanskrit Texts, Bulletin of the School of Oriental and African Studies, Volume 16, Number 2, pages 351-375 ^ Jump up to:a b Kim Plofker (2009). Mathematics in India. Princeton University Press. pp. 55–57. ISBN 0-691-12067-6. ^ Bettina Bäumer; Kapila Vatsyayan (January 1992). Kalātattvakośa: A Lexicon of Fundamental Concepts of the Indian Arts. Motilal Banarsidass. p. 401. ISBN 978-81-208-1044-0. ^ Nooten, B. Van (1993). "Binary numbers in Indian antiquity". J Indian Philos. Springer Science $\mathplus$ Business Media. 21 (1): 31–32. doi:10.1007/bf01092744. ^ Jump up to:a b c d Nooten, B. Van (1993). "Binary numbers in Indian antiquity". J Indian Philos. Springer Science $\mathplus$ Business Media. 21 (1): 31–50. doi:10.1007/bf01092744. ^ Kireet Joshi (1991). The Veda and Indian Culture: An Introductory Essay. Motilal Banarsidass. pp. 101–102. 
ISBN 978-81-208-0889-8. ^ Friedrich Max Müller (1860). A History of Ancient Sanskrit Literature. Williams and Norgate. pp. 67–70. ^ Hopkins, p.192 ^ Hopkins, p.194. (This is typical for the shloka). ^ Kim Plofker (2009). Mathematics in India. Princeton University Press. pp. 53–57. ISBN 0-691-12067-6. ^ Jump up to:a b Susantha Goonatilake (1998). Toward a Global Science. Indiana University Press. p. 126. ISBN 978-0-253-33388-9. ^ Alekseĭ Petrovich Stakhov (2009). The Mathematics of Harmony: From Euclid to Contemporary Mathematics and Computer Science. World Scientific. pp. 426–427. ISBN 978-981-277-583-2. ^ Keith Devlin (2012). The Man of Numbers: Fibonacci's Arithmetic Revolution. Bloomsbury Academic. p. 145. ISBN 978-1-4088-2248-7. ^ Jump up to:a b Alexander Zawaira; Gavin Hitch**** (2008). A Primer for Mathematics Competitions. Oxford University Press. p. 237. ISBN 978-0-19-156170-2. ^ Hahn, p. 4 ^ Hahn, pp. 15–18 ^ Jump up to:a b Sheldon Pollock 2006, p. 188. ^ Bhanubhakta Acharya ^ Jump up to:a b B.J. Terwiel (1996). Jan E. M. Houben (ed.). Ideology and Status of Sanskrit: Contributions to the History of the Sanskrit Language. BRILL. pp. 307–323. ISBN 90-04-10613-8. ^ B.J. Terwiel (1996). Jan E. M. Houben (ed.). Ideology and Status of Sanskrit: Contributions to the History of the Sanskrit Language. BRILL. pp. 319–320 with footnotes. ISBN 90-04-10613-8. Arnold, Edward Vernon (1905). Vedic Metre in its historical development. Cambridge University Press (Reprint 2009). ISBN 978-1113224446. Guy L. Beck (1995). Sonic Theology: Hinduism and Sacred Sound. Motilal Banarsidass. ISBN 978-81-208-1261-1. Brown, Charles Philip (1869). Sanskrit prosody and numerical symbols explained. London: Trübner & Co. Deo, Ashwini. S (2007). "The metrical organization of Classical Sanskrit verse (Note: the url and the journal number the pages differently; the version in the journal starts at page 63)" (PDF). Journal of Linguistics. Cambridge University Press. 43 (01). doi:10.1017/s0022226706004452. Colebrooke, H.T. (1873). "On Sanskrit and Prakrit Poetry". Miscellaneous Essays. 2. London: Trübner and Co. pp. 57–146. Coulson, Michael (1976). Teach Yourself Sanskrit. Teach Yourself Books. Hodder and Stoughton. Hahn, Michael (1982). Ratnākaraśānti's Chandoratnākara. Kathmandu: Nepal Research Centre. Hopkins, E.W. (1901). "Epic versification". The Great Epic of India. New York: C. Scribner's Sons. LCCN Friedrich Max Müller; Arthur Anthony Macdonell (1886). A Sanskrit grammar for beginners (2 ed.). Longmans, Green. p. 178. PDF Patwardhan, M. (1937). Chandoracana. Bombay: Karnataka Publishing House. B.A. Pingle (1898). Indian Music. Education Society's Press. Sheldon Pollock (2006). The Language of the Gods in the World of Men: Sanskrit, Culture, and Power in Premodern India. University of California Press. ISBN 978-0-520-93202-9. Rocher, Ludo (1986), The Puranas, Otto Harrassowitz Verlag, ISBN 978-3447025225 Velankar, H.D. (1949). Jayadaman: a collection of ancient texts on Sanskrit prosody and a classical list of Sanskrit metres with an alphabetical index. Bombay: Haritoṣamala. Weber, Albrecht (1863). Indische Studien. 8. Leipzig. Annette Wilke; Oliver Moebus (2011). Sound and Communication: An Aesthetic Cultural History of Sanskrit Hinduism. Walter de Gruyter. ISBN 978-3-11-018159-3. Horace Hayman Wilson (1841). An introduction to the grammar of the Sanskrit language. Madden. Maurice Winternitz (1963). History of Indian Literature. Motilal Banarsidass. ISBN 978-81-208-0056-4. 
External links
- Prosody (chandaḥśāstra), Chapter XV of the Nāṭyaśāstra
- Manuscripts of Pingala Sutra, Vritta Ratnakara and Shrutabodha, University of Kentucky (2004); includes poetic-metre-marked sections of the Buddha Charita, Vrittaratnakara by Kedara Bhatta, and Chandomanjari by Pandit Gangadasa
- Manuscripts on Sanskrit Prosody, compiled with commentary by Vidyasagara (1887), Harvard University Archives / Hathi Trust; University of Wisconsin Archive (Sanskrit); Vrittaratnakara only (Hindi); Vrittaratnakara only (Tamil)
- Sanskrit Prosody and Numerical Symbols Explained, Charles P. Brown, Trubner & Co.
- A list of 1,300+ metres in post-classical Sanskrit prosody, Universität Heidelberg, Germany
- Sanskrit metre recognizer (an incomplete test version)
- Recordings of recitation: H. V. Nagaraja Rao (ORI, Mysore), Ashwini Deo, Ram Karan Sharma, Arvind Kolhatkar
- Intensive Course on Sanskrit Prosody held at CEAS, Bucharest, by Shreenand L. Bapat
- Introduction to Sanskrit prosody, LearnSanskrit.Org
Links 6/15: Monsters, Link Study shows that banning bottled water on campuses just makes students switch to bottled soda, with obvious detrimental consequences to health and no decrease in bottle waste. Pakistan's transgender tax collectors. A couple of posts ago, I mocked the Muslim activist who claimed Mossad broke into his house and stole one of his shoes to creep him out. Jonathan Zhou corrects me and points out that this sort of thing is actually a known intelligence agency tactic. A systematic review of all 55 medical conditions whose risks vary with your month of birth. Popehat does some very impressive investigative reporting into the government trying to make a (literal) federal case out of random libertarian blog commenters criticizing a judge at Reason.com. A pretty good example of the abuses of power possible if laws about Internet threats are made too strict. Followups here, here, and here. Probiotics watch: maybe eating fermented food decreases social anxiety? Nevada enacts comprehensive school choice law. The experiment has begun. Life imitating JRPGs – mysterious "time crystals" may hold the secret to outlasting entropy. No word on whether you have to get all seven, or whether they are hidden in temples themed around the seven elements. Some people on Tumblr try to help me understand the implications. A while ago, I was getting the impression that the Mexican drug cartels were unstoppable and the Mexican government was too corrupt to be able to do anything about them. Now the cartels are almost all defeated or in retreat. What happened? Long ago I reviewed a book claiming the future was glia. Now some scientists are proposing that maybe SSRIs work by affecting glial cells. American Hippopotamus describes the 1910s plan by two larger-than-life Boer War guerilla-assassins to "turn American into a nation of hippo ranchers". The story alone would be worth your time even if it wasn't well-written, but it happens to be very possibly the best-written article I have ever read. Long, but also available on Kindle if wanted. Program that teaches college women how to avoid rape may cut risk of rape in half as per new study. This article on whether the US could replicate Scandinavia's low poverty rate is interesting throughout, but what makes it for me is the claim that Swedes in the US have the same poverty rate as Swedes in Sweden [edit: possibly this is false?]. How much should we make of this? Not only are we living in the future, but it's exactly the future Philip K Dick told us to expect: "Abortion drone" to make first flight into Poland The mysterious resemblance between the ancient Numenorean calendar and the French revolutionary calendar (h/t an-animal-imagined-by-poe) I'd always heard the story "Iceland rejected fiscal austerity and did everything exactly the way the left wanted and did great." Scott Sumner and Tyler Cowen say that actually Iceland had lots and lots of austerity. I think it's probably time to stop bothering Rachel Dolezal. She seems like a good example of a person who's not hurting anyone, has some really weird problems she needs to sort out, but because she doesn't fall into a designated "here are people we have agreed it's not okay to mock" category we are mocking her. The psychoanalyst in me wants to say this is some kind of displacement where people who are upset they can't get away with making fun of real black people suddenly see an apparent black person (and NAACP leader, no less!) 
lose their magical protection and become a valid target, and are now channeling years of pent-up rage at her. Anyway, not totally related, but an explanation of why this is not a good analogy for transgender. Article originally reported as "no gender gap in tech salaries" gives a more nuanced description of their result. Summary: true based on sample of equally qualified people one year after graduation; no evidence whether or not it's true in other situations. This article is also good example of "if you have data supporting a controversial point, ignorant people on Twitter will throw out some terms that sound statistics-y and bad, like 'confounding' or 'cherry-picking', then say you have now been debunked." Doctors with the highest ratings on those rate-your-doctor sites may deliver worse care than less-well-rated docs. Maybe you get higher ratings by giving patients what they want, which is usually amphetamines, narcotics, antibiotics, and unnecessary tests. The time Harriet Beecher Stowe wrote an article about Lord Byron's divorce so controversial it caused a third of The Atlantic's readership to cancel their subscriptions. Alyssa Vance writes on Facebook about Ivy League colleges' sketchy methods of soliciting alumni donations. In a study of 20,000 people, an uncommon allele of the MAO-A gene may cause a sevenfold increased risk of violent criminal behavior, making it probably the strongest gene-crime link to date. Previously on SSC links: if robots are taking our jobs, how come productivity numbers aren't increasing? Now: okay, productivity numbers are increasing, but the robots still don't seem to be taking our jobs. A man angry at the German government for falsely imprisoning him is adopting a thousand children in order to make them German citizens and do his part to strain the welfare state. Apparently everything legally checks out and no one can stop him. Open borders advocates take note. [edit: old story, loophole since possibly closed?] Anti-science-denial group Committee for Skeptical Inquiry wants to make a $25,000 bet with the global warming doubters at the Heartland Institute about future climate trends. While I totally approve of this strategy ("A bet is a tax on bullshit" – Alex Tabarrok and Bryan Caplan), the exact terms seem kind of dumb – AFAIK, Heartland doesn't believe that the Earth is not getting warmer, just that it's not necessarily human-caused. Betting on next year's temperature does nothing to settle that. In the last links post, I mentioned a study that tried to use transgender people to test the sources of the gender gap. A new study from Brazil tries to do the same with race – Brazilians are frequently very multiracial, and different companies might classify the same employee differently. The study tries to match that with salaries – does a boss who thinks of an employee as white pay them more than their boss next year who thinks of them as black? They conclude that 40% of racial income gaps can be explained in that way, though of course it sounds like Brazil's racial situation is different enough from America's that it might not generalize. Nothing sophisticated or intellectual about this one – just trucks driving off aircraft carriers. Wheeeee! Some linguists talk of "the Anglic languages", a language family including English and some of its weirder relatives and descendants that have evolved to the point of mutual intelligibility. You've probably heard of Scots, ie "the reason you can't understand Robert Burns". But did you know about Forth and Bargy? 
Google's neural nets can now amplify images without human guidance. And by amplify, they mean add shoggoths (warning: shoggoth). Also, this seems way too much like the visual effects of LSD to be a coincidence, and I look forward to neuroscientists explaining the exact connection. A mildly interesting Wall Street Journal article on how jobs are staying open longer because employers can't find qualified candidates also contains some surprising information – 5% of job interviews include an IQ test, and almost 20% include a personality test. I'm not sure how that meshes with our recent discussion of Griggs vs. US. I'm starting to think the importance of this case is overblown – the actual ruling specifically banned assessing qualifications based on IQ tests or on degree completion. Everyone does the latter, so why are we so sure this case is restricting people from doing the former? Obvious once I heard it but something I never thought about it before – the Statue of Liberty is green because all old tarnished copper is green. When it was first built, it was, well, copper-colored. When it tarnished the government was supposed to raise money to fix it, but never got around to it. Now it's impossible for me not to find the idea of the Statue of Liberty being green kind of hilarious. California college professors told they can be disciplined or fired for committing "microaggressions" including "describing America as a melting pot" or saying that "I believe the most qualified person should get the job". Assumed this was some kind of total fake, did some digging, still seems legit, but if anyone can find otherwise I will correct myself with apologies and relief. At least every time I see this sort of thing it's in universities, suggesting the contagion is somewhat contained. [edit: a claim that this doesn't matter much] We already know that many medical studies and many psychological studies fail to replicate. What about economics studies? The necessary work is still being done, but the recent progress report suggests that about 66% of replication attempts completely fail to replicate the original finding, with another 12% partly failing to replicate and only 22% replicating completely. Possibly an argument for privileging theory more in the interminable Econ Theory Versus Empiricism Wars? Contrary to some reports, nationwide gun violence and nationwide violence against police do not seem to have spiked after the latest round of police brutality stories and race riots. This wins my prize for real case most like the sort of weird murder mysteries you see in books: A man is found dead in the desert with an obvious fatal gunshot wound. He has no enemies but recently suffered a major financial setback; everyone suspects he committed suicide and only wanted it to look like murder. However, this ruse is very convincing; no gun is found anywhere nearby. How did he shoot himself? How long can a con man with no soccer talent whatsoever play soccer at the professional level before anybody catches on? How about twenty years? IQ researcher, Ian Deary collaborator, and SSC victim Dr. Stuart Ritchie has written an introductory book on IQ and intelligence studies that looks pretty good. Not sure if the ambiguity of meaning in the subtitle is a horrible mistake or 100% deliberate. This entry was posted in Uncategorized and tagged links on June 24, 2015 by Scott Alexander. 
1,287 thoughts on "Links 6/15: Monsters, Link"

Kevin, July 1, 2015 at 11:27 pm

An article that Scott and others might appreciate: On the problem of normative sociology

Incidentally, "normative sociology" doesn't necessarily have a left-wing bias. There are lots of examples of conservatives doing it as well (e.g. rising divorce rates must be due to tolerance of homosexuality, out-of-wedlock births must be caused by the welfare system, etc.). The difference is that people on the left are often more keen on solving various social problems, and so they have a set of pragmatic interests at play that can strongly bias judgement. The latter case is particularly frustrating, because if the plan is to solve some social problem by attacking its causal antecedents, then it is really important to get the causal connections right – otherwise your intervention is going to prove useless, and quite possibly counterproductive.

I recall marvelling at how seldom I had heard this idea expressed: that the left consistently gets it right when it comes to identifying problems, but then gets the explanations wrong (and often clings to those explanations long after they have proven problematic), and so is practically ineffective.

Agronomous, July 2, 2015 at 12:04 pm

This is an excellent article. I recommend it to everyone here, and hope Scott at least adds it to his next links post. How did you come across it?

It was recommended on Marginal Revolution: http://marginalrevolution.com/marginalrevolution/2015/06/normative-sociology.html

Shmi Nux, June 29, 2015 at 6:17 pm

Lead? What lead? It could have been air conditioning that reduced crime:

> When it gets really hot in Baltimore, people in poorer neighborhoods spill out into the streets. This is because they don't have air conditioning. Because crime goes up when the temperature goes up, the police department sees hot temperatures and people in the streets as a recipe for violence. So they respond by sending scores of cops into these neighborhoods to clear out the corners. The city ends up spending thousands of dollars on overtime just to basically harass the people in these neighborhoods for trying to keep cool. What if instead of spending all that money on police overtime summer after summer, one year you just bought air conditioners for poor people? Would that work? I don't know. But it would help relations with the community. And we know that what we've been doing doesn't work.

> I've had a pet theory that part of the reason for the crime drop of the last 20 years is the proliferation of air conditioning. I've yet to see any studies on it. But it makes some sense.

From http://www.washingtonpost.com/news/the-watch/wp/2015/06/25/an-interview-with-the-baltimore-cop-whos-revealing-all-the-horrible-things-he-saw-on-the-job/

Alraune, June 29, 2015 at 11:39 pm

That'd be the absolute risk reduction theory. Never go outside, and you won't get mugged.

Jiro, June 29, 2015 at 11:57 pm

I suspect that giving poor people air conditioners would lead to them losing food stamps and similar benefits because of the increase in "income" from getting the air conditioner. If they make enough money that they pay taxes, they may instead find themselves forced to sell the air conditioners to pay the income taxes on the air conditioners. And even if not, they may have to sell the air conditioners simply because they have a greater need for money than for air conditioners.
Also, you'll find poor people being burglarized for their air conditioners. And you'll find those who sell things to the poor raising their prices to capture the surplus from the poor people getting sellable air conditioners, same as colleges raise their prices when you make it easy for students to get loans. (For that matter, can the poor people even afford the electricity to run the air conditioners?) Protagoras June 30, 2015 at 12:18 am I think the electricity is probably the biggest issue; AC uses a huge amount of that. But I can't imagine burglarizing air conditioners becoming big business. They're too heavy and not valuable enough for that. Shenpen June 29, 2015 at 10:21 am >but what makes it for me is the claim that Swedes in the US have the same poverty rate as Swedes in Sweden Is there any use of data that have equally loud left and right wing interpretations? L: because there is no racism against them R: because better culture / genes I mean you can throw this bit of data into the usual political mosh pit and still both sides feel their views reinforced. Larry Kestenbaum June 28, 2015 at 5:20 pm Our local monthly magazine runs a "fake ad" in each issue for readers to find. In the latest issue, the fake ad is from "The Partnership for a Gender-Free America". Obviously this is a parody of a real anti-drug organization, which visualizes and works toward a future America in which literally no one uses illegal drugs, openly or secretly. I mean, isn't that what "drug-free" means? To be really parallel, then, the "gender-free" organization's goal would be to completely wipe out any expression of gender in America, even in private. As John Lennon might have put it: "Imagine there's no gender / I wonder if you can." Gender abolitionism is an actual thing some really out-there radical feminists support. I have no idea how they plan to accomplish it or what it would look like, though. Alraune June 29, 2015 at 2:05 am I have no idea how they plan to accomplish it or what it would look like, though. On the internet, nobody knows you're a dog? Nita June 29, 2015 at 3:15 am I have no idea [..] what it would look like, though. Well, I can't speak for "really out-there radical feminists", but here are some things I would like to see (as a person without a strong internal sense of gender who's sick of the forced genderification of literally everyone and everything): – people can wear or refuse to wear skirts, trousers, boots, heels, make-up or jewelry without being berated, mocked or harassed for it; – those who want to perform a particular gender role (i.e., look and act like "a man" or "a woman") can still do that where appropriate, just like goths, eco-hipsters, butch lesbians or tech geeks dress and act according to their social identity today; – gender of pronouns and names either disappears or is rendered as semantically irrelevant as the grammatical gender of nouns in German or Spanish; – toys and clothes for kids come in orange, yellow, green, purple and red as often as they do in pink or blue. – people can wear or refuse to wear skirts, trousers, boots, heels, make-up or jewelry without being berated, mocked or harassed for it Certainly a workable plan if the modes of refusal are sufficiently conspicuous and expensive. walpolo June 29, 2015 at 1:07 pm I've never understood what the harm of pink/blue is supposed to be. HeelBearCub June 29, 2015 at 1:20 pm @walpolo: It strongly enforces a paradigm wherein toys cannot be played with by both genders. 
In a world where toys are just toys, boys who like playing with trucks are perfectly willing to play with a pink truck. Girls who like trucks are perfectly willing to play with any colored truck. But in a pink/blue divided world, toys stop being just toys; they are either girl toys or boy toys. Then the absence of "pink" toys of a certain type, or "blue" toys of a certain type, strongly enforces that a child should not play with that particular toy. There are follow-ons to that in terms of what sort of harms come from it, but that is the basic objection.

So if all toys were available in both colors, it would be fine?

No. It still implies that certain toys are only for boys and certain toys are only for girls. I'll quote myself: "In a world where toys are just toys, boys who like playing with trucks are perfectly willing to play with a pink truck. Girls who like trucks are perfectly willing to play with any colored truck."

Then, every toy available in either color seems like a "separate but equal" argument. I have a harder time seeing what the harm is supposed to be in that case, though. I can understand how it's harmful to reinforce a concept like "Trucks are for boys." But if trucks are equally available in both pink and blue, you don't get that effect.

Nornagest June 29, 2015 at 6:17 pm
I am given to understand that mid-century civil rights activists were concerned about "separate but equal" because it was often a smokescreen for very unequal differences in the quality or availability of services, not because they preferred a different paint job.

I'm not sure you are actually engaging with the "separate but equal" claim I was making. The issue is that it enforces the idea that boys and girls are so different that they can't even play with each other's toys. That there is something inherently wrong with a boy touching something that a girl might touch, or vice-versa. The idea that blacks and whites couldn't drink from the same water fountain can't be rationalized merely because "they both had water fountains". Separation implies that there is something wrong with sharing the water fountain.

edit: @Nornagest, hopefully that answers your query as well.

Alraune June 29, 2015 at 6:58 pm
The toy example would probably be more persuasive if toy trucks weren't overwhelmingly yellow and red.

@Alraune: If a truck was pink, do you think, in our current society, it lowers the probability of boys playing with it? What estimate would you place on the percentage reduction?

edit: And I don't think there is a pink/blue divide, really. It's more of a "pink badge of girlie".

I think I'd be more inclined to frame it in terms of preference. Give a young boy the choice of a pink truck or nothing and he'll play with the pink truck. But give two boys a pink and a blue truck and they'll fight over the blue one, and give them a pink, a blue, and a black and the pink one will rarely get used. Unless it has oversized wheels or skulls or something else that boys consider uniquely cool.

I'm almost more interested in the converse, though. Can any of the women here tell us about girls' attitudes toward male-coded toys? I don't remember much female contempt for my toys during my own boyhood (though girls definitely had a concept of "cooties"), but, of course, the girls that were willing to play with my stuff would have been much more visible to me.

>> I'm not sure you are actually engaging with the "separate but equal" claim I was making.
The issue is that it enforces the idea that boys and girls are so different that they can't even play with each other's toys. That there is something inherently wrong with a boy touching something that a girl might touch, or vice-versa.

Ah, I see. I can see where this is coming from now, but I'm not sure I agree that this form of "separateness but equality" is necessarily bad. Here's a different example. A Jewish person and a Gentile are the same/equal in terms of anything morally relevant. But in our culture, the yarmulke is widely accepted as "for the Jewish person" and not "for the Gentile." What makes it wrong to have a somewhat similar sort of arbitrary separation between which colors of things are "for boys" and which colors are "for girls"? It certainly does underscore the differences between the two sexes. But it's the boys and girls choosing to do so. (Of course they are acculturated to want to do so–but the same goes for the Jews.)

"Can any of the women here tell us about girls' attitudes toward male-coded toys?"

There were a few–Legos, Construx, and similar–that I would have given my left arm for.

"But it's the boys and girls choosing to do so."

Well, the boys and girls aren't choosing; the decision is being made for them by the toy industry, and my impression is that it has developed at a rather rapid pace over the last 30 years or so. I don't recall the "all pink, all the time" girls' toy aisles from my youth or my 8-years-younger sister's youth. By the time my kids were born 10-15 years later, it was in full force.

I think boys and girls do some intrinsic gender sorting. I don't think that needs to be signal-boosted, though. Look, if YOU want to choose only pink toys for your daughters, so be it. But I don't think that is what is actually happening, broadly. Rather, a significant minority preference for very gendered toys, and a relatively "meh" reaction from the vast bulk of the rest, leads to the very gendered toy aisles.

As a boy of about 5 or 6, I had a baby doll. It was simply a fairly life-like baby. I loved that doll (Johnny). I fed it the bottles that came with it, carried it around, sang it songs, the sort of things one does with babies. My mom remembers my Dad being relatively apoplectic. When my sister came along a few years later, I carried her in a front pack, read to her, burped her, and generally loved her. I never got a message that I wasn't supposed to play with dolls, so I had little issue doing so. But in a world where everything is coded boy or girl, it might have been different.

I'm not saying there's nothing problematic about the sex divisions in toy marketing. I'm just saying that pink vs. blue is a strange place to object when color-coding toys for children of different sexes doesn't itself seem problematic. What's problematic is, as you say, the attitude that toy babies are for girls, etc.

Unique Identifier June 29, 2015 at 9:48 pm
Why stop at boys' toys and girls' toys? Why do we even classify things as toys and non-toys? What about the girls who want to play with empty beer bottles? What about the boys who want to play with CAT-5 cables? Doesn't this strongly enforce a paradigm where non-toys cannot be played with? Isn't this a serious injustice, borne by the unfortunate children with non-traditional play preferences? Now, this might seem facetious, but if you can answer these questions, you are well on your way to solving the red truck problem too.

walpolo June 29, 2015 at 10:43 pm
Ha, point well taken.
Any culture is going to consist of some arbitrary norms that push people in one direction or another without actually "forcing" them to conform, and the notion that we might ever be free of such forces is an anarchist pipe dream. On the other hand, the notion that such cultural forces need to be carefully controlled by right-thinking moralists is an authoritarian pipe dream. And although I can recognize a lot of the harm done by our present culture, I tend to think that those who strive for the abolition of gender, or similar goals, are dreaming that authoritarian pipe dream. The issue is that it enforces the idea that boys and girls are so different that they can't even play with each other's toys. That there is something inherently wrong with a boy touching something that a girl might touch, or vice-versa. Rather, a significant minority preference for very gendered toys, and a relatively "meh" reaction from the vast bulk of the rest leads to the very gendered toy aisles. I think you're completely overlooking the driving factor here. We don't get His and Hers of every toy because we have a small selection of toys, we get them because toys are cheaper and more varied than ever, and producing a second model of everything is easy. They're reflecting specific consumer demands, not the lowest common denominator. The main customer for toys, though, isn't children, it's their parents. And for the parents, having the ability to selectively mark objects Anathema to certain of their children is a substantial positive benefit because it reduces fights over those objects. Particularly in the standard one son+one daughter household, the more segregated the toys, the better: think how many categories of fights are prevented if Billy and Mary utterly refuse to touch each other's stuff! Now, there's obviously there's going to be some point of diminishing returns here, not all the fights will vanish because sometimes they steal each other's things to hurt each other rather than because they want the things. And at some point the downstream effects of having created a society where everything must be either floral- or camo-patterned will outweigh the benefits, and so on. But if you can make fights over toys a third less likely (which would be my guess for the pink truck) by buying toys where, if the wrong kid were using it, they'd be hit with some insecurity rather than just a sense of triumph? That's probably a win for everyone involved. @ Alraune I'm afraid I don't find your explanation very plausible. 1) "standard one son+one daughter household"? They're a minority — something like 18% of all households with children. 2) The sheer number of anxious parents asking for advice because their son likes the "wrong" toys or colors (e.g.: one, two), not to mention various articles about it (e.g.: one, two, three, four), contradicts the idea that the ubiquitous color-coding is simply about avoiding conflict between siblings. > being berated, mocked or harassed for it Is the idea of traditional politeness or etiquette lost in the US? People berate or mock people i.e. tell their opinion without being asked first? I don't find women who dress like men attractive, but the idea of walking up to a stranger and telling it entirely horrifies me. It would be so much against social etiquette. Nita June 30, 2015 at 12:08 pm So, how would people treat a man dressed "like a woman" in your country, where rich people parade in gold chains and mental issues are treated with vodka? 
Shenpen July 1, 2015 at 6:34 am
With the same kind of depressed indifference as everybody else. Such a person would be strongly "othered", and "othering" in this context means simply being ignored.

Nita July 1, 2015 at 7:08 am
Really? Including the folks who wanted to punish "popularizing" (whatever that means) homosexuality, sex changes, transvestitism and bisexuality with 3 years in prison, and the people who voted for these fine folks?

Winter Shaker June 29, 2015 at 7:49 am
Some people I know seem to have taken to using gender-neutral singular 'they' to refer to everyone, not just explicitly non-binary people who choose it. This is not noticeably silly – there are languages which do not have to use gendered personal pronouns (either because they don't need to use pronouns at all, or because they have a gender-neutral singular already), and there is no law of linguistics that would prevent English becoming such a language if enough people drop 'he' and 'she' in favour of 'they'. I don't know if there are any languages that lack both gendered pronouns and a baked-in singular/plural distinction, but I'd be surprised if there weren't some somewhere. There is already a movement to try to do this in Swedish. Also, a passport, driving licence etc. does not need to specify 'male', 'female' or other in order to function as proof of identity.

I think it's perfectly possible that one could 'abolish gender' in the sense of the government taking no official interest in whether you are male, female or whatever other identity you wish, and with the language you speak containing no structures that assume a priori that you must be either male or female. On the other hand, lots of people do identify strongly as male or female, and I don't see that facet of human nature going away. Gender may not be fully modeled by a boolean binary, but it does still appear to be a bimodal distribution, with most people belonging very obviously to one cluster or the other. So I don't think you could 'abolish gender' in that sense as long as we remain recognisably human. Perhaps 'abolish gender essentialism' is the project that most people could compromise on – or at least agree that it is a non-crazy thing to want to do.

Sex does seem to be bimodal: more than 95% of all people belong to the groups {XX, ovaries, vagina, high level of estrogens} and {XY, testes, penis, high level of androgens}. But gender? Do 95% of people really aspire to be "macho lumberjacks" or "nurturing princesses"?

lots of people do identify strongly as male or female

The most surprising thing I've noticed from all the discussions surrounding trans issues is that many people, in fact, don't strongly identify as either gender, but simply go along with societal expectations for pragmatic reasons.

I've expressed an identical sentiment myself on this page, but I've still gotta sanity-check you here: of those discussions surrounding trans issues, did any of them take place outside of, for want of a better word, Math-Person enclaves?

Lumberjacks are low-status. But as a better example, most men would like to be James Bond, i.e. both masculine and high-status, because androgens generally drive precisely this type of status desire, to the extent that it was shown that androgens even in women influence stereotype threat levels – they directly influence how much one cares about comparing themselves with others. Masculine gender roles are all about power, and androgens are all about desiring that.

For women, the princess stuff is pre-capitalist. So not really relevant.
The high status woman is more of a Gina Lollobrigida type today. Also it is a bit more difficult because it is hard to tell if lower androgens or higher estrogens are more important. Nurturing is oxytocin. Estrogen has more to do with high socialization, emotional connections, a classic case is the puberty / teenage girl behavior or early pregnancy where the opinion of others becomes very important and moods and emotions matter a lot. Estrogen is clearly not about nurturing – after childbirth it crashes through the floor. I don't think nurturing is a _ generic_ female gender role as such – do the girls in dance club look like they are being very nurturing? It is a _motherhood_ role which happens after e.g. attractiveness got already reduced by the first birth and normally happens only after marriage. Nurturing is female only as much as working long is male – not very much, just a marriage role. But the core roles are male fighting/dominating and female emotional/social. Nita June 29, 2015 at 2:52 pm That's an interesting hypothesis. I'd like to see a properly sampled study, too. @ Shenpen James Bond and Gina Lollobrigida? I think you might be a little behind the times 😛 More seriously: everyone wants to be respected and admired — or "high status", as you put it. (Except for people with unhealthy self-loathing issues.) And you do realize that anger is an emotion and status games are a type of social interaction, right? Neike Taika-Tessaro June 30, 2015 at 9:16 am @Nita: My first instinct was to say '…that's an excellent point', and then I realised that while it's incredibly easy for me to imagine that's true for most people, that's probably in strong part because I identify as androgynous. Do you have some statistical sources corrected for confounders? I'd be interested, because that sounds like the sort of thing that will shape the way I look at things if true. Ever An Anon June 30, 2015 at 10:16 am It seems like there's a conflation here between costume and role. For example, if I quantum leapt into the body of a 18th century French Aristocrat then I would suddenly face the expectation of wearing tights wigs and makeup: failing to dress in a properly masculine way would get me mocked almost as badly as my atrocious command of the French language. And if it had been a leap into a German young lord less than a century later I had better insult someone at my academy quickly or I'd be the only chump without a manly dueling scar. Two millenia earlier in Macedon and I'd be cursing my body hair every day while shaving. But the masculine virtues, chiefly L'Amor Fati and self cultivation, would be the same in all three as well as in modern times. The fundamentals of "Being a Man" are human universals, and that is what guys identify with rather than a particular costume. By that standard I would say most guys have a strong sex identity: even if they would prefer nail polish to baseball caps, they still largely aspire to masculine virtues. Nita June 30, 2015 at 10:52 am @ Ever An Anon "L'Amor Fati"? Hmm, let's see… I want to learn more and more to see as beautiful what is necessary in things; then I shall be one of those who make things beautiful. That, and self cultivation? This combination of virtues sounds a lot like the ideal of a good Evangelical Christian housewife — joyfully accept your lot in life, tirelessly work to improve yourself… And these ladies take their femininity quite seriously — they might take offense if you go around calling their virtues manly! 
@ Neike I would be happy to find any statistics on this, of any quality.

> Also, a passport, driving licence etc. does not need to specify 'male', 'female' or other in order to function as proof of identity.

True, but I cannot really imagine how narcissistic one must be to really care about this.

You also can't imagine a reason, other than narcissism, why someone might care.

Yes, I cannot. It sounds to me like the idea that a person cannot bear that anyone has a different opinion about himself/herself, even if that is not really a person but a paper produced by a bureaucratic machine. Not being able to deal with disagreement about one's own self is what narcissism is. It is not being sure and confident about one's self / identity, so every external disagreement reinforces those nagging doubts.

And my opinion of rationalists (or is it conservatives?) sinks ever lower.

Shenpen June 29, 2015 at 12:04 pm
Conservative (non-American), with a mild interest in rationalism, if you are interested. But quite frankly I have about zero interest in your opinion. Because I am not narcissistic – I can live very well with the fact that some people have a different opinion about me than I do. Say something actually interesting, such as what other alternatives exist.

Why should I? If you were open to alternatives, you would have gone out into the world (i.e., other places on the internet) and looked for them. You would have maybe come up with some tentative alternatives yourself, which you could have then tested against reality. (You can still do those things. Absolutely no one is stopping you.) (And given how you've dug in your heels with the single explanation that occurred to you, why would I think that my providing you with an alternative would be anything but futile?)

Cauê June 29, 2015 at 12:48 pm
Anon, you could at least attempt to argue your case(s). I don't see what kind of contribution you think you're making here.

@Caue, I see many comments making unsupported assertions, in which the commenter is COMPLETELY CONVINCED that no other explanation is possible. I have no hope that any rational argument or evidence will be considered by someone like that. Therefore, I have zero incentive to argue a case in specific terms.

Cauê June 29, 2015 at 1:43 pm
Meanwhile, when thinking of reasons why you would complain as you're doing, instead of providing the alternative explanation(s) and seeing what happens, the only ones I can come up with are… less than generous.

@Anonymous: There is the old saying, "Better to remain silent and appear a fool, than to open your mouth and remove all doubt." If you find Shenpen to be foolish, you should incentivize him to speak, not remain silent.

@Cauê: I will attempt to steel-man @Anonymous's argument, at least a little bit. We have all been in arguments where the other side is not merely well-versed in the arguments for their own opinion, but extremely resistant to anything that runs contrary. It's confirmation bias on steroids (or maybe just confirmation bias). CJB has a point that he concedes somewhere in this thread about Swedish defense spending vs. US defense spending, where he makes the point that in many places, merely to admit error is to show weakness. When one finds oneself in an argument of this kind, it becomes exhausting, the debate equivalent of trying to talk to a two-year-old who merely says "Why?" to every answer.

To your point, though, I would hope that on this blog at least, we wouldn't be so quick to pattern-match to that type of argument.
HBC, I don't how many times one should have to try to argue in good faith with someone before concluding that's the case and accusing them of this, but I'm pretty sure it's at least once. @Caue, you've seen two separate commenters [there's no charitable way I can put it], and your issue is . . . with the person who points this out to them? @HBC, the particular quote that comes to my mind is that a lie travels halfway around the globe before the truth has a chance to put on its shoes. No, the situation isn't quite analogous. However, I think it is perfectly fine to tell someone to go forth into the internets to find the information they're convinced does not exist. I truly do not have the energy to do more than that. FacelessCraven June 29, 2015 at 3:41 pm @Anon – "I truly do not have the energy to do more than that." And yet you have the energy to keep sniping with rude comments. Argue for the audience. Always. The point of view you are representing deserves better than you are giving it. Thank you, FC. At least we can agree on that! Cauê says: Yes, I completely agree with this. Surely though, one can, after the 4th "Why?" in a row, dismiss the two year old, even if that is all in one conversation? I'm not sure what the polite way to do this is, especially in a manner that does not concede the argument. "the particular quote that comes to my mind is that a lie travels halfway around the globe before the truth has a chance to put on its shoes" On this blog, in this community, if one makes an assertion, being asked to justify that assertion should generally assumed to be valid and in good faith. If people are making counter-assertions, this does not simply nullify both parties responsibility for justifying their assertions. Do I sympathize with the general idea that the comment community here, having attracted a predominately gray-red cohort, seems to ignore violations of the 2 out of 3 rule that Scott laid out if they coincide with a gray-red worldview? Yes, I sympathize with that. But I don't think the answer is to then consciously engage in that type of behavior. @HBC, I keep silent 99 times out of 100, even when I see people engage in "that type of behavior." I don't read the comments very often, but when I do, I see a definite trend in which comments expressing right-wing views are much less likely to be challenged than those from the left. And that's all I will say about it. Protagoras June 29, 2015 at 5:28 pm I feel I should perhaps say, once in a while, that I agree with Anonymous; I have gotten a bit frustrated at the build up of right-wing talking points and abuse of liberals in the comment threads of late. I only rarely respond, because when I do it generally seems that either the inferential distance is too great or there's too little interest in actual rational discussion on the other side for it to be productive, and I don't consider making the comment threads a scene of tribal warfare to be a desirable outcome either. But count me as a left winger who still loves Scott's posts, but is much less thrilled by the comment threads of late. @Protagaros: Oh, I certainly agree with that. Except, I'm fairly young here, so I don't think I ever saw the halcyon days you are describing. Several months ago I saw a thread where some seemed to be bemoaning the lack of liberal/left-leaning commentors as detrimental to the back and forth dialogue that was desired. 
I didn't comment at the time, but my initial reaction was that the general tenor is not welcoming to liberal commenters, but rather fairly hostile. I think 3 weeks or so ago, veronica d tried to make that point and ended up having her point attacked either semantically or in a weak-man (depending on how you want to see it). Not many gray or red-tribe members seemed to jump to her defense.

This last point is particularly what I am trying to get at. Unless we want things to devolve into some sort of echo chamber (or dual echo chamber), we have to particularly guard against accepting/not challenging weak arguments from people we broadly agree with.

Ever An Anon June 29, 2015 at 9:14 pm
"The quality of this website's comments has gone to hell" is probably the most common sentiment I've ever seen at any time on any website. I wouldn't be surprised if those weren't the first words ever transmitted electronically.

It used to be almost exclusively Blue-state "I've never met a real-life Creationist" progressives and bleeding-heart libertarians here, and there were people complaining about the comments then too. And back then we had actual death threats, albeit all from the same commenter afaik. After that NRx was a big thing and there still weren't any regular Red-state conservatives, but everyone was flipping out over whatever thing Jim had said that millisecond. And 100% of it was more offensive to liberal sensibilities than anything posted in recent memory.

Having a half-way mindkilled debate about gun control or global warming isn't as fun as listening to communists fantasize about putting lower-middle-class white folks against the wall as class traitors, or hearing surprisingly well-sourced race-realist arguments that black people not being able to swim is a public health risk, but it's not anywhere near as disruptive either. Doomsaying now of all times is silly.

notes June 29, 2015 at 10:51 pm
'Things used to be better here' is arguably the oldest human sentiment, and was certainly held as early as we have samples of narrative writing. Earlier, if we trust their recording of oral histories.

I'm generally against fantasies of executing kulaks (though perhaps I should not be: sometimes fantasies substitute for action more than they incite it), and I don't even understand why someone would make a well-sourced argument for… subsidized swimming lessons for black people? Voluntary associations gathering to address the grave threat of lack of swimming technique? Probably this is because I don't see how it ends up as a public health problem.

Anyway, setting aside these examples of a more colorful commentariat which, the vagaries of the internet permitting, may yet return… there's a positive value to trying rational argument here, even for Anonymous. Perhaps more relevantly, there's a strong negative value to despairing of rational argument and going straight to sniping. The former helps reinforce the norms of rational discourse here; the latter actively tears them down. Worse, it's a self-fulfilling prophecy: perhaps rational argument on a given subject, with a given interlocutor, is futile. Certainly it becomes so, if sniping is the standard of discourse.

If such despair helps hasten what you dislike about the comments, as I think it would, then consider taking the opposite course — not as a favor to those unwilling to listen, but as your attempt to give the PoV for which you argue the representation it deserves.

Or, less rationally: Stay awhile. Bring forth the arguments your causes deserve.
I don't even understand why someone would make a well-sourced argument for… subsidized swimming lessons for black people? Voluntary associations gathering to address the grave threat of lack of swimming technique? Probably this is because I don't see how it ends up as a public health problem. The public health problem is that it (in conjunction with various amplifying effects?) means there's a 3x disparity in the rate of drowning. Samuel Skinner June 30, 2015 at 1:53 am Steve Sailor still reads and comments here; I don't know about the communists. Nornagest June 30, 2015 at 1:56 am Did the commie class murder fantasy people and the anti-black bigots armed with Damned Facts mellow out, wander out, or get chased out? Jim and most of his imitators got permabanned, IMO correctly; I'm basically okay with having neoreactionaries floating around, even the ethno-nationalist kind (whom I find the least politically interesting and the most ethically sketchy), but Jim's style was way too abusive no matter how good his sourcing. The commie murder fantasy perpetrator received a few temporary bans and largely mellowed out; they're still around occasionally. I'm not gonna name them here, because they might prefer it that way and they're actually a pretty good commentator when they're not fantasizing about having people shot. (Also because I may have dreamed of feeding annoying people to sharks, Blofeld style.) Alraune June 29, 2015 at 11:11 am Expanding on my above reply to Nita, I think that, yes, 5% is the right magnitude of exceptions there, but that the number is substantially higher within nerd circles. Partly because there's a tendency towards identifying primarily as your mind and having far-flung hypothetical interests that makes the question "if you were [other gender, other race, a robot blimp on Jupiter, etc. etc.] what would things be like?" seem more interesting and less repulsive than most people seem to find that class of question, but I think there's also a more fundamentally gender-specific portion. Being male or female in the social/gender sense is largely about what type of status games you naturally take to and bond by playing, and nerds are quite infamously bad at the normal status games, preferring to play their own. And nerd status games, though not really unisex, are sufficiently outlying that the male and female versions are closer matches to each other than to their corresponding normal games. When all your close friendships are of the nerd-bonding sort rather than the gender-bonding sort, gender identification falls by the wayside. On the average, the more intellectual a person is, the more neutered they look. Unfortunately I am not an exception – I would like to be far, far more masculine but it feels like every time I jump into an intellectual interest, my testosterone drops. onyomi June 29, 2015 at 1:37 pm A number of religious traditions like Hinduism, Buddhism, and Daoism associate androgyny with enlightenment and completeness, so maybe you just have the best of both worlds? https://en.wikipedia.org/wiki/Ardhanarishvara http://www.jstor.org/stable/41298758?seq=1#page_scan_tab_contents stillnotking June 29, 2015 at 8:33 am The Left Hand of Darkness, I guess? Which is an indirect way of saying "Not gonna happen without major and fundamental revisions of human nature." 
Paul Torek June 28, 2015 at 7:29 am
As far as Iceland goes, Sumner is at least partly right because monetary effects at least have traction: http://icelandicecon.blogspot.com/2012/07/interest-rates-and-indexation.html And since basically no neo-Keynesians deny that monetary policy matters, and since real interest rates in Iceland quickly went negative, Cowen's argument is radically incomplete at best. Iceland followed (net) expansionary policies, a Keynesian can argue, and got expansion.

But my previous comment wasn't about the economics argument; I'm just saying nyah nyah and calling Cowen a poopyhead. Because Iceland did its (merely-fiscal) austerity the "wrong" way from his POV. FWIW, I applaud Cowen's cautiousness about economic generalizations.

Oops, this comment was supposed to be threaded under Nathan's 6/27 10:41pm.

Paul Torek June 27, 2015 at 9:08 pm
Iceland did move to a primary government surplus … primarily by raising taxes. Is this the model Tyler Cowen wants to shout from the rooftops? And then there's Iceland's treatment of creditors of banks. The Guardian reported (Oct 2013):

Nobel prize winner Joseph Stiglitz agreed. "What Iceland did was right. It would have been wrong to burden future generations with the mistakes of the financial system." For Financial Times economist Martin Wolf too, it was a triumph. "Iceland let the creditors of its banks hang. Ireland did not. Good for Iceland!" Less good, of course, for the foreign creditors. […] "We raised almost every tax there was – and introduced new ones," recalled the then finance minister, Steingrimur Sigfusson, adding that there were considerable cuts in public spending too as government debt swelled to eye-watering levels.

More details from the EC (pdf): a special tax on higher income was abolished in 2006, but reintroduced in 2009. In 2010, the PAYE (pay as you earn) system, which had been basically a flat-rate system with or without a temporary surcharge, was replaced by a three-rate system. … The total rates for 2014 were set at 37.32% … for yearly incomes of up to ISK 2 897 702 (EUR 18 028), 39.42% … for incomes from ISK 2 897 703 to ISK 8 874 108 (EUR 55 210) and 46.22% for incomes above this value. (Math on federal+regional taxes omitted because formatting)

That document also mentions that the corporate tax rate increased from 15% in 2008-9 to 18% in 2010 and 20% in 2013.

Nathan June 27, 2015 at 10:41 pm
I'm not sure that you really understand what the economic argument is about. Fiscal austerity is fiscal austerity in Keynesian economics, regardless of whether it's done through tax increases or spending cuts. Scott Sumner has made this point quite often in particular.

Cowen primarily believes that Europe's (and America's) problems are structural and that the Keynesian AS/AD framework is not the relevant one to use. So he highlights an example of a country getting a very different outcome to what the Keynesian framework predicts. Sumner, on the other hand, is still pretty much on board with AS/AD but thinks that monetary effects render fiscal policy irrelevant. Both are low-tax-favouring libertarians, but for both men the argument over the correct economic framework is much bigger and more important than the tax rate.

Cowen also makes clear that he doesn't believe the Icelandic experience is generalizable. He doesn't believe what worked for them will necessarily work elsewhere. That's the trouble with Cowen. He tends to reject broad rules, so it's hard to find contradictions in his thinking.
Brian Donohue June 28, 2015 at 2:23 pm And lo and behold, Scott Sumner with some serious love for Scott Alexander: http://www.themoneyillusion.com/?p=29739 Austerity does mean raising taxes, it is known by every European, because it is faster to squeeze both ends (higher tax, lower spending) to repay the debt than just one. This also means that the anti-tax American Right have never really considered austerity. Bad news Scott, the euphemism treadmill caught up to you. You're gonna need a new word for "content warning." Wow, that's pretty ridiculous. The article you linked to is 2tribalist4me though. Just because a feminist blogger said something doesn't mean it's a universal declaration of the Official Feminist Opinion on the issue. The article itself looks more reasonable (to me at least) than the weird editor's note that precedes it. I actually consider the terminology change a slight positive development, since ditching "trigger" moves things out of the faux-medicalization gutter and should encourage… I was going to say "better dialogue", but that's far too optimistic so I'll just say "fighting fairly." Still demonstrates the futility of the supposed principle involved though: there is no circumlocution of sensitive topics sufficient to prevent discomfort, the warnings function entirely as a yellow star marking the speaker out for future assaults. Deiseach June 27, 2015 at 12:29 pm Esteemed and worshipful readers and commenters of this parish: let me take this opportunity to tell you how much I appreciate the quality of contribution on here (yes, even when we veer dangerously close to "You're a poopyhead!" territory). Because, although my eyes may glaze over at the mathematics involved when ye start digging into the statistical analysis, by Crom Cruach and his sub-gods twelve, at least ye understand and appreciate nuance and that there are distinctions and subtleties and above all reasons for the holding of positions other than "Are they merely stupid? Or evil as well?" Because ye can make an argument for why "You are a poopyhead!" and the degree, quality and origin of poop involved. This brought to you courtesy of a Tumblr blogging of a sequence from some TV show which ended up with "So go fuck yourself, Aquinas" (not a sentiment calculated to win my heart) where the writers were striving for Deep but only came up with Propaganda. If you don't understand the difference between "pride as a sin" and "pride which means self-esteem and a true valuation of your worth", please don't. Just don't. zz June 27, 2015 at 12:58 pm 4 * 5 = 30 0.002 + e^{i \pi} + \sum_{i = 1}^\infty \frac{1}{2^n} Deiseach is a poopyhead (-b \pm \sqrt{b^2 – 4sc}) /2 /s Deiseach June 27, 2015 at 5:16 pm Why, I'll….eyelids growing heavy…think I'll take that lying down?….can't…stay…awake…mathematics…zzzzzzzz Thanks you two, that was fun. stillnotking June 27, 2015 at 1:23 pm Any forum that can bring together neoreactionaries and social justice types without immediately degenerating into poopyhead territory is something pretty special. Yes. As someone who believes in The Usefulness of Dialogue among People of Good Faith Who Disagree, I think Scott is providing an instructive example and an important counterweight to the General Tenor of Internet Dialogue circa 2015, with the help of many commentators. Amen. Who would the social justice types be? (I know there used to be a few, but they don't seem to be commenting much anymore.) Foo June 29, 2015 at 3:29 am veronica d maybe? 
Anyway, I'm guessing they mostly either got converted to anti-SJ or driven away. Or got tired of the echo chamber. Mark, what I personally look for is disagreement that is 1) interesting and 2) offers new information. That's been getting harder to find. (P.S. The comment can be judged on its merits, regardless of what pseudonym I choose.) Try to engage substantively instead of, well, instead of what you're doing today, and see what happens. notes June 27, 2015 at 1:25 pm I may be in error here, but my own understanding was that Aquinas condemned as sinful any misestimation of one's proper worth/place (though he did draw a distinction between pride and pusillanimity, considering them related, and sometimes the latter stemming from the former… the one being overestimation and the other underestimation of one's self). It's a puzzling confusion: his definition of pride literally opens by defining it as overestimation of what one is, which by definition could not be a true valuation of one's worth. Well, Aquinas of course stole a lot from Aristotle, and Aristotle didn't have the idea of a "sin" of pride. For Aristotle it was entirely good for someone to be proud if they had something to be actually proud of; misestimation was the only thing that was a problem. This may be a place where Aquinas gets confusing because he is serving two masters, the Christian ideas about pride, and the very not Christian Aristotle. I think the modern confusion is because nowadays they very much do believe in the sin of pride, except (a) they don't have the notion of sin and (b) it only applies to people/causes of which one disapproves. Pride is otherwise considered a healthy self-esteem or reclamation of slurs or the like, a reaction to past mores where thinness/whiteness/maleness/intelligence/being from the West/conservatism/heterosexuality/cis gender/neurotypicalness/abledness etc. were considered desirable and superior, and those not possessing those traits were considered lacking. So White Pride is an awful terrible bad thing because racism (I'm not talking about neo-Nazism or the like, but could anyone imagine going around saying "I'm proud to be white!"). But Gay Pride is a great wonderful thing because LGBT! Any distinction between the two is considered hair-splitting, and any intimation of "pride is sinful" is "Oh, you want me to be ashamed of myself and you're only trying to put me down and keep me down, yeah?" The idea that anyone, even the persons of superior social causes, can suffer from the poisonous kind of pride is not considered. Oh, well. Times change and we have to change with them, right? And I was more irritated that the writers were trying to be Clever and Edgy but only came off as mouthing the pious platitudes of the day. If they hadn't felt the need to insult Aquinas, I'd have simply scrolled past it without comment. But you insult the Dumb Ox and I take it personal like 🙂 Tarrou June 27, 2015 at 6:06 pm I am most encouraged not by Scott (who does an admirable job here) but by the commentariat taking up the ideal as well. One man being admirable is relatively easy, though rare. His being able to influence so many others to up their internet-commenting game is something else entirely. @Tarrou: I wish you would enter the commentariat who has taken up the ideal… unseriousness begets sarcasm A Definite Beta Guy June 27, 2015 at 8:25 pm Ideas interest most of us more than rage. 
This is not the case for most people, who seek easily digestible feelings much like the 11-year-old in that new Pixar movie. My general thought on the matter.

Indeed, but Scott has gotten some pretty far-reaching coverage; he gets linked on very popular (and some very partisan) sites. This brings a lot of people who aren't necessarily old LW folks or serious skeptics. I'm impressed that the community self-polices so effectively.

Have we been descending into increasingly impenetrable jargon? That would be one obvious policing method.

you drunk?

for reference, I am very much like this when drunk

Tarrou June 27, 2015 at 6:24 am
Well Scott, the contagion may be limited to academia, but not to California! http://www.uwsp.edu/acadaff/NewFacultyResources/NFSRacialMicroaggressions_Table.pdf This one is even better: it is the considered opinion of the University of Wisconsin that even to deny that one is racist is a microaggression (for whites only, of course)! Simply staggering.

HeelBearCub June 27, 2015 at 8:57 am
The examples of specific micro-aggressions on race are: "I'm not a racist. I have several Black friends." "As a woman, I know what you go through as a racial minority."

You may not like the idea of framing things as micro-aggressions. But if I switch the context, I think you'll see how nonsensical the statements are. "I don't hate Red-Tribe. I have several Red-Tribe friends." "As a person working in tech support, I know what you go through as a farmer."

And the Motte goes up! Perhaps you'd care to justify the rest of them? "There is only one race, the human race." Asking an Asian American to teach them words in their native language. Asking an Asian person to help with a Math or Science problem. "I believe the most qualified person should get the job."

Oh man, I had a Chinese-American maths professor once! I microaggressed the SHIT out of him! I needed so much help!

Well, of course some of these statements are clearly contextual. But, if you accuse me of hating Red-Tribe, then in context: "There is only one race, the human race." –> There is only one tribe, the American Tribe. And in the context of you saying "It's been shown there is a hiring bias against people with Southern accents", then responding "I believe the most qualified person should get the job" as a counter-argument is, again, ridiculous.

For the others:
Asking an Asian American to teach them words in their native language –> If all of your great-grand-parents are from Texas (but you live in NYC and so did your parents and grandparents), asking you to speak in a Southern accent or sing The Yellow Rose of Texas.
Asking an Asian person to help with a Math or Science problem. –> Oh, you're from Alabama, how do you cook chitlins?

Finally, could I ask if you could be a tad less aggressive in your replies? Whatever dialogue you are trying to have gets a little lost.

DrBeat June 27, 2015 at 10:17 am
There is nothing malicious or condemnable about "There is only one tribe. The American tribe." other than that it doesn't make sense if they are talking about the whole world. It's a legitimate sentiment that people in America should stop backbiting each other over factional differences. What is wrong with that?

"I believe the most qualified person should get the job" was not specified as only bad in a certain context.

Asking an Asian person to teach you a word in their native language is not like asking a Texan to speak in a Southern accent or to sing. Asking for someone to convey information is not like asking someone to make a performance for you.
It also doesn't specify "saying Oh, you're Asian, you are good at math, help me with this." And this is not an accidental omission. It says "asking an Asian person to help with a math or science problem". So it's more like saying "Hey, I am cooking chitlins, can you give me a hand?" to the nearest person, who happens to be from Alabama. "Microaggressions" are clearly "anything that might hurt the feelings of a person who has self-modified their feelings for maximum hurtability." This is how they are used, again and again and again. That you can invent a context in which some of these "microaggressions" might be regarded as actually bad doesn't matter, because they have never and WILL never be regarded as microaggressions only in that context — the entire point of codifying microaggressions is to try and make them seen as bad outside of that context. HeelBearCub June 27, 2015 at 10:40 am @DrBeat: For argument, let's suppose I say something negative about (coded words for people in Red-Tribe), then you call me on it by saying "You are just saying that because you hate Red-Tribe", and then I counter with "There is only the American tribe." Would that not be an infuriating response? It's not the statement itself, it's the context. It's simply a distraction from the actual argument. Even in the context of you merely asserting the tribes exist, it's not a counter-argument, but a statement of desired utopia. The context in which the statement "I believe the most qualified person should get the job" is shown in the theme. Myth of meritocracy Statements which assert that race does not play a role in life successes "Asking an Asian person to teach you a word in their native language" Not an Asian person, an Asian American person (it's right there in the sentence). The micro-aggression is assuming that someone who looks Asian must not have been born in the US and/or must be a native Chinese/Korean/Japanese speaker. Why do you assume that someone from Alabama must know how to cook chitlins? Can't you identify with the idea that it grates to have people assume so much about you based only on one piece of knowledge? onyomi June 27, 2015 at 11:47 am I think the "assuming Asian American speaks Chinese/Japanese/Korean" thing is a legit, if minor, offense. But I think it's pretty Orwellian to say that statements like "I think the job should go to the most qualified person" are inherently offensive, though they could be, depending on the context. This is because it assumes a whole frame of reference which is not shared by everyone, or even close to everyone. A reasonable percentage of Americans, right or wrong, think that race is no longer a major obstacle to success in this country. A reasonable percentage of Americans, right or wrong, think that affirmative action-type programs are unethical and/or ineffective. To say "it is offensive even to express those opinions because they imply a world view that clashes with my own" is just a way of shutting down whole swathes of the population from ever speaking their opinion. "Not all Asians in America grew up in Asia and speak Asian languages" is not a controversial statement. "American meritocracy is a myth" is a very controversial statement. Discourse hygiene that demands that a controversial position be accepted as a prerequisite for polite discussion is intellectually dishonest. HBC, you can't make policy that relies on context to the extent you're defending here. 
Once these examples become codified as "microaggressions", people will just pattern-match, and defending the "aggressor" on the grounds that the context was different (or whatever grounds, really) will be both draining and dangerous.

btw, I've seen leftists who hate conservatives – "I don't, I have many conservative friends" would actually be very informative, especially in certain academic contexts.

I also disagree with other of your assessments, even the "ridiculous" one. This is not clear-cut. I can't imagine how this policy wouldn't hurt innocent people for innocent thoughts.

@Cauê / @onyomi: First let me start by simply reiterating that the specific comment I was replying to was a complaint that "even to deny that one is racist is a microaggression". Broadly speaking, I think this is weak-manning the argument about micro-aggressions.

I see people here point out that Blue Tribe – Red Tribe interactions are frequently based on bias, and that one, as a member of Blue Tribe, assuredly has biases against Red Tribe which color (see what I did there) their thought patterns. This is the subject of reams of writing on Less Wrong, SSC, and other places. If I deny that those biases exist and that they affect Blue Tribe broadly, you would laugh and find this to be complete nonsense. But even if I asserted that I, as a member of Blue Tribe, had no biases against Red Tribe and therefore my arguments, emotions, and thoughts were not affected by them, you would also rate this as highly dubious.

I think that Louis CK's monologue on "mild racism" covers some of this in a much more palatable way when we stop talking about Blue-Red and start talking about racism. Everyone has xenophobic tendencies.

@onyomi: "A reasonable percentage of Americans, right or wrong, think that race is no longer a major obstacle to success in this country." 50 years ago, a reasonable percentage of Americans thought that blacks were inherently inferior to whites. Surely you can see that this is not a sufficient condition for not paying attention to the problem that these types of internal biases cause?

Cauê: Can these types of policies be misused and lead to harm? Yes. Does that mean that they assuredly will be misused and lead to harm? Yes. But a policy of ignoring the problems caused by othering different races also leads to harm. These things are in tension with each other. Using the microaggression principle as a cudgel to beat people for innocent statements is wrong. There are certainly people on the left who have been pointing out the excesses, and I think it is right to do so. But that doesn't invalidate the original argument that microaggressions actually exist and are problematic.

To me, the question is not whether microaggressions are a problem, but whether they're a problem that can or should be solved with the intervention of official power structures. I feel strongly that they shouldn't, for a lot of reasons, the most obvious of which is that the potential for selective enforcement is so high as to be guaranteed. HBC, you as much as admitted this yourself when you said "everyone has xenophobic tendencies". Applying official sanction to something everyone does is a bad idea. The cure would be worse than the disease.

@Mark Atwood: See Jonathan Chait here and the follow-up here. In the second article, you can see links to two other left-leaning authors protesting the excesses of the current PC culture.

@stillnotking: If the government was specifically banning microaggressions in broad society, I would agree with you.
But that isn't what is happening. What we have are a host of organizations attempting to set policy only within themselves. Certainly that is complicated by the broad requirements under the Civil Rights Act, but I would need some citation to say that the broad excesses of these policies are caused by government. But, if your manager at the drive through says, "Don't engage in political chit-chat with the customers, it infuriates half of them" this would not be the kind of restriction of speech that is concerning. I thought we were specifically discussing campus policies here. I'm not talking about the government at all; I'm quite sure any such law in the US would be struck down on First Amendment grounds. Colleges, OTOH, can get away with deeply illiberal, unfair, and selectively-enforced policies without coming to national attention, and their victims usually have little recourse. Sorry, I misunderstood what you meant by – "whether they're a problem that can or should be solved with the intervention of official power structures" HR exists at almost every business in the country. They almost always have policies that cover this sort of stuff. They almost always go well beyond what the law requires. The kinds of things that are officially verbotten by HR happen fairly routinely, but this is much like the speed limit being 55 but most people driving 65. Colleges and Universities are unique institutions, and they do have some unique challenges. But they will still have HR policies. Saying that HR policies should not deal with these kinds of things seems fairly naive. HR has policies about harassment and discrimination, which are very different things from microaggression. Again, as you admitted yourself, everyone says something mildly xenophobic from time to time. Not everyone discriminates or harasses. Huge difference. Give college administrators the power to discipline students or faculty just for making some mildly or inadvertently insulting remark, like assuming an Asian is good at math, and I guarantee you will not like how it turns out. I speak from long and dire experience with the breed. (Administrators, not Asians.) Ironically, the selective enforcement could even be applied in racist ways. I don't really think "American meritocracy is a myth" and "blacks are not inferior to whites" are analogous statements for all sorts of reasons. But even if they were, if we were living in a world in which a large percentage of people still believed in white racial superiority, would it be the best strategy to just try to shut down any discussion of that question as inherently offensive? Of course, ostracism, disapproval, shame, etc. can be powerful weapons, but for reasons stated in "In Favor of Niceness…" I'd rather say that "bad argument gets counter argument, not bullet. Always." Nowadays we rarely need to present arguments against white supremacy because people rarely present arguments in favor of white supremacy outside of scary little corners of the internet where most of us would fear to tread anyway. But if 50% of the population still believed in white supremacy, then we'd need to make the case to them, not just tell them to shut up. "American meritocracy is a myth" may be a case worth making, but those who think it's correct need to make the case. They shouldn't just assume they're correct and proceed to treat disagreement as a social faux pas. 
I think a better comparison would be if I, as a libertarian said, "free markets and capitalism have proven beyond a shadow of a doubt to be the greatest systems for alleviating human suffering of all time; therefore, for anyone to imply otherwise is inherently disrespectful, hurtful, and obscene." *I* actually think that free markets have proven their value beyond a shadow of a doubt and am baffled by people who think otherwise. But the fact remains that I still live in a society where a decent %age of people would disagree with me on that. Is it more helpful for me to engage with them and keep making the case for free markets, or to just say "well, given that free markets are obviously the best, only rude, mean, stupid people could imply otherwise, so I'll just ignore them."? Corporations, I believe, generally ban offensive conduct. That includes, but is not limited to harassment or discrimination. In an "at-will" employment state, HR can fire you for literally anything they feel like, but certainly if you offend someone frequently enough, are asked to stop, and don't, they will fire you. I think the proof that they are analogous is within the statements themselves. There once existed a time when blacks were presumed inferior to whites, both as a point of law and as general sentiment. No matter how meritorious the black person was, they would not be afforded the benefits commensurate with that merit. We certainly can't look at the distribution of benefits across the races and say that blacks are in their position today, broadly, solely based on merit. There are plenty of people still alive who were subject to Jim Crow laws, let alone the accumulated weight of 400 years of slavery. See, but now you're arguing the case. My point is not that the case can't be persuasively argued, but that it can't be assumed. The fact that there exist good arguments for a position does not justify consigning all contrary statements to the realm of "micro-aggression." They could just be wrong and in need of correction by a better argument. If I say, "racism is not a big barrier to getting ahead in America today," and you disagree with that idea, it's perfectly reasonable to say, "but what about the legacy of Jim Crow? What about examples x, y, and z of institutional racism?" I do not think it is reasonable to react by saying "how dare you?! That's a hurtful opinion! I'm deeply offended and will report this to the student committee on speech and expression." I really don't know what to say. You said they weren't even analogous, I showed they were equal (actually antithetical, as you framed it). Now you are off arguing something else. It's feels to me like poor form on your part. I was responding to: I probably should have quoted the part I was responding to in the first place. Sorry if it wasn't clear. I was not responding to your contention that those cases are analogous. I could respond to that, but for me, at least, that would be getting off topic, as it is not the meta issue I'm interested in here. That's some nuanced context you got going on there, but allow me to attempt to elucidate myself. Please correct me if I am misrepresenting your argument, it is the first step to be able to formulate the opponent's side. 
-Given that racism exists- -Given that college professors are known to be hotbeds of deep-south conservative racisms, especially in California and Wisconsin- -Given that it is the responsibility of college professors to often engage in wild hypotheticals for the purposes of teaching- -Given that college students and college administrations have Never, not Once overreacted and suppressed speech that was innocuous or merely political- -Given that minorities are so famously delicate in their sensibilities that any bland conversation which accidentally reminds them that they are not white males is so distressing as to deny them equal opportunity to an education!- -Therefore, it should be HR policy that anyone who says any number of anodyne statements which could be wildly misinterpreted out of context by grievance-mongering shitweasels shall henceforth be considered that worst of things: a White Racist Oppressor! And until they get tenure, this will be reason to get them up in front of the hiring committee.- I think, on balance, Socrates would probably counsel against the "shitweasels" part. I hang my head. Jaskologist June 27, 2015 at 8:53 pm On a scale of one to ten, how racist are you? Careless June 27, 2015 at 12:26 pm it is the considered opinion of the University of Wisconsin Hey, University of Wisconsin at Stevens Point. Not that UW Madison isn't batty, but this isn't on them Oh man! What a terrible oversight on my part! It is not the considered opinion of Wal-Mart that we discovered a hateful plan to exploit the workers of Marysville! It's only the opinion of Wal-Mart Marysville! Seriously dude, we're now drawing distinctions between two campuses of the same college? Your criticism is that I didn't specify down to the franchise level in my short executive summary? Well, I find your criticism meritless, because you quote Tarrou and not Tarrou-Saginaw. There are states that have extensive networks of U of State schools, with more than one of them being significant (California being the most extreme example). Some states have no particularly impressive U of State schools at all, of course. Another common pattern is to have exactly one flagship campus in the U of State system that is of real significance, and a number of other small U of State schools scattered about geographically which may have strengths in narrow specialities, but in many respects barely outrank community colleges. University of Wisconsin is of this last type; University of Wisconsin at Madison is a serious institution, and any other school with the University of Wisconsin name is a very minor deal. In pretty much all cases of multiple campuses of the U of State type, when there's more than one campus, the different campuses have considerable independence and autonomy from one another, a lot more than in your Walmart analogy. To take an example likely to be more familiar to many people, the difference between Berkeley and UC Santa Barbara is a very big deal, even though California is an outlier among such systems in that even the "minor" University of California campuses like UCSB are still fairly substantial schools. Still not getting the point mate. The whole UC is propagating this microaggression stuff, and I'm merely noting another data point, far, far away from California, to add to Scott's "contagion" metaphor. UC is the biggest and most prestigious state school system in the nation. UW isn't, and I'm certainly not claiming that UW-SP is Harvard. 
I am claiming that UW-SP is UW, and that this demonstrates something of the reach of this sort of ultra-sensitive whinging. Samuel Skinner June 27, 2015 at 10:24 pm His point is factual. If you want the narrative it represents, rather than alarm, it is that the SJW stuff tends to be pushed the most by people who aren't institutionally high status. @ Tarrou: Well, I find your criticism meritless, because you quote Tarrou and not Tarrou-Saginaw. Saginaw, Michigan? If that's your town, awesome. It is, and it is! Awesome in a hide-your-kids-hide-your-wife kinda way, but nonetheless. 😛 For those who like this idea, there may be an experiment in the near future: http://www.independent.co.uk/news/world/europe/dutch-city-of-utrecht-to-experiment-with-a-universal-unconditional-income-10345595.html SUT June 26, 2015 at 4:02 pm If you want a cure for the Toxoplasma blues I'd recommend this one-minute interview with the Charleston shooter's black friend: http://bossip.com/1157939/charleston-shooter-dylan-roofs-black-best-friend-says-i-still-love-him-video/ I mean, what a breath of fresh air! Meanwhile the NYT and the New Yorker are cranking out opinion piece after opinion piece (like Roxane Gay's) designed to turn man against man. Preaching resentment, fear, hatred. Meanwhile, you never see anything like this video, do you? Something that makes you deeply sad, but filled with hope for the living. But if you find yourself floundering without that Toxoplasma rage, just go straight to the video's comments. Sylocat June 26, 2015 at 3:03 pm JRPGs usually only have four Crystals, for Fire, Water, Earth and Air. They are indeed in themed temples, though. (FFIV had 8 crystals, but the other 4 were Dark Crystals that were mirror counterparts of the base 4, so that doesn't count.) Now, Chaos Emeralds, on the other hand… DrBeat June 26, 2015 at 3:10 pm What about Secret of Mana? It had water, fire, earth, air, dark, light, moon, and tree elements. And if your theory of temporal mechanics doesn't account for Secret of Mana, your theory is wrong. Sylocat June 27, 2015 at 12:06 am Huh. You know, I remembered the Mana Spirits (Dryad was my fave), but I forgot they had crystals too. I think Secret of Mana was, and this may sound strange, too much FUN for a Squaresoft game. In more traditional JRPGs where the gameplay is all selecting menu options, you get used to reading the textboxes carefully, so you have an easier time following the story, which is good, since the story is the reward. In more action-focused games, you don't do as much reading during gameplay, and the gameplay is more of a reward in itself, so it's easier to lose track of the story. Does that make any sense? Random Thought Generator June 26, 2015 at 11:48 am Why would anyone ban bottled water, or even want to? And, if the despot in control has the power to ban bottled water, why doesn't s/he have the power to ban bottled soda pop too? CJB June 26, 2015 at 1:08 pm The argument presumably went that, as reusable water bottles and free tap water are everywhere, people would just stop drinking bottled water and drink tap water. The flaws in this argument are obvious to anyone who knows anything about how people work. So… not college administrators. Power is always subject to legitimacy; I suspect that any school administration (besides BYU's) that banned bottled soda would find its legitimacy called into immediate question. I suspect the "ban" in question consists merely of not selling bottled water in vending machines or campus stores. 
Still stupid, though. Daniel Burfoot June 26, 2015 at 10:57 am I was really excited to take the CSI bet against the AGW position, until I realized the terms of the bet were totally contrived. They are NOT looking at next year's temperature alone – they are looking at a moving 30-year average. That's mathematically equivalent to a bet that next year's temperature will be hotter than the temperature 30 years ago (consecutive 30-year averages share 29 years, so the average rises exactly when the incoming year is warmer than the year that drops out). meh June 25, 2015 at 11:38 pm The Vance article is more about the moral hazard of need-based financial aid than about sketchy collection. She doesn't really get into collection tactics. Seems more rant than rational. Zykrom June 25, 2015 at 6:31 pm Looks like Nick Land has gotten into seasteading. Ergot4 June 25, 2015 at 2:35 pm Environmentalists should actually push to repeal bans on plastic bottles. Yes, landfills are ugly. But few realize that recycling plastics hastens global warming. Petroleum that has been converted to plastic bottles can't be burned as oil, and won't create harmful greenhouse gases. So more plastic bottles are better for our atmosphere. You shouldn't recycle paper, either, if you're concerned about CO2. Paper is nothing but a block of carbon (n' stuff, but mostly carbon). As for the "they cut down trees!" argument – you think people are cutting old-growth oak for fucking paper mills? It's farmed cheap pulpwood and pure carbon sequestration. Most landfills rapidly become anoxic and decay stops – people who professionally do garbage anthropology say you can still find unrotted stuff from the 50's in there. Very effective way to sequester carbon. James Picone June 25, 2015 at 9:11 pm AFAIK recycling paper is also quite energy-intensive compared to just making more paper from farmed-for-paper trees. I *think* that there's some local environmental benefit in reducing interesting chemical runoff from bleaching and the like, but I'm not very certain on that. TL;DR: Environmental tradeoffs are hard, just make emitting CO2 more expensive with a tax and make the market figure it out. houseboatonstyx June 26, 2015 at 5:08 pm Our local recycling center chops it up and sells it back to me for mulch/composting, kitty litter, animal bedding, packing things for UPS, etc. The recycling center adjoins the landfill transfer point, so my paper rides along with whatever I'm taking to the dump, and rides back home* with me, chopped. So negligible transport cost. * It rides home with me if the workers are very quick, or if they're an example in a forum post. Otherwise I carry someone else's shredded paper. As an environmentalist, I've been saying that for years; better the oil should go back in the ground where it came from, than up in the air as smoke. Bicycling is counter-productive too; the gasoline we don't buy makes gasoline cheaper for the bulldozers and chainsaws. Bicycling, at least, means the gasoline you would have used can go to other uses. But bicycling as an environmental choice doesn't make much sense, really, given the tradeoffs you make. The other benefits are what make bicycling worth it. Harald K June 26, 2015 at 3:37 am The crude oil you don't use will be used, that's true. It might be for something with bad environmental externalities, as you suggest, but it may also be for something good. As James Picone says, put a carbon tax on it and let the market figure it out. 
But the crude oil you don't use will reduce aggregate demand ever so slightly, which will reduce price ever so slightly, which will eventually make some extraction unprofitable that would otherwise be profitable, leaving slightly more carbon in the ground. Not much, but I would guess about a tank's worth. So even absent a carbon tax, bicycling is definitively a good idea. Paging David Friedman…. @ Harald K My TL;DR here is: Oil prices go up and down; trees go down and stay down. The oil industry is big and has many elastic elements: supply, demand, overhead, interest rates, tax rules. The oil industry (or the Market) knows how to handle a small loss in sales, and even the highest amount you could dream of would be small to them. If the price of gasoline did go down, neighbors would fill up their chainsaws and use them this week. Investors would buy low. Some industries that use a lot of fuel would stock up. Some industries that have been using some other fuel would switch to gasoline. Some that were considering switching to clean power will decide to stay with gasoline a while longer. When gasoline prices leveled out, the investors would sell high, but your neighbor's trees would still be down. Another problem is, the bicycling approach needs a lot of people, but has a low ceiling on recruitment. There are only a limited number of USians (for example) who could feasibly do without cars often enough – there's your ceiling. But there is a threshold of gas sales lost that you have to reach before the oil market would even register the loss. I'm afraid that threshold is far above the ceiling. Douglas Knight June 26, 2015 at 9:04 pm trees go down and stay down Trees are, in fact, a "renewable resource." We can grow new trees. In particular, the paper industry does grow new trees. It grows trees as fast as possible, taking as much carbon as possible out of the air. It uses young trees, so it quickly reaches equilibrium – it really is harvesting trees that it planted. houseboatonstyx June 27, 2015 at 12:35 am @ Douglas Knight >>trees go down and stay down >Trees are, in fact, a "renewable resource." We can grow new trees. Unless the land they grew on has been cleared for some other use, in a project made possible by cheaper gasoline.* Even if new trees are planted there, the site will lack comparable trees for a couple of decades. * Or possible without it, of course. @Houseboatonstyx: I don't think that is the way to look at it. You really need to look at long term sustainable use. I believe that once a tree matures, it has done most of the carbon capture it will do. So, given a plot of land that is in use by the paper industry and harvested every 35 years, if you then don't harvest at 35 years, I think the net carbon in the atmosphere goes up over harvest and replant. There are plenty of other environmental costs of timber harvesting, but assuming the land is replanted and can grow the trees naturally sustainably, I don't see why carbon cost would be one. And the idea that the thing stopping your neighbor from chainsawing their trees is the high price of gas seems a little bizarre to me. If you live on a residential street, its just not true. If you neighbor is a timber grower, then the cost of fuel is almost surely a very small part of the overall cost of production, carry costs on the land and labor would dominate, I think. And also the idea that construction or development is limited by fuel cost also seems wrong. Broadly, economic growth drives development. 
If fuel prices are high enough to kill growth, sure. But I don't think that is the argument you are making. houseboatonstyx June 27, 2015 at 10:50 pm @ HeelBearCub, Yes, we seem to be looking at quite different things, and quite different neighborhoods. I was not addressing carbon or tree farm harvesting, but erosion, loss of habitat, of bio-diversity, etc etc – whether in acres or in a single big tree in someone's yard. To the alternate universe where US bike riding now might influence* mining practice eventually, I prefer the universe where there are some marginal projects in which fuel cost for bulldozers and chain saws is a non-neglable consideration, and consequences will occur (or hopefully not occur) immediately rather than eventually. Untangling the double+ negatives, I prefer to keep driving my car (to worthwhile destinations) rather than possibly seeing more trees cut now. * Directly, rather than by signaling for other action. Environmentalism isn't about reducing X in the environment, or saving Y. It is purely and totally puritanical self-flagellation and conspicuous consumption for status whoring. Riding a bike sends a signal. "I am better and more moral than all these car-bound clowns, and healthier to boot!" Recycling pretty much anything other than metal is environmentally damaging. The Prius is more carbon-intensive than gasoline powered vehicles, and so are large pets like dogs. None of the totems of the environmental religion actually help the earth in any way, shape or form. They are just a way to signal to one's co-religionists via tasteless food and expensive-but-shoddily-made-and-itchy clothes that you are one of them. >Riding a bike sends a signal. "I am better and more moral than all these car-bound clowns, and healthier to boot!" But someone who rides a bike instead of driving actually is healthier and morally better than you. John Schilling June 26, 2015 at 10:47 am I ride a bike pretty much every day; I reject the claim that it makes me morally superior to anyone else. zz June 26, 2015 at 9:13 pm Not only are us bikers healthier (than we would have been driving), we're richer (than we would have been driving). TheNybbler June 26, 2015 at 9:22 pm What good is money if you're not going to buy a cool sportscar with it? Who ever said anything about not buying a cool sportscar? I'm just going to use it to make Justin Bieber parody music videos, not drive (Unless gas prices go up.) Only 1/3 of the wealth generated comes from the price of the car, don't you know. Tibor June 27, 2015 at 9:26 am Your point with recycling may be valid (but also add glass to the metal), I am not entirely sure though. I believe it almost definitely holds for paper, because the costs of collecting it separately and most importantly chemically cleaning it before further use (which is limited anyway) is something that seems rather expensive, while paper is a renewable resource (true, not all paper is being made from industrial forests but then one could simply switch to those paper products that do and that do not harvest rainforests for example). I am less sure about plastics, since it uses oil in its manufacturing and it is unclear when alternative to plastics which do not use oil will be available. You actually need glass shards to produce more glass today in the usual industrial production, plus the fact that beer bottles come with a deposit even in countries where it is not required by the state is some evidence that this actually pays off. 
Biological waste also seems relatively sensible to me, you can burn it or use it as fertilizer without any special treatment. Funny thing is, that I actually only sort out paper (mainly out of habit, I guess it does not really help much), because in Germany you usually have to get plastic bags from the city which are for plastics and then you just leave them in front of your house (whereas there is a container for paper…never understood why it is done differently for plastics – in Austria or the Czech republic there are containers for that as well) and I don't know exactly where to get those bags (although I also have not really made much effort in finding them). Your statement about bike riding is just utter nonsense, sorry to say that. I have both a car and a bike and currently live in a small town (130 000 inhabitants). I can get anywhere in 10-15 minutes by bike, I don't have to worry about finding a parking place and I save money while doing that. Bikes are simply more practical here (most of the year, winters are not so bad here, but still I tend to go around either by car or by the city bus in January and February). There are also bike tracks around which make it safer (they are not everywhere, but they are along the biggest roads so you don't risk getting hit by a car). Also gasoline is way more expensive here than in the US (but not prohibitively so…1 liter costs about 1.4€ , so about $1.5.) Of course, if you live in a big city without bike tracks and with huge distances (or even live in a suburb and commute to work in the city every day), then riding a bike becomes impractical and those who do either do it because they like biking, they like to save gas money, they want to exercise regularly on their way to work or they actually do it for status signaling (but notice that there are many alternative explanations as well). Of course, if you live in a big city without bike tracks and with huge distances (or even live in a suburb and commute to work in the city every day), then riding a bike becomes impractical [….] That describes much of the US, and bicycling replacing auto use is even more impractical in our rural or semi-rural areas. Perhaps I should have shown my US colors earlier in this sub-thread. 😉 As for recycling, and what use is made of the materials, I'd suggest asking one's own local recycling center what they do. This varies greatly in different areas at different times, and new ways to use the materials (and better ways to process them) are constantly being developed. The deposit on beer bottles is for reusing not recycling. That is, just washing them out and filling them again with beer. In some sense this is very efficient, but no one does it any more. houseboatonstyx: Yeah, the distances in the US are much bigger than in Europe and more people live in the suburbs and commute to work. Also, probably parking is not such a problem there? Most cities in Europe have a lot of parking restrictions in the city centres, so that you either have to pay for parking or it is not possible at all (unless you are a resident in that area). So unless your company has its own parking place or unless you work outside of the centre, driving gets pretty complicated and it is faster to go either by public transport or by bike. In southern countries like Italy or Spain people also use scooters a lot. Douglas: Nobody? Well, at least in (parts of) Europe it is pretty common (although only for beer or milk if it is in glass bottles, not for wine or hard liquor for some reason). 
In Germany, even some plastic bottles come with a deposit – this is however something artificially imposed by the government by legislation pushed by the Green Party and did not use to be the case before. You are of course right that this is for reuse, but it is weak evidence for recycling of glass as well since it is worth collecting old glasses and washing them as opposed to making new ones. I doubt chemically treating the bottles and sorting out the damaged ones is much cheaper than making new ones, so that suggest that it is especially the material that makes them worth collecting…I could be wrong of course, which is why I am saying it is a weak evidence. Tibor, do they really reuse the bottles, or just put a deposit on them? My understanding is that beer bottles all used to be identical, so they got reused between bottlers. But now people want custom bottles, if only a glued-on label, so they don't reuse them. Milk is a different matter, because people usually only buy milk from a single seller. And if you have a milkman, he can pick up the empties when he drops off the fulls. Even in America, I know people who return fancy milk bottles to the store. It's not terribly uncommon to reuse growler- or half-growler-size beer bottles, at least if you frequent brew pubs or craft breweries; IME it seems most common in the Pacific Northwest. I think this is a relatively recent phenomenon, though, and if you try to do it at a grocery store or a corner liquor store they'll look at you like you've grown a second head. TeMPOraL June 25, 2015 at 2:29 pm RE intelligence tactics, I'm really surprised the article doesn't even mention the word "zersetzung", which is the name of this known and effective method that was favored by the Stasi. Doug Muir June 26, 2015 at 3:14 am The "moving stuff around" technique would certainly be one form of Zersetzung. But Zersetzung included a much wider range of techniques, including various forms of harassment, entrapment, and indirect intimidation. Doug M. Matt M June 26, 2015 at 3:44 pm Given how things turned out for East Germany, we might dispute the "effective" part… Just because these techniques were used by crappy, unpopular regimes that ended up falling doesn't mean they're not potentially effective techniques. I'd say the jury's still out on that one. LHN June 27, 2015 at 3:19 am I assumed that the argument was that the techniques didn't turn out to allow the regime to survive the withdrawal of the outside support that was keeping it in place for any measurable length of time. (Unlike, e.g., North Korea or Cuba, which while crappy and unpopular proved not to require active Soviet maintenance to endure.) Vermithrx June 25, 2015 at 1:42 pm On the robots stealing our jobs: I predict there is a feedback loop hiding the job loss when you compare countries. When productivity increases prices can fall and the companies innovating the fastest in that direction will take a larger share of the manufacturing market and expand their operations, leading to more jobs in the country they are located in but fewer jobs elsewhere as their competitors lose marketshare. I won't have a chance to look at the study linked in the article until tomorrow, so don't know whether the study estimated the global productivity increase vs. job loss by combining their country data or not, but that is the direction I would want to look. 
Bill Walker June 25, 2015 at 1:32 pm The hippo article was entertaining… but it says that the colorful protagonist wrote in 1917 "…we were all turning into military robots". "Robot" in the sense of "autonomous machine worker" dates from 1920, Karel Capek's book. So colorful character becomes colorful time traveler (possibly robotic). LHN June 25, 2015 at 1:58 pm Wikipedia shows his writing running from 1926 to 1945, so it's not implausible that "robot" would enter his vocabulary on recollection of his Great War experiences, even if at the time he might have thought "automaton" or whatever. SpicyCatholic June 25, 2015 at 1:20 pm Obviously, L'affaire Dolezal has had legs because it came on the heels of the Jenner news, and gave those who are skeptical of/hostile to the current Transgender Moment a test case to probe the assumptions and principles of the pro-Transgender (for lack of a better term) crowd. Two things are apparent: (i) even if you conclude that there are important differences between Dolezal and Jenner, it's not absurd to ask the question and work through the reasoning; (ii) Dolezal/Jenner expose the internal contradictions and incompleteness of the SJW left. There's no quick answer to why Transgender & Transracial are meaningfully different. I've read smart people explain it, like in Scott's link. It's not a short argument. You can't explain it in an elevator conversation. In order to explain it fully, you need a grounding in what we mean by gender and race, and it's not as if we've got those figured out perfectly. You need some understanding of genetics. The arguments get philosophical quickly. So you can't just dismiss the proposition that "Transracial" is as legitimate as "Transgender" the way you would dismiss the proposition that the Chicago Bulls and Chicago Blackhawks are equally good at hockey. And when you try, you look silly or worse. I saw some Twitter comments that went something like "Gender is how you feel in your brain, race is about your DNA." Whoa, hold on there: race went from being an unscientific social construct that isn't real to Hard Science. All it took was someone comparing being Transgender (good!) to Transracial (bad!). Even Scott's link, while otherwise thoughtful, leads off with a dubious assertion: that "a typical black person cannot just choose to be white." While it's true that most people who we think of as black cannot pass for white, the converse is equally true: most white people cannot pass for black – not without tremendous effort. And when even chocolaty black people put in that effort, they can pass for white. See Jackson, Michael. If you're dismissing the Transracial arguments quickly, you're doing something wrong. The way I see it, the SJW left has been forced in advance to reach the "Transracial isn't a thing" conclusion. All their arguments must lead there. It's a matter of the history of race in America: You can't do blackface, ever. You just can't. That's one of the big undisputed no-nos. There's an almost visceral reaction from the SJWs – an involuntary rejection – when someone who is not black does anything that looks like he or she is pretending to be black. Witness the objections to Iggy Azalea and her "blaccent." So regardless of the independent merits of the "Transracial" arguments, the SJWs absolutely need it not to be a thing. They were ideologically committed to it not being a thing long ago. And the SJW's really don't want to have this conversation, because to do so exposes the contradictions in their positions. 
They want to sigh loudly and move on. That's why their arguments have titles, like at the link, such as "About all I have to say on 'transracialism'." In other words, here's my argument, end of discussion. It's barely a step up from "'Shut up,' he explained." Just some of the contradictions: (1)(a) In order to say that a person with white skin can't "feel black" the way that someone with a penis can "feel like a woman," you must accept that there's a real difference in the way that the brains of men and women generally operate, and that these differences map to stereotypical gender norms. (1)(b) We're supposed to take people at their word regarding their gender identity. We don't get to examine Jenner's brain to determine whether he or she really feels like a woman. But if someone says they feel black, we can *never* accept that. (2) If you argue that gender is a mere social construct and our brains are blank slates at birth, upon which the culture tells us how to act, then the same must be true for race. (3) Race isn't real except when it is. Contrary to what SJWs insist, "Black Privilege" is as real as White Privilege. White Privilege may be more advantageous generally, but there are some settings where being black confers an advantage. All the talk of racial boundaries being fuzzy and artificial disappears when someone whom our culture doesn't consider to be black, like Dolezal, attempts to gain the advantages of blackness. At that point we get very clear definitions of race. (4) Supposedly it is possible to "feel like a woman," but there's no such thing as "feeling black." Yet when Jenner went from Bruce to Caitlyn, he began expressing a white woman's gender. He didn't express gender in the way that a typical Japanese or Mandinka would. Jenner felt a need to express a racialized gender. If Jenner's transition resulted in her acting like a sassy black woman, she'd be accused of racial appropriation, and she wouldn't be able to defend herself by saying that's how she felt. She MUST feel like a white woman. But that's not a thing, right? Many of these contradictions can be resolved by assuming two things: (i) when we say "race," we're really talking about ancestral origin within the past 500 years. By "black," we generally mean that you have plurality ancestry from sub-Saharan Africa from that period forward. On its face this isn't that controversial. But the SJWs don't like to admit that this is how we define "black" because it makes it something that can be objectively measured, and not mere social hand-waving. (ii) There are actual physical differences between the way men's and women's brains operate, and sometimes this can get crossed up. SJWs refuse to move off the blank-slate model of the brain. When people bring up "you're born either male or female according to your genitalia and your chromosomes", people making an apologia for transgender like to bring up intersex people and conditions such as Klinefelter Syndrome and other chromosomal abnormalities to show that there is no such thing as an absolute biologically determined binary gender system. So what about the argument that you can't be transracial because there are biologically determined racial characteristics? Such as the amount of melanin in the skin? Or epicanthic folds of the eyelids? Well, Down's Syndrome used to be called Mongolism (and persons with it were described as "Mongoloid") because the slanted eyes of the syndrome were considered similar to the epicanthic folds of Central Asian peoples. 
XXXX Syndrome people can also have epicanthic folds. So "natural distinctive" racial features can be mimicked by chromosomal disorders in the same way that a person with Klinefelter's will be perceived as phenotypically male even though they possess two X chromosomes, which is associated with "being female". A black mother can give birth to a white-skinned child. White-skinned parents can have a black-skinned child. If simple biological determinism is not enough to define gender, we may be finding out that neither is it enough to define race. If the existence of intersex people can prop up the arguments for "And even though I have the functional genitalia and the non-aneuploidy chromosomes of one gender, I am actually of a different gender, two genders, genderfluid, agender, or third gender" of transgenderism, then perhaps (as we are now coming to accept transgender people not as mentally ill or fakers looking for attention) in the future we will accept the experiences of transracial people whose genetic profile explains why they have the features of Race A when belonging to Race B (or mixed-race) as supporting the identities of people who say "Although I have the functional phenotype of Race C I am actually and have always been Race D". I think the interesting implication of the acceptance of Jenner's identity and the refusal to accept Dolezal's (which I think is the correct position), is that it actually implies the relative biological *in*significance of race relative to gender. In centuries past, people probably assumed white women had more in common with white men than with black women. But I think this is actually not the case. Biologically, the difference between the men and women of any given race is bigger than the difference between the men or women of different races. Not just talking about genitalia, of course, but about the average levels of all kinds of hormones, etc. Some feminists have opposed Jenner's self-identification as female because, they say, to truly be female, you have to have had the experience of growing up being treated as female. A comedy, of course, but for Steve Martin's character in this movie, I think there's a real sense in which we might say he *is* black (despite his inborn love of twinkies): https://www.youtube.com/watch?v=O7why8Xo_RQ Conversely, I think that, even if my parents had, for some reason, raised me as a girl–dressing me as a girl, making me use female pronouns, etc. I would still have *felt* like a boy. If my parents had really wanted me to be a girl maybe they could have made me feel like one by giving my pregnant mother hormone injections of some kind, but by the time I was born, the fact that I had a man's brain had pretty much been set in stone. And there are weird cases of this–not just transgender, but, for example, a boy whose penis was removed at infancy due to a botched circumcision and the parents decided to raise him as a girl. He had intense gender dysphoria and always felt he was a man. Another joke, but, isn't there a real sense in which Barack Obama *isn't really* the first black president? https://www.youtube.com/watch?v=EDxOSjgl5Z4 Of course, he's been *treated* as black by those around him, and to the extent that that, or skin pigment, is what it means to be black, then yes, he's black. But to the extent that it means something cultural, he's really not very black at all. He was raised in Indonesia and Hawaii by white people. 
This is the same reason I get annoyed when certain fancy institutions fill their black people quota with the children of powerful, wealthy African families. Other than their skin tone, these students don't have that much in common with African Americans. Now had Herman Cain been elected, he would have definitely been the first *African American* president (note that Barry Obama started to embrace his blackness once he went into politics and became "Barack"). Now, of course, if you are a black child raised by white people then you are still going to be treated differently in public even if your parents treat you no differently than their white children. But this only means that you've still got half of the equation missing: your parents treated you like a white person (member of their culture), but society is still assuming your culture corresponds to your skin tone (which, in this case, it does not). In other words, race is relatively cultural, gender biological (though, as with everything, there are elements of both at work in both cases), and to this, all the usual caveats apply. I think this may run counter to a lot of prevailing narratives, even though I think it's the logical corollary of the acceptance of Jenner and the non-acceptance of Dolezal. The basic assertion you are making here seems correct to me. Another way to state this might be that sex has a bimodal distribution. There are those who end up being the rare instances that don't clump around one set of characteristics, but they are rare. Whereas "race" (whatever that is) is more like a lumpy 3D hill. You can slice it lots of different ways to try and group people into so-called races, but the categories end up being fairly arbitrary. One of my favorite illustrations of that is that during the Irish immigration wave, Irish immigrants weren't considered white. In fact, newspapers of the day frequently depicted them as dark-skinned and attributed to them all manner of negative characteristics. Now, if you isolate a particular set of characteristics and treat them differently, we can then start to see large "racial" differences. The incidence of slavery among whites in 1860 America was essentially 0%, but that doesn't mean whites were "naturally" not slaves and blacks "naturally" slaves. Those are TERFs (Trans Exclusionary Radical Feminists). They exist, their comments will be signal-boosted, but they don't represent the mainstream of current 3rd-Wave Feminism. As to the Obama not being Black thing, to me that is just a "let me slice this hill a different way". It's a no-true-Scotsman argument. I mean nobody ever said Barry White's name wasn't black enough. If he was going by Barry and people then "found out" his given name was Barack, they would have had a field day with that. He might be in a slightly different place on the hill than we would call the "average" black man, but, frankly, that is true of almost all presidents. In defense of the newspapers of the day, the Irish of the Irish immigration wave were very noticeably different-looking. Based on the photos I've seen, I'd guess the average self-identified Irish-American today looks only a quarter as far from the white mean as the average first-genner did. HeelBearCub June 26, 2015 at 10:45 pm That sort-of misses the point, doesn't it? Edit to de-snark: The Irish, both here and back home, have not undergone some radical transfusion of genetic material in the last 100 years. 
Edit 2: it's important to note that it was mostly the previous wave of German immigrants complaining about the current wave of Irish. There was quite a bit of motivated and biased reasoning in the Germans wanting to separate themselves from the Irish. Larry Kestenbaum June 27, 2015 at 3:34 am No, but people who grew up in conditions of grinding rural poverty, disease, and malnutrition in the Auld Sod of the 1840s are going to look quite different than their affluent First World great-great-grandchildren. CJB June 27, 2015 at 3:52 am The term usually used for people who look like that is "Black Irish" "The term is commonly used to describe people of Irish origin who have dark features, black hair, a dark complexion and dark eyes." according to IrishCentral.com And various pre-20th century brit writers describe various strains of celtic ancestry as "dark". Point being, HBC, that talk about xenophobia towards the Irish usually comes from an assumption of absurdity: "Arguments over whether the Irish were white should always have been patently ridiculous, because I can barely tell if people are Irish. So this [and it's often argued by extension, race itself] must be entirely about cultural perception." And that's ahistorical. That the discussion took place in terms of white/non-white demonstrates how firmly committed the American paradigm has always been to collapsing group identity questions into the Race axis, but the Irish immigrants were quite visibly a xenogroup. So the conclusion I take from the situation is more like "when a xenogroup shows up, people notice and react in the same fashion even if their cultural paradigm and vocabulary are horribly ill-suited for expressing what makes the group different." "when a xenogroup shows up, people notice and react in the same fashion even if their cultural paradigm and vocabulary are horribly ill-suited for expressing what makes the group different." Isn't this precisely my point? Or, perhaps more accurately, why do you assume that their cultural paradigm and vocabulary do anything else other than identify the xenogroup? It's the fact that there is a set of characteristics that mark them as a xenogroup that matters. Nothing else in their attributes does. @Larry Kestenbaum: I'm arguing that the Irish immigration (and really, many immigrant waves) disproves the notion that our classification of race has very much to do with some intrinsic property of the genome of the people immigrating. The fact that the immigrating Irish may have had deficits that were environmentally caused boosts this claim, doesn't it? @CJB: Whether Black Irish is really a thing and doesn't just refer to "foreigners with dark intentions" is ambiguous according to what I assume is the same Irish central article. But even so, there definitely is no indication that Black Irish dominated the immigration wave. We would see these Black Irish today, wouldn't we? The Irish were broadly described as "dark" and "not white" at that time, by those who wished to impugn their nature. And I believe this has been true for most immigration waves. I always thought "black Irish" simply meant "Irish person with dark hair, as distinguished from the large percentage of Irish with red or otherwise light-colored hair." I, for example, have blackish-brown hair but very fair skin with freckles. My brother has red hair and even lighter skin. I thought this made me "black Irish," to the extent I am Irish, as an American with a big proportion of Irish ancestry. 
The fact is, Irish people are even whiter than most "white" people, as I can attest from days spent at the beach. I think it's correct that calling people "swarthy" was just a stock way to describe menacing foreigners. Remember, also, that people had less means of dispelling such notions. If you don't know any actual Irish people, you can't just GoogleImage them to see if the description is, technically, accurate. I think it's also correct that Irish people, especially if malnourished and wearing different clothes could look weirder than one might imagine. If your standard for "white people" is a well-fed person of even skin tone with straight brownish or blondeish hair, and you encounter a hoard of people with bright red, curly hair, neon white skin, freckles, a weird accent, unusually skinny and small, always hanging out together, then those people are going to seem pretty foreign. Nowadays, Irish have been largely subsumed within the category of "white," in the same way people have stopped paying as much attention to finer distinctions within "white," like Jewish, Italian American, German American, etc. but difference is all relative. If you're from England and all you've ever met is people from England, then people from Ireland probably look weird and foreign, and sound funny when they talk. But if you then meet someone from Asia, Africa, or the Middle East, then perhaps you realize that the difference between you and the Irishman was actually quite small. Ironically and almost amusingly, I remember reading that very early accounts of Chinese immigrants to California described them as "stupid and lazy." @ HeelBearCub: I would certainly think so! (If you thought I was making the opposite point, well, one of us is confused.) HBC: I'll confess, I wasn't paying too much attention to what your point was. I expect I was mostly on a tangent. I'll restate. The argument that the Irish were non-white is usually used in the following logic chain: 1. Modern Irish-Americans are universally perceived as white. 2. Irish immigrants were not always perceived as white. 3. Modern Irish-Americans look essentially the same as the Irish immigrants did. 4. Therefore, only the internal perception of their race has changed. 5. Therefore, perception of race is culturally arbitrary, and also people in the past were dumbasses. …but point 3 is false, leaving us with: 3. Modern Irish-Americans look substantially less different than Irish immigrants did. 4. Therefore, both the internal perception of their race and what was there to be perceived have changed. 5. Therefore, this example wasn't particularly informative, which we probably would have predicted if we weren't all such historical chauvinists. And I don't exempt myself from the indictment for historical chauvinism here, it blindsided me as well. The only reason I'm mentioning this is that I recently took a tour through a local history museum, was hit with this nagging "one of these things is not like the other" feeling as I went through the galleries, and was shocked to realize that the category I'd discovered was the Irish. onyomi June 28, 2015 at 1:39 am @Alraune In what ways did the 19th century Irish immigrants look so different from present-day Irish Americans (and indeed, from Irish people living in Ireland currently)? 
I've been looking for a good gallery to demonstrate, and ideally also to get more evidence for it having been dietary, but the internet is really big on publishing drawings of emaciated famine victims and frustratingly uninterested in photos of them a couple decades later. I did find this impressively terrible graph claiming Ireland contained more people than Europe until 1910 though. That can't be right… It makes it look like the population of Ireland 200 years ago was almost as large as that of India today! What is the context of that claim? It is true, however, I believe, that there are more Americans of Irish descent in the US now than there are Irish people in Ireland. It's supposed to be the population of Ireland (scale on the left, million/dem) vs. the total population of Europe (scale on the right, hundred million/dem). Regardless of whether #3 is really true, because I think @onyomi has a good point there, do you think the Irish of today are the genetic descendants of the Irish of 100 years ago? I can't imagine 2/3/4 generations is significant enough to allow the kind of genetic drift that would say they are significantly different merely based on random mutation or evolutionary pressure. But do you see some other way that Irish should not be representative, genetically, of The Irish of yesterday? Assuming that you think they are representative, the add-on to point #3 is simply that judging the capability of people based on how they look and their existing economic circumstance is foolish. This is the main argument, not that the Germans, Irish, Italians, Poles, Greeks, Jews, etc. of yesterday didn't look "different" from the descendants of the Mayflower, but that judging them based on their external appearance was nonsensical. Time after time, people did this. Do you think that you can tell, from a person's external appearance, assuming they are healthy, how capable they are? Yes — the fact that ethnic groups in America intermarry at high rates. My own ancestry is not so unusual: my four grandparents, children of immigrants, came from four different ethnic groups. And I was born six decades ago. The "ethnic purity" (Jimmy Carter's awkward expression) of a given immigrant population declines monotonically from generation to generation. Let's say there is such a thing as the Irish genome, which is characteristic of almost everybody in the all-Catholic parts of Ireland. Presumably you could use this to develop a DNA test for the extent of a person's genetic Irishness. How many babies born in the U.S. today would test as 100% Irish? I'm guessing almost none. Fifty percent Irish? Maybe very low single digits. Anecdote: on a visit to Boston some years ago, before the Big Dig, on a very slow-moving expressway, there was an old guy walking from car to car, raising money for the Knights of Columbus or something. This man's face caught my attention. He just looked so extraordinarily Irish, like a picture from Dorothea Lange's Ireland. It's not a common experience here (perhaps outside Boston) to recognize a person as Irish from physical appearance alone. Sorry, I thought about spelling out that I meant Irish on both sides of the pond. I guess I should have. Certainly, if the Irish of yesterday looked so different as to seem "not white", and it was due to genetics, we would see some population in Ireland today that matched it? If not, how could one explain it? 
Even in the US, we would expect to see some remnants of the population, if for no other reason than they would look so different as to reduce intermarriage. There is an important point about intermarriage to be made though. For whatever reason, when those of African ancestry intermarry with those of different ancestries, the resulting offspring tend to show very clearly the heritage. Of course, this really might say more about the societally imposed costs, than it says about whether the visual markers are particularly noteworthy. But there is no substantial population of people born on this side of the pond who are anything like "as Irish" as typical natives of Ireland. How about the fact that "not white" had a different meaning 150 years ago than today? Americans in the 1860s probably had a much narrower idea of what counted as "white". But I'm probably just repeating what you and others have said upthread. This assumes that there were Irish immigrants who would be judged "non-white" by an observer today. Absent any evidence, I dismiss that completely out of hand. If you graph two different values on the same field against the same X-axis with different Y-axes, you are going to Science Hell. @Larry: @Alraune is arguing against the idea that the treatment of the Irish serves as an example of race being socially constructed. The Irish in Ireland of today would be incontrovertibly white. Either the idea of white has been socially constructed to include today's Irish, or today's Irish look much different than the Irish back then. Your point about our view of what is "white" being less narrow today is exactly what I am trying to get at. Well, then, we agree, and my apologies for missing or misunderstanding the previous context. Ah, I see, I didn't even look at the left side of the Y axis, because yeah, that's weird to plot two different values on the same field like that, though I guess the point is just to show the population of Ireland relative to that of Europe. That is, it shows that Ireland went from being about 1/60th of the total population of Europe to being about 1/10th of the population of Europe in a very short time. A less confusing graph of just the population of Ireland does reveal that same spike: https://upload.wikimedia.org/wikipedia/commons/thumb/c/c1/Population_of_Ireland_since_1600.png/330px-Population_of_Ireland_since_1600.png What's interesting to me is that the famine is preceded by an unprecedented population explosion which I guess, must have been at least partially predicated on widespread availability of potatoes (+improved medical care? Though that wouldn't explain the explosion relative to the rest of Europe?). So the famine is in some ways more of a painful regression to the mean rather than a painful net loss of population. I guess there can be a population bubble, just like an economic bubble. Also makes Malthus's concerns more understandable. What's interesting to me is that the famine is preceded by an unprecedented population explosion which I guess, must have been at least partially predicated on widespread availability of potatoes The first problem is that we don't have any accurate data for pre-Famine population of Ireland. Estimates run anywhere from six to ten million, so the eight million is the "splitting the difference" figure. Secondly, pre-Famine and post-Famine society in Ireland changed dramatically. 
The European trend was for later and less fertile marriage (England was in line with that); the Irish, on the contrary, married early and had lots of kids, and we appear to have had lots of those kids survive birth and early childhood. Funnily enough, the diet of potatoes, fish and milk was described as being healthy, since people seemed to thrive on it. Possibly for the same reason the "fasting makes mice live longer" diets work? What contributes to the complication of the whole picture is the political background; since (for example) the Penal Laws aimed to break the inheriting of large estates or farms by Catholics, the pattern of sub-dividing plots of land between all the sons of a family, so that each was enabled to marry and start a family of their own, as you could grow a sufficient crop of potatoes to feed that family on a small amount of land, was established. High population meant cheap agricultural labour. Landlords were also not averse to having many tenants on small plots of land, as they could charge higher rents (demand for land meaning that people would pay as much as was feasible). The Famine disproportionately affected the poorer and lower-class sections of society, which are always the ones with the highest birth-rate, but it also affected the cultural landscape. Marriage was now delayed so people were older when they married; land was concentrated in the eldest son rather than being split between the sons; dowries and fortunes were expected with daughters so again, not all the daughters of a family could marry. Tolerance of childbearing outside marriage dropped drastically ("Victorian values" in action). Emigration was the safety valve for the unmarried men and women to leave, those who had little to no prospect of work or marriage or inheriting land. One of the ironies of our colonial history is that an increasing population was touted as a good thing for England, but a bad thing for Ireland. As you can see from this graph, the population of Great Britain increased in a nice, steady fashion over a century from 1801-1901. Despite Malthus' warnings, a growing population was seen as a sign of a rich, prosperous, thriving nation. Thanks for the history, Deiseach. It's interesting to think about how drastically material conditions can affect cultural mores, like those surrounding childbirth and sexuality. brad June 28, 2015 at 11:40 pm I wonder what percentage of people in the US have at least one Irish great-grandparent and 100% Catholic great-grandparents. houseboatonstyx June 29, 2015 at 1:35 am @ deiseach Despite Malthus' warnings, a growing population was seen as a sign of a rich, prosperous, thriving nation. I thought it would be more like, "Despite the fact that a growing population had long been seen as a sign of a rich, prosperous, thriving nation, Malthus warned that [whatever]." Perhaps it depends on who is doing the seeing. Malthus was agreeing with some old dead philosophers, against then-current popular opinions. Thomas Robert Malthus's An Essay on the Principle of Population was an immediate succès de scandale when it appeared in 1798. […] he found himself attacked on all sides–by Romantic poets, utopian thinkers, and the religious establishment. http://www.amazon.com/Malthus-Life-Legacies-Untimely-Prophet/dp/0674728718 Good Burning Plastic June 27, 2015 at 3:10 am [Obama] was raised in Indonesia and Hawaii by white people. 
On the other hand, as someone once put it "I've heard he sleeps with an African-American woman", so I'd expect him to have some intimate familiarity with African-American culture. I've heard the same sentiment expressed less snarkily by Eurasians as "we inherit the race of our children." That actually sort of proves my point–that race is as much cultural as genetic. But can you imagine saying "I know all about the struggles of growing up female… I'm married to one!" I don't think it was even properly cultural to start with. The Anglo-Franco-Germanic Alliance didn't have one culture, and certainly didn't think of itself as having one culture, that came later. I think that, even if my parents had, for some reason, raised me as a girl–dressing me as a girl, making me use female pronouns, etc. I would still have *felt* like a boy. I've always wondered about that, because I have a sister who seems to have the same personality and the same brain as I do in pretty much all relevant senses. When I moved out, the rest of my family claims she started to act even more like me, so the nurture influence seems to have if anything been towards artificial divergence. So I expect you could have raised either of us as the other, and neither of us seems to be gender dysphoric. David Friedman June 28, 2015 at 10:38 pm " Despite Malthus' warnings, a growing population was seen as a sign of a rich, prosperous, thriving nation." I don't think that is in any way inconsistent with what Malthus wrote–rather the opposite. His argument was that raising the standard of living of the masses would result in population growth, and that raising it by as much as Godwin and Condorcet expected would result in an exponential increase at something close to the biological maximum (since in a society that rich, having children doesn't deprive the parents of anything that matters, and people enjoy sex), which would eventually outrun any plausible increase due to economic growth and drive standards of living back down. That, at least, is my memory of what Malthus wrote in the Essay on Population, and consistent with the versions of the iron law that appear in his contemporaries, especially Ricardo. Echo June 25, 2015 at 12:59 pm "That is it. That is human sex selection. Everything else is triggered by hormones, including the hormones present as the brain develops." That's just… not true. At all. I sympathize with the people who would love to believe it's true because it makes hormone therapy appear incredibly powerful, but there are several other known mechanisms responsible for sex differentiation. Larry Kestenbaum June 25, 2015 at 11:43 am Not directly relevant to the current thread, but for anyone who wonders about my day job, I'm in the news today. CJB June 25, 2015 at 11:45 am Oooooh, look at all that sexy, sexy public affirmation. Your SMV just went up. You are no more a BETA CUCKOLD ORBITER. You have now risen to WEAK ALPHA. I suggest spending more time on BRUTE STRENGTH and GENERATING MASS. Ladies love MASS and they hate BETA CUCKOLD ORBITERS. SFG June 26, 2015 at 11:05 am Only BETA ORBITERS and OMEGA MALES spend their time tearing down BETA ORBITERS and OMEGA MALES on INTERNET FORUMS. Hah! But logic is an intellectual tool, and as everyone knows, only OMEGA MALES are NERDS. What good is your logic against BRUTE STRENGTH? SFG June 26, 2015 at 3:19 pm Pretty good on an INTERNET FORUM where you can't actually BEAT ME UP. 
Besides, this is a bit of a strawman–the whole point of the redpill/PUA/MRA philosophy isn't that logic is useless, but that it won't get you laid. And yes, it is useful, but it won't get you laid. PUA is all about applying brainpower to solving the problem of how to get laid. Redpill and MRA are not the same thing, not related, not comparable, and not aligned. MRAs are all about how dudes should not have to "get laid" to prove their value as human beings. Stop lumping them together. walpolo June 26, 2015 at 11:55 am Looks like you'll be hard at work starting today! 🙂 Larry Kestenbaum June 26, 2015 at 12:33 pm It's much quieter than I expected. So far today, we have processed six marriage license applications, of which five were to opposite-sex couples, one to a same-sex couple. This is in a county with a population around 350,000. I am hearing similar reports from counties around the state. There has not been a big rush of applicants so far. Of course, the decision was announced only about two hours ago. By contrast, when Michigan had a one-day window for same-sex marriage on March 22, 2014, we had a long line waiting outside the door when we opened. We issued 75 licenses in four hours. Less urgency since it's a clearly permanent change, plus no impetus to be First due to the previous temporary window? Yes, it's ordinary business now. And perhaps in 2014, we drained the pipeline of people who were really anxious to get married right away. Final marriage license total for today: 25, of which 18 were same-sex, and 7 were opposite-sex. How do those overall numbers compare with a typical Friday in June? Confirms my hunch that gay marriage was more "rah gay people" than anything that had significant impact on the world. Imagine a bully taking the toy of another kid and then not playing with it… that's what the Blue Tribe did with marriage for the Red Tribe. Protagoras June 29, 2015 at 3:31 am Except in your bully analogy, the other kid can't play with the toy any more. What has the blue tribe deprived the red tribe of? Attention, approval, and points, obviously. A better analogy would be when the pretty, popular sister takes art lessons because she's jealous of the attention her mousey sister gets for making drawings. In the simple analysis, it looks like an improvement for the first girl with no loss for the second, but since the motive is overwhelmingly and transparently to win in the zero-sum sibling rivalry, the total outcome will be a protracted and spiteful series of fights that the entire family loses. @Foo, @Alraune: What these comments prove is that there is simply no point trying to engage in rational discourse with the opposing side. If you wanted to find out what gay marriage was really "about," you'd have done so. what gay marriage was really "about" 2% personal desire for it, 8% friends with the first group, 90% status play? I have to think you are just trolling with these comments. Or, conversely, that everyone who gets married mostly does so because of "status play". I mean, the low number of cross-racial marriages post Loving v. Virginia doesn't suggest that those who supported the proposition that it should be legal because it was a right were simply bullshitting about their reasons for backing it. If I wasn't convinced that he is just trolling, I'd ask what kind of evidence would be sufficient to change his mind. But he's just trolling. Or, conversely, that everyone who gets married mostly does so because of "status play".
As a ballpark figure, I'd buy it if someone said 90% of gay marriage activists weren't planning such a marriage, nor had any close friends or relations who were. Now, I might object to collapsing all impersonal motives into "status play", which is only true under the broadest, most Hansonian view of status. But I don't think it's realistic to say that gay marriage activism was driven mainly by people who personally wanted to marry their partners, either. Maybe in 1994, when this was first on my radar, but not now. (Not trying to grind an axe here — I think this was about the best realistic outcome under the current regulatory regime.) Everyone who gets married mostly does so [as a] "status play". "Play" in the sense of a football down, not roleplay. And, yeah, I do think that status is the primary reason people route their relationships through the gigantic expensive year-consuming headache of a public ceremony that we've indoctrinated them into prizing for years and years. I mean, the low number of cross-racial marriages post Loving v. Virginia doesn't suggest that those who supported the proposition that it should be legal because it was a right were simply bullshitting about their reasons for backing it. It also doesn't suggest they were all dedicated and consistent rights theorists. I try to be one of those dedicated and consistent rights theorists myself. The first thing you learn is that for the vast majority of people, "Is X a right?" is an indistinguishable question from "is your general opinion of X positive or negative?" The second thing you learn is that even where your opinions are perfectly logical, the levels of emphasis each has won't be, which means the output isn't. So, yes, what drove the majority of interracial marriage supporters is more complex than "simply bullshitting", but not by that much. They adopted their opinion because someone else held that opinion and they thought agreeing with them was socially advantageous, and they broadcast the opinion for the same reason. We live in a runaway feedback loop in which apparently strong social consensuses just being the amplified opinions of some remarkably small number of people who would give a damn if it changed is the normal situation, not the exception. (In some cases, that "remarkably small number" is zero, and you have a strong, actionable consensus that consists of nothing but socially amplified noise.) This is so completely wrong that I'm almost speechless. I think you don't realize that the context in which Loving was decided in 1967 was completely different from the context of Obergefell in 2015. A Gallup poll in 1968 (the year after the Loving decision) showed only 20% approved of interracial marriage; 73% disapproved. Before the ruling, 15 states prohibited marriages between blacks and whites, and those laws had more support than opposition in national polls. The Lovings, plaintiffs in the Supreme Court case, were prosecuted under the Virginia law, convicted of a felony and sentenced to 25 years in prison. This is not some minor inconvenience. When the Warren Court took the Lovings' case and struck down the miscegenation laws in 1967, the ruling was quietly celebrated in some places, but it certainly wasn't a nationally popular decision. Indeed, a few months after the ruling, the daughter of U.S. Secretary of State Dean Rusk, a young white woman, married a black man. This was thought to be such an embarrassment to the Johnson Administration that Rusk offered his resignation! (LBJ didn't accept it.)
In 1967, there was certainly no "runaway feedback loop" expressing support for interracial marriage. Larry, I do not see how any of that is in any way relevant to Alraune's claim. That Loving occurred at a different point in the trajectory of popularity from Obergefell has nothing to do with the question of why there was such a trajectory in the first place. The Lovings were sentenced to 25 years of exile from Virginia (or 1 year of jail), not 25 years of jail. I'm certainly discussing events I wasn't around for here, but I did know what the interracial marriage approval trends looked like, and was incorporating them as best I could when I spoke. My claim that consensus on social issues consists mostly of amplification of an interested minority opinion, with most people being naturally ambivalent and only holding an opinion because they were told it was a good one to have, is quite consistent with the interracial marriage approval numbers. At the time of the Loving decision, the public stance towards interracial marriage was consensus condemnation. When it stopped being a live issue because the law was decided, it slowly drifted from consensus condemnation towards no opinion for 30 years. There was no feedback loop in '67, but in the late 90s, you see a sudden new trend in which people start to approve of interracial marriage en masse, and that seems to have happened entirely because "approve" took that final doddering step across the 50% line and became The Majority View. Interracial marriage has now gained consensus approval symmetrical to its original disapproval rate, which is itself evidence that the approval rate primarily reflects factors inherent to the social structure rather than to the issue at hand. The primary difference between the interracial and gay marriage situations appears to be that the Warren Court was much less sensitive to the wind than the Rehnquist Court was. SCOTUS should have taken up the matter in 2004 after Massachusetts legalized SSM, and we've spent a decade paying the price for their hesitance. BBA June 30, 2015 at 8:25 am If SCOTUS had considered the question in 2004 it would have gone the other way. (Kennedy is a squish, he'd have bought Scalia's invective back then. Today it just seems absurd and even Kennedy can see it.) Since they generally won't consider reversing a decision until about 15-20 years have passed, we'd end up where we are now in 2020 or so. @ Anonymous: You're right — the source I relied on got that wrong. But to be (1) convicted of a felony, and (2) expelled from your home state, on threat of imprisonment, is surely no small penalty. @ Alraune: There was no feedback loop in '67 My claim that consensus on social issues consists mostly of amplification of an interested minority opinion, with most people being naturally ambivalent and only holding an opinion because they were told it was a good one to have Arguably, that describes the formation of public opinion on any issue. Political views are meaningless in isolation, and almost nobody is a "consistent theorist" whose ideology overrides their environment. in the late 90s, you see a sudden new trend in which people start to approve of interracial marriage en masse, and that seems to have happened entirely because "approve" took that final doddering step across the 50% line and became The Majority View.
There's a little bump right after 50%, sure, but overall, approval of interracial marriage shows a very gradual increase from the 1950s to the 2010s, consistent with social changes and generational succession. It was a slow process from start to finish. Interesting question which the state of Michigan is struggling with right now: should marriage licenses have blanks for the gender of the parties? I mean, should the gender of the parties be explicitly given on the face of a completed marriage license? What's the argument in favor of having such a blank at this point? (Assuming it's been articulated.) Is it specified in the legislation or regulations that define the form? John Schilling June 29, 2015 at 1:42 pm It would probably be useful for statistical purposes. I expect quite a few people would be interested in those statistics, if this isn't one of those subjects we don't want federal bureaucrats to be gathering statistics on for fear they will be abused. (Nit– state rather than federal bureaucrats for marriage licenses.) There are a fair number of marriage-related statistics that are (or might be) tracked and of interest that aren't (presumably) included on the license form. E.g., tracking racial and ethnic disparities in marriage rates, religion, income of the parties, etc. I'd be leery of turning the marriage license into a survey form on that basis. Given that, I'm inclined to presume against using it that way in this case, unless the state has a persuasive positive reason for doing so. It's not something I'd go to the wall over or anything, but it seems as if keeping it is more a matter of inertia than something that would be done if starting from scratch. (And I'd say Chesterton's Wall doesn't really apply in this case, since the wall itself has been torn down, for better or worse. We're trying to decide if some of the landscaping associated with it still makes sense now that it's gone.) Update: yesterday, following a discussion about how awkward it is to deal with people of ambiguous gender (e.g. birth certificate shows one thing, driver's license shows something different), the committee voted overwhelmingly to omit gender blanks from the marriage license form. Today, there is some pushback going on, so the discussion continues. Cauê June 25, 2015 at 11:25 am I'd like some help understanding the one about race in Brazil. Also, some thoughts: First, I'm happy to see a USian study about this. Brazil and the US have large differences in race relations, despite very similar history, and the comparison should teach us interesting things (IMO the most important difference is that we don't have a "white culture" and a "black culture" around here; cultural differences follow region and class). Also, from all I've seen, we're embarrassingly bad at studying it. The paper is right in saying that skin tone is the determinant characteristic, and that a history of miscegenation gave us a very large number of people of "ambiguous race". As a personal anecdote, my 5-year-old stepson recently asked us "what color am I?", and only then did we find out that my wife thinks he's white and I think he's "pardo" (a word that basically means "anything between white and black"). Anyway. The paper says that people who go from reported non-white on the old job to reported white on the new job are more likely to be going from plants where most people are reported non-white to plants where most people are reported white, and vice versa. 
They also say that workers that go from non-white to white get a wage "premium", but at the same time they're going to plants with lower average wages. (are the ones getting better wages and the ones going to lower-average-wage firms the same people? are the majority-white firms and the lower-average-wage firms the same places?) They explain it with workers "manipulating" the perception of their race so they'll get jobs in places that discriminate against non-whites. Now, from inside the culture, this looks super weird to me, but my stats-fu is weak and I can't assess how well they supported it (my instinct would be to look at whether there's a pattern to employers' different perceptions of where the racial thresholds are on the white-black scale). Dore June 25, 2015 at 2:27 am Very sad to see an article that I consider to be of very poor quality, with very weak arguments, described as "interesting throughout, but what makes it for me". The only one of those countries sitting on a pile of oil is Norway, and whilst it has over 2/3 the GDP/capita of the others its growth rate isn't any higher, its income inequality is only worse than that of Sweden and its innovation is always ranked top 15 (which shouldn't be the case if innovation was negatively correlated with equality as the following quote seems to claim). [Innovation] disproportionately comes from economies where "incentives for workers and entrepreneurs results in greater inequality and greater poverty". An even better example would be Sweden which is, according to rankings of choice, 1st in income equality, 3rd in innovation. It makes no sense whatsoever to say that innovation is, thus, the result of a "gap of incomes between successful and unsuccessful entrepreneurs", as claimed by the original "research" paper. Even if the discerning factor for innovation was in pure absolute terms, the world has its fair share of both small and large countries, and quickly eyeballing the values doesn't suggest to me that there is any significant correlation (yes, eyeballing is not the way to do it, and if anyone else is up to the task I bet $10 the correlation won't be found). In other words, the government can really screw things up, if it wants to, but it can't likely meaningfully increase the rate of growth above the level of innovation that the global system will support. This is a telling paragraph that shows a lot about where the author is coming from (and the reasons I believe have led to the conclusions, namely ideological). There's no evidence saying that the Scandinavian governments need a growth rate above any arbitrary level of innovation that the world might or might not be able to support. Their growth rate is far from being high and I think their relative success has more to do with their ability to provide relatively more for relatively less than with them having more to provide with. (In regards to gdp/capita Norway is the only Scandinavian country within the top 10, runner up is 17th. In regards to gdp/capita annual growth rate they're even worse, all ranking below 40th. They're able to provide quality services to their population despite growing relatively slower than other countries (or maybe because of it)) And if you think that Acemoglu, Robinson and Verdier are right, then Scandinavia simply doesn't need to focus on innovation, as long as the United States is willing to carry that weight. I do have more things to say about this article but it's getting late and I need to sleep.
I fail to see how that article, the one from WP, and the original research are anything but the result of a jingoistic bubble. None of those pieces present any convincing argument for the "theoretical framework" and yet it spreads and gets signaled as if it were even passably accurate. If you look upthread, you'll notice I opened my comment on the article with "It's Megan McArdle". McArdle is — generously — an extremely sloppy writer. The less generous interpretation would be that she doesn't care about the truth or falsehood of her assertions; she's a successful conservative columnist who has moved steadily from one high-prestige gig to another. To what purpose, other than preaching to the converted? I disagree with your characterization of McArdle, btw, having first noticed her in the context of some very well researched and well written essays. But if she were wholly unknown to me, that wouldn't change the fact that your claim is an unsupported, off-putting ad-hominem. Any claim of the sort, "That's by X, and all right-thinking people know better than to trust X", is, well, a big red flag for me. So if you want to do anything more than make noise in an echo chamber, the first thing you need to do is to show why this article is poorly-researched and factually inaccurate. If you do that, it might then be helpful to note that this is a pattern we should watch out for. Until you do that, there is no a priori reason for us to trust you over McArdle, or anyone else, and the fact that you're here making unsupported accusations and they aren't, is reason for us to distrust you. Autonomous June 25, 2015 at 3:53 pm "There are some cases when it is not (ad hominem) really a fallacy, such as when one needs to evaluate the truth of factual statements (as opposed to lines of argument or statements of value) made by interested parties. If someone has an incentive to lie about something, then it would be naive to accept his statements about that subject without question." http://www.nakedcapitalism.com/2012/09/project-s-h-a-m-e-on-megan-mcardle-portrait-of-a-taxpayer-subsidized-libertarian.html Edward Scizorhands June 25, 2015 at 10:34 pm I often wondered what kind of person would ever post something like what you just did. The essay opens with the fact that her father worked for the government, so she somehow cannot be libertarian . . . no, I've never bought that one, but I've always been glad to see someone's opening salvo be "let me tell you about this person's parents." It saves a lot of time. Glen Raphael June 26, 2015 at 1:59 am "let me tell you about this person's parents." Sure, but my favorite bit of silliness was this claim: "The IHS attempted to hide McArdle's involvement, scrubbing her name from the dinner announcement page." If you click through there are links to demonstrate that, yes indeed, while they were still selling tickets the IHS events page listed specific details about the expected program alongside the info about cost and time and location. Specifically it said ticket-buyers could expect to encounter all of the following: (a) remarks by Charles Koch (b) a tribute to Walter Williams (c) some stories from "a parade of IHS alumni", and (d) McArdle as the MC. BUT LATER when somebody updated the page to note "WE ARE CURRENTLY SOLD OUT OF TICKETS" and point people at a waiting-list, they GOT RID OF the no-longer-so-important "program" section of the page. Which means they eliminated ALL of this from the page: (a), (b), (c), and (d). 
Clearly IHS was ATTEMPTING TO HIDE that their event…had a program? I'm confused. Edward Scizorhands June 26, 2015 at 9:48 am Oh, it's a brand new user. McArdle has her stalkers that look for any mention of her name and post their same old links. That's what we've got here, nothing more. Bahahahahahahahahhahhahahahahaha! Oh man, McMegan's Internet Stalkers have found SSC! We'll need to get the exterminator in, those things are hell to get out of the carpet. Doug Muir June 25, 2015 at 4:20 pm John, I have two long comments explaining why it is, in fact, badly researched and inaccurate. Noting that someone has an awful track record is not an ad hominem attack. It's the argument on credibility, which is related but distinct. It's not "I don't believe Megan because she's a bad person"; it's "I don't believe Megan because she has a track record of making non-fact statements." This is why courts since the days of Ethelred and Edgar have admitted evidence about a witness's past untruths: they're relevant to judging the witness's credibility. Now, you can counter by saying "no, actually her track record is quite good, or so ISTM", and then we could hash that out. Fair enough! But if in fact someone is consistently sloppy, fails to fact-check, repeats tired old tropes without checking, or just seems to flat out lie a lot…? Then they lack credibility, and it's entirely reasonable for us as readers to take that into account. That seems like an awfully pedantic distinction, but never mind that. Seeing as how Doug Muir is a known fabulist, why should we pay any attention when he says that Megan McArdle is a known fabulist and not to be trusted? Statements of the form, "[X] is not to be trusted; trust me on this", do not seem to me to convey any useful information beyond signaling the tribal affiliation of the speaker. That is a poor justification for a defamatory and possibly libelous statement, and I'd rather not see that sort of thing here. John, how is this case different from the comments others made earlier about Arthur Chu? Public and media figures have track records and reputations, which plainly bear on their credibility. No difference at all. Invoking the name "Arthur Chu" to dismiss an argument merely signals that the speaker and his intended audience are part of the Tribe That Arthur Chu isn't in. It's not persuasive, and it is neither kind nor helpful even if it is true. Yes, people have track records and reputations. Unless a person is exceedingly or very recently famous, specifically for being some sort of fabulist, assuming that your audience knows and agrees with that record/reputation is, again, just tribal signaling – we are the tribe that is defined by this bit of objectively-trivial tribal knowledge, boo McArdle and/or Chu, yay us. Otherwise, support the accusation. Autonomous June 26, 2015 at 6:12 am What's it called in European football when an intemperate competitor transparently feigns an injury? I never said Arthur Chu's arguments should be dismissed out of hand, I said his articles shouldn't be given pageviews. I want running Arthur Chu to be a bad business decision. Okay, question for John. Say someone links to an article saying that MMR vaccines are linked to early-onset Alzheimer's. Someone else promptly posts, "Dude — that article is by Andrew Wakefield." Would that be acceptable discourse? Cerebral Paul Z. June 26, 2015 at 12:18 pm Surely Andrew Wakefield comes under the "famous specifically for being a fabulist" exception?
A better parallel case would be if someone cited a paleoclimate reconstruction and someone else replied "Dude– that article is by Michael Mann." That would bug me even though I agree to a large extent about Mann's untrustworthiness: open questions should not be treated as if they were closed. It's not the ad hominem that people are really balking at here; it's the ex cathedra. In most cases it's going to be somewhat context-dependent. Mann is less of a fabulist than Wakefield, but probably better known. For either one, a specialist community can reasonably say, "this person generally doesn't meet our standards, and is locally famous enough that we all know him and know not to trust him without outside verification". But if you've got people linking uncritically to articles by Mann, Wakefield, or whomever, then it's pretty clear that you're not actually in a community where everybody knows them as unreliable. In which case, yes, CPZ has got it about right. @John Schilling: Generally, I think you are right. Most statements of the variety "Well, that was written by so-and-so" are essentially signalling the belief that so-and-so is identified with a particular tribe. But, if Keith Olbermann writes an article talking about how horrible and awful and disgraceful [something] is, it is in fact a useful datapoint to know that a) He is very closely associated with Blue Tribe, and b) He is well known for his emotional hyperbole. Just because someone from Red Tribe were saying "Dude, that's Olbermann" wouldn't make that fact any less true. In a Bayesian sense, my prior for Olbermann producing a well-reasoned argument that is not full of hyperbolic and emotional argument is low. So, if I see an argument that appears to be restrained and well-reasoned coming from him, I will only give the argument credence after more close examination than I would require for an unknown author. Certainly, citation to back up the assertion of an author's particular trend is needed if it is not already common knowledge. But I wouldn't reject out-of-hand an opposite tribe assertion of flaws with a particular author. I would rather see, in this case, some sort of citation or argument to indicate why Doug M. thinks that McArdle is generally sloppy (rather than merely sloppy in this case). sneezus June 25, 2015 at 12:23 am He killed himself by attaching guns to helium balloons. Have you looked into the earth being flat yet? baseball June 25, 2015 at 12:15 am so did anyone actually respond to the vox article that Vance links to? he makes some good points. i am actually really disappointed with the way the EA community is handling this… seems they are just as tribal as everyone else. "Givewell good, Harvard bad! Screw you and your technicalities about research!" Thanks for flagging that article. It's very good. Murphy June 26, 2015 at 6:05 am I was expecting better. He just more or less says "nu uh!" Harvard does research but is it likely that the extra 400 million will actually yield 800,000 dead children worth of research? It's easy to hand wave and imply that research into anything has infinite payoff because research into how many hookers the board members of Harvard can bang might be vital to the earth's survival next year but it's stunningly unlikely that the 400 million will be used even vaguely effectively. I know being skeptical of Harvard is trendy around these parts but I've never heard them accused of straight-up massacre before. Not a direct massacre.
Using dead children as a unit of currency is a very old idea of Scott's that has become a minor meme in EA circles; the idea is that every ~$500 you spend on research could be spent on e.g. malaria bednets, which would save roughly one expected life. Note that this is an old value and probably a low one; I can't be bothered to look up where I remember this from, but I seem to recall that more modern values are an order of magnitude higher. And of course it's vulnerable to the same criticisms that EA in general is. If you're a believer in overpopulation as a problem, is not giving to charity the ultimate charity? And I like that measure. I use similar measures: "Well, I dunno- this *thing I need* is kind of expensive…no, wait. It's 20 bucks. That's *one lunch with colleagues*. Will this provide me with more utility than that?" On the other hand, it makes it hard to justify doing anything at all that isn't work and charity. "Every dollar I don't give, a swollen baby dies screaming in its sobbing mother's arms! This is the cappuccino of a MONSTER!" I used to measure utility in burritos. I make a little too much money for that to work well now, but it's not a bad approach if you can find something that compares well in price. Murphy June 29, 2015 at 12:20 pm http://www.raikoth.net/deadchild.html One Dead Child or DC is the marginal cost of saving the life of one 3rd world child. When the article was written it was about $800. I used an older number. Used for opportunity cost comparisons. baseball June 27, 2015 at 3:37 pm I don't doubt that Harvard's research money could be better directed. But I'm doubtful that 100 years later, looking back on the most impactful projects of the early 21st century, malaria nets will be among those included. It seems like EAs have a bias towards projects with sure impact so they can feel like they are good people. I would rather the EA movement become "like Harvard, but prioritizing research based on EA principles" than what it's doing currently. Samuel Skinner June 27, 2015 at 7:50 pm Why? People who are healthier are more productive and people who are alive are more useful than corpses. While it's unlikely we will get earth-changing benefits from the difference, it isn't impossible. It certainly would improve the situation in the countries affected when people get the opportunity to think a little more about the long term. Let me give an example. You know gypsies? They aren't exactly famed for their massive contribution to mainstream society. However they certainly did produce scientists and engineers: http://www.imninalu.net/famous-Gypsies.htm#Scientists (4 scientists and 1 Nobel prize winner). Not too bad for a population estimated to have less than a million people before WW2. As for optimizing research, I don't know if we can make any improvements over what is currently done. Scientists and engineers are already motivated to discover the most useful or interesting breakthrough and I don't see how outsiders could provide anything but marginal improvements in their search. baseball June 29, 2015 at 3:10 am Scientists and engineers are already motivated to discover the most useful or interesting breakthrough and I don't see how outsiders could provide anything but marginal improvements in their search. So you don't think Harvard's research funds could be better directed? I do. I don't think I or other outsiders could easily figure that out.
If it was easy, people at Harvard would have already done it- individual scientists are incentivized to research things that pay off. You can improve it, but I'm doubtful it would be in the same difficulty range as "deal with malaria". Has the EA crew considered the effectiveness of carpet-bombing with DDT instead of mosquito nets? That worked pretty well for eliminating malaria in the US, and if I were African I'd be happy to sacrifice a few birds for the cause. (Quick googling gives one study.) Mosquitoes can develop resistance to DDT. In areas where there is a history of DDT use in agriculture, the local mosquitoes may already have had a chance to develop such resistance, and more widespread spraying may also just cause already resistant species to spread out. For various reasons, using DDT residentially doesn't seem to produce resistance in the way that widespread spraying does, and so it doesn't have the same danger of becoming self-defeating in the long run (well, probably it would in the extremely long run, as it's hard to imagine it would have no effect at all in breeding for tolerance, but as I understand it in observed timescales any effect of residential use on mosquito tolerance levels has been too small to detect). Douglas Knight June 27, 2015 at 11:13 pm Most places that have wiped out malaria have not wiped out mosquitoes. If it were as simple as carpet-bombing DDT, there would be no more mosquitoes. It is generally more effective to select against malaria than to select against mosquitoes. Putting insecticide in houses selects between mosquitoes. It selects for mosquitoes that do not enter houses and thus do not bite people who are bedridden with malaria. Thus it selects for mosquitoes that do not carry malaria. Also, it puts selection pressure on malaria to be less virulent, to not make people bedridden, the favorite topic of Paul Ewald. Actually wiping out malaria is not something you can contribute to marginally. It is all or nothing. EAs don't have enough money to consider this option. Bill Gates does and I'm pretty sure that he's considered it and does not find it viable. I don't mean not cost-effective, but simply impossible. (Maybe wiping out malaria in a peninsula like Thailand is possible. But Africa is all or nothing.) Carpet bombing with DDT is not a very accurate description of what America did. Mainly it was draining swamps. Environmental DDT played a role, as did administering drugs simultaneously to everyone in a region. Africa did try to wipe out malaria 1950-1970. It failed for a lot of reasons. Mainly the problem was that there were too many places for mosquitoes to hide. This is true on many scales. For one, there are too many individual swamps to drain. Also, they connect regions, making it hard to work one region at a time, which is what was done in America. But also, there was a failure of coordination, to get enough people to take anti-malaria drugs simultaneously. That coordination is not something that money can buy. s3 June 24, 2015 at 10:49 pm without reading the article my guess is he disposed of the weapon somehow. Tied it to a balloon maybe? Jaskologist June 24, 2015 at 10:36 pm The failure of studies to replicate bothers me a lot, but I get even more hung up on the meta issue of a study which shows that studies are wrong. If a study find that 66% of studies are wrong, should I then conclude that there is only a 34% chance that most studies are wrong? OK, there's probably still reason to ridicule (or worse) Rachel Dolezal specifically. 
And maybe we're even going to ridicule the trans-black in general, and maybe we should. But first, what about the trans-white? And for that matter, I think most of the hostility against transgendered people is focused on trans-women, with trans-men getting off relatively lightly. I don't recall, e.g., Nora Vincent getting any serious hostility when she came out towards the end of her year as a man. Historically, of course, "trans-whites" and "trans-men" were common long before we had the "trans-" prefix to label them; mostly we just referred to them as "passing". Assumed that the motive was to escape the social, economic, and legal disadvantage of the traditionally inferior race or gender, and maybe disapproved of the practice but didn't see it as ridiculous in concept. So when we do now ridicule trans-women or especially trans-blacks, is that because: A – It is inherently ridiculous, a sign of mental illness even, to want to be seen as a member of a disadvantaged class, for approximately the same reason it is an understandable ambition to want to be seen as a member of a privileged class, or B – Having accepted that our society has treated women and blacks poorly and maybe owes it to them to make up for some of the damage, trans-women and trans-blacks are seen as signaling "Ha, Ha, Fooled You! We actually have it pretty good, especially now that you all are giving us special privileges out of your guilt!", or C – Trans-anythingism is seen as an inherently ridiculous denial of objective reality, and we'd generally laugh at them all the way we do the trans-Napoleons in the local asylum, but people who can make a credible attempt at passing for white or male have an obviously rational and somewhat sympathetic motive so we cut them a bit more slack? Black people who try to pass for white have to say "I am white" as part of the process, but they don't actually consider themselves to be white; when they call themselves white they are lying. This makes them not an analogy to transwomen. John Schilling June 24, 2015 at 11:18 pm I am talking about the way the rest of society reacts to trans-black/white/men/women/small-fuzzy-green-things-from-outer-space. In this context, it does not matter that the antebellum quadroon passing for white is flat-out lying whereas Ms. Dolezal may believe her claims. The rest of the world, mostly doesn't care about such subtle distinctions. Dolezal is written off as a liar, and ridiculed. The quadroon, if caught, is written off as a liar but mostly gets a "nice try, now get back where you belong". Same dynamic, I think but with less certainty, w/re transmen and transwomen. There's a double standard in both cases, and I'd like to understand it better. "The quadroon is lying" Could a quadroon pass? My sense was that was reserved more for octaroons. And they were lying in a legal sense. But in a real sense? Someone with 7 white great grandparents and one black one? Until we start treating black in the same way we would treat Italian and/or French in that same scenario, I think we have an issue. Lying in the only sense that really matters – they know what "black" means to the person they are talking to, they believe themselves to be "black" by that definition, and they say that they are "not-black" with the specific intention of deceiving that person for private gain. That you can phrase a statement in such a way that it is technically correct IFF interpreted using locally-nonstandard definitions of the words in the statement, doesn't make it any less a lie. 
I was going to start making some arguments about whether someone who passes as "white" will necessarily know that they had a black ancestor, but I realized it doesn't really matter to your point. The actual lying/not lying is completely beside the point. It's the assumed motivated reasoning that you are trying to point at. So imagine two people in the old South, [A] who thinks they are an octoroon (but are not), and [B], who thinks they are "white" by the definition of the time (but are actually an octoroon). Neither of these people are liars; they tell people what they believe to be true. Once the actual truth is found out, [B] will be seen as a liar, but [A] will be presumed to have been simply mistaken, as there is no conceivable motivated reasoning that people of the time could fathom for lying when in [A]'s position. If we then find evidence that [A] is, in fact, lying, they will be seen as depraved in some manner, because there is no motivated reasoning that is seen as sufficient. Now, I would also take issue with your assertion that [B] would suffer no extra repercussions for having passed if they, while passing, did things that would have impugned the [presumed] honor of associated whites. For instance, if a black man, while passing, dated a white woman, and then was found out, a lynching might soon follow. If a black man was caught dating a white woman in the antebellum south, he'd have been lynched whether he was "passing" or not – there'd have been no extra penalty for passing. Pragmatically, I expect his odds would have improved in the passing case, as the white woman's reputation could still have been saved by quietly running the black man out of town. But, more generally, yes – it's the reaction of the community I was getting at. I think the odds of an octoroon not knowing full well where they stood in the antebellum South are pretty small, but that wouldn't matter in the case of discovery. If recognized as black-passing-for-white, it's "get back in your place", and as you note punishment for any specific violations of the social order while passing. White-passing-for-black is either "that can't be right" or "WTF is the matter with you, you freak!" Anthony June 26, 2015 at 4:01 pm A friend of mine has a black father. (In pictures, he's *very* black, though that may partly be the film. His features all read as black, though.) I did not realize she was part black until she posted pictures with her father on facebook. Some of her physical features could be interpreted as indicating black ancestry, but none unambiguously. Her sister, on the other hand, looks mestizo. I don't think Nora is equivalent. There's a difference between spending time walking in someone else's shoes and just taking their shoes. I would jump at the chance to spend a day as a woman, or even as a random non-sentient animal, but I'm not some kind of trans-otherkin. DrBeat June 24, 2015 at 10:47 pm People who don't believe trans is "a thing" will believe transwomen are men and transmen are women, and as such, they will be far far far far far more hostile and violent and hateful towards the ones they see as men, because that is what happens with literally every gendered distinction. People who pretended they were white were doing so to prove they were worthy and had merit when they did not have a chance to do so; this is a laudable goal. "Trans-blacks" are lying in order to get attention and victimhood, to have other people cater their behavior to their needs. This is not a laudable goal.
Proving that you are worthy is not considered a laudable goal if you are, in fact, not worthy. Yet people who genuinely considered blacks and/or women to be categorically unworthy did not generally treat "transmen" and "transwhites" with the same degree of contemptuous ridicule that e.g. Dolezal is getting for her transblackness. But we don't need "laudable" for your framing to work. "Rational" and "sympathetic" may be enough. However much we may disapprove of them, we don't generally ridicule bank robbers because, well, that is where the money is, and who ever has enough money? Because by the time others found out about a 'transwhite' person, they'd got along well enough to prove they were capable of doing things and thus "one of the good ones". They also tended not to draw lots of attention to themselves; not true for a 'transblack' faking hate crimes against herself to get attention. "Transwhites" wanted to be contributing members of society, "transblacks" want to be victims and have other people act on their behalf. We consider wanting to be a contributing member of society laudable, and wanting to be a waited-on victim not to be laudable. You have an enviably optimistic view of human behavior. suntzuanime June 24, 2015 at 11:37 pm I think the idea is that blacks are not necessarily non-meritorious, but rather that racist society did not permit them the chance to demonstrate their merit. So blacks masquerading as whites could legitimately demonstrate their worth, which is laudable. They were, in this model, not trying to be deemed worthy just on account of their whiteness; they were trying to get their foot in the door, after which point they would rely on their actual worthiness. Different people have different concerns; there's not a single answer, and setting those scenarios up as mutually exclusive or exhaustive would be misleading. That said, there is one asymmetry that does generalize, and from which you can derive a lot of the relevant dynamics: When you have two groups, segregation between them protects the positions of the bottom of the "better" group, but the top of the "lesser" one. Integrating the male sports leagues endangers the rankings only of the lowest male players, while opening the female leagues to transwomen endangers their champions. Allowing black labor into white unions displaces only the least productive white workers, while opening black neighborhoods to white businesses displaced many black entrepreneurs. Permitting "vaguely ethnic" actors like Rashida Jones and Ben Kingsley to audition for whatever they please lets them capture only a tiny percentage of all white roles, but a statistically relevant number of those written for minorities. In short, when a member of a low-status group tries to pass as a member of a high-status group, it's so they can compete on their merits. When a member of a high-status group tries to pass as a member of a low-status group, it's so they don't have to. The latter are far more likely to be malefactors. Jiro June 25, 2015 at 7:01 am I don't think that's quite right. Consider people who pretend to be military veterans (or to receive military honors). They are hated because they didn't sacrifice like actual military veterans, but that's not really low status in the normal sense. In the case of Dolezal, Blues don't like her because she claimed high status, just like the fake military veterans.
If you think of reds as assigning low status to black people, you could argue that that fits your scheme, but I would say that the reds hate Dolezal because they consider her to demonstrate that blues are lying about the status of black people being low. Do you really think Blues regard black people as higher status? I think what you are actually saying is that Blues afford black people a conscious status boost, in the attempt to override a predicted unconscious status discount. But that's putting words in your mouth. DrBeat June 25, 2015 at 9:19 am If by "regard as higher status" you mean "believes they have more power in larger society", then no. If by "regards as higher status" you mean "gives them more power within Blue areas, allows them to do more and get away with more and devotes more attention and resources to them", the answer's a pretty obvious yes. I find your position lacking in empirical evidence. Can you cite some way in which Blue Tribe members actually treat black people as higher status than white? Do most Blue Tribe white members consciously seek to live in areas of high black concentration? Do Blue Tribe members hire blacks over and above their general population percentage? Do Blue Tribe members primarily want to hang out in gatherings made up mostly of blacks? Do they actually? Do they feel excluded because they cannot? Do Blue Tribe members feel that if only they were black, they could have accomplished [x]? Or do Blue Tribe members act in a manner that is consistent with regarding black association as enhancing their own status? This is different than regarding black tribe members as higher status, yes? Whatever happened to Anonymous June 25, 2015 at 1:19 pm >Do most Blue Tribe white members consciously seek to live in areas of high black concentration? >Do Blue Tribe members hire blacks over and above their general population percentage? >Do Blue Tribe members primarily want to hang out in gatherings made up mostly of blacks? Do they actually? Do they feel excluded because they cannot? I don't know if they actually do or think those, but I'm sure many of them claim they do. >Do Blue Tribe members feel that if only they were black, they could have accomplished [x]? Achieve a high position in the NAACP? Would seem so. More seriously though, is it the usage of the term "status" that you object to? Do you not agree that, in blue circles (Like, say, academia) blacks are a protected class, and as such one could obtain benefits from passing as one? Do you not agree that, in blue circles (Like, say, academia) blacks are a protected class, and as such one could obtain benefits from passing as one? "High-status" and "protected class" do not necessarily coincide. For a hopefully uncontroversial example (giggle, snort), consider the situation of women in Victorian-era England or the US. That said, status is a pretty broad brush, and you can paint just about anything as high- or low-status depending on what you choose to emphasize. We'd probably be better off talking about the specific benefits and disadvantages of being black in Blue circles: the first one that comes to mind is the privilege of being listened to more carefully on certain topics. @Whatever happened to Anonymous: Well, the question is what Blue Tribe actually thinks, not what they say they think. What you don't see, behaviorally, is Blue Tribe members moving to black neighborhoods where rent is cheap in great numbers.
If I could move to a high-status neighborhood where rent was cheap, would I not prefer it over a lower-status neighborhood where rents are higher? Instead we see gentrification, which is a different process altogether. Only after the character of the neighborhood has been changed by gentrification is the neighborhood seen as high-status. Where Blue Tribe has a great deal of control, blacks are a protected group. Children, the elderly, and any of those who might be classified as infirm are protected groups in, well, pretty much every tribe, but certainly in Red Tribe. Why does Red Tribe protect those groups? Why do they give them more benefits? It might be interesting to look at the Red Tribe – Blue Tribe divide on women to understand how this dynamic plays out in reality. Women are protected in both tribes, for the same proximate reason, but different ultimate ones. Instead we see gentrification, which is a different process altogether. Is it? I spent several years living in Oakland, and while my neighborhood wasn't one of the affected ones I think I was still close enough to get a good view on the gentrification process. What I saw was a progression that started when white urban hipsters discovered a semi-crunchy (but usually not truly bad) neighborhood with cheap rent, good food, and access to the kind of culture they like. They move in as properties become vacant through normal turnover, and rent slowly creeps up. After a few years, businesses catering directly to the hipster demographic spring up in the area, and after a few more, new housing starts getting built. At that point it's mainstream enough that the hipsters start losing interest (although they may hang on for a while if enough art galleries and drip coffee joints move in), but it's developed a reputation as a cool part of town and there's more than enough demand among sub-hipsters to replace them. Wait forty years and you have the Haight. @Nornagest: Did hipsters move in because the neighborhood was high-status in Blue-Tribe generally? If so, why weren't those properties already occupied by the much broader mass of Blue-Tribe that is wealthier than hipsters? For every neighborhood that gentrifies, there are so many others that do not. If gentrification was really a matter of broad, blue-tribe status seeking, then segregation would be a thing of the past. Blacks would generally have trouble keeping their neighborhood as concentrated black, as they generally lack the financial resources to outbid the Blue Tribe whites who would presumably be clamoring to live in their high status neighborhood. If, as a Blue Tribe parent of two children, I said I was moving to, say, Compton, from my nice suburban subdivision, the reaction from Blue Tribe would not be "So Lucky!" but "So Brave! Good Luck." Did hipsters move in because the neighborhood was high-status in Blue-Tribe generally? Blue Tribe isn't a monolith, and different parts of it have different status criteria. Hipsters moved in because the neighborhood was attractive to the segment of Blue Tribe that most valued urban amenities and a certain flavor of authenticity and least valued stability, and they became a spearhead for the rest. This is an incremental process: you don't get established Blue families moving in right after the hipsters do, but you do get single Blues who want some of the same things hipsters do but would be scared off if the hipsters weren't already there. Then they open up the door for Blues on the new margin, and so forth.
This happened early to the Haight and a few other neighborhoods, but in a broader context it's a fairly recent phenomenon — I want to put the inflection point in the early 2000s, but that might be off by as much as five years in either direction. I don't know how it's going to evolve or what it's going to do to urban segregation. But I do feel fairly safe in saying that the homogeneous white suburbs of the Eighties and Nineties are increasingly low-status in Blue circles. "Blue Tribe isn't a monolith" Wasn't the debate started by an assertion about black status in Blue Tribe monolithically? You seem to be turning the burden of proof upside down. I'm certainly willing to say that some Blue Tribe members view living in majority ethnic neighborhoods as obtaining a status for themselves, that living there has a certain cachet. This is of course different than viewing the residents as high-status, but I don't think I particularly need to debate that assertion at the moment. "But I do feel fairly safe in saying that homogeneous white suburbs are increasingly low-status in Blue circles." Living in a heterogeneous neighborhood is absolutely desirable in Blue Tribe. I'm absolutely willing to concede that given two neighborhoods that have identical amenities, the heterogeneously inhabited one is higher status. That still doesn't mean that the black residents of that neighborhood are seen, generally, as higher status. Wasn't the debate started by an assertion about black status in Blue Tribe monolithically? You seem to be turning the burden of proof upside down. If you'll kindly scroll up, you'll find me arguing against that, too. But in this narrow context I don't particularly care what the debate started as, and I'm not interested in being mistaken for an opposing soldier in whatever war you think you're fighting. I'm arguing that gentrification provides a mechanism for Blues moving into black (and other poor) neighborhoods en masse, one that starts with a certain shade of Blue moving in in small quantities. It doesn't demonstrate that those neighborhoods are uniformly high status, because they aren't, but it wouldn't work if they didn't have high-status qualities among Blues. >Why does Red Tribe protect those groups? Why do they give them more benefits? Because they are seen as both vulnerable and valuable. I'd assume it's a similar rationale for blacks among blues. I'm not trying to question if this status as protected group is deserved or not. But I'm still not sure where you're going with all of this. If this black… privilege (ugh) exists, for whatever reason it may, then it stands to reason that, if someone were able to pass as black, they would be able to claim those benefits. Now, I'm not sure this was the case: while the benefits are non-trivial, how hard it must've been to pull off and how easy it would be for it to fall apart make it weird to me that someone would do this without other involved reasons. However, it seems really likely that it played into it. Hell, going back to your example of children and the elderly, people pretend to be younger or older all the time to claim the benefits conferred by the protected status. @Mark: Are you contending that those neighborhoods had an influx of residents BECAUSE they were black? If so, why does the influx continue long after they have ceased being majority black? Why doesn't the trend slow as soon as some certain number of black residents are displaced? And are the blacks being followed to their new neighborhoods, broadly speaking?
Or do they just concentrate somewhere else? "and I'm not interested in being mistaken for an opposing soldier in whatever war you think you're fighting." Sorry, I'm not sure what I said to make you think I was being so belligerent. I wasn't intending that to be my tone. "It doesn't demonstrate that those neighborhoods are uniformly high status, because they aren't, but it wouldn't work if they didn't have high-status qualities among Blues." I completely agree with this statement. Perhaps we are in vehement agreement. Again, my contention is that blacks are given a status boost by Blue Tribe, rather than being seen, generically, as higher status. This is the original contention by Jiro that I was speaking to. "it stands to reason that, if someone were able to pass as black, they would be able to claim those benefits." To the extent that those benefits are better than their current situation, yes. I agree with this. Dolezal clearly derived some benefit from claiming that she was black. She may have even received a status benefit (in Blue Tribe) over a poor white evangelical member of Red Tribe. "make it weird to me that someone would do this without other involved reasons." Yeah. I completely agree with this. "Hell, going back to your example of children and the elderly, people pretend to be younger or older all the time to claim the benefits conferred by the protected status." Sure. But broadly speaking, if a child wants a status benefit, they will lie and increase their age. If someone is older (than 29, say) and wants a status benefit, they will lie and decrease their age. To the extent that they lie the other way, they don't want a status benefit, they want a more tangible benefit. If someone is older (than 29, say) and wants a status benefit, they will lie and decrease their age. To the extent that they lie the other way, they don't want a status benefit, they want a more tangible benefit. I think this is a great example of a situation where status is too broad a concept to make useful predictions. Younger people are seen as cheaper, sexier, more hip, more intellectually agile. Older people are seen as more experienced, more skilled, more cautious, probably more expensive. Depending on the situation you might be inclined to lie in either direction for a situational status boost: if you want to get hired as a mid-level manager at IBM, the optimal age is very different than if you want to get hired as an entry-level Web programmer at MoveFastAndBreakThings.com. And I'd expect to see a hilarious peak in the OKCupid data at age 29. Sure, I buy that. But I don't think people are going to go so far as to put themselves into the protected class, right? Is a 55 year old going to try and pass as a 65 year old for a status bump? They might try and do it so they could claim social security or a retirement benefit, but is there some status bump that you can think of being generally conferred on them? More broadly, the protected classes get benefits precisely because they are generally seen as having to combat some disadvantages. If you fake your way into that protected class, you will inherit the presumption of those disadvantages (regardless of whether you actually have them). I suppose you might benefit from being seen as an exceptional member of the class, but that would probably only confer status within the class. So a 55 year old faking being 65 so they would have better luck dating 65 year olds seems plausible, but it doesn't seem plausible they would do it so they would have better luck with 55 year olds.
@HBC — No, I can't think of too many situations where you'd want to claim senior status per se, outside of benefits fraud or one of its private-sector relatives. There are social peaks at 35 and 45 and 55, but 65 is pretty much universally seen as over the hill unless you're trying to be elected Pope. But in the other direction, I can easily think of situations where adults would want to pass as minors, or minors as adults, for purely social reasons. And that's just as much a protected class, isn't it? The progressive stack and its ideological descendants? The way they get extremely upset and demand action when a black person is harmed and nothing else is known, the way people in other areas get extremely upset and demand action when a high-status person is harmed and nothing else is known? Constantly excusing their mistakes and blaming all wrongdoing forever on whiteness? Throwing shitfits when black people are punished for wrongdoing? Making people afraid to disagree with black people on any subject? "Do people like moving into their neighborhoods" isn't the only way we measure status — and Blue Tribers sure as fuck love to surround themselves with black people, use lack of black-people-surrounding as a cudgel against competitors and use black associations as a defense against same. The only reason I can think of for someone who is 25 to fake being 17 is the same reason someone who is 55 might fake being 65, so that they could have better luck as they were striking out in their age appropriate cohort. I'm not sure what other reasons you are thinking of? And I definitely can't see anyone who is fully post-pubescent (who isn't suffering some kind of trauma) trying to fake being pre-pubescent. Yes, there is the awkward period for some early-pubescent girls, but I don't think that really counts. >And I definitely can't see anyone who is fully post-pubescent (who isn't suffering some kind of trauma) trying to fake being pre-pubescent. Kid prices, discounts, being able to buy a happy meal for the toy. Those are examples of tangible benefits, rather than a status boost. I think everyone acknowledges that trying to enter a protected class for the tangible benefits it brings does happen. Jiro June 25, 2015 at 3:45 pm I think that's an excessively fine distinction. I brought it up in response to Alraune, who claimed that people don't like whites passing as blacks because blacks are low status. It doesn't matter if I say "actually, blues don't like it because blacks are high status" or "actually, blues don't like it because they give blacks status boosts"; either version counters Alraune's statement. Also, re: Victorian women, Victorian protection of women was a kind of paternalistic protection that stated that things were good for women, but ignored women's ability to decide for themselves. I don't think this plays a big role in modern-day protected classes. (Of course, people still do ignore members of protected classes who disagree with them, but they are generally seen as enemies, not genuinely thought of as targets for protection.) Upthread @John Schilling and I talked about why, if someone faked being in a low status cohort, this would be generally seen as punishable behavior. I realize that wasn't your argument, but I think there is some reason to believe this type of reasoning is at work. "I think that's an excessively fine distinction." Mmmm. I don't think it is, just from the standpoint of who gets angry and why. 
If I am attempting to join a new group consisting of avid golfers, one can reasonably say that low-handicap golfers have higher status and high-handicap golfers have lower status. If I pretend to be a low-handicap golfer, and then I am found out, I will be seen as sad and pathetic, but ultimately the only people who will be angry about it are people whose place in the status order I threatened, those who generally feel insecure about their status in the group. But if I fake being a high-handicap golfer, no one will be angry unless we have some sort of handicap tournament. If I take no material advantage of my status I will just be seen as humble. But if I take 18 strokes instead of 2 or 3, EVERYONE will be mad at me. Also, re: Victorian women, Victorian protection of women was a kind of paternalistic protection that stated that things were good for women, but ignored women's ability to decide for themselves. This is not true, and people believe it because contemporary feminists do not want the sort of protection Victorian women had, and so conclude that Victorian women did not want that either, and that men were evil and threatening for inflicting it upon them. Women have always been the only people whose lives have inherent worth. Women have always been the people who others care about making happy. The fact that the things they wanted to make them happy were not the same things that people today would want to make them happy does not mean their wants were ignored. Jiro, HeelBearCub, you're both hairsplitting and missing the point. Dolezal neither "lied about being black –which is secretly actually a marker of high status rather than low– and thereby gained status", nor "lied about being black –which is given artificially elevated status by the Blue tribe– and thereby gained status." She lied about being black, which allowed her to steal a specific racially segregated post in the blue tribe's priest-class. The JOB is what gave her higher status. That was what I was looking at with my high-handicap golfer example. If Dolezal was someone whose identity as black seemed only extraneously tied to her job, she could say "Look, I've felt black ever since I had to protect my black adopted brothers and sisters from my abusive parents", and we would shrug while she told her story on Dr. Phil. But because she appears to have had her livelihood directly tied to her racial status, it seems more like theft. And unlike a black person trying to pass as white so they could simply do ANY non-menial job, this feels like a different kind of motivated reasoning. Edit: and I don't think the priest class has too much to do with it. If she was selling herself as a Soul Food chef, she would seem equally as inauthentic. The NAACP thing makes it much more interesting to the tribes, which just raises its media profile. I don't think the priest class has too much to do with it. If she was selling herself as a Soul Food chef, she would seem equally as inauthentic. The NAACP thing makes it much more interesting to the tribes, which just raises its media profile. You're missing some important implications then. A soul food chef wouldn't be the same situation (also, stop giving Paula Deen ideas), the priestly status is highly significant to how it's played out. At the local level (As I've mentioned, my emotional pitch on this is as high as it is because I'm personal friends with a couple of her students.)
this was taken as Betrayal rather than just misleading marketing or a news-of-the-weird segment ("Local Sushi Chef Actually Squinty Italian!") because this was someone trusted to shape truth, mentor new agents, and set strategy. If you're pursuing a tribe war angle, then, well, the most significant aspect is that Dolezal was by all accounts actually pretty good at her jobs. Which means you don't actually need the "unique lived insights" of blackwomanhood to teach people about it, you can just preach the same lines as everyone else. Which means intersectionality is bullshit. Which means your increasingly budget-constrained and business-oriented college doesn't need to fund a separate African Studies professor. People are signal boosting this story because it provides some empirical evidence for their world-view. If Dolezal was a nationally famous Soul Food chef who talked about learning her recipes at her momma's elbow, her Food Network job would go away right quick and people who watched the show would be angry at being lied to. Her close associates would feel betrayed and hurt. People and US Weekly would cover it with suitable outrage. And then it would go away. But because it can be used as a cudgel by one side against the other, well now people start fights about it. Maybe that is the point you are getting at? I'm definitely not willing to concede what you are saying about intersectionality. But that seems like a different conversation. I'm definitely not willing to concede what you are saying about intersectionality. But that seems like a different conversation. It is, but I should probably explain myself in slightly more detail than "bullshit" anyway. Intersectionality is a graft program, not an academic field. Its sole output is jobs for extremely non-oppressed people. Intersectionality, as I understand it, is an approach or technique, not a program. I'm not even sure what "program" means as you are using it. John Schilling June 25, 2015 at 7:27 am @Alraune: That's a very good point, and particularly timely given the recent Jenner transformation. And even where, as with black/white, the "inferiority" of the low-status group is purely a social construct, the perceived threat will remain until the perception broadly changes. But Jiro's not wrong either, which puts Dolezal in exactly the wrong spot – everybody has a reason to despise her, even if they aren't the same reasons. Best example of this functioning even with purely social divisions would be the female e-sports leagues that need to cap the number of trans members per participating team. (Unless men are somehow genetically better at Starcraft…) LHN June 25, 2015 at 11:55 pm Assuming they aren't (and I have neither reason to believe they are nor much specific knowledge of e-sports), what's the reason for having a gender-segregated league? Held in Escrow June 26, 2015 at 12:05 am Two big reasons. First off, sponsors love having attractive female players to front for their goods; it's way easier to sell product to the nerdy demographics if you have someone who actually is known for playing the game rather than just using a model. Secondly, there just aren't many, if any, ciswomen at a truly competitive level.
Part of this is probably because e-sports aren't something that girls get into and thus develop the skills to be pro at, so the idea is that by having female leagues where they can have people to look up to more girls will get into the game and you'll eventually end up with more female pros (as well as more players of course making the company more money). The only competitive video game I can think of which has a cisfemale top level player is Soul Calibur, but I may be out of date on that Because there's an audience to sustain it? I don't know. Edit: HIE is probably right though. I'd be surprised to find that men are "genetically better at Starcraft". I wouldn't be at all surprised to find that men's larger genetic variability (edit: more accurately, phenotypic variability; a lot of the difference is probably epigenetic) tends to place more of them at the far right of the curve on talents that are relevant to Starcraft. Men are over-represented at the top of almost any competitive endeavor. They're probably over-represented at the bottom, too, but no one bothers to track that. That's certainly a mechanism that shows up in a lot of areas, but I don't think it's going to be a primary one here. Higher male variability only becomes a dominant factor when there are enough would-be participants in a field that it can reliably fill its teams with people that have interest, dedication, training, AND freaky mutant powers. The skill level to participate in e-sports has risen over time, but for the moment, and moreso in the past, participation has been primarily based on willingness to dedicate your life to a bizarre and risky new career (which, yes, also skews the demographic towards men, but for different reasons), not by having +3SD reaction time scores. The most relevant mechanisms are likely that hobby participation is extremely prone to preference cascades in which the minority gender leaves, and that girls (whether naturally or due to socialization) are less interested in ordinal status rankings, therefore less interested in becoming Best Of My Friends At Video Games at age 10, therefore much less likely to be within reach of Best In The World At Video Games come age 20. I'd be surprised to find that men are "genetically better at Starcraft". I haven't played Starcraft in almost 20 years, but doesn't a lot of it come down to reaction times? Men consistently have been measured to have better reaction times than women. First google hit: http://www.iosrphr.org/papers/v2i3/R023452454.pdf (Age matters, too. Those women would probably be much better than me.) It's mostly cultural–few women want to get that into video games–but don't men usually score better at spatial tasks? And, I just wonder–is there really any social reason to want to get MORE women into video games? Shouldn't we want to get FEWER men into them? I mean, the fewer people who spend their time on those things the better, IMHO…and I used to play quite a bit… You gain no useful skills playing Starcraft… stillnotking June 26, 2015 at 12:25 pm Shouldn't we want to get FEWER men into [video games]? What's this "we" shit? 🙂 Some people enjoy playing video games, and/or watching them played competitively. Unless you take a hard-line stance that anything done for pure entertainment should be eliminated from one's life, the polite attitude is de gustibus non est disputandum. 
Re: gender imbalances and possible cultural reasons for them, we have the same old chicken-and-egg problem: are women less likely to be top Starcraft pros because fewer of them make the attempt, or vice versa? Culturally "neutral" competitive games, like Scrabble — at which my mom, girlfriend, and grandmother are all better than me — show the same pattern as Starcraft. Every single one of the top 10 Scrabble players in the world is male. @ Edward Scizorhands Men consistently have been measured to have better reaction times than women. That would make Just-So sense. A caveman risks a few bruises if he falls down a rocky slope in reacting to an imaginary tiger. A cavewoman with a baby wants to check if there's a real tiger, and if so, which escape route would be safest for the baby. I don't despise her, she sets a great precedent! For all the flak she's taken, she got a very sympathetic hearing in the national media (which she squandered, but there it is). NPR ran very sensitive features on her for a week. And I, for one, look forward to the day when we can all claim to be whichever race we like, because that is one step closer to the banishment of race as an issue from our national discourse. If race is purely a social construct, then having rules based on it is useless! Affirmative action grinds to a halt! Alternate title to Cartel Story: CIA Conquest of Mexico Complete. Already, the Sinaloa cartel is the world's largest, and Guzman [the leader] last year made Forbes magazine's list of the world's top billionaires. Nicholas June 24, 2015 at 5:59 pm On the topic of drug cartels, my understanding is that before 2011, the United States government provided material aid to the cartels and actively stymied the Mexican government's attempts to deal with the problem. Now the US government does this much less, and the newly freed Mexican government can respond as they have desired against a weakened cartel force. That sounds very much like conspiracy-theory nonsense, of the sort where I'd really like some evidence beyond "my understanding is". I'd also like to see some evidence that the cartels are actually weakened. Yes, there are now two major cartels where there were once many. The usual reason "many cartels" becomes "two cartels" (and eventually "the cartel") is that the whole point of a cartel is to establish a monopoly. As cartels become more powerful in a particular market, they necessarily become less numerous, with the weaker cartels being assimilated or destroyed by the stronger. It isn't necessarily the case that fewer cartels = stronger cartels, but it's usually the way to bet and I'm not seeing anything in Mexico to suggest otherwise. If the government also destroys cartels, that's fine. Sinaloa thanks the Mexican government for breaking its competitors up into more easily digestible chunks, and the current generation of Sinaloa leaders are fine with some of the last generation languishing in prison. Brad (the other one) June 25, 2015 at 9:09 am While I would like to see support for Nicholas's claim, I wouldn't put it past the US government to try such a thing. Don't they have a track record of supporting dictators, contras, etc? Doesn't anyone remember Operation Northwoods? While I might want A: evidence and B: a motive before supporting Nick's claim, to just automatically say "nun-uh, conspiracy theory" is silly.
So you can randomly pick any bad thing that is caused by any vaguely political group of people, and without evidence or plausible motive say "I think that the United States Government is secretly behind that", and this isn't silly? Because the United States Government has done some bad things, has supported a minority of the world's dictators. Not buying it. The one thing that almost all conspiracy theories in the modern world have in common is the assertion that the rulers of the United States of America are secretly behind whatever unpleasantness the conspiracy theorist is peeved about today, without supporting evidence or a plausible motive. And pretty much everybody who does that sort of thing either is a conspiracy theorist or is being sloppy and could come up with the evidence if they cared. For example, there is some actual evidence regarding Operation Northwoods, which can be found by a quick google or just hitting Wikipedia. So, not at all silly to point out that this sounds like a conspiracy theory and that the claim needs to be backed up by evidence or dismissed. Alright, so the first example of the US government's interference in how the Mexican government conducts their drug war was when, in the second term of the Bush administration, President Felipe Calderón had a bill on his desk to decriminalize certain drugs and create a supply line of those drugs that didn't go through cartel sources. After steady pressure from the Executive Branch, Felipe Calderón reversed his position and vetoed the bill, citing strained foreign relations. Fast and Furious, the US program to allow the sale of firearms to cartels and then not track what they did with those guns, was a government program to sell guns to cartels, which is about as material as aid can get. It's not per se a conspiracy; I don't believe there's a grand plan to destabilize Mexico. But the actions of the US government made the situation worse, and the US government only reversed course during the most recent Mexican presidential administration. In a more general sense: for every US administration, it has eventually been found or disclosed that a conspiracy for personal profit or an episode of inexcusable negligence occurred during that administration. Thus the prior that the government may be, say, bugging the Watergate Hotel, or selling Iran guns so that they can fund the Contras, or antagonizing the Empire of Japan, or planning to assassinate FDR and start a Fascist coup, or overthrowing the government of Iran, should be very high absent dissenting evidence. stargirl June 24, 2015 at 5:18 pm I personally think of Rachel Dolezal as Black. I think people who are Trans-racial should be given support. I would validate their self-identified race. And I would hope that they were given access to whatever medical technologies could improve their transition. Rachel Dolezal, however, really did terrible things. She faked hate crimes. I am not even sure what a sufficient punishment for faking hate crimes is, but it needs to be severe. She should be put in jail to deter people from faking hate crimes in the future. However I do think she should be able to transition further in jail if she wishes. And even if she was incarcerated I would wish people would treat her as Black. Me, too. The traditional test of blackness in the U.S. is the "one drop of blood" rule. Under Louisiana law, for example, your birth certificate had to state your blackness if you had as little as 1/64 black ancestry.
In other words, if one of your great-great-great-great-grandparents was from Africa, and all the others were from Sweden, you'd be classified as some sort of black person. Your shameful "Negro" ancestry contaminated all the rest, and people were NOT kidding when they said things like that. As a result, there was a substantial number of Americans who (1) were identified and lived as black despite looking completely "white", or (2) "passed for white" while keeping their black identity secret. Passing for white was a perilous business; read some accounts of those who did. If your blackness was detected, you could lose your job, your home (if it was in an area restricted to whites). If you married a white person, you were breaking the law in many states; you'd be subject to prosecution. You had to cut yourself off from your more Negroid-appearing relatives: your neighbors would take notice if they came to visit. The major biography of Warren G. Harding (president in 1921-23) is titled The Shadow of Blooming Grove. This "shadow" was the scandalous rumor that he might have had a black ancestor. Had this been proven true, he would have been discredited and driven from office. Surveys show that most Americans still believe in the one-drop-of-blood rule for defining blackness, but we don't enforce it with the same old zeal. To a very large extent, race and ethnicity have become a matter of self-identification. Census data is based on how individuals classify themselves. If self-identification is the standard, nobody has the right to tell Ms. Dolezal she's not black. Steve Johnson June 25, 2015 at 1:20 am That's actually the opposite of what the result was. This is another progressive "would think". The actual result is that white people in the United States have extremely high amounts of European genes. If passing was widespread then you'd see more gene flow from African descended people into the European descended gene pool. You don't. That's kind of insulting, frankly. It's the result you'd logically expect from the data. The actual result is that white people in the United States have extremely high amounts of European genes. If passing was widespread then you'd see more gene flow from African descended people into the European descended gene pool. That doesn't follow at all. Most African-Americans could not plausibly pass. The ones passing had as much as 98% white ancestry. They had light skin and European facial features. And back when this was going on, there were ten times as many white people in this country as black people. If a small subset of the black population passed for white, AND somehow all their genes were "black genes", AND they all got married and had children who identified as white, even then, it would have an infinitesimal impact on the white genome. You probably would have trouble finding it. Moreover, passing was a secret. In some families, it still is. It's not possible that millions of people did this, but we don't really know how many. Some time (decades) ago, I read about a court case which overturned laws in two southern states (MS and AL, I think) which defined "black" as having 1/32 or more sub-Saharan African ancestry. What I remember from then was that meant there was *no* legally enforceable definition of any racial group except American Indians. (And possibly only for specific tribal membership, as opposed to being "American Indian, N.O.S.") Am I remembering things at all correctly? You are correct, and it was Louisiana. 
Indeed, you're more correct than I was, elsewhere in this thread, where I wrote it was based on 1/64 ancestry. See http://www.nytimes.com/1982/09/30/us/suit-on-race-recalls-lines-drawn-under-slavery.html Also http://www.nytimes.com/1983/06/26/weekinreview/the-nation-in-summary-louisiana-drops-racial-fractions.html fire ant June 24, 2015 at 5:09 pm It bounced! It bounced and then it flew off again!! That is so good! (I have just noticed that almost none of my comments go through the 'necessary' gate…) It seems to me that if you're going to fail a gate, the necessary one is likely a good choice. After that video, Youtube automatically cued up some tests of what appears to be a 3-inch naval gun firing full-auto. We live in an age of beauty. I'm not remotely qualified to hold an opinion on time crystals myself. But at first glance, it at least looks as if all the pop science articles about it are from 2012-2013, and predate this phys.org piece, "Physicist proves impossibility of quantum time crystals" "Only future developments (or absence thereof) will allow us to tell whether or not my paper has given a final answer to the question of whether quantum time crystals might exist," Bruno told Phys.org. "For the time being, what I can say is that my paper shows the impossibility of time crystals for all realistic models or mechanisms that have been proposed so far. So, until further developments occur, I consider the topic as closed. "I cannot exclude that someone will come up with an alternative proposal, outside the scope of my no-go theorem," he added. "However, considerations based upon the energy conservation objection suggest that time-crystal behavior, i.e., the nonstationary ground state, is generally impossible." birdboy2000 June 24, 2015 at 3:24 pm Time crystals offer a great explanation of how the computer continues to work in Asimov's The Last Question. Time Crystals are a strategy for BETA CUCKOLD ORBITERS. I'm holding out for BRUTE STRENGTH. The thing that struck me is that pretty much ANY of those colored pills could negate entropy just as well as the brute strength one. Blue and Green, sure. Grey too, probably. I don't see how pink and orange can, and whatever way black and yellow can doesn't seem to be large enough to be relevant. Jordan D. June 24, 2015 at 3:04 pm Re: The Popehat story Ken does good work, and that's a good story about a very important subject. I'm just not sure that the important subject is internet speech laws per se. I mean, as far as I can tell, the gag order and subpoena are already somewhat beyond the scope of the law- or at least the law as you or I would interpret it. Even if we never see restrictive SOPA-style regulation of internet speech, these sorts of scary investigations are a consequence of very basic bans on traditional threats. Not many people are in favor of legalizing all threatening speech, which means that the best we can hope to do is minimize abuse through a review or responsibility-assigning process. …but it seems to me that this is the same sort of thing which has happened in non-virtual history pretty regularly. In fact, it seems like exactly the sort of situation which came up in Watts v. United States in 1969. I might be misunderstanding you. If you mean (as Ken touches upon briefly here) that these kinds of traditional threat laws have problems dealing with the incredible hyperbole common to online comments because of a failure to contextualize well, I'd agree with that.
If you mean that it shows that proposed laws targeting internet comments are dangerous, I'd agree that they are, but not that this case demonstrates it very much. In any event, great links post. We need more trucks being flung off things! Yeah- I was all "MY. GOD." until I read the post and was like….yeah, that's issuing threats against a federal judge. I'm all for less restricted speech, but the "threats" law seems pretty good and well implemented. I would say that this was government over-reaction, but unfortunately it is in America, where some people really do make crazy-sounding, aw c'mon they can't be serious, threats online and then really do go out and shoot people, make bombs, send anthrax through the post…. goodness' sake, in the Big Shouty Gun Control comment thread, someone said that yeah, America is a violent society and always has been and that's why Americans like guns, have guns, and should be permitted to have guns to defend themselves from the other people who have guns. We've just seen an idiot kid shoot nine people dead in a church. Anyone who saw the stuff he posted online would have said as well "This is just silly posturing". But he went out and did it. So yes, it may sound very over-the-top, but it's an unfortunate fact that there's a real (if tiny) chance someone might think that a judge needs to be shot pour encourager les autres and won't content themselves with shooting off their mouth online. Besides which, are we really supposed to just shrug and accept the coarsening of discourse where, when disagreeing with a court judgement, it's perfectly acceptable to refer to the judge as a "cunt" and talk about feeding her into a woodchipper feet-first? I don't think the keyboard warriors really mean to do anything of that sort. I do think my mother would have slapped the faces off them for using that kind of language about anyone. @DeiSeach – "Besides which, are we really supposed to just shrug and accept the coarsening of discourse…" …What's the alternative? I would sacrifice a chicken in your honor if it resulted in even the worst 1% of offenders encountering the back of your hand, much less all the gentlemen who have expressed malicious carnal intentions on my bum or that of my mother. Unfortunately your hand would be worn to a stump before you made it through even a fraction of the queue. The scale is the problem. I mean, we know "talk is cheap", but hamburgers are cheap. Coca-cola is cheap. We're talking about something that requires wiggling your fingers for ten seconds in exchange for a considerable brain-tingle reward. When you're looking at tens or even hundreds of millions of mean things said to every actual act carried out, I'm not sure you even have a correlation any more. Westboro Baptist Church might possibly be a good control group here, as they're a group that pretty much everyone agrees deserve a good punch in the nose, and I think they've even received a few. [EDIT] – …And also, it occurs to me that this vast explosion of meanness is actually probably a relatively recent development, since most of it requires the relative anonymity and mass audience of the internet. And despite this, violence has still been trending downward for the entirety of the internet's existence. I'm actually pretty opposed to the idea of catharsis; my prior is that thinking and talking about something repeatedly encourages you to think or talk about it more in the future, and maybe even act on it. 
I've got to admit that this seems like contrary evidence to that position, though. Maybe the coarsening of discourse lets people vent their aggression in the safest way imaginable, and they are less likely to engage in real-world violence as a result? I would say that this was government over-reaction, but unfortunately it is in America… Not this again. Yeah, I kinda feel like maybe someone should elaborate on some of the more likely factors for WHY America is more violent. The "Americans are uniformly barbarians" meme grows… off-putting. Heh. Funny story- gun stats are the first thing that turned me into a fairly right wing conservative (Grey tribe, but a very red grey) from slightly left of Bernie Sanders (or more precisely, what I think he believes but doesn't say). I was pro-gun control but I liked shooting and hunting once in a while, and I liked cool old guns. So I looked up some gun blogs, saw some stats, said nu-uh! and… Well, honestly, a lot of stats were nu-uh. But some were yah-huh. Basically, to get the "AMERICAN GUNS MURDER MORE PEOPLE THAN HITLER" memes going, you have to present the data carefully. First, you have to include suicides. Which is as may be, and I've seen the same studies on "guns correlate to suicide" as you have- but we have the same suicide rate as the UK. We have a pretty LOW suicide rate for developed nations. Undeveloped nations? No suicide rate to speak of (Seriously- look at the world stats on suicide. They're quite interesting.) (also pointing out that lots of people **PLAN** suicide and thus might, you know. Buy a gun.) Also, you have to conflate "illegal" and "legal" guns because the crime rate among legal gun owners…well, I've never gotten good data on ALL gun owners. Concealed Carry Permit holders have a fantastically low rate of revocation, and only a fraction of that fraction are for violent or weapons offenses. You're much safer around a CCP holder than a cop. Nothing really indicates that legal guns correlate with crime. Astonishingly enough, they correlate with gun accidents, in much the same way owning a pool probably correlates to accidental drownings. And it involves a lot of decontextualized numbers. There are TWO HUNDRED PEOPLE A YEAR killed in mass shootings. Over FIFTY CHILDREN DIED of gunshot wounds…..and then you have to eliminate the ones shot in drivebys. If the question you're ultimately asking is "how likely am I to be shot by someone who isn't already a criminal on multiple other offenses carrying an illegal weapon" the answer is "not very at all." I don't have more than a 30% confidence in this as true, but I've suspected the international media pushes the "Crazy redneck US" line to drive their own population leftward: "We need more security cameras everywhere!" "But I heard someone say that those who exchange their liberty to gain a little temporary safety deserve neither liberty nor safety" "Oh ho! Someone's been listening to the AMERICANS!" *general titter* "but seriously folks, unless we want to get gunned down in malls like the Americans- more cameras! And no knives with points!" Undeveloped nations? No suicide rate to speak of Not trying to challenge the bulk of your post, but you gotta be careful about saying stuff like this. A lot of public health data for developing nations is really deficient, and a lot more of it is massaged up or down for murky international politics reasons that you and I generally aren't privy to.
You can kinda trust numbers on stuff like malaria deaths, where there are NGOs specifically working to get an accurate view on the problem. (The World Bank is a good aggregator of this kind of data, though it's got its own slant.) Stuff like suicide? Not so much. @CJB – Yeah, we went through that with a buncha people last thread. Showed em the stats and everything. Linking actual data seemed to be a good way to kill a thread stone dead. It doesn't seem that anyone actually looked at them, just sidled off when the cognitive dissonance kicked in. The general response seems to be that we were loud/rude/overbearing for not letting the usual canards pass without comment. Probably not worth pushing it. But but but…..muh stats. Still, thanks for the tip. I'll avoid it unless it turns into a thing again. @Nornagest – excellent point! Thanks for pointing that out. (I'd also note that the trend held true for "Shitty places I'd still expect to have decent record keeping"- Jamaica for example….although that's more complicated than you'd expect as well. http://www.jamaicaobserver.com/news/Jamaica-s-low-suicide-rate-no-reason-to-celebrate–warns-counsellor_17513871 suntzuanime June 24, 2015 at 9:48 pm That article is short on relevant information and long on non-sequiturs that try to create a feeling that suicide is a big problem without actually advancing an argument. It's sort of understandable coming from a suicide counselor in a low-suicide nation, he's trying to justify his existence, but I'm not sure why you thought it was worth linking. Heh. Funny story­ gun stats are the first thing that turned me into a fairly right wing conservative Funny thing: I had the opposite experience with homicide stats. But I'd rather not get into a long wrangle over some very minor points of disagreement. Eugene Dawn June 24, 2015 at 10:30 pm @FacelessCraven As someone who without having given the matter much deep thought is in favour of gun control, I thought you had a number of unanswered good points, and made me much more skeptical of my previous unexamined position; I would at least make a much more serious effort to engage with the anti-gun control side before stating an opinion in the future. "Concession is weakness" applies even in SSC comment threads (though hopefully less so than elsewhere), so judging success by the reaction of the participants in a thread can be misleading. I find my mind is much more likely changed by exchanges that I only observe, rather than participate in. Jeremy June 25, 2015 at 1:12 am @CJB: I'm not sure what stats you're looking at about mass shootings rarely using legal weapons. http://www.washingtonpost.com/blogs/wonkblog/wp/2012/12/14/nine-facts-about-guns-and-mass-shootings-in-the-united-states That's really a side-point (I was just curious about the stats you mentioned, so I googled), but I do think you didn't address the obvious question of whether legal gun ownership correlates with illegal gun ownership. I don't know the answer. I suppose based on your professed stances you think that there is little to no effect. Can you offer evidence? TheAncientGeek June 25, 2015 at 2:31 am "Yeah, we went through that with a buncha people last thread. Showed em the stats and everything. Linking actual data seemed to be a good way to kill a thread stone dead. " My memory is that you announced a precomitment to ignore statistics that don't suit you on the basis of a theory about cultural differences. That sort of thing will kill a thread. 
FacelessCraven June 25, 2015 at 2:52 am @TheAncientGeek – "My memory is that you announced a precomitment to ignore statistics that don't suit you on the basis of a theory about cultural differences. That sort of thing will kill a thread." If that is the interpretation you were left with after our numerous exchanges, it is certainly your right to retain it. In any case, our conversation remains available for all to view. If you feel my comments were unreasonable enough to cite to your advantage, I invite you to plunder them verbatim. I do, however, note that your paraphrases often do not sound much like what I remember writing. I do not think that sort of reinterpretation is likely to lead to mutual respect and understanding. You also don't seem keen on answering questions. I still don't know your stance on knife prohibition, for instance. @Jeremy – "I'm not sure what stats you're looking at about mass shootings rarely using legal weapons." A "Mass Murder" is defined by the FBI as one where there are more than four victims in a single event. Spree killings I think are similar, but in different locations. The problem is that there are two distinct patterns that fit these criteria: crazy people shooting random strangers in public, and career criminals killing rival career criminals and whatever bystanders get caught in the crossfire. The crazy people are very rare, usually have no significant prior criminal history, and usually use legally purchased guns. Murder committed by career criminals is the overwhelming majority of murders committed, and they overwhelmingly use illegally purchased guns. The Mother Jones article screened out career-criminal murders to focus on the crazy people, which is why their stats show the majority being legally owned weapons. This makes sense if the modern Amok incidents are what you're worried about. On the other hand, they are a vanishingly small minority of actual murders, and it is arguable that our best strategy might be to ignore them and stop giving crazy people encouragement to aggrandize themselves via random murder. "but I do think you didn't address the obvious question of whether legal gun ownership correlates with illegal gun ownership." Uh, I'm pretty sure it doesn't. If you can legally purchase guns, why acquire them illegally? If you can't legally purchase guns, all your firearm acquisitions are by definition illegal. Maybe I'm misunderstanding the question? http://www.guncite.com/journals/gun_control_katesreal.html …Specifically the Massacres and Law Abiding Gun Owner As Domestic and Acquaintance Murderer sections and their citations might be of assistance for a more in-depth look at the statistics. Deiseach June 25, 2015 at 4:37 am Unfortunately, yes, this again. When our native born scumbag criminals started shooting each other in the streets and carrying out "gangland hits", the first thing most people said was "My God, it's getting like America here!" @Deiseach – "Unfortunately, yes, this again. When our native born scumbag criminals started shooting each other in the streets and carrying out "gangland hits", the first thing most people said was "My God, it's getting like America here!"" …Well, at least in this case they're referring to the violent crime problem that actually does exist. If everyone's going to assume we're kill-crazy maniacs over here, though, I say we run with it. Hockey masks and football pads with spikes sticking out. Cars accessorized with decorative flamethrowers.
Elaborate titles declaring our stature as warlords and delineating the extent of our rule. I could handle being the Ayatollah of Rock n' Rolla. If I inadvertently claimed that mass shootings are done with illegal weapons, I withdraw it. They're about the only crime that IS routinely performed with legal weapons. I read through the thread. It was about half people quoting stats back and forth and about half people going "Well, I know very little about the US, guns, gun laws, violence statistics there, or the available literature on violence in America- but I see a lot of it on TV." "but I do think you didn't address the obvious question of whether legal gun ownership correlates with illegal gun ownership" I'd be very surprised if there wasn't. I expect there to be a correlation between the amount of legal drugs and illegal drugs- that doesn't mean we should forbid law-abiding people to get Oxy. Here's my new gun rights argument: The CDC points out that 2 children a day die of drowning. Dying in swimming pools is the second most common cause of death in children 1-4. Drowning has a disproportionate racial effect. Overall, 3,500 people die of drowning every year, thousands of them in pools. By what right, sir, by what RIGHT do you claim the ownership of a swimming pool? There's no constitutional right to swimming, no SCOTUS ruling permitting pools. Oh sure, the vast majority of pools will never be involved with a drowning- but the hideous toll on our children cannot be ignored. Being a smart person, I'm presuming you see the obvious flaw with that argument- we don't restrict reasonable adults' activities and ownership of things, even dangerous things, based on the risks they pose to children. Instead, there's an expectation that the world has unpadded edges. We don't deal with sharp edges by padding them all- you deal with it by teaching children to be careful around sharp edges so when they're old enough they understand the risks well enough to use pools and guns and cars and prescription medication and bathrooms and any of the other thousands of things that cause lots of death every year. Pretending that guns are a carved-out exclusion to this understanding requires explaining why we should ban (Things that are involved with thousands of deaths A) and not (Things that are involved with thousands of deaths B) Hippos! 3,000 people die of hippo attacks every year. Is the value of having "Large African mammal" WORTH 3,000 lives? That's a 9/11 EVERY. SINGLE. YEAR. Explain to me how the opportunity cost of "having hippos" being "3,000 dead human beings a year" is in any way functionally different from the opportunity costs of "guns" being "thousands of dead people a year"….. And that's ignoring that no one deserves hippo death, while many of those shootings were plenty justified. Oh, I'm sure at least a few of the people killed by hippos were asking for it. When our native born scumbag criminals started shooting each other in the streets and carrying out "gangland hits", the first thing most people said was "My God, it's getting like America here!" Most people like responding to scope insensitivity, fictional evidence, and a basic need for an Other to aggrandize themselves against. Most people, in other words, don't know what they're talking about. I hope for better from these comments.
If you think for some reason that SSC needs to hear what the Irish man on the street believes about the violent proclivities of the American public, please think again; I hear more than enough of that shit on Reddit. Psmith June 28, 2015 at 12:49 am "3,000 people die of hippo attacks every year" But think of the bounty of hippo meat we get in exchange! Just keep the hippos in public swimming pools surrounded by people with guns. It'll work out. Deiseach, in her ignorance, believes the United States has too many guns. Therefore, it is appropriate that every American who ever says "go to hell" be the target of a federal investigation, just to be safe. Because, yes, some of the "threats" being investigated are variations on "go to hell", which no reasonable person would interpret as manifesting a real intent to cause harm or fear. And when Reason's lawyers proposed complying with the subpoena w/re the (remotely) plausibly threatening comments but stripping away the merely angry rhetoric, the Feds responded with "No, we need to go after all of these people". And, Deiseach, the bit about how America is some hyper-violent dystopia where any insult can presage a bloody massacre is not true, not kind, and not necessary. Knock it off already. "Funny thing: I had the opposite experience with homicide stats." I'd be genuinely interested to hear that story, if you feel like telling it. @suntzuanime: Yeah, I dropped the explanatory section while copy/pasting a comment. Essentially the point I wanted to make was one he raised- that a lot of people "commit suicide" through dangerous behavior, and that even in a fairly well-organized society like Jamaica, they still have problems with underreporting. @FacelessCraven: thanks for pointing me to that- great discussion. I left a comment on the ammo control idea, if anyone's interested. People who are positive they have all the answers tend to be extraordinarily intolerant of people who express even slight disagreement. Remember what happened to Scott Aaronson when he said he was 97% on board with feminism? Now, look. I've read all your recent posts. You're a great guy, you're articulate and knowledgeable. I value and take seriously all that you have to say. I look forward to meeting you in person some day. But when it comes to guns, I tend to doubt you're willing to entertain the possibility that someone else may have even a tiny contrary point about anything. @CJB – "I'd be genuinely interested to hear that story, if you feel like telling it." I actually started writing a reply saying the same thing, figured it would be taken as a thinly-veiled attempt to start the argument back up, couldn't think of a way to say it that didn't sound like that, and deleted it. @Larry Kestenbaum – "But when it comes to guns, I tend to doubt you're willing to entertain the possibility that someone else may have even a tiny contrary point about anything." I can't help feeling that's a fairly accurate assessment of my own behavior, at least. I can't really speak for anyone else, but I know I'd be happy to forgo replies/rebuttals entirely to hear your take, and the OT is pretty well empty by this point. For what it's worth, you mentioned in one of these threads that you're trying to be an example of a reasonable liberal to the conservatives around here. I think you're doing a damn good job of it. John, in any other country in the world, I would say "they're just being idiots".
In America, we see that even if they are only a tiny minority, there are still sufficient angry extremists out there who will, after ranting on the Internet, pick up a gun and go shooting, or make their own bombs, or try and poison people by sending suspect packets through the post. It's not necessarily that you have too many guns. It's that you have too many crazy people who apparently have easy access to guns. Thank you for that. It means a lot, seriously. FWIW, I don't see myself as "trying" to be reasonable, rather, I like to think I am reasonable. @Deiseach – I guess what I'd try to point out is that I don't think we have much more of a crazy people problem than you do. We don't get our murder rate from crazy people, we get it from the massive number of violent criminals emerging from the massive permanent underclass created by the collision of slavery, racism, two horrifying attempts at prohibition, and terminally awful social policies. People saying mean things on the internet has no connection to either in any case. @Larry- I suppose I can see what you're getting at here. If your rationale is "will CJB be converted" the answer is "probably not"….although I would've said the same thing before I started playing with stats about guns. If you're worried about a flame war- I precommit both to not replying with anything more than a "thanks!" and to not allowing your opinion to influence my opinion of you. I'm successfully not allowing Deiseach's comments to influence my high opinion of her, and I react far more strongly to perceived anti-Americanism than perceived anti-gunism. You're a smart woman, so you'll see the point here. I went to Northern Ireland one time- had some friends in Newry and stayed with them a few days. And the younger brother of my friend was maybe 16 at the time? And this was in 2009, so the Good Friday Accords were, at best, out of his memory if not lifetime. And these were not political people, even by US standards, let alone Norn' Irn ones. And so we're sitting around BSing, and the younger brother starts going on about the 'RA, which, being at the time young, stupid and a Plastic Paddy Republican, I thought was the coolest shit ever- a REAL Northern Irishman talking about the glorious struggle! Looking back now, I realize that he was doing nothing more than glorifying murder. Hell, at the time, I had FRIENDS in the British army, and so did the brother I was staying with- several of the people we lived with at college were Scots going into or in the army already. So my point here is first, anecdotal, and second is – motes and beams, man. Motes and beams. I am also, based on FC's comment above, not going to make any more posts on guns in this thread- I put up one in the other thread for those that really want to argue with me about it, but he and Larry have good points about not starting up another flame war. I would, however, be interested to read any comments pointing out why my "pools don't kill people, drowning kills people" argument is weak, although I won't respond. That is like saying that violent TV is okay in your country, but since our country has some people who watch violent TV and then go kill people, America must ban violent TV. Or that brushing your teeth is okay in your country, but since our country has some people who brush their teeth and then go kill people, we shouldn't brush our teeth here.
If a behavior is engaged in by a very large portion of the population, the fact that one country's crazy people engage in that behavior as well isn't significant to an attempt to stop them from doing crazy things. People rant on the Internet. Killers are people. Therefore killers rant on the Internet. If your rationale is "will CJB be converted" the answer is "probably not"…. That's not at all what I was talking about. If I could sit for an hour and talk Mitt Romney out of being a Republican, I'd worry about his mental stability. If you suddenly decided that the Second Amendment wasn't all that important, I'd worry about yours. What I mean is, could you ever admit the possibility that your model of the world is only 95% right, instead of one hundred point zero percent right? Do you do nuance? I precommit to both not replying with anything more than a "thanks!" That is NOT what I want. If I'm wrong, or there is some problem with my facts, I want to know about it. Obviously I don't want a flame war either. What would please me the most is constructive and mutually respectful engagement on details. You're a big picture guy; I'm a detail guy. I'm much more interested in getting things to work on the ground than in any kind of consistent overarching philosophy. And damn right, swimming pools kill people. Freakonomics (love that book) has a chapter on how swimming pools are a lot more dangerous than guns. some of the "threats" being investigated are variations on "go to hell", which no reasonable person would interpret as manifesting a real intent to cause harm or fear. Not even "go to hell," which is effectively saying "die and go to hell" (to the point where it can be/is translated as "die"), but "I hope there's a place in hell," which is saying that he hopes that, after the judge dies, there exists a specific type of afterlife for him @Larry: when I read your initial statement, I was simply confused by it. I just don't see the connection between homicide stats and turning left-wing. I'm not saying "defend yourself," just wondering how that happened CJB June 25, 2015 at 12:43 pm @Larry- ok. Let me contextualize my response- I was thinking you were saying something more like "Hey, you're a cool dood but if I challenge your taboos I don't know if you'll freak." Hence precommitting not to freak. I generally operate on the silver rule in internet comments-and this is the least flamey place I've ever been. I am, as I pointed out, not super great at the….clinical sounds negative, but i think it's a good word for the tone here. But I'm also pretty funny, so you know. Tradeoffs. So, circling back to the original point- yes, I'm able to engage with details, and recognize nuance. For example, two things gun nuts aren't good at engaging with: Illegal guns are, far and away, stolen, legal guns. This is typically caused by people who don't properly secure, store, or carry their weapons. There have also been cases with CCP holders getting weapons stolen out of cars when they entered a gun free area. If we want to reduce illegal guns- and we do- then we need to work on Joe Gunowner having a better system for control of their weapon. We could also really use a better background check system- the problem is implementation without either side sabotaging it for political reasons. Background checks are good, useful ideas that reduce guns in criminal hands, but need better handling. So if you're up for it, I'd like to hear about the homicide stats and your nuanced ideas. 
I do tend to get….deontological about my overarching philosophies, admittedly. @careless et al. Playing devils advocate here- First, some of those threats were actual threats that can, reasonably, within my libertarianish principles, be investigated. Second, from an FBI/Enforcement perspective…. Well, to use an extreme example- if one member of a mosque blows someone up, for damn sure the rest are getting looked at. In this case, several people committed a federal crime (lightly prosecuted, admittedly). That they're looking at everyone who posted there, seemingly in concurrence with the threats? That doesn't seem….entirely unreasonable for a police investigation. Illegal guns are, far and away, stolen, legal guns. I thought straw purchases edged out stolen guns, but I could be mistaken. In any event, that's for the contemporary United States. In other countries, illegal guns are police or military guns sold by corrupt officials (e.g. Mexico), or Cold War leftovers thanks to the Great Warsaw Pact Going-Out-Of-Business Sale (e.g. much of Europe), or legacy guns from a time when such were still legal (e.g. modern UK), or manufactured in black-market machine shops (e.g. Brazil, Pakistan), or smuggled from elsewhere (in which case we redo this whole analysis at the point of origin). The United States is going to be stuck with a legacy stockpile of tens of millions of guns and billions of rounds of ammunition, even if we implement absolute prohibition and confiscation tomorrow. But, more generally, black markets work. The last time I checked, the price of a generic black-market handgun in the United States was about $200. Which was also the price in the no-private-handguns-for-anyone United Kingdom, and most of the rest of the world. If the demand exists, it will be met. If there is a local source of supply, it will be tapped out of convenience, but if not, smuggling always works. There is probably nothing you can do to the legal supply of firearms in the United States, that will be more than a small and temporary inconvenience to criminals. If it is more than a small and temporary inconvenience to law-abiding gun owners, we are going to be very suspicious of your motives. The last time I checked, the price of a generic black-market handgun in the United States was about $200. I haven't ever needed to look up handgun prices on the black market, so I could easily be wrong, but I was given to understand that guns are one of the few items that's more valuable on the black than the open market? That doesn't quite jibe with the $200 figure for me. I mean, you can get a handgun for $200, but it's going to be an exceedingly cheap one. Generic used semiautomatics, last I checked, were going for around double that. Well, yes – the black market deals mostly in cheap, crappy handguns, because most criminals only need to threaten to shoot people. But even if you're looking for something specific and/or fancy (and by black-market standards a stock Glock would count as "fancy"), there's still no requirement that it sell for more than the legal, retail price. Aside from straw purchases, none of the usual sources require the black-market supplier to pay retail in the first place. Also, back when ex-WP Makarovs were being legally imported to the United States, the retail price was IIRC in the $120-$150 range. That implies millions of reasonably potent and reliable military handguns being offered for two-digit wholesale prices, by sellers who probably didn't ask too many questions. 
Until that supply is exhausted, it puts a cap on what the black market can charge for generic handguns – $100 plus smuggler's markup, more or less. Thanks to the War on Drugs, we've got lots of competing smugglers with efficient, proven supply and distribution chains. FJ June 25, 2015 at 2:07 pm @John Schilling: I've never quite understood the notion that we can't confiscate all (or something close to all) guns if we really wanted. Yes, millions of law-abiding Americans currently possess guns, and seizing them would in many cases require invasive home searches. But so what? Suspicionless searches of millions of Americans' homes would be expensive, but expensive is not the same as impossible. And remember that gun confiscation already assumes we are going to repeal, redefine, or ignore the Second Amendment. I'm not sure why we couldn't equally well repeal, redefine, or ignore the Fourth Amendment at the same time. Heck, the Fourth Amendment is a much easier task: while the Second Amendment says that the right to bear arms "shall not be infringed," the Fourth Amendment merely forbids "unreasonable" searches and seizures. It would be odd for someone who was pro-gun confiscation to say that the necessary steps for gun confiscation are unreasonable. (I know that anti-gun people are almost universally in favor of a very strong Fourth Amendment. I just don't know why.) @FJ – "I've never quite understood the notion that we can't confiscate all (or something close to all) guns if we really wanted." This is the part where people usually start talking about prying guns from cold dead hands, La Resistance, etc etc, but really that's unnecessary. Canada passed a mandatory gun registration law a few years ago. Canadian gun owners saw this for the prelude to confiscation that it likely was, and simply refused to comply with the law as a unified group. The law was repealed within months. No voting from the rooftops necessary. No one went to jail even. A few more examples: "Some anti-gun crusaders have their own, predictably onerous proposal for avoiding the need for any accommodation. Their plan is for Congress to ban handguns and command their confiscation by a law imposing a mandatory minimum prison sentence of a year on every violator. Much the same proposal was made to the New York State Legislature in 1980. It was tabled when the Prison Commissioner testified that the state prison system would collapse if just 1 percent of the illegal handgun owners in New York City (where ordinary citizens cannot get a permit) were caught, tried, and imprisoned. Likewise, the federal prison system would collapse if it tried to house even a hundredth of one percent of the tens of millions who would not obey a federal handgun ban. Fortunately, violators would not get to prison because the federal court system would collapse under the burden of trying them." "…The problems of terminal systemic overload equally doom the anti-gun program. As noted earlier, the most specific proposal for banning and confiscating all guns (or even just handguns) also depends on mandatory sentencing: a mandatory 1-year term for anyone found with a gun, whether good citizen or felon. Forget about felons, either for gun crimes or crimes of any kind. To seriously enforce this law against the often-fanatic owners of 70 million handguns would far exceed the combined capacity of all courts in the United States, even if they stopped processing all other criminal and civil cases to try only gun cases. 
Less extreme anti-gun proposals are only less unrealistic. Consider the anti-gun claim that a waiting period, during which criminal records were checked, would have prevented John Hinckley from buying the gun with which he shot President Reagan, and would have prevented Patrick Purdy from buying the gun with which he massacred the children in Stockton. Regrettably, that claim is simply false–though it ought to be true! During the 1980 campaign, Hinckley, who was then stalking President Carter, was caught committing the state felony of carrying a concealed handgun and the federal one of trying to take it on an airliner. Neither charge was pressed "in the interest of justice" (i.e., the interest of prosecutors in focusing on their current overload of serious violent crime cases rather than on people who have not-yet-committed such a crime). The promise of gun laws is epitomized by the fact that, if he had been convicted and sentenced under those laws, Hinckley would not have been at liberty to shoot Reagan a year later. The frustration of that promise by systemic overload is epitomized by the fact that, even if a law existed to require a waiting period or a felony conviction check, it would not have prevented Hinckley from buying his new gun. He had no felony conviction record to be checked! The same is true of Purdy: he had been arrested for a succession of felonies over several years, but all had been plea bargained down to misdemeanors."

@Larry: when I read your initial statement, I was simply confused by it. I just don't see the connection between homicide stats and turning left-wing. I'm not saying "defend yourself," just wondering how that happened

I guess I should admit that our situations are not precisely symmetric. I've always been somewhat left of center, and the stats I mentioned didn't change that. What I found, some thirty years ago, was a seeming statistical anomaly that forced me to rethink my views on certain things. I'll expand on that some time. My life project has been to strive for a better understanding of the world, or at least the parts of it that interest me most. Data is (or can be) news from the real world.

Yes, I got that, and appreciate it. I just really didn't want you to walk away without responding at all.

Noted, and very much appreciated.

@FacelessCraven: thanks for the thoughtful response, but I'm not sure it's as fatal as you believe it to be. Sure, we couldn't incarcerate even a tiny fraction of the 70 million gun owners in the U.S. But that wouldn't be necessary: the goal is to confiscate guns, not incarcerate recalcitrant gun owners. In 2011, the NYPD conducted 685,724 frisks. There were about 3.4 million housing units in New York City in 2011. So if the NYPD could search an apartment in the same amount of time it takes them to conduct a Terry frisk, they could search every apartment in the five boroughs in about five years without increasing staffing above 2011 levels. Now, presumably a home search takes more man-hours than a Terry stop. But presumably New York police officers didn't spend their whole shifts constantly frisking, either. And if you're willing to increase staffing for a few years to end the scourge of gun violence, it's entirely plausible that you could search the majority of Americans' homes in less than a decade. It would be a big job and you'd never be entirely finished, but you could make a very serious dent in the gun supply without straining the prisons or the courts.

FJ- if you don't mind me getting personal….where do you live?
Because that's…..not the sort of thing someone who is familiar with the American political landscape would say. One example: https://en.wikipedia.org/wiki/Bundy_standoff

Now, you can think what you like about Cliven Bundy (no relation). I'm fond of the man for reasons I frankly don't care to get into right now. But here is ultimately what went down: Court ordered him to move cattle. He said no. Shit happens. Bundy puts out the word. Days later, heavily armed people from all over the country show up, force a number of federal agencies to back down. To my knowledge, he is still grazing his cattle in the same spot. You can go for hours over the details, but the simple fact is- this nobody rancher in Arizona summoned hundreds of people hundreds of miles- over cattle grazing rights. Which are a Big Deal in the west, but even so.

Now, consider what happens when the first cop shows up to confiscate the first gun. As in, an actual constitutional right that even non-gun people are pretty willing to defend. And when you ask "Were they bluffing?" – there's a very famous picture of a man looking down a rifle at the cops from a sniper perch. People who know what they're doing (and these were people who know what they're doing) will never, ever, ever point a gun at something they don't want destroyed for pretty much the same reason you don't shit in the office coffee pot.

From a time-management perspective, sure. But that requires that gun owners A. have no access to search warrant law, and B. the armed people way into self defense won't defend themselves. The only way to get wide scale disarmament is for everyone to want it at once. Right now, let's say we got 99.7% of the vote, but that .3% is royally pissed? That's over a million armed and really pissed off people. That's a lot of fucking people. And now no one else has guns.

@FJ – "thanks for the thoughtful response, but I'm not sure it's as fatal as you believe it to be. Sure, we couldn't incarcerate even a tiny fraction of the 70 million gun owners in the U.S. But that wouldn't be necessary: the goal is to confiscate guns, not incarcerate recalcitrant gun owners."

…Huh. Gotta admit, I didn't see that coming. Like, you're saying that there's no actual attempt to engage the gun owners beyond what's needed to secure them while you search? Let's ignore the question of time and cost, and say we can afford to do this sort of search. Heck, let's not even make it guns, just anything prohibited. You're saying we abandon the Fourth Amendment in exchange for zeroing out all legal consequences for being caught with contraband, right?

…At first blush, I would think that losing the threat of law would mean MASSIVE indirect resistance. You're turning prohibition into a game; the advantages of breaking the law are still there, but the downsides are almost completely removed. Like, this seems like a strictly less effective version of prohibition. The more I think about it, the more it seems like it actually might be MORE effective, because it removes all the dead weight that makes regular prohibition ineffective. It removes a lot of the hazard from disobedience, but it also removes a great deal of the cost from the enforcers.

@CJB: I live back East, where Cliven Bundy-style resistance is met with aerial bombardment. Not that I'm endorsing that!

@FacelessCraven: Precisely right! Glad I could tickle your fancy.
I think The Great Contraband Roundup would work best if there were no criminal penalties on possessing contraband, or at most a bearable fine on par with a traffic ticket. Fines make the system self-financing, but they incentivize greater resistance, so I don't know if they are worth it in this case. [Proposal to ban guns, search every house in America to confiscate same] It would be a big job and you'd never be entirely finished, but you could make a very serious dent in the gun supply without straining the prisons or the courts. How does this not strain the prisons or the courts? What is it you are expecting the police to do, when they find an illegal gun in the possession of a now-criminal gun owner, that doesn't involve prisons and courts? For that matter, given that the fourth amendment is A Thing, one that blue tribe values quite highly and red tribe certainly will by the time you get anywhere near this proposal, how do you even conduct these searches without a separate judicial action for each and every one? If you are imagining that the prospect of having the police certainly search their house will cause every single one of America's gun owners to quietly turn in all of their guns in advance, without a fuss, then you really, really, don't understand the United States of America at all. And if even one percent of America's gun owners pick any of the alternative strategies to "turn in all the guns without a fuss", then the whole thing collapses into a bigger fiasco than Prohibition ever was, with vastly more guns in the hands of criminals. American policemen know this, American politicians mostly know this, and so most every gun control law that has or ever will be passed in the United States includes a grandfather clause saying that anyone who already has a gun (and doesn't commit any other crimes) can keep it. Glen Raphael June 25, 2015 at 6:00 pm @FJ: given the scenario described, wouldn't the populace just get a lot better at HIDING their guns? Even WITHOUT assuming any behavior changes it would take at least a hundred times longer to competently and non-destructively search my apartment than it would to search my person – I don't have THAT many pockets when I'm walking around town. But once I know the government is doing a house-to-house search for guns I'd be inclined to keep a spare hidden somewhere that's not IN my apartment so the search wouldn't find it. (Though on the plus side, such a program would do wonders for the nation's collective skills at gardening and carpentry if we all have to start keeping our guns under well-tended potted plants and behind well-constructed false windowsills rather than in a traditional gun safe or on a traditional gun rack…) keranih June 25, 2015 at 6:02 pm @CJB Illegal guns are, far and away, stolen, legal guns. This is typically caused by people who don't properly secure, store, or carry their weapons. [snip] If we want to reduce illegal guns- and we do- then we need to work on Joe Gunowner having a better system for control of their weapon. (Please take this in the most polite, non-aggressive way possible – I am smiling, partner, when I say this.) I find this highly problematic in two ways – one, because I am a homeowner who had her firearms locked away in a cabinet. Thieves – criminals – broke into my home, stole stuff, and picked up and walked out with the locked cabinet. I am not really sure what else one would have wanted me to do. Secondly – "well, were your guns locked up?" 
sounds, to my ear, very much like asking a rape victim, "well, what were you wearing?" THEY BROKE INTO MY HOUSE. THEY TOOK MY STUFF. WHY THE FUCK ARE YOU MAKING THIS OUT TO BE MY FAULT???? *ahem* A highly non-rational response, I give you that. Also a response that could easily have been predicted. A third, and likely more significant point – while firearms thefts are accounted as felonies in most areas, if no one was hurt, the cops don't have space to chase it. The assumption is that the weapon went into someone's trunk and is on the way to a major (gun-free) city for cash sale. It might show up a year from now, or ten years, or thirty, or not at all. And there is nothing any registration in the world is going to do about that. A final point as I ramble on – firearms are things. Are property. The same technology (fingerprints, interviews, etc) that looks for other stolen stuff looks for stolen firearms. How badly do we want to find stolen stuff? To what lengths will we go? And I lied – one more point. Anyone who thinks that the owners of firearms in the USA will stand by peacefully while their homes, barns, sheds, and crawlspaces are searched for firearms needs to get out more. Their social circle is much too small. Hold it, hold it, this is all my fault. Time out. A few miles upthread, there was cheerful consensus that we had talked out the gun issue for the time being, and didn't need to belabor it any more. If nothing else, I think, we figured Scott would appreciate it if his comment sections weren't completely flooded with gun arguments. This is a psychiatry blog, after all, not a gun blog. But in discussing a funny story that I refrained from posting, I happened to ask our Cool Dood CJB if he did nuance. Turns out he did! Unfortunately, he gave a big pile of specific examples, and the whole war ignited all over again. Now we're already into suspending the 4th Amendment, police searching our crawlspaces, and criminals walking off with locked gun cabinets. Can we just call this off for now and (at least) wait until the next reasonable thread? @John Schilling – you're missing the point. He's not talking about a legal regime that attempts to modify or enforce behavior through punishments. He's talking about a super-pragmatic "we will search every nook and cranny for X, and destroy it when we find it." No penalties to the person caught with it, beyond the obvious one that they don't have the contraband they paid for anymore. You lose your privacy, but you also lose the penalties that are a significant part of why that privacy was necessary. This is obviously not a realistic policy suggestion; it's so far outside the Overton window as to be laughable. It's also unlikely that it would actually work long-term. the biggest choke-point for drugs is probably the border, and we already search for contraband there more or less to the limit of our ability. There's just too much volume flowing through to practically search, even when you have it all narrowed down to a single port. I'm pretty sure the ability to import and produce, combined with the ability to conceal, is pretty much always going to be orders of magnitude greater than the ability to search, if for no other reason than that there are orders of magnitude more civilians than there are police. It seems like this logic terminates in "well, make everyone the police", at which point it starts blending into some sort of weird anarcho-capitalism or something? 
Pretty sure this logic terminates in "well, make everyone the police", at which point it starts blending into some sort of weird anarcho-capitalism or something?

I wish I could be that optimistic. A more likely scenario is something like East Germany's Stasi, which was a largeish force on its own (about twice the size of the NYPD for a country with twice the population of NYC) but which managed a network of informants that by some estimates made up a tenth of East Germany's population at its peak. (Note I'm not trying to invoke some kind of horns effect by bringing up a communist secret police group; it's just the most widespread informant network I can think of.)

@FJ – pretty sure you want no fines. I'm picturing some sort of system where… Hmm. Let's say your local police department has a rack of orange vests right in the lobby. The vests have an omnidirectional body camera and microphone system, plus batteries and memory for 48 hours' worth of recording. Each vest also contains an RFID card that, when activated with the one-use code, will unlock any lock in the country. All locks are of course required to comply with this technology. The vests and keycards are freely available to the public, but must be checked out like a library book, and must be returned in a timely fashion. Anyone wearing the vest can, from the hours of 9am to 5pm, legally enter any non-government structure in the country and search for items on the contraband list. Any they find may be collected for disposal at the police station. If they find more contraband than can be carried, they can call others to assist them. Assaulting or otherwise interfering with the searcher bears penalties as though they were a police officer. Obviously, bad conduct on the part of the searcher is also punished severely, and they lose the right to use the vests ever again.

…And now I'm sliding the other way, because it seems to me that community members have a much clearer idea of what's going on than the police. The obvious problem would be intimidation against the searchers, to keep them away from lucrative stashes. Maybe they get a mask too? Damn, this is an amazingly weird idea.

@Nornagest – maybe the bad part there is "secret"? Remove that, and you have citizens cooperating with the police to enforce prohibition. Logically, that's probably the only way a prohibition is ever going to work, right?

@Larry Kestenbaum – I'm…confused about why/how it was "decided" that we didn't want to talk more about this. However, in good faith, *shrug* okay. I'll keep a look out for the next thread, and bring up my issues with the "gun nuts ignore their own responsibility for gun thefts" meme then. In the theme of noting how wonderful SSC is – it's not a burden at all for me to go *shrug*, okay, we'll address later. In other places, I'd feel very comfortable assuming that the person who wanted to halt the convo was a person who wasn't interested in listening in the first place, and that that person thought I was making too much headway. Good on you, SSC.

@ FacelessCraven – I like your idea, until I put that police station with the vests in a gang-infested city, or in, oh, Belfast. Or the worst sort of Jim-Crow South. Then all of a sudden live and let live starts to look a lot more attractive.

James Picone June 25, 2015 at 10:05 pm

@general discussion of gun control transitions: After a spree shooting in Australia, we had some pretty stringent firearm laws get brought in.
Part of that involved confiscation of firearms and compensation of the people who had their firearms confiscated. Wiki article seems like a pretty good summary. I don't remember hearing about any violent confrontations as a result of that, although something something Australian and American culture is very different, we probably had less guns to begin with. Australia's experiment in gun confiscation involved seizing Evil Assault Weapons(tm) from less than 0.1% of the Australian population, and forcing maybe 2% of the Australian population to trade in their hunting or target guns for new models of roughly equal utility but without the Evil-Gun features. That's not the basis for an effective civil disobedience campaign (or armed revolt); Australia's police, courts, and prisons could have accommodated the dissenters, who would have had no popular support. FJ's proposed Universal American Gun Confiscation would be roughly two orders of magnitude greater in relative scope, and even greater in its unpopularity. Nathan June 27, 2015 at 9:56 pm I feel like there must be a pretty big difference between Australian and American culture in terms of guns, because when the govt took our guns we got so angry that we went and formed a new "give back our guns" political party that gets 2-3% of the vote in some states… and, like, it's really hard for me to comprehend a different reaction. You would resolve your political argument by SHOOTING PEOPLE? Really? "You would resolve your political argument by SHOOTING PEOPLE? Really?" No, that isn't what he is implying "looks at civil rights movement" okay, not since the 60s. Many people view guns are vital to protecting themselves. Eliminating that and making them depend on the government works… poorly. Nathan: I expect Americans feel precisely the same way about America taking their guns away as Australians feel about America taking their guns away. Careless June 25, 2015 at 10:52 am until I read the post and was like….yeah, that's issuing threats against a federal judge. "I hope there's a special place in hell" is a threat? As I wrote on Popehat, there's no way to see it as one, unless you believe that the person who wrote it is, in fact, a god. No. There's a well-established legal doctrine of "true threats". A threat has to not only be explicit, but such that a reasonable person would think that it was meant to be carried out. If I threaten to "punch" my brother "into orbit", this is not a true threat. Likewise, saying on the internet that a judge should be fed through a woodchipper isn't a true threat unless the person also knows where the judge is, has a woodchipper and could conceivably make good on it. And anyone who disagrees should be beaten to death with unstarted chainsaws! If I threaten to "punch" my brother "into orbit", this is not a true threat. Woops. In so far as punching him into the hospital or onto the ground is feasible, it can be a feasible threat. Hyperbole may signal non-seriousness, or "If you keep doing that, I'm gonna get mad and give you a bloody nose" — which can serve a bully's purpose just as well. Well, have a whack at convincing a jury that I legitimately intended to remove my brother from the gravitational pull of the planet via my mighty fists. In the meantime, don't open an investigation of Facebook, where I post said "threats" and get a pretty facially unconstitutional gag order on them to keep them from discussing what the government is demanding of them. 
If I threaten to "blow your fucking head off" with my .45 automatic, does the fact that a .45 ACP round is not capable of physically decapitating someone mean that I won't be arrested and convicted? The law, and reasonable men, operate on the "reasonable man" standard, not the "pedantic technicality" standard. Most of the alleged threats in the Reason piece don't meet that standard, but punching someone "into orbit" could reasonably be interpreted as a threat to punch them very hard. Noah Siegel June 24, 2015 at 3:02 pm "Whose Shining Garden is this?" Tom asked in Scots. eating fermented food decreases social anxiety? My first thought when reading this: Uh, do you mean beer? :D I don't think that's a truck driving off an aircraft carrier. I think that's a truck being fired off an aircraft carrier, by the catapult normally used to launch planes. The rape prevention article is here and the editorial, both open access. dlr June 24, 2015 at 2:06 pm I was sorry to see that the Nevada law doesn't allow you to spend the Education Savings Account money for things like college tuition etc, except when it is 'dual enrollment' (ie, taking the college class during high school). They do allow the money to roll over from year to year, but if you've been frugal, you lose the money when the kid graduates instead of being allowed to apply it to college costs. What a shame. People would have a real incentive to spend the money prudently if it could later be used for college costs. I bet it would be a real incentive for someone teetering on the brink of staying home and homeschooling their kids. AnnOminous June 24, 2015 at 1:59 pm Before we stop making fun of her entirely: The Original Rachel Dolezal Sigivald June 24, 2015 at 1:45 pm the Statue of Liberty is green because all old tarnished copper is green. When it was first built, it was, well, copper-colored. When it tarnished the government was supposed to raise money to fix it, but never got around to it. As the link says, much like other oxide layers on other metals, the tarnish is protecting the underlying metal. If it was "fixed" by polishing it all off, you'd have to replace the entire statue's skin every few decades (or so). Brad (the other one) June 24, 2015 at 5:22 pm What if you only polished it on special occasions, like, say, the centennial? alexp June 24, 2015 at 1:39 pm Are there any studies on whether courses to teach men not to rape work? If the contentions that the vast majority of rapes are committed by a small number of remorseless sociopathic serial rapists, then I'd think not. Teach men not to rape? How fucking offensive is that? It's a goddamned felony, second only to murder! There's big swathes of the country where if the male relative of a raped female kills her attacker, no jury convicts him. Rapists are the lowest status people in our society, even in prison. Every man knows not to rape. What all men don't know or don't agree with is the radical anti-male definitions of rape that if a woman ingests one drop of alcohol, she is incapable of consent, if a man asks twice, a woman is incapable of consent, if a woman consents enthusiastically but gets caught by her husband, it wasn't real consent, hell, even if a man has the gall to not, technically exist, he can still from the ether of a woman's own imagination, gang rape her all on his own (and break bottles on her face!)! You're gonna need more than one class to teach men all that. You may want to book the room for the semester. 
The offensiveness of this concept needs to be restated as often as possible. No one would propose fighting neonaticide by "teaching women not to kill their children", even though it's at least as gendered a crime as rape. Most rapists are men, but most men are not rapists, and it's unacceptable to treat us as if we are. It would be unacceptable even if these so-called rape-prevention classes actually lowered the incidence of rape.

Gbdub June 24, 2015 at 5:08 pm

Actually, when you start including in the definition of rape things like "emotional coercion (i.e. asking more than once)", "sex with someone who's functional but had a couple drinks", "touching/kissing without explicit verbal consent" or any of a number of other things that fall afoul of "affirmative consent" but not outside normal sexual behavior… then it's not clearly true that "most rapists are men".

There is a pretty good body of evidence that most rapes, like most crimes in general, tend to be performed by repeat offenders. I think it was Derbyshire that pointed out rape and mugging are not hobbies- you either never do them, or that's all you do. Obviously very reductionist, but the point stands. I think there needs to be a "this is rape, this isn't, this could become rape so here's how to proceed safely" class, but specific "Boys, here's how to not rape" classes are just going to piss off all the boys taking them.

Also to help boys that get raped, maybe? But that seems like it should be a different thing.

Ok CJB, now go read fifty romance novels and formulate a class for women about their "problematic" expectations of male behavior and sex.

SFG June 26, 2015 at 8:10 am

Whatever. I know lots of ladies have fantasies about this stuff, and feminists like to inflate the number of behaviors counted as rape, but real rape, as in forcing someone to have sex with you or drugging them and taking advantage of them, is very wrong. We didn't give Armin Meiwes a pass (or rather, the Germans didn't) because his victim wanted to be killed.

That Meiwes analogy would only work if you were talking about statutory rape.

I am certainly not arguing that rape isn't wrong. You have to read a lot of things that aren't there into what I wrote to get that. My point is that sexual interaction that isn't rape has a lot of uncomfortable situations and the expectations both men and women have can be at cross purposes. Expanding the definitions of rape to cover every awkward encounter is a vile intrusion into the most intimate of private interactions.

James D. Miller June 24, 2015 at 1:24 pm

If time crystals are the only way of doing an infinite amount of complex computing, then with probability 1 we are in a computer simulation run by a time crystal.

What if the crystal is in fact a cube? In that case we really are educated stupid. https://www.youtube.com/watch?v=qWsXzrh17Dw

By that reasoning, the computer simulating the time crystal is also inside a higher level simulation inside a time crystal, which is inside, etc. It's time crystals all the way down!

Joseph Hertzlinger June 24, 2015 at 1:18 pm

I'd better use this analogy before the nativists do: Is there any truth to the rumor that Jürgen Hass is planning to change his name to Shub-Niggurath?

I can't make heads or tails of this.

Shub-Niggurath, the Black Goat of the Woods with a Thousand Young is her full title. Ja! Ja!

There is another element to the schools soliciting donations.
https://www.timeshighereducation.co.uk/world-university-rankings/2015/world-ranking/#/
http://www.topuniversities.com/university-rankings/world-university-rankings/2014#sorting=rank+region=+country=+faculty=+stars=false+search=
http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/national-universities?int=9ff208

There's a small number of influential rankings that make a big difference to universities. Being one of the top 10 vs top 50 or top 100 vs top 300 makes a huge difference to the prestige and influence of the university. As you'd imagine, the ranking system is strongly distorted to favor the dominant institutions, and one of the ranking elements is "Alumni giving"; the justification is that if your uni is really awesome then lots of your graduates will be wealthy and also grateful and so will donate a lot. In practice it's a way of abusing the system to boost the rankings of colleges popular with the already rich. There's a number of metrics of Alumni giving, but some are as simple as the percentage of alumni who've ever donated anything at all, even a dollar, so they're strongly incentivized to try to get even poor alumni to donate something.

bean June 24, 2015 at 12:34 pm

The trucks aren't driving off the carrier. That's a catapult test. They get something big and heavy, fit it so it rolls well, and fire it off to make sure the catapults are working. In fact, those are tests of the new electromagnetic aircraft launch system on the USS Gerald R Ford. https://www.youtube.com/watch?v=rOijb3JPCe4

switchnode June 27, 2015 at 1:37 am

That is amazingly cool. Thanks for the context.

ADifferentAnonymous June 29, 2015 at 10:00 am

And for those of you tracking the progression of sci-fi into reality, the "electromagnetic aircraft launch system" is what you might call a "railgun."

You mean we could have been firing trucks this whole time? Quake really dropped the ball.

But Half-Life 2 picked it up.

Well, not trucks, but big things (violating Newton's 3rd Law, but still).

The craziest thing about this comment thread is that we've got dozens of comments on Rachel Dolezal – well discussed everywhere – and none on Burnham v. Duquesne, a rivalry that spanned the Boer Wars and the logistics of rare earth metals during WWI, with a temporary alliance in the great cause of turning the Gulf Coast into a giant hippo-ranch… in significant part, on the theory that bringing in one invasive species to eat another would work just fine. This is a story that includes more than a dozen jailbreaks, of all kinds, on three continents. It has professional critique of Winston Churchill's escape from a POW camp. It's got implications for childhood education. It's got a description of exactly what the Boy Scouts were designed to produce, and he's unexpectedly lethal. For crying out loud, this is the King of Scouts vs. The Black Panther! This is the kind of real-life adventure that pulps tried to emulate (watch for H. Rider Haggard looking to life for his inspiration). This really shouldn't need signal-boosting, because it is awesome.

The craziest thing about this comment thread is that we've got dozens of comments on Rachel Dolezal.

I will take the blame for that, but not apologize. I expect the length is at fault for the lack of hippo reaction though. Give it a day; they'll read it after the initial commenting flurry has died down.

Andy McKenzie June 24, 2015 at 11:28 pm

It was this comment that got me to read the article, and I loved it, so thanks.
Which I think explains why people are commenting on the Dolezal article instead — it has "inertia", as a topic, from a lot of pre-existing exposure. When there were no comments up, I doubt a person not exposed to any story in these links would be that much more likely to comment on Dolezal, but more people have been exposed to Dolezal's story and formed opinions they want to voice. Once the comments get going, they guide views and discussion — here as much as with the golden opportunity to have an argument discussion on race. BD Sixsmith June 25, 2015 at 3:35 am Yes, yes, that's interesting, but what exactly is there in that article that I can use to promote my beliefs and my personal virtues? zz June 25, 2015 at 4:14 am Well, if you're into free markets, you could point out that problems left to the market got solved (molybdenum), and problems left to the government didn't get solved (hippos). In fact, my reading indicated if Burnham had taken his 50k in venture capital and just started a hippo ranch, either (a) I can have hippo bacon and turkey omelettes for breakfast (and factory farming is less destructive) or (b) hippos wouldn't have actually worked in America for some reason and the whole venture was doomed from the outset. (I'm unclear if hippo ranches needed special government authorization because nature preserves or environmental regulations or something, or if they were just waiting for another 250k before starting the hippo ranch. In the former case, you also get to add "government regulation of the free market stopped entrepreneurs from developing hippo ranches, leading to the modern factory farm and the corresponding environmental destruction.") Excellent possibilities for arguments on unschooling (Burnham turned out fine), child-labor laws (Burnham, again), and Scouting curriculum. Hippos haven't been domesticated. They're highly aggressive, and kill people. Lots of people. Ranching them implies great pulp adventures; may not actually be efficient. A more zealous Boy/Girl Scout curriculum would apparently involve badges in bridge demolition, sabotage, escape, and assassination… which would certainly be different. Agronomous June 27, 2015 at 10:04 pm You're wrong, at least about the Demolition merit badge: I just sourced some C4 for my Scout son for his proj… Held in Escrow June 25, 2015 at 12:46 pm Yeah, I'm going to spend the next month annoying all my friends into reading this article. Wrong Species June 24, 2015 at 11:56 am I am 21 and just got my first date ever and it's to a girl that is actually very good looking. I always hear that depression is an internal thing and you won't be cured just by changing your circumstances but I have been miserable for half of my life and have felt ecstatic for the last week. Maslow is still descriptively helpful. Also, good luck. 8 years in while I don't have the same thrill as the first few months/years I've still got a warm glow and contentment where there used to be an endless chasm of fracturing self hate that became my mind before. Humans are social animals. Sexual/romantic intimacy is good for the mind. ddreytes June 24, 2015 at 3:34 pm That's great! I'm happy for you, and I hope it goes well. But on the topic: from my experience, at least, it's totally possible to have issues with depression and also be happy. 
IME depression does not mean being constantly miserable; it means having this recurrent miserableness that follows you around, that can show up as a disproportionate response to negative stimuli or even as a totally inappropriate response when things are going very well. So be happy and that's good and that's great! and like Murphy said, intimacy is good for the brain. But you should probably still be careful about depression generally even so. chaosmage June 24, 2015 at 4:12 pm There is such a thing as depressing situations and especially if you're a very romantic person, never having had a date at the age of 21 certainly counts. MDD "proper" is an internal reaction that's wildly out of proportion to, and not helping with, one problem or another. It can be useful to emphasize that because fixing the external issue will not usually help; somebody who's already in a depression loop will just find something else to despair over, so it's better to focus on the depression loop and ways to get out of it. But of course depression doesn't happen entirely in a vacuum. I'm not a doctor, but if you want my opinion anyway, please consider that in the absence of depression symptoms, you have the ability to improve your life in a way that you can't while you're miserable. And some possible improvements can indeed help guard against the symptoms coming back. Some of the usual suspects are: increase exercise, eat better, identify a few people who are dragging you down and reduce their impact on your life, actually get around to working through that depression self-help book… or whatever you never had the energy for although you know it'd be good for you. Take care, and cultivate gratitude. Creutzer June 25, 2015 at 12:38 am This. There are kinds of feelings miserable that are not the unchangeable-no-matter-what-happens kind of depression. It seems to be something of a fashion lately (in Western culture, and subject to variation within that) to deny that it's ever appropriate to feel miserable unless it's due to a clinical inability to feel happiness, i.e. MDD; but that's bullshit. I'm saying this as someone who has also been miserable for most of their life, but whose happiness level can and does vary based on external circumstances, of which my romantic situation is a significant aspect. Scott McGreal June 25, 2015 at 5:57 am There is a treatment modality called Interpersonal Psychotherapy that treats depression by focusing on improving the quality of one's close social relationships. The theory is that one's social needs are important to mental health and that problems in one's relationships can lead to depression. In this model, depression is not just an internal thing but an interpersonal thing. So from this perspective, starting a warm close romantic relationship could be quite therapeutic. Of course, I am not saying that internal factors do not matter, rather that they are not the whole story. All the best with the dating 🙂 chaosmage June 25, 2015 at 8:58 am That's right, but as far as I understand it, Interpersonal Psychotherapy in practice is usually focused on improving the worst relations you have, because those are the ones that are giving you problems. And usually it's the parents. People tolerate behavior from parents that they'd never tolerate from a friend or even a romantic partner. 
That tolerance is expected in our culture, and with lifespans getting ever longer, there's a growing incentive to "get along with", or tolerate, terrible parents since they can be around for most of your adult life. But tolerance is a conscious choice from the "grown up mind", and there are smaller, simpler, more sensitive parts inside any human mind that can have a much harder time handling the stresses that puts on the whole system.

@Wrong Species, Congratulations!

Wrong Species June 25, 2015 at 6:38 pm

Thank you! And same to everyone else who commented.

Faradn June 26, 2015 at 8:36 pm

Note how you're feeling in six to twelve months. That will give you a better idea whether this young lady at that point has lifted you off the hedonic treadmill.

AcidDC June 24, 2015 at 11:41 am

Shorter Ozy: Transsexualism is a real thing. Is Transracialism a real thing? Doesn't seem so.

notfbi June 24, 2015 at 11:37 am

For the study from Brazil – the employees actually self-report their race and the employers then 'report' it in the sense of collecting and forwarding the information. There is a racial wage-gap in Brazil, so if people with ambiguous races adopt the race they observe in their new workplace environment a wage-gap will be observed by the study. "Thus, our data on race emerges from a process that is primarily based on information provided by the worker, but where the employer's interpretation of that information may play a role."

"Thus, our data on race emerges from a process that is primarily based on information provided by the worker, but where the employer's interpretation of that information may play a role."

Shouldn't that be, "where the workers have motive to report their race as whatever race they perceive as on the higher side of a wage-gap, according to their observation on the ground, which ought to count as evidence of something"?

bode June 24, 2015 at 11:09 am

I don't think it's just prescribing pain killers that skews physician ratings. Whenever the topic comes up I always think of Dr. Hodad at Harvard: "How to Stop Hospitals From Killing Us" http://www.wsj.com/articles/SB10000872396390444620104578008263334441352 Exceptional physician and horrible surgeon.

Jos June 24, 2015 at 10:56 am

Other than the fact that there's no way to do it without lying, is there anything else explicitly wrong about Rachel Dolezal? She assumed blackness in a way that was recognizable, so presumably she got the harms as well as any benefits that blackness confers on a day to day basis, and she seems to have been working for the cause. I sort of like the idea of a world where we can choose our race. In my opinion, if she's willing to appear black, she should get to be black.

She faked a bunch of hate crimes against herself in order to get more of that sweet, sweet narcissistic supply.

Jos June 24, 2015 at 4:12 pm

Thanks, DrBeat – I didn't know that.

I still don't know that…

[she] faked a bunch of hate crimes against herself … which is a claim of fact, provable by physical evidence etc (though whether it has been proved I don't know)

in order to get more … which is an opinion about a motive that would need solid evidence to prove (such as a statement from her, letters or recordings of her admitting this, etc)

narcissistic supply … which is an opinion about, well, a diagnostic term that is not subject to solid evidence at all (though if several psychiatrists after much testing agreed to it, might be accepted by some court as a defense against libel)

of that sweet, sweet … which is just nasty talk.
http://www.people.com/article/rachel-dolezal-black-woman-cover-blown-hate-crimes

This article has a good review of her reported hate crimes. Any one of them, I might give her the benefit of the doubt. (And I might still give her the benefit of the doubt on one or two.) But there's a big pattern, and for one noose, her landlord told her before she called the cops that he had hung it for deer.

If that's lying, is Bruce Jenner lying?

sweeneyrod June 24, 2015 at 11:33 am

No. Jenner isn't trying to cause anyone to believe false things. For instance, he is not claiming that he has female biological features. On the other hand, Dolezal was trying to make people believe that she had considerably more African heritage than she actually did.

Mai La Dreapta June 24, 2015 at 3:48 pm

Jenner and his apologists, however, are trying to enforce social stigma against anyone who points out that Jenner is not actually a woman. This isn't facially the same thing as lying, but it does require people to lie by implicature.

Randy M June 24, 2015 at 5:46 pm

Worse, they are trying to enforce saying Bruce Jenner was *always* a woman, even when competing in men's sports and fathering children, in a way that is every bit as much of being a woman as someone who mothered children. It sure looks like it is less about Jenner's feelings and "just being nice" and more about asserting control over what people perceive as truth–objective, physical reality be damned.

In addition to what DrBeat said –and that is a drum that deserves to be pounded every time anyone thinks of defending her– you'll notice she faked being black in PNW academia, not the blue-collar south. Blacks are not a homogenous group, some get a lot better benefits:harms package than others. In my perfect world, you'll be able to measure your racial oppression (imperfectly of course) by how many people switch races, and in which directions.

That seems like a poor choice of proxy variable; it's gonna be non-monotonic with the level of oppression.

Zakharov June 24, 2015 at 10:44 am

I don't think your weird murder mystery can beat the bizarre and fascinating murder of Rodrigo Rosenberg.

Unknowns June 24, 2015 at 10:33 am

Scott, I think it's pretty clear that Rachel Dolezal's claim to be black is in fact a good analogy for transgender considered in the way you discussed it. As far as I can tell, you basically said that we should redefine "woman" to mean "a human being who wants to be called a 'woman'," and that we should do this to be nice to them. In that case, why not redefine "black" to mean "a human being who wants to be called 'black'"? If we can find a reason not to redefine the word 'black', we can probably find one not to redefine the word 'woman'.

Jiro June 24, 2015 at 10:39 am

The problem here is that the way Scott discussed it is different from the way that other people discuss it. Scott's principle was broad enough that it does cover transracialism (and otherkin). The principles espoused by the Tumblr post are narrower, and don't. How much does Scott agree with the Tumblr post, and is he aware that agreeing with it means disagreeing with his previous post about transgender?

anodognosic June 24, 2015 at 10:41 am

This is why I generally reject the standard argument for accepting transgenderism. (Or rather, I accept it only on a personal rather than societal basis, essentially out of politeness and not being particular about being a stickler in social situations; I'll even respect otherkin's identity preferences, as long as they don't pee on my carpet.)
A better argument is the one Scott linked via Ozy, which is based on the actual science of gender. Did you read that?

The argument there seems to be, "sex/gender essentialism -> transgenderism good; racial non-essentialism -> transracialism bad." Strangely, this cuts in the opposite direction of the major thrusts of arguments in the last half-century.

anodognosic June 24, 2015 at 4:19 pm

I'm not sure essentialism is exactly the right way to slice this. For instance, this approach can accommodate various axes and degrees of gender, which essentialism would forbid; it is also compatible with a racial essentialism that denies any mechanism that would produce a transracial person as hormones might a transgender person.

this approach can accommodate various axes and degrees of gender, which essentialism would forbid

…only if those axes/degrees are rooted in some essential material feature. So, still essentialism.

it is also compatible with a racial essentialism that denies any mechanism that would produce a transracial person as hormones might a transgender person.

This is less clear to me. I think their perspective does necessarily rest upon a racial essentialism, but they try pretty hard to avoid the appearance of it.

That was because Scott's defense of transgender was wrong. Anything that concludes "the only cost is that we are a bit nicer to people" is one hundred percent guaranteed to be absolute bullshit and you should weight it as having no factual relevance. If we were to be obligated to "respect" everyone's claims about their identity, then the Identity Games would become our national fucking pastime, and personal identity would become exclusively a thing that you manipulate in order to demand other people give you things you want or to be able to abuse people and get away with it. The actual defense of transgenderism has to do with the empirically verifiable neurological differences between male and female brains and the fact that brain differentiation occurs at a different stage of fetal development than the body's sex differentiation and there is a small but extant possibility that this process won't go the way it is supposed to.

Perhaps we should be less stereotypical and demanding of conformity to gender roles of people's brains.

What does that have to do with anything?

Because by saying men's and women's brains are different, you are reinforcing complementarianism rather than egalitarianism, and since the emphasis has been that men are superior to women in part because of their brains which are built to run on reason and logic and science, while women's brains are built to run on emotion and fluffy kittens, you are saying that women can't do maths, can't read maps, and should stay in the kitchen making buns for tea 🙂

Men are superior to women in part because we throw all the ones whose brains aren't built to run on reason and logic and science in prison. In America anyway.

That's just finding the physical mechanism causing it all, though. It doesn't follow from that that we should accept the self-identification as true. If we found (as I'm sure we have) empirically verifiable neurological differences in the brains of schizophrenics, it wouldn't follow that we should declare that the voices they hear really do exist, and deny github commit privileges to anybody who says otherwise.

That's not even close to comparable? The brain you have is the person you are. With transgender people, the brain they have is not the same gender as the body they have.
The fact that we can see there are differences means that their statement that they are the opposite gender their body appears to be is NOT a delusion. The brain architecture backs this up. There are empirically verifiable differences in brain structure between schizophrenics and non-schizophrenics. It does not follow — and nobody has suggested — that the voices they hear are real. It follows that they are schizophrenic, and they have the brain of a schizophrenic person. Schizophrenic people may experience delusions. But saying "I am schizophrenic" is not itself a delusion. (Unless we're talking about like Munchausen's where they're lying about schizophrenia to get — fuck, you know what I mean.) "I have the brain of a woman, so I am a woman even though my body is not" != "I am schizophrenic, so the voices I hear are real." "I have the brain of a woman, so I am a woman even though my body is not" == "I have the brain of a schizophrenic, so the fact that I hear voices is real." The brain you have is the person you are. That's a nice assertion you have there. Bonus points for the essentialism. Schizophrenics don't actually hear voices. Their brain thinks they hear voices. So, "I have the brain of a schizophrenic, so the fact that my brain thinks I hear voices is real." == "I have the brain of a m2f transgender, so the fact that my brain thinks that I am a woman is real." The biggest unsupported piece of gender essentialism in your post is the claim that "having the brain of a woman" is a thing. There is some research to try to show this, but to my understanding, it's not 'good' yet. The sensation of hearing is one made entirely by the brain, so I don't see how "I hear voices" and "my brain thinks I hear voices" are different in any meaningful way. Anyone who actually hears another person actually speak, also has a brain that thinks they hear a voice. I don't see how "I hear voices" and "my brain thinks I hear voices" are different in any meaningful way. In the former, sound waves excite the eardrum, which passes the signal to the cochlea. There, clever arrangements of neurons process this signal and pass it on to the portions of the brain which interpret it as "hearing voices". Only the final stage (the interpretation of hearing voices) occurs for the latter. Anyone who actually hears another person actually speak, also has a brain that thinks they hear a voice. Affirming the consequent is quite out of fashion. But if brain architecture over-rides bodily structure, so that it is irrelevant if you possess a penis and testicles and have testosterone levels and secondary sexual characteristics comparable to those of males, it's what your brain thinks that identifies you – why should it be relevant that the voice the brain thinks it hears is not actually caused by the vibration of air against the eardrum passing on the signal? If the signal is faulty in one instance, why can't we say it's faulty in another? Your schizophrenic brain architecture makes you think you hear voices, unlike the brain architecture of a non-schizophrenic person. Your transgender brain architecture makes you think you are a different gender, unlike the brain architecture of a non-transgender person. The sensation that I am a woman does not arise out of any physical input from my body; the sensation that I hear a voice speaking to me does not arise out of any physical input from my body. Why validate the brain architecture over the bodily reality in one case but not the other? 
(You know, I begin to see the appeal of Least Convenient Universe and things like creating the Trolley Problem; there's a kind of sadistic enjoyment in forcing people to put their backs to the wall and defend their position to the utmost detail in the teeth of all objections, cutting down those same objections and going "But what if – what if – ") You are being deliberately obtuse. If you have the brain of a schizophrenic, you are mentally a schizophrenic. This does not cause any changes in external reality. You do not erroneously or delusionally believe you are schizophrenic. If you have the brain of a woman, then you are mentally a woman. It is a statement about the brain. About the brain. Not external reality. The brain. If you compare it to a statement about factual external reality you are lying. It is not a delusion, quit acting like it is — a delusion would be "I don't actually have a penis and testicles, these do not exist." That is a statement about external reality that is analogous to "the voices I hear are real". Since that is not happening, stop fucking posturing as if it is. It is no more a delusion than phantom limb syndrome is a delusion. People with phantom limb do not claim the arm is really there and was never amputated. They claim they still experience sensation as if it was there, and that is true. The sensation that you are a woman does not arise from any physical input from your body, whether or not your body is that of a woman. It is from your brain. Your brain has a map of what all its parts are and what they are doing. It's the sense of proprioception. This is why people get phantom limb syndrome — part of their body is taken off, and not removed from their brain's map. Transgender people have a brain map that says their body is all wrong. It is not a delusion. It is not a belief. It is a state of their brain. They say it is a state of their brain. The way we have to solve this is by altering the body so it matches the sens of proprioception, just like if we HAD the ability to regrow limbs, that would be how we treated phantom limb, and why therapy does not treat phantom limb. Because these neurological states are not delusions even though they do not match external reality. You are not presenting the least convenient world — you are just conflating things. You are being deliberately obtuse. (…see how easy and unhelpful that was!) If you have the brain of a m2f transgender, then you are mentally transgender. This does not cause any changes in external reality. You do not erroneously or delusionally believe you are m2f transgender. This is a statement about the brain. About the brain. Not external reality. The brain. If you compare it to a statement about factual external reality you are lying. It is not a delusion, quit acting like it is. A delusion would be, "I am actually a woman, and you should treat me like one." That is a statement about external reality that is analogous to, "The voices I hear are real." Since that is happening, stop fucking posturing as if it isn't. The sensation that you are hearing voices does not arise from any physical input from your body, whether or not your body is that of a mentally-well person. It is from your brain. Your brain has a map of what's happening in the world. Schizophrenic people have a brain map that says the body is all wrong about what they hear. The way we solve it is to adopt a strongly ideological philosophy that ignores thousands of years of traditional Western philosophy. 
You are not presenting any reasonable distinction – you are just conflating things. (Repeat the above using various other body dysmorphic disorders and demand that we solve them by altering the body so it matches the senses. Just flipping phantom limb for body integrity identity disorder is sufficient (because it forces you to confront the fact that you're hiding normative value), but cases like anorexia should work, too.) The words you ever-so-cleverly substituted into my argument caused you to say complete fucking gibberish, because you do not understand the discussion, because you are choosing not to understand the discussion, because you believe strategic incomprehension will allow you to win, and I'm not going to bother with your feigned idiocy any more. Seriously, if you think ANOREXIA involves PROPRIOCEPTION… No. You don't think anorexia involves proprioception. You just threw things into a word-pudding that would allow you to keep posturing. …are you seriously hinging all your claims on some idea that there is a unique relationship between gender dysphoria and PROPRIOCEPTION that is not captured by something like body integrity identity disorder (my main example)? If so, that's a heck of a new one, and I'd really like you to explain further so that we can rigorize it. If not, then please stop intentionally going off on irrelevant tangents and instead stick to what we both know is the core of the discussion. And please lay off the insults; they just make you look silly. Bodily integrity identity disorder and phantom limb syndrome also involve proprioception. That is the brain's map of the body. Anorexia does not. Making an analogy to anorexia because it involves the brain thinking something, rather than this specific sense that works in this specific way, is just substituting words in at random and acting as if you're being insightful. Just like you are when you claim that any statement involving the word "should", as in "you should treat me like a woman", can be delusional and an inaccurate model of reality. Should statements are not models of reality. And yes, the treatment for someone who actually has bodily integrity identity disorder is amputation of the affected limb, I don't know why you acted as if you had driven me into a corner on this. If you have a brain map that says "this limb does not exist, it is not part of your body", no amount of therapy is going to fix that. People should be screened to see if they actually have BIID as opposed to an amputation fetish or general self-hatred expressing in a way that appears similar to BIID, but people who pass the screening should get the amputation because that is what information suggests makes the best outcome. What would be the benefit of forcing someone to keep a body part that will always feel repulsive and alien? Bodily integrity identity disorder and phantom limb syndrome also involve proprioception. Agreed. Now your challenge is four-fold: 1) Provide evidence that gender identity disorder does as well. (This is not trivial; DSM doesn't seem to make any feigns in this direction, though it does make analogy to certain forms of schizophrenia.) 2) Give a criteria for why "involves proprioception" is a meaningful distinction for types of psychiatric conditions. 3) Provide evidence that "having the brain of a woman" is a thing rather than "having the brain of a transgender." (Note the essentialism here.) 
4) Provide a test for determining whether individuals "have the brain of a woman", so that we can make the distinction Deiseach is concerned about below. If you can't do the first/second (which you haven't yet), then anorexia is still relevant (…and it might be even if you're successful at these items). Thus, I bring it up in order to cause you to actually argue these two points. So do it. the treatment for someone who actually has bodily integrity identity disorder is amputation of the affected limb Many medical professionals disagree with you, and there is evidence showing that such treatment is not more effective than a control group. So we can add to your challenge, 5) Explain why proprioception, in particular, is a critical feature for determining that the appropriate treatment is always to do what the individual asks for. Hyzenthlay June 26, 2015 at 2:49 am How close is the correlation between brain architecture and feelings of gender identity? Are there any studies about this that include both cis and trans people? I mean, I'd expect there to be some overlap, in that brain architecture probably has some influence on whether someone "feels" male or female. But I'd also expect there to be some women with spatial/mathematical "male" brains who still identify as women, and vice versa. Plus, lots of people with mixed or ambiguous brains. @Hyzenthlay My understanding is that it's really quite bad at this point and doesn't come close to demonstrating the things some people want to claim. They're just being hopeful; that's all. Has there been a brain study of transsexual people who weren't taking hormones? For that matter has the original study been replicated? What do you do if someone identifies as transsexual, gets the brain scan, and is told "Sorry, your brain architecture and physical gender are congruent"? Not really transsexual? Only faking? Deluded? InferentialDistance June 24, 2015 at 3:56 pm Transtransgender. They have dysphoria over not being transgender. Gender dysphoria dysphoria, if you will. I am admiring the biological essentialism on show here, that is usually the first thing decried in discussions of "what makes a transgender person transgender". If we are claiming there are measurable physical differences between male and female brains, and if we are claiming that racial attributes are also physically measurable, are we really going to go so far as to say there are black brains and white brains and Asian brains? What if you scanned Rachel's brain and found she had a 'black' brain? If the developmental influences in utero on the foetus can alter brain architecture independently of gender chromosomal phenotype, what about environmental developmental influences in utero that can alter brain architecture independent of racial chromosomal phenotype? Not really transsexual? Not really transsexual. Since, you know, the entire argument is about how transsexuality is a verifiable neurological state and not merely what someone "identifies" as. Some people think they are transsexual and are not. Some people will think they have ANY medical issue you can name, but really do not. This is why screening processes for elective surgery are such a good idea. If the developmental influences in utero on the foetus can alter brain architecture independently of gender chromosomal phenotype, what about environmental developmental influences in utero that can alter brain architecture independent of racial chromosomal phenotype? Those don't exist. 
Gender differentiation in utero was not made up in order to fit a theory and you cannot make up other things that look like it to fit your theory. Every person's DNA contains information for both male and female development; the gender differentiation is based on hormone levels, and hormone levels are ACTIVATED BY the presence or absence of a Y chromosome, but the information of what the body does in either case is fully present in everyone. There are people with androgen insensitivity syndrome, who are mentally and physically female, but have XY chromosomes. They had the male sex chromosome, but the signal it gave off couldn't be received, so their bodies and brains developed as female. There are people who are mentally and physically male and have XX chromosomes, because due to a transcription error they got an X chromosome with the "Turn On Maleness" gene. Developing fetuses cannot differentiate as being either white or black: the information for both phenotypes is not present in all white and black people, and there is no "Turn On Whiteness" gene. This is why people of mixed racial heritage can inherit some but not all features of a given race in their parentage, but someone who has one male and one female parent does not commonly inherit Mom's boobs and Dad's penis. the entire argument is about how transsexuality is a verifiable neurological state and not merely what someone "identifies" as. I'm going to repeat one more time. Please cite your sources for this. Peer-reviewed publications are obviously preferred. Show me precisely how strong gender essentialism is and how we can quantify "the brain of a woman". I know Deiseach wants to know so that she can feel put down by you. Hyzenthlay June 26, 2015 at 12:59 pm @ Deiseach: Or…what happens if a person who's happy with his birth sex gets a brain scan for an unrelated reason and accidentally finds out his brain is actually female? Should he be told, "Hmm, you're clearly a woman. You need to start identifying as 'she' and get genital reassignment surgery right away." "B-but wait…I like being a guy!" "Sorry, you have a large hippocampus, a high ratio of gray to white matter, and language centers in both hemispheres. No penis for you. TAKE HER TO THE SURGERY ROOM!" Okay, this would never happen, but it illustrates (for me, anyway) the problem with trying to define gender exclusively by brain structure. Hyzenthlay: This happens. Today. In Iran. Granted, they don't use MRIs to diagnose, working off of revealed preferences instead. Also, as the penal code would otherwise require execution, there's an argument that such surgeries, under various degrees of compulsion, are compassionate. There are other opinions. Hyzenthlay June 26, 2015 at 6:30 pm @notes: I guess I should amend that to, "this would (probably) never happen in America." Troubling stuff. JDG1980 June 24, 2015 at 11:22 pm Are there actual peer-reviewed studies on this? Because if you're going to turn society upside down, require people to deny the evidence of their own eyes, and accept a person with their dick swinging out in the women's locker room, then you better be damn sure you're right. And how does this take into account people who decide to "transition" well into middle age? Many of these people were traditionally masculine for most of their lives, married women, fathered children… then one day they suddenly decide they want to be girls? Sorry, but no. Blanchard's autogynephilia hypothesis makes a hell of a lot more sense here. 
If that's the case, then Bruce Jenner is just a pervert, and we have no obligation to play any part in his weird sexual fantasies. Here's a study and I am not looking for more because for some fucking reason Google has decided not to function for me, again. Have you searched with Yahoo lately? It's AGONY. The link you provide does not appear to be about someone with their dick swinging out in the women's locker room. It appears to be about a woman who saw someone who looked like a man enter the locker room, and that's all the information provided about the incident. Unless it's in the video that I can't get to work. But if it's not, you're saying this is bad based on an egregious behavior that you yourself invented. (Someone who swings their dick around in the women's locker room is probably not an MtF transsexual. "Hey, gals, everyone look at the genitals I hate and am repulsed by because they shouldn't be part of my body! Whee!" doesn't pass the smell test.) I don't think Bruce Jenner is a pervert, and not everything unusual that involves gender is a "weird sex fantasy", but I don't think he is transgender, no. He participated in masculinity with enough zeal to make me very, very, very suspicious of a declaration that he has "always been a woman"; having actual dysphoria makes that kind of behavior agonizing to engage in. While people can work through it, calling that much ATTENTION to it, and calling that much attention to the body you are repulsed by because it shouldn't be yours, just does not match up at all. Plus, picking the name "Caitlyn". Caitlyn is a name that is popular recently. If he was always a woman, he would have picked a female name much, much earlier, when "Caitlyn" wasn't on the radar. I FULLY support a screening process before sex reassignment surgery, because there are people who believe they are transgender but are not, and should not be given surgery that will be irreversible and not address the problem. This is also why psychological screenings before plastic surgery are such a good idea: there are some people who want nose jobs because they hate their noses, and some people who want nose jobs because they hate themselves. The former should be allowed to get plastic surgery and not the latter. I think that Bruce Jenner appears, given the information available, to be someone who thinks he is transgender but is not and will end up regretting SRS. randy m June 24, 2015 at 10:15 am "Contain somewhat contained" Because it's not like universities influence nearly every single person who goes on to any important position in the nation. "Open borders advocates take note" Was that an admonition to beware of how their goals can weaken the west, or to adopt these tactics in order to accomplish their noble goals? Re: Rachel Dolezal. There probably isn't a chemical that changes the race of someone's brain. But on the other hand, it's still plausible that someone with the right childhood experiences and upbringing can identify with a race that they don't physically match, in a way that they can't just throw off later in life. She did have black siblings, after all. (And on the third hand, I suspect people seem to be picking on her because she seemed to have been calling herself black in order to gain benefits, which is not relevant to the Tumblr post, but is relevant to your explanation of why people pick on her. 
They're not picking on her because they're anti-black and found a person that society lets them be anti-black against; they're picking on her because the fact that she benefits from doing this calls the narrative into question.) Also it looks like that's by someone else on Ozy's Tumblir because of the way Tumblir does quotes, it's not by Ozy. AbuDhabi June 24, 2015 at 10:11 am Isn't DNA a chemical? Your glibness is grating. DNA is literally a chemical, but that is not the connotation of the word "chemical" in this context. randy m June 24, 2015 at 12:19 pm Never mind the connotation, DNA doesn't "change" a person's race, nor could it unless it was swapped out at the moment of conception. stillnotking June 24, 2015 at 10:44 am The outrage against Dolezal seems fairly straightforward: she's a fraud, which trips our cheater-detection, retributive-anger response. There's also a bit of cognitive dissonance created, among those with a certain political bent, who may find it hard to articulate exactly how she's a fraud, whom she is defrauding, and why. I suspect people seem to be picking on her because she seemed to have been calling herself black in order to gain benefits How does this make sense? Even Jon Stewart said that passing as black only brings hardship and pain, not benefits. You may have noticed that when it comes to "SJ" subjects, Jon Stewart never ever ever stops lying. Similarly, the benefit of being black is, at least in part, that you have a large contingent of the population who are willing to never ever ever stop lying in order to defend you from consequences of your actions, get you things you want, and shower you with attention and victimhood. Most of the non-benefits… malefits?… of being black apply to actual black people regardless of what race they claim, and do not apply to white people who claim to be black. All of the benefits of being black apply to white people who claim to be black. Whatever happened to Anonymous June 24, 2015 at 11:20 am I'm pretty sure that was sarcasm. Since I can't tell if you're joking here… In this case, the hardship and pain were fucking lies that only hurt the other black people of the Pacific Northwest, while the speaking engagements and professorships were real and accrued directly to her. The lies about hate crimes are separate from the penalties that accrue from being black in America. the speaking engagements and professorships were real and accrued directly to her Normally, when these things accrue to black people, we say that they happen in spite of their position of disprivilege, not because of it. That's not true. They are in part due to their position, which has an unequal and oft changing mix of benefits and drawbacks. …not according to Jon Stewart (and a suspiciously large contingent of other commenters). Careless June 25, 2015 at 9:41 am Yeah, being black is a huge disadvantage for becoming an African Studies professor Tom Womack June 24, 2015 at 11:17 am It's not obvious that it brings exclusively hardship and pain if you personally strongly value the company of black people and the camaraderie of black institutions. It seems the same sort of category of thing as pretending to be Jewish because you enjoy Talmudic argument and the company of yeshivot, or pretending to be an aspiring priest because you value the Oxbridge collegiate environment at a period in time when there was a clerical requirement. I don't think those are the types of benefits that Jiro was referring to (this hunch is supported by Alraune's response). 
RE: pretending to be Jewish, what about the arguments over defining who is Jewish? If the Orthodox (or orthodox) argument is that heritage is through the maternal line, so that only the children of a Jewish mother can be Jewish, what about the children of Jewish fathers or descendants of more remote Jewish ancestry? Some strands of Jewish thought accept people as Jewish even if they can't produce the requisite matrilineal proofs. Are they transracial wannabes, only pretending to be Jewish for the benefits? For example, the Ethiopian Jews who migrated to Israel in the 1980s-90s, risking their lives and losing relatives along the way, have faced persistent doubts as to whether they are properly Jewish in doctrine and descent. "I feel that I'm the Jew I want to be," protests Fentahun Assefa-Dawit of Tebeka, an advocacy group for the 130,000-strong community. "I don't want anyone to tell me how to be Jewish." Eugene Dawn June 24, 2015 at 7:32 pm This is the example that occurred to me as well. My initial feeling is that anyone with some sort of "meaningful" Jewish ancestry or connection with Jewish tradition or heritage ought to be allowed to identify as Jewish ("meaningful" being left deliberately vague) — thus someone with a Jewish father should count, or someone adopted into a Jewish family and raised Jewish. I think this intuition is also behind my feeling that race and gender really are different for this sort of "trans" phenomenon–it would be absurd for someone to claim that they just "feel like a Jew", or that they "should have a Jewish body" — to be Jewish is as much historical and cultural as strictly biological. With less confidence, I'd say something similar holds for being black, perhaps especially in America — that there's a culture element and an element of some shared history as well. I think I remember debates about whether Obama should be considered part of the "black community" as his African ancestry was more recent, and for example his ancestors had missed out on slavery. I think this is what people find most unlikely about Dolezal's claim, and what separates it from claims to feel like a woman–even if you "feel" your body ought to have more melanin, that would't make you truly African-American any more than a Ghanaian born with strong, stereotypically Jewish features would be considered Jewish. Whereas, anyone with female features really is considered a woman. Of course, there are still complications and edge cases, hinted at by the issues raised by others, but I think this sort of idea underlies my intuitions. I know Jewish conversions are fairly rare, but do exist. Presuming the convert is a woman, are her children considered Jewish? Brett June 24, 2015 at 8:22 pm No comment on how to make the decision, but I was highly amused by the genetic evidence suggesting that the Ashkenazim are descended from male Jews who picked up Italian wives and then settled in Germany. Basically, their Y chromosomes (the paternal line) look Levantine, while their mitochondria (the maternal line) looks southern European. So very likely by this standard, most European Jews aren't Jewish! Yes, they would be, with the caveat that different denominations have different standards for what counts as a conversion. Aaaand, while looking up Jewish conversion, I discovered the existence of the Subbotniks, a Russian Christian sect that adopted Jewish rites and eventually came to regard themselves as Jews, while still holding reverence for Jesus and the Gospels. 
Following Wiki, it seems many of them moved into the Pale of Settlement, or became early Zionists and moved to Palestine, intermarrying with ethnic Jews. It seems Subbotnik communities don't celebrate Hannukah, as it has no religious significance, and is only significant as a national holiday for ethnic Jews. Well, that is just fascinating. I have no idea what this means for the possibility of "identifying as Jewish". Not really so rare. There are more converts to Judaism ("Jews By Choice") in the world today than ever before. A convert is considered completely Jewish, just as if he or she were born to a Jewish mother, so yes. Of course, different movements have different standards for what is considered a valid conversion. I have heard of a guy who went through conversion three times, with differently affiliated rabbis, in order to satisfy various conflicting authorities. So very likely by this standard, most European Jews aren't Jewish! In Biblical times, Jewish descent was patrilineal. The matrilineal thing came along later. Brett June 25, 2015 at 12:30 pm Jewish settlement in Central Europe was in medieval, not biblical times, though. Definitely post-Rabbinical Judaism. Dain June 25, 2015 at 1:47 am Interesting, the way social forces push on some but not others. My ex-gf is mixed race, half black/white. She said that as a child she was never white enough, and by high school she wasn't black enough. The latter is what has shaped her self-conception as definitely black, as the perception of the black community is what she considered most authoritative. She's now with a black husband. But an old Barnes & Noble coworker of mine, also half black/white, refused to pick one side or the other, and more or less considered her identity to be shaped by her interests (graphic novels and going to Comic Con). Her boyfriend is white. I suspect a lot of geeks consider themselves that first and an ethnicity second. FWIW, I see a lot of black nerd women on OKC making it quite clear they're fine with interracial dating. 😉 Appearances can be deceiving. During the Battle of the Atlantic, you had steadily mounting Allied losses — higher and higher — and then, abruptly, one month, in which the Allies introduced no radical new measures, they collapsed, and the next month (two months?) nothing, followed by one last trivial attempt and no more. If you analyze it in terms of Allied losses per sub it was in decline a long time. They just managed to reach a tipping point where Allied anti-sub measures — centimetric radar, Leigh light, aircraft carriers — were deployed widely enough to counter increasing fleet size. Brian June 24, 2015 at 9:54 am Great. My wife is pregnant with our first, due in November. November, apparently, is the cruelest month. Like I wasn't nervous enough. 🙂 Scott Alexander Post author June 24, 2015 at 12:31 pm I was born in November and turned out okay. OK, that helps. Plus my dad slapped me in the head at lunch today when I voiced concerns after reading the study. His take: "that's stupid." Sage wisdom, probably. Especially because kids don't necessarily arrive when expected. If the kid stays in for nine months, count your blessings. ton June 24, 2015 at 9:49 am The Ozy post seems to be a reblog of someone else? Alex Richard June 24, 2015 at 9:33 am The cartel link is incredibly misleading at best, and probably just plain wrong. (The Mexican government's official position is that there are 9 cartels left.) There are two possible steelmen of the cartel article. 
One is that there are only two national alliances of cartels in conflict with each other. This says absolutely nothing about progress in the war, and contradicts the article, but is true from a certain point of view. The other is that killing the leaders of cartels leads to the cartels balkanizing/fragmenting into smaller groups, which is true. But this again does nothing to actually reduce violence or the drug trade, and AFAIK it's flatly false that all but two cartels have fragmented. (Mexico actually does appear to be making some progress; e.g. homicides have declined from their height. But the article is wrong or misleading.) Error June 24, 2015 at 9:26 am Please tell me this title is a Legend of Zelda reference… James Picone June 24, 2015 at 9:31 am https://en.wikipedia.org/wiki/Monsters,_Inc. presumably Error June 24, 2015 at 10:32 am Damnit. 🙁 thepenforests June 24, 2015 at 9:45 am Probably Monsters, Inc. Rachael June 24, 2015 at 9:25 am I'm embarrassed to ask this, because I think of myself as good at spotting ambiguity, but: what ambiguity do you mean, in the book subtitle? I assume the intended meaning of "Intelligence: All That Matters" is that intelligence is the only thing that matters. If I squint a bit I can contort "all that matters" into meaning "all of those things matter" instead, but even then it's a stretch to make the referent be "things other than intelligence". Is that what you're suggesting, or am I missing another interpretation? "All That Matters About Intelligence" vs. "Intelligence is All That Matters" Oh, I see, thanks. The first one didn't occur to me, because it seems like such a contentless subtitle that doesn't add anything to the title. I still think the second one is intended, then, especially given the chapter heading "Why intelligence matters". The first one conveys that the eponymous tome has all one needs to know about intelligence – no other reading required. The second is making a strong case about the role of intelligence, but doesn't claim to describe everything about intelligence. @Rachael, @randy m: Another way to put it: "Hand Crafting Dollhouses: All That Matters" has an entirely different connotation. Psmith June 24, 2015 at 8:27 pm Shit, I read it as "Intelligence is All That Matters" vs "Intelligence: It Matters!" Go figure. Godzillarissa June 24, 2015 at 9:03 am Re that adoption thing: His last name, "Hass", is the German word for "hate". Now remind me what it's called when the name describes the person's character… Edit: Ah… it was nominative determinism. Anyway, just felt like pointing that out is a thing around here, so there. anodognosic June 24, 2015 at 9:06 am Hah! It's nominative determinism. Re: Tolkien and the amount of work he did on designing his calendar; from a letter to Naomi Mitchison: I am sorry about my childish amusement with arithmetic; but there it is: the Númenórean calendar was just a bit better than the Gregorian: the latter being on average 26 secs fast p. a., and the N[úmenórean] 17.2 secs slow. I am also very pleased to see the link about Yola 🙂 Doug M. June 24, 2015 at 8:07 am I note that the article also manages to ignore the fact that many countries have welfare states nearly as large and as generous as Sweden's. Germany, for instance, has universal health care, large pensions, generous unemployment benefits, a raft of mandatory benefits for employees including high minimum wages and generous maternal and paternal leave, heavily subsidized state-run day care, and absolutely free state-funded higher education. 
With 80 million people and the world's sixth largest economy, Germany is not exactly a small country. Yet somehow Germany has managed not only to survive but to prosper. (Among other accomplishments, it's the only large economy that's managed the trick of running a massive trade surplus with China.) excess_kurtosis June 24, 2015 at 9:18 am Scandinavia has almost thirty million people. It isn't really *that* small in the scheme of things. That's true, but I actually hesitate to lump "Scandinavia" all together. Although they all have 'welfare states' by US standards, if you look closely the details are very different. To give a single minor example, it's really really hard to fire someone in Sweden. (Like, unless the employee is caught stealing from you, you almost have to declare bankruptcy to get rid of him or her. That level of hard.) Denmark, on the other hand, makes it easy for employers to fire people — employment is basically at-will, like in the US. But Denmark balances this with very generous unemployment benefits and retraining programs, much more so than Sweden. There are lots of things like that. E. Harding June 24, 2015 at 2:20 pm Ah. So that's why the Swedish natural rate of unemployment is so high. Jon Gunnarsson June 25, 2015 at 4:24 pm Your inclusion of the minimum wage in that list is somewhat misleading. Until recently, Germany did not have a minimum wage, and still did well economically, so that can't be an explanation for Germany's relatively strong economic performance. The minimum wage law is only in effect since the beginning of this year, so it's still too early to tell what the consequences are. Actually, about 90% of Germany's workers have had minimum wages since the 1970s. That's because Germany has really strong unions, and the unions in each sector negotiated minimum wages for that sector. So construction workers had one minimum wage, supermarket cashiers had another, and so forth. There were a few sectors that didn't get covered — most notably food preparation workers in restaurants — but the great majority of workers had minimum wages, and fairly high ones at that. What Germany didn't have until quite recently was a national minimum wage set by law instead of bargaining. That was seen as almost unnecessary, since almost all industries already had wages set by bargaining with the unions. So it took them years to get around to it. For the sake of most of the economic arguments around minimum wages, industry/union-negotiated minimums are the same as "no minimum." Getting a contract where no new worker in your same job can come in and undercut you does have some negative externalities, but they're substantially different (and according to anti-MW arguments, less harmful) ones from passing a law where no new worker even in a completely different field can undercut you. You're talking about *Tarifverträge*, which are quite different from a minimumg wage. Aside from being negotiated between unions and employers, rather than being set by the government, *Tarifverträge* do not establish a uniform minimum wage in a particular sector (to which not even all the employers in that sector are bound), since the wages in question vary by factors such as the type of job and seniority. I don't know where you're getting this 90% figure from. The figures that I found with a little bit of googling are that a little over half of all employees are paid according to a sector-wide *Tarifvertrag*. 
See for example https://www.destatis.de/Europa/DE/Thema/BevoelkerungSoziales/Arbeitsmarkt/Tarifbindung.html 90% isn't how many employees are paid a minimum wage, but rather how many are working in sectors where a minimum wage has been established — since having a minimum wage in an industry is going to affect everyone in the industry, for good or for ill. Jon Gunnarsson June 27, 2015 at 3:27 am The ~50% figure IS the proportion of employees who are working in firms which are bound by a sector-wide "minimum wage" (a Branchentarifvertrag). And no, such a Branchentarifvertrag does not affect everyone in that particular industry since not all employers are bound by it. I was a little disappointed with your "The Cost of Satisfaction" summary. You know when you read an article about something and the article talks about the half of the results they like but ignores the others? "respondents in the highest patient satisfaction quartile (relative to the lowest patient satisfaction quartile) had lower odds of any emergency department visit (adjusted odds ratio [aOR], 0.92; 95% CI, 0.84-1.00), higher odds of any inpatient admission (aOR, 1.12; 95% CI, 1.02-1.23), 8.8% (95% CI, 1.6%-16.6%) greater total expenditures, 9.1% (95% CI, 2.3%-16.4%) greater prescription drug expenditures, and higher mortality" So patients were more likely to get routine care rather than ending up in the emergency room and were between 5% and 53% more likely to die in that time (hazard ratio, 1.26; 95% CI, 1.05-1.53). That's one hell of a wide confidence interval. Subjects were followed for between 1 and 6 years. This reminds me of an analysis which showed that death rates drop when doctors are away. http://www.newscientist.com/article/mg22530032.100-death-rate-drops-when-top-heart-surgeons-are-away.html#.VYqXJUau_Zc Because surgeries tend to be put off while the doctors are away so the death rate drops. The patients aren't getting better care but over the short term the added risk from surgeries increases mortality. Now imagine 2 groups of people, one whose doctors ignore their complaints until they end up in the emergency room and the other whose doctors run tests and refer them for surgery if they find something serious. Over the short term, just as with the surgeons being away, we'd expect the first group to have lower mortality while the second group are more likely to get needed surgery right now and get their dose of micromorts in the short term from surgeries. They adjusted for chronic diseases, but that will also mess things up if one group of patients are less likely to know they even have the long-term health conditions in question; indeed, we'd expect people with better doctors to look sicker by that score. Health care expenditure? Lowest quartile: 4542; highest quartile: 4534. Oh, but in that case they ignored the totals and picked out the one sub-item in the list where patients in the most satisfied group paid slightly more while ignoring the areas where they paid less. This is also a spot where they stopped doing stats and just compared the numbers without stats because then they would have had to adjust for the other 100 things they were measuring. They also neglected to mention that patients in the most satisfied group were more likely to rate their health as better and more likely to come out as healthier in the questionnaire. 
Bullshit alarm is going off (WEE OO WEE OO) They've pretty blatantly scoured the data for anything where the highest quartile looks even vaguely worse than the lowest quartile then only mentioning that as if it's a primary outcome. Unfortunately they don't appear to have preregistered their study design (as is best ethically when running studies involving human participants) so we can't say for certain whether they changed their primary outcome measures after peeking at the data. aguycalledjohn June 24, 2015 at 8:04 am Are there any practical takeaways from the Glial link to depression? (For non-researchers at least) Re the Sweden article. 1) It's Megan McArdle. That's a red flag right there. 2) Sweden is not by any stretch a "homogeneous" society. That's a lazy generalization that's also about two generations out of date. Sweden has been very open to immigration for about 50 years now. So, as of 2015, almost 20% of the citizens or permanent residents of Sweden are either recent immigrants or their children — some from Eastern Europe, but most from Africa, Asia, or the Middle East. Add to them about 150,000 refugees (about half of them from Syria) plus an indeterminate but large number of guest workers both legal and otherwise, and the number rises well above 20%. And then, "native" Swedes include several non-Swedish ethnic groups, such as the Finnish Swedes and the Sami. So when you total it all up, about 25% of the people living in Sweden are not ethnic Swedes. That still may not be quite as diverse as the US, but it's definitely not "homogenous". 2) The main thrust of the article is that Sweden can afford its welfare state because it's freeloading off of US innovation. This is, in a word, nonsense. Sweden has been producing a steady drumbeat of world-class innovations for decades now, from the three-point seatbelt, the milk carton and ultrasound in the last century to Skype and AIS in this one. But never mind past accomplishments — let's look at objective numbers. In 2012, the United States granted 121,026 patents to persons resident in the United States. (All data here is from the World Intellectual Property Organization, http://www.wipo.int.) Sweden granted 2,434 patents to persons resident in Sweden. Now, Sweden has about 1/33 the population of the US — in 2012, around 9.4 million vs. about 312 million. So, if we calculate "patents granted per million people" the figure is about US 383, Sweden 255. The US is clearly ahead, but Sweden is not exactly slacking. But wait: what if we correct for GDP? We'd expect countries with bigger GDPs to have more patents, right? And while Sweden is a rich country, the US is even richer. Well, US nominal GDP in 2012 was about $16.3 trillion. Sweden's was about $404 billion. So if we measure patents granted per billion of GDP… pow, more than half the difference vanishes: it's now US 7.44 to Sweden 6.02. The US is still ahead — go us! — but not by remotely enough to support the "lazy Nordics mooching off our innovation" model that the article is trying to push. (Incidentally, the US is not the world's leader in either patents granted per capita or patents granted per GDP. Just sayin'.) 4) The article does a certain amount of lip-smacking over the fact that the Norwegians are supporting their welfare state on oil. Somehow it manages to ignore the fact that the Swedes have no oil, no gas, and indeed hardly any natural resources at all beyond iron ore, timber, and fish. Most of the country is marginally habitable subarctic wasteland. 
Most of the population lives on a thin, chilly strip of bleak, flat coastal plain on the wrong side of the Baltic. Yet somehow they're managing to run one of the world's most advanced economies. It's almost as if they've managed to organize their society in some strangely efficient way. Go figure. Sweden also considerably outperforms the US in terms of scientific publications per capita, http://academia.stackexchange.com/posts/18768/revisions . I don't have the link off hand, but my impression is this lead stays when you restrict to high impact factor articles. LTP June 24, 2015 at 12:56 pm Ah, but here's a big difference between the US and Sweden: Sweden is free-loading on the security provided by the US military. If Sweden had to truly pay for its own defense, such a society might be much more difficult to maintain. This is the point I always make when I see the "WE SPEND MORE THAN THE NEXT X countries COMBINED on the military!" Like- yes, I'm aware that we spend too much and have lots of boondoggles. But we're also the "Baltic Army". We're guaranteeing the borders of a lot of places right now who are incredibly nervous at even a HINT we won't be coming in like the wrath of god if Russia decides to get all Anschluss-y….again. Off the top of my head- if the US stopped protecting other places…… Singapore and South Korea would last a week, tops. Most of Eastern Europe would become it's historically realistic position of "greater Russia". Japan wouldn't last too long either- the Chinese have a big ol' grudge they've wanted to settle for 70 years. Large cartel expansion in Mexico and south America, sudden explosions in the size of terrorist groups. Palestine would cease to exist. You thought I was gonna say Israel, didn't you? Israel will be fine. They're mean, their neighbors are helpless, and their stuff isn't good enough to interest the big boys. Palestine only exists because we keep begging the Israelis not to bomb them into atoms every time they perform an act of war against civilians. The Baltic states would last a very little while. The Nordic countries would last, I think- but only after Europe gets its shit together. Finland might not make it….they've gotten a lot wussier since Simo Hayha. Ultimately? We're the only ones actually living up to the NATO treaty and spending 2% of GDP on defense. Zykrom June 25, 2015 at 10:16 pm I'm surprised you think Japan would fall, or South Korea for that matter. I'd predict both could get a reasonable nuclear deterrent rather quick. Unfortunately, North Korea has a nuclear deterrent right now, and a historical predilection for an aggressively forward defense posture. That's a recipe for instability in the interval between "right now" and "rather quick". Japan won't fall; North Korea doesn't have enough nukes to flatten it, China barely does but has bigger concerns, and neither has the naval or amphibious capability to conquer an incompletely-flattened Japan. But the Japanese kind of hate North Korea, and the North Koreans really, really hate the Japanese, so a few flattened Japanese cities would be a distinct possibility. Doug Muir June 26, 2015 at 10:02 am 1) It's an open question whether NK can miniaturize weapons to fit on their current rockets. They say they can; but then they would say that, wouldn't they. 2) Japan has quietly been "one year from a bomb" for about 40 years now. 
3) Really unclear what possible motivation NK could have for an ultimately suicidal attack on a distant country with ~5X the population and an economy over 100 times bigger. "Because they really hate them!" seems a bit of a stretch here. Note that from the narrow POV of North Korean elites, NK's foreign policy has actually been both rational and reasonably successful. They have various good reasons to act crazy; this does not mean that they are in fact crazy. Narratives that focus on their various provocations tend to miss the fact that those provocations have consistently stopped well short of actually getting them into a shooting war with anyone. I'd say you underestimate the tensions between China and Japan, and the degree to which the Chinese political establishment cares about casualties. They're pretty ideally set up for expansionism, but they don't really have anywhere to go – although I think they might cut a deal with Russia to move north and possibly west. One interesting argument I read pointed out that US hegemony prevents nuclear proliferation. If the US is seen as backing off its commitments (like maybe to a nation that signed a treaty securing its borders in exchange for its nukes, a treaty we supported pretty heavily) then lots of little, rich places are gonna go nuclear, in places we really don't want having nukes (I'm pretty sure the Gulf states could flat out buy them.) It's an open question whether NK can miniaturize weapons to fit on their current rockets. They say they can; but then they would say that, wouldn't they? It's not just the North Koreans saying that. Claims that North Korea can only build big, clunky Fat Man style atom bombs are unsupported by evidence, generally devoid of any technical rationale or understanding of nuclear weapons design, and usually come from politicians tasked with the impossible problem of dealing with a nuclear-armed North Korea and desperately trying to kick that can down the road a few more years. We're at the end of that road. North Korea almost certainly has 12-20 nuclear warheads that it can mount on medium-range missiles any time Kim wants. As far as wanting to attack Japan, a major issue for North Korea is ensuring that South Korea stands alone in any conflict. Right now, much of the basing and logistics for US forces earmarked for Korea flows through Japan, and Japan is perceived as susceptible to nuclear intimidation. "Nice island you've got there; be a shame if something happened to it. Oops, did you just lose Hiroshima again?" If, as hypothesized, the US is no longer part of the equation, that obviously changes. Most obviously, nuclear attacks by North Korea become less inevitably suicidal, because nobody else would find it safe or easy to destroy the North. If, beyond that, Japan declares itself a pacifist neutral, then Japan is probably safe but South Korea's in a bad spot. If Japan and South Korea form a defensive alliance against external threats, then again the DPRK is going to want to make it painful for Japan to remain in that alliance. And China will probably feel the same way, but want somebody Not China to do any necessary dirty work. If the first thing the new alliance does is to announce that they are going to deploy their own nuclear weapons in a year or so, that makes for a very interesting year. 1) Who's going to invade Singapore? There are only two neighbors, and neither one has any reason to bother. Malaysia actively _does not want_ Singapore — they kicked them out back in 1965! 
Indonesia already had one "confrontasi" with Singapore back in the '60s, and it was such a fiasco they've never moved in that direction since. 2) South Korea has double the population and ~18X the GDP of the North, and they spend about twice as much on their military. They have fewer men under arms (about 650,000 vs. an estimated 1,100,000) but those men are vastly better trained and equipped. It's very nice for South Korea to have US support, but it's not an existential necessity. 3) If the US disappeared tomorrow, NATO would still exist. NATO without the US would still be by far the world's strongest military alliance. And the other NATO members have a fairly strong vested interest in keeping the Russians out of the Baltic States. (Finns: not sure where you're getting the "wussy" idea from. Finland has total male conscription — every fit male, and many women, does time in the military — and devotes a higher percentage of its GDP to the military than any other country in the region. The military is very popular. Finland sends troops all over the world, from peacekeeping missions in Mali and Lebanon to active deployment in Afghanistan. No, they haven't had a war in 70 years, but there's no reason to think they've become any less badass than they were in 1940-44.) 4) The PLA Navy has very rough parity with Japan's Maritime Self-Defense Force — China has more stuff, Japan's stuff is better. However, PLAN has basically zero ability to deliver large numbers of troops over blue-water distances. China is not currently capable of invading Taiwan, never mind Japan. Sweden is a neutral country. It's not a member of NATO, nor is it allied with the US in any other way. That's nice. And notably, last time there wasn't a single worldwide hegemon, they went over like a load of wet bricks in a paper balloon. Not to mention that, again, neutrality isn't a default position where everyone stops wanting your resources. Switzerland, the other notable Euro holdout, maintains its neutrality through a heavily armed, well defended position. The Swedes maintain it because we won't let anyone take them over. I'm sorry, but back when the USSR was really expansionist, you really think the French army was what was keeping them from moving into Sweden? One may fairly note that there was never a land border between Sweden and the Soviet bloc, because Finland provided a buffer. One may also note that the Soviets tried to conquer Finland, Sweden declared itself non-belligerent (not neutral) in that war and sent more than a few volunteers and arms their way, and that the USSR settled for modest border concessions instead of outright conquest. (Also, Sweden did maintain a fairly large military deterrent during the Cold War. It's reduced it since because it has far less to fear in this international situation.) And I can't think of any way the socialists can fully accomplish their goals without mountains of skulls, therefore I get to accuse them of being a bunch of Pol Pot wannabes, every last one. Right? If a person, or a nation, is conspicuously not doing some horrific thing that would benefit them, the charitable assumption is and the default assumption ought to be that they are not horrible people. Claiming that they must secretly desire to unleash horrors but are being somehow restrained probably ought not be done without real evidence. 
Israel is very dependent on an influx of money from the US. US foreign aid is just under 1% of Israel's GDP, and comes with enough strings attached to be less useful than an equivalent sum of domestically-sourced money. It would be quite disruptive if this were to go away unexpectedly and instantly, but "very dependent" is I think overstating it. Or, to link this cross-thread, Israel is maybe as dependent on US money as Sweden is dependent on US military might. vV_Vv June 26, 2015 at 7:38 am Not really. The US may or may not defend Sweden in case of an attack (there is no open alliance). The US will certainly defend Israel in case of an attack, as historically it did multiple times, and in addition the US subsidizes Israel and supports it at the UN Security Council. Think of it, if Israel didn't depend on the US that much, why haven't they bombed the hell out of the hated Palestinians already? Whatever Happened To Anonymous June 26, 2015 at 7:45 am I think you're misunderstanding their point: Sweden is not very dependent on US' military might, Israel is not very dependent on US' money. They, of course, have strong US military backing, but the US is also one of the things keeping them in check. Think of it, if Israel didn't depend on the US that much, why haven't they bombed the hell out of the hated Palestinians already? According to the Palestinians, the Israelis dropped a Hiroshima's worth of bombs on them last year alone, with quite hellish results. So I gather you are really asking why the Israelis haven't bombed the Palestinians into (local) extinction or the like. Hmm, let's think about that a bit. Is there any possible reason why the Israelis might be averse to engaging in genocide? Possibly because western democracies in general aren't keen on genocide, or possibly because of some unique quirk in the Israeli national psyche for some unfathomable reason? Nope, nope, that can't be it. Everybody knows the Israelis are secretly genocidal maniacs, restrained only because Bibi is Obama's bitch. What John said. Also, note that Israel actually has cool-to-okay relations with some of its Arab neighbors, most notably Egypt and Jordan. Israel and Egypt quietly cooperate on a range of issues, they just signed an agreement with Jordan on exploiting the Dead Sea, and trade across the border is exploding — about a thousand trucks a month are crossing the King Hussein Bridge now. This hardly means that everything is going to be hunky-dory! But Israel is doing rather well by the current status quo, and has little interest in upsetting it. >Nope, nope, that can't be it. Everybody knows the Israelis are secretly genocidal maniacs, restrained only because Bibi is Obama's bitch. The thing is that Israel is always in a retaliation position. The idea is not that they're going to go ahead and just kill all those pesky Palestinians, but that their retribution to attacks from Hamas could be much stronger. When CJB said that Palestine would be gone, I didn't think he'd mean "Glassed, followed by the systematic eradication of Palestinians", but rather "Completely annexed, with a two-state solution as a distant memory". "Annexed" won't do it, because then they'll have to deal with all those Palestinians who actually live there. What are they going to do, let them live in new Greater Israel but without civil rights or the ability to vote? Annexation plus ethnic cleansing, now. What I assume happens is something like this: Reduced US hegemony leaves Israel on its own. 
Israel is more or less fine, probably raises taxes and increases military spending to compensate. It's known to have nuclear ballistic subs, so it's probably safe from open invasion by a major power. And the duly elected Palestinian government returns to radicalism. And as has happened many times in the past, the duly elected Palestinian government launches an act of war against the Israeli state….except now there's no reason to listen to US liberals – and they're the only people on earth willing to actually do anything about the Israeli situation. The big powers that could care (Russia, China, UK) won't. The other big powers (France, Germany, Japan, etc) are going to be busy probably worrying about the first list of people. Then you get basically a war of modern weapons against people with RPGs and AK-47s…..except this time they don't have the advantage of soft power. I'm presuming between the bombings, the other bombings, the mass attacks, the disease and disruption and eventually the just plain fleeing, Palestine would end up pretty depopulated. The United States, both the civilian and the military establishment, is the best friend the Palestinian government ever had. vV_Vv June 27, 2015 at 4:22 pm Given the Israeli policies for the last decades (e.g. this), which culminated with the election of people who say that all the Palestinians are enemies and there will never be a Palestinian state, and the general lack of effort towards a two-state solution, it seems quite clear that the prevailing political position among Israelis calls for annexing the Palestinian territories. Since the Israelis aren't obviously willing to give Israeli citizenship to the Palestinians, the Palestinian territories will have to be depopulated before the annexation by some combination of displacement and genocide (e.g. something like what happened to the Armenians in Turkey). The question is why they didn't do this already. The only reasonable explanation I can think of is that the US is holding the leash. Ty Myrick June 24, 2015 at 2:04 pm That sounds like an excellent reason for the US to reallocate some of its military spending to social security. Sweden wasn't even dragged into WW II. It also has good submarines. …and very good fighter jets, and very good missiles, and light antitank weapons so good that the United States Army buys them. Sweden is one of the top ten arms exporters in the world, which serves to subsidize an arms industry that gives Sweden an independent defensive capability it would not otherwise be able to afford. Swedish military security is complicated, and not accurately described as "free-loading on the security provided by the US military". excess_kurtosis June 24, 2015 at 6:44 pm The US spends 3.8% of GDP on its military, Sweden spends 1.3%. US government spending is 34% of GDP, Swedish government spending is 51% of GDP. That is to say, differences in military spending explain only 10% of the difference in government spending between the US and Sweden. This is a really stupid talking point. I was getting all geared up for an argument when I realized that I'm not posting on the part of the internet where any incorrect statement forever discolors everything you say, and it's perfectly acceptable to say "That information is new to me, and while I think my underlying point is still good, I think you're probably right about this." So that information is new to me, you're probably right, and while I think my general point about US military involvement stands, you're probably right about this. 
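For anyone who wants to check that back-of-the-envelope point, here is a minimal sketch in Python using only the percentages quoted in the thread. The inputs are the commenters' own figures and are assumed rather than verified; the exact share depends on which year's numbers you use, and on these numbers it lands in the 10–15% range, which is the same qualitative conclusion being conceded above.

# Rough check: how much of the US-Sweden gap in government spending
# could the difference in military spending account for?
# All inputs are the percentages quoted in the thread (assumed, not verified).
us_military = 3.8   # US military spending, % of GDP
se_military = 1.3   # Swedish military spending, % of GDP
us_govt = 34.0      # total US government spending, % of GDP
se_govt = 51.0      # total Swedish government spending, % of GDP

military_gap = us_military - se_military                              # 2.5 points of GDP
total_gap = se_govt - us_govt                                         # 17 points of GDP
nonmilitary_gap = (se_govt - se_military) - (us_govt - us_military)   # 19.5 points of GDP

print(f"Share of total spending gap: {military_gap / total_gap:.0%}")            # ~15%
print(f"Share of non-military spending gap: {military_gap / nonmilitary_gap:.0%}")  # ~13%

Either way the military difference covers only a small slice of the gap, which is the point of the concession above.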
That's such a relief. You know how awful it is to defend terrible talking points because you're in a fight and concession is weakness? This comment makes me really happy; I think the fact that people feel this way about the SSC comments is a real credit to Scott. I am thrilled to be part of an online community where this is possible, and I will strive to uphold this standard with my own behavior. excess_kurtosis June 24, 2015 at 11:23 pm Sorry for the tone! @CJB/@Larry Kestenbaum/@Eugene Dawn: How do we have more of this and less of everyone fighting to defend weak arguments and the associated problem of being incapable of seeing their arguments are weak? Certainly, modifying one's own behavior around one's own weak arguments to the extent that it is possible, is the first step. But that runs smack into the problem of frequently not being able to see that one has a weak argument. What steps can anyone take to help others to identify/acknowledge their own weak arguments? One idea that seems reasonable is to be willing to call out arguments which are weak which support a viewpoint or worldview that you believe to be true. It's easier to accept criticism from your "own" side. That strikes me as a fundamentally hard thing to do. @ HeelBearCub For what it's worth, while composing a comment on where mass shooters get their guns, I noticed a flaw in my own argument. The Aurora and Tucson guys spent months planning, and Aurora (and Breivik) spent a long time accumulating equipment. With that kind of diligence, they could probably manage to get an illegal one. The Newtown guy might have been acting on impulse and been deterred by the lack of convenient guns in their house, if his mother's background check had discovered a son under mental observation living with her. I'm too lazy to look up more samples, but a key might be 'murderous crazy + OCD'. Certainly, being willing to examine one's own thinking and identify flaws is what we should be after. Acknowledge the flaw out loud can make for a powerful example to others. Eugene Dawn June 27, 2015 at 2:03 am @HBC Hahaha, as if I know 😛 As a pretty new commenter here, I hardly feel as if I'm qualified to talk about how to raise the standard of discourse, but what I've tried to do so far is 1. Refuse to respond to a thread if reading the thread makes me frustrated or angry, since that's likely to activate the motivated reasoning parts of my brain 2. Try and make my comments as factual as possible, without much argumentation 3. If I say something that is more opinionated, try and distance myself from it — I've tried to say things like, "my intuition is", rather than "I think", since I feel that it commits me less to the position I'm advancing. Another one that I try and use in real life is not to respond to flaws in arguments with "Oh yeah? Well what about ______", but rather to try and say something like, "I think a possible response to that is ________", again putting some distance between me and the counter-argument I'm making. I really find that this lets someone attack my view without me feeling like I have to go on the warpath to defend it. I've also found that this technique helps me keep discussions from turning into arguments, which again are where my motivated reasoning comes out. As with all sorts of personal virtues, I think the best strategies don't rely on you being able to muster up the strength to be virtuous once you're in a difficult spot; rather they prevent you from getting in such a spot in the first place. 
This implies a certain ability to ignore what you feel are extremely wrong arguments, though, which has its own drawbacks. But yeah, it's pretty hard, in general. I think part of it is also to forgive yourself and others for failing to uphold perfect standards of discussion at all times–pretty much anyone in the SSC comments is doing better than the vast majority of the rest of the internet. @Eugene Dawn: All very salient points for modifying one's own behavior. My intuition is that #3 is very important. It seems that even just examining one's own comment and putting the think/believe/intuit modifier where this is actually what is happening is important. We should strive to differentiate between what we "know" to be true based on citable evidence and what we are less sure of. I am using the word know here in a fairly weak sense. I'm not asking for the accumulated body of evidence for classical and quantum physics. But I feel that it is all to often easy to unconsciously remove admissions of the limits of our own knowledge in effort to make our arguments appear stronger than they are. You make a good point that this, also unconsciously, pre-commits us to defending weak arguments I'm very tolerant on the tones of others. I tend to write in a fairly obscene, sarcastic, CAPITALS AND EMPHASIS heavy manner, which I hope comes off as jocular and fun to read, but can come off abrasively. Dinna fash yerself lad/lassie, as the Scots. While I've got you all here- Any shibboleths, taboos, customs of the country I should observe as a noob?
Properties Of Lc Impedance Function It is convenient to work with 3. Ac-cording to Eq. Voltage dividers which include capacitors resistors have transfer functions which with usually vary with frequency. offers a variety of instruments, fixtures, and software to measure the dielectric properties of materials. Natural base exponential functions are exponential functions with base e. An automatic pulsation-correction function and high-speed micro plunger drive system enable pulse-free solvent delivery. The Hsp90 isoforms from S. Impedance offered by LC circuit is given by $$\frac{Supply \: voltage}{Line equation} = \frac{V}{I}$$ At resonance, the line current increases while the impedance decreases. The acoustic impedance influence at the interface of. SOLUTIONS 5-8 177 It is easily shown that the parallel resonant impedance Z = L/CR, i. This is ascribed to the fact that frequency can be measured with a very high degree of accuracy. And inside these boxes are one of our favorite passive components, either an R, an L, or a C. In a typical resonant technique based onan LC-tank circuit, the capacitor or inductor couples to the material under study and act as. Inductive reactance is directly proportional to frequency, and its graph, plotted against frequency (ƒ) is a straight line. Explanation: The driving point function is the ratio of polynomials in s. This article discusses what is an LC circuit, resonance operation of a simple series and parallels LC circuit. 3 Influence of Series Resistance on TDR Measurements 169 3. Because an imperfectly-terminated transmission line causes power to be reflected back to the source, the impedance seen looking into such a line is not equal to the characteristic impedance of the line, but some function of the reflection coefficient at the far end, and the length of the line. Impedance and Admittance Formulas for RLC Combinations Here is an extensive table of impedance, admittance, magnitude, and phase angle equations (formulas) for fundamental series and parallel combinations of resistors, inductors, and capacitors. Function impedance and threshold detection. The ruby red color and vivid pink undertones of blazing red quartz are incredibly rare. Any given wire material (copper, steel, aluminum, et cetera) has resistance, and DC resistance is inversely proportional to the circular mil area. 3 Prelab Exercises 3. Topological Analysis of. The idea of 'impedance' allows for many of these things to be wrapped up into one subject so that they are easier to communicate. Impedances describe the properties of circuit elements; phasors describe sinusoidal voltages or currents. wonderful explanation of Properties of RC admittance and RL impedance. Figure 6 depicts impedance functions for a half space that is either dry or saturated and overlayed by a surface layer. These expressions neglect the effect of resistive loss in the walls of the waveguide. Impedance function cannot have multiple poles or zeros on the axis. Answer to What is LC immitance function? Explain with an example LC Immitance Function Property 1: ZLC (s) or YLC (s) is the ratio of odd to even or even to odd polynomials Consider the impedance Z(s) of passive one-port network. Associations among Gender, Coping Patterns and Functioning for Individuals Associations among Gender, Coping Patterns and Functioning for Individuals with Chronic Pain: A Systematic Review. What is Delrin? Delrin is an advanced acetal homopolymer developed by DuPont using the latest Dupont Delrin 150 NC010 resin technology. 
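A minimal numerical sketch of two facts quoted above: that a lossless LC driving-point impedance is a ratio of odd and even polynomials (hence purely reactive), and that a practical parallel tank presents a resonant impedance of roughly L/CR. The component values below are hypothetical, not taken from any table in the text:

```python
import numpy as np

# Hypothetical component values (illustrative only, not from the text above).
R, L, C = 10.0, 1e-3, 100e-9          # series loss (ohm), inductance (H), capacitance (F)
w0 = 1.0 / np.sqrt(L * C)             # resonant angular frequency, 1e5 rad/s here

def z_tank(w):
    """Impedance of a lossy tank: (R + jwL) in parallel with C."""
    z_rl = R + 1j * w * L
    z_c = 1.0 / (1j * w * C)
    return z_rl * z_c / (z_rl + z_c)

print(abs(z_tank(w0)))     # ~1005 ohm, close to the "dynamic resistance" L/(C*R)
print(L / (C * R))         # 1000.0
```

In the limit R -> 0 the real part of this impedance vanishes at every frequency, which is the circuit-level counterpart of the odd-over-even polynomial property.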
The only way you can stop lc-lyric-parser is using pause function. To calculate impedance, you must know the value of all resistors and the impedance of all inductors and capacitors, which offer varying amounts of opposition to the current depending on how the current is. C Voltage, current step-up or step-down. Any language that contains first-class functions can be written in a functional style. Wood Properties offers luxury real estate services in Southwest Florida. read/write properties vlc. INTRODUCTION Some graphic and numerical methods of impedance matchingwillbereviewedhere. 13: LC impedance vs. Matching networks S-probe pair Probe components palette: Data Display template gives Impedance and stability. Optimal Single Resistor Damping of Input Filters Robert W. The log-magnitude of a PR function has zero mean on the unit circle. Impedance really is an abstraction of things that are far more complicated (things like time constants and rise times) that electrical engineers have to constantly consider. PROPERTIES OF LOGARITHMIC FUNCTIONS EXPONENTIAL FUNCTIONS An exponential function is a function of the form f (x)=bx, where b > 0 and x is any real number. We also found that droplet volume and surface tension could be accurately estimated from impedance data. What is the output impedance of the square wave generator? What is the load impedance presented by the oscilloscope? An audio-frequency oscilloscope typically has input impedance 1 M or 10 MOhm. 【商品コード:13005720362】 【送料無料】サンワサプライ 光ファイバ自作工具セット(lc・sc両用) hkb-tlset2. A much more elegant way of recovering the circuit properties of an RLC circuit is through the use of nondimensionalization. If you are ready to buy or sell a property in this area, contact us today!. Blending cosmopolitan style with timeless elegance, The Whitley, a Marriott Luxury Collection Hotel, is ideally situated in the vibrant community of Buckhead, one of Atlanta's most prestigious and trendsetting neighborhoods. An active termination for a transmission line comprising a reference impedance, a terminating impedance and a control circuit. The transfer function of the filter is the ratio of the impedance from point Y to 0 V divided by the impedance from X to 0 V through the filter. ) The specific acoustic impedance is a ratio of acoustic pressure to specific flow, which is the same as flow per unit area, or acoustic flow velocity, u. It is basically the concept which is used in the field of the network analysis and design and also in the field of the filter design. Sometimes the imaginary part of an impedance is called reactance. The voltage across the capacitor lags the current by $90^\circ$. IDENTIFICATION OF SOIL PROPERTIES FROM FOUNDATION IMPEDANCE FUNCTIONS. Type ?curve to see other available options for the curve() function. Access is free through a registered publishing personal account until 02. Any given wire material (copper, steel, aluminum, et cetera) has resistance, and DC resistance is inversely proportional to the circular mil area. Topological Analysis of. 2 IMPEDANCE AND TRANSFER FUNCTIONS 2 Impedance and Transfer Functions The concepts of impedance and transfer functions are are essential for a clear understanding of analog electronic circuits. The term complex impedance may be used interchangeably. The paper deals with analysis and design of multi-element circuit. Shame there''s no field for power supply, if that makes any difference? 110V I was looking for. The number of parameters to the current subroutine (shell function or script executed with. 
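The voltage-divider view of a filter transfer function mentioned above (impedance from the output node to 0 V divided by the total impedance from the input node) is easy to check numerically. A sketch assuming an RC low-pass divider; the 1 kOhm / 159 nF values and the test frequencies are made up for illustration:

```python
import numpy as np

# Hypothetical RC low-pass divider: series R from X to Y, shunt C from Y to 0 V.
R, C = 1000.0, 159e-9                      # gives a corner near 1 kHz
f = np.array([100.0, 1000.0, 10000.0])     # illustrative test frequencies (Hz)
w = 2 * np.pi * f

z_r = np.full_like(w, R, dtype=complex)    # impedance from X to Y
z_c = 1.0 / (1j * w * C)                   # impedance from Y to 0 V
h = z_c / (z_r + z_c)                      # divider ratio = transfer function

print(np.abs(h))                  # ~[0.995, 0.707, 0.100]
print(np.degrees(np.angle(h)))    # ~[-5.7, -45.0, -84.3] degrees: the output lags the input
```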
The resonant frequency of the sensor is monitored by measuring the input impedance of the antenna, and correlated to the desired quantities. Inductive reactance is directly proportional to frequency, and its graph, plotted against frequency (ƒ) is a straight line. Consider an LC circuit in which capacitor and inductor both are connected in series across a voltage supply. Capacitive reactance is inversely proportional to frequency, and its graph, plotted against ƒ is a curve. At frequencies above the SRF, impedance decreases with increasing frequency. Below cut-off the impedance is imaginary (reactive) and the wave is evanescent. Publications and Manuscripts: [1]!. 1pu on its ratings base. Polyester's strength and durability make it a good choice for outerwear garments. Filter types and characteristics A filter is a circuit whose transfer function, that is the ratio of its output to its input, depends upon frequency. The 3dB point of the high-pass fllter was measured to. This introductory textbook on Network Analysis and Synthesis provides a comprehensive coverage of the important topics in electrical circuit analysis. What is the output impedance of the square wave generator? What is the load impedance presented by the oscilloscope? An audio-frequency oscilloscope typically has input impedance 1 M or 10 MOhm. In the example above, lc is the variable used to consume the information that the context object captured and log. Some Mapping Properties of RC and RL Driving-Point Impedance Functions Abstract: We show that an RC or RL driving-point impedance function Z(s). Resistance is the special case of impedance when φ = 0, reactance the special case when φ = ± 90°. At resonance, the voltage and current are in phase. Electronic communication uses electronic circuits to transmit, process, and receive information between two or more locations. Innovation C/LC/Thin Series C/LC/Thin Series www. (a) the current as a function of time, (b) the average power delivered to the circuit, (c) the current as a function of time after only switch 1 is opened. This page contains the basic equations for an L-C filter. CognitoIdentityPoolID) prints that information, in this case, the CognitoIdentityPoolID. The resonant frequency of the sensor is monitored by measuring the input impedance of the antenna, and correlated to the desired quantities. Could be due to the the. This is usually a rational function in 's'. We study these circuits in detail and in particular we shall focus on the desirable. Kemper1 and N. In order for the impedance to be positive real the coefficients must be real and positive. • all capacitors with parallel LC circuits, (open at resonance) and • all inductors with series LC circuits (short at resonance) 1 n n C L ω ω = ω n is the centre frequency of the filter. Natural base exponential functions are exponential functions with base e. Two Port Network, Driving Point And Transfer Functions Transfer impedance, Z(s) = Transfer admittance. RLC Circuits - SciLab Examples rlcExamples. The resistance of a capacitor in a DC circuit is regarded as an open connection (infinite resistance), while the resistance of an inductor in a DC circuit is regarded as a short connection (zero resistance). Basic PCB material electrical and thermal properties for design Introduction: In order to design PCBs intelligently it becomes important to understand, among other things, the electrical properties of the board material. 
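For a series LC branch across a supply, as described above, the inductive and capacitive reactances pull in opposite directions and cancel at resonance. A small sketch, again with made-up values:

```python
import numpy as np

# Hypothetical series LC branch (losses ignored).
L, C = 10e-6, 1e-9                           # 10 uH, 1 nF
f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))      # ~1.59 MHz

for f in (0.5 * f0, f0, 2.0 * f0):
    w = 2 * np.pi * f
    x = w * L - 1.0 / (w * C)                # net series reactance
    print(f"{f/1e6:5.2f} MHz   X = {x:7.1f} ohm")
# Capacitive (negative) below f0, essentially zero at f0, inductive (positive) above f0.
```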
Module-VI DRIVING-POINT SYNTHESIS WITH LC ELEMENTS: Elementary Synthesis Operations, LC Network Synthesis, RC and RL networks. Inductive reactance is directly proportional to frequency, and its graph, plotted against frequency (ƒ) is a straight line. Bio-electrical impedance (BI) analysis is a simple body composition method ideal for children. Quantitatively, the impedance of a two-terminal circuit element is the ratio of the complex representation of a sinusoidal voltage between its terminals to the complex representation of the current flowing through it. KAPTON ® Polyimide Film (Type H) is a polyimide film which possesses a unique combination of properties among polymeric film materials. Y1 - 1965/1/1. Fixtures to hold the material. Series Resonance. Linear Function. Wood Properties offers luxury real estate services in Southwest Florida. In capacitors, impedance is used to control the flow of electricity in a circuit board. The probe is characterised by a fringing capacitance C and conductance G which are a function of its physical dimension and can be measured with the impedance analyser. When we study the beam dynamics in the time domain, as usually done for linear accel-erators, it is convenient to make use of the wake functions or potentials. Reflected power is commonly expressed as return loss (R L. To learn some things about the Fourier Transform that will hold in general, consider the square pulses defined for T=10, and T=1. The default linetypes for a particular terminal can be previewed by issuing the test command after setting the terminal type. density and P-wave impedance versus the gamma-ray values for a well in Colombia. The following pulses are related to mul-tiple reflections at both ends of the coaxial cable. The primary function of the middle ear is to offset the decrease in acoustic energy that would occur if the low impedance ear canal air directly contacted the high-impedance cochlear fluid. 5 pF/m Operating voltage RMS value 100 V Mechanical data Number of electrical cores 2. Using an equivalent circuit model of the sensor that accounts for the properties of the. Furthermore, the equivalent circuit of each part Furthermore, the equivalent circuit of each part of the coupled lines includes L e and C e. You can define your own function in MATLAB. Thus, the design is reduced to the synthesis of active filter subcircuits that simulate the normalized immitance functions of RLC circuits (the transmittances in the graph) by current transfer functions. Impedance can also be dissimilar from resistance when a DC circuit changes flow in one way or other- similar to the opening and closing of an electrical switch, as is observed in the computers when they open and close switches to represent ones and zeros (binary language). The following plot shows a sample of the baseband-equivalent RF signal generated by this LC Bandpass Pi block. The transformer has a per unit reactance of 0. Melde, Senior Member, IEEE Abstract—This paper presents the design of a broadband RF impedance tuner that is part of a dynamically reconfigurable automatic match control (AMC) circuit that can be used for a. The properties of the transfer function of LC circuit are : ZLC(s) or YLC(s) is the ratio of odd to even or even to odd polynomials, The poles and zeros are simple and lie on the jw axis, The poles and zeros interlace on jw axis, The. An audio crossover circuit consisting of three LC circuits, each tuned to a different natural frequency is shown to the right. 
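The driving-point synthesis topics listed above (Foster and Cauer forms) start from the standard form of a lossless LC driving-point impedance: a ratio of an even and an odd polynomial whose simple poles and zeros interlace on the jw axis, with numerator and denominator degrees differing by exactly one. As a sketch, a fifth-degree example and its Foster partial-fraction expansion (the a's, b's, k's and omegas are generic symbols, not values from the text):

$$
Z(s)=\frac{a_4 s^4+a_2 s^2+a_0}{b_5 s^5+b_3 s^3+b_1 s}
=\frac{k_0}{s}+\frac{2k_1 s}{s^2+\omega_1^2}+\frac{2k_2 s}{s^2+\omega_2^2},
\qquad k_i\ge 0 .
$$

Each term can then be realized directly: $k_0/s$ as a series capacitor, and each $2k_i s/(s^2+\omega_i^2)$ as a parallel LC section.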
You've just watched JoVE's introduction to the time dependent behavior of circuits using resistors, capacitors and inductors. Impedance matching is one of the most important aspects of RF circuit design. Skin barrier perturbation due to exposure to different chemicals can be quantitatively expressed by measuring changes in skin impedance. 2 kOhm resistor. At frequencies above the SRF, impedance decreases with increasing frequency. Hence, a constant current=V/(Rs+Z0) is continuously injected into the transmission line entry point all during this time (T d=transmission line delay, n is the number of LC segments,. Also note that R DC represents only the DC resistance of the line. Using an equivalent circuit model of the sensor that accounts for the properties of the. 4 2 4 2 0 5 3 5 3 1 ( ) a s a s a Z s b s b s b s The highest powers of the numerator and the denominator polynomials can differ by, at most, unity. 53 where the orientation angle of LCs is defined as θ =30° and φ=150°. Without the capacitors controlling and regulating electrical flow, your electronics that use alternating currents will either fry or go berserk. 5, response is equivalent to 2 pole RC filter. When a sound wave is transferred from a low-impedance medium (eg, air) to one of high impedance (eg, water), a considerable amount of its energy is reflec. Y1 - 1965/1/1. The transfer function for the compensator is rigorously developed. (6), the measured signal corresponds to twice the incident signal, i. Mapping properties of RC and RL driving point impedance functions in right half of. LC Impedance matching network designer Enter the input and output impedances to be matched and the centre frequency. Wire properties. Characteristics of Thermal Interface Materials - 4 - Apparent Thermal Conductivity It is a common practice to calculate an "apparent" or "effective" thermal conductivity from a single measurement of thermal impedance (Z) at one thickness (t) of a particular interface material. The resonance of a series RLC circuit occurs when the inductive and capacitive reactances are equal in magnitude but cancel each other because they are 180 degrees apart in phase. The more complex a passive crossover, the more energy is required from the amplifier for it to function. Further development of statistical models and algorithms (Caulfield and Vim, 1983) establishes relationships between the acoustic. Recognized standards express the dissipation factor at specific frequencies typically 120Hz for Aluminum. Measured data was then taken with respect to the variables Vo, I 1 , I 2 , I 3 for both frequencies. Terminations are used within the skin-effect region to abate resonance on long transmission lines in precisely the same manner as used in LC region. Let's continue the exploration of the frequency response of RLC circuits by investigating the series RLC circuit shown on Figure 1. If you are ready to buy or sell a property in this area, contact us today!. Two Port Network, Driving Point And Transfer Functions Transfer impedance, Z(s) = Transfer admittance. In this method, a very precise impedance bridge is used as shown in Figure 9. Using the de-embedding function in the VNA, the influence of the sample holder on actual material measurement can be cancelled out. dielectric properties. the input impedance, current and output voltage of the series RLC resonant tank circuit. A computer aided design method was developed for the purpose of designing a multi-section lumped-parameter impedance matching network. 
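For sinusoidal drive, the time-dependent behaviour of R, L and C referred to above collapses into a single complex impedance; for the series RLC case, $Z(\omega) = R + j(\omega L - 1/\omega C)$. A short sketch with illustrative values (the 50 ohm, 100 uH and 250 pF figures are assumptions, not taken from the text):

```python
import numpy as np

# Hypothetical series RLC values.
R, L, C = 50.0, 100e-6, 250e-12
f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))      # ~1.0 MHz

def z_series(f):
    w = 2 * np.pi * f
    return R + 1j * (w * L - 1.0 / (w * C))

for f in (0.9 * f0, f0, 1.1 * f0):
    z = z_series(f)
    print(f"{f/1e6:5.3f} MHz   |Z| = {abs(z):6.1f} ohm   phase = {np.degrees(np.angle(z)):6.1f} deg")
# |Z| dips to R at resonance (phase ~0); capacitive below f0, inductive above.
```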
At a frequency of 4 MHz a parallel wire transmission line has the following parameters: R = 0. From the data in Table 2, it is clear that the output voltage became its maximum value at the resonance frequency. The comparison of such function with the faradaic impedance of the equivalent circuit used in the fitting procedure allows to obtain kinetic information for each elementary process as well as the superficial concentration of each reaction intermediate. (Note that f (x)=x2 is NOT an exponential function. For example, taking the voltage over the inductor results in a high-pass filter, while taking the voltage over the resistor makes a band-pass filter. At frequencies above the SRF, impedance decreases with increasing frequency. Access is free through a registered publishing personal account until 02. The voltage in the resistor is precisely in phase with the current. It is a crystalline plastic which uses a breakthrough in stabilization technology by offering an excellent balance of desirable properties that bridge the gap between metals and plastics. More specifically, the relay operates depending upon the impedance between the point of fault and the point where relay is installed. However, its utility in sick or malnourished children is complicated by variability in hydration. Electronic communication uses electronic circuits to transmit, process, and receive information between two or more locations. 1/26/2005 The Reflection Coefficient Transformation. Example of a 2nd order LC filter: - Q of the circuit affects response. Experiments. , it has minimal resistance, current just flows through it like it would flow in a piece of straight. In order to bring a Lo-Z or Hi-Z source up to Line Level, we must use a Preamp. For the 741 the input resistance measured to one input with the other grounded is about 2 Megohms. The ruby red color and vivid pink undertones of blazing red quartz are incredibly rare. The only way you can stop lc-lyric-parser is using pause function. The bandwidth is obtained as a function of coupling and switching currents, but another calculation is required to get the unloaded Q (Q u) to calculate the insertion loss. When introduce complex numbers, the solution to circuits like the series RLC circuit become only slightly more complicated than solving Ohm's law. Like a pure series LC circuit, the RLC circuit can resonate at a resonant frequency and the resistor increases the decay of the oscillations at this frequency. Prior to 1952, much of the work on voltage transfer functions consisted of the realization of transfer impedance or admittance terminated at one or both ends with resistors. Z = Q/w"C= 100/(21T X 106 X 200 X 10-12) n = 79. Experiment 14 LUMPED-PARAMETER DELAY LINE Introduction 1 Dispersion Relation 1 Characteristic Impedance 3 Cutoff Frequency 5 Propagation on the Line and Reflections at Terminations 6 Steady-State Response and Resonances 9 Prelab Problems 10 Procedure 11 Data Analysis 13 Appendix A: Effects of Inductor Self-Resonance 14. Can't find what you're looking for? Try: Classic. State and prove the properties of Fourier Transform. Reorientation tendencies of liquid crystal (LC) molecules are investigated for two different fundamental orientation configurations known as homogeneous (HG) and homeotropic (HT). First calculate the required resistor R such that the total resistor corresponds to the one found in the pre-lab for each case. When performing impedance spectroscopy. Example of a 2nd order LC filter: - Q of the circuit affects response. 
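The parameter list for the 4 MHz parallel-wire line above is truncated, so the sketch below uses made-up per-unit-length constants purely to show how the characteristic impedance and the propagation constant follow from R, L, G and C:

```python
import numpy as np

# Made-up per-unit-length constants (the textbook values above are truncated):
# R (ohm/m), L (H/m), G (S/m), C (F/m).
R, L, G, C = 0.1, 1.0e-6, 1.0e-8, 10e-12
f = 4e6
w = 2 * np.pi * f

z0 = np.sqrt((R + 1j * w * L) / (G + 1j * w * C))     # characteristic impedance
gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))  # propagation constant alpha + j*beta
print(z0)      # close to sqrt(L/C) ~ 316 ohm, with a small reactive part from the losses
print(gamma)   # attenuation (Np/m) and phase constant (rad/m) at 4 MHz
```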
An inductive impedance will have a current waveform that lags behind the voltage waveform. of important functions in a system. tissue modeling and electrical-impedance imaging techniques are presented. Series-Parallel RC Circuits • An approach to analyzing circuits with combinations of both series and parallel R and C elements is to: – Calculate the magnitudes of capacitive reactances (XC) – Find the impedance of the series portion and the impedance of the parallel portion and combine them to get the total impedance Power in RC. The first Impedance Matching concept in RF domain was related to Antenna Matching. Hey guys! In this video I told about properties of LC Immitance function. The dnorm() function has other options that allow you to choose normal distributions with another mean and standard deviation (again type ?dnorm to see the usage). Page 1 Migrating 419xA / 439x to E5061B LF-RF Network Analyzer with new Impedance Analysis function Agilent Technologies February 2011 4194A 4192A 4195A. As the capacitor is probably the lowest impedance device in the circuits electrically connected to the capacitor, the LCR meter should be able to distinguish capacitors with unusually high ESR. Agilent measure-ment instruments, such as network analyzers, LCR meters, and impedance analyzers range in frequency up to 325 GHz. Looking ahead to Calculus: — lim 1 + — X — 00 A limit is a y value. The first calculator is metric, whereas the second is inc. Using the de-embedding function in the VNA, the influence of the sample holder on actual material measurement can be cancelled out. Alternatively, a transformer can be used to do the same thing with current. 5 Possibility of Severe Resonance within the LC Region 176 3. No, it's not resonance (unless you actually build a circuit with a resonance frequency. The filter is comprised of the inductor (L) and capacitor (C). This LC Meter allows to measure incredibly small inductances making it perfect tool for making all types of RF coils and inductors. The results are applicable to any of the other two-port parameters and the conclusions parallel many of the powerful theorems concerning lumped. 4 Mutual Inductance of a Coil Wrapped Around a Solenoid A long solenoid with length l and a cross-sectional area A consists of N 1 turns of wire. y is the exponent. Surface Impedance The normal incidence surface impedance is a complex coefficient given by the acoustical pressure to velocity ratio at the surface of the tested sample when it is excited by a normal incidence acoustical wave. When a sound wave is transferred from a low-impedance medium (eg, air) to one of high impedance (eg, water), a considerable amount of its energy is reflec. These circuits are used extensively in electronics, for example in radios and sound-producing devices, but they can also be. T1 - Some Mapping Properties of RC and RL Driving-Point Impedance Functions. This is because the electrostatic capacitance impedance accounts for much of the total, so the effect of ESL and ESR can be ignored. Kirchoff's voltage law. When a high impedance device, such as an oscilloscope is used to measure the output of the function generator, the waveform appears to be twice the voltage set on the display of the oscilloscope. For odd-order Chebyshev filters, the output impedance is slightly less than the input impedance. The classic series and parallel LC resonant circuit have been used in numerous applications such as oscillators and filters. The Bode diagram is a log-log plot of the. 
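The phase relationships stated above (current lagging the voltage in an inductive impedance; resistance as the phi = 0 case and reactance as the plus or minus 90 degree case) drop straight out of the element impedances. A tiny sketch at an assumed 1 kHz test frequency:

```python
import numpy as np

# Hypothetical single elements evaluated at 1 kHz.
f = 1e3
w = 2 * np.pi * f
R, L, C = 100.0, 10e-3, 1e-6

for name, z in (("R", R + 0j), ("L", 1j * w * L), ("C", 1.0 / (1j * w * C))):
    print(name, round(abs(z), 2), "ohm,", round(np.degrees(np.angle(z)), 1), "deg")
# R:  100.0 ohm,   0.0 deg  (current in phase with the voltage)
# L:   62.83 ohm, +90.0 deg (current lags the voltage by 90 degrees)
# C:  159.15 ohm, -90.0 deg (current leads; equivalently the capacitor voltage lags the current)
```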
You can predict the phase and amplitude response of a skin-effected transmission line beginning with the definition of the propagation function [3. This example shows how to design broadband matching networks for a low noise amplifier (LNA). EQUATION 5:. Frequency for RLC parallel Circuit. Z is the end impedance of a 1/4-wavelength of wire which is resonant at the higher of the two basic operating frequencies. Because an imperfectly-terminated transmission line causes power to be reflected back to the source, the impedance seen looking into such a line is not equal to the characteristic impedance of the line, but some function of the reflection coefficient at the far end, and the length of the line. Also, I only measure amplitudes and do not attempt to determine the phase shift. As a parallel resonance circuit only functions on resonant frequency, this type of circuit is also known as an Rejecter Circuit because at resonance, the impedance of the circuit is at its maximum thereby suppressing or rejecting the current whose frequency is equal to its resonant frequency. ! Impedance £ 0. Both the methods will produce the same results. The measured s-parameters are then post processed to determine the complex dielectric properties using a program. 5 Ω/km Capacity per length / at 1 kHz 28. Wyndrum 14 shows that the repeated use of Richard's theorem to extract unit RC sections from a desired impedance function results in the realization of the desired transfer or driving point function by a cascade of a finite number of RC sections. Ceramic Electronic Components 2 Figure 2. There are also small differences in their cross section, which is why the resonant frequency shifts slightly. This new solution is based on the properties of the current conveyor, which are here used as transconductor. Agilent measure-ment instruments, such as network analyzers, LCR meters, and impedance analyzers range in frequency up to 325 GHz. These are listed in the following table: Name of Variable Description Symbol Center Frequency This is the frequency at which. Zero (0) on the frequency axis corresponds to the center frequency specified in the Input Port block. s-Domain Circuit Analysis Operate directly in the s-domain with capacitors, inductors and resistors Key feature – linearity – is preserved Ccts described by ODEs and their ICs Order equals number of C plus number of L Element-by-element and source transformation Nodal or mesh analysis for s-domain cct variables Solution via Inverse Laplace. 0 kHz, noting that these frequencies and the values for L and C are the same as in Example 1 and Example 2 from Reactance, Inductive, and Capacitive. Reflection Coefficient: Consider a line of characteristic impedance Zo and length ℓ terminated with a load impedance of ZL, as shown in Fig. To this end the MATLAB toolbox fdident is used [12]. Thus, if the user determines the self-impedance (frequency) and knows the current (frequency) of the PDN, then the voltage (frequency) can be determined. impedance of capacitors and inductors. They are of course functions of frequency, which explains the use of the prefix 'pseudo'. Links Band Pass Filters. 01 μF poly capacitors. - When Q=0. Using an equivalent circuit model of the sensor that accounts for the properties of the. Example 1: Must calculate the resistance of a copper cable that has a cross-sectional area of 2. 3, and the potential, Eq. 2 kOhm resistor. 
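Impedance matching comes up repeatedly above; the simplest building block is the lowpass L-match between two resistive levels. A sketch for an assumed 50 ohm to 200 ohm match at 10 MHz (all values hypothetical):

```python
import numpy as np

# Hypothetical goal: match a 50 ohm source to a 200 ohm load at 10 MHz with a
# lowpass L-network (series L on the low side, shunt C across the high side).
r_low, r_high, f = 50.0, 200.0, 10e6
w = 2 * np.pi * f

q = np.sqrt(r_high / r_low - 1.0)     # network Q, ~1.73
L = q * r_low / w                     # series inductor, ~1.38 uH
C = q / (r_high * w)                  # shunt capacitor, ~138 pF

# Check: the impedance seen from the 50 ohm side should come out to ~50 + j0 ohm.
z_shunt = 1.0 / (1.0 / r_high + 1j * w * C)
z_in = 1j * w * L + z_shunt
print(L, C, z_in)
```

The check at the end confirms that the shunt C and series L rotate the 200 ohm load down to approximately 50 + j0 ohm at the design frequency.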
The Unexposed Secret of gamesInternet online casino came about during the early nineties, and have absolutely ended up rising significantly possibly since. SOLUTIONS 5-8 177 It is easily shown that the parallel resonant impedance Z = L/CR, i. Main circuits of proposed topology consist of series LC resonant branches and of parallel LC sinusoidal output filters. Both impedances are made of an active and a passive resistor in series. Mapping properties of RC and RL driving point impedance functions in right half of. Tolerance is the property name and 1% is the value of the property. d=2n√(LC), the impedance seen by the voltage source at x=0 is Z0 itself, irrespective of the type of termination. The tunability of adaptive impedance matching modules is controlled by the following: the L network [19,20] consisting of a series LC and a parallel LC network; the capacitance array [21{25] conflgured by parallel switched capacitors; and the tuning transformer [26] based on the nonlinear properties of the ferromagnetic core material. Instantaneous Impedance of a Transmission Line I = vC L V Z = V I = V vC LV = 1 vC L Z 0= 1 vC L Features of the impedance: • looks like a resistor • dependant on intrinsic properties only • is an intrinsic property • independent of length • defined as the "characteristic impedance" = Z 0 • also called the "surge impedance" or. Properties of RC Network Function, Foster Form of RC Networks, Foster From of RL Networks, The Cauer Form of RC and RL Networks. Experiment 2 Impedance and frequency response The first experiment has introduced you to some basic concepts of analog circuit analysis and amplifier design using the "ideal" operational amplifier along with a few resistors and operating at low frequencies. inductance of the toroid in this limit has the same form as that of a solenoid. Impedance Matching Networks Michael F. The many properties and uses of the S-parameters in applications are discussed in [1135–1174]. This novel LC resonant tank amplifies the magnetic coil current by 2X while generating a high-frequency magnetic field. This discussion on The network functionrepresents ana)RC impedanceb)RL impedancec)LC impedanced)None of theseCorrect answer is option 'D'. The gain of the circuit is: and the following graph shows the phase as a function of frequency: A bandpass filter has five characteristic parameters. Impedance and Admittance Formulas for RLC Combinations Here is an extensive table of impedance, admittance, magnitude, and phase angle equations (formulas) for fundamental series and parallel combinations of resistors, inductors, and capacitors. Subsequent sections discuss the implications of the skin effect, which increases the total line resistance at high frequencies, and dielectric loss, which can further. Overload protection; Short-circuit protection; No-load protection; No output current overshoot at mains on/off; Burst protection voltage 1 kV. Everything About Cytokines: Their Function, Structure, and Properties. For Butterworth and odd-order Chebyshev filters, this is also the filter output (termination) impedance. Abstract—A general model for injection-locked LC oscillators (LC-ILOs) is presented that is valid for any tank quality factor and injection strength. The driving-point functions have been presented in the previous page. Figure 6: An equivalent circuit for the high-impedance surface. And also demonstrate the interplay between the time and frequency domain. 
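The "instantaneous impedance" picture of a transmission line used above, a wavefront advancing at speed v and charging the line's capacitance per unit length $C_L$ to the applied voltage $V$, leads directly to the length-independent characteristic impedance in the feature list. A compact restatement, with $L_L$ the per-unit-length inductance:

$$
I = v\,C_L\,V,\qquad Z=\frac{V}{I}=\frac{1}{v\,C_L},\qquad
v=\frac{1}{\sqrt{L_L C_L}}\;\Longrightarrow\; Z_0=\sqrt{\frac{L_L}{C_L}} .
$$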
It is the Dissipation factor is also known as the tangent of the loss angle and is commonly expressed in percent. Fixtures to hold the material. Therefore, the impedance must be at least continuous, and then a jump in its derivative will give rise to a delta function in the potential. The relationship between acoustic impedance and specific soil properties has been empirically derived from world averages of measured impedance versus sediment characteristics (Hamilton and Bachman 1970, 1972, 1982). This is illustrated in the following example. The dc output voltage is fixed mainly by the dc resistance of the choke. Rp - parallel resistance determined by the inductor and circuit radiation losses, inductor core losses, amplifier loading, etc. The impedance is found by combining the resistance, the capacitive reactance, and the inductive reactance. Z = Q/w"C= 100/(21T X 106 X 200 X 10-12) n = 79. • all capacitors with parallel LC circuits, (open at resonance) and • all inductors with series LC circuits (short at resonance) 1 n n C L ω ω = ω n is the centre frequency of the filter. T & p representation. [8] 2016/04/26 16:24 Male / 20 years old level / A teacher / A researcher / Very /. Measured data was then taken with respect to the variables Vo, I 1 , I 2 , I 3 for both frequencies. An Introduction to. The characteristics parameters of the two port network are given below: Image Impedance. As seen above, the acoustic impedance Z 0 is the ratio between the sound pressure and the instantaneous particle velocity: Z 0 = ρ 0 × c = p / v. Overload protection; Short-circuit protection; No-load protection; No output current overshoot at mains on/off; Burst protection voltage 1 kV. An element in a DC circuit can be described using only its resistance. In all cases, unless you specify a matching network (see below), this is equal to the filter input (source) impedance. It's given by the expression : where R 20 is the resistance at 20C, T the ambient temperature and k T the temperature coefficient. biasing to node 'A' and hence M 1 and M. The losses in a simple LC circuit can be separated in two physical components: 1. RLC Frequency Response 1. POLE Function Call. of Kansas Dept. Rf Impedance Matching Calculator. , it has minimal resistance, current just flows through it like it would flow in a piece of straight. Mapping properties of RC and RL driving point impedance functions in right half of. A tank circuit is a parallel combination of L and C that is used in filter networks to either select or reject AC frequencies. The driving-point functions have been presented in the previous page. NEW TRANSISTOR-BASED MATCHING CIRCUIT The new approach consists in using second generation. It can be calculated to be 1. Op-amp Input Impedance One of the practical op-amp limitations is that the input impedance finite, though very high compared to discrete transistor amplifiers. An active termination for a transmission line comprising a reference impedance, a terminating impedance and a control circuit. impedance of capacitors and inductors. Of interest is the establishment of the effects the soil saturation have on the dry soil impedances. offers a variety of instruments, fixtures, and software to measure the dielectric properties of materials. The acoustic impedance influence at the interface of. 
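The dissipation factor defined above ties a capacitor's ESR to its reactance at the test frequency: tan(delta) = ESR/X_C in the series model. A sketch with an assumed 100 uF electrolytic evaluated at the 120 Hz point mentioned above:

```python
import numpy as np

# Hypothetical part: 100 uF aluminium electrolytic with DF = 0.08 (8 %) at 120 Hz.
C, df, f = 100e-6, 0.08, 120.0
w = 2 * np.pi * f

xc = 1.0 / (w * C)     # ~13.3 ohm of capacitive reactance at 120 Hz
esr = df * xc          # tan(delta) = ESR / Xc  ->  ESR ~ 1.1 ohm
print(xc, esr)
```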
As a series resonance circuit only functions on resonant frequency, this type of circuit is also known as an Acceptor Circuit because at resonance, the impedance of the circuit is at its minimum so easily accepts the current whose frequency is equal to its resonant frequency. Properties of a single pair • Equivalent model of a single pair • Capacitance • Inductance • Skin effect • Resistance • Per unit length model of a pair • The telegraph equation - solution • Propagation constant • Characteristic impedance • Reflection coefficient, terminations. Devices must detect the following resistor ladder on the accessories. Bode Diagrams of Transfer Functions and Impedances ECEN 2260 Supplementary Notes R. Using an equivalent circuit model of the sensor that accounts for the properties of the. For a waveguide entirely filled with a homogeneous dielectric medium, similar expressions apply, but with the wave impedance of the medium replacing Z 0. The table below summarises the impedance of the different components. Impedance is measured in Ohms, indicated by the Greek sign Omega (Ω). This is because the electrostatic capacitance impedance accounts for much of the total, so the effect of ESL and ESR can be ignored. Being aware of the physical reason. 1 The Electrical Properties of a Series LCR Circuit at Resonance. The locus coeruleus, a small nucleus located in the pons, is the main source of noradrenaline in the forebrain. There are two basic types of wave, transverse and longitudinal, differentiated by the way in which the wave is propagated. This parallel double-plunger model provides superior performance at micro flow rates. Most are inefficient and inaccurate, regardless of the academic theory that describes them as being superior. β β β β + −−+ +− −+. The total impedance of a series LC circuit approaches zero as the power supply frequency approaches resonance. RLC Resonant Circuits Andrew McHutchon April 20, 2013 1 Capacitors and Inductors There is a lot of inconsistency when it comes to dealing with reactances of complex components. In general, it depends upon the frequency of the sinusoidal voltage. Also, I only measure amplitudes and do not attempt to determine the phase shift. Procedure: Using Eq. The RF filter subsystem consists of an LC Bandpass Pi block, and the Input Port and Output Port blocks. The impedance tube apparatus is commonly used to measure specific impedances, sound absorption coefficients (SACs), sound transmission losses (STLs) and acoustic properties (characteristic impedances, propagation wave numbers, effective densities, bulk moduli) of acoustic materials in normal incidence conditions. of Science & Technology, Enschede, The Netherlands Electrochemical Impedance Spectroscopy. Below resonance, the impedance of a ferrite choke is proportional to the length of the wire that is enclosed by the ferrite material. Huntbatch1 Search and Discovery Article #40869 (2012) Posted January 30, 2012 *Adapted from oral presentation at AAPG International Conference and Exhibition, Milan, Italy, October 23-26, 2011. (d) the capacitance C after switch 2 is also opened, with the current and voltage in phase, (e) the impedance of the circuit when both switches are open,. Megger makes electrical test equipment to help you install, improve efficiency and extend the life of electrical assets and cable networks at high, medium and low voltage. , the input impedance of the series resonant circuit as a function of and is given by, (5) The input impedance of Series RLC circuit is shown in Fig. 
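The reflection-coefficient and termination items above reduce, for a resistive load on a lossless line, to Gamma = (Z_L - Z_0)/(Z_L + Z_0), usually quoted as return loss. A sketch for an assumed 50 ohm line and a few illustrative loads:

```python
import numpy as np

# Hypothetical 50 ohm line terminated in a few different resistive loads.
z0 = 50.0
for zl in (50.0, 75.0, 100.0, 25.0):
    gamma = (zl - z0) / (zl + z0)
    rl_db = -20.0 * np.log10(abs(gamma)) if gamma != 0 else float("inf")
    print(f"ZL = {zl:5.1f} ohm   gamma = {gamma:+.3f}   return loss = {rl_db:5.1f} dB")
# A matched load reflects nothing; the further ZL strays from Z0, the lower the return loss.
```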
Figure 1: Example Circuit. Solartron 1255B Frequency Response Analyzer and impedance/gain-phase analyzer - measurement for electrochemistry, materials research and electronic testing.
Can a planet have a day that's always longer than night? This question has been rewritten to incorporate all clarifications. On Earth, half the planet is illuminated at any time (let's ignore eclipses). Axial tilt lets day lengths vary, but over the course of a year, every location is illuminated half the time. It's easy to make a planet where, over a year, everywhere is illuminated more than half the time. Use a binary star. But is there a naturally occurring, stable solar system that satisfies the more restrictive requirement that the planet is always more than half illuminated? In the general case, if it orbits one star of a binary, there will be a point in its orbit where the other star passes behind the one the planet orbits. If it orbits both stars, there will likewise be a point where all three are in a line. And note that, even if the planet's orbit is inclined relative to the plane containing the stars' orbits, a collinear situation is still possible... barring some resonance that prevents it. Note that I'm only talking about solar system geometry. Cloud cover means you can't see the sun all the time (though light gets through). Atmospheric refraction and diffraction extend visible light onto the 'night' side; this gets extreme with a dense atmosphere like Venus. I know this, so I'm not asking for answers involving that. All solutions must work for a vacuum world. My purpose is to explore the geometry of solar systems. The planet must satisfy both of "At any time, >50% of the surface is illuminated" and "At any location, illuminated >50% of the year." Approximate scale of the effect: Let's say that "more than half" means at least 195/360 of the surface (IE, an extra hour in an Earth day). It must also be light providing meaningful illumination, not just technically visible. Let's say that said area is illuminated to a level at least 1/40 of (should it be "the brightest illumination it receives" or "the brightest illumination Earth receives"?). Before asking this question, I thought of a Trojan planet of a binary star. I then saw a figure of a minimum mass ratio of 25 for two bodies to generate stable L4/L5 points. With stars, luminosity is roughly proportional to mass to the 3.5 power. This means one star must be at least 78000 times brighter than the other, and the planet is equidistant from them. Given that full moonlight on Earth is about 1/400000 of full sunlight, this is hardly better, nowhere near enough to count as "day". That's why I asked the question. hard-science solar-system Tristan Klassen Tristan KlassenTristan Klassen $\begingroup$ Wont that depend on your definition of "day"? I think Alpha Centauri has 3 stars but one is so far outwards it doesnt illuminate much anymore. A similar system with the third star closer to the center (perhaps a "small" red dwarf?) would illuminate enough to give that "day" cycle. $\endgroup$ – Demigan $\begingroup$ Demigan: Yeah, definition is vague. But, for example, moonlight on Earth isn't bright enough to be thought of as 'day'. At what intensity does light become considered 'day' on Earth -- a few % of its full intensity? Karl: Yeah, I know those. But like eclipses, they're minor effects. $\endgroup$ – Tristan Klassen $\begingroup$ Your title asks a different question from the text. One asks for a day that's longer than night. The other asks that the planet always be more than half illuminated. That is not the same question at all. 
$\endgroup$ – chasly - supports Monica $\begingroup$ For a planet with no axial tilt and turning on its axis at a constant rate, they are the same AFAICT. $\endgroup$ $\begingroup$ Regarding "If it orbits one star of a binary, there will be a point in its orbit where the other star passes behind the one the planet orbits." and "If it orbits both stars, there will likewise be a point where all three are in a line" - these are true only if both stars share the same plane as the orbit of the planet (consider e.g. a state where the secondary star is "above" the orbital plane of the planet and the primary star...) $\endgroup$ – G0BLiN You can have a binary star, and the planet in a Trojan position in the same orbit as the smaller of the two stars. Basically, the two stars describe one side of an equilateral triangle, and the planet occupies the third vertex, in either L4 or L5 position. One such configuration is presented here (figure 2, on the right). Wikipedia gives stability requirements as "m1 > 100 m2 > 10,000 m3" so you'd need a large F-star as m1, and a small red-yellow dwarf as m2. This also requires a large orbital radius for habitability. This configuration is not long lived enough for life to originate on the world, though. For that you'd need a smaller, colder, longer-lived main star (for example a Sun-type G star), and then you'd need a very small brown dwarf which, at a distance of some 8-10 light-minutes, wouldn't probably supply much of a daylight. But if you don't require habitability, this would get you four hours of main daylight, eight hours of "reinforced" daylight, four hours of "secondary" daylight and eight hours of "night" every 24-hour day (thanks to @ltmauve for pointing this out). A simpler setup Again a large star, and a secondary smaller star. This time, though, the planet orbits around the smaller star, inside its gravitational well. The limits on the stars' sizes and luminosities are now more relaxed. We need two additional constraints: the secondary star's ecliptic is not coplanar with its orbit. Ideally they are perpendicular (so there are only two syzygy points where the daylight period might be 50% of the rotation period); and the secondary star's revolution period is an odd multiple of the planetary revolution period, so that at syzygies, when the three bodies are aligned, the planet is always in the middle and actually gets a 24-"hours" day (the two "daylights" are not overlapping). I'll try and run some simulation after New Year's Eve ;-D to check whether this setup does indeed work - I might well have missed something obvious. edited Jan 2 '19 at 7:17 LSerniLSerni $\begingroup$ I thought about a Lagrange point setup before. However, luminosity goes as mass to the 3.5 power, and the minimum mass ratio to make a stable Lagrange point is 25, meaning that the minimum luminosity ratio is 78000. That's hardly better than a full moon on Earth (1/400000 of full daylight). I want a day that would be recognized as 'daylight', which means an extended time period with at least a few % of full daylight. $\endgroup$ $\begingroup$ The answer that saved the question from a downvote. $\endgroup$ – Joshua $\begingroup$ Shouldn't the daylight levels change at 4 and 8 hour intervals, not 6? The stars should stay at about 60 degrees apart in the sky, so you should have 60 degrees of just one star, 120 of them sharing, 60 of the other star, and 120 degrees of night. 
$\endgroup$ – ltmauve $\begingroup$ This is somewhat a dangerous place to put a planet: the key thing about L4 and L5 is that they accumulate space rocks, so any planet living in this space is kind of being hit by a constant meteor shower of varying intensities. $\endgroup$ – CR Drost When you have a large geostationary moon that reflects enough sunlight, extending the day significantly should be possible. Earth example. Formulas: $a_z = \frac{v^2}{r}$, which, using $v = \frac{2\pi r}{T}$, is the same as $a_z = \frac{4\pi^2 r}{T^2}$, and $F = G\frac{m_1 m_2}{r^2}$. $a_z$ is the centripetal acceleration; it has to be supplied by gravity, i.e. by $F/m_2$ from the second formula. $T$ is the time it takes for your object to complete one orbit, in the case of Earth about 24 hours or 86400 seconds. $r$ is the distance between the two objects. $m_1$ and $m_2$ are the masses of your planet and moon. $G$ is the gravitational constant. $v$ is the speed of your orbiting object relative to, in this case, Earth. The mass of the moon cancels out, because it appears in both the centripetal and the gravitational expression, so you can ignore it completely. With $T$ = 86400 s, $\frac{4\pi^2}{T^2}$ is about 5.29 × 10^-9 s^-2. Setting $\frac{4\pi^2 r}{T^2} = \frac{G m_1}{r^2}$ and solving for $r$ gives $r^3 = \frac{G m_1 T^2}{4\pi^2}$, a value of about 7.537 × 10^22 m³; we have to take the cube root of it as it is $r^3$, and so we land at about 4.2 × 10^7 meters, roughly 42,000 km from Earth's centre. That is much closer than our own Moon (384,000 km), so distance itself is not the obstacle; the trouble is that a moon big and bright enough to light up the night from that close would raise very large tides. So you would still have to tweak your system if you want this solution to work. The things you could tweak to make it possible are: you could make the moon lighter, let the planet rotate much faster around itself, or decrease the mass of the planet. Also you could place the planet between two stars in such a way that both stars pull on the planet with the same force which leads to a cancellation of gravitational forces, but you would have constant day and no seasons. Hope this helps. Soan $\begingroup$ The formulas looked funny because you were using separate TeX blocks over multiple lines, I've suggested an edit to fix the issue. $\endgroup$ $\begingroup$ Thanks I hope I can recreate this next time. $\endgroup$ – Soan $\begingroup$ "Also you could place the planet between two stars in such a way that both stars pull on the planet with the same force which leads to a cancellation of gravitational forces but you would have constant day and no seasons." Isn't that unstable? $\endgroup$ $\begingroup$ depends on how consistent the stars orbit each other. But yeah in most cases it is. Which does not stop him from creating the planet with the perfect double star solar system where it is stable. $\endgroup$ The star is usually much larger than the planet, and there is diffraction, so any planet is always more than half illuminated. ;-) Significantly more than half, no, imo. Except ... if you put the orbit of the planet perpendicular to the plane in which the two binaries circumvent each other. It'll probably be tricky to stabilise such an orbit, which needs to be synced up to make sure the three objects really never line up. An astrophysicist could probably tell us if there is a "natural" resonance which would drive a planet into such an orbit, and keep it there.
KarlKarl $\begingroup$ "a "natural" resonance which would drive a planet into such an orbit, and keep it there." You'll notice the same could also happen in a coplanar system! Can a resonance be set up so the three bodies never line up that way? $\endgroup$ $\begingroup$ Yes, except it's not the absolute size of the star, but its angular diameter as seen from the planet. The same geometry applies: from Earth, both moon and sun are about 1/2 degree, IIRC, so on average 181 degrees would experience some sun/moon light during the day. (Plus, as you say, some extra because of refraction.) $\endgroup$ $\begingroup$ @TristanKlassen No, in a coplanar system, there will always (once per year+a bit) be a moment when the three bodies line up. In a perpendicular system, you can (possibly) make sure the outer body always passes through the plane when the inner bodies are furthest from each other. $\endgroup$ – Karl $\begingroup$ @jamesqf Of course it's also about the angular diameter, but the moon is smaller than the earth. You will always see the sun from the north or south pole, but twice per month, you don't see the moon from either! $\endgroup$ Imagine a planet with an advanced civilization. It has launched many mirrors into space, to illuminate (part) of the planets dark side. You can tweak the amount of mirrors and increase the average length of the day to fit your story. AbigailAbigail Pyramid planet. With one light source and a spherical planet, I could not think of a way to illuminate more than half. It's a sphere thing. But if you can use shapes other than a sphere it is easy. The (tidally locked) pyramid planet keeps its apex point at its sun, and each of the triangular faces stays in the light. You could have it rotate around the axis down through the apex. The square side stays in the dark. Other tidally locked elongated shapes would also keep their elongated faces in the light and the base in the dark. OK. Tidal locking not allowed. I will borrow my answer from Why is my Dark World so dark? This world is a disc, turning on its axis. It stays with its edge facing its sun. On the ground, the sun is always moving along the horizon, never setting, never rising, never stopping. Sunlight is always redshifted and oblique. Shadows are long. There is sunrise and sunset on the edges of the disc. The edge is a minuscule fraction of the disc. $\begingroup$ "Other tidally locked elongated shapes would also keep their elongated faces in the light and the base in the dark." I guess I forgot to specify that tidal lock is ruled out, because everywhere on the planet must receive day at least some of the time. $\endgroup$ $\begingroup$ Also, a planet is generally understood as a body which becomes spherical under the own gravity. $\endgroup$ – o.m. $\begingroup$ Yer killin me, @o.m. Yer killin me. Here is some light reading on a fictional disc planet. It spins fast partly counteracting its gravity and flattening it out. That would be neat here too because the sun would appear to race around the perimeter and the long shadows would wheel around wildly. en.wikipedia.org/wiki/Mesklin $\endgroup$ – Willk If the planet is not required to be habitable for humans or to have advanced lifeforms or any lifeforms, the answer is simple. Make the star a star which has left the main sequence and swelled up to a red giant stage. Such a star could have expanded until it almost reached the orbit of the planet. If it reached the planet's orbit the drag of the star's gases would cause the planet to spiral down into the star. 
In such a situation, light emitted from the edges of the star as viewed from the planet could reach far into the side facing away from the star. But the increase in stellar radiation, and thus in the planetary temperature, as the star became a red giant would have wiped out any preexisting native life on the planet. Of course, a sufficiently advanced civilization could have terraformed the planet and introduced lifeforms from other worlds and/or made it habitable for humans. If the planet has to be naturally habitable for humans and/or have advanced native lifeforms, the problem is more complex. The star could be made a red dwarf or main sequence star instead of a red giant. All red stars, giants or dwarfs, have low surface temperatures and thus emit less energy per unit of surface. So in order to have a surface temperature equal to that of Earth, a planet would have to orbit the red star close enough that the star appears several times as large in the planet's sky as the Sun does in Earth's sky. And that will help the light from the sun to reach more than half of the planet's surface. Of course, the fainter the star, the closer the planet would have to be to it in order to have an Earth-like surface temperature, and the greater the proportion of the planet's surface that would be illuminated by the star at any one moment. Thus it is desirable for the star to be a very, very faint red dwarf for as much as possible of the planet to be illuminated at any one time. But for that to happen the planet would have to orbit so close to the red dwarf star that the planet would probably become tidally locked to the star, so that one side always faced the star and the other side always faced away from the star. But that would fail the original question. Thus the planet would have to be saved from being tidally locked to its star by being tidally locked to some other astronomical body. If the planet was actually a giant, Earth-sized moon of a gas giant planet orbiting close to a red dwarf star, the planet/giant moon would become tidally locked to the gas giant planet instead of to the star. And the gas giant planet could appear several times larger in the sky of its planetary-sized moon than the red dwarf star appears. Meaning that the light from the gas giant planet could cover even more of the planetary-sized moon than the light from the star. What light from the gas giant planet? Possibly light from countless lightning strikes in its atmosphere every second. And certainly light from the red dwarf star reflected from the planet, just as sunlight is reflected from the Moon onto Earth. But probably many times as bright as a full moon on Earth. So if the planet-sized moon orbits the gas giant planet in the same plane that the gas giant planet orbits the red dwarf star, there will be a moment in the orbit of the moon when it is directly between the red dwarf star and the gas giant planet and will be casting a shadow on a tiny part of the gas giant planet. And the rest of the gas giant planet will be reflecting light in all directions, and some of that light will illuminate the side of the moon facing away from the star. In that moment every part of the moon will be illuminated by the red dwarf star or the gas giant planet, and some parts will be illuminated by both.
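To put rough numbers on the "oversized star lights more than half the planet" effect described above, here is a small Python sketch. It is only an approximation: it treats both bodies as perfect spheres, ignores any atmosphere and refraction, and counts a surface point as lit if any part of the stellar disc is above its horizon. The radii and distances in the example calls are illustrative placeholders, not values taken from this answer.

import math

def lit_fraction(star_radius_km, planet_radius_km, distance_km):
    # Surface points between the geometric terminator and the umbra contact
    # circle still see part of the stellar disc, so the day side extends past
    # half the sphere by the angle beta below.
    beta = math.asin((star_radius_km - planet_radius_km) / distance_km)
    area_fraction = (1.0 + math.sin(beta)) / 2.0      # fraction of the surface with some direct light
    lit_longitude_deg = 180.0 + 2.0 * math.degrees(beta)
    return area_fraction, lit_longitude_deg

# Sun/Earth for scale (roughly 50.2% of the surface lit):
print(lit_fraction(6.96e5, 6.37e3, 1.496e8))
# A hypothetical red dwarf hugging its habitable zone, ~0.2 solar radii at 0.03 au:
print(lit_fraction(0.2 * 6.96e5, 6.37e3, 0.03 * 1.496e8))

Even in the close-in red dwarf case the gain over a perfect hemisphere is only a few degrees of extra longitude, which is why the answer leans on the reflected light of the gas giant to cover the rest.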
The closer the moon is to that part of its orbit, the greater the proportion of its surface that will be illuminated, and the farther the moon is from that part of its orbit, the smaller the proportion of its surface that will be illuminated. When the planet-sized moon is about 90 degrees from the line between the star and the gas giant planet, somewhat more than half of the moon will be illuminated by the star and somewhat more than half of the moon will be illuminated by the gas giant planet. About one quarter of the moon will get light from both the star and the planet, one quarter will get light from only the star, one quarter will get light only from the planet, and one quarter or less will get no light. And when the moon is more than 90 degrees from the line between the star and gas giant planet, the proportion of the moon's surface that is illuminated will get smaller and smaller. When the moon is exactly 180 degrees from the line between the star and the planet it will receive light only from the star. But since the star is assumed to be a red main sequence star that the moon and planet have to be very close to in order to have an Earth-like surface temperature, it should have several times the angular diameter of the sun as seen from Earth and thus should illuminate a bit more than half the surface of the moon. Would the moon be eclipsed by the planet one time every orbit, when it is 180 degrees from the star? Yes, if the moon orbits the planet in exactly the same orbital plane as the planet orbits the star. The moon should orbit the gas giant planet in the equatorial plane of the gas giant planet. The equatorial plane of the gas giant planet should be tilted to a greater or lesser degree relative to the orbital plane of the gas giant planet around the star. It is perfectly possible for a moon of a gas giant planet to never be eclipsed by its planet, if its planet has a high enough axial tilt. And I am not sure if brief eclipses, lasting hours at most, would count as violating the original question. If the moon is tidally locked to the gas giant planet, one half of that moon would always face the gas giant planet and would always be illuminated by light reflected from the planet, as well as being illuminated by the star for more than half the time. One half of the moon would always face away from the gas giant planet, and except for the section closest to the gas giant planet, would never be illuminated by the gas giant planet. And that half would be illuminated by the star somewhat more than half the time. There have been a lot of other questions about habitable moons of gas giant planets in the habitable zones of stars, and it is a good idea to refer to those questions and answers to see if they have any useful information, as I state in my answer to this question: How long will it take to discover they live on a moon and not on a planet? And I gave links to two earlier questions about habitable exomoons. The article "Exomoon Habitability Constrained by Illumination and Tidal Heating" by Rene Heller and Rory Barnes, Astrobiology, January 2013, discusses factors affecting the habitability of exomoons. Note that it states that a moon cannot have a stable orbit unless the orbital period or year of the planet is more than 9 times the orbital period or month of the moon. So if your moon has an orbital period of 0.75 to 15.0 earth days, for example, the planet must have an orbital period of at least 6.75 to 135 days.
The planets of the star TRAPPIST-1 that orbit in the habitable zone have years of 4.05 to 12.4 days, so it is certainly possible for a planet and its moon in the habitable zone of a red dwarf to have orbital periods of the necessary length. – M. A. Golding
Yes. Admittedly an engineered system answer: Central object: A black hole. This must be rotating in the plane of the system so the deadly jets never get anywhere near the planet. Around it, a ring of 6 (or more) stellar objects. Case A: The planet orbits between the black hole and the stars. In this case, if all the stars are burning you have perpetual sun; if you want night, most of them must be dead (white dwarf or neutron star). Case B: The planet orbits outside the ring of stars. At this point you have more than half light but it's not perpetual. So long as the black hole is sufficiently more massive than the stars (I don't know the required ratio) this is stable. – Loren Pechtel
Won't the jets be illuminating more than half the planet all the time anyway? Maybe in x-rays though. :-)
@brendan Depends on how much the black hole gets to eat. I figure there will be some light from the accretion disk and jets, but the question was about "day". If it's not eating too much, those will be luminous nighttime objects. – Loren Pechtel
Not in the ways you expect, but there is a loophole. First let's consider the issue of it being day on more than half of the planet always. This is impossible with a single star system and an ellipsoidal planet. If you elongate the planet enough, perhaps you could get more than half the planet facing the star at some point in its rotation. However, if you consider an oval in a 2D view, then when the short side is facing the star at most only half of the planet is facing the star, and likely much less than half if you elongate in any meaningful way (enough to make the long side significantly greater than 50% of the surface area). Now let's try a different approach with a single star system. Maybe it just needs to be day for more than half of a rotation cycle. Ok, well then let's consider a point A. When A is on the side facing the sun, the planet slows down. Then when it is facing away, the planet speeds up in rotation. This works, right? No, this still fails because now point A', which is the point on the opposite side of the planet, will have long nights and short days. This means that short of making the sun send light in a larger band and somehow curve around the planet, this is impossible. However, light travels along the geodesics of the surface of space-time, which bends according to mass. This means that light can only curve according to mass. Furthermore, according to one of Einstein's most famous thought experiments, light will bend under gravity as if you were in a box accelerating upwards at the rate of gravitational acceleration. What this means is that light can only bend further around the planet if something either reflects it to the other side of the planet (however this would be side dependent) or something lifts the light around the planet. This might be possible if your planet has significantly large rings. Another possibility is that your planet has naturally low enough gravity such that the light actually curves to the other side of the planet. These are doable, but it would be a simple cop out, imo, and pretty lackluster.
I'm done bashing other potential methods, so now I'll just move on to how I think you can do it in your scenario. Just before I continue, I should explain why I'm considering only one planet and one star. It's because in certain orientations a binary star system is equivalent to a single star system in terms of the propagation of light. I mean technically one star will cause the other star's light to curve, which might in turn allow it to bend around the planet, but that would be impossible for me to claim one way or the other, and it wouldn't be satisfying. Now here's a simple loophole. Your planet has literal oceans of glass. It is literally that simple. The sunlight can't get around the planet, so just make the planet transparent periodically. Enough locations, and of sufficient size, so that the light bleeds through past one half of the planet and over to the other side. Crystals may also work better. If done properly it shouldn't affect life significantly. After all, sand is pretty much the same as glass. Sure, it's not livable in those areas for the most part, but there could be small rivers and islands or things that are adapted to such extreme conditions. Plus, the requirement here wasn't that things be able to live on this thing in particular. It's a bonus I imagine, but the large glass sections are simply assumed to be unlivable. Now you might ask me what will happen when people or things damage the surface, thereby lowering its transparency. Sure, by all means let's go drain the entire Atlantic Ocean such that water can no longer evaporate and cause hurricanes. Do you get the point? The sunlight shining through it is on a grand scale. It's not going to be prevented by chipping away a volume equivalent to a square block surrounding the entirety of Manhattan Island all the way from bedrock to the top of the Empire State Building, let alone whatever humans or animals might damage. Perhaps if someone strikes the glass with a nuclear bomb it might have an effect, but at that point we're dealing with planetary level changes here, and in that situation I'd say that anyone doing that would be really stupid, as causing the glass to crack to that large of a degree would risk magma leaking out from under the surface, and if the damage were significant enough one might expect a supervolcano to form. In fact, I'm going to go ask a question about that. tl;dr if the planet has large regions of glass all the way to the mantle or seafloor, then it's likely you'll get the desired result.
Instead of glass you could use diamond. Or ice!
I'm not so sold on the glass ocean idea. Glass, in small quantities, can be quite transparent. If you take a pane of glass and look at its face, you'll see right through it—sure. But if you take that same pane and look through it edge-on, you'll likely get a different view, one that isn't wholly transparent. Any small, recurring artifacts in your ocean of glass will render it opaque at some distance away, not to mention that light itself won't propagate through glass indefinitely. Light loses energy as it passes through glass and eventually is absorbed by it. – B.fox
Also, I would suspect that an ocean of glass would be massive, and have varying levels of densities. Light passing through these densities would owe itself to refraction, and there's hardly any telling in what way it'll become distorted.
Given that I said "I'm not talking about atmospheric diffraction (Venus has some weird stuff going on there). Solutions must work for a vacuum world." I don't think making the planet transparent should count.
@TristanKlassen As I said, it is a loophole. However, I doubt you can get the result you want from geometry alone. Any time the three elements of your system align along a line, you will have roughly 50% of the planet in daylight. So to have a significantly longer day period always is likely unfeasible. The only way to avoid that would be to never allow the three bodies to align, but I'm pretty sure even that is not possible. However, if you simply want longer days 90% of the time, then I have a sneaking suspicion that binary systems will naturally trend towards having mostly longer days.
Does a "forever" day count? The moon has a few peaks of eternal light, and it has been theorized that Mercury may have them too. While the rest of the planet experiences solar days (lasting 176 Earth days each), the north pole has areas in perpetual shadow, so it would stand to reason that the south pole experiences the opposite phenomenon. For the lunar examples given in the linked article, the peaks don't all experience daylight 100% of the time. However, some experience daylight for more than 50% of their year, which could be simplified to longer days than nights, on average, in perpetuity. While this is specific to a satellite rather than a planet, I don't see why similar effects could not occur on a planetary scale, given the right combination of axis, orbit, and craters. – CactusCake
Yeah, you can get it locally, but I don't think this can be extrapolated to an entire planet. It only works near the poles.
For spheres orbiting a single light source, that's the best you're going to get. For the whole planet to experience longer days than nights, more than 50% of the surface has to be illuminated at any given time. @LSerni's answer is the most viable solution. – CactusCake
A possibility (also at the end of LSami's answer) might be a binary system with a suitable 3D configuration of the orbits. Imagine a 3D coordinate system with the $x$-axis pointing right, the $y$-axis pointing up, and the $z$-axis pointing at the viewer. We have a central yellow star (sitting at the origin in the animation, but it really should orbit the center of mass also), a red dwarf slowly orbiting the bigger component of the binary system in the $xy$-plane. And, finally, a planet (the blue dot), orbiting about the red dwarf in a plane parallel to the $xz$-plane (i.e. one that has the $y$-axis as its normal). The animation tries to give a top-down view of the motion. The points:
- Unless the smaller star is close to the $x$-axis, it will not eclipse the bigger star, because the bigger star is not on the plane of the orbit of the planet.
- When the smaller star is very close to the $x$-axis, it may eclipse the bigger star, but if we synchronize the periods we can arrange the planet to always be either above or below the $xy$-plane at those instants when the small star crosses the $x$-axis.
- When the dwarf is near the $y$-axis, 3/4 of the planet bathes in starlight. The ratio goes close to 1/2 in those years where the bigger star is closer to the plane of the planet's orbit, and even then only for a single season (one season the planet will be nearly between the stars and be fully lit).
The caveats:
- I suck at celestial mechanics, but I suspect the long-term stability of this setup may be in doubt. At least the ratio of orbital periods likely needs to be quite high, maybe something like one hundred (if not thousands) of "planet years" per single orbit of the red dwarf about the bigger star. Also, gravity of the bigger star may make the plane of the planet's orbit rotate over time.
- Also, if the ratio of periods is 1000:1, then the above synchronization idea doesn't help very much. The planet will reach the $xy$-plane at points when the red dwarf has moved only very little off the $x$-axis. At those points the dwarf may almost eclipse the bigger star, resulting in something like only $50.001$ per cent of the planet having a semblance of a day. (In the animation the ratio of those periods is 10:1.) But those close to 50-50 days are few and far between. It might make for an occasion for the culture living on the planet!
– Jyrki Lahtonen
Oops, the same idea is in LSami's last paragraph. I only read the part about using Lagrange points. Sorry. – Jyrki Lahtonen
A synopsis of a paper
So the standard reference here would be Siegfried Eggl's Habitability of Planets in Binary Star Systems, and his general conclusions are that binary star systems:
- support predominantly two kinds of stable, reasonable orbits: ones where a planet orbits both stars as if they were one, and ones where a planet orbits one star but is perturbed by the other;
- can indeed have habitable zones in either configuration; but
- the presence of the second star perturbing a planet's orbit can draw it temporarily out of the habitable zone, requiring atmospheric inertia to keep it habitable for that part of the year until it comes back to proper temperatures.
On a positive note, based on the calculations here it does seem like two Sol-mass stars co-orbiting a common center at a distance of 10 AU (ten times the distance the Earth is from the Sun) can indeed sustain a planet orbiting one of them somewhere between 0.95-1.55 AU away from the one star, as long as the two stars do not co-orbit with an eccentricity greater than 0.2-0.3 or so.
A binary star system with some inclination
So you are gonna need a brighter star, probably something off of the main sequence. The issue is that you want some orbit of some Sun-like star at something like 1 AU or so, but you want that Sun to be part of a binary system where the other star is maybe 10 AU or more away. Since the brightness of a light bulb decreases with the square of its distance from you, if you want both stars to be approximately equal in the sky, this one star needs to be maybe 100+ times brighter than the Sun. Looking at the Hertzsprung-Russell diagram makes this very easy to see, but there is a noticeable "gap" between the very blue and the yellow-orange-red side: the main sequence stars would be really really blue and that's really really bad, because the more blue light you have the more ionizing radiation you get from the star. So you would have to go off the main sequence, to a red giant star. These do not have to be too dramatically red and can have surface temperatures (hence colors) similar to an incandescent light bulb; maybe a good (not-too-massive, not-too-bright) giant to model the giant around would be Arcturus, 25 times larger in radius than our Sun, 170 times as bright if you're at the same distance from it, roughly the color of an incandescent light bulb.
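The "100+ times brighter" figure and the Arcturus numbers that follow are just the inverse-square law. A minimal sketch, using the illustrative values mentioned in this answer (not fitted or authoritative numbers):

def relative_flux(luminosity_suns, distance_au):
    # Apparent brightness relative to the Sun seen from 1 au: flux ~ L / d^2.
    return luminosity_suns / distance_au**2

def apparent_radius(radius_suns, distance_au):
    # Angular radius relative to the Sun seen from 1 au (small-angle approximation).
    return radius_suns / distance_au

# An Arcturus-like giant parked 10 au away vs. a Sun-like primary at 1 au:
print(relative_flux(170, 10))   # ~1.7 times the flux of the primary
print(apparent_radius(25, 10))  # looks ~2.5 times the Sun's angular radius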
At 10 times further away it would only visually appear to have 2.5 times the radius of the main star, and though we might have to shrink both stars a little to get the temperatures down a bit, it shouldn't be too bad. Importantly, Arcturus is still about one solar mass. Each star illuminates half of a planet, but as another answer points out, putting the orbit of the planet off-axis with some past collision might mean that the suns never eclipse each other, and so during the most "day/night" part of the year they might still be separated by, say, 10 degrees of the sky, causing 190 degrees of planet to be illuminated at once. But the more dramatic feature is that there would be a season where the sun rises just around when the giant sets, and vice versa, meaning that you have a very constant half-illumination for the entire day across nearly the whole planet, with some "South Pole" still having a day/night cycle but the "North Pole" not seeing either star set. So the seasons themselves would be quite interesting to work out here, and also the weather (our weather is dominated by the fact that our equator is approximately lined up with our orbit, leading to a warm equator and cold poles; it is not clear how that transfers over, and it depends in part on how the tilt lines up with the two other orbits). Since there is also an orbit of the stars about each other, there would also be a regular exchange between which pole had the day/night cycles and which one did not; this could change over a 100-year cycle or so, maybe. – CR Drost
Use a protoplanetary disk. According to this paper, the accretion luminosity of YLW 16B is somewhere between 0.31 and 0.64 times the luminosity of the star. (A different protoplanetary disk is shown in the link above, but you get the idea.) Your planet orbits in a gap in the disk, seeing only a portion of it, but from very close by, so it seems just as bright as from space. (Inverse square law: the less of the disk you see, the bigger it is. This is the same reason why a bush doesn't get dimmer if you walk up to it.) The result should be a band of daylight all through the night. Protoplanetary disks are not four star accommodations - Earth's experience in that regard is termed the Hadean - but your planet is a vacuum world, baked crust over a frozen core and mantle. While fairly large objects frequently crash into it, they don't cause global extinction events by striking gypsum or ocean or atmosphere. Your inhabitants, if any, have long since learned to live many miles beneath the surface, though their current situation is bound to bring out some science expeditions. Your planet orbited another star up to the red giant phase, encountered another planet during migration and was ejected from its home system. Blundering into the early protoplanetary disk of a nearby star, it was slowed down by interactions with the diffuse matter of the disk, and gradually came closer to orbiting in the plane of the system. There are still meaningful seasons because of remaining obliquity. The planet passes through a gap where the star is largely concealed, then ventures out to where the full glory of the disk and star becomes fully apparent. – Mike Serfas
If the planet's rotation period is equal to its period around the star, then one side of the planet faces the star constantly, like the Moon and the Earth.
Additionally, if the planet has an irregular shape, for example a cone-like shape where one side is "flatter" than the other, then one side's area is larger than the other's. When the large-area side faces the star, the planet has longer daylight than night. – Ozgur Ozcelik
A planet is a body in hydrostatic equilibrium. This means it has to be a spheroid.
Firstly, given a single light source at sufficient distance that light can be treated as parallel beams, together with a spherical planet and no other reflecting object, the answer is clearly 'No', since only half the planet is illuminated at any time. Given more than two sources of light, or reflections, we are into three-body problem territory, so long-term predictions are generally out, which means that you need to decide how long you want the system to remain stable for. I can't give all the conditions you want at a single point in time ('significantly' over 50% illuminated and day longer than night), but I can give each on a single planet. Consider a planet that is non-rotating with respect to the fixed stars (implausible in the long term due to tidal effects, but it could be 'stable' for many human lifetimes) which is in a highly eccentric orbit around a large star. Then, during periapsis (closest approach), over one face would be illuminated, but day length would be comparatively short because the orbital speed would be high. During apoapsis, the opposite face of the planet would be illuminated, although because of the distance, only slightly over one hemisphere in total. However, the orbital speed is slower at apoapsis, so the day on that face would be longer than the night. – Alchymist
In trying to explain sunrise and sunset times to someone yesterday, I happen to have worked out that a similar effect exists for a planet in an eccentric orbit which has two sidereal days, and thus only one solar day, per year. One side gets a longer day, the other side a longer night.
A novel spread slotted ALOHA based on cognitive radio for satellite communications system
Min Jia, Linfang Wang, Zhisheng Yin, Qing Guo & Xuemai Gu
EURASIP Journal on Wireless Communications and Networking, volume 2016, Article number: 232 (2016)
Cognitive radio (CR), as a promising way to solve spectrum scarcity, allows exploitation of shared frequency bands while guaranteeing acceptable interference to incumbent users in satellite communication systems. An improved spread slotted ALOHA (SSA) based on a multi-user, multi-channel CR model applied to satellite communications is proposed in this paper. To make full use of the detection information of satellite earth stations, a novel joint collaborative sensing method is used in the sensing phase. Moreover, a better throughput is achieved by using the improved SSA strategy in the transmission phase compared with the traditional slotted ALOHA (SA). Theoretical analysis shows that the system performs better when SSA is adopted. Theoretical analysis and simulation results indicate that the sensing method used in this model outperforms the traditional "hard combining" strategy in the whole sensing process.
The demand for high-bandwidth applications on mobile devices increases sharply with the boom in wireless communication, which calls for relevant technology to deal with the potential spectrum scarcity by enhancing spectrum efficiency. One of the promising ways to improve spectrum efficiency is cognitive radio (CR), proposed in [1]. CR achieves this goal by allowing the secondary users (SUs) to access unused licensed channels dynamically, with reliable energy detection performed by the SUs; therefore, the SUs will not disturb primary users (PUs). The coexistence of PUs and SUs can be achieved in either a centralized way or a decentralized way. In the centralized strategy, a center cooperator and a control channel are necessary to sense the licensed channels collaboratively and to allocate the idle channels, respectively. Ref. [2] studied a media access control (MAC) protocol, SCA-MAC, based on CSMA, which made channel-access decisions based on the statistical characteristics of spectrum usage. Ganesan et al. in [3] proposed a sensing scheme which exploited spatial diversity to improve the sensing ability in a CR network. The throughput maximization and tradeoff in a centralized CR network are discussed in [4] and [5], respectively. The decision that the center cooperator makes on whether a licensed channel is idle can be carried out by hard combining or soft combining through collaborative sensing. Hard combining, as in Ref. [6], works as follows. To begin with, the SUs determine whether a channel is idle through energy detection and then send the result to a center cooperator. The cooperator then makes the final decision on the channel state based on all the SUs' results. In soft combining, e.g., Ref. [7], each SU sends a full observation of the signal energy on one channel to a cooperator, and the cooperator determines the channel state from these observations. In the decentralized scheme (or distributed scheme), no cooperator or control channel exists. The SUs privately perform channel sensing on some random or consecutive channels and utilize a channel which is detected to be idle to transmit data. Clearly, reliable channel sensing is vital in the whole decentralized CR network.
Refs. [8–11] each show a way to permit the SUs to sense and access channels independently without central cooperators or control channels. The ALOHA-based CR network is discussed in [12, 13]. In recent years, CR technology applied to satellite communications has been studied by researchers. Three application scenarios of the CR technique in satellite communications were studied in [14], and the key challenges and enabling technologies for each scenario were also analyzed. A satellite-based multi-resolution compressive spectrum detection algorithm is proposed in [15] to achieve the coexistence of a mobile satellite system and an infrastructure wireless terrestrial network. Icolari et al. in [16] proposed an energy detector based on radio environment mapping for the spectrum awareness functionality of a hybrid terrestrial/satellite scenario. An adaptive modulation scheme is proposed in [17] to mitigate the effect of rain on cognitive radio-based geostationary earth orbit (GEO) satellites which operate in the Ka-band. In this paper, an ALOHA scheme based on CR for the satellite communication scenario, which adopts the multi-user, multi-channel CR model of [13], is proposed. The two major innovation points are as follows: (i) In the sensing phase, we adopt a novel collaborative sensing method proposed in [7] rather than a distributed method. By using this method, the GEO satellite can make more reliable decisions and interfere with the PUs less. (ii) In the transmission phase, we bring spread slotted ALOHA (SSA) [18] into our work, expecting better system performance. Besides, the collaborative sensing strategy in [6] is also introduced as a contrast to determine which sensing scheme performs better. The rest of this paper is arranged as follows. The proposed model is described in Section 2. In Section 3, the throughput of the proposed model is analyzed, and the analytical expression of the throughput is also given. The simulation and analysis results, as well as the comparison of the performance using different collaborative sensing methods, are shown in Section 4. Finally, the conclusion is drawn in Section 5.
System model description
The CR model proposed in this paper will be discussed in the satellite communication scenario of [19]. As shown in Fig. 1, the GEO satellite and its satellite earth stations can be seen as the SUs, and the fixed service stations are the PUs. Satellite earth stations communicate with the GEO satellite through cognitive links, i.e., the licensed channels. Satellite earth stations utilize the licensed channels only when they detect a channel as idle (i.e., no PU transmission in the channel), and must vacate the spectrum if a PU accesses the licensed channel. It is clear from Fig. 1 that satellite earth stations may disturb the PUs because of imperfect channel sensing, so it is vital for satellite earth stations to perform continual spectrum sensing.
Cognitive satellite communication model
The cognitive satellite communication model is shown in Fig. 1. In this model, we consider a spectrum with N licensed channels, and the channels are distinguished by frequency. An example of the channel state description is given in Fig. 2, where shaded areas represent the presence of the PU. The channel states will not change within one frame in this paper. Further, assume that the secondary users consist of K satellite earth stations and one GEO satellite. Besides, the time axis in this model is divided into fixed-duration frames of length \(T_{\mathrm{F}}\).
Each frame consists of three parts: sensing time \(T_{\mathrm{S}}\), report time \(T_{\mathrm{R}}\), and transmission time \(T_{\mathrm{T}}\). The meaning of these three intervals is explained below. The operating principle of this model can be described as two phases: the sensing phase and the transmission phase. The satellite earth stations first detect the licensed channels in the sensing phase and then transmit data in the transmission phase through idle channels with the spread slotted ALOHA algorithm. The two phases are described next in order to explain the model in more detail.
Channel state in one frame
Sensing phase
Each satellite earth station conducts the channel sensing procedure during the sensing phase. As depicted in Fig. 3, the sensing time is divided into \(N_{\mathrm{S}}\) sensing slots, each of length \(T_{\mathrm{Sm}}\), i.e., \(N_{\mathrm{S}} = T_{\mathrm{S}}/T_{\mathrm{Sm}}\). In this model, each satellite earth station senses one licensed channel within a sensing slot and senses all licensed channels during the whole sensing time, which means \(N_{\mathrm{S}} = N\). The channel detection can be achieved through energy detection, which can improve the satellite earth stations' detection performance. Each satellite earth station obtains its full observation through energy detection and stores the signal samples of all licensed channels in its buffer after the sensing time. Satellite earth stations then send them to the GEO satellite. The GEO satellite acquires the energy statistic of every licensed channel by summing the sample values received from all satellite earth stations with corresponding weights and finally reaches a decision on the states of all licensed channels. Then the GEO satellite immediately broadcasts the states of all the licensed channels to all satellite earth stations. Satellite earth stations transmit packets on idle channels according to the result given by the GEO satellite. However, if the GEO satellite wrongly declares a busy channel as an idle one, satellite earth stations may transmit packets on busy channels. As a result, satellite earth stations not only fail to transmit packets but also interfere with PUs. So, it is vital for the GEO satellite to conduct accurate channel sensing. In the sensing phase of cognitive radio, the reliability of the decisions that the GEO satellite makes is characterized by \(P_{\mathrm{d}}\) and \(P_{\mathrm{f}}\). \(P_{\mathrm{d}}\) represents the probability of correct detection in the presence of a PU, and \(P_{\mathrm{f}}\) represents the probability of falsely declaring a vacant channel as busy. The whole sensing phase is illustrated in Fig. 4.
Frame structure
Other sensing methods besides the above scheme can also help the GEO satellite make channel state decisions. A "hard combining" method is studied in [6], which is introduced to make a contrast with the sensing algorithm in this paper. The method can be stated as follows. Satellite earth stations independently reach a decision on the channel state and then send the results to the GEO satellite. If all satellite earth stations declare one channel idle, the satellite announces that channel as vacant to all stations; otherwise, the channel is declared busy.
Transmission phase
The SUs have to wait for \(T_{\mathrm{R}}\) (i.e., the report time), which includes propagation delay and processing delay, to receive the results of the channel states after the sensing phase, where \(T_{\mathrm{R}} = \eta_{2} T_{\mathrm{Sm}}\). After obtaining the channel states, satellite earth stations transmit packets using the SSA strategy in the transmission phase.
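Before turning to the transmission details, the weighted "soft combining" decision performed by the GEO satellite in the sensing phase can be illustrated with a short sketch. This is not the detector derived in [7]: the equal weights, the threshold, and the noise-only samples below are placeholder assumptions chosen only to show the structure of the decision (the 12000 samples per station correspond to T_Sm * f_s with the simulation values quoted later, i.e., 2 ms at 6 MHz).

import numpy as np

def soft_combining_decision(samples, weights, threshold):
    # samples: K x Ns array, one row of energy-detector samples per earth station
    # for a single licensed channel. The GEO satellite forms a weighted sum of
    # the per-station energy statistics and compares it with a threshold.
    energies = np.sum(samples**2, axis=1)        # per-station energy statistic
    test_statistic = np.dot(weights, energies)   # weighted combination at the satellite
    return test_statistic > threshold            # True -> channel declared busy

# Illustrative use with placeholder values (K = 8 stations, 12000 samples each):
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=(8, 12000))  # noise-only observations
weights = np.ones(8) / 8                          # equal weights as a stand-in for the paper's channel-dependent weights
print(soft_combining_decision(samples, weights, threshold=12500.0))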
In SSA, all transmission terminals are provided with a set of orthogonal codes, different terminals own the same code set, and the total number of codes is represented by \(N_{\mathrm{C}}\). Considering the complexity of receivers, both the number of codes and the spreading factor (i.e., the length of a code) should be low. Besides, code words can be time-shifted versions of a single spreading–despreading sequence [18]. The detailed transmission process can be described as follows. When a packet is generated, it is first spread by a spreading code chosen at random from the code set. Then a satellite earth station accesses an idle channel announced by the GEO satellite at random at the beginning of a transmission slot \(T_{\mathrm{Tm}}\) and sends the spread packet with a specific probability \(P_{\mathrm{tra}}\), where \(T_{\mathrm{Tm}} = \eta_{1} T_{\mathrm{Sm}}\). We can see from Fig. 3 that the transmission slot is the basic unit of transmission time, and that the transmission time consists of \(N_{\mathrm{T}}\) transmission slots, i.e., \(N_{\mathrm{T}} = T_{\mathrm{T}}/T_{\mathrm{Tm}}\). Besides, suppose that the time length of a spread packet is equal to \(T_{\mathrm{Tm}}\). It is obvious that more than one satellite earth station may transmit a packet on the same idle licensed channel, which means two or more packets may overlap in the same channel. According to [18], we assume that the receiver of the GEO satellite can only recover one packet per channel. If the number of packets arriving on one channel in a slot is more than one, only one of them can be processed and the rest are seen as "interfering packets." The receiver can tolerate interfering packets up to a specific level; \(N_{\mathrm{IL}}\) denotes the maximum number of interfering packets that the receiver can tolerate, and once the receiver receives more than \(N_{\mathrm{IL}}\) interfering packets, none of the packets can be processed correctly. Further, these packets are declared discarded. It is worth noting that an overlap is not equivalent to an irreversible collision in SSA. The packet chosen by the receiver can be despread and decoded correctly only if its spreading code is different from those used by all the other interfering packets. Figure 5 depicts the above workflow.
Receiver's workflow in the transmission phase
Based on the description of the transmission phase, for one packet to be received successfully, besides being sent on an idle channel, one of the following conditions should be satisfied: (1) No other satellite earth station sends packets on the same idle channel in the current slot; (2) Two or more satellite earth stations access the same idle channel and send packets in the current transmission slot. In this case, if the number of interfering packets is no more than \(N_{\mathrm{IL}}\), one randomly chosen packet will be processed correctly by the receiver only if the code word it uses differs from those used by the interfering packets. The GEO satellite broadcasts the result of which packet was processed successfully. The satellite earth station retransmits its packet if the packet is declared a failed transmission. For analysis simplicity, the channel environment between satellite earth stations and the GEO satellite is assumed to be ideal, which means channel noise, channel gain, or the near-far effect are not considered. Based on this, the packet is despread and decoded without error if the above conditions are satisfied.
Throughput analysis
In this section, the system throughput is formulated. Throughput is defined as the number of packets successfully transmitted in one sensing slot and is the evaluation criterion of the model performance.
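As a sanity check on the access rules above, one transmission slot can be simulated directly. The sketch below follows the conditions just listed (random channel choice among the M idle channels, transmission with probability P_tra, a random code from a set of N_C codes, at most N_IL interferers tolerated, and success only when the chosen packet's code differs from every interferer's). The parameter values in the example call are illustrative placeholders, not the operating point analysed in the paper.

import random

def simulate_slot(K, M, N_C, N_IL, p_tra):
    # One SSA transmission slot: each of the K earth stations transmits with
    # probability p_tra on one of the M idle channels chosen uniformly at random,
    # spreading its packet with a code drawn uniformly from the N_C-code set.
    channels = [[] for _ in range(M)]
    for _ in range(K):
        if random.random() < p_tra:
            channels[random.randrange(M)].append(random.randrange(N_C))
    successes = 0
    for codes in channels:
        m = len(codes)
        if m == 0 or m > N_IL + 1:
            continue                      # empty channel, or interference overload
        picked = random.randrange(m)      # receiver locks onto one packet at random
        if codes.count(codes[picked]) == 1:
            successes += 1                # its code differs from all interferers
    return successes

# Rough average over many slots with illustrative parameters:
trials = 20000
avg = sum(simulate_slot(K=8, M=4, N_C=5, N_IL=1, p_tra=0.5) for _ in range(trials)) / trials
print(avg)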
The simple condition, in which the satellite can make a perfect decision on the channel states and the number of idle channels is fixed, is considered first. Then, we discuss how the throughput changes under imperfect detection with a fixed number of idle channels. Finally, a more general condition, where every channel becomes busy with a specific probability in each frame, is considered. In addition, it is assumed for simplicity that there is always at least one packet in the buffer of each satellite earth station, as in Ref. [9]. For simplicity, we first suppose that there are always M random idle channels among all the licensed channels. Also, the GEO satellite can make a perfect decision on the channel states from the sample values that the satellite earth stations send. In the transmission phase, the probability that the kth satellite earth station chooses one of the M channels and sends a spread packet within a slot is \(P_{\text{access}}^{k} = {P_{\text{tra}}}/M\). Note that the \(P_{\text{access}}^{k}\left({k = 1,2 \cdots K} \right)\) are i.i.d., i.e., \(P_{\text{access}}^{1} = P_{\text{access}}^{2} = \cdots = P_{\text{access}}^{K} = P_{\text{access}}^{}\). It is obvious that, on the rth channel, the probability that m packets overlap within one transmission slot follows a binomial distribution. This probability is denoted by \(P_{\text{rec}}^{r}\left (m \right)\), with \(P_{\text{rec}}^{r}\left (m \right) = \left ({\begin {array}{c} K\\ m \end {array}} \right)P_{\text{access}}^{m}{\left ({1 - P_{\text{access}}^{}} \right)^{K - m}}\). As mentioned before, the receiver of the GEO satellite can only endure a fixed number of interfering packets, and we assume this number is \(N_{\text{IL}}\) in this paper. We can formulate the probability that one packet can be successfully despread and decoded among the overlapped packets, under the condition that the total number of overlapped packets is at most \(N_{\text{IL}}+1\), as \(P_{\text{sur}}\left(m \mid m \le N_{\text{IL}} + 1\right) = \left(1 - 1/N_{\mathrm{C}}\right)^{m-1}\). Further, we can conclude that, on the rth channel, the probability that \(m\left(m \le N_{\text{IL}} + 1\right)\) packets overlap and one of them is processed successfully is \(P_{\text{suc}}^{r}\left ({\left. m \right |m \le {N_{\text{IL}}} + 1} \right) = P_{\text{rec}}^{r}\left (m \right){P_{\text{sur}}}\left (m \right)\). On the other hand, if the number of interfering packets is larger than \(N_{\text{IL}}\), the receiver will not despread and decode any of these packets correctly because of the overload interference. So, the average number of successful transmissions on channel r within one transmission slot is denoted by \(S_{\text{ave}}^{r} = \sum \limits _{i = 1}^{{N_{\text{IL}}}+1} {P_{\text{suc}}^{r}\left ({\left. i \right |i \le {N_{\text{IL}}} + 1} \right)} \). Further, we can formulate the average number of successfully transmitted packets over all M channels within one transmission slot, which can be expressed as
$$ \begin{array}{l} S_{\text{tot}}^{*}\left({{P_{\text{tra}}},M} \right) = \sum\limits_{r = 1}^{M} {S_{\text{ave}}^{r}} = MS_{\text{ave}}^{r}\\ = M\sum\limits_{j = 1}^{{N_{\text{IL}}} + 1} {\left({\begin{array}{c} K\\ j \end{array}} \right)} {\left({\frac{{{P_{\text{tra}}}}}{M}} \right)^{j}}{\left({1 - \frac{{{P_{\text{tra}}}}}{M}} \right)^{K - j}}{\left({1 - \frac{1}{{{N_{\mathrm{C}}}}}} \right)^{j - 1}} \end{array}. $$
It is worth noting that the definition of throughput in this paper is the number of packets transmitted successfully in one sensing slot.
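Equation (1) is straightforward to evaluate numerically; a minimal sketch is given below (the parameter values in the example call are again illustrative and can be cross-checked against the slot-level simulation shown earlier):

from math import comb

def s_tot(K, M, N_C, N_IL, p_tra):
    # Average number of successfully received packets over the M idle channels
    # in one transmission slot, following Eq. (1).
    p_acc = p_tra / M
    total = 0.0
    for j in range(1, N_IL + 2):
        total += comb(K, j) * p_acc**j * (1 - p_acc)**(K - j) * (1 - 1/N_C)**(j - 1)
    return M * total

# Illustrative evaluation (same placeholder numbers as the slot simulation above):
print(s_tot(K=8, M=4, N_C=5, N_IL=1, p_tra=0.5))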
Finally, we obtain the throughput of this simple scenario, which is represented by \(S_{\text{sys}}^{*}\left ({{P_{\text{tra}}},M} \right)\):
$$ \begin{array}{ll} S_{\text{sys}}^{*}({P_{\text{tra}}},M) &= \frac{{{T_{\mathrm{T}}}}}{{{T_{\mathrm{S}}} + {T_{\mathrm{T}}} + {T_{\mathrm{R}}}}}S_{\text{tot}}^{*}({P_{\text{tra}}},M)\\ &= \frac{{{N_{T}}{\eta_{1}}}}{{{N_{\mathrm{S}}} + {N_{T}}{\eta_{1}} + {\eta_{2}}}}S_{\text{tot}}^{*}({P_{\text{tra}}},M) \end{array}. $$
Now we consider that the channel state decisions made by the GEO satellite are not perfect, which means \(P_{\mathrm{d}} \ne 1\) and \(P_{\mathrm{f}} \ne 0\). If a channel is idle, the GEO satellite will detect this channel as idle with probability \(1 - P_{\mathrm{f}}\). On the other hand, the GEO satellite will claim a busy channel as idle with probability \(1 - P_{\mathrm{d}}\). Every channel state decision made by the GEO satellite is independent. Under the condition of M fixed random idle channels, the average number of vacant channels that the satellite broadcasts to satellite earth stations is \({N_{\text{idle}}}\left (M \right) = \left ({\begin {array}{c} N\\ M \end {array}} \right)\left ({1 - {P_{\mathrm {f}}}} \right) + \left ({\begin {array}{c} N\\ {N - M} \end {array}} \right)\left ({1 - {P_{\mathrm {d}}}} \right)\). So, the system throughput with imperfect detection is obtained as
$$ {S_{\text{sys}}}\left({{P_{\text{tra}}},M} \right) = \frac{{\left({\begin{array}{c} N\\ M \end{array}} \right)\left({1 - {P_{\mathrm{f}}}} \right)}}{{{N_{\text{idle}}}\left(M \right)}}S_{\text{sys}}^{*}\left({{P_{\text{tra}}},M} \right). $$
The dynamic channel states are taken into account in this part. For analysis simplicity, we assume that the channel state changes slowly, so that it does not change within one frame. We also assume that every channel becomes vacant with probability q in every frame. Thus, the probability that M channels are idle in one frame obeys a binomial distribution. We denote this probability by \(P_{\text{idle}}(M,q)\), and it can be expressed as follows.
$$ {P_{\text{idle}}}\left(M,q \right) = \left({\begin{array}{c} N\\ M \end{array}} \right){\left({1 - q} \right)^{N - M}}{q^{M}} $$
Therefore, we can obtain the system throughput with imperfect detection and dynamic channel states as follows.
$$\begin{array}{@{}rcl@{}} {S_{\text{sys}}}\left({{P_{\text{tra}}},q} \right) &=& E\left[ {{S_{\text{sys}}}\left(M, {P_{\text{tra}}}\right)} \right]\\ &=& \sum\limits_{m = 1}^{N} {{S_{\text{sys}}}\left({{P_{\text{tra}}},m} \right)} {P_{\text{idle}}}\left(m,q \right) \end{array} $$
Simulation results
The simulation and analysis results are presented in this section. The simulation is performed via Monte-Carlo simulation. In our simulation, the parameters are assigned as follows: the number of licensed channels N = 10, the number of satellite earth stations K = 8, the duration of a sensing slot \(T_{\mathrm{Sm}} = 2\) ms, \(\eta_{1} = 10\), \(\eta_{2} = 120\), and the number of transmission slots \(N_{\mathrm{T}} = 100\). We suppose that the sampling frequency in the sensing phase is 6 MHz, i.e., \(f_{\mathrm{s}} = 6\) MHz. We obtain the relationship between \(P_{\mathrm{f}}\) and \(P_{\mathrm{d}}\) from [7], which can be expressed as follows.
$$ {P_{\mathrm{f}}} = Q\left({\sqrt {2\Phi \xi + 1} {Q^{- 1}}\left({{P_{\mathrm{d}}}} \right)} \right) + \xi \sqrt {\frac{{{T_{\text{Sm}}}{f_{\mathrm{s}}}}}{2}} \sum\limits_{i = 1}^{K} {{\omega_{i}}{{\left| {{h_{i}}} \right|}^{2}}}, $$
where \(\xi\) denotes the signal-to-noise ratio at the PU node, \(h_{i}\) denotes the channel gains and obeys a Gaussian distribution with zero mean and unit variance, and \({\omega_{i}} = {{\left| {{h_{i}}} \right|}^{2}}/{\sqrt {\sum \limits _{i = 1}^{K} {{{\left| {{h_{i}}} \right|}^{2}}}} }\). Finally, we assume \(N_{\mathrm{C}} = 5\), referring to Ref. [18]. The simulated and analytical results are shown in the following part. Figure 6 shows the system throughput as \(P_{\mathrm{tra}}\) varies under perfect detection and a fixed number of idle channels. It can be seen that the simulated results correspond well with the analytical results for different \(N_{\mathrm{IL}}\).
Throughput with perfect detection
Figure 7 shows the simulated system throughput with imperfect channel detection and a fixed number of idle channels, where the relevant parameters are defined as \(\xi = -15\) dB and \(P_{\mathrm{d}} = 0.9\). From Figs. 6 and 7, when \(N_{\mathrm{IL}} = 1\), the throughput first increases and then decreases with \(P_{\mathrm{tra}}\). The reason is that, with the increase of \(P_{\mathrm{tra}}\), the number of packets arriving in a slot increases and the throughput increases consequently. However, it also becomes more difficult for the receiver to despread and decode the packets due to the continually increasing number of packets arriving in a slot on one channel. At this point, the throughput decreases when \(P_{\mathrm{tra}}\) exceeds a specific level. In the scenario of \(N_{\mathrm{IL}} = 3\), throughput improves as \(P_{\mathrm{tra}}\) increases because the interference level is always tolerated. In addition, we can see that, when \(N_{\mathrm{IL}} = 0\), the SSA transmission model is equivalent to traditional slotted ALOHA. It is apparent that the system obtains more successful transmissions when the SSA strategy is adopted.
Throughput with imperfect detection
Figure 8 shows the simulation with imperfect channel detection and dynamic channel states; the relevant parameters are defined as \(q = 0.4\), \(\xi = -15\) dB, and \(P_{\mathrm{d}} = 0.9\). We also bring the "hard combining" sensing scheme, described earlier in this paper, into our model to see which sensing algorithm performs better. We can see from Fig. 8 that the two sensing schemes perform similarly in this model when \(P_{\mathrm{tra}}\) is low, but when \(P_{\mathrm{tra}}\) is large enough, the sensing method in our model outperforms the "hard combining" scheme.
Throughput comparison versus different sensing schemes
Figure 9 compares the throughputs under different channel occupancies when adopting both sensing methods with imperfect detection. We assume that \(\xi = -15\) dB, \(N_{\mathrm{IL}} = 1\), and \(P_{\mathrm{d}} = 0.9\). From Fig. 9, it can be seen that the sensing strategy in this model outperforms the "hard combining" method under all circumstances. In addition, when the channel occupancy rate is high (e.g., \(q = 0.2\)), the system performs best when \(P_{\mathrm{tra}}\) is approximately 0.5, whereas satellite earth stations can send a packet as soon as it is generated when channels are not busy (e.g., \(q = 0.6\)). Because all satellite earth stations get the channel state information at the beginning of the transmission phase, it is not a difficult problem for them to adjust \(P_{\mathrm{tra}}\) adaptively based on this information to maximize the system throughput.
Throughput with different channel occupancy
Conclusions
In this paper, a spread slotted ALOHA based on a multi-user, multi-channel CR model in the satellite communication scenario is proposed.
In the sensing phase, satellite earth stations store the sampled values of all channels in their buffers and then send them to a central cooperator, i.e., the GEO satellite, and the satellite makes a final decision on the channel states of all licensed channels. Satellite earth stations receive the results after the reporting time. Then, satellite earth stations choose an idle channel randomly based on the results and send one packet with a specific probability within a transmission slot. Based on the simulation and numerical analysis, we can draw the following conclusions. First, by comparing the sensing method used in this paper with the traditional "hard combining" sensing scheme, we found that the "hard combining" strategy does not match the sensing method in this model in terms of system performance. Second, the system throughput improves considerably when the SSA scheme is adopted in our model rather than the traditional SA strategy. Third, the system improves greatly if the receiver on the GEO satellite can tolerate more interfering packets, which means that additional receiver complexity can be traded for better performance in our model. Finally, the simulations under different channel occupancies show that the probability with which satellite earth stations send a packet can be changed adaptively to maximize the system throughput; this is not difficult to realize because the satellite broadcasts the channel information at the beginning of the transmission phase.
J Mitola, G Maguire, Cognitive radio: making software radios more personal. IEEE Pers. Commun. 6(4), 13–18 (1999).
A Hsu, D Wei, CA Kuo, in Wireless Communications and Networking Conference, 2007. Cognitive MAC protocol using statistical channel allocation for wireless ad-hoc networks (IEEE, Hong Kong, 2007), pp. 105–110.
G Ganesan, et al., Spatiotemporal sensing in cognitive radio networks. IEEE J. Sel. Areas Commun. 26(1), 5–12 (2008).
H Hang, et al., in 2012 IEEE 14th International Conference on Communication Technology (ICCT). Throughput maximization for cognitive radio networks with time-domain combining cooperative spectrum sensing (IEEE, Chengdu, 2012), pp. 235–239.
YC Liang, et al., Sensing-throughput tradeoff for cognitive radio networks. IEEE Trans. Wirel. Commun. 7(4), 1326–1337 (2008).
J Yuan, M Torlak, in 7th International Wireless Communications and Mobile Computing Conference. Optimization of throughput and autonomous sensing in random access cognitive radio networks (IEEE, Istanbul, 2011), pp. 1232–1237.
X Liu, M Jia, X Gu, Q Guo, Joint cooperative spectrum sensing and channel selection optimization for satellite communication systems based on cognitive radio. Int. J. Satell. Commun. Network (2015). doi:10.1002/sat.1169.
Q Zhao, et al., Decentralized cognitive MAC for opportunistic spectrum access in ad hoc networks: a POMDP framework. IEEE J. Sel. Areas Commun. 25(3), 589–600 (2007).
S Kwon, B Kim, BH Roh, Preemptive opportunistic MAC protocol in distributed cognitive radio networks. IEEE Commun. Lett. 18(7), 1155–1158 (2014).
SC Jha, et al., Design of OMC-MAC: an opportunistic multi-channel MAC with QoS provisioning for distributed cognitive radio networks. IEEE Trans. Wirel. Commun. 10(10), 3414–3425 (2011).
J Jia, Q Zhang, X Shen, HC-MAC: A hardware-constrained cognitive MAC for efficient spectrum management. IEEE J. Sel. Areas Commun. 26(1), 106–117 (2008).
GU Hwang, S Roy, Design and analysis of optimal random access policies in cognitive radio networks. IEEE Trans. Commun. 60(1), 121–131 (2012).
X Li, et al., Throughput analysis for a multi-user, multi-channel ALOHA cognitive radio system. IEEE Trans. Wirel. Commun. 11(11), 3900–3909 (2011).
M Hoyhtya, et al., in 2012 IEEE International Symposium on Dynamic Spectrum Access Networks (DYSPAN). Application of cognitive radio techniques to satellite communication (IEEE, Bellevue, 2012), pp. 540–551.
H Li, Q Guo, Q Li, in 2012 Second International Conference on Instrumentation, Measurement, Computer, Communication and Control (IMCCC). Satellite-based multi-resolution compressive spectrum detection in cognitive radio networks (IEEE, Harbin, 2012), pp. 1081–1085.
V Icolari, et al., in 2014 IEEE Global Communications Conference. An energy detector based radio environment mapping technique for cognitive satellite systems (IEEE, Austin, 2014), pp. 2892–2897.
PVR Ferreira, R Metha, AM Wyglinski, in 2014 IEEE Global Conference on Signal and Information Processing. Cognitive radio-based geostationary satellite communications for Ka-band transmissions (IEEE, Atlanta, 2014), pp. 1093–1097.
D Makrakis, K Murthy, Spread slotted ALOHA techniques for mobile and personal satellite communication systems. IEEE J. Sel. Areas Commun. 10(6), 985–1002 (1992).
S Maleki, et al., Cognitive spectrum utilization in Ka band multibeam satellite communications. IEEE Commun. Mag. 53(3), 24–29 (2015).
This work was supported in part by the National Natural Science Foundation of China under Grants No. 61671183 and 91438205.
Communication Research Center, Harbin Institute of Technology, P.O. Box 3043, Yikuang Street 2, Harbin, 150080, China
Min Jia, Linfang Wang, Zhisheng Yin, Qing Guo & Xuemai Gu
Correspondence to Min Jia.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Jia, M., Wang, L., Yin, Z. et al. A novel spread slotted ALOHA based on cognitive radio for satellite communications system. J Wireless Com Network 2016, 232 (2016) doi:10.1186/s13638-016-0737-7
Collaborative sensing
Radar and Sonar Networks
PID, BFO-optimized PID, and PD-FLC control of a two-wheeled machine with two-direction handling mechanism: a comparative study
K. M. Goher1 & S. O. Fadlallah2
In this paper, three control approaches are utilized to control the stability of a novel five-degrees-of-freedom two-wheeled robotic machine designed for industrial applications that demand a limited-space working environment. A proportional–integral–derivative (PID) control scheme, bacterial foraging optimization of the PID control method, and a fuzzy logic control method are applied to the wheeled machine to obtain the optimum control strategy that provides the best system stabilization performance. According to simulation results considering multiple motion scenarios, the PID controller optimized by the bacterial foraging optimization method outperformed the other two control methods in terms of minimum overshoot, rise time, and applied input forces.
Providing the ideal control strategy for inverted pendulum (IP)-based systems has been, and remains, an active field of research. This interest reflects the remarkable growth of two-wheeled machines (TWMs), which nowadays serve in many applications, especially those that demand working in bounded spaces. For these types of highly unstable nonlinear systems, diverse control approaches have been established [1]. Some of these control methods include the proportional–integral–derivative (PID) control scheme, bacterial foraging optimization (BFO) of the PID control method, and the fuzzy logic control (FLC) method.
Proportional–integral–derivative (PID) control method
This control loop feedback mechanism has been commonly utilized in various control systems, specifically in systems based on the inverted pendulum principle. Ren et al. [2] presented a motion control and stability analysis study of a two-wheeled vehicle (TWV); to provide a motion control system that balances the TWV and enables the vehicle to track a predefined path, a self-tuning PID control strategy was proposed. Employing the same PID control approach with an observer-based state feedback control algorithm, Olivares and Albertos [3] presented and controlled an under-actuated flywheel IP system. The study conducted by Wang [4] addressed in detail the issue of tuning multiple PID controllers simultaneously for the stabilization and tracking control of three types of IPs.
Bacterial foraging optimization (BFO) algorithm
Initiated by Passino [5], the bacterial foraging optimization (BFO) algorithm has been utilized in multiple research areas and in different applications. Kalaam et al. [6] implemented the BFO algorithm in a cascaded control scheme designed for controlling a grid-connected photovoltaic system. For modeling a single-link flexible manipulator system, Supriyono and Tokhi [7] developed an adaptive chemotactic step size bacterial foraging optimization (BFO) technique. Almeshal et al. [8] utilized the BFO algorithm in a smart fuzzy logic control scheme applied to a unicycle class of differential drive robot on irregular rough terrain. Significant research studies have focused on improving the BFO algorithm's performance; these improvements were achieved either by combining BFO with another optimization approach [9, 10] or by modifying the algorithm's actual parameters [11].
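For readers unfamiliar with BFO, the sketch below shows a stripped-down version of its chemotaxis loop (tumble and swim steps only); the reproduction and elimination–dispersal phases of the full algorithm in [5] are omitted for brevity, and the cost function and parameter values are illustrative placeholders rather than anything used later in this paper.

```python
import random

def bfo_chemotaxis(cost, dim, bounds, n_bacteria=20, n_steps=50,
                   step_size=0.05, swim_length=4, seed=0):
    """Chemotaxis core of bacterial foraging optimization (tumble/swim only;
    reproduction and elimination-dispersal are omitted for brevity)."""
    rng = random.Random(seed)
    lo, hi = bounds
    span = hi - lo
    bacteria = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bacteria)]
    best_x, best_j = None, float("inf")
    for _ in range(n_steps):
        for b in bacteria:
            j = cost(b)
            # Tumble: pick a random unit direction.
            d = [rng.uniform(-1, 1) for _ in range(dim)]
            norm = sum(v * v for v in d) ** 0.5 or 1.0
            d = [v / norm for v in d]
            # Swim while the cost keeps improving, up to swim_length steps.
            for _ in range(swim_length):
                candidate = [min(hi, max(lo, x + step_size * span * v))
                             for x, v in zip(b, d)]
                j_new = cost(candidate)
                if j_new < j:
                    b[:], j = candidate, j_new
                else:
                    break
            if j < best_j:
                best_x, best_j = list(b), j
    return best_x, best_j

# Toy usage: minimize a quadratic bowl (a stand-in for a control cost function).
x_opt, j_opt = bfo_chemotaxis(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
print(x_opt, j_opt)
```

In the tuning problems discussed below, the cost handed to such a loop would be an error-based performance index evaluated on the closed-loop response.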
Focusing on IP-based systems, Agouri et al. [12] developed a control scheme based on a quadratic adaptive bacterial foraging algorithm (QABFA) for controlling a two-wheeled robot with an extendable intermediate body (IB) moving on an inclined surface. Al-rashid et al. [13] applied a constrained adaptive bacterial foraging optimization strategy to optimize the control gains of a single-link inverted pendulum on a cart system. Jain et al. [14] implemented the BFO algorithm in tuning a PID controller used to control an inverted pendulum system on a field-programmable gate array (FPGA).
Fuzzy logic control (FLC) method
Although the concept of the fuzzy logic controller (FLC) was initiated in the 1960s [15], numerous research studies have applied this type of control scheme to IP-based systems because of its ability to deal with nonlinear systems, not to mention its intuitive nature. Czogała et al. [16] presented a rough fuzzy logic controller for stabilizing a pendulum-car system. Cheng et al. [17] focused on developing an FLC with high accuracy and resolution for stabilizing a double IP. Xu et al. [18] designed an FLC that obtains fuzzy rules from a simplified lookup table to stabilize a two-wheeled inverted pendulum. With the same aim, Azizan et al. [19] proposed a smart fuzzy control scheme for a two-wheeled human transporter; when tested against different mass values representing the transporter's rider, the applied control method showed high robustness. For an under-actuated two-wheeled inverted pendulum vehicle with an unstable suspension subject to a non-holonomic constraint, Yue et al. [20] developed a composite control approach consisting of a direct fuzzy controller and an adaptive sliding mode technique. For an IP on a cart, Amir et al. [21] developed an effective hybrid swing-up and stabilization controller (HSSC) composed of three controllers: a swing-up controller, a fuzzy stabilization controller, and a fuzzy switching controller. Yue et al. [22] aimed to develop an indirect adaptive fuzzy control based on an error data-based trajectory planner for controlling a wheeled inverted pendulum vehicle. Other studies, such as Tinkir et al. [23], compared a conventional PID controller with an interval type-2 fuzzy logic (IT2FL) control method for controlling the swing-up position of a double IP.
Research objective and paper organization
In order to provide the optimal control strategy for IP-based machines and to improve their stability performance, this paper presents a comparison between three control methods: a PID controller, bacterial foraging optimization of the PID controller, and a fuzzy logic controller, applied to control and stabilize a five-degrees-of-freedom (DOF) two-wheeled robotic machine (TWRM) introduced by Goher [24]. Despite the large number of available control methods, the demonstrated potential of the three selected approaches in dealing with highly unstable nonlinear systems such as inverted pendulums encouraged the authors to investigate their implementation on the new five-DOF TWRM. Compared to current TWRMs, the developed five-DOF two-wheeled machine delivers payload handling in two mutually perpendicular directions while the payload is attached to the intermediate body (IB).
This feature increases the vehicle's flexibility and workspace and permits the employment of TWRMs in service and industrial robotic applications (e.g., material handling, object assembly).
The rest of the paper is organized as follows: the "Two-wheeled robotic machine system description" section gives a detailed description of the five-DOF two-wheeled machine on which the control approaches were implemented. The system's derived mathematical model is presented in the "TWRM mathematical modeling" section. The "Control system design" section illustrates the control system design and the implementation of the three control methods, PID controller, bacterial foraging optimization of the PID controller, and fuzzy logic controller, on the TWRM's derived mathematical model. The "Conclusions" section concludes the paper by highlighting the findings of the research.
Two-wheeled robotic machine system description
The schematic diagram of the developed two-wheeled robotic machine (TWRM) is illustrated in Fig. 1 (two-wheeled robotic machine schematic diagram). The robotic system consists of a chassis, with center of gravity at point P1, and the linear actuators' mass, with center of gravity at point P2. As the wheeled machine maneuvers away from its initial position along the X-axis, the P1 and P2 coordinates vary. Each wheel is connected to a motor that provides the torque, τR or τL, needed to control the TWRM. Both accelerometer and gyroscope sensors are fitted to the robotic system to provide the state variables that enable the applied control scheme to keep the TWRM continuously in the upright position. With respect to the X- and Z-axes and referring to Fig. 1, the TWRM's five DOFs can be defined as follows:
The linear displacement of the attached payload in the vertical direction (h1).
The linear displacement of the attached payload in the horizontal direction (h2).
The angular displacement of the right wheel (δR).
The angular displacement of the left wheel (δL).
The tilt angle of the intermediate body around the vertical Z-axis (θ).
For a picking and placing scenario, Table 1 shows the engagement of each of the wheeled machine's actuators for each sub-task, along with the DOFs associated with the corresponding process task. The wheels' motors are continuously active because of the external disturbances occurring while the picking and/or placing task is performed, as well as the continuous variation of the center of mass; it is therefore crucial for the wheels' motors to develop the necessary torque signal to maintain the upright vertical position of the TWRM. The engagement of the linear actuators, in contrast, depends on the appointed sub-task. Switching mechanisms are designed, as a major part of the three investigated control schemes, to define the period of engagement of each individual actuator in service.
Table 1 Engagement of individual actuators for each sub-task [24]
TWRM mathematical modeling
The TWRM's mathematical model, explained in detail by Goher [24], is derived by employing the Lagrangian modeling approach, one of the most powerful techniques for obtaining the equations of motion of a sophisticated system. Referring to the two-wheeled robotic machine's schematic diagram in Fig.
1 and its physical parametric specifications listed in Table 2, the system's kinematics was related to the torques/forces applied to its links and the five highly coupled differential equations of motion are represented as follows: Table 2 TWRM parameters description [24] $$\begin{aligned} & 2m_{2} \dot{\theta }\left( {\dot{h}_{2} h_{2} + \dot{h}_{1} h_{1} } \right) + \tfrac{1}{2}m_{2} (h_{1} \cos \theta - h_{2} \sin \theta )\left( {\ddot{\delta }_{\text{R}} + \ddot{\delta }_{\text{L}} } \right) + \tfrac{1}{2}m_{1} l\cos \theta \left( {\ddot{\delta }_{\text{R}} + \ddot{\delta }_{\text{L}} } \right) \\ & \quad - m_{2} g(h_{1} \sin \theta + h_{2} \cos \theta ) + \ddot{\theta }\left( {J_{1} + J_{2} + m_{1} l^{2} + m_{2} h_{2}^{2} + m_{2} h_{1}^{2} } \right) \\ & \quad + m_{2} \left( {\ddot{h}_{2} h_{1} + \ddot{h}_{1} h_{2} } \right) - m_{1} gl\sin \theta = 0 \, \\ \end{aligned}$$ $$\begin{aligned} & \tfrac{1}{2}m_{1} \left( {\tfrac{1}{2}\ddot{\delta }_{\text{R}} + \tfrac{1}{2}\ddot{\delta }_{\text{L}} - l\dot{\theta }^{2} \sin \theta + l\ddot{\theta }\cos \theta } \right) + \tfrac{1}{2}m_{2} \left( {\ddot{h}_{1} \sin \theta + 2\dot{h}_{1} \dot{\theta }\cos \theta - h_{1} \dot{\theta }^{2} \sin \theta + h_{1} \ddot{\theta }\cos \theta } \right. \\ & \quad \left. { + \,\ddot{h}_{2} \cos \theta - 2\dot{h}_{2} \dot{\theta }\sin \theta - h_{2} \dot{\theta }^{2} \cos \theta - h_{2} \ddot{\theta }\sin \theta + \tfrac{1}{2}\ddot{\delta }_{\text{R}} + \tfrac{1}{2}\ddot{\delta }_{\text{L}} } \right) \\ & \quad + 2m_{\text{w}} \ddot{\delta }_{\text{R}} + 2J_{\text{w}} \frac{{\ddot{\delta }_{\text{R}} }}{{R^{2} }} = \tau_{\text{R}} - \mu_{\text{w}} \left( {\frac{{\dot{\delta }_{\text{R}} }}{{R^{2} }}} \right) - \mu_{\text{c}} \dot{\delta }_{\text{R}}^{{}} \\ \end{aligned}$$ $$\begin{aligned} & \tfrac{1}{2}m_{1} \left( {\tfrac{1}{2}\ddot{\delta }_{\text{R}} + \tfrac{1}{2}\ddot{\delta }_{\text{L}} - l\dot{\theta }^{2} \sin \theta + l\ddot{\theta }\cos \theta } \right) + \tfrac{1}{2}m_{2} \left( {\ddot{h}_{1} \sin \theta + 2\dot{h}_{1} \dot{\theta }\cos \theta - h_{1} \dot{\theta }^{2} \sin \theta + h_{1} \ddot{\theta }\cos \theta } \right. \\ & \quad \left. { + \,\ddot{h}_{2} \cos \theta - 2\dot{h}_{2} \dot{\theta }\sin \theta - h_{2} \dot{\theta }^{2} \cos \theta - h_{2} \ddot{\theta }\sin \theta + \tfrac{1}{2}\ddot{\delta }_{\text{R}} + \tfrac{1}{2}\ddot{\delta }_{\text{L}} } \right) \\ & \quad + 2m_{\text{w}} \ddot{\delta }_{\text{L}} + 2J_{\text{w}} \frac{{\ddot{\delta }_{\text{L}} }}{{R^{2} }} = \tau_{\text{L}} - \mu_{\text{w}} \left( {\frac{{\dot{\delta }_{\text{L}} }}{{R^{2} }}} \right) - \mu_{\text{c}} \dot{\delta }_{\text{L}}^{{}} \\ \end{aligned}$$ $$\tfrac{1}{2}m_{2} \left( {2g\cos \theta - 2h_{1} \dot{\theta }^{2} - 4\dot{h}_{2} \dot{\theta } - 2h_{2} \ddot{\theta } + 2\ddot{h}_{1} + (\ddot{\delta }_{\text{R}} + \ddot{\delta }_{\text{L}} )\sin \theta } \right) = F_{1} - \mu_{1} \dot{h}_{1}$$ $$\tfrac{1}{2}m_{2} \left( {2g\sin \theta + 2h_{2} \dot{\theta }^{2} - 4\dot{h}_{1} \dot{\theta } - 2h_{1} \ddot{\theta } - 2\ddot{h}_{2} - (\ddot{\delta }_{\text{R}} + \ddot{\delta }_{\text{L}} )\cos \theta } \right) = F_{2} - \mu_{2} \dot{h}_{2}$$ The developed mathematical model of the TWRM, considering the simulation parameters listed in Table 2, is simulated in MATLAB/Simulink® environment, and an open-loop response investigation was carried out in order to examine the behavior of the developed model. Figure 2 illustrates the system's open-loop simulation results. 
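The open-loop instability seen in Fig. 2 can be reproduced qualitatively with a much simpler stand-in model. The sketch below integrates a single rigid inverted pendulum pivoting about the wheel axle with no control input; it is not the five coupled equations above, and the mass and length values are arbitrary placeholders rather than the Table 2 parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simplified stand-in for the full 5-DOF model: a rigid pendulum of mass m and
# length l pivoting about the wheel axle, with no control torque applied.
m, l, g = 2.0, 0.4, 9.81          # placeholder parameters, not Table 2 values
J = m * l**2                      # moment of inertia about the pivot

def open_loop(t, state):
    theta, omega = state          # tilt angle and angular velocity
    return [omega, (m * g * l / J) * np.sin(theta)]

sol = solve_ivp(open_loop, (0.0, 2.0), [0.01, 0.0], max_step=0.01)
print(f"tilt after 2 s: {sol.y[0, -1]:.2f} rad")   # grows away from upright
```

Even a 0.01-rad initial tilt diverges from the upright position within a couple of seconds, which is why the closed-loop designs of the next section are required.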
It is clear from the simulation results of the five targeted control variables [i.e., pitch angle (θ), vertical link displacement (h1), horizontal link displacement (h2), right wheel displacement (δR), and left wheel displacement (δL)] that the TWRM is a nonlinear, unstable system that requires a closed-loop configuration in order to achieve the desired stabilization performance.
Fig. 2 Open-loop system response [24]
Control system design
This section concentrates on implementing and comparing the three control strategies (i.e., PID, bacterial foraging optimization of PID, and fuzzy logic control) in order to provide the optimal control strategy that improves the stability performance of the five-DOF TWRM by controlling the system's main variables [i.e., angle of the robot's chassis (θ), angular position of the right wheel (δR), angular position of the left wheel (δL), linear displacement of the attached payload in the vertical direction (h1), and linear displacement of the attached payload in the horizontal direction (h2)].
PID control design
The control strategy, based on a feedback control mechanism, mainly consists of five control loops for controlling the TWRM using the PID control scheme demonstrated in Fig. 3. The angular position of the IB is controlled by measuring the error in the tilt angle of the IB. Of the five feedback control loops, two are designed to control the position of the object, taking the error in the object's position as input and the actuation force as output. The two remaining control loops are designed to mobilize the TWRM to follow a certain planar motion in the XY plane; for these two feedback loops, the error in the angular position of each wheel is taken as input. Referring to Fig. 3, both the linear actuator forces (F1, F2) and the driving torques of the right and left wheels' motors (τR, τL) are defined as inputs to the TWRM. Since the TWRM is designed for picking and/or placing applications, two switching mechanisms are added to the system to ensure that stability is reached before the object handling task proceeds and to prevent any disturbance at the start of operation, caused by lifting an object, from affecting the control effort. The mechanisms are designed so that the linear actuators activate only when the TWRM's IB reaches the stable upright position.
Fig. 3 Simulink model of the PID controller implementation [24]
BFO-PID control design
This part deals with applying the bacterial foraging optimization technique to the five-DOF TWRM's PID control scheme, employed at earlier stages of this research, in order to control the vehicle by maintaining the TWRM's IB in the upright position while counteracting the disturbances occurring under various motion scenarios. The main BFO parameters are listed in Table 3, whereas Fig. 4 shows the algorithm's flowchart.
Table 3 BFO algorithm parameters [5]
Fig. 4 Flowchart of bacterial foraging optimization (BFO) [5]
In applying optimization techniques, the most crucial part is to select the objective functions that will be employed to evaluate the fitness. The objective functions can be created using performance indices that evaluate the errors of the controlled loops. These performance indices, which have been utilized to optimize the system's errors, can be defined as follows:
Mean of the squared error (MSE).
Integral of time multiplied by absolute error (ITAE).
Integral of absolute magnitude of the error (IAE).
Integral of the squared error (ISE).
Integral of time multiplied by the squared error (ITSE).
Based on the study conducted by Goher and Fadlallah [25], the best-optimized PID controller was the one tuned with IAE, owing to its low percentage overshoot and minimum settling time. The MATLAB/Simulink model of the BFO-PID control method built to control the TWRM is illustrated in Fig. 5. Table 4 lists the boundary limits of the controller gain parameters for each of the five control loops, which are implemented in the MATLAB/Simulink environment in order to optimize these gains.
Fig. 5 Simulink model of the PID controller optimized by BFO
Table 4 Controller gain parameters boundary limits
PD-FLC control design
For the five-DOF TWRM, the authors propose a control scheme consisting of a robust PD-like fuzzy logic control strategy (FLC), as demonstrated in Fig. 6, with five independent control loops designed to control the vehicle under multiple motion scenarios. A simple Mamdani fuzzy approach is implemented in the control of the two-wheeled robotic machine, where the inputs are the angle and angular velocity and the output is a multiplication factor. This factor is multiplied with the potentiometer data and affects the velocities of both the right and left wheels of the TWRM. The vehicle's pitch angle and angular velocity feedback values are combined by the fuzzy controller, whose output is a multiplication factor representing each wheel's actuation value. Both the wheels' angular velocity and the pitch angle are described by five membership functions each. It is worth mentioning that the steering system's value impacts each wheel (left and right) independently but simultaneously. The multiplication factor consists of five membership functions spanning 0 to 1 [i.e., negative big (NB), negative small (NS), zero (Z), positive small (PS), and positive big (PB)]. The fuzzy output is multiplied with the steering value, so it has two conditions, one for each of the right and left wheels. These data are combined in order to balance the vehicle's IB while performing left and right turns. The complete set of rules implemented for the five-DOF TWRM is listed in Table 5.
Fig. 6 Simulink model of the FLC implementation
Table 5 Rules of navigation using fuzzy logic
Comparison between implementation of PID, BFO-PID, and PD-FLC
This section carries out a system response comparison, for various motion scenarios, between the three implemented control methods: the PID controller, the bacterial foraging-optimized PID controller, and the PD-like fuzzy logic controller. Table 6 lists the control gain parameters utilized in each control loop for the three control methods, chosen to attain satisfactory system performance.
Table 6 Gain values for the three control schemes
Figures 7, 8, 9, 10, and 11 illustrate the simulation output results of the two-wheeled robotic machine mathematical model, including the applied control effort, for five different case scenarios: payload free movement, payload vertical movement only, payload horizontal movement only, simultaneous horizontal and vertical motion, and 1-m straight-line vehicle motion. As visualized in these figures, the BFO-PID control scheme exhibits superior performance and optimized behavior compared to the PID and PD-like FLC control methods. It is also observable that the controller optimized by the BFO algorithm reduces the applied input forces required to stabilize the robotic machine.
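To make the BFO-PID tuning loop concrete, the sketch below evaluates one of the error-based indices listed above (IAE) for a candidate set of PID gains on a toy second-order plant; an optimizer such as BFO would call this fitness function repeatedly and keep the gains giving the smallest cost. The plant model and the gain values are placeholders, not the TWRM dynamics or the gains reported in Table 6.

```python
import numpy as np

def simulate_pid(kp, ki, kd, dt=0.001, t_end=3.0):
    """Closed-loop step-response error of a toy second-order plant under PID control."""
    wn, zeta = 4.0, 0.1                      # placeholder plant parameters
    y, ydot, integ, prev_e = 0.0, 0.0, 0.0, 1.0
    errors = []
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                          # unit step reference
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        yddot = u - 2 * zeta * wn * ydot - wn**2 * y
        ydot += yddot * dt
        y += ydot * dt
        errors.append(e)
    return np.asarray(errors)

def iae(errors, dt=0.001):
    """Integral of absolute error: one of the candidate fitness functions."""
    return float(np.sum(np.abs(errors)) * dt)

# A BFO-style tuner would minimize this fitness over (kp, ki, kd).
print(iae(simulate_pid(kp=20.0, ki=5.0, kd=2.0)))
```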
Fig. 7 System output and input forces comparison for payload free movement (h1 = h2 = 0). a System output and b system input forces
Fig. 8 System output and input forces comparison for payload vertical movement only. a System output with moving h1 and b system input forces
Fig. 9 System output and input forces comparison for payload horizontal movement only. a System output with moving h2 and b system input forces
Fig. 10 System output and input forces comparison for simultaneous vertical and horizontal motion (h1 and h2 ≠ 0). a System output with moving h1 and h2 and b system input forces
Fig. 11 System output and input forces comparison for a 1-m straight-line motion. a System output: straight line for 1 m, and b system input forces: straight line for 1 m
Taking the first motion scenario, payload free movement (h1 = h2 = 0) (Fig. 7), as an example, Table 7 lists a performance comparison between the PID, BFO-PID, and PD-FLC control methods characterized by percentage overshoot and settling, rise, and peak times. Beginning with the system's percentage overshoot, the PID controller optimized by the bacterial foraging algorithm gives a better overshoot value (27.9%), which is much lower than the overshoot values recorded for both the PID and PD-like FLC control schemes, 48.1% and 38.6%, respectively. As for the system's settling time, the PID-BFO-based control strategy settles the vehicle in 0.78 s, roughly three times faster than the PID control method (2.287 s) and about twice as fast as the PD-FLC scheme (1.441 s). Moving to the rise time values, the best result is given by the PD-FLC (0.217 s), followed by the BFO-PID method (0.23 s) and finally the PID control scheme (0.2790 s); the rise time values are almost the same for all methods, with only small differences between them. As for the peak time values, the PID controller has the highest peak time (0.5710 s), whereas the PD-FLC method's value is the lowest but almost the same as that of the BFO scheme (0.4 s).
Table 7 System performance comparison between PID, PID-BFO, and PD-FLC control methods
A notable phenomenon was observed in the payload horizontal movement only case (Fig. 9) and the simultaneous horizontal and vertical motion case (Fig. 10): the TWRM's stability was disturbed by the horizontal actuator's activation, and the vehicle continued maneuvering instead of maintaining its initial position. This issue was compensated only by the BF-optimized PID controller, which produced satisfactory performance and robustness against the disturbance excited by the horizontal actuator's activation.
Investigating real path trajectory with payload mass
Since the TWRM is developed to be employed in industrial applications, Fig. 12 demonstrates an application in which the robot maneuvers in a straight line and then activates both the vertical and horizontal actuators in order to pick an object and return it to its initial position. As can be seen in Fig. 12a, the robot starts moving in a straight line after achieving stabilization, and the controllers act to maintain the robot's stability. When the robot handles the load object, the stability of the system is not affected; the controllers therefore provide good performance. Based on Fig. 12b, which represents the applied forces of the actuators, the PID control method consumes more force than that applied by either BFO-PID or PD-FLC.
Fig. 12 System output and input forces comparison for real path trajectory with payload mass.
a System output: real path motion and b system input forces: real path motion
Control system robustness investigation
For the three proposed control methods, the TWRM stability was tested against the impact of the disturbance force shown in Fig. 13a, and the system performance is illustrated in Fig. 13b, c. As can be seen, for all three control approaches the vehicle reached its stability region about the vertical axis within a few seconds. However, the BF-optimized PID control method surpassed both the PID and PD-FLC approaches in withstanding the impact of the disturbance on the vehicle wheels' displacements (δR, δL) and the horizontal linear actuator displacement (h2). Therefore, in terms of robustness and instability minimization, the BF-optimized PID control approach has a superior performance.
Fig. 13 Response of system output and input forces by applying a disturbance force. a Disturbance force applied, b the system output simulation with disturbance force, and c the system input forces with disturbance force
Conclusions
The proportional–integral–derivative (PID) control scheme, bacterial foraging optimization (BFO) of the PID control method, and the fuzzy logic control (FLC) method have been applied to a novel five-DOF two-wheeled robotic machine (TWRM), and their performance has been compared in order to determine the optimum control strategy that provides the best stabilization performance for the system. The proposed TWRM's nonlinear equations of motion have been derived using the Lagrangian modeling approach and simulated in the MATLAB/Simulink® environment. Based on the simulation results of the five case scenarios (i.e., payload free movement, payload vertical movement only, payload horizontal movement only, simultaneous horizontal and vertical motion, and 1-m straight-line vehicle motion), the BFO-PID control scheme has a superior performance compared to the other two control methods. This performance is reflected in the reduction in percentage overshoot, rise time, and applied input forces. The same performance was also exhibited by the BFO-PID method when the system was tested against external disturbance forces. Despite the satisfactory performance of the system using the BFO technique, BFO has a slow convergence speed and a longer computation time, which makes its implementation unrealistic for real-time tuning in complex real-world problems. In this research, only simulation scenarios have been considered, and hence little attention has been given to the limitations of BFO. Future work will consider implementing and comparing various optimization techniques, such as the genetic algorithm (GA), spiral dynamics (SD), hybrid spiral dynamics bacterial chemotaxis (HSDBC), and particle swarm optimization (PSO), for optimizing the TWRM's PID controller gains in order to improve the system's stabilization performance. Furthermore, the robustness of the system will be investigated not only in the application scenario but also in the system itself: by changing the system's physical parametric specifications, the performance of the proposed control methods under different system parameters will be evaluated. Moreover, the TWRM's hardware model can be built, and the performance of the control approaches implemented on the system can be examined against real disturbance forces in real industrial applications.
Chan RPM, Stol KA, Halkyard CR. Review of modelling and control of two-wheeled robots. Annu Rev Control. 2013;37(1):89–103.
Ren TJ, Chen TC, Chen CJ. Motion control for a two-wheeled vehicle using a self-tuning PID controller. Control Eng Pract. 2008;16:365–75.
Olivares M, Albertos P. Linear control of the flywheel inverted pendulum. ISA Trans. 2014;53(5):1396–403.
Wang J. Simulation studies of inverted pendulum based on PID controllers. Simul Model Pract Theory. 2011;19(1):440–9.
Passino KM. Biomimicry of bacterial foraging for distributed optimization and control. In: Proceedings of the IEEE control system magazine; 2002. p. 52–67.
Kalaam RN, Hasanien HM, Al-Durra A, Al-Wahedi K, Muyeen SM. Optimal design of cascaded control scheme for PV system using BFO algorithm. In: International conference on renewable energy research and applications (ICRERA), Palermo; 2015. p. 907–12.
Supriyono H, Tokhi MO. Parametric modelling approach using bacterial foraging algorithms for modelling of flexible manipulator systems. Eng Appl Artif Intell. 2012;25(5):898–916.
Almeshal A, Goher K, Alenezi MR, Almazeed A, Almatawah J, Moaz M. BFA optimized intelligent controller for path following unicycle robot over irregular terrains. Int J Curr Eng Technol. 2015;5(2):1199–204.
Nasir ANK, Tokhi MO. A novel hybrid bacteria-chemotaxis spiral-dynamic algorithm with application to modelling of flexible systems. Eng Appl Artif Intell. 2014;33:31–46.
Nasir ANK, Tokhi MO, Ghani NMA. Novel hybrid bacterial foraging and spiral dynamics algorithms. In: 13th UK workshop on computational intelligence (UKCI), Guildford; 2013. p. 199–205.
Nasir ANK, Tokhi MO, Ghani NMA. Novel adaptive bacterial foraging algorithms for global optimisation with application to modelling of a TRS. Expert Syst Appl. 2015;42(3):1513–30.
Agouri SA, Tokhi MO, Almeshal AM, Goher KM. BFA optimisation of control parameters of a new structure two-wheeled robot on inclined surface. In: Paper presented at the nature-inspired mobile robotics: proceedings of the 16th international conference on climbing and walking robots and the support technologies for mobile machines, CLAWAR 2013; 2013. p. 189–96.
Al-rashid N, Alfarsi Y, Al-Khudhier H. Application of constrained quadratic adaptive bacterial foraging optimisation algorithm on a single link inverted pendulum. Int J Curr Eng Technol. 2015;5(5):3301–4.
Jain T, Patel V, Nigam MJ. Implementation of PID controlled SIMO process on FPGA using bacterial foraging for optimal performance. Int J Comput Electr Eng. 2009;1(2):107–10.
Zadeh LA. Fuzzy sets. Inf Control. 1965;8:353–83.
Czogała E, Mrózekb A, Pawlakc Z. The idea of a rough fuzzy controller and its application to the stabilization of a pendulum-car system. Fuzzy Sets Syst. 1995;72:61–73.
Cheng F, Zhong G, Li Y, Xu Z. Fuzzy control of a double inverted pendulum. Fuzzy Sets Syst. 1996;79:315–21.
Xu J, Guo Z, Heng T. Synthesized design of a fuzzy logic controller for an underactuated unicycle. Fuzzy Sets Syst. 2012;207:77–93.
Azizan H, Jafarinasab M, Behbahani S, Danesh M. Fuzzy control based on LMI approach and fuzzy interpretation of the rider input for two wheeled balancing human transporter. In: Proceeding of the 8th IEEE international conference on control and automation (ICCA); 2010. p. 192–7.
Yue M, Wang S, Sun JZ. Simultaneous balancing and trajectory tracking control for two-wheeled inverted pendulum vehicles: a composite control approach. Neurocomputing. 2016;191:44–54.
Amir D, Chefranov AG. An effective hybrid swing-up and stabilization controller for the inverted pendulum-cart system.
In: IEEE international conference on automation quality and testing robotics (AQTR); 2010. p. 1–6. Yue M, An C, Du Y, Sun J. Indirect adaptive fuzzy control for a nonholonomic/underactuated wheeled inverted pendulum vehicle based on a data-driven trajectory planner. Fuzzy Sets Syst. 2016;290:158–77. Tinkir M, Onen U, Kalyoncu M, Botsali FM. PID and interval type-2 fuzzy logic control of double inverted pendulum system. In: The 2nd international conference on computer and automation engineering (ICCAE); 2010, p. 117–21. Goher KM. A two-wheeled machine with a handling mechanism in two different directions. Robot Biomim. 2016;3(17):1–22. Goher KM, Fadlallah SO. Bacterial foraging-optimized PID control of a two-wheeled machine with a two-directional handling mechanism. Robot Biomim. 2017;4(1):1. KMG initiated the concept of two-wheeled machine with the two-direction handling mechanism. He derived the mathematical model in the linear and nonlinear forms. KMG simulated the system model and designed and implemented the control approach. SOF helped in writing the final format of the paper and analyzing and interpreting the results. SOF also led the work during the revision process and responded to the reviewer's comments. Both authors read and approved the final manuscript. The authors of this paper would like to thank the University of Lincoln for offering the funding support for this publication. This research has been funded by Sultan Qaboos University (Oman) for the simulation studies and the University of Lincoln (UK) for publication charges. School of Engineering, University of Lincoln, Lincoln, UK K. M. Goher Mechanical Engineering Department, Auckland University of Technology, Auckland, New Zealand S. O. Fadlallah Correspondence to K. M. Goher. Goher, K.M., Fadlallah, S.O. PID, BFO-optimized PID, and PD-FLC control of a two-wheeled machine with two-direction handling mechanism: a comparative study. Robot. Biomim. 5, 6 (2018). https://doi.org/10.1186/s40638-018-0089-3 Two-wheeled machine Two-direction handling BFO
Effects of transforming growth factor β-1 infected human bone marrow mesenchymal stem cells on high- and low-metastatic potential hepatocellular carcinoma
Tianran Li1,2, Shaohong Zhao2, Bin Song1, Zhengmao Wei3, Guangming Lu4, Jun Zhou1 & Tianlong Huo3
This study investigates the effects of human bone marrow-derived mesenchymal stem cells (hMSC) on the migration and proliferation ability of hepatocellular carcinoma (HCC) cells with high and low metastatic potential. hMSC and transforming growth factor-β1 (TGFβ-1) gene infected hMSC were co-cultured with hepatoma cells. Cell migration was assessed by Transwell assay, and cell proliferation was detected using the CCK-8 assay. Mice were engrafted with hMSC and TGFβ-1 gene infected hMSC, respectively, 15 days after hepatoma cell inoculation, twice a week for 6 consecutive weeks, and the tumor inhibition rate was calculated. TGFβ-1, osteopontin (OPN), and programmed cell death protein 4 (PDCD4) gene expression in hepatoma cells was detected by quantitative real-time polymerase chain reaction (qPCR) before and after the co-culture experiments. Co-culture with TGFβ-1 infected hMSC or with hMSC significantly promoted hepatoma cell proliferation (P < 0.05). The migration numbers of hepatoma cells in the TGFβ-1 infected hMSC co-culture groups were significantly reduced compared with the other two groups (P < 0.05). The tumor weight inhibition rates of the MHCC97-H and MHCC97-L animal models were highest in the third week after hMSC engraftment, whereas after TGFβ-1 infected hMSC engraftment the highest tumor inhibition rate was observed in the fourth week for the MHCC97-H models and in the fifth week for the MHCC97-L models. OPN gene relative expression in hepatoma cells was significantly down-regulated after co-culture with hMSC and with TGFβ-1 gene infected hMSC (P < 0.05). TGFβ-1 gene relative expression in MHCC97-H and MHCC97-L cells was significantly up-regulated after co-culture with TGFβ-1 gene infected hMSC (P < 0.05). PDCD4 expression showed no statistically significant differences among groups. hMSC and TGFβ-1 gene infected hMSC can promote hepatoma cell proliferation and inhibit hepatoma cell migration, and both exhibit anti-tumor activity in a time-dependent manner. The TGFβ-1 cytokine may be the main factor in HCC proliferation, while OPN makes a significant contribution to the changes in hepatoma cell metastasis.
Hepatocellular carcinoma (HCC) is one of the most common cancers, with poor prognosis and a high recurrence rate, and metastatic recurrence is the major obstacle to improving the prognosis of HCC patients. The human bone marrow-derived mesenchymal stem cell (hMSC) is a type of adult stem cell with multilineage differentiation potential. Previous studies have reported that hMSCs can inhibit the growth of hepatocellular carcinoma tissue and repair impaired liver tissue; hence, hMSCs have become a research hot-spot for the treatment of hepatocellular carcinoma [1, 2]. However, there is also an opposite view: some researchers believe that hMSC can promote the growth and metastasis of hepatocellular carcinoma cells [3]. Furthermore, metastasis is a key factor in the prognosis of hepatocellular carcinoma, so observing the effect of hMSCs on the metastatic ability of HCC is important for extending the life-span of HCC patients. Enhancing the ability of hMSC to resist HCC tissue growth and metastatic potential through genetic engineering technology has been regarded as a feasible approach.
It has been reported that transforming growth factor beta (TGFβ) is a multifunctional cytokine family that mainly regulates cell proliferation, differentiation, and embryo development, promotes the formation of extracellular matrix, and inhibits the immune response. Among the members of this family, TGFβ-1 is the principal isoform in human serum and is closely associated with the occurrence and development of tumors. TGFβ-1 plays dual roles, both inhibiting and promoting tumor growth. At the early stage of tumorigenesis, TGFβ-1 inhibits normal cell growth and tumorigenesis by suppressing the G1/S phase transition [4, 5]. However, as the tumor develops it becomes insensitive to TGFβ-1-mediated growth inhibition, and as a tumor growth-stimulating factor, TGFβ-1 then plays an important role in tumor growth, invasion, and metastasis. Therefore, to investigate the intervention effects of hMSC on hepatocellular carcinoma tissue, in this study TGFβ-1 was infected into human bone marrow-derived mesenchymal stem cells using transgenic technology, based on the biological characteristics of TGFβ-1. Our study mainly focuses on the effects of TGFβ-1 infected hMSC on HCC cells with high and low metastatic potential, and the roles of hMSC in HCC progression in vitro are determined. Our data provide a preliminary exploration of treatment against hepatocellular carcinoma metastasis using hMSC modified by genetic engineering technology.
Fetal calf serum (FCS) was purchased from PAN (Aidenbach, Germany), and 10× trypsin was supplied by PAA (Pasching, Austria). Dulbecco's modified Eagle's medium (DMEM) was supplied by Dulbecco's, and α-minimum essential medium (MEM) was supplied by ATCC. Human hepatocellular carcinoma cell lines with high metastatic potential (MHCC97-H) and low metastatic potential (MHCC97-L) were provided by the Liver Cancer Institute of Fudan University (Shanghai, China). hMSC were purchased from Cyagen Biotech Co. Ltd. (Guangzhou, China). Thirty-six 6-week-old specific-pathogen-free (SPF) grade nude mice of the Balb/c strain (female to male ratio = 1), each weighing 14–17 g, were purchased from the Laboratory Animal Center of the National Institutes for Food and Drug Control (Beijing, China). Animals were kept within the animal care facility of the Peking University Health Science Center. The experiments conform to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 85–23, revised 1996). The housing, care, and procedures in the study were performed in accordance with the guidelines and regulations of the Animal Care Committee of the Peking University Health Science Center and approved by the Institutional Animal Care and Use Committee of the Peking University Health Science Center, China.
Hepatoma cells and hMSCs culture
MHCC97-H and MHCC97-L cells were routinely cultured in 75-cm² culture flasks in high-glucose DMEM supplemented with 10 % FCS, 1 % L-glutamine, and 1 % penicillin/streptomycin at 37 °C and 5 % CO2 in a humidified incubator. The medium was replaced at 50 % cell confluence. Tumor cells were passaged once (at 80 % cell confluence), cultured for a further 2 days, and then trypsinized for the subsequent implantation studies. Tumor cells from the same passage were used for all the implantation experiments.
The hMSCs were washed once with α-MEM and seeded at a concentration of 1 × 10^6 cells/cm² per 100-mm cell culture dish (Corning, Corning, NY) in α-MEM medium containing 10 % fetal calf serum, nonessential amino acids (Cellgro, Herndon, VA), and pyruvate (Invitrogen, Carlsbad, CA) and cultured at 37 °C in 5 % CO2. After 48 h, nonadherent cells were removed, fresh medium was added, and the culture was maintained for 7 days. Before infusion, cells were washed twice with phosphate buffer solution (PBS) and harvested using magnetic beads (Miltenyi Biotech, Auburn, CA). Cells were resuspended at a concentration of 5 × 10^6 cells/ml. Expanded cells that displayed the morphological, immunophenotypical, and differentiation properties of mesenchymal stem cells were identified again. Flow cytometry was used to observe the cluster of differentiation (CD) markers of hMSC. Cy3 and Hoechst 33342 (Sigma) were used to label hMSC in order to observe proliferation under fluorescence microscopy; the Cy3 excitation wavelength is 550 nm and its emission wavelength 565 nm, while the Hoechst 33342 excitation wavelength is 350 nm and its emission wavelength 461 nm.
TGFβ-1 gene infecting hMSC
First is the construction of the retroviral vector. The pLV.EX3d.P/neo-EF1A > TGFβ-1 > IRES/eGFP shuttling plasmid was constructed using Gateway technology. Overlap extension quantitative real-time polymerase chain reaction (qPCR) was used to amplify attB1-Kozak-TGFβ-1-attB2. Primer sequences were as follows: attB1-K-TGFβ-1, 5′-GGGGACAAGTTTGTACAAAAAAGCAGGCTGCCACCATGCCGCCCTCCGGGCTG-3′; attB2-TGFβ-1, 5′-GGGGACCACTTTGTACAAGAAAGCTGGGTTCAGCTGCACTTGCAGGAGCG-3′. Subsequently, the pDown-TGFβ-1 plasmid and the pLV.EX3d.P/neo-EF1A > TGFβ-1 > IRES/eGFP plasmid were constructed. Positive clones were screened using colony PCR, and the positive plasmids were further sequenced.
Second, lentivirus was produced. Totally, 5 × 10^6 well-grown 293FT cells were counted and seeded into a 10-cm culture dish overnight. The old medium was removed, and 5 ml of DMEM containing 10 % FCS was added. Plasmid pLV.EX3d.P/neo-EF1A > TGFβ-1 > IRES/eGFP was added into the culture medium. The medium was replaced 24 and 48 h after the infection, respectively. The viruses were collected and concentrated 72 h after the infection, and the viral titre was then measured. Moreover, the cells were stained with crystal violet after drug screening.
Third is the TGFβ-1 infection of hMSC. One milliliter of fresh complete medium of hMSCs was added into each well. Totally, 30 μl of virus was used to infect the cells in one well, and the other well was set as a blank control. The cells were cultured after mixing. The complete medium containing viruses was removed after 16 h of infection, and the cells were washed with PBS twice. The cells were continuously cultured for 72 h, and the infection efficacy was observed using a fluorescence microscope. The cells were used for PCR and western blot analysis when the infection rate met the requirements. hMSCs were also observed using immunofluorescence before and after the infection.
Assessment of tumor cell proliferation
The cell counting kit-8 (CCK-8) assay was used to evaluate the proliferation ability of hepatoma cells; because the dimethyl sulfoxide (DMSO) used in the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay would affect the culture membrane, cell proliferation was detected using the CCK-8 assay instead. Totally, 2 × 10^3 tumor cells (75 μl) were seeded into each upper chamber in a 96-well plate after the hMSC cells had adhered to the lower chamber.
After 48 h of co-culture, the medium in each well was topped up to 80 μl in total, and 8 μl of CCK-8 was added into each upper-chamber well. The mixture was incubated at 37 °C for 4 h. After the incubation, the optical density (OD) at 490 nm was measured using an ELISA reader. The experiment was repeated six times.
Assessment of tumor cell migration
The basement membrane (BM) is a specialized form of extracellular matrix (ECM) and a major barrier to tumor cells during metastasis. The Transwell assay was used to evaluate the ability of hepatoma cells to break through the basement membrane (i.e., to migrate). ECM gel (Sigma, Switzerland) was precooled at 4 °C for 2 h, and 50 μl of ECM gel was added into the upper chamber. The chamber was incubated at 37 °C in a humidified atmosphere of 5 % CO2 until the gel solidified. The tumor cells were cultured in serum-free medium for 24 h and counted after digestion. These cells were then resuspended in medium supplemented with 0.1 % FCS, and the cell concentration was adjusted to 1 × 10^6 cells/ml. Totally, 10,000 human mesenchymal stem cells were seeded in the lower chamber in 500 μl of medium supplemented with 20 % FCS, and 100 μl of hepatoma cell suspension at a concentration of 1 × 10^6 cells/ml was added into the upper chamber. The ECM gel chamber was incubated at 37 °C in a humidified atmosphere of 5 % CO2 for 36 h, after which the cells on the upper surface of the chamber were removed. Next, 500 μl of MTT (0.5 mg/ml) was added into each well of a 24-well plate, and the chambers were immersed in the medium at 37 °C for 4 h. Then, 500 μl of DMSO was added into each well, and the chambers were immersed in DMSO and shaken for 10 min until the formazan crystals were completely dissolved. The chambers were taken out, and 100 μl of DMSO was transferred into each well of a 96-well plate. The tumor cells remaining on the ECM gel and in the upper chamber were removed using cotton swabs, and the migrated tumor cells were stained with 0.1 % crystal violet and counted under a light microscope, with the chamber everted to clearly observe the tumor cells on the lower side. The experiment was repeated six times.
Animal procedures
For each of the MHCC97-H and MHCC97-L models, mice were randomly assigned to one of two groups (n = 15 animals per group), and all mice were weighed and numbered. The mice in the experimental group were engrafted with hMSCs (5 × 10^5 cells per mouse) via the tail vein 15 days after inoculation of tumor cells, twice a week for 6 consecutive weeks, while the animals in the control group were injected with hMSC culture medium (0.2 ml per mouse) via the tail vein at the same time points. The subcutaneous tumor size was measured using an electronic digital caliper once every 4 days after hMSC engraftment. At 2, 3, 4, 5, and 6 weeks after tumor cell inoculation, the mice were killed and the tumors were collected in their entirety. The tumor weight and body weight of the mice were measured. The tumor inhibition rate was calculated using the following formula:
$$ \mathrm{Tumor\ inhibition\ rate}\ (\%)=\left[1-\frac{E\ (\mathrm{g})}{C\ (\mathrm{g})}\right]\times 100\% $$
where E (g) is the mean tumor weight in the experimental group and C (g) is the mean tumor weight in the control group.
Relative quantification using quantitative RT-PCR analysis
The primer sequences were obtained from the GenBank database and synthesized by Cyagen Biotech Co. Ltd. (Guangzhou, China). The primer sequences of these genes were as follows (see Table 1).
Table 1 Invasion- and proliferation-related gene relative quantitative expression of hepatoma cells
β-Actin and GAPDH were used for normalization. Totally, 3 × 10^4 hMSC cells were seeded into the lower chamber of a 12-well plate after MHCC97-H cells had adhered to the upper chamber. After 48 h of co-culture, cells from the upper chamber were harvested for PCR detection. Total RNA was extracted, and cDNA was obtained using the RevertAid™ M-MLV RT system and Oligo (dT). The specific cDNA was amplified using PCR, and the profile was as follows: 95 °C for 5 min, followed by 30 cycles of 94 °C for 30 s, 58 °C for 30 s, and 72 °C for 30 s, with a final extension at 72 °C for 10 min. The PCR products were semi-quantitatively analyzed using gel electrophoresis.
Data were analyzed using Statistical Product and Service Solutions software (SPSS, version 16.0). Results were expressed as mean ± standard error of the mean (SEM). Differences between two groups were evaluated with the unpaired Student's t test, and analysis of variance (ANOVA) was used to determine statistical differences among multiple groups. A p value < 0.05 was considered significant.
Mesenchymal stem cells identification results
The hMSC growth curve was drawn from the first day the cells were recovered. The morphology, growth, and proliferation of the cells were observed during continuous culture in vitro, and the cell doubling time was calculated. According to the growth curve data, the cell doubling time was 26 h. After five passages, the cells still retained their vigor, and when seeded at a 1:2 split ratio the cells reached confluence within 72 h. On morphological observation under light and fluorescence microscopes (Fig. 1a, b), the hMSCs were spindle-shaped, uniform in size, and arranged with clear polarity; the distribution of collagen in the cytoplasm and the nuclear shape were normal. The expression of the surface antigens CD29, CD44, and CD105 on these cells was detected by flow cytometry, and the cells did not express the CD45 and CD14 surface antigens. Thus, the experimental hMSC conformed to the standards of the International Society for Cellular Therapy position statement (2006) [6].
Fig. 1 hMSC type I collagen Cy3 immunofluorescence staining and Hoechst 33342 nuclear staining. a Red represents the Cy3 immunofluorescence-positive cell skeleton; hMSCs are spindle-shaped, uniform in size, and arranged with clear polarity. b Blue represents the hMSC nuclei, which are round to oval in shape. Scale bars = 50 μm for (a–b)
TGFβ-1 gene infection of hMSC
First, the morphological change of infected hMSCs was observed. Green fluorescent protein (GFP) was used as a reporter gene, and the target gene infected hMSC were observed under the fluorescence microscope and compared with hMSC without gene infection (Fig. 2a, b).
Fig. 2 Imaging of hMSC infected with the TGFβ-1 gene and the GFP reporter gene, observed under fluorescence and inverted microscopes. a hMSCs carrying the green fluorescent protein (GFP) glow bright green. Original magnification. b Cells arranged regularly; the shape of gene-transduced hMSC was essentially the same as that of non-genetically modified cells. Original magnification. Scale bars = 100 μm for (a), 50 μm for (b)
Second, TGFβ-1 gene expression changes of hMSC were detected by qPCR before and after infection with the target gene. hMSC and GFP-infected hMSC served as the control groups; TGFβ-1-infected hMSC served as the experimental group (Fig. 3).
Fig. 3 TGFβ-1 expression changes detected using qPCR before and after hMSC infection with the target gene. The TGFβ-1 expression of the TGFβ-1 gene infected hMSC group was higher than that of the control groups (#p < 0.001), whereas the difference between the hMSC and GFP-infected hMSC groups was not statistically significant (p > 0.05)
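The qPCR comparisons in this and the following sections are expressed relative to control groups after normalization to β-actin/GAPDH. The paper does not spell out the calculation; a common choice is the 2^(−ΔΔCt) method, sketched below with invented Ct values purely for illustration.

```python
def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Fold change of a target gene versus a control sample using the
    2^(-ddCt) method, with a reference gene (e.g. GAPDH) for normalization."""
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: a target gene in co-cultured cells vs. cells alone.
fold = relative_expression(ct_target=26.8, ct_ref=18.0,
                           ct_target_ctrl=25.1, ct_ref_ctrl=18.1)
print(f"relative expression: {fold * 100:.0f}% of control")
```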
Figure 3 shows that the TGFβ-1 expression of the TGFβ-1 gene infected hMSC group was higher than that of the control groups (#p < 0.001); the TGFβ-1 expression of the experimental group was about seven times that of the blank control group, while the difference between the hMSC and GFP-infected hMSC groups was not statistically significant (p > 0.05). This indicated that TGFβ-1 gene infection of hMSC was successful and that the next step could proceed.
Assessment of hMSC on hepatoma cells proliferation
The effects of hMSC and TGFβ-1 infected hMSC on the proliferation ability of hepatoma cells (MHCC97-H and MHCC97-L) were detected using the CCK-8 (Cell Counting Kit-8) assay. The groups in which hMSC or TGFβ-1 infected hMSC were co-cultured with hepatoma cells served as experimental groups, and the MHCC97-H and MHCC97-L groups served as control groups (Figs. 4 and 5).
Fig. 4 TGFβ-1 infected hMSC co-cultured with MHCC97-H cells: proliferation test. The OD value of the TGFβ-1 infected hMSC co-culture with MHCC97-H group was significantly higher than that of the other groups (**p < 0.05). The OD value of the hMSC co-culture with MHCC97-H group was higher than that of the control group (*p < 0.05)
Fig. 5 TGFβ-1 infected hMSC co-cultured with MHCC97-L cells: proliferation test. The OD value of the TGFβ-1 infected hMSC co-culture with MHCC97-L group was significantly higher than that of the other groups (**p < 0.05). The OD value of the hMSC co-culture with MHCC97-L group was higher than that of the control group (*p < 0.05)
As shown in Figs. 4 and 5, hMSC, and especially TGFβ-1 gene infected hMSC, can promote MHCC97-H and MHCC97-L cell proliferation.
Assessment of hMSC on hepatoma cells migration
The two hepatoma cell lines were co-cultured with hMSC and TGFβ-1 infected hMSC, respectively, and hepatoma cells with culture medium alone acted as the control groups. The migratory hepatoma cells that broke through the basement membrane were stained using 0.1 % crystal violet, and the migration of hepatoma cells was evaluated by means of a quantitative counting procedure (Figs. 6 and 7).
Fig. 6 MHCC97-H cell migration counts before and after co-culture of TGFβ-1 infected hMSC with MHCC97-H. The migration numbers of hepatoma cells in the TGFβ-1 infected hMSC co-culture groups were significantly reduced compared with those of the control group (*p < 0.05)
Fig. 7 Cell migration counts before and after co-culture of TGFβ-1 infected hMSCs with MHCC97-L. The migration numbers of hepatoma cells in the TGFβ-1 infected hMSC co-culture groups were significantly reduced compared with the other two groups (**p < 0.05), and the TGFβ-1 infected hMSC co-culture group was significantly reduced compared with the hMSC co-culture group (*p < 0.05)
The migration numbers of MHCC97-H cells in the TGFβ-1 infected hMSC co-culture groups were significantly reduced compared with the control group (*p < 0.05). The migration numbers of MHCC97-L cells in the TGFβ-1 infected hMSC co-culture groups were significantly reduced compared with the other two groups (**p < 0.01), and the TGFβ-1 infected hMSC co-culture with MHCC97-L cells group was significantly reduced compared with the hMSC co-culture group (*p < 0.05).
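The group comparisons reported above (unpaired Student's t test and ANOVA, per the statistics section) can be illustrated with the sketch below; the OD readings are invented stand-ins for the CCK-8 measurements behind Figs. 4 and 5, and the analysis here uses SciPy rather than the SPSS workflow actually employed.

```python
import numpy as np
from scipy import stats

# Hypothetical CCK-8 OD readings (6 replicates per group); values are invented
# for illustration and do not reproduce the measurements behind Figs. 4-7.
control    = np.array([0.52, 0.55, 0.50, 0.53, 0.51, 0.54])
hmsc       = np.array([0.61, 0.63, 0.60, 0.64, 0.62, 0.65])
tgfb1_hmsc = np.array([0.70, 0.72, 0.69, 0.73, 0.71, 0.74])

# One-way ANOVA across the three groups, as described in the statistics section.
f_stat, p_anova = stats.f_oneway(control, hmsc, tgfb1_hmsc)

# Unpaired Student's t test for a single pairwise comparison.
t_stat, p_pair = stats.ttest_ind(tgfb1_hmsc, control)

print(f"ANOVA: F={f_stat:.1f}, p={p_anova:.4f}")
print(f"TGFβ-1 hMSC vs control: t={t_stat:.1f}, p={p_pair:.4f}")
```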
Animal model experiment results
First, the effects of hMSC on transplanted tumors in nude mice produced by inoculation of MHCC97-H and MHCC97-L cells were investigated. The experiment lasted for 6 weeks. Based on the weights of the excised tumor specimens before and after hMSC intervention, the tumor weight inhibition rate was calculated (Fig. 8).
Fig. 8 Changes in the tumor weight inhibition rate of the HCC animal models over time following engraftment of hMSC. The tumor weight inhibition rates of the MHCC97-H (blue) and MHCC97-L (green) animal models were highest in the third week after hMSC engraftment
The highest tumor inhibition rate was observed 3 weeks after hMSC engraftment, and the tumor inhibition rate gradually decreased as time went on. Second, the effects of TGFβ-1 infected hMSC on transplanted tumors in nude mice produced by inoculation of MHCC97-H and MHCC97-L cells were investigated. The experiment lasted for 6 weeks. Based on the weights of the excised tumor specimens before and after TGFβ-1 infected hMSC intervention, the tumor weight inhibition rate was calculated (Fig. 9).
Fig. 9 Changes in the tumor weight inhibition rate of the MHCC97-H and MHCC97-L animal models over time following engraftment of TGFβ-1 transfected hMSC. The tumor weight inhibition rate of the MHCC97-H animal models (blue) was highest in the fourth week and that of the MHCC97-L animal models (green) was highest in the fifth week after TGFβ-1 transfected hMSC engraftment
Compared with hMSC without TGFβ-1 gene transfection, the highest tumor inhibition rate of the MHCC97-H animal models was observed in the fourth week after TGFβ-1 infected hMSC engraftment, and the inhibition rate then gradually decreased over time. The highest tumor inhibition rate of the MHCC97-L animal models was observed in the fifth week after TGFβ-1 infected hMSC engraftment, and the inhibition rate likewise gradually decreased over time.
Relative quantification using real-time PCR analysis
MHCC97-H and MHCC97-L cells served as control groups, and the osteopontin (OPN), TGFβ-1, and programmed cell death protein 4 (PDCD4) gene expression levels in these hepatoma cells were set as 100 %. The relative expression levels of OPN, TGFβ-1, and PDCD4 in hepatoma cells co-cultured with hMSC and with TGFβ-1 gene infected hMSC are shown in Table 1. The relative OPN expression of MHCC97-H cells was significantly down-regulated after co-culture with hMSC (p < 0.05) and with TGFβ-1 gene infected hMSC (p < 0.01), whereas the relative OPN expression of MHCC97-L cells was significantly down-regulated only after co-culture with TGFβ-1 gene infected hMSC (p < 0.01). The relative TGFβ-1 expression of MHCC97-H cells was significantly up-regulated after co-culture with hMSC (p < 0.05) and with TGFβ-1 gene infected hMSC (p < 0.01), and the relative TGFβ-1 expression of MHCC97-L cells was also significantly up-regulated after co-culture with TGFβ-1 gene infected hMSC (p < 0.05). The relative PDCD4 expression of MHCC97-H cells was significantly down-regulated after co-culture with hMSC but significantly up-regulated after co-culture with TGFβ-1 gene infected hMSC (p < 0.05). The relative PDCD4 expression of MHCC97-L cells was down-regulated after co-culture with hMSC and up-regulated after co-culture with TGFβ-1 gene infected hMSC, but these differences were not statistically significant.
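The week-by-week inhibition rates plotted in Figs. 8 and 9 follow directly from the formula given in the Methods. A minimal helper is sketched below; the tumor weights are invented values used only to show the calculation.

```python
def tumor_inhibition_rate(mean_treated_weight_g: float, mean_control_weight_g: float) -> float:
    """Tumor inhibition rate (%) = [1 - E/C] * 100, with E and C the mean tumor
    weights (g) of the experimental and control groups at a given time point."""
    return (1.0 - mean_treated_weight_g / mean_control_weight_g) * 100.0

# Hypothetical weekly mean tumor weights (g) for one model, weeks 2-6.
treated = [0.45, 0.50, 0.75, 1.10, 1.60]
control = [0.60, 0.90, 1.20, 1.55, 2.05]
for week, (e, c) in enumerate(zip(treated, control), start=2):
    print(f"week {week}: inhibition rate = {tumor_inhibition_rate(e, c):.1f}%")
```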
Hepatocellular cancer (HCC) remains a common cause of cancer and cancer death worldwide, especially in China. HCC mortality is closely associated with primary organ failure, metastasis, and recurrence after surgical resection. In recent years, the use of mesenchymal stem cells (MSC) in the treatment of HCC has become a hot topic in medical research [7]. Human bone marrow stem cells mainly comprise two cell types: hematopoietic stem cells (HSCs) and bone marrow mesenchymal stem cells (hMSCs). hMSCs play multiple roles in tumor growth: (1) inhibition of tumor growth, as in lung cancer and liver cancer; (2) promotion of tumor growth, as in multiple myeloma and breast cancer; and (3) no obvious effect on tumor growth, as in colon cancer. In 1999, Petersen et al. reported for the first time that liver oval cells and liver cells in the rat can be differentiated from bone marrow cells [4]. Sato et al. divided human bone marrow cells into three types: hMSCs, CD34 cells, and hMSCs/CD34- cells. These three types of cells were respectively transplanted into rat livers that had been injured by allyl alcohol, and hMSCs were the main source of hepatocytes in the necrotic zone. Liver-specific markers were observed in these cells, and cell fusion was not observed [5]. Thereafter, in vivo and in vitro experiments demonstrated that hMSCs can differentiate into hepatocytes or hepatocyte-like cells [8]. Aggregates of cells containing both primary hepatoma cell lines and hMSC therefore replicate the natural environment of the tumor stroma and permit an evaluation of the metastatic behavior of the tumor and of treatment effects [9]. In our study, all hMSC used were kept within the tenth generation, and these cells were identified using surface antigens and an induced-differentiation assay before use. The hMSCs express CD29, CD44, and CD105 but not CD45 or CD14. Furthermore, these cells can differentiate into adipogenic cells, osteoblasts, and chondrocytes. Hence, the hMSCs were in line with the international standard [6]. MHCC97-H and MHCC97-L are liver cancer cell lines with different metastatic potential, established by the Liver Cancer Research Institute of Fudan University (Shanghai, China) [9, 10]. Moreover, experimental evidence supporting the use of hMSC as vehicles for therapeutic genes has been discussed in the literature. Because of its regenerative capacity and its particular immune properties, the liver is a good model for analyzing the potential of MSC-based therapies, and the potential application of hMSC and genetically modified hMSC in HCC has been proposed in view of the available evidence [7]. The purpose of our study is therefore to observe the effects of hMSC and genetically modified hMSC on hepatoma cells of different metastatic potential and on xenograft models. It has been reported [11, 12] that transforming growth factor beta (TGFβ) is a multifunctional cytokine family that mainly regulates cell proliferation, differentiation, and embryonic development, promotes the formation of extracellular matrix, and inhibits the immune response. Among the members of this family, TGFβ-1 is the principal isoform in human serum, and it is closely associated with the occurrence and development of tumors. TGFβ-1 plays dual roles, both inhibiting and promoting tumor growth: at the early stage of tumorigenesis, TGFβ-1 inhibits normal cell growth and tumorigenesis by suppressing the G1/S phase transition, but as the tumor develops it becomes insensitive to TGFβ-1-mediated growth inhibition.
Therefore, given that hMSC are good gene carriers, our aim was to observe the effects of exogenous TGFβ-1, expressed by gene-modified hMSC, on HCC in vitro, especially on the proliferation and invasion ability of tumor tissue and tumor cells. The results showed that the TGFβ-1 gene transduction rate in hMSC can reach more than 90 %. Imaging of the reporter gene eGFP showed that vectors carrying the target gene sequence had been successfully introduced into hMSC, and there was no obvious difference in cell morphology between hMSC and TGFβ-1-gene-infected hMSC. The aim was thus to observe how the high expression of TGFβ-1 achieved in hMSC by gene modification changes the proliferation and metastatic ability of hepatoma cells of different metastatic potential and of xenograft models. In this experiment, we successfully infected hMSC with the TGFβ-1 gene, yielding cells that highly express the TGFβ-1 cytokine. This study demonstrated that hepatoma cell proliferation increased after the cells were co-cultured with hMSC and with TGFβ-1-gene-transfected hMSC, suggesting that hMSC can promote hepatoma cell proliferation and that exogenous TGFβ-1 (highly expressed by TGFβ-1-gene-infected hMSC) can promote it further. Indeed, qPCR results showed that TGFβ-1 expression in hepatoma cells in the gene-infected hMSC and hMSC groups was higher than in the control groups. Our results are consistent with the findings of researchers at Fudan University (Shanghai, China) [10]. The study by Fierro et al. indicated that hMSCs act on tumor cells through a paracrine mechanism: hMSC promoted tumor cell proliferation by secreting several factors, including basic fibroblast growth factor (bFGF), platelet-derived growth factor B (PDGF-BB), transforming growth factor β1 (TGFβ-1), and vascular endothelial growth factor (VEGF) [13]. Zhang et al. also found that TGFβ-1 in paracancerous liver tissue was positively correlated with tumor size: higher production of TGFβ-1 in paracancerous liver tissue was always associated with larger liver tumors [14]. Thus TGFβ-1-gene-infected hMSC can promote cell proliferation even further than hMSC without gene infection. Furthermore, the TGFβ-1 and PDCD4 gene expression results obtained by qPCR showed that hMSC promoted hepatoma cell proliferation but had no obvious effect on hepatoma cell apoptosis. We can therefore conclude that hMSC infected with the TGFβ-1 gene promote hepatoma cell proliferation more markedly and do not affect hepatoma cell apoptosis; we speculate that this is the effect of the exogenous TGFβ-1. However, we still do not know how the biological mechanism by which exogenous TGFβ-1 (expressed by TGFβ-1-gene-infected hMSC) promotes hepatoma cell proliferation differs from that of endogenous TGFβ-1 (expressed by the hepatoma cells themselves). A change in cell migration represents a change in the metastatic ability of the cells, and tumor cell migration through the basement membrane is a vital step in tumor metastasis. With MHCC97-H and MHCC97-L cells used as control groups in the experiment, our data suggested that hMSC, especially TGFβ-1 genetically modified hMSC, inhibited hepatoma cell migration and decreased their metastatic potential. However, it has been reported [15] that exogenous TGFβ-1 cytokine can promote tumor migration and metastasis. How can these contrary results be explained? It is worth noting that OPN expression was down-regulated after hepatoma cells were co-cultured with hMSC and with TGFβ-1-gene-infected hMSC, which may explain the decrease in metastatic potential.
In the malignant setting, OPN expression by breast, colorectal, and hepatocellular cancer cell lines stimulates primary tumor proliferation, migration, invasion, angiogenesis, and metastasis. OPN bound to integrin receptors on the tumor cell surface promotes tumor cell adhesion and extracellular matrix degradation [16, 17]. OPN contains an RGD (Arg-Gly-Asp) sequence, which plays a key role in tumor invasion and metastasis. Previous studies found that integrins mediate the adhesion between HCC cells as well as between hepatoma cells and the extracellular matrix, and that integrins are involved in extracellular matrix degradation and hepatocellular carcinoma cell motility (thigmotaxis and chemotaxis) [18]. Another study showed that hMSC influence hepatoma cells by means of OPN expression: hepatoma cells grown with hMSCs for 12 h aggregated into cell clusters resembling hepatospheres, but cluster formation was inhibited in cultures in which extracellular OPN was ablated and also when cell surface αvβ3 integrin was blocked with an RGD peptide; hepatospheres are organizing structures found in hepatic cell culture [19]. Thus, in the absence of OPN expression, cancer cells are less viable away from the primary tumor environment and less capable of metastasis. No doubt, further studies will be necessary to clarify the underlying mechanisms and the relationship between TGFβ-1, OPN, and other biological factors. This paper presents a preliminary study of hMSC intervention in xenograft models, building on the cytological studies [20]. The most important finding of the study is that the effectiveness of hMSC intervention on HCC with high metastatic potential changes with time in animal models. The highest inhibition of HCC was observed in the third week after hMSC engraftment, while the highest tumor inhibition rate of the MHCC97-H animal models was observed in the fourth week, and that of the MHCC97-L animal models in the fifth week, after TGFβ-1-infected hMSC engraftment. In both cases the tumor inhibition rate gradually decreased as time went on: hMSC and TGFβ-1-infected hMSC exhibit anti-tumor activity in a time-dependent manner, and the activity against HCC gradually declines over time. With regard to the mechanism of hMSC intervention in HCC, studies have reported that hMSC colonize the liver in mouse models of orthotopic liver transplantation and can differentiate into hepatocytes that express albumin; in addition, hMSCs are found mainly in the marginal area of the tumor and are rarely present within tumors or in normal liver tissue, and a large necrotic area is observed in tumor tissue following hMSC transplantation [21]. In addition to homogeneous genetic characteristics and multilineage differentiation potential, hMSC also have high immunosuppressive and anti-infective activity and the characteristic of migrating to inflammatory and remodeling tissues (homing), which accelerates the repair of injured liver tissue, inhibits immune responses and confers anti-hepatic-fibrosis functions [22]. However, prior to this work, to the best of our knowledge, there had been no reports on the time-dependence of hMSC anti-tumor activity. Our studies demonstrated that hMSC and TGFβ-1-gene-infected hMSC can promote the proliferation of hepatoma cells of different metastatic potential and inhibit their migration.
In the animal model tests, we demonstrated that hMSC and TGFβ-1-gene-infected hMSC exhibit anti-tumor activity in a time-dependent manner, and that the activity against HCC gradually declines over time. The TGFβ-1 cytokine may be the main factor in HCC proliferation, while OPN makes a significant contribution to the changes in hepatoma cell migration. Whether gene-modified hMSC can serve as a novel method of intervention for HCC of different metastatic potential is the focus of future work.

Abbreviations: hMSC: human bone marrow-derived mesenchymal stem cell; HCC: hepatocellular carcinoma; Real-time PCR: real-time polymerase chain reaction; OPN: osteopontin; MHCC97-H: high-metastatic hepatocellular carcinoma cell 97; MHCC97-L: low-metastatic hepatocellular carcinoma cell 97; SPF: specific-pathogen-free; TGFβ-1: transforming growth factor-β1; CD: cluster of differentiation; PDCD4: programmed cell death protein 4.

References (numbered here in order of listing, matching the inline citations):
[1] Niess H, Bao Q, Conrad C, Zischek C, Notohamiprodjo M, Schwab F. Selective targeting of genetically engineered mesenchymal stem cells to tumor stroma microenvironments using tissue-specific suicide gene expression suppresses growth of hepatocellular carcinoma. Ann Surg. 2011;254:767–74.
[2] Aquino JB, Bolontrade MF, García MG, Podhajcer OL, Mazzolini G. Mesenchymal stem cells as therapeutic tools and gene carriers in liver fibrosis and hepatocellular carcinoma. Gene Ther. 2010;17:692–708.
[3] Choi D, Kim JH, Lim M, Song KW, Paik SS, Kim SJ, et al. Hepatocyte-like cells from human mesenchymal stem cells engrafted in regenerating rat liver tracked with in vivo magnetic resonance imaging. Tissue Eng Part C Methods. 2008;14:15–23.
[4] Petersen BE, Bowen WC, Patrene KD, Mars WM, Sullivan AK, Murase N, et al. Bone marrow as a potential source of hepatic oval cells. Science. 1999;284:1168–70.
[5] Sato Y, Araki H, Kato J, Nakamura K, Kawano Y, Kobune M, et al. Human mesenchymal stem cells xenografted directly to rat liver are differentiated into human hepatocytes without fusion. Blood. 2005;106:756–63.
[6] Dominici M, Le Blanc K, Mueller I. Minimal criteria for defining multipotent mesenchymal stromal cells. The International Society for Cellular Therapy position statement. Cytotherapy. 2006;8:315–7.
[7] Abd-Allah SH, Shalaby SM, El-Shal AS, et al. Effect of bone marrow-derived mesenchymal stromal cells on hepatoma. Cytotherapy. 2014;16:1197–206.
[8] Snykers S, Vanhaecke T, Papeleu P, Luttun A, Jiang Y, Vander Heyden Y, et al. Sequential exposure of cytokines reflecting embryogenesis: the key for in vitro differentiation of adult bone marrow stem cells into functional hepatocyte-like cells. Toxicol Sci. 2006;94:330–41.
[9] Li GC, Ye QH, Dong QZ, et al. TGF beta1 and related-Smads contribute to pulmonary metastasis of hepatocellular carcinoma in mice model. J Exp Clin Cancer Res. 2012;31:93.
[10] Li GC, Ye QH, Dong QZ, Ren N, Jia HL, Qin LX. Mesenchymal stem cells seldomly fuse with hepatocellular carcinoma cells and are mainly distributed in the tumor stroma in mouse models. Oncol Rep. 2013;29:713–19.
[11] Bollard CM, Rössig C, Calonge MJ, Huls MH, Wagner HJ, Massague J, et al. Adapting a transforming growth factor beta related tumor protection strategy to enhance antitumor immunity. Blood. 2002;99:3179–87.
[12] Meulmeester E, Ten Dijke P. The dynamic roles of TGF-β in cancer. J Pathol. 2011;223:205–18.
[13] Fierro FA, Kalomoiris S, Sondergaard CS, Nolta JA. Effects on proliferation and differentiation of multipotent bone marrow stromal cells engineered to express growth factors for combined cell and gene therapy. Stem Cells. 2011;29:1727–37.
[14] Zhang H, Wei S, Ning S, Jie Y, Ru Y, Gu Y. Evaluation of TGFbeta, XPO4, elF5A2 and ANGPTL4 as biomarkers in HCC. Exp Ther Med. 2013;5:119–27.
[15] Sun YF, Zhang CX, Qin YM. The serum levels of VEGF and TGF-β1 in patients with primary hepatic cancer. Chin J Clin Hepatol. 2010;26:205–7.
[16] Dai J, Peng L, Fan K, Wang H, Wei R, Ji G, et al. Osteopontin induces angiogenesis through activation of PI3K/AKT and ERK1/2 in endothelial cells. Oncogene. 2009;28:3412–22.
[17] Oates AJ, Barraclough R, Rudland JS. The role of osteopontin in tumorigenesis and metastasis. Invasion Metastasis. 1997;17:1–15.
[18] Giannelli G, Bergamini C. Human hepatocellular carcinoma (HCC) cells require both alpha3beta1 integrin and matrix metalloproteinases activity for migration and invasion. Lab Invest. 2001;81:613–27.
[19] Bhattacharya SD, Mi Z, Talbot LJ, Guo H, Kuo PC. Human mesenchymal stem cell and epithelial hepatic carcinoma cell lines in admixture: concurrent stimulation of cancer-associated fibroblasts and epithelial-to-mesenchymal transition markers. Surgery. 2012;152:449–54.
[20] Tianran L, Bin S, Xiangke D, Zhengmao W, Tianlong H. Effect of bone-marrow-derived mesenchymal stem cells on high-potential hepatocellular carcinoma in mouse models: an intervention study. Eur J Med Res. 2013;18:34.
[21] Choi D, Kim JH, Lim M, Song KW, Paik SS, Kim SJ, et al. Hepatocyte-like cells from human mesenchymal stem cells engrafted in regenerating rat liver tracked with in vivo magnetic resonance imaging. Tissue Eng Part C Methods. 2008;14:15–23.
[22] Yen ML, Chien CC, Chiu IM, Huang HI, Chen YC, Hu HI, et al. Multilineage differentiation and characterization of the human fetal osteoblastic 1.19 cell line: a possible in vitro model of human mesenchymal progenitors. Stem Cells. 2007;25:125–31.

Acknowledgements: The authors would like to thank Deng Yibing and Deng Ling, laboratory technicians (Cyagen GuangZhou Biosciences Inc.), for their excellent work in preparing the gene-modified hMSC and the PCR analyses for this article. The study was supported by the National Science Foundation of China (Grant No. 81271607). This work was also supported by the Natural Science Foundation of Fujian Province, China (Grant No. 2013 J01392).

Author affiliations: Department of Radiology, The 95th Hospital of PLA, 485 Dongyan Road, Putian, Fujian province, 351100, People's Republic of China (Tianran Li, Bin Song & Jun Zhou); Department of Radiology, The 304th Hospital of PLA, 51 Fucheng Road, Beijing, Haidian District, 100048, People's Republic of China (Tianran Li & Shaohong Zhao); Department of Radiology, Peking University People's Hospital, 11 South street of Xizhimen, Beijing, Xicheng District, 100048, People's Republic of China (Zhengmao Wei & Tianlong Huo); Department of Radiology, The Nanjing General Hospital of PLA, Nanjing, Jiangsu province, 21000, People's Republic of China (Guangming Lu). Correspondence to Guangming Lu.

Author contributions: LTR and LGM participated in the conception and design, administrative support, collection and/or assembly of data, data analysis and interpretation, manuscript writing, and final approval of the manuscript. ZSH contributed to reviewing the manuscript. ZJ contributed to the conception and design and administrative support. SB designed the clinical trial, collected patient data, and analyzed the data. HTL participated in collection of the data and drafted the manuscript. WZM participated in the collection of data. All authors read and approved the final manuscript.
Li, T., Zhao, S., Song, B. et al. Effects of transforming growth factor β-1 infected human bone marrow mesenchymal stem cells on high- and low-metastatic potential hepatocellular carcinoma. Eur J Med Res 20, 56 (2015). https://doi.org/10.1186/s40001-015-0144-2. Open Access: this article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). Keywords: hMSC; hepatoma cells; TGFβ-1; PDCD4.
Section I.C. OTHER ESTIMATING CONTEXTS (draft) © 2003, 2014 M. Flashman. Throughout the remainder of this book estimating a value of some quantity will be a recurrent theme, as it has been throughout the history of mathematics. In Sections I.C.1 and I.C.2 we'll examine two more subject areas of both historical and current importance that were introduced in Section 0.C: probability and economics. In these subjects, estimations of the type investigated with tangent lines and instantaneous velocity are also relevant. If you are anxious to reach the mathematical concepts that these models (along with those of Sections I.A and I.B) motivate, you may proceed now directly to Section I.D. However, be sure to cover this section before proceeding to Chapter II, as these models will serve as interpretations for some of the later results in the text.

I.C.1 Probability Distributions and Density. In Section 0.C (which you might want to review now), we introduced some basic concepts of probability with the darts model. To explore these ideas further we'll continue our examination of that model. Recall that we were considering a circular magnetic board of radius sixty centimeters. This magnet is very strong and uniformly attracting, so that when I turn my back to it and release a magnetic dart, the dart will be drawn to the board and is equally likely to land on any point on the board. This is our basic experiment. After performing the experiment we measured the random variable $R$, the distance from the dart to the center of the circle.

Notation: As in Chapter 0.C we will denote the probability that the random variable $R$ has a value less than or equal to $A$ by $F(A)$, where $A$ is any real number. $F$ is a function of $A$ called the (cumulative) probability distribution function for the random variable $R$. When analyzed by cases we saw that
$$F(A) = \begin{cases} 0 & A \le 0 \\ \dfrac{A^2}{3600} & 0 < A < 60 \\ 1 & 60 \le A \end{cases}$$
See Figures *** and *** for a mapping diagram and graph of this function. We also saw in Section 0.C that the probability of the random variable $R$ being precisely any specified value is always $0$. A related and important issue is how to measure the likelihood of the dart landing close to or equal to a particular distance, $A$, from the center of the dart board. To be more specific, how can we measure the likelihood that the value of $R$ would be close to or equal to $A = 50$ centimeters? And can we compare this to the likelihood that the value of $R$ would be close to or equal to $A = 10$ centimeters? The actual probability of these events depends on the value of $A$ and the length of the interval that is considered "close" to $A$.

Example I.C.1. For our discussion, let's consider the dart falling close to 50 centimeters from the center as meaning that $R$ is in the interval $[49,51]$, and falling close to 10 centimeters as having $R$ in the interval $[9,11]$. The probabilities for $R$ being in these intervals are computed as in Section 0.C: $F(51) - F(49) = 51^2/3600 - 49^2/3600 = 200/3600 = 1/18$ for the interval $[49,51]$, and $F(11) - F(9) = 11^2/3600 - 9^2/3600 = 40/3600 = 1/90$ for the interval $[9,11]$. These measurements do not resolve the dart issue very well, since the probabilities change with the choice of the intervals used to determine what "close" means. (What if we had used the interval $[49.9, 50.1]$?
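To make the interval computation above easy to experiment with, here is a minimal sketch (added here for illustration, not part of the original text) that encodes the distribution function $F$ for the 60 cm dart board in Python and reproduces the two interval probabilities; the function names and the chosen endpoints are simply illustrative.

```python
# Minimal sketch of the cumulative distribution function F for the dart board
# of radius 60 cm, where F(A) = P(R <= A) = A^2/3600 for 0 < A < 60.
def F(a):
    if a <= 0:
        return 0.0
    if a >= 60:
        return 1.0
    return a**2 / 3600.0

# Probability that the dart lands at a distance within the interval [a, b]:
def interval_probability(a, b):
    return F(b) - F(a)

print(interval_probability(49, 51))  # 200/3600 = 1/18, about 0.0556
print(interval_probability(9, 11))   # 40/3600  = 1/90, about 0.0111
```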
) To resolve the issue of the relative likelihood of the dart falling a certain distance from the center, we will develop the concept of point probability density.

Point Probability Density: Recall from Section 0.C that the average probability density (APD) of a random variable $R$ for an interval $[A,B]$ is the ratio of the probability that $A < R < B$ to the length of the interval $[A,B]$, i.e., $APD(A,B) = \dfrac{F(B)-F(A)}{B-A}$. So the APD of $R$ in the interval $[49,51]$ is $APD(49,51) = (1/18)/2 = \frac{1}{36}$, and for the interval $[9,11]$ the $APD(9,11) = (1/90)/2 = \frac{1}{180}$.

Definition: The (point) probability density (PD) of a random variable $R$ for a value $A$, denoted $f(A)$, measures the likelihood that $R$ will take a value close to $A$. This number is estimated by average probability densities for very short intervals with $A$ as one endpoint. Let $\overline f(x) = \dfrac{F(x)-F(A)}{x-A} = APD(x,A)$, the APD for $R$ on an interval with endpoints $A$ and $x$, where $x$ is different from $A$. The preceding statements can be expressed by saying that as $x$ approaches $A$, the values of $\overline f(x)$ should approach $f(A)$, i.e., as $x \rightarrow A$, we should find that $\overline f(x) \rightarrow f(A)$. Note: This method is very similar to the way that average velocities for very short time intervals estimate the instantaneous velocity of a moving object, or that the slope of a secant line for a short interval estimates the slope of the tangent line to a curve. We'll continue the example to illustrate the meaning of this by determining the point probability density of $R$ for the values $A = 50$ cm and $A = 10$ cm. [You might pause a minute here and make your own estimates for these numbers.]

EXAMPLE I.C.1. (Cont'd.) In the darts experiment described above and in Example 0.C.3 with the random variable $R$, we find the average probability density for $R$ for the intervals $[49,50]$ and $[50,51]$ to estimate the point probability density of $R$ at $50$ cm. Using the probabilities from the previous discussion, we find the average probability density for $[49,50]$ is $APD(49,50) = 99/3600 = 11/400$, while $APD(50,51) = 101/3600$. We continue to examine more closely the APD for intervals with 50 as an endpoint in an attempt to determine whether some number can be assigned as a point probability density to the number 50. Figure I.17 shows a table of APD's for intervals with one endpoint being 50.

Figure I.17: Table of APD's for intervals with one endpoint being 50.
Interval : Average Prob. Density
[49, 50] : 99/3600
[49.9, 50] : 99.9/3600
[49.99, 50] : 99.99/3600
[50, 51] : 101/3600
[50, 50.1] : 100.1/3600
[50, 50.01] : 100.01/3600

Letting $\overline f(x) =$ the APD for $R$ on an interval with endpoints 50 and $x$, where $x$ is different from 50, we have
$$\overline f(x) = \frac{x^2/3600 - 2500/3600}{x - 50} = \frac{(x+50)(x-50)}{3600\,(x-50)} = \frac{x+50}{3600}.$$
Thus as $x \rightarrow 50$, $\overline f(x) \rightarrow \frac{1}{36}$, and we can say that at $A = 50$ the random variable $R$ has a (point) probability density of $\frac{1}{36}$, or $f(50) = \frac{1}{36}$. A similar analysis of APD's for intervals with endpoint 10 leads to the finding that $R$ has a (point) probability density at $A = 10$ of $1/180$, or $f(10) = 1/180$. The higher probability density for $50$ in some sense explains why the value of $R$ - which measures the distance of the dart from the center of the dart board - is more likely to be close to $50$ than to $10$.
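As a quick numerical check of the limiting process just described (this snippet is an illustration added here, not part of the original text), one can compute the average probability density on shrinking intervals with endpoint 50 and watch the values approach 1/36:

```python
# Numerical check: the average probability density on shrinking intervals with
# endpoint 50 approaches the point probability density f(50) = 1/36.
def F(a):
    if a <= 0:
        return 0.0
    if a >= 60:
        return 1.0
    return a**2 / 3600.0

def apd(a, b):
    """Average probability density of R on the interval with endpoints a and b."""
    return (F(b) - F(a)) / (b - a)

for x in [49, 49.9, 49.99, 51, 50.1, 50.01]:
    print(f"APD on interval with endpoints {x} and 50: {apd(x, 50):.6f}")

print("Limiting value 1/36 =", 1 / 36)  # about 0.027778
```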
Comment: It is left for the reader to find a formula for the point probability density of $R$ for $A$ with $0 < A < 60$, i.e., a formula for $f(A)$. [See Exercise 2.]

Exercises I.C.1. For related problems on probability distributions and average probability density see Exercises 0.C.
1. For the random variable $R$ of Example I.C.1 find the point probability density for 20. Find the point probability density for 40. Explain briefly why you would expect $R$ to be more likely to be close to 40 than close to 20.
2. For the random variable $R$ of Example I.C.1 find the point probability density for $A$ where $0 < A < 60$.
Dart Boards with Different Shapes.
3. Suppose the dart board for our experiment is a 3' by 5' rectangle and when the dart lands we report the distance of the dart to the 5' side with the random variable L. [See Figure I.18] a. Find the point probability density for L at 1. b. Find the point probability density for L at $A$ with $0 < A < 3$.
4. Suppose the dart board is a right triangle with legs 3' and 4' and when the dart lands we report the distance of the dart to the 4' side with the random variable H. [See Figure I.19] a. Find the point probability density for H at 0.1. b. Find the point probability density for H at $A$ with $0 < A < 3$.
5. Suppose X is a random variable which takes values in the interval [0,2] and that for each of the following definitions of F, $F(A)$ gives the probability that $X \le A$ where $0 < A < 2$. i. Find the (point) probability density of the distribution for X at 1. ii. Find the (point) probability density of the distribution for X at $A$ where $0 < A < 2$.
a. $F(A) = 0.5\,A$. b. $F(A) = \frac{1}{4}A^2$. c. $F(A) = A^2 - \frac{3}{2}A$. d. $F(A) = \frac{1}{8}A^3$. e. $F(A) = \frac{1}{16}A^4$.
6. Suppose $F(A) = \sin(A)$ describes the probability that a random variable Y is less than or equal to $A$ where $0 < A < \pi/2$. Using your calculator, estimate the point probability density for Y at $\pi/4$ using the intervals $[\pi/6, \pi/4]$, $[\pi/4, \pi/3]$, and $[\pi/4, \pi/4 + 0.1]$. Discuss briefly what you think the point probability density for Y at $\pi/4$ might be. [See problem 11 in section 0.C.]
7. Suppose that an object is inside a sphere of radius 10 centimeters and it is equally likely that it is at any point in the sphere. Let $R$ denote the random variable that measures the distance from the object to the center of the sphere. [See problem 12 in section 0.C.] Why is the probability that $R \le A$ with $A < 10$ given by $A^3/1000$? Find a formula for the point probability density for $R$ at $A$ with $0 < A < 10$.
8. In Example I.C.1 consider the average probability density for intervals of the form $[x,60]$ and $[60,x]$. Discuss the issue of whether there is a point probability density for $R$ at $A = 60$ based on some calculations for these intervals.
9. Another Darts Board. Suppose the dart board has a shape bounded by the polygon in Figure *** in the coordinate plane and when the dart lands we report the first coordinate of the point, measuring the distance of the point to the Y axis, with the random variable X. a. Draw a mapping diagram and a sketch of what the cumulative distribution function F for X might be, based on the figure, i.e., the function F where $F(A)$ is the probability that $X \le A$. b. Estimate the point probability density for X at $A$ = 1, 2, 3, and 4. c. Find the point probability density for X at $A$ with $0 < A < 5$.
10. Point Probability Density and The Tangent Problem. Write a comparison of the treatment of point probability density to the solution of the tangent problem.
Discuss the following statement in your essay: "The probability density of a random variable X at a point $A$ is measured by the slope of the line tangent to the graph of the cumulative distribution function F at the point $(A, F(A))$."
11. Project on Estimation and Archimedes: The Greek mathematician and scientist Archimedes (287-212 B.C.E.) wrote perhaps the first work to give a systematic method for estimation. Read On the Measurement of the Circle, where Archimedes estimates $\pi$ in Proposition 3. Write a paper describing the content and method of this work.
Optimization of a Bearing Fault Diagnosis Method Based on Convolutional Neural Network and Wavelet Packet Transform by Simulated Annealing Feng He, Qing Ye Subject: Engineering, Mechanical Engineering Keywords: Simulated annealing; Wavelet packet transform; Convolutional neural network Bearings are widely used in various types of electrical machinery and equipment. As their core components, their failures often cause serious consequences. At present, most parameter-tuning approaches still rely on manual adjustment, which is susceptible to prior knowledge, easily falls into a local optimum rather than the global optimum, and requires considerable resources. Therefore, this paper proposes a new method of bearing fault diagnosis based on the wavelet packet transform and a convolutional neural network optimized by the simulated annealing algorithm. The experimental results show that the proposed method is more accurate in feature extraction and fault classification than traditional bearing fault diagnosis methods. At the same time, in contrast to traditional manual tuning of artificial neural networks, this paper introduces the simulated annealing algorithm to automatically adjust the parameters of the neural network, thereby obtaining an adaptive bearing fault diagnosis method. To verify the effectiveness of the method, the Case Western Reserve University bearing database was used for testing, and the method was compared with traditional intelligent bearing fault diagnosis methods. The results show that the proposed method performs well in bearing fault diagnosis and provides a new way of thinking about parameter tuning and fault classification algorithms in this field. Driver Monitoring of Automated Vehicles by Classification of Driver Drowsiness using a Deep Convolutional Neural Network Trained by Scalograms of ECG Signals Sadegh Arefnezhad, Arno Eichberger, Matthias Frühwirth, Clemens Kaufmann, Maximilian Moser, Ioana Victoria Koglbauer Subject: Engineering, Automotive Engineering Keywords: Convolutional neural network; Driver drowsiness; ECG signal; Heart rate variability; Wavelet scalogram Driver drowsiness is one of the leading causes of traffic accidents. This paper proposes a new method for classifying driver drowsiness using deep convolutional neural networks trained on wavelet scalogram images of electrocardiogram (ECG) signals. Three different classes were defined for drowsiness based on video observation of driving tests performed in a simulator for manual and automated modes. The Bayesian optimization method is employed to optimize the hyperparameters of the designed neural networks, such as the learning rate and the number of neurons in every layer. To assess the results of the deep network method, Heart Rate Variability (HRV) data is derived from the ECG signals, some features are extracted from this data, and finally, random forest and k-nearest neighbors (KNN) classifiers are used as two traditional methods to classify the drowsiness levels. Results show that the trained deep network achieves balanced accuracies of about 77% and 79% in the manual and automated modes, respectively, whereas the best balanced accuracies obtained using the traditional methods are about 62% and 64%. We conclude that the designed deep networks working with wavelet scalogram images of ECG signals significantly outperform KNN and random forest classifiers trained on HRV-based features.
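The drowsiness-classification abstract above relies on wavelet scalogram images of ECG segments as the network input. Purely as a hedged illustration of that preprocessing step (not the authors' code), such a scalogram could be computed with the PyWavelets continuous wavelet transform roughly as follows; the Morlet wavelet, the scale range, the synthetic signal and the 250 Hz sampling rate are assumptions chosen for the example.

```python
# Illustrative sketch: turn a 1-D ECG-like segment into a CWT scalogram image
# that could be fed to a convolutional network. The wavelet choice ('morl'),
# scales and sampling rate are assumptions made for this example.
import numpy as np
import pywt

fs = 250.0                                   # assumed sampling rate [Hz]
t = np.arange(0, 10, 1.0 / fs)               # 10-second synthetic segment
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)

scales = np.arange(1, 128)                   # range of wavelet scales
coeffs, freqs = pywt.cwt(ecg_like, scales, "morl", sampling_period=1.0 / fs)

scalogram = np.abs(coeffs) ** 2              # energy per (scale, time) pixel
print(scalogram.shape)                       # (127, 2500): an "image" for a CNN
```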
Quantized Constant-Q Gabor Atoms for Sparse Binary Representations of Cyber-Physical Signatures Milton Garces Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Gabor atoms; wavelet entropy; binary metrics; acoustics; quantum wavelet Increased data acquisition by uncalibrated, heterogeneous digital sensor systems such as smartphones presents new challenges. Binary metrics are proposed for the quantification of cyber-physical signal characteristics and features, and a standardized constant-Q variation of the Gabor atom is developed for use with wavelet transforms. Two different continuous wavelet transform (CWT) reconstruction formulas are presented and tested under different signal-to-noise ratio (SNR) conditions. A sparse superposition of Nth order Gabor atoms worked well against a synthetic blast transient using the wavelet entropy and an entropy-like parametrization of the SNR as the CWT coefficient-weighting functions. The proposed methods should be well suited for sparse feature extraction and dictionary-based machine learning across multiple sensor modalities. High Impedance Fault Detection in MV Distribution Network using Discrete Wavelet Transform and Adaptive Neuro-Fuzzy Inference System Veerapandiyan Veerasamy, Noor Izzri Abdul Wahab, Rajeswari Ramachandran, Muhammad Mansoor, Mariammal Thirumeni Subject: Engineering, Electrical & Electronic Engineering Keywords: Discrete Wavelet Transform (DWT); Adaptive Neuro-Fuzzy Inference System (ANFIS); Fuzzy Logic system (FLS); High Impedance Fault (HIF). This paper presents a method to detect and classify the high impedance faults that occur in medium voltage distribution networks using the discrete wavelet transform (DWT) and an adaptive neuro-fuzzy inference system (ANFIS). The network is designed using Matlab software, and various faults such as high impedance, symmetrical and unsymmetrical faults have been applied to study the effectiveness of the proposed ANFIS classifier method. This is achieved by training the ANFIS classifier using the features (standard deviation values) extracted by the DWT technique from the three-phase fault current signal for various cases of fault with different values of fault resistance in the system. The success and discrimination rates obtained for identifying and classifying the high impedance fault with the proposed method are 100%, whereas the values are 66.7% and 85%, respectively, for the conventional fuzzy based approach. The results indicate that the proposed method is more efficient in identifying and discriminating the high impedance fault accurately from other power system faults. Research on Image Denoising in edge detection Based on Wavelet Transform You Ning, Han LiBo, Zhu Daming, Song Weiwei Subject: Earth Sciences, Other Keywords: edge detection; wavelet transform; wavelet basis function; canny; Pratt quality factor In the process of image feature extraction, noise in the image will greatly affect the accuracy of edge detection. In this paper, the image is filtered to remove noise before edge detection by using the wavelet transform algorithm. Different wavelet functions are used to decompose the image, and based on the experimental results, the best denoising wavelet function is selected. The Canny algorithm is used to detect the edges of the denoised image, and the result of edge detection is evaluated according to the Pratt quality factor. It is proved that the wavelet transform can improve the edge detection results.
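The edge-detection abstract above describes denoising an image with a wavelet transform before running the Canny detector. As a rough illustration of that general idea (not the authors' implementation), a wavelet-threshold denoiser could be sketched with the PyWavelets library as follows; the wavelet ('db4'), decomposition level and threshold rule are assumptions made here for the example.

```python
# Rough sketch of wavelet-threshold denoising of an image prior to edge
# detection, using PyWavelets. Wavelet, level and threshold are illustrative.
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Estimate the noise level from the finest-scale diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(img.size))  # universal threshold
    denoised = [coeffs[0]]  # keep the coarse approximation band untouched
    for detail_bands in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(band, thresh, mode="soft")
                              for band in detail_bands))
    return pywt.waverec2(denoised, wavelet)

# Example with a synthetic noisy image; the result could be fed to Canny.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
smoothed = wavelet_denoise(noisy)
```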
Identification of the Planetary Magnetosphere Boundaries with the Wavelet Multi-Resolution Analysis Mauricio Bolzan, Ezequiel Echer, Adriane Marques de Souza Franco, Rajkumar Hajra Subject: Physical Sciences, Acoustics Keywords: planetary magnetosphere; planetary bow shocks; planetary magnetopauses; Haar wavelet; Solar wind; Wavelet analysis The Haar wavelet decomposition technique is used to detect the planetary magnetosphere boundaries and discontinuities. We use the magnetometer data from the CASSINI and MESSENGER spacecraft to identify the abrupt changes in the magnetic field when the spacecraft crossed the magnetospheric bow shocks and magnetopauses of Saturn and Mercury, respectively. The results confirm that the Haar transform can efficiently identify the planetary magnetosphere boundaries characterized by the abrupt magnetic field changes. It is suggested that this technique can be applied to detect the planetary boundaries as well as discontinuities such as the shock waves in the interplanetary space. Bagged Decision Trees Based Scheme of Microgrid Protection Using Windowed Fast Fourier and Wavelet Transforms Solomon Netsanet Alemu, Jianhua Zhang, Dehua Zheng Subject: Engineering, Electrical & Electronic Engineering Keywords: Microgrid; Protection; bagged decision tree; Wavelet; FFT Microgrids of varying size and applications are regarded as a key feature of modernizing the power system. The protection of those systems, however, has become a major challenge and a popular research topic because it involves greater complexity than traditional distribution systems. This paper addresses the issue through a novel approach which utilizes detailed analysis of current and voltage waveforms through windowed fast Fourier and wavelet transforms. The fault detection scheme involves bagged decision trees which use input features extracted from the signal processing stage and selected by correlation analysis. The technique was tested on a microgrid model developed using PSCAD/EMTDC, which is inspired by an operational microgrid of Goldwind Sc. Tech. Co. Ltd, in Beijing, China. The results showed a great level of effectiveness in accurately identifying faults among other non-fault disturbances, precisely locating the fault and triggering the opening of the right circuit breaker(s) under different operation modes, fault resistances and other system disturbances. Stability and Resolution Analysis of the Wavelet Collocation Upwind Schemes for Hyperbolic Conservation Laws Bing Yang, Jizeng Wang, Xiaojing Liu, Youhe Zhou Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Stability; Resolution; Wavelet upwind scheme; Hyperbolic conservation laws A system of wavelet collocation upwind schemes is constructed for solving hyperbolic conservation laws based on a class of interpolation wavelets. The bias magnitude and symmetry factor are defined to depict the asymmetry of the adopted scaling basis function in wavelet theory. Effects of the characteristics of the scaling functions on the schemes are explored based on numerical tests and Fourier analysis. The numerical results reveal that the stability of the constructed scheme is affected by the smoothness order, N, and the asymmetry of the scaling function. The dissipation analysis suggests that schemes with N even have negative dissipation coefficients, leading to unstable behaviors.
Only scaling functions with N odd and a bias magnitude of 1 can be used to construct stable upwind schemes, due to the non-negative dissipation coefficients. The resolution of the wavelet scheme tends to the spectral resolution as the order of accuracy of the scheme increases. Coupling Fine-Scale Root and Canopy Structure Using Ground-Based Remote Sensing Brady S. Hardiman, Christopher M. Gough, John R. Butnor, Gil Bohrer, Matteo Detto, Peter S. Curtis Subject: Biology, Ecology Keywords: canopy; root; biomass; spatial wavelet coherence; radar; lidar Ecosystem physical structure, defined by the quantity and spatial distribution of biomass, influences a range of ecosystem functions. Remote sensing tools permit the non-destructive characterization of canopy and root features, potentially providing opportunities to link above- and belowground structure at fine spatial resolution in functionally meaningful ways. To test this possibility, we employed ground-based portable canopy lidar (PCL) and ground penetrating radar (GPR) along co-located transects in forested sites spanning multiple stages of ecosystem development and, consequently, of structural complexity. We examined canopy and root structural data for coherence at multiple spatial scales ≤ 10 m within each site using wavelet analysis. Forest sites varied substantially in vertical canopy and root structure, with leaf area index and root mass more evenly distributed by height and depth, respectively, as forests aged. In all sites, above- and belowground structure, characterized as mean maximum canopy height and root mass, exhibited significant coherence at a scale of 3.5-4 meters, and results suggest that the scale of coherence may increase with stand age. Our findings demonstrate that canopy and root structure are linked at characteristic spatial scales, which provides the basis to optimize scales of observation. Our study highlights the potential, and limitations, for fusing lidar and radar technologies to quantitatively couple above- and belowground ecosystem structure. Robust Codes Constructions based on Bent Functions and Spline-Wavelet Decomposition Alla Levina, Gleb Ryaskin Subject: Mathematics & Computer Science, Applied Mathematics Keywords: robust codes; bent-functions; spline-wavelet decomposition; error detection The paper investigates new robust code constructions based on bent functions and spline-wavelet transformation. The implementation of bent functions in code construction increases the probability of error detection in the data channel and in cryptographic devices. Meanwhile, the use of spline-wavelet theory for constructing the codes makes it possible to increase system security against the actions of an attacker. The presented constructions combine spline-wavelet functions and bent functions. The developed robust codes, compared to existing ones, have a higher parameter of maximum error masking probability. The illustrated codes ensure the security of transmitted information. Some of the presented constructions were implemented on FPGA. Combining Faraday Tomography and Wavelet Analysis Dmitry Sokoloff, Rainer Beck, Anton Chupin, Peter Frick, George Heald, Rodion Stepanov Subject: Physical Sciences, Astronomy & Astrophysics Keywords: galactic magnetic field; RM-synthesis; Faraday depolarization; wavelet analysis We present an idea of how to use long-wavelength, multi-wavelength radio continuum observations of spiral galaxies to isolate magnetic structures which were previously accessible only from short-wavelength observations.
The approach is based on RM-synthesis and the 2D continuous wavelet transform. Wavelet analysis helps to recognize a configuration of small-scale structures which are produced by Faraday dispersion. We find that these structures can trace galactic magnetic arms for the case of the galaxy NGC 6946 observed at $\lambda = 17 \div 22$~cm. We support this interpretation by an analysis of a synthetic observation obtained using a realistic model of a galactic magnetic field. A Novel Device-Free Counting Method Based on Channel Status Information Junhuai Li, Pengjia Tu, Huaijun Wang, Kan Wang, Lei Yu Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: wavelet transform; covariance matrix; spatial diversity; frequency diversity; robustness Crowd counting is of significant importance for numerous applications, e.g., urban security, intelligent surveillance and crowd management. Existing crowd counting methods typically require specialized hardware deployment and strict operating conditions, thereby hindering their widespread deployment. To obtain a more effective crowd counting approach, a device-free counting method based on Channel Status Information (CSI) is proposed, which mitigates environment noise through the wavelet transform and extracts the amplitude or phase covariance matrix as the feature vector. Moreover, both spatial diversity and frequency diversity are leveraged to improve detection robustness. The accuracy of the proposed CSI-based method is compared with a renowned crowd counting one, i.e., Electronic Frog Eye: Counting Crowd Using WiFi (FCC). The experimental results reveal an accuracy improvement of 30% over FCC. Methods for Analyzing Surface Texture Effects of Volcanoes with Plinian and Subplinian Eruptions Types: Cases of Study Lascar (23° 22' S) and Chaiten (42° 50' S), Chile Luis Fernández, Gabriel Álvarez, Renato Salinas Subject: Earth Sciences, Environmental Sciences Keywords: subplinian eruption; co-occurrence matrix; wavelet transform; similarity metrics This paper presents a new methodology for analyzing surface texture changes in areas adjacent to a volcano and their impact as a product of volcanic activity. To do this, algorithms from digital image processing, such as the co-occurrence matrix and the wavelet transform, are used. These methods operate on images taken by the Landsat 5 TM and Landsat 7 ETM+ sensors, and are implemented with the purpose of evaluating surface changes that can warn of surface movements of the volcano. The results were evaluated by similarity metrics for grayscale images, and validated in two different scenarios that have the same type of eruption but differ, essentially, in climate and vegetation. Finally, the proposed algorithm is presented, setting the parameters and constraints for its implementation and use. Fault Prediction Based on Leakage Current in Contaminated Insulators Using Enhanced Time Series Forecasting Models Nemesio Fava Sopelsa Neto, Stefano Frizzo Stefenon, Luiz Henrique Meyer, Raúl García Ovejero, Valderi Reis Quietinho Leithardt Subject: Engineering, Electrical & Electronic Engineering Keywords: LSTM; GMDH; ANFIS; Ensemble Learning Models; Wavelet; Time Series Forecasting To improve the monitoring of the electrical power grid, it is necessary to evaluate the influence of contamination in relation to leakage current and its progression to a disruptive discharge.
In this paper, insulators were tested in a saline chamber to simulate the increase of salt contamination on their surface and to evaluate the supportability of these components. From the time series forecasting of the leakage current, it is possible to evaluate the development of the fault before a flashover occurs. Choosing which method to use is always a difficult task, since some models may have a higher computational effort. In this paper, for a complete evaluation, the long short-term memory (LSTM), group method of data handling (GMDH), adaptive neuro-fuzzy inference system (ANFIS), bootstrap aggregation (bagging), sequential learning (boosting), random subspace, and stacked generalization (stacking) ensemble learning models are analyzed. A review and comparison of these well-established methods for time series forecasting is performed. From the results of the best model structure, the hyperparameters are evaluated and the wavelet transform is used to obtain an enhanced model. A Wavelet-Based Method for the Impact of Social Media on the Economic Situation: The Saudi Arabia 2030-Vision Case Majed Salah S. Balalaa, ANOUAR Ben Mabrouk, Habiba Abdessalem Subject: Social Sciences, Accounting Keywords: Textual analysis; Media; Correspondence analysis; Wavelet thresholding; KSA-2030 Vision In the present paper, we propose a wavelet method to study the impact of electronic media on economic situations. We apply wavelet techniques versus classical methods to analyze economic indices in the market. The technique consists firstly in filtering the data from imprecise circumstances (noise) in order to then construct a wavelet-denoised contingency table. Next, a thresholding procedure is applied to such a table to extract the essential information carriers. The resulting tables are finally subjected to correspondence analysis before and after thresholding. As a case study, we are empirically concerned with the 2030 KSA vision in electronic and social media. The effects of electronic media texts about the 2030 Vision on the Saudi and global economy have been studied. Recall that the Saudi market is the most important representative market in the GCC region. It has both regional and worldwide influence on economies and, besides, it is characterized by many political, economic, and financial movements such as the worldwide economic NEOM project. The findings provided in the present paper may be applied to predict the future situation of GCC markets and thus may be a basis for investors' decisions in such markets. Multi-Horizon Dependence between Crude Oil and East Asian Stock Markets and Implications in Risk Management Xiaojing Cai, Shigeyuki Hamori, Lu Yang, Shuairu Tian Subject: Social Sciences, Finance Keywords: crude oil; East Asian stock markets; wavelet; copula; dynamic hedging This paper examines the dynamic dependence structure of crude oil and East Asian stock markets at multiple frequencies using wavelets and copulas. We also investigate the risk management implications and diversification benefits of oil-stock portfolios by calculating and comparing risk and tail-risk hedging performance. Our results provide strong evidence of time-varying dependence and asymmetric tail dependence between crude oil and East Asian stock markets at different frequencies. The level and fluctuation of their dependencies increase as the time scale increases. Furthermore, we find that the time-varying hedging benefits differ across investment horizons and are reduced over the long run.
Unsupervised Analysis of Small Molecule Mixtures by Wavelet-Based Super-Resolved NMR Aritro Sinha Roy, Madhur Srivastava Subject: Chemistry, Physical Chemistry Keywords: NMR; shift spectra; wavelet packet transform; automated small molecule mixture analysis Resolving small molecule mixtures by nuclear magnetic resonance (NMR) spectroscopy has been of great interest for a long time for its precision, reproducibility and efficiency. However, spectral analyses for such mixtures are often highly challenging due to overlapping resonance lines and limited chemical shift windows. The existing experimental and theoretical methods to produce shift NMR spectra in dealing with the problem have limited applicability owing to sensitivity issues, inconsistency and / or requirement of prior knowledge. Recently, we have resolved the problem by decoupling multiplet structures in NMR spectra by the wavelet packet transform (WPT) technique. In this work, we developed a scheme for deploying the method in generating highly resolved WPT NMR spectra and predicting the composition of the corresponding molecular mixtures from their 1H NMR spectra in an automated fashion. The four-step spectral analysis scheme consists of calculating WPT spectrum, peak matching with a WPT shift NMR library, followed by two optimization steps in producing the predicted molecular composition of a mixture. The robustness of the method was tested on an augmented dataset of 1000 molecular mixtures, each containing 3 to 7 molecules. The method successfully predicted the constituent molecules with a median true positive rate of 1.0 against the varying compositions, while a median false positive rate of 0.04 was obtained. The approach can be scaled easily for much larger datasets. Prediction of Wind Speed Using Hybrid Techniques. Three locations: Colombia, Ecuador and Spain Luis Lopez, Ingrid Oliveros, Luis Torres, Lacides Ripoll, Jose Soto, Giovanny Salazar, Santiago Cantillo Subject: Engineering, Automotive Engineering Keywords: Empirical Mode Decomposition; Hybrid techniques; LSSVM; Wavelet transform; Wind speed prediction This paper presents a methodology to calculate day-ahead wind speed predictions based on historical measurements done by weather stations. The methodology was tested for three locations: Colombia, Ecuador, and Spain. The data is input into the process in two ways: 1) as a single time series containing all measurements, and 2) as twenty-four separate parallel sequences, corresponding to the values of wind speed at each of the 24 hours in the day over several months. The methodology relies on the use of three non-parametric techniques: Least-Squares Support Vector Machines, Empirical Mode Decomposition, and the Wavelet Transform. Also, the traditional and simple Auto-Regressive model is applied. The combination of the aforementioned techniques results in nine methods for performing wind prediction. Experiments using a MATLAB implementation showed that the Least-squares Support Vector Machine using data as a single time series outperformed the other combinations, obtaining the least mean square error. 
An Approach Towards Motion-Tolerant PPG-Based Algorithm for Real-Time Heart Rate Monitoring of Moving Pigs Ali Youssef, Alberto Peña Fernández, Laura Wassermann, Svenja Biernot, Eva-Maria Wittauer, Andre Bleich, Joerg Hartung, Daniel Berckmans, Tomas Norton Subject: Engineering, Biomedical & Chemical Engineering Keywords: Pig's Heart Rate; Photoplethysmography (PPG); Continuous Wavelet Transform (CWT); Motion Artefacts Animal welfare remains a very important issue in the livestock sector, but monitoring animal welfare in an objective and continuous way remains a serious challenge. Monitoring animal welfare based upon physiological measurements instead of audio-visual scoring of behaviour would be a step forward. One of the obvious physiological signals related to welfare and stress is heart rate. The objective of this research was to measure heart rate (beats per minute) in pigs with technology that will soon be affordable. Affordable heart rate monitoring is done today at large scale on humans using photoplethysmography (PPG) technology. We used PPG sensors on the pig's body to test whether they allow obtaining a reliable heart rate signal. A continuous wavelet transform (CWT)-based algorithm is developed to decouple the cardiac pulse waves of the pig. Three different wavelets, namely the 2nd, 4th and 6th order Derivative of Gaussian (DOG), are tested. We show results of the developed PPG-based algorithm against electrocardiograms (ECG) as a reference measure of heart rate, for an anesthetized versus a non-anesthetised animal. We tested three different anatomical body positions (ear, leg and tail) and give results for each body position of the sensor. In summary, it can be concluded that the agreement between the PPG-based heart rate technique and the reference sensor ranges from 91 to 95 percent. In this paper we showed the potential of using PPG-based technology to assess the pig's heart rate. Non-Invasive PPG-Based System for Continuous Heart Rate Monitoring of Incubated Avian Embryo Ali Youssef, Daniel Berckmans, Tomas Norton Subject: Engineering, Biomedical & Chemical Engineering Keywords: Embryonic Heart Rate; Photoplethysmography (PPG); Continuous Wavelet Transform (CWT); Spectral Entropy The chicken embryo is a widely used experimental animal model in many studies, such as developmental biology, and for studying physiological responses and adaptation to altered environments, as well as for cancer and neurobiology research. Embryonic heart rate is an important physiological variable useful as an index reflecting the embryo's natural activity, and it is considered one of the most difficult parameters to measure. An acceptable measurement technique for embryonic heart rate should provide a reliable cardiac signal quality while maintaining adequate gas exchange through the eggshell along the incubation and embryonic developmental period. In this paper, we present a detailed design and methodology for a non-invasive PPG-based prototype (Egg-PPG) for real-time and continuous monitoring of embryonic heart rate during incubation. An automatic embryonic cardiac wave detection algorithm, based on normalised spectral entropy, is described. The developed algorithm successfully estimated the embryonic heart rate with 98.7% accuracy.
We believe that the overall system developed and presented in this paper is a promising solution for non-invasive, real-time monitoring of the embryonic cardiac signal, which can be used both in experimental studies (e.g., developmental embryology and cardiovascular research) and in industrial incubation applications. Dimension Reduction of Machine Learning-based Forecasting Models Employing Principal Component Analysis Yinghui Meng, Sultan Noman Qasem, Manouchehr Shokri, S Shamshirband Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Machine learning; Dimensionality reduction; Wavelet transform; Water quality; Principal component analysis In this research, an attempt was made to reduce the dimension of wavelet-ANFIS/ANN (artificial neural network/adaptive neuro-fuzzy inference system) models toward reliable forecasts as well as to decrease computational cost. In this regard, principal component analysis was performed on the input time series decomposed by a discrete wavelet transform to feed the ANN/ANFIS models. The models were applied for dissolved oxygen (DO) forecasting in rivers, which is an important variable affecting aquatic life and water quality. The current values of DO, water surface temperature, salinity, and turbidity were considered as the input variables to forecast DO three time steps ahead. The results of the study revealed that PCA can be employed as a powerful tool for dimension reduction of input variables and also to detect the inter-correlation of the input variables. Results of the PCA-Wavelet-ANN models are compared with those obtained from Wavelet-ANN models, with the former having the advantage of less computational time than the latter. Dealing with ANFIS models, PCA is more beneficial as it avoids the Wavelet-ANFIS models creating too many rules, which deteriorates the efficiency of the ANFIS models. Moreover, manipulating the Wavelet-ANFIS models utilizing PCA leads to a significant decrease in computational time. Finally, it was found that the PCA-Wavelet-ANN/ANFIS models can provide reliable forecasts of dissolved oxygen as an important water quality indicator in rivers. Removing Striping Noise from Cloudy Level 2 Sea Surface Temperature and Ocean Color Datasets Brahim Boussidi, Ronan Fablet, Bertrand Chapron Subject: Earth Sciences, Oceanography Keywords: Destriping; Undecimated wavelet transform; Fourier filtering; Sea Surface Temperature; Ocean color This paper introduces a new destriping algorithm for remote sensing data. The method is based on a combined Haar stationary wavelet transform and Fourier filtering. State-of-the-art methods based on the discrete wavelet transform (DWT) may not always be effective and may cause different artifacts. Our contribution is three-fold: i) we propose to use the undecimated wavelet transform (UWT) to avoid as much as possible the shortcomings of the classical DWT; ii) we combine spectral filtering and the UWT using the simplest possible wavelet, the Haar basis, for computational efficiency; iii) we handle 2D fields with missing data, as commonly observed in ocean remote sensing data due to atmospheric conditions (e.g., cloud contamination). The performance of the proposed filter is tested and validated on the suppression of horizontal strip artifacts in cloudy L2 Sea Surface Temperature (SST) and ocean color snapshots.
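The destriping entry above combines a Haar undecimated (stationary) wavelet transform with Fourier filtering. Below is a minimal, generic sketch of that wavelet-plus-FFT idea for horizontal stripes in a complete (gap-free) field; the notch width, decomposition level, and the hard zeroing of low horizontal frequencies are illustrative simplifications rather than the authors' exact filter, and the handling of missing (cloud-masked) pixels described in the paper is not reproduced.

```python
import numpy as np
import pywt

def destripe_horizontal(img, level=1, notch=2):
    """Haar SWT isolates the horizontal-detail band; an FFT notch then damps
    the near-zero horizontal frequencies that encode row-wise stripes."""
    coeffs = pywt.swt2(img.astype(float), "haar", level=level)
    cleaned = []
    for cA, (cH, cV, cD) in coeffs:
        F = np.fft.fft2(cH)
        # Stripes are ~constant along x, so their energy sits near kx = 0.
        F[:, :notch] = 0.0
        F[:, -notch:] = 0.0
        cleaned.append((cA, (np.real(np.fft.ifft2(F)), cV, cD)))
    return pywt.iswt2(cleaned, "haar")

# Synthetic 128 x 128 field with one additive offset per row (horizontal stripes).
rng = np.random.default_rng(0)
field = np.outer(np.sin(np.linspace(0, 3, 128)), np.cos(np.linspace(0, 3, 128)))
striped = field + 0.2 * rng.normal(size=(128, 1))
print(np.abs(destripe_horizontal(striped) - field).mean())
```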
Dependence Structure between Bitcoin and Economic Policy Uncertainty: Evidence from Time-Frequency Quantile-Dependence Methods Samia Nasreen, Aviral Kumar Tiwari, Zhuhua Jiang, Seong-Min Yoon Subject: Social Sciences, Economics Keywords: Bitcoin; economic policy uncertainty; spillover; wavelet coherence analysis; quantile cross-spectral dependence In this study, the dependence between Bitcoin (BTC) and the economic policy uncertainty (EPU) of the USA and China is estimated by applying the latest methodology of quantile cross-spectral dependence. The findings indicate that the positive return interdependence between BTC and EPU is high in the short term, and that this dependence decreases as investment horizons increase from weekly to yearly. The information on the above interdependence is also extracted by applying wavelet coherence analysis, and the estimation results suggest that the correlation between BTC and EPU is positive over the short-term investment horizon. Furthermore, more diversification benefits of BTC can be obtained against USA EPU as compared to China EPU. Perspectives and Interpretations: Weighted Power Law Analyses of Three Short Works for Piano Douglas Scott Subject: Arts & Humanities, Music Studies Keywords: Music, Power laws, Zipf's law, Noise profiles, Timescape, Wavelet, Schumann, Chopin, Mendelssohn Power law relationships, which describe scaling relationships of data, are powerful information theoretic descriptive tools in many empirical contexts, including that of music. Zipf's law (pink noise) describes the optimum case of a power law relation where observations are exactly inversely proportional to their rank. Descriptions that approximate a pink noise signature can be said to maximize the amount of information in a signal, and are thus suggestive of a richness of understanding. This information density of pink noise signatures is not, however, necessarily a desirable quality for explanations in general, which, by definition, "flatten out" some data while highlighting others, ideally those most relevant to an interpretation. The privileging of data most relevant to comprehension corresponds to a red noise relationship, as opposed to the pink noise of Zipf's law or the white noise of a description that highlights nothing in particular. Here, I explore and evaluate this concept of red noise explanations in the form of analyses of three comparable short piano works: Robert Schumann's "Von fremden Ländern und Menschen" (no.1 from Kinderszenen op. 15, 1838), Frédéric Chopin's Prelude op. 28 no. 20 (1838-9), and Felix Mendelssohn's "Venetianisches Gondellied" (no. 6 from Lieder ohne Worte op. 19b, 1829-30). Continuous m-Health Monitoring and Patient Verification Using Bioelectrical Signals Timibloudi S. Enamamu, Abayomi M. Otebolaku, Dany Joy, Jims N. Marchang Subject: Mathematics & Computer Science, Other Keywords: bioelectrical signals; biorthogonal wavelet; approximation coefficients; detail coefficient; smartwatch; m-health monitoring The World Health Organization (WHO) in 2016 considered mHealth, "the use of mobile wireless technologies including smart devices such as smartphones and smartwatches for public health", as an important resource for health services delivery and public health given their ease of use, broad reach and acceptance. The WHO emphasizes the potential of this technology to increase access to health information, services and skills, as well as promoting positive changes in health behaviors and the management of diseases.
In this regard, the capability of smartphones and smartwatches for m-health monitoring, as well as for verification of the patient providing the signal, has become an important component of mHealth systems. Most smartwatches can extract more than one bioelectrical signal; therefore, they provide a suitable platform for extracting health data for e-monitoring. The existing approaches have not considered the integrity of data obtained from these smart devices. Therefore, it is important that the integrity of the collected data be verified continuously through user authentication. This could be done using any of the bioelectrical signals extracted and transmitted for e-monitoring. In this article, a smartwatch is used for extracting a bioelectrical signal before decomposing it into Detail and Approximation Coefficient sub-bands for user authentication. To select suitable features using biorthogonal wavelet decomposition of a signal from a non-intrusive extraction, a detailed experiment is conducted, extracting suitable statistical features from the bioelectrical signal of 30 subjects using different biorthogonal wavelet families. Ten features are extracted using the biorthogonal wavelet to decompose the signal into three levels of Detail and Approximation Coefficient sub-bands, with features extracted from each level of the decomposed Detail and Approximation Coefficients. A comparison analysis is done after the classification of the extracted features based on the Equal Error Rate (EER). Using a Neural Network (NN) classifier, the Biorthogonal Wavelet Detail Coefficient sub-band level 3 of bior1.1 achieved the best result of 13.80% EER, with the fusion of the best three sub-band levels of bior1.1 achieving a better result of 12.42% EER. Investigate the Co-Movement Relationship between Medical Expenditure and GDP in Taiwan – Base on Wavelet Analysis Hsin-Pei Hsueh, Chien-Ming Wang, Cheng-Feng Wu, Fangjhy Li Subject: Social Sciences, Economics Keywords: gross domestic product; medical expenditures; Wavelet analysis; co-movement relationship; health insurance The universal health insurance system in Taiwan was formed with good intentions to help vulnerable groups. However, wasted medical resources raise the possibility of bankrupting the system. In this study, using the medical expenditures of the Taiwanese Government and gross domestic product (GDP) as variables, the wavelet analysis method was used to empirically study the correlations and lead-lag relationships in quarterly data for the period from 1996 to 2016. In addition, the dependent population of the insured was used as the control variable. This population had no income and had high medical demands. Results: After the dependent population was included as a control variable, there was a period of low-frequency (one to four years, short-term) linkage correlation, as well as a period of high-frequency (four to eight years, long-term) linkage correlation. In addition, for periods of more than eight years, there was also a high degree of linkage correlation, indicating that the linkage between medical expenditures and GDP occurred over the long term. Moreover, since medical expenditures positively affected GDP, one-way causality was observed. However, after 2008, regardless of whether a long or short term was examined, there was almost no linkage correlation. Before 2008, the medical expenditures of the government were positively correlated with economic growth; i.e., they enhanced economic growth. But, after 2008, this effect had already disappeared.
The universal health insurance system has long been denounced as a waste of medical resources, and the waste must be stopped immediately. The government urgently needs to find a new solution. Quantification of the Direct Solar Impact on Some Components of the Hydroclimatic System Mares Constantin, Mares Ileana, Dobrica Venera, Demetrescu Crisan Subject: Earth Sciences, Atmospheric Science Keywords: time series; causality; entropy transfer; wavelet analysis; neural networks; climate response; solar impact This study addresses the causal links between external factors and the main hydro-climatic variables. There is a gap in the literature on the description of a complete chain in addressing the structures of direct causal links of solar activity on terrestrial variables. This is why the present study uses the extensive facilities offered by the application of information theory, in view of recent advances in different fields. Also, other methods (e.g., neural networks) are first used to test for the existence of non-linear links of solar-terrestrial influences on the hydro-climate system. The results related to the solar impact on terrestrial phenomena are promising, the impact being discriminant in the space-time domain. The implications prove robust for determining the causal measure of climate variables under direct solar impact, which makes it easier to consider solar activity in climate models through appropriate parametrizations. This study found that hydro-climatic variables are sensitive to the solar impact only for certain frequencies (periods) and that these have a coherence with the solar flux only for some lags of the solar flux (in advance). Time Series Analysis of MODIS-Derived NDVI for the Hluhluwe-iMfolozi Park, South Africa: Impact of Recent Intense Drought Nkanyiso Mbatha, Sifiso Xulu Subject: Earth Sciences, Environmental Sciences Keywords: drought; NDVI; ENSO; wavelet; time series analysis; Hluhluwe-iMfolozi Park; Google Earth Engine The variability of meteorological parameters such as temperature and precipitation, and climatic conditions such as intense droughts, are known to impact vegetation health over southern Africa. Thus, understanding large-scale ocean–atmospheric phenomena like the El Niño/Southern Oscillation (ENSO) and Indian Ocean Dipole/Dipole Mode Index (DMI) is important, as these factors drive the variability of temperature and precipitation. In this study, 16 years (2002–2017) of Moderate Resolution Imaging Spectroradiometer (MODIS) Terra/Aqua 16-day normalized difference vegetation index (NDVI) data were extracted and processed using the JavaScript code editor in the Google Earth Engine (GEE) platform in order to analyze the response pattern of the oldest proclaimed nature reserve in Africa, the Hluhluwe-iMfolozi Park (HiP), during the study period. The MODIS enhanced vegetation index and burned area index were also analyzed for this period. The area-averaged Modern Retrospective Analysis for Research Application (MERRA) model maximum temperature and precipitation were also extracted using the JavaScript code editor in the GEE platform. This procedure demonstrated a strong reversal of both the NDVI and Enhanced Vegetation Index (EVI), leading to signs of a sudden increase of burned areas (strong BAI) during the strongest El Niño period. Both the Theil-Sen method and the Mann–Kendall test showed no significant greening or browning trends over the whole time series, although the annual Mann–Kendall test, in 2003 and 2014–2015, indicated significant browning trends due to the most recent strongest El Niño.
Moreover, a multi-linear regression model seems to indicate a significant influence of both ENSO activity and precipitation. Our results indicate that the recent 2014–2016 drought altered the vegetation condition in the HiP. We conclude that it is vital to exploit freely available GEE resources to develop drought-monitoring vegetation systems, and to integrate climate information for analyzing its influence on protected areas, especially in data-poor countries. Disturbance Elimination for Partial Discharge Detection in Spacer of Gas-Insulated Switchgears Guoming Wang, Gyung-Suk Kil, Hong-Keun Ji, Jong-Hyuk Lee Subject: Engineering, Electrical & Electronic Engineering Keywords: partial discharge; gas-insulated switchgears; spacer; capacitive component; wavelet transform; multi-resolution analysis With the increasing demand for precise condition monitoring and diagnosis of gas-insulated switchgears (GIS), it has become a challenge to improve the detection sensitivity of partial discharge (PD) induced in the GIS spacer. This paper deals with the elimination of the capacitive component from the phase-resolved partial discharge (PRPD) signal generated in the GIS spacer based on the discrete wavelet transform. Three types of typical insulation defects were simulated using PD cells. The single PD pulses were detected and were further used to determine the optimal mother wavelet. As a result, the bior6.8 wavelet was selected to decompose the PD signal into 8 levels, and the signal energy at each level was calculated. The decomposed components related to the capacitive disturbance were discarded, whereas those associated with PD were de-noised by a threshold and a thresholding function. Finally, the PRPD signals were reconstructed using the de-noised components. Wavelet Long Short-Term Memory to Fault Forecasting in Electrical Power Grids Nathielle Waldrigues Branco, Mariana Santos Matos Cavalca, Stefano Frizzo Stefenon, Valderi Reis Quietinho Leithardt Subject: Engineering, Electrical & Electronic Engineering Keywords: Electrical Power Grids; Fault Forecasting; Long Short-Term Memory; Time Series Forecasting; Wavelet Transform The electric power distribution utility is responsible for providing energy to consumers in a continuous and stable way; failures in the electrical power system reduce the reliability indexes of the grid, directly harming its performance. For this reason, there is a need for failure prediction to reestablish power in the shortest possible time. Considering an evaluation of the number of failures over time, this paper proposes to perform a failure prediction during the first year of the pandemic in Brazil (2020) to verify the feasibility of using time series forecasting models for fault prediction. The Long Short-Term Memory (LSTM) model is evaluated to obtain a forecast result that can be used by the electric power utility to organize the maintenance teams. The Wavelet transform proves promising in improving the predictive ability of the LSTM, making the Wavelet LSTM model suitable for the study at hand. The results show that the proposed approach achieves better results regarding the evaluation of the prediction error and shows robustness when a statistical analysis is performed.
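The Wavelet LSTM entry above pairs a wavelet transform with an LSTM forecaster. The sketch below shows one common way to wire that up: the fault-count series is smoothed by keeping only the DWT approximation, windowed into lagged samples, and fed to a small Keras LSTM. The wavelet, level, window length, network size, and synthetic data are all illustrative assumptions and not the authors' exact configuration.

```python
import numpy as np
import pywt
import tensorflow as tf

def wavelet_trend(series, wavelet="db4", level=3):
    """Keep only the level-`level` approximation (details zeroed) as a smoothed input."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

def make_windows(series, lags=12):
    """Build (samples, timesteps, 1) lag windows and one-step-ahead targets."""
    X = np.array([series[t - lags:t] for t in range(lags, len(series))])
    return X[..., None], series[lags:]

# Illustrative daily fault counts (synthetic stand-in for utility records).
rng = np.random.default_rng(1)
faults = (10 + 3 * np.sin(np.arange(365) / 7) + rng.poisson(2, 365)).astype(float)

X, y = make_windows(wavelet_trend(faults))
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=X.shape[1:]),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=16, verbose=0)
print(model.predict(X[-1:], verbose=0))   # one-step-ahead forecast
```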
Statistical Model-Based Classification to Detect Patient-Specific Spike-and-Wave in EEG Signals Antonio Quintero Rincón, Hadj Batatia, Jorge Prende, Valeria Muro, Carlos D'Giano Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Spike-and-wave; Generalized Gaussian distribution; EEG; Morlet wavelet; k-nearest neighbors classifier; Epilepsy Spike-and-wave discharge (SWD) pattern detection in electroencephalography (EEG) signals is a key signal processing problem. It is particularly important for overcoming time-consuming, difficult, and error-prone manual analysis of long-term EEG recordings. This paper presents a new SWD method with a low computational complexity that can be easily trained with data from standard medical protocols. Precisely, EEG signals are divided into time segments for which the Morlet 1-D decomposition is applied. The generalized Gaussian distribution (GGD) statistical model is fitted to the resulting wavelet coefficients. A k-nearest neighbors (k-NN) self-supervised classifier is trained using the GGD parameters to detect the spike-and-wave pattern. Experiments were conducted using 106 spike-and-wave signals and 106 non-spike-and-wave signals for training and another 96 annotated EEG segments from six human subjects for testing. The proposed SWD classification methodology achieved 95 % sensitivity (True positive rate), 87% specificity (True Negative Rate), and 92% accuracy. These results set the path to new research to study causes underlying the so-called absence epilepsy in long-term EEG recordings. A Comparison of Denoising Methods in Onset Determination in Medial Gastrocnemius Muscle Activations during Stance Jian Zhang, Rahul Soangra, Thurmon E. Lockhart Subject: Keywords: ensemble empirical mode decomposition (EEMD); denoising; mode mixing; electromyographic (EMG) signals; filtering; wavelet method One of the most basic pieces of information gained from dynamic electromyography is accurately defining muscle action and phase timing within the gait cycle. The human gait relies on selective timing and the intensity of appropriate muscle activations for stability, loading, and progression over the supporting foot during stance, and further to advance the limb in the swing phase. A common clinical practice is utilizing a low-pass filter to denoise integrated electromyogram (EMG) signals and to determine onset and cessation events using a predefined threshold. However, the accuracy of the defining period of significant muscle activations via EMG varies with the temporal shift involved in filtering the signals; thus, the low-pass filtering method with a fixed order and cut-off frequency will introduce a time delay depending on the frequency of the signal. In order to precisely identify muscle activation and to determine the onset and cessation times of the muscles, we have explored here onset and cessation epochs with denoised EMG signals using different filter banks: the wavelet method, empirical mode decomposition (EMD) method, and ensemble empirical mode decomposition (EEMD) method. In this study, gastrocnemius muscle onset and cessation were determined in sixteen participants within two different age groups and under two different walking conditions. Low-pass filtering of integrated EMG (iEMG) signals resulted in premature onset (28% stance duration) in younger and delayed onset (38% stance duration) in older participants, showing the time-delay problem involved in this filtering method. 
Comparatively, the wavelet denoising approach detected onset for normal walking events most precisely, whereas the EEMD method showed the smallest onset deviation. In addition, EEMD-denoised signals could further detect pre-activation onsets during a fast walking condition. A comprehensive comparison of denoising EMG signals using EMD, EEMD, and wavelet denoising is discussed in order to accurately define muscle onset under different walking conditions. Investigation of De Speckling Techniques for Echocardiographic Images: A Review Rehan Ahmad, Mohan Awasthy Subject: Engineering, Electrical & Electronic Engineering Keywords: ultrasound image; speckle noise; wiener filter; average filter; wavelet filter; adaptive filter; fractional filter Speckle noise corrupts a major part of the ultrasound image, because of which the quality deteriorates and the loss of valuable information leads to false diagnosis. A large family of images, such as synthetic aperture radar (SAR) images, synthetic images, and simulated ultrasound images, require despeckling at the pre-processing stage for better processing. Cleaning the speckle from an image while preserving the edge details is a vital task. Nowadays, not only is despeckling considered an important process, but preserving information at the boundaries and edges of the image is also important, as most of the algorithms are able to remove speckle noise but fail to preserve edge details. This paper covers several recent methods for the removal of speckle noise along with the various metrics adopted for comparisons. The distinctive part of this paper is that a mathematical and parametric review has been done. A table is also included which summarizes the entire paper. Image Denoising Using a Compressive Sensing Approach Based on Regularization Constraints Assia El Mahdaoui, Abdeldjalil Ouahabi, Mohamed Said Moulay Subject: Engineering, General Engineering Keywords: compressive sensing; image reconstruction; regularization; total variation; augmented Lagrangian; non-local self-similarity; wavelet denoising In remote sensing applications, one of the key points is the acquisition, real-time pre-processing and storage of information. Due to the large amount of information present in the form of images or videos, compression of these data is necessary. Compressed sensing (CS) is an efficient technique to meet this challenge. It consists of acquiring a signal, assuming that it has a sparse representation, using a minimal number of non-adaptive linear measurements. After this CS process, a reconstruction of the original signal must be performed at the receiver. Reconstruction techniques are often unable to preserve the texture of the image and tend to smooth out its details. To overcome this problem, we propose in this work a CS reconstruction method that combines total variation regularization and the non-local self-similarity constraint. The optimization of this method is performed by the augmented Lagrangian, which avoids the difficult problem of non-linearity and non-differentiability of the regularization terms. The proposed algorithm, called denoising compressed sensing by regularization terms (DCSR), will not only perform image reconstruction but also denoising.
To evaluate the proposed algorithm, we compare its performance with state-of-the-art methods, such as Nesterov's algorithm, group-based sparse representation and wavelet-based methods, in terms of denoising and preservation of edges, texture and image details, as well as from the point of view of computational complexity. Our approach achieves a gain of up to 25% in terms of denoising efficiency and visual quality using two metrics: PSNR and SSIM. Continuous Wavelet Transform of Schwartz Tempered Distributions in $S'(\mathbb R^n)$ Jagdish Narayan Pandey, Jay Singh Maurya, Santosh Kumar Upadhyay, Hari Mohan Srivastava Subject: Mathematics & Computer Science, Analysis Keywords: function spaces and their duals; distributions; generalized functions; distribution space; wavelet transform of generalized functions In this paper we define a continuous wavelet transform of a Schwartz tempered distribution $f \in S^{'}(\mathbb R^n)$ with wavelet kernel $\psi \in S(\mathbb R^n)$ and derive the corresponding wavelet inversion formula, interpreting convergence in the weak topology of $S^{'}(\mathbb R^n)$. It turns out that the wavelet transform of a constant distribution is zero, and our wavelet inversion formula is not true for a constant distribution, but it is true for a non-constant distribution which is not equal to the sum of a non-constant distribution with a non-zero constant distribution. Automated Recognition of Epileptic EEG States Using a Combination of Symlet Wavelet Processing, a Gradient Boosting Machine, and a Grid Search Optimizer Xiashuang Wang, Guanghong Gong, Ni Li Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: recognition of epilepsy EEG; Symlet wavelet; gradient boosting machine; grid search optimizer; multiple-index evaluation Automatic recognition methods for non-stationary EEG data collected from EEG sensors play an essential role in neurological detection. The integrative approach proposed in this study consists of Symlet wavelet processing, a gradient boosting machine, and a grid search optimizer for a three-level classification scheme for normal subjects, intermittent epilepsy, and continuous epilepsy. Fourth-order Symlet wavelets were adopted to decompose the EEG data into five time-frequency sub-bands, whose statistical features were computed and used as classification features. The grid search optimizer was used to automatically find the optimal parameters for training the classifier. The classification accuracy of the gradient boosting machine was compared with that of a support vector machine and a random forest classifier constructed according to previous descriptions. Multiple indices were used to evaluate the Symlet wavelet transform-gradient boosting machine-grid search optimizer classification scheme, which provided better classification accuracy and detection effectiveness than has recently been reported in other work on three-level classification of EEG data. The Guided Ultrasonic Wave Oscillation Phase Relation Between the Surfaces of Plate-Like Structures of Different Material Settings Liv Rittmeier, Natalie Rauter, Andrey Mikhaylenko, Rolf Lammering, Michael Sinapius Subject: Engineering, Mechanical Engineering Keywords: Guided Ultrasonic Waves; Continuous Wavelet Transformation; Instantaneous Phase Angle; Oscillation Phase; Numerical Simulation; Finite Element Method Lamb waves occur in thin-walled structures in two wave modes, the symmetric and the antisymmetric mode.
Their oscillation on the structures' surfaces is either in phase (symmetric) or shifted by a phase angle of π (antisymmetric). In this work, a method is developed to compare the surfaces' oscillation phase relation. It is based on the evaluation of time signals regarding the instantaneous phase angle using the continuous wavelet transformation and, as a comparative method, the short-time Fourier transformation. For this purpose, numerical simulations utilizing the finite element method provide time signals from the top and bottom surfaces of different thin-walled structures. They differ with respect to their material settings and laminate configurations. The numerically obtained time signals are evaluated by the developed methods. The occurring oscillation phase differences on the top and bottom surfaces are studied and both methods are compared. Subsequently, the oscillation phase is evaluated experimentally for the wave propagation in a fiber metal laminate. It is shown that the method based on the continuous wavelet transformation is suitable for the evaluation of oscillation phase relations in time signals. Additionally, it is proven that fiber metal laminates show only two phase relations, which indicates the occurrence of Lamb waves. Influence of a Flat Polyimide Inlay on the Propagation of Guided Ultrasonic Waves in a Narrow GFRP-Specimen Liv Rittmeier, Thomas Roloff, Natalie Rauter, Andrey Mikhaylenko, Jan Niklas Haus, Rolf Lammering, Andreas Dietzel, Michael Sinapius Subject: Engineering, Mechanical Engineering Keywords: structural health monitoring; narrow specimen; guided ultrasonic waves; continuous wavelet transformation; numerical simulation; composite materials; GFRP This work investigates how integrated polyimide inlays with applied sensor bodies influence the guided ultrasonic wave propagation in narrow glass fiber-reinforced polymer specimens. Preliminary numerical simulations indicate that in a damping-free specimen, the inlays cause reflections for the S0-mode propagation. Hence, an air-coupled ultrasonic technique and a 3D laser Doppler vibrometer measurement are used to measure different parts of the propagating waves' displacement field after burst excitation at different frequencies. No significant reflections from the inlay can be seen in the experiments. However, it is shown that the narrow width of the strip specimen causes periodical reflections that superimpose with the excited wave fronts. A continuous wavelet transformation in the time-frequency domain filters discontinuities from the measurement signal and is used for the reconstruction of the time signals. The reconstructed signals are used in a spatial continuous wavelet transformation to identify the occurring wavelengths and hence to prove the assumption of reflections from the narrow edges. Since the amplitudes of the reflections identified in the numerical data at the polyimide inlays are an order of magnitude smaller than the excited wave packets, it is concluded that material damping of the epoxy resin matrix extinguishes possible reflections from the inlays.
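Both guided-wave entries above rely on the continuous wavelet transform of surface time signals; the first compares the instantaneous phase on the top and bottom surfaces to separate symmetric from antisymmetric Lamb modes. The sketch below shows that comparison in its simplest form, using PyWavelets' complex Morlet CWT evaluated at a single scale tuned to the excitation frequency; the wavelet parameters, sampling rate, and synthetic tone bursts are illustrative assumptions, and the short-time Fourier cross-check used by the authors is omitted.

```python
import numpy as np
import pywt

def surface_phase_difference(top, bottom, fs, f0, wavelet="cmor1.5-1.0"):
    """Phase difference between two surface signals at the excitation frequency f0,
    from a complex-Morlet CWT; ~0 rad suggests a symmetric (S0) mode, ~pi an
    antisymmetric (A0) mode."""
    scale = pywt.central_frequency(wavelet) * fs / f0   # single scale tuned to f0
    c_top, _ = pywt.cwt(top, [scale], wavelet, sampling_period=1 / fs)
    c_bot, _ = pywt.cwt(bottom, [scale], wavelet, sampling_period=1 / fs)
    dphi = np.angle(c_top[0]) - np.angle(c_bot[0])
    return np.angle(np.exp(1j * dphi))   # wrap to (-pi, pi]

# Synthetic 100 kHz tone bursts standing in for top/bottom surface signals.
fs, f0 = 2e6, 100e3
t = np.arange(0, 2e-4, 1 / fs)
burst = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)
print(np.median(np.abs(surface_phase_difference(burst, -burst, fs, f0))))  # ~pi
```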
Planetary Wave Spectrum in the Stratosphere–Mesosphere during Sudden Stratospheric Warming 2018 Yuke Wang, Gennadi Milinevsky, Oleksandr Evtushevsky, Andrew Klekociuk, Wei Han, Asen Grytsai, Oleksandr Antyufeyev, Yu Shi, Oksana Ivaniha, Valerii Shulga Subject: Earth Sciences, Atmospheric Science Keywords: planetary wave; mesosphere; stratosphere; major sudden stratospheric warming; microwave radiometer; carbon monoxide; wavelet power spectra The planetary wave activity in the stratosphere–mesosphere during the Arctic major Sudden Stratospheric Warming (SSW) in February 2018 is discussed on the basis of the microwave radiometer (MWR) measurements of carbon monoxide (CO) above Kharkiv, Ukraine (50.0° N, 36.3° E) and the Aura Microwave Limb Sounder (MLS) measurements of CO, temperature and geopotential heights. From the MLS data, eastward and westward migrations of wave 1/wave 2 spectral components were differentiated, to which less attention was paid in previous studies. Abrupt changes in the zonal wave spectra occur with the zonal wind reversal near 10 February 2018. Eastward wave 1 and wave 2, observed before the SSW onset, disappear during the SSW event, when westward wave 1 becomes dominant. Wavelet power spectra of mesospheric CO variations show statistically significant periods in a band of 20–30 days using both MWR and MLS data. Approximately 10-day periods appear only after the SSW onset. Since the propagation of upward planetary waves is limited in the easterly zonal flow in the stratosphere after the zonal wind reversal during SSW, forced planetary waves in the mid-latitude mesosphere may exist due to the instability of the zonal flow. Prediction of Dam Deformation Using SSA-LSTM Model Based on Empirical Mode Decomposition Method and Wavelet Threshold Noise Reduction Caiyi Zhang, Shuyan Fu, Bin Ou, Zhenyu Liu, Mengfan Hu Subject: Engineering, Construction Keywords: concrete dams; prediction model; empirical modal decomposition method; wavelet threshold; sparrow search algorithm; long short-term memory The deformation monitoring information of concrete dams contains some high-frequency components, and these high-frequency components are strongly nonlinear, which reduces the accuracy of dam deformation prediction. In order to solve such problems, this paper proposes a concrete dam deformation monitoring model based on empirical mode decomposition (EMD) combined with wavelet threshold noise reduction and sparrow search algorithm (SSA) optimization of a long short-term memory network (LSTM). The model uses EMD combined with a wavelet threshold to decompose and denoise the measured deformation data. On this basis, the LSTM model optimized by the SSA is used to mine the nonlinear functional relationship between the reconstructed monitoring data and various influencing factors. The example analysis shows that the model has good calculation speed, fitting and prediction accuracy, and it can effectively mine the data characteristics inherent in the measured deformation and reduce the influence of noise components on the modeling accuracy.
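The dam-deformation entry above denoises the measured record with EMD combined with wavelet thresholding before the SSA-optimised LSTM is trained. The sketch below covers only the wavelet-threshold part of that pre-processing (the EMD split and the SSA-LSTM stage are not reproduced); the db4 wavelet, decomposition level, universal soft threshold, and synthetic displacement record are illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_threshold_denoise(series, wavelet="db4", level=4):
    """Universal soft threshold on the detail coefficients, then reconstruction."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from finest details
    thr = sigma * np.sqrt(2.0 * np.log(len(series)))      # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

# Synthetic daily displacement record (mm): slow trend + seasonal term + noise.
t = np.arange(1500, dtype=float)
clean = 0.002 * t + 0.8 * np.sin(2 * np.pi * t / 365.25)
noisy = clean + np.random.default_rng(7).normal(0.0, 0.15, t.size)
denoised = wavelet_threshold_denoise(noisy)
print(np.std(noisy - clean), np.std(denoised - clean))    # second value should be smaller
```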
IWSNs with On-sensor Data Processing for Machine Fault Diagnosis Liqun Hou, Junteng Hao, Yongguang Ma, Neil Bergmann Subject: Engineering, Electrical & Electronic Engineering Keywords: industrial wireless sensor networks (IWSNs), fault diagnosis, wavelet transform, support vector machine, Industrial Internet of Things (IIoT) Machine fault diagnosis systems need to collect and transmit dynamic monitoring signals, like vibration and current signals, at high speed. However, industrial wireless sensor networks (IWSNs) and the Industrial Internet of Things (IIoT) are generally based on low-speed wireless protocols, such as ZigBee and IEEE 802.15.4. To address this tension when implementing machine fault diagnosis applications in the IIoT, this paper proposes a novel IWSN with on-sensor data processing. On-sensor wavelet transforms using four popular mother wavelets are explored for fault feature extraction, while an on-sensor support vector machine classifier is investigated for fault diagnosis. The effectiveness of the presented approach is evaluated by a set of experiments using motor bearing vibration data. The experimental results show that, compared with raw data transmission, the proposed on-sensor fault diagnosis method can reduce the payload transmission data by 99.95% and reduce the node energy consumption by about 10%, while the fault diagnosis accuracy of the proposed approach reaches 98%. Fusion of Enhanced and Synthetic Vision System Images for Runway and Horizon Detection Ahmed F. Fadhil, Raghuveer Kanneganti, Lalit Gupta, Ravi Vaidyanathan Subject: Engineering, Control & Systems Engineering Keywords: unmanned aircraft (UAV); sensing; intelligent transportation; image fusion; signal alignment; runway detection; image registration; wavelet transform; Hough transform UAV network operation enables the gathering and fusion of information from disparate sources for flight control in both manned and unmanned platforms. In this investigation, a novel procedure for detecting runways and horizons as well as enhancing surrounding terrain is introduced based on the fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations due to signal misalignment. We address this through a registration step to align the EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, it is demonstrated that different aspects of the EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. We believe the fusion architecture developed holds promise for incorporation into head-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
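The EVS-SVS fusion entry above combines DWT sub-bands of two registered images under different fusion rules. The sketch below shows one simple, generic rule of that kind (average the approximations, keep the larger-magnitude detail coefficient); it is not one of the four rules evaluated in the paper, whose exact definitions the abstract does not give, and the image size, wavelet, and random test frames are illustrative assumptions.

```python
import numpy as np
import pywt

def dwt_fuse(evs, svs, wavelet="db2", level=2):
    """Average the approximation coefficients of two registered images and keep
    the larger-magnitude detail coefficient at each position, then reconstruct."""
    a = pywt.wavedec2(evs.astype(float), wavelet, level=level)
    b = pywt.wavedec2(svs.astype(float), wavelet, level=level)
    fused = [(a[0] + b[0]) / 2.0]
    for da, db in zip(a[1:], b[1:]):
        fused.append(tuple(np.where(np.abs(ca) >= np.abs(cb), ca, cb)
                           for ca, cb in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

# Two small synthetic "registered" frames standing in for EVS/SVS imagery.
rng = np.random.default_rng(3)
evs = rng.normal(size=(128, 128))
svs = rng.normal(size=(128, 128))
print(dwt_fuse(evs, svs).shape)
```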
Evaluation of the Continuous Wavelet Transform for Detection of Single-Point Rub in Aeroderivative Gas Turbines with Accelerometers Alejandro Silva, Alejandro Zarzo, Juan Manuel Munoz-Guijosa, Francesco Miniello Subject: Engineering, Mechanical Engineering Keywords: machine fault diagnosis; rotordynamics; rub; aeroderivative turbines; accelerometers; early fault detection; fourier analysis; real cepstrum; continuous wavelet transform A common fault in turbomachinery is rotor-casing rub. Shaft vibration, measured with proximity probes, is the most powerful indicator of rotor-stator rub. However, in machines such as aeroderivative turbines, of increasing industrial relevance in power generation, constructive reasons prevent the use of those sensors, so only acceleration signals at selected casing locations are available. This implies several shortcomings in the characterization of the machinery condition, associated with a lower information content about the machine dynamics. In this work we evaluate the performance of the Continuous Wavelet Transform to isolate the accelerometer signal features that characterize rotor-casing rub in an aeroderivative turbine. The evaluation is carried out on a novel model of a rotor-flexible casing system. Due to the damped transients and other short-lived features that rub induces in the signals, the Continuous Wavelet Transform proves to be more effective than both Fourier and cepstrum analysis. This opens the possibility of early fault diagnosis of rub before it causes machine shutdown or damage. Fault Detection and Classification of Shunt Compensated Transmission Line Using Discrete Wavelet Transform and Naive Bayes Classifier Elhadi Aker, Mohammad Lutfi Otman, Veerapandiyan Veerasamy, Ishak Aris, Noor Abdul Wahab, Hashim Hizam Subject: Engineering, Electrical & Electronic Engineering Keywords: static synchronous compensator (STATCOM); discrete wavelet transform (DWT); multi-layer perceptron neural network (MLP); Bayes and Naive Bayes (NB) classifier This paper presents a methodology to detect and identify the type of fault that occurs in a shunt-connected static synchronous compensator (STATCOM) transmission line using a combination of the Discrete Wavelet Transform (DWT) and a Naive Bayes classifier. To study this, the network model is designed using Matlab/Simulink. Different faults, such as Line to Ground (LG), Line to Line (LL), Double Line to Ground (LLG) and three-phase (LLLG) faults, are applied at different zones of the system, with and without STATCOM, considering the effect of varying fault resistance. The three-phase fault current waveforms obtained are decomposed into several levels using the Daubechies db4 mother wavelet to extract features such as standard deviation and energy values. The extracted features are used to train classifiers such as the Multi-Layer Perceptron Neural Network (MLP), Bayes and Naive Bayes (NB) classifiers to classify the type of fault that occurs in the system. The results reveal that the proposed NB classifier outperforms the MLP and Bayes classifiers in terms of accuracy rate, misclassification rate, kappa statistics, mean absolute error (MAE), root mean square error (RMSE), relative absolute error (RAE) and root-relative squared error (RRSE).
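The fault-classification entry above extracts per-level standard deviation and energy from a db4 decomposition of the fault current and feeds them to a Naive Bayes classifier. The sketch below reproduces that feature-plus-classifier pattern on toy data; the 50 Hz synthetic current windows, the crude "fault" transient, the decomposition level, and the train/test split are illustrative assumptions, not the paper's Simulink setup.

```python
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def wavelet_features(signal, wavelet="db4", level=4):
    """Standard deviation and log-energy of each decomposition level."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([v for c in coeffs for v in (np.std(c), np.log(np.sum(c ** 2) + 1e-12))])

rng = np.random.default_rng(4)
t = np.linspace(0, 0.1, 1000)

def window(fault):
    """Toy 50 Hz current window, optionally with a crude fault transient."""
    x = np.sin(2 * np.pi * 50 * t) + rng.normal(0, 0.05, t.size)
    if fault:
        x[400:500] += rng.normal(0, 1.5, 100)
    return wavelet_features(x)

X = np.array([window(i % 2 == 1) for i in range(200)])
y = np.arange(200) % 2                      # 0 = no fault, 1 = fault
clf = GaussianNB().fit(X[:150], y[:150])
print((clf.predict(X[150:]) == y[150:]).mean())   # held-out accuracy
```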
Using the PI ProcessBook to Monitor Activities of Daily Living in Smart Home Care within IoT Jan Vanus, Jan Kubicek, Ojan Gorjani, Jiri Koziorek Subject: Engineering, Electrical & Electronic Engineering Keywords: smart home care (SHC); monitoring; prediction; trend detection; artificial neural network (ANN), Bayesian regulation method (BRM), wavelet transformation (WT), SPSS (statistical package for the social sciences) IBM; IoT (internet of things), activities of daily living (ADL) This article describes the use of the PI ProcessBook software tool for the visualization and indirect monitoring of the occupancy of SHC rooms from measured operational and technical quantities, for monitoring of activities of daily living in support of the independent life of elderly persons. The proposed method for data processing (predicting the CO2 course using neural networks from the measured indoor temperature Ti (°C), outdoor temperature To (°C) and indoor relative humidity rHi (%)) was implemented, verified and compared in the MATLAB SW tool and the IBM SPSS SW tool with IoT platform connectivity. Within the proposed method, a Stationary Wavelet Transform de-noising algorithm was used to remove the noise from the resulting predicted course. In order to verify the method, two long-term experiments were performed (specifically from February 8 to February 15, 2015, and from June 8 to June 15, 2015) and two short-term experiments (on February 8, 2015 and June 8, 2015). For the best results of the trained ANN BRM within the prediction of CO2, the correlation coefficient R for the proposed method was up to 90%. The verification of the proposed method confirmed the possibility of using the detected presence of persons in the monitored SHC premises for room ADL monitoring. Demand Prediction with Machine Learning Models; State of the Art and a Systematic Review of Advances Amir Mosavi, Sina Faizollahzadeh Ardabili, Shahabodin Shamshirband Subject: Engineering, Electrical & Electronic Engineering Keywords: demand prediction, energy systems; machine learning; artificial neural network (ANN); support vector machines (SVM); neuro-fuzzy; ANFIS; wavelet neural network (WNN); big data; decision tree (DT); ensemble learning; hybrid models; data science; deep learning; renewable energies; energy informatics; prediction; forecasting; energy demand Electricity demand prediction is vital for energy production management and the proper exploitation of the present resources. Recently, several novel machine learning (ML) models have been employed for electricity demand prediction to estimate the future prospects of the energy requirements. The main objective of this study is to review the various ML models applied for electricity demand prediction. Through a novel search and taxonomy, the most relevant original research articles in the field are identified and further classified according to the ML modeling technique, prediction type, and the application area. A comprehensive review of the literature identifies the major ML models and their applications, and provides a discussion on the evaluation of their performance. This paper further discusses the trend and the performance of the ML models. As a result, this research reports an outstanding rise in the accuracy, robustness, precision and generalization ability of the prediction models achieved by using hybrid and ensemble ML algorithms.
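Referring back to the PI ProcessBook entry above (prediction of the CO2 course from indoor temperature, outdoor temperature, and indoor relative humidity, followed by stationary-wavelet de-noising), the sketch below reproduces that two-stage idea in miniature: a small scikit-learn MLP regressor stands in for the ANN-BRM, and a db4 stationary wavelet transform with a soft threshold smooths the predicted course. All signals are synthetic, and every parameter (network size, wavelet, level, threshold) is an illustrative assumption.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

# Synthetic stand-ins for the measured quantities (Ti, To, rHi) and the CO2 course.
rng = np.random.default_rng(11)
n = 512                                   # multiple of 2**level so the SWT length check passes
Ti = 22 + 2 * np.sin(np.arange(n) / 40) + rng.normal(0, 0.2, n)
To = 10 + 5 * np.sin(np.arange(n) / 96) + rng.normal(0, 0.5, n)
rHi = 45 + 5 * np.cos(np.arange(n) / 60) + rng.normal(0, 1.0, n)
co2 = 400 + 30 * (Ti - 22) - 5 * (To - 10) + 0.5 * (rHi - 45) + rng.normal(0, 5, n)

X = np.column_stack([Ti, To, rHi])
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, co2)
predicted = model.predict(X)

# Stationary-wavelet de-noising of the predicted CO2 course (soft threshold on details).
coeffs = pywt.swt(predicted, "db4", level=3)
thr = np.median(np.abs(coeffs[-1][1])) / 0.6745 * np.sqrt(2 * np.log(n))
smoothed = pywt.iswt([(cA, pywt.threshold(cD, thr, mode="soft")) for cA, cD in coeffs], "db4")
print(np.corrcoef(co2, smoothed)[0, 1])
```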
Biological activities of spores and metabolites of some fungal isolates on certain aspects of the spiny bollworms Earias insulana (Boisd.) (Lepidoptera: Noctuidae) Eman Mohammed Abd-ElAzeem1, Warda Ahmed Zaki El-Medany1 & Hend Mohammed Sabry1 Egyptian Journal of Biological Pest Control volume 29, Article number: 90 (2019) Biological activities of spores and metabolites of some fungi isolated from dead larvae of the spiny bollworm (SBW), Earias insulana (Boisd.) (Lepidoptera: Noctuidae), against the newly hatched larvae of the pest were evaluated. Results showed that the fungi Metarhizium anisopliae, Acremonium sp., and Paecilomyces variotii affected the newly hatched larvae of the SBW. Acremonium sp. was the most potent one, as it caused the highest newly hatched larval mortality percentages (65 and 58.33%) for its spore suspension and metabolites, respectively, while the lowest one (41%) was for P. variotii metabolites. Also, spore suspensions of all the fungal isolates caused higher larval mortality than the fungal metabolites. Studying the enzymatic activity showed that Acremonium sp. produced a protease enzyme on media containing gelatin, which caused the highest larval mortality (72.22%). These isolates showed different effects on all stages of the pest and decreased pupal weight, adult emergence percentages, numbers of deposited eggs, and hatchability percentages compared with the control. Identification of Acremonium sp. EZ1 was confirmed using 18S rRNA, and its accession number is MN25101. The spiny bollworm (SBW), Earias insulana (Boisd.) (Lepidoptera: Noctuidae), is an important lepidopteran pest found in many countries of the Mediterranean basin, as well as in Africa and Asia (Mansour 2004). The SBW is one of the main cotton pests. Its larvae usually attack cotton flower buds, flowers, and bolls, causing damage to seeds and fiber, especially at the late growing stage of the cotton plants, leading to a decrease in the quality and quantity of the lint and the obtained oil yield (Salem 2008). Entomopathogenic fungi (EPF) are biological control agents against insect pests. Fungi invade insects by penetrating the body cuticle through a combination of mechanical force and enzymatic degradation, depending on the structure and composition of the insect cuticle (Reda et al. 2013). Microbial degradation of insect lipid, protein, and chitin, as well as the production of lipase, protease, and chitinase, has attracted worldwide attention for insect control and has become the object of extensive research (Barra et al. 2015). Reda et al. (2013) studied the effects of several cuticle-degrading microorganisms as biological control agents against an economic pest, the pink bollworm, Pectinophora gossypiella (Saund.), causing high larval mortality and affecting pupation and hatchability. Duarte et al. (2016) investigated the effect of Beauveria bassiana and Metarhizium rileyi on all biological aspects of the diamondback moth (Plutella xylostella L.). Ibrahim et al. (2016) showed that B. bassiana and Paecilomyces lilacinus were virulent against the greater wax moth, Galleria mellonella L., causing 98.0 and 87.5% larval mortality with a lethal time (LT50) of 1.7 and 2.2 days, respectively. Latent effects were markedly obvious on pupation and rates of adult emergence. Also, El-Massry et al. (2016) reported the efficacy of Trichoderma harzianum on the cotton bollworms E. insulana and P. gossypiella. Proteases from a variety of sources (viruses, bacteria, fungi, plants, and insects) have toxicity towards insects.
Other proteases play roles in insect development or digestion, but exert an insecticidal effect when over-expressed from genetically engineered plants or microbial pathogens. The sites of protease toxic activity range from the insect midgut to the hemocoel (body cavity) to the cuticle (Robert and Bryony 2010). The main objectives of this study were to isolate fungi from dead larvae of the SBW and to evaluate the efficacy of these isolates on the pest and their ability to produce cuticle-degrading enzymes. Rearing of the spiny bollworm Full-grown SBW larvae of a field strain were collected from infested cotton bolls in Sharkia Province, Egypt, and reared in the laboratory at the Bollworms Research Department, Plant Protection Research Institute, Agriculture Research Center, Giza, Egypt, for 6 generations. The neonate larvae were transferred into glass tubes (2.5 × 7 cm) containing about 4 g of semi-artificial diet (Shorey and Hale 1965). The experiments were performed at a constant temperature of 26 ± 1 °C and 75 ± 5% RH. The diet for maintaining the laboratory colony was prepared by adding boiling water to 250 g kidney beans and 125 g grated wheat, heating for 70 min, then removing from the heat and leaving for 20 min to cool before draining off the water. The diet was blended with 100 ml milk in an electric blender and placed in the refrigerator for 24 h. After that, 49 g dry active yeast, 3 g ascorbic acid, 1.75 g sorbic acid, 1.75 g methyl parahydroxybenzoate, 8 ml of a vitamin mixture, and 2.5 ml formaldehyde 34–38% were added; all were thoroughly blended and kept in the refrigerator for 24 h before being used (Amer and El-Sayed 2015). The culture was kept free from any contamination with microorganisms or pesticides. The dead larvae in the culture were collected when fully grown and stored in sterilized, tightly closed vials at 4 °C in a refrigerator until needed (Mahfouz and Abou El-Ela 2011). Fungi isolation technique In order to reveal any microorganisms associated with the dead spiny bollworm (SBW) larvae (4th instar), each of the refrigerated individuals was examined within 24–72 h from the time of storage under aseptic conditions. The larvae were surface sterilized by dipping in 2% sodium hypochlorite for 3–5 min to eliminate fungi on the insect's surface, then passed through 5 separate washings with sterile distilled water (Crecchio and Stotzky 2001). To ensure appropriate surface sterilization, checks were made by spreading the last washing solution on Czapek-Dox agar medium. Sterilized larvae were dried between 2 filter papers (Whatman No. 1), then transferred aseptically into a sterile mortar, macerated with a sterile pestle, diluted, plated on Czapek-Dox agar medium and incubated at 30 °C for 5–7 days. Incubated plates were inspected daily to observe colony growth; colonies were then purified and stored on slants of the desired artificial media at 4 °C. The isolates were subcultured periodically until they were used in the subsequent experiments. Healthy larvae were subjected to the same isolation procedures to check for any dormant pathogens. Screening of fungal isolates for their mortality effect on E. insulana Spore suspensions were obtained by washing 7-day-old slants of the tested fungal isolates (Dulmage et al.
1971; Mohd-Salleh and Lewis 1983); then, 100 ml of Czapek-Dox medium (Oxoid 1982), composed of (g/l) 20 sucrose, 2.0 NaNO3, 1.0 KH2PO4, 0.5 MgSO4·7H2O, 0.5 KCl, and 20.0 agar-agar, dissolved in 1 l tap water at pH 5.0 in a 250-ml Erlenmeyer flask, was inoculated with each suspension. The inoculated broth was incubated at 30 °C for 7 days, while the metabolites were obtained by filtration using filter paper (Whatman No. 1). The spore suspension and filtrate of all isolates were tested for their mortality effect and their effects on biological aspects of E. insulana as described in the bioassay method. Two milliliters of each spore suspension and of the metabolites was mixed with the artificial diet in each dish, while the diet of the control was mixed with water only. Each treatment was replicated 3 times. Batches of 20 1st instar larvae were transferred immediately after hatching, using a fine brush, to each treated Petri dish about 30 min after mixing into the diet. Treated Petri dishes were covered by a fine, soft paper below the glass cover to prevent larvae from escaping. All treatments were incubated at the constant conditions of 26 ± 1 °C and 70 ± 5% RH. After 24 h of exposure and feeding, dead and alive larvae were counted. The mortality percentages were calculated. $$ Larval\ mortality\%=\frac{dead\ larvae}{total\ larvae}\times 100 $$ Mortality data were corrected according to Abbott (1925). $$ Corrected\ mortality\%=\frac{mortality\ in\ treated-mortality\ in\ control}{100-mortality\ in\ control}\times 100 $$ The surviving larvae of each treatment were transferred singly to glass tubes (2 × 7.5 cm) containing about 4 g of untreated control diet, covered with a piece of absorbent cotton and held under the same conditions as mentioned above. Larvae were examined daily to record the biological parameters, larval duration and pupation percentage; then, pupae were transferred individually to other clean tubes and incubated until moth emergence. Pupal duration, adult emergence percentage, sex ratio (as females), and deformed adults were calculated. Emerged moths from each treatment were sexed and caged in 2 pairs, and eggs were deposited on strips of muslin cloth hung in the chimney cages. Forty pairs (male and female) were used from each treatment under the previously mentioned rearing conditions. A piece of cotton wool previously soaked in 10% sugar solution was hung inside each jar near its upper opening for moth feeding and was replaced by a new one every 2 days. The upper openings of the cages were covered by muslin cloth followed by a paper tightly secured with rubber bands. Each cage was examined daily to record data on several biological aspects such as the preovipositional and ovipositional periods, number of deposited eggs, postovipositional period, and longevity of males and females. The deposited eggs were collected daily from the strips of muslin cloth, then transferred to a convenient glass jar and incubated under the same conditions to record hatchability percentages. Characterization of most potent fungal isolate Identification of isolated fungi by light microscope The developed fungal colonies were examined daily, and the purified fungi were identified to the species level whenever possible. The identification of fungal genera and species was carried out with the help of the following universally accepted keys for identification of the different isolates.
Morphology was based on colony shape, height, and color of the aerial hyphae, as well as the base color, growth rate, margin characteristics, surface texture, and depth of growth into the medium. Tests were compared with a taxonomic key for the genus Acremonium (Rifai 1969). Molecular characterization (sequence of 18S rRNA gene of DNA) Sequencing of the 18S rRNA gene of the fungal isolates was done at Sigma Scientific Services Co., Cairo, Egypt, and was also kindly confirmed by the Plant Pathology Research Institute, Agricultural Research Center, Giza, Egypt (Figs. 1 and 2). Molecular characterization involved the following steps according to the protocol adopted by Woese and Fox (1977) and Abdel-Salam (2003). 18S ribosomal RNA gene of Acremonium sp. Phylogenetic dendrogram of different fungal isolate accessions revealed by average linkage cluster analysis based on the 18S rRNA partial sequence Screening of lipase, protease and chitinase produced by Acremonium sp. A seven-day-old fungal culture was used as a standard inoculant. At the end of the incubation period for each enzyme (protease, lipase and chitinase), the fungal cultures were filtered and the clear supernatants were considered the source of crude enzyme (Reda et al. 2013). The most active isolate of Acremonium sp. was screened for lipase, protease, and chitinase production according to the clearing zone technique using Dox-yeast extract-tributyrin agar (Elwan et al. 1977); Dox agar with NaNO3 replaced by 0.2% gelatin (Ammar et al. 1991); and chitin medium (CM), which consists of (g/l): colloidal chitin, 0.5; yeast extract, 0.5; (NH4)2SO4, 1.0; MgSO4·7H2O, 0.3; K2HPO4, 1.36; agar-agar, 20 (Rajamanickam et al. 2012), respectively. The culture filtrates of the lipase, protease, and chitinase media of the tested strains, obtained after incubation for 7 days at 30 °C, were screened against the SBW. The obtained results were analyzed according to Little and Hills (1975), using the CoStat computer program (Cohort Software, P. O. Box 1149, Berkeley CA 9471) (CoStat Statistical Software, 2005). Ten fungal isolates from naturally dead larvae of the SBW were preliminarily bioassayed for pathogenicity against neonate larvae of the pest. The most effective isolates were identified morphologically and biochemically as Metarhizium anisopliae, Paecilomyces variotii, and Acremonium sp. Data in Table 1 show the effect of the selected isolates on the percentages of larval and pupal mortality, adult emergence, and deformed adults. Analysis of variance revealed highly significant effects on larval mortality percentage for all fungal isolates compared with the control (3.3%). Acremonium sp. showed the highest larval mortality (65.00 and 58.33%) for its spore suspension and metabolites, respectively. The pupal mortality indicated that the effects of the 3 isolates, for both their spores and metabolites, were insignificant; also, Acremonium sp. showed the highest pupal mortality (14.48 and 14.40%) for spores and metabolites, respectively. Regarding adult emergence percentage, data indicated that there were highly significant differences between all isolates and the control. Acremonium sp. showed the lowest value of adult emergence (73.89 and 71.03%) compared with the control (100%). On the other hand, there were significant effects among all isolates concerning deformed adult percentage (14.48 and 12.71% for spores and metabolites) compared with the control (0%). From the previous results, it was obvious that Acremonium sp. was the most active isolate.
Table 1 Effect of some fungal isolates on larval, pupal, and adult stages of the spiny bollworm Earias insulana Data in Table 2 represent the effect of the previously selected isolates on the duration (in days) of some developmental stages (larval, pupal, and adult longevity) of the SBW. Statistical analysis showed a significant influence of all isolates on the developmental period of the surviving larvae. The 3 isolates shortened the larval duration compared with the control. There was a highly significant effect on the pupal period for all isolates, which gave shorter periods than the control. For the emerged females, treatment with the isolates had a significant effect on the preoviposition, oviposition, and postoviposition periods. Table 2 Effect of some fungal isolates on the duration (in days) of some developmental stages of the spiny bollworm Earias insulana Regarding the longevity of males, the data indicated that all tested treatments of the 3 isolates had significant effects on male longevity compared with the control. The number of eggs, hatchability percentage, and sex ratio are presented in Table 3. The data indicated a significant effect on the sex ratio percentage, while a highly significant effect was noticed on the number of eggs and the hatchability percentage compared with the control. Nada and Abdel-Azem (2005) reported a significant difference between the control and larvae of P. gossypiella treated with P. violacea for the same aspects. Table 3 Effect of some fungal isolates on sex ratio, fecundity, and fertility of the spiny bollworm Earias insulana Acremonium sp. was selected for further study as the most active fungus, causing the highest larval mortality. Screening of the selected isolates for the production of lipase, protease, and chitinase enzymes, using the clearing zone technique, was carried out. The results revealed that Acremonium sp. exhibited high protease activity, but no chitinase or lipase activity. Therefore, further study was carried out on the protease in vivo. The results showed that the in vivo screening of the proteolytic, chitinolytic, and lipolytic activities of Acremonium sp. filtrates against the SBW gave 72.22, 5.50, and 5.20%, respectively, compared with the control (5%). These results indicated that the isolate had proteolytic activity and that this activity caused the highest larval mortality effect. The same results for protease were obtained with P. violacea (Nada and Abdel-Azem 2005). Also, Jain et al. (2012) purified a protease from Acremonium sp. for commercial purposes. Also, Reda et al. (2013) studied the pathogenicity of protease and lipase enzymes produced by Streptomyces vinaceusdrappus against the PBW, P. gossypiella. Sargin et al. (2013) and Cristina and Gheorghe (2017) investigated the virulence of different EPF, such as M. anisopliae, B. bassiana, and Paecilomyces sp., associated with cuticle-degrading enzymes. These enzymes usually hydrolyze the major components of the insect's cuticle (protein, chitin, and lipid) during the infection process. The results indicated that the protease filtrates gave a high mortality percentage in E. insulana larvae. Also, Nada and Abdel-Azem (2005) reported similar results on P. gossypiella with a protease secreted by Paecilomyces sp. Molecular identification of the selected fungus The PCR product of the selected fungus (Acremonium sp.) was sequenced using the forward primer ITS1 (1).
The resulting DNA sequences of the PCR product were compared with published sequences using the Basic Local Alignment Search Tool (BLAST) program (http://www.ncbi.nlm.nih.gov/blast) to investigate whether they were homologous to GenBank data. Figure 3 illustrates that the sequence of the eluted PCR product of the selected fungus was homologous (97%) with the sequence of Acremonium sp. A phylogenetic tree showing the position of the Acremonium sp. isolate was also constructed from the evolutionary distance matrix based on the partial 18S rRNA gene sequences (Fig. 1). 18S ribosomal RNA gene, partial sequence; internal transcribed spacer 1 and 5.8S ribosomal RNA gene, complete sequence; and internal transcribed spacer 2, partial sequence The fungus Acremonium sp. had the ability to produce a cuticle-degrading enzyme (protease), so it can play an important role in the control of E. insulana in a safe manner and reduce environmental pollution by pesticides. All data are available in the manuscript, and the materials used in this work are of high transparency and grade. Abbott WS (1925) A method of computing the effectiveness of an insecticide. J Econ Entomol 18:265–267 Abdel-Salam AH (2003) Natural incidence of Cladosporium spp. as a bio-control agent against white flies and aphids in Egypt. J Appl Entomol 127(4):228–235 Amer AEA, El-Sayed AAA (2015) Lower threshold temperature and thermal unit of American bollworm, Helicoverpa armigera (Hübner) rearing on pea and lettuce and its rearing on new modified artificial diets. J Product & Dev 20(3):273–284 Ammar MS, Louboudy S, Abdul-Raouf UM (1991) Purification and properties of mesophilic proteases produced by Bacillus anthracis, S-44 isolated from a temple in Aswan. Al-Azhar Bull Sci 2(1):325–338 Barra P, Etcheverry M, Nesci A (2015) Improvement of the insecticidal capacity of two Purpureocillium lilacinum strains against Tribolium confusum. Insects 6(1):206–223 CoStat Statistical Software (2005) Microcomputer program analysis, version 6.311. CoHort Software, Monterey, California Crecchio G, Stotzky G (2001) Biodegradation and insecticidal activity of the toxin from Bacillus thuringiensis subsp. kurstaki bound on complexes of montmorillonite humic acid-hydroxy polymers. Soil Biol Biochem 33:573–581 Cristina P, Gheorghe S (2017) The role of hydrolytic enzymes produced by entomopathogenic fungi in the pathogenesis of insects: mini review. Romanian J Plant Protection 10:2248–2248 Duarte RT, Gonçalves KC, Espinosa DJ, Moreira LF, De Bortoli SA, Humber RA, Polanczyk RA (2016) Potential of entomopathogenic fungi as biological control agents of diamondback moth (Lepidoptera: Plutellidae) and compatibility with chemical insecticides. J Econ Entomol 109(2):594–601 Dulmage HT, Boening OP, Rehnborc CS, Hansen GD (1971) A proposed standardized bioassay for formulation of Bacillus thuringiensis based on the international unit. J Invertebr Pathol 18:240–245 El-Massry SAA, Shokry HG, Hegab MEM (2016) Efficiency of Trichoderma harzianum and some organic acids on the cotton bollworms Earias insulana and Pectinophora gossypiella. J Plant Protection and Pathology 7(2):143–148 Elwan SH, El-Naggar MR, Ammar MS (1977) Characteristics of lipase in the growth filtrate dialysate of Bacillus stearothermophilus grown at 55 °C using a tributyrin-cup assay. Bull Fac Sci Riyadh University 8:105–119 Ibrahim AA, Hussein M, El-Naggar SEM (2016) Isolation and selection of entomopathogenic fungi as biocontrol agent against the greater wax moth, Galleria mellonella L.
(Lepidoptera: Pyralidae). Egypt J Biol Pest Control 26(2):249–253 Jain P, Aggarwal V, Sharma A, Kumar R, Pundir RM (2012) Isolation, production and partial purification of protease from an endophytic Acremonium sp. J Agric Technol 8(6):1979–1989 Little TM, Hills FJ (1975) Statistical methods in agricultural research. U.C.D. Book Store, University of California, Davis, 241 pp Mahfouz SA, Abou El-Ela AA (2011) Biological control of pink bollworm Pectinophora gossypiella (Saunders). Microbial and Biochemical Technology 3(2):30–32 Mansour ES (2004) Effectiveness of Trichogramma evanescens Westwood, bacterial insecticide and their combination on the cotton bollworms in comparison with chemical insecticides. Egypt J Biol Pest Control 14:339–343 Mohd-Salleh MB, Lewis LC (1983) Comparative effects of spore-crystal complexes and thermostable exotoxins of six subspecies of Bacillus thuringiensis against Ostrinia nubilalis (Lepidoptera: Pyralidae). J Invertebr Pathol 41:336–340 Nada MA, Abdel-Azem EM (2005) Effect of two types of entomopathogenic fungi, Paecilomyces violacea and Paecilomyces variotii, on biological aspects of pink bollworm Pectinophora gossypiella (Saunders). Egypt J Appl Sci 20(12):691–698 Oxoid Ltd (1982) The Oxoid manual of culture media, ingredients and other laboratory services, 5th edn. Turnergraphic Ltd, England Rajamanickam C, Kannan R, Selvamathiazhagan N, Suyambulingam AKM, Subbiah SN, Sengottayan SN (2012) Physiological effect of chitinase purified from Bacillus subtilis against the tobacco cutworm Spodoptera litura Fab. Pestic Biochem Physiol 104:65–71 Reda FM, Nada MA, Abdel-ElAzeem EM (2013) Biological activity of some actinomycetal and bacterial isolates on certain aspects of the pink bollworm, Pectinophora gossypiella (Saunders) (Lepidoptera: Gelechiidae). Egypt J Biol Pest Control 23(2):297–303 Rifai MA (1969) A revision of the genus Trichoderma. Mycological Papers 116:56 Robert LH, Bryony CB (2010) Proteases as insecticidal agents. Toxins 2:935–953 Salem MS (2008) Inducing resistance of cotton plants against cotton bollworms. PhD Thesis, Faculty of Agriculture (Moshtohor), Zagazig University, Egypt Sargin S, Gezgin Y, Eltem R, Vardar F (2013) Micropropagule production from Trichoderma harzianum EGE-K38 using solid-state fermentation and a comparative study for drying methods. Turk J Biol 37:139–146 Shorey HH, Hale RL (1965) Mass rearing of the larvae of nine noctuid species on a simple artificial medium. J Econ Entomol 58(3):522–524 Woese CR, Fox GE (1977) Phylogenetic structure of the prokaryotic domain: the primary kingdoms. Proc Natl Acad Sci USA 74(11):5088–5090 Thanks are due to Prof. Dr. Ali Ahmed Elsayed, Plant Protection Research Institute, for his effort and help in carrying out this experimental work. This work was not supported by any funding body but was personally financed. Plant Protection Research Institute, Agriculture Research Center (ARC), 7 Nadi El-Saeid Street, Dokki, Giza, Egypt Eman Mohammed Abd-ElAzeem, Warda Ahmed Zaki El-Medany & Hend Mohammed Sabry The conception and design of the study were done by all authors. The first author (EMA) isolated the entomopathogenic fungi, screened the fungal isolates for their mortality effect, and screened the lipase, protease, and chitinase produced by Acremonium sp. against E. insulana.
The rearing of the spiny bollworm and the detection of the effects of the fungal isolates' spore suspensions and metabolites were carried out by WAZM and EMA. The identification of the most potent fungus was carried out by HMS and EMA. All authors read and approved the final manuscript. Correspondence to Eman Mohammed Abd-ElAzeem. This article does not contain any studies with human participants or animals. The manuscript has not been published in whole or in part elsewhere. The authors declare that they have no competing interests. Abd-ElAzeem, E.M., El-Medany, W.A.Z. & Sabry, H.M. Biological activities of spores and metabolites of some fungal isolates on certain aspects of the spiny bollworms Earias insulana (Boisd.) (Lepidoptera: Noctuidae). Egypt J Biol Pest Control 29, 90 (2019). https://doi.org/10.1186/s41938-019-0192-y Accepted: 26 November 2019 Spiny bollworms Earias insulana Entomopathogenic fungi Acremonium sp. Metarhizium anisopliae Paecilomyces variotii
Nitrogen use efficiency in spring wheat: genotypic variation and grain yield response under sandy soil conditions E. MANSOUR, A. M. A. MERWAD, M. A. T. YASIN, M. I. E. ABDUL-HAMID, E. E. A. EL-SOBKY, H. F. ORABY Journal: The Journal of Agricultural Science / Volume 155 / Issue 9 / November 2017 Published online by Cambridge University Press: 02 November 2017, pp. 1407-1423 Print publication: November 2017 Agricultural practices are likely to lower nitrogen (N) fertilization inputs for economic and ecological limitation reasons. The objective of the current study was to assess genotypic variation in nitrogen use efficiency (NUE) and related parameters of spring wheat (Triticum aestivum L.) as well as the relative grain yield performance under sandy soil conditions. A sub-set of 16 spring wheat genotypes was studied over 2 years at five N levels (0, 70, 140, 210 and 280 kg N/ha). Results indicated significant differences among genotypes and N levels for grain yield and yield components as well as NUE. Genotypes with high NUE exhibited higher plant biomass, grain and straw N concentration and grain yield than those with medium and low NUE. Utilization efficiency (grain-NUtE) was more important than uptake efficiency (total NUpE) in association with grain yield. Nitrogen supply was found to have a substantial effect on genotype; Line 6052 as well as Shandawel 1, Gemmiza 10, Gemmiza 12, Line 6078 and Line 6083 showed higher net assimilation rate, more productive tillers, increased number of spikes per unit area and grains per spike, extensive N concentration in grain and straw, heavier grains, higher biological yield and consequently maximized grain yield. The relative importance of NUE-associated parameters such as nitrogen agronomic efficiency, nitrogen physiological efficiency and apparent nitrogen recovery as potential targets in breeding programmes for increased NUE genotypes is also mentioned. Modifiable diarrhoea risk factors in Egyptian children aged <5 years A. M. MANSOUR, H. EL MOHAMMADY, M. EL SHABRAWI, S. Y. SHABAAN, M. ABOU ZEKRI, M. NASSAR, M. E. SALEM, M. MOSTAFA, M. S. RIDDLE, J. D. KLENA, I. A. ABDEL MESSIH, S. LEVIN, S. Y. N. YOUNG Journal: Epidemiology & Infection / Volume 141 / Issue 12 / December 2013 Published online by Cambridge University Press: 22 February 2013, pp. 2547-2559 By conducting a case-control study in two university hospitals, we explored the association between modifiable risk behaviours and diarrhoea. Children aged <5 years attending outpatient clinics for diarrhoea were matched by age and sex with controls. Data were collected on family demographics, socioeconomic indicators, and risk behaviour practices. Two rectal swabs and a stool specimen were collected from cases and controls. Samples were cultured for bacterial pathogens using standard techniques and tested by ELISA to detect rotavirus and Cryptosporidium spp. Four hundred cases and controls were enrolled between 2007 and 2009.
The strongest independent risk factors for diarrhoea were: presence of another household member with diarrhoea [matched odds ratio (mOR) 4·9, 95% CI 2·8–8·4] in the week preceding the survey, introduction to a new kind of food (mOR 3, 95% CI 1·7–5·4), and the child being cared for outside home (mOR 2·6, 95% CI 1·3–5·2). While these risk factors are not identifiable, in some age groups more easily modifiable risk factors were identified including: having no soap for handwashing (mOR 6·3, 95% CI 1·2–33·9) for children aged 7–12 months, and pacifier use (mOR 1·9, 95% CI 1·0–3·5) in children aged 0–6 months. In total, the findings of this study suggest that community-based interventions to improve practices related to sanitation and hygiene, handwashing and food could be utilized to reduce the burden of diarrhoea in Egyptian children aged <5 years. q-Titchmarsh-Weyl theory: series expansion M. H. Annaby, Z. S. Mansour, I. A. Soliman Journal: Nagoya Mathematical Journal / Volume 205 / March 2012 Published online by Cambridge University Press: 11 January 2016, pp. 67-118 We establish a q-Titchmarsh-Weyl theory for singular q-Sturm-Liouville problems. We define q-limit-point and q-limit circle singularities, and we give sufficient conditions which guarantee that the singular point is in a limit-point case. The resolvent is constructed in terms of Green's function of the problem. We derive the eigenfunction expansion in its series form. A detailed worked example involving Jackson q-Bessel functions is given. This example leads to the completeness of a wide class of q-cylindrical functions. By Mohamed Aboulghar, Ahmed Abou-Setta, Mary E. Abusief, G. David Adamson, R. J. Aitken, Hesham Al-Inany, Baris Ata, Hamdy Azab, Adam Balen, David H. Barad, Pedro N. Barri, C. Blockeel, Giuseppe Botta, Mark Bowman, Chris Brewer, Dominique M. Butawan, Sandra A. Carson, Hai Ying Chen, Anne Clark, Buenaventura Coroleu, S. Das, C. Dechanet, H. Déchaud, Cora de Klerk, Sheryl de Lacey, S. Deutsch-Bringer, P. Devroey, Didier Dewailly, Hakan E. Duran, Walid El Sherbiny, Tarek El-Toukhy, Johannes L. H. Evers, Cynthia Farquhar, Rodney D. Franklin, Juan A. Garcia-Velasco, David K. Gardner, Norbert Gleicher, Gedis Grudzinskas, Roger Hart, B Hédon, Colin M. Howles, Jack Yu Jen Huang, N. P. Johnson, Hey-Joo Kang, Gab Kovacs, Ben Kroon, Anver Kuliev, William H. Kutteh, Nick Macklon, Ragaa Mansour, Lamiya Mohiyiddeen, Lisa J. Moran, David Mortimer, Sharon T. Mortimer, Luciano G. Nardo, Robert J. Norman, Willem Ombelet, Luk Rombauts, Zev Rosenwaks, Francisco J. Ruiz Flores, Anthony J. Rutherford, Gavin Sacks, Denny Sakkas, M. W. Seif, Ayse Seyhan, Caroline Smith, Kate Stern, Elizabeth A. Sullivan, Sesh Kamal Sunkara, Seang Lin Tan, Mohamed Taranissi, Kelton P. Tremellen, Wendy S. Vitek, V. Vloeberghs, Bradley J. Van Voorhis, S. F. van Voorst, Amr Wahba, Yueping A. Wang, Klaus E. Wiemer Edited by Gab Kovacs, Monash University, Victoria Book: How to Improve your ART Success Rates Print publication: 30 June 2011, pp viii-xii Optical properties of Azo Dye (1-Phenylazo-2-Naphthol) thin films M. S. Aziz, H. M. El-Mallah, A. N. Mansour Journal: The European Physical Journal - Applied Physics / Volume 48 / Issue 2 / November 2009 Published online by Cambridge University Press: 17 September 2009, 20401 Thin Films of Azo Dye (1-Phenylazo-2-Naphthol) have been prepared by thermal evaporation technique onto quartz substrates held at about 300 K during the deposition process with different thicknesses range 625–880 nm. 
X-ray diffraction and the differential thermal analysis showed that the Azo Dye sample is crystalline nature and thermal stable in temperature range from room temperature to 100 $^{\circ}$ C. The optical constants (the refractive index n, the absorption index k and the absorption coefficient α) were calculated for Azo Dye (1-Phenylazo-2-Naphthol) thin films by using spectrophotometer measurements of the transmittance and reflectance at normal incidence in the spectral range 400–2200 nm. The obtained values of both n and k were found to be independent of the film thicknesses. The refractive index has anomalous behavior in the wavelength range 400–1000 nm besides a high energy transition at 2.385 eV. The optical parameters (the dispersion energy E d , the oscillation energy E o , the room temperature optical dielectric constant $\varepsilon_{l}$ , the lattice dielectric constant $\varepsilon_{L}$ , the high frequency dielectric constant $\varepsilon_{\infty}$ and the ratio of carrier concentration to the effective mass $N/m^{\ast}$ ) were calculated. The allowed optical transition responsible for optical absorption was found to be direct transition with optical energy gap of 1.5 eV for Azo Dye sample. The band tail obeys Urbach's empirical relation. On the zeros of the second and third Jackson q-Bessel functions and their associated q-Hankel transforms M. H. ANNABY, Z. S. MANSOUR Journal: Mathematical Proceedings of the Cambridge Philosophical Society / Volume 147 / Issue 1 / July 2009 Published online by Cambridge University Press: 01 July 2009, pp. 47-67 We investigate the zeros of q-Bessel functions of the second and third types as well as those of the associated finite q-Hankel transforms. We derive asymptotic relations of the zeros of the q-Bessel functions by comparison with zeros of the theta function. The asymptotics of q-Bessel functions are also given. Zeros of finite q-Hankel transforms of q-summable functions are shown to be real and simple except for a finite number of possible non real zeros. Sufficient conditions are given to guarantee that all zeros are real. We give some applications concerning zeros of combinations of q-Bessel functions. Repair of tympanic membrane perforation using a modified cartilage–perichondrium composite ring graft M H Mansour, M H Askar, O A Albirmawy Journal: The Journal of Laryngology & Otology / Volume 120 / Issue 11 / November 2006 Ring graft is a modified cartilage-perichondrium composite graft (CPCG) with only a peripheral ring shaped cartilage. In this series, tympanic membrane perforations were repaired using (ring graft) during treatment of 18 cases of non-cholesteatomatous chronic suppurative otitis media (CSOM). This study showed that ring graft has the advantages of both CPCG and perichondrial graft but without their disadvantages. Complete closure of the perforations was achieved in all cases without delay in hearing improvement. It is recommended to use the ring graft whenever needed to repair central tympanic membrane perforations even with difficult anterior or total perforations. Anomalous behavior of confined-supercooled water near the bulk water hypothetical 2nd Critical Temperature F. Mansour, R. M. Dimeo, H. Peemoeller Published online by Cambridge University Press: 01 February 2011, P5.5 High resolution inelastic neutron scattering measurements of the molecular dynamics of deeply supercooled water confined to a porous host, MCM-41 are reported. Results obtained near the critical temperature of water are discussed. 
Anomalous behavior near and below the glass transition temperature is also presented and discussed. Results are compared to those from earlier studies on supercooled water. Hearing impairment in association with distal renal tubular acidosis among Saudi children Siraj M. Zakzouk, Samia H. Sobki, Faizeh Mansour, Fatma H. Al Anazy Journal: The Journal of Laryngology & Otology / Volume 109 / Issue 10 / October 1995 A follow-up of seven patients with the autosomal recessive inherited syndrome of distal renal tubular acidosis (RTA) and sensorineural hearing loss is described. Five patients were diagnosed as having primary distal renal tubular acidosis and rickets, four were found to have severe sensorineural hearing loss of over 80 dB: two of which are brothers. Two patients were diagnosed as having secondary distal renal acidosis due to a genetic disorder called osteopetrosis; they are brothers and their audiograms showed a mild conductive hearing loss of an average 35 dB bilaterally. All patients had growth retardation with improvement due to alkaline therapy but their hearing loss was not affected by the medication. The pedigrees of two families with half sibs showed the familial incidence for consanguineous marriage. Consanguinity was found to be positive in five out of the seven patients. The tribal tradition in Saudi Arabia fosters consanguineous marriages for cultural and social reasons and pre-arranged marriages are still seen. The role of transcanine surgery in antrochoanal polyps A. El-Guindy, M. H. Mansour Journal: The Journal of Laryngology & Otology / Volume 108 / Issue 12 / December 1994 Published online by Cambridge University Press: 29 June 2007, pp. 1055-1057 During a period of two years, 24 cases of antrochoanal polyps were diagnosed by clinical examination, nasal endoscopy and computerized tomography. Surgery started with endoscopic transnasal removal of the polyp. Every attempt was made to remove the antral portion of the polyp through the wide ostium. Then transcanine sinuscopy was performed. Remnants of the polyp were detected and removed in five cases. One or more other cysts were found and extirpated in 11 cases. Endoscopic follow-up for 18 months to three years revealed no recurrence. It is recommended that endoscopic middle meatal surgery should be combined with transcanine sinuscopy to ensure complete removal of antrochoanal polyps. Oestrous activity in three subtropical sheep breeds in Upper Egypt and response to long-day light treatment A. M. Aboul-Naga, H. Mansour, M. B. Aboul-Ela, M. T. Mousa, Ferial Hassan, F. El-Hommosi Journal: The Journal of Agricultural Science / Volume 116 / Issue 1 / February 1991 Published online by Cambridge University Press: 27 March 2009, pp. 139-143 Print publication: February 1991 Oestrous activity in local Rahmani and Ossimi sheep and imported subtropical Awassi sheep at different times of the year in Upper Egypt and the effect of continuous exposure to long-day conditions from July till December on this activity was studied in 1986. The three breeds differed substantially in oestrous activity but differences did not seem to be directly related to latitude of origin. Rahmani ewes from the Nile Delta were mostly cyclic all year round; the percentage coming to oestrus each month never fell below 70%. Ossimi ewes originating from mid-Egypt had a very long breeding season; 74% had an anoestrous period of 68·8 days (v. 27% for Rahmani ewes). All Awassi ewes, except one, had an anoestrous period of 96·5 days on average. 
Awassi ewes also showed more response to continuous exposure to long days (14 h) than Ossimi ewes, seen in a shorter reaction interval and a greater decrease in the percentage of ewes coming into oestrus (27 and 90% in control and treated Awassi ewes, respectively). The results indicated that changes in daylength, although small in subtropical regions, may be a major factor controlling seasonal changes in reproductive activity in subtropical sheep breeds, the more seasonal breed being more responsive to changes in daylength. The possibility of selection within these breeds for continuous reproductive activity is also indicated.
Finding the nearest pair of points
Given $n$ points on the plane, where each point $p_i$ is defined by its coordinates $(x_i,y_i)$, it is required to find among them two points such that the distance between them is minimal: $$ \min_{\scriptstyle i, j=0 \ldots n-1,\atop \scriptstyle i \neq j } \rho (p_i, p_j). $$ We take the usual Euclidean distance: $$ \rho (p_i,p_j) = \sqrt{(x_i-x_j)^2 + (y_i-y_j)^2} .$$ The trivial algorithm, iterating over all pairs and calculating the distance for each, works in $O(n^2)$. The algorithm running in time $O(n \log n)$ is described below. This algorithm was proposed by Preparata in 1975. Preparata and Shamos also showed that this algorithm is optimal in the decision tree model. We construct an algorithm according to the general scheme of divide-and-conquer algorithms: the algorithm is designed as a recursive function to which we pass a set of points; this recursive function splits the set in half, calls itself recursively on each half, and then performs some operations to combine the answers. The operation of combining consists of detecting the cases when one point of the optimal solution fell into one half and the other point into the other (in this case, recursive calls from each of the halves cannot detect this pair separately). The main difficulty, as always in the case of divide-and-conquer algorithms, lies in the effective implementation of the merging stage. If a set of $n$ points is passed to the recursive function, then the merge stage should take no more than $O(n)$ time, and then the asymptotics of the whole algorithm $T(n)$ will be found from the equation: $$T(n) = 2T(n/2) + O(n).$$ The solution to this equation, as is known, is $T(n) = O(n \log n).$ So, we proceed to the construction of the algorithm. In order to arrive at an effective implementation of the merge stage later on, we will divide the set of points into two subsets according to their $x$-coordinates: in fact, we draw some vertical line dividing the set of points into two subsets of approximately the same size. It is convenient to make such a partition as follows: we sort the points in the standard way as pairs of numbers, i.e.: $$p_i < p_j \Longleftrightarrow (x_i < x_j) \lor \Big(\left(x_i = x_j\right) \wedge \left(y_i < y_j \right) \Big) $$ Then take the middle point after sorting, $p_m$ $(m = \lfloor n/2 \rfloor)$; all the points before it and $p_m$ itself are assigned to the first half, and all the points after it to the second half: $$A_1 = \{ p_i \ | \ i = 0 \ldots m \}$$ $$A_2 = \{ p_i \ | \ i = m + 1 \ldots n-1 \}.$$ Now, calling recursively on each of the sets $A_1$ and $A_2$, we find the answers $h_1$ and $h_2$ for each of the halves, and take the best of them: $h = \min(h_1, h_2)$. Now we need to perform the merge stage, i.e. we try to find pairs of points whose distance is less than $h$ and such that one point lies in $A_1$ and the other in $A_2$. It is obvious that it is sufficient to consider only those points that are separated from the vertical line by a distance less than $h$, i.e. the set $B$ of the points considered at this stage is equal to: $$B = \{ p_i\ | \ | x_i - x_m\ | < h \}.$$ For each point in the set $B$, we try to find the points that are closer to it than $h$. For example, it is sufficient to consider only those points whose $y$-coordinate differs by no more than $h$. Moreover, it makes no sense to consider those points whose $y$-coordinate is greater than the $y$-coordinate of the current point.
Thus, for each point $p_i$ we define the set of considered points $C(p_i)$ as follows: $$C(p_i) = \{ p_j\ |\ p_j \in B,\ \ y_i - h < y_j \le y_i \}.$$ If we sort the points of the set $B$ by $y$-coordinate, it is very easy to find $C(p_i)$: these are just the several points immediately preceding the point $p_i$. So, in the new notation, the merging stage looks like this: build the set $B$, sort the points in it by $y$-coordinate, then for each point $p_i \in B$ consider all points $p_j \in C(p_i)$, and for each pair $(p_i,p_j)$ calculate the distance and compare it with the current best distance. At first glance, this is still a non-optimal algorithm: it seems that the sizes of the sets $C(p_i)$ will be of order $n$, and the required asymptotics will not work out. However, surprisingly, it can be proved that the size of each of the sets $C(p_i)$ is a quantity $O(1)$, i.e. it does not exceed some small constant regardless of the points themselves. A proof of this fact is given in the next section. Finally, we pay attention to the sorting that the above algorithm contains: first, sorting by pairs $(x, y)$, and second, sorting the elements of the set $B$ by $y$. In fact, both of these sorts inside the recursive function can be eliminated (otherwise we would not reach the $O(n)$ estimate for the merging stage, and the general asymptotics of the algorithm would be $O(n \log^2 n)$). It is easy to get rid of the first sort: it is enough to perform this sort once before starting the recursion, since the elements themselves do not change inside the recursion, so there is no need to sort again. The second sort is a little more difficult to eliminate, and performing it in advance will not work. But, remembering merge sort, which also works on the divide-and-conquer principle, we can simply embed this sort in our recursion. Let the recursion, taking some set of points (ordered, as we remember, by pairs $(x, y)$), return the same set, but sorted by the $y$-coordinate. To do this, simply merge (in $O(n)$) the two results returned by the recursive calls. This will result in a set sorted by $y$-coordinate. Evaluation of the asymptotics To show that the above algorithm actually runs in $O(n \log n)$, we need to prove the following fact: $|C(p_i)| = O(1)$. So, let us consider some point $p_i$; recall that the set $C(p_i)$ is a set of points whose $y$-coordinate lies in the segment $[y_i-h; y_i]$, and, moreover, along the $x$-coordinate, the point $p_i$ itself and all the points of the set $C(p_i)$ lie in a band of width $2h$. In other words, the points we are considering, $p_i$ and $C(p_i)$, lie in a rectangle of size $2h \times h$. Our task is to estimate the maximum number of points that can lie in this rectangle $2h \times h$; thus, we estimate the maximum size of the set $C(p_i)$. At the same time, when evaluating, we must not forget that there may be repeated points. Remember that $h$ was obtained from the results of two recursive calls, on the sets $A_1$ and $A_2$, where $A_1$ contains points to the left of the partition line and partially on it, and $A_2$ contains the remaining points of the partition line and the points to the right of it. For any pair of points from $A_1$, as well as from $A_2$, the distance cannot be less than $h$; otherwise it would mean incorrect operation of the recursive function. To estimate the maximum number of points in the rectangle $2h \times h$ we divide it into two squares $h \times h$; the first square includes all points of $C(p_i) \cap A_1$, and the second contains all the others, i.e.
$C(p_i) \cap A_2$. It follows from the above considerations that in each of these squares the distance between any two points is at least $h$. We show that there are at most four points in each square. For example, this can be done as follows: divide the square into $4$ sub-squares with sides $h/2$. Then there can be no more than one point in each of these sub-squares (since even the diagonal is equal to $h / \sqrt{2}$, which is less than $h$). Therefore, there can be no more than $4$ points in the whole square. So, we have proved that in a rectangle $2h \times h$ there cannot be more than $4 \cdot 2 = 8$ points, and, therefore, the size of the set $C(p_i)$ cannot exceed $7$, as required. We introduce a data structure to store a point (its coordinates and a number) and the comparison operators required for the two types of sorting:

struct pt {
    int x, y, id;
};

struct cmp_x {
    bool operator()(const pt & a, const pt & b) const {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    }
};

struct cmp_y {
    bool operator()(const pt & a, const pt & b) const {
        return a.y < b.y;
    }
};

vector<pt> a;

For a convenient implementation of the recursion, we introduce an auxiliary function upd_ans(), which will calculate the distance between two points and check whether it is better than the current answer:

double mindist;
pair<int, int> best_pair;

void upd_ans(const pt & a, const pt & b) {
    double dist = sqrt((a.x - b.x)*(a.x - b.x) + (a.y - b.y)*(a.y - b.y));
    if (dist < mindist) {
        mindist = dist;
        best_pair = {a.id, b.id};
    }
}

Finally, the implementation of the recursion itself. It is assumed that before calling it, the array $a[]$ is already sorted by $x$-coordinate. In the recursion we pass just two pointers $l, r$, which indicate that it should look for the answer for $a[l \ldots r)$. If the distance between $r$ and $l$ is too small, the recursion must be stopped, a trivial algorithm performed to find the nearest pair, and the subarray then sorted by $y$-coordinate. To merge the two sets of points received from the recursive calls into one (ordered by $y$-coordinate), we use the standard STL merge() function and create an auxiliary buffer $t[]$ (one for all recursive calls). (Using inplace_merge() is impractical because it generally does not work in linear time.) Finally, the set $B$ is stored in the same array $t$.

vector<pt> t;

void rec(int l, int r) {
    if (r - l <= 3) {
        // base case: brute force over all pairs, then sort this subarray by y
        for (int i = l; i < r; ++i) {
            for (int j = i + 1; j < r; ++j) {
                upd_ans(a[i], a[j]);
            }
        }
        sort(a.begin() + l, a.begin() + r, cmp_y());
        return;
    }

    int m = (l + r) >> 1;
    int midx = a[m].x;
    rec(l, m);
    rec(m, r);
    // merge the two halves (each already sorted by y) through the buffer t
    merge(a.begin() + l, a.begin() + m, a.begin() + m, a.begin() + r, t.begin(), cmp_y());
    copy(t.begin(), t.begin() + r - l, a.begin() + l);

    // process the strip of points with |x - midx| < mindist in order of y
    int tsz = 0;
    for (int i = l; i < r; ++i) {
        if (abs(a[i].x - midx) < mindist) {
            for (int j = tsz - 1; j >= 0 && a[i].y - t[j].y < mindist; --j)
                upd_ans(a[i], t[j]);
            t[tsz++] = a[i];
        }
    }
}

By the way, if all the coordinates are integer, then during the recursion you can avoid moving to fractional values and instead store in $mindist$ the square of the minimum distance. In the main program, the recursion should be called as follows:

t.resize(n);
sort(a.begin(), a.end(), cmp_x());
mindist = 1E20;
rec(0, n);

Generalization: finding a triangle with minimal perimeter The algorithm described above is interestingly generalized to this problem: among a given set of points, choose three different points so that the sum of pairwise distances between them is the smallest.
In fact, to solve this problem, the algorithm remains the same: we divide the plane into two halves by a vertical line, call the solution recursively on both halves, take the minimum $minper$ of the found perimeters, build a strip of thickness $minper / 2$ around the dividing line, and iterate through all triangles in the strip that can improve the answer; a sketch of such a modification is given after the problem list below. (Note that a triangle with perimeter $\le minper$ has its longest side $\le minper / 2$.)
UVA 10245 "The Closest Pair Problem" [difficulty: low]
SPOJ #8725 CLOPPAIR "Closest Point Pair" [difficulty: low]
CODEFORCES Team Olympiad Saratov - 2011 "Minimum amount" [difficulty: medium]
Google CodeJam 2009 Final "Min Perimeter" [difficulty: medium]
SPOJ #7029 CLOSEST "Closest Triple" [difficulty: medium]
TIMUS 1514 National Park [difficulty: medium]
(c) 2014-2021 translation by http://github.com/e-maxx-eng
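To make the minimal-perimeter modification concrete, the following is a minimal sketch (not code from the article) of how the rec() function above could be adapted to the triangle problem. The names rec3, upd_per, minper and best_triple, the base-case threshold, and the lambda comparators are illustrative assumptions; the merge-by-y bookkeeping is the same as in rec().

#include <bits/stdc++.h>
using namespace std;

struct pt { int x, y, id; };

vector<pt> a, t;            // points (pre-sorted by x) and auxiliary buffer
double minper;              // best (smallest) perimeter found so far
array<int, 3> best_triple;

double dst(const pt& p, const pt& q) {
    return sqrt(double(p.x - q.x) * (p.x - q.x) + double(p.y - q.y) * (p.y - q.y));
}

void upd_per(const pt& p, const pt& q, const pt& r) {
    double per = dst(p, q) + dst(q, r) + dst(p, r);
    if (per < minper) {
        minper = per;
        best_triple = {p.id, q.id, r.id};
    }
}

void rec3(int l, int r) {
    if (r - l <= 5) {       // brute force on small subproblems, then sort by y
        for (int i = l; i < r; ++i)
            for (int j = i + 1; j < r; ++j)
                for (int k = j + 1; k < r; ++k)
                    upd_per(a[i], a[j], a[k]);
        sort(a.begin() + l, a.begin() + r,
             [](const pt& u, const pt& v) { return u.y < v.y; });
        return;
    }
    int m = (l + r) >> 1;
    int midx = a[m].x;
    rec3(l, m);
    rec3(m, r);
    // merge the two halves by y-coordinate, exactly as in rec()
    merge(a.begin() + l, a.begin() + m, a.begin() + m, a.begin() + r, t.begin(),
          [](const pt& u, const pt& v) { return u.y < v.y; });
    copy(t.begin(), t.begin() + (r - l), a.begin() + l);
    // strip: a triangle with perimeter <= minper has longest side <= minper / 2
    int tsz = 0;
    for (int i = l; i < r; ++i) {
        if (abs(a[i].x - midx) < minper / 2) {
            for (int j = tsz - 1; j >= 0 && a[i].y - t[j].y < minper / 2; --j)
                for (int k = j - 1; k >= 0 && a[i].y - t[k].y < minper / 2; --k)
                    upd_per(a[i], t[j], t[k]);
            t[tsz++] = a[i];
        }
    }
}

// usage mirrors the main program above:
//   t.resize(n); sort a by x; minper = 1E20; rec3(0, n);

As in the closest-pair case, only O(1) strip points fall inside each y-window, so the overall running time remains O(n log n) under the same packing argument.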
Generating a pronunciation dictionary for European Portuguese using a joint-sequence model with embedded stress assignment Arlindo Veiga1,2, Sara Candeias1 & Fernando Perdigão1,2 Journal of the Brazilian Computer Society volume 19, pages 127–134 (2013) This paper addresses the problem of grapheme to phoneme conversion to create a pronunciation dictionary from a vocabulary of the most frequent words in European Portuguese. A system based on a mixed approach, founded on a stochastic model with embedded rules for stressed vowel assignment, is described. The implemented model can generate pronunciations for unrestricted words; however, a dictionary with the 40k most frequent words was constructed and corrected interactively. The dictionary includes homographs with multiple pronunciations. The vocabulary was defined using the CETEMPúblico corpus. The model and dictionary are publicly available. The grapheme to phone(me) conversion (G2P) (see Footnote 1), also called letter-to-sound conversion, maps a written text into a string of symbols which represent the speech sounds exactly and unequivocally. Several frameworks have been proposed to tackle the G2P conversion, among which linguistically rule-based modules [18] and statistical approaches [10] can be mentioned. Mainly in languages in which the orthography is roughly phonologically based, such as Portuguese and other Romance languages, linguistic rule-based systems should provide a good coverage of the association between letters and sounds [6, 25, 29]. However, probably no natural human language satisfies this assumption exactly, because exceptions to the G2P rules can be found in perhaps every language. The most common irregularity covers situations in which the association between grapheme and phoneme is not quite one-to-one but can be, to some extent, ambiguous and greatly dependent on the neighboring contexts. To deal with this problem, rule-based systems have been adopted along with a list of exceptions to cover the situations not handled by the rules. But this solution makes the development and the maintenance of the system very complex, hard, and tiresome. Moreover, rule-based G2P is more likely to make mistakes for new words. In contrast to the rule-based systems outlined above, a number of authors have addressed the G2P conversion from a stochastic perspective. This approach to G2P conversion is based on the idea that, using pronunciation examples, it could be possible to predict the pronunciation of unseen words by analogy. This method was already implemented by [8] and [2], among others, for Portuguese. In this paper, we use a new statistical approach for which outstanding results have been reported, named the joint-sequence model [5]. In this model, graphemes and phonemes are combined into a single state, giving rise to "graphonemes". Although the joint-sequence model has been shown to be a powerful tool, we also show in this paper that, for the case of the Portuguese language, the determination of the stressed vowel leads to a substantial improvement in the system performance, as was also reported in [8]. Thus, we included a linguistically rule-based pre-processing stage, for stress assignment, which marks and disambiguates most of the pronunciations. Common errors in the conversion procedure occur with the heterophonic homographs. Some theoretical frameworks with experimental results were recently proposed, e.g. [32, 33] and [34] for European Portuguese (EP); and [35–38] and [39] for Brazilian Portuguese (BP).
The study by Braga and Marques [34] proposed algorithms to deal with this problem of the homograph ambiguity in EP, using a linguistic rule-based methodology. Working with a part-of-speech (PoS) parser to disambiguate homographs which belong to different PoS, and a semantic analyser to disambiguate homographs that belong to the same PoS, the authors extended the approach proposed in [35, 36]. In fact, PoS categorization is insufficient to disambiguate entries in a pronunciation dictionary. Our solution consists in including PoS as well as pronunciation information for each dictionary entry. The vocabulary used to generatethe pronunciation dictionary is in its previous form of the current "Acordo Ortográfico" (AO).Footnote 2 However, we think that this mixed-based G2P can also achieve good performance for EP with the AO. The inherent flexibility in dealing with the EP could be extended to other Romanic languages, which make this an advantageous approach. The remainder of the paper is organized as follows. In Sect. 2, the joint-sequence model is briefly discussed. Section 3 presents how the vocabulary and dictionary were generated while Sect. 4 describes the linguistic model. In Sect. 5, experimental results are presented and the methodology used to deal with the heterophonic homographs is explained in Sect. 6. Then, the main conclusions are summarized and future work directions are foreseen. Joint-sequence model Given a sequence of \(N\) graphemes defined by \(G=G_1^N =\{g_1 ,g_2 ,...,g_N \}\), the goal is to find a sequence of \(M\) phonemes, \(F=F_1^M =\{f_1 ,f_2 ,...,f_M \}\), that best describes the phonetic transcriptionof the original sentence. The statistical approach to this problem corresponds to the determination of the optimal sequence of phonemes, \(F^*\), that maximizes the conditional probability of phonemes, \(F\), given a sequence of graphemes, \(G\): $$\begin{aligned} \text{ F}^*=\arg \mathop {\max }\limits _{ F} {P (F| G).} \end{aligned}$$ It is difficult to determine \(F^{*}\) directly by calculating \(P({F| G})\) for all possible sequences \(F\). However, using the Bayes theorem, we can rewrite the problem as: $$\begin{aligned} \text{ F}^*=\arg \mathop {\max }\limits _F P(F|G)=\arg \,\mathop {\max }\limits _F \{P(G|F).P(F) / P(G)\}.\nonumber \\ \end{aligned}$$ Since \(P(G)\) is common to all sequences \(F\), the problem can be simplified in the following way: $$\begin{aligned} {F}^*=\arg \mathop {\max }\limits _F P(G|F).P(F). \end{aligned}$$ Using a phonological dictionary, previously created, it is possible to estimate \(P({G|F})\) and the a priori probability, \(P(F)\), for all sequences \(F\) and \(G\) found in this dictionary. The Markov-based approaches estimate a model for each phoneme and use n-gram models to compute\(P(F)\). These approaches model the dependency between graphemes and phonemes and the dependency between phonemes, but do not model dependencies between graphemes [12, 17, 28]. Due to these constraints, other statistical approaches emerged proposing joint probability models \(P({F,G})\) to determine the optimal sequence of phonemes [4, 14], directly using the expression of the joint probability in (1) in place of the conditional probability. In this approach, all the dependencies present in the dictionary were modeled, resulting in improved performances than those obtained by the other models. Alignment between graphemes and phonemes Some graphemes have a univocal correspondence with the phonemes. 
However, for other graphemes the correspondence to phonemes depends on several factors, such as the grapheme context and the part-of-speech. There are also cases where several graphemes may lead to a single phoneme, and where a single grapheme can lead to several phonemes. All statistical approaches face this problem, being necessary, during the training process, to segment and align the two sequences (a phoneme sequence and the corresponding grapheme sequence) with an equal number of segments. The solution is not always trivial or unique and depends on how the alignment algorithms associate graphemes to phonemes of a given word. Alignment can be classified as follows [16]: "one-to-one" Each grapheme relates with only one phoneme (segments with one symbol only). A null symbol ('_') is used to deal with the cases in which a grapheme can originate more than one phoneme (the insertion of phonemes), or the cases where more than one grapheme originates only one phoneme (the deletion of phonemes). This alignment is easy to implement using the Levenshtein algorithm [22]. In the literature, these algorithms are called alignment "01-01" if insertions and deletions of phonemes are allowed, or "1-01" if only deletion of phonemes is allowed. This last case corresponds to the alignment used in this work. "many-to-many" The segments are composed of various symbols, which allow the association of several graphemes to several phonemes. This alignment is more generic and can be used without any prior knowledge of mapping between graphemes and phonemes. It handles insertions and deletions of phonemes without using any special symbol. On the other hand, the resulting model is more difficult to estimate and its performance is generally lower than the model with alignment "one-to-one". These alignments are also known as "n-to-m". Statistical model After the alignment, the sequences of graphemes and phonemes have the same number of segments. So, a new entity, born from the association of a segment of graphemes and phonemes can be defined, and is called "graphone(me)" [4]. A sequence of \(K\)graphonemes is annotated as \(Q(F,G){\,=} \{q_1,q_2,...,q_k \}\). Given a sequence of \(K\)graphonemes, \(Q({F,G})\), rather than assuming independence between symbols, the probability of the joint-sequence, \(P(Q(F,G))\), can be estimated using the so-called "n-grams" [5] (sequences limited to \(n\) symbols). Model estimation The n-gram models are used to estimate the probability of symbols knowing the previous \(n-1\) symbols (history). The estimation of the probability of an n-gram is based on the number of its occurrence. This probability is easy to compute, but there is a problem in assigning a zero probability to the n-grams not seen or with limited number of training examples. To overcome this limitation, it is necessary to model unseen examples (using a discount) or uncommon examples (using smoothing). Thus, a small probability mass must be reserved from the most frequent n-grams to the absent or uncommon n-grams. There are several proposed algorithms to solve this problem of probability mass redistribution, such as Good-Turing [15], Witten-Bell [31], Kneser-Ney [20], Ney's absolute discount [23] and Katz's smoothing [19]. In this work, we have adopted the algorithm implemented by [13], which uses a modified version of Kneser-Ney algorithm [9]. Pronunciation dictionary In this work, we intend to create a pronunciation dictionary from a given vocabulary. 
The vocabulary derives from the CETEMPúblico corpus [26], that corresponds to a collection of newspaper extracts published from 1991 to 1998, annotated in terms of sentences and containing 180 million words in European Portuguese. The process of generating the vocabulary starts by taking all the strings annotated as words, which obey simultaneously to the following criteria: (1) start with a letter ( a–z, A–Z, á–ú, Á–Ú); (2) do not contain digits; (3) are not all upper case (e.g. acronyms); (4) do not have the character '.' (e.g. URLs); (5) end with a letter (e.g. not A4, UTF-8); (6) the corresponding lemmas do not contain '\(=\)' (e.g. compound nouns). From the resulting list, we took the sub-list of words that occur more than 70 times in the corpus, totaling about 50k different words. Foreign words were then removed, using an automatic criteria followed by manual verification. This process results on a vocabulary of 41,586 words. The transcription of the vocabulary words is a result of an iterative procedure. First, a statistical model was estimated, as described in 2.2, using the SpeechDat pronunciation dictionary [27]. This dictionary contains about 15k entries, from which foreign words were deleted. Some SAMPA transcriptions [30] were substituted according to the following directions: (1) we did not use the velar /l\(\sim \)/ and the semivowels /j/ and /w/; and (2) some standardization in the pronunciations was done, such as considering /6i/ as the pronunciation of all \(<\text{ ei}>\) grapheme sequences (e.g. \(<\)l eite\(>\) /l6it@/ and \(<\)alh eia\(>\) /6L\(\underline{{\mathbf{6i }}}\)6/). The result of applying the statistical model to CETEMPúblico vocabulary was fairly accurate, although with some significant flaws. Then, we followed a long procedure of manual verification and correction of the transcriptions. The next step was to compare the transcriptions with other ones, generated by a commercial speech synthesizer. This comparison allowed us to rely on our results since the majority of the transcriptions agreed. All different transcriptions were analyzed one by one and we found that the transcriptions from our dictionary were the right ones most of the times. This has led to the phonological transcription dictionary referred to as "dic_CETEMP_40k". With the "dic_CETEMP_40k", a new statistical model was built. The test of this model on the training dictionary, allowed us to correct some remaining errors as well as to standardize and regularize some transcription procedures. Throughout the development of this work, the dictionary had been revised and corrected. Although it may still contain some errors, we are confident on its accuracy. We think that this dictionary could be an interesting resource for studies about phonetics and phonology of Portuguese. Graphoneme alignment An important step for establishing the statistical model is the alignment between graphemes and phonemes in the form "1-01" (one grapheme leads to zero or one phoneme; see Sect. 2.1).The option"1-01" was chosen from the beginning, because we had identified only six cases where a grapheme could originate more than one phoneme. Some cases had the insertion of a yod in some words beginning with \(<\)ex-\(>\); others had the cases of non-common pronunciations such as \(<\)põem\(>\rightarrow \)/po\(\sim \)i\(\sim \)6\(\sim \)i\(\sim \)/ and \(<\)têm\(>\rightarrow \)/t 6\(\sim \)i\(\sim \)6\(\sim \)i\(\sim \)/. Defining symbols corresponding to more than one phoneme solved this problem of phoneme insertion. 
The problem of the phoneme deletions still remains, because there are always graphemes that do not originate any phoneme. The alignment between graphemes and phonemes was then obtained using the known edit distance or Levenshtein algorithm [22]. This required defining a distance between each phoneme and grapheme. This distance or cost of association was defined using the log probability of this association, which was estimated from an aligned dictionary. Phonetic–phonological restrictions Since the EP is a language with much phonological regularity, we added to the G2P module some linguistic restrictions, which were pertinent to convert graphemes into phonemes. Before any regard on the linguistic rules, an aspect concerning the phonetic/phonological binomial must be clarified. While phonetics gives us the physical and articulatory properties of the sound pronounced (it means the surface structure), phonology studies the sound that has a given role in the pronunciation (the underlying structure). However, any methodological perspective concerning the speech transcription links these two linguistic fields since it deals with the inter-relationship between the units and its distinctive character (phonemes) and the physical reality of those units (phones and allophones) [11]. The studies on the G2P often alternate between the term phone [8, 24] and the term phoneme [2], without any clarification on the perspective followed. We justify our option to adopt the term phoneme mainly, because the procedure to convert the letter into the sound brings us information that derives from the structure of the language (such as both left and right context which implies the choice of a single unit excluding all other units available in the language). The phoneme that corresponds to the grapheme is well accepted as a class to which may group all allophonic realizations able in EP (which could include all the multi pronunciations). We also considered that the phoneme conversion corresponds to the EP-standard. The phonological neutralization of oppositions is not described in this study and phonemes do not represent any archiphonemes. Algorithms have been constructed based on practical linguistic rules, such as stress marking of the vowel (the syllable nuclei) of any single word and by identifying short contexts in which the correspondence between grapheme and phoneme has a good stability. Rules for stress assignment Following the theoretical assumptions discussed in [21], we adopted to mark all vowels, which are stressed (the syllable nuclei) within a word. The importance of the stressed vowel \((V_\mathrm{stressed})\) has been recognized in previous G2P works, such as in [8]. Since the n-grams context is short and cannot, most of the times, retain information about the syllable structure, marking the \(V_\mathrm{stressed}\) improves the statistical model by expressing graphoneme classes unequivocally. As in [1], our proposal considered to mark the \(V_\mathrm{stressed}\) (with the symbol ' " ') and did not require the identification of the syllabic unit. However, the process of identifying the \(V_\mathrm{stressed}\) that is described in this study was achieved in a very simple way. In the following Table 1, a set of rules for stressing vowels is presented with examples. 
All contexts were considered, including those without a stressed vowel, such as the prepositions \(<\)com\(>\), \(<\)de\(>\), \(<\)em\(>\), \(<\)sem\(>\), \(<\)sob\(>\), \(<\)do(s)\(>\), \(<\)no(s)\(>\); the personal pronouns \(<\)me\(>\), \(<\)te\(>\), \(<\)se\(>\), \(<\)nos\(>\), \(<\)vos\(>\), \(<\)lhe(s)\(>\), \(<\)o(s)\(>\),\(<\)a(s)\(>\), \(<\)lo(s)\(>\), \(<\)no(s)\(>\), \(<\)vo(s)\(>\), \(<\)mo(s)\(>\), \(<\)to(s)\(>\), \(<\)lho(s)\(>\); the relative pronoun \(<\)que\(>\); and the conjunctions \(<\)e\(>\), \(<\)nem\(>\), \(<\)que\(>\), \(<\)se\(>\), which are often added to a stressed nuclei within the prosodic unit. Table 1 Rules for stress assignment of the vowels (V) A problem arises with words, which are morphologically derived, such as the adverbs ending in \(<\)mente\(>\), especially when the adjectival form, from which they derive, has a stress mark (e.g.\(<\)rápido\(> \rightarrow <\)rapidamente\(>\);\(<\)dócil\(> \rightarrow <\)docilmente\(>)\). The solution adopted was the following: we implemented an algorithm that divides the word into two parts, \(<\)ROOT\(>\) and \(<\)mente\(>\). The \(<\)ROOT\(>\) part undertakes a specific module, which compares it with a list of graphematic patterns which have the \(V_\mathrm{stressed}\) identified. This method solved all the cases present in the dictionary of 40k words. This pre-processing module attributes a special symbol to all stressed vowels generating a univocal graphoneme. Model results All experiments were based on the pronunciation dictionary of 41,586 Portuguese words as described in Sect. 3.1. There are two cases, corresponding to the dictionary with and without stress marking. To train and test the statistical model, each one of these two dictionaries was partitioned into fivefold for a cross-validation procedure. The initial dictionary is divided into fivefold, each one with 8,317 (20 %) randomly chosen words. The words are mutually exclusive in each of the five folds. Each fold is used to perform a training and a testing run. Final results were obtained by evaluating the average of the five partial results. The performance of the G2P conversion system was expressed in two average error rates (over the fivefold): average error rate of phonemes (PER) and average error rate of words (WER). The following figures summarize the results obtained using n-grams with \(n\) between 2 and 8. As it can be seen in Fig. 1, the marking of the stressed vowel contributed to a significant improvement in the system performance. Note that, on the contrary to what we would expect, the use of n-grams with large contexts (\(n\) greater than 5) did not improve the system. The best results were achieved with 5-grams, attaining 0.32 % of PER and 2.44 % of WER using stress marking. In fact, we observe an increase in the error rates with contexts larger than 5-grams. This can be explained by the lack of samples to estimate properly the n-grams with large contexts. The optimal length of n-grams was 5 in this case, but it depends on the size of the training dictionary. For example, the optimal context for the SpeechDat pronunciation vocabulary was \(n=4\). Word and phoneme error rates (WER and PER) for the two models. We cannot compare directly our results with other systems' for Portuguese, since the data or the systems are not publicly available. However, the results presented here are the best reported in similar works. 
We cannot compare our results directly with those of other systems for Portuguese, since the data or the systems are not publicly available. However, the results presented here are the best reported in similar works. For instance, in [7] a PER of 99.11 % is reported for 1,000 sentences of CETEMPúblico (8–12 words per sentence), but the total number of words is not given. In [8], a WER of 3.94 % and a PER of 0.59 % were reported with 7-grams and stress assignment; that work already noted a significant performance improvement from stress assignment, and its database contains more than 200k automatically transcribed words. In [2], a performance of about 89 % is reported. Although these results cannot be compared directly with ours, we consider that the joint-sequence model has achieved very good results. In fact, by inspecting the test errors, we observed that most of them resulted from uncommon grapheme patterns or from compound words without graphic stress marks. The most frequent errors resulted from the ambiguity of the pronunciation of stressed \(<\)e\(>\) and \(<\)o\(>\), which can be pronounced as /E/ vs. /e/ and /O/ vs. /o/ without any systematic rule. Other errors are due to the multiple pronunciations of some homographic words. Although this kind of error is not the most frequent in the results presented here, heterophonic homographs are very important to consider. To solve this problem of multiple pronunciations, we changed our G2P system to include additional information appended to each pronunciation in the dictionary. Heterophonic homographs Two words with the same spelling but different pronunciations are called heterophonic homographs. They can belong to different PoS, as in \(<\)dobro\(>\), pronounced as /dobru/ 'double' (noun) or as /dObru/ 'fold' (verb, 1st person present indicative); \(<\)poça\(>\), pronounced as /pOs6/ 'puddle' (noun) or as /pos6/ 'damn!' (interjection); and \(<\)esmero\(>\), pronounced as /@Smeru/ 'care' (noun) or as /@SmEru/ 'I perfect' (verb, 1st person present indicative). Heterophonic homographs can also share the same PoS, as in \(<\)aposto\(>\), pronounced as /6poStu/ 'appended' or as /6pOStu/ 'I bet' (both verbs); \(<\)travesso\(>\), pronounced as /tr6vesu/ 'naughty' or as /tr6vEsu/ 'transverse' (both adjectives); and \(<\)bola\(>\), pronounced as /bol6/ 'meat pie' or as /bOl6/ 'ball' (both nouns). To decide which pronunciation the converter should return for a given heterophonic homograph, we integrated into the G2P system a list of 591 homographs containing 1,182 different pronunciations (see footnote 3). The homographs were taken from several sources, namely CETEMPúblico, the Orthographical Vocabulary of Portuguese (footnote 4) and Portuguese dictionaries available online (footnote 5). Each homograph has both a PoS category and a pronunciation form associated with it. We focused on heterophonic homographs containing the vowels \(<\)e\(>\) and \(<\)o\(>\), since these can be pronounced as /e/–/E/ or /o/–/O/, respectively, regardless of the phonological context. The most frequent cases of heterophonic homographs show the alternative pronunciation in the stressed vowel; however, some pairs with the alternating vowel in a non-stressed position were found in the corpora, such as \(<\)pregar\(>\) /pr@gar/ 'to nail' (verb) vs. /prEgar/ 'to preach' (verb), or \(<\)pegada\(>\) /p@gad6/ 'glued' (verb, adjective) vs. /pEgad6/ 'footprint' (noun).
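The disambiguation strategy described in the next paragraphs, an exception list consulted before the statistical model and keyed by the word plus, where needed, a PoS tag and/or an explicit vowel-quality mark, can be sketched as follows. The class and method names and the single example entry are illustrative placeholders; only the lookup-then-fallback logic reflects what is described in the text.

```python
class G2PConverter:
    """Exception-list lookup (hash table) backed by a statistical G2P model.
    Words found in the homograph/exception dictionary are returned directly;
    all other words are transcribed by the statistical model."""

    def __init__(self, statistical_model, exception_list):
        self.model = statistical_model
        # Keys: (word, tag), where tag is a PoS label and/or an indication of
        # the alternative vowel sound, or None for the default pronunciation.
        self.exceptions = exception_list

    def transcribe(self, word, tag=None):
        if (word, tag) in self.exceptions:
            return self.exceptions[(word, tag)]
        if (word, None) in self.exceptions:      # default pronunciation
            return self.exceptions[(word, None)]
        return self.model.transcribe(word)       # fall back to the n-gram model

# Hypothetical entries based on the <bola> example discussed in the text.
exceptions = {
    ("bola", "noun+/O/"): ["b", "O", "l", "6"],   # 'ball'
    ("bola", "noun+/o/"): ["b", "o", "l", "6"],   # 'meat pie'
    ("bola", None):       ["b", "O", "l", "6"],   # default if no tag is given
}
```

In the actual system the user supplies the PoS and/or the vowel indication; when neither is given, a default pronunciation is returned, as described below.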
Although the PoS category is enough to determine the pronunciation of most homographs, there are cases of different pronunciations for the same PoS, as already observed in [34]. In our dictionary, the pronunciation remains ambiguous for 228 homographs with the same PoS. For this reason, we associated with each dictionary entry not only the PoS but also an indication of the alternative vowel sound. In terms of implementing a practical G2P system, an off-line dictionary can be built and incorporated into it. In fact, our final system includes the developed dictionary as an "exception list" implemented as a hash table. Only words not included in the dictionary are converted by the statistical model. This gives the G2P system very low latency, since the vocabulary contains the most frequent words. It is the user's responsibility to indicate the desired pronunciation via the PoS and/or the alternative vowel sound; otherwise a default pronunciation is returned. Conclusions and future work The generation of a pronunciation dictionary for European Portuguese is described in this work. The technique used for grapheme-to-phoneme conversion is based on a stochastic model, the joint-sequence model, which uses the concept of graphonemes and in which rules for stressed-vowel assignment were embedded. The vocabulary includes the most frequent words occurring in Portuguese, as found in the CETEMPúblico corpus. A list of about 600 homographic words was also included to disambiguate the cases of multiple pronunciations occurring in Portuguese. The results presented here are the best reported in similar works, although they are not directly comparable due to the use of different databases. The G2P system is freely available on the website http://lsi.co.it.pt/spl/ in the "resource" section, which contains the models, dictionaries and the G2P module. A study on the phonological behavior of foreign words is ongoing. Morphological information, such as masculine/feminine, singular/plural and verb inflection, can also be included in future developments. We also intend to extend the dictionary to other varieties of Portuguese. Phone(me) signifies either phone or phoneme; since studies on G2P often alternate between the terms phone and phoneme (as discussed in more detail in Sect. 4), we propose this mixed term here simply to highlight the problem. The "Acordo Ortográfico" or "AO" (Portuguese Language Orthographic Agreement) is an international treaty signed in 1990 with the purpose of creating a unified orthography for the Portuguese language, to be used by all countries that have Portuguese as their official language (see also http://www.portaldalinguaportuguesa.org/acordo.php). See http://lsi.co.it.pt/spl/resources/dic_homografas_heterofonas.txt. http://www.portaldalinguaportuguesa.org. http://www.infopedia.pt/; http://www.priberam.pt/dlpo/. Andrade E, Viana MC (1985) Corso I—Um Conversor de Texto Ortográfico em Código Fonético para o Português. Technical Report, CLUL-INIC, Lisboa Barros MJ, Weiss C (2006) Maximum entropy motivated grapheme-to-phoneme, stress and syllable boundary prediction for Portuguese text-to-speech. IV Jornadas en Tecnologías del Habla, Zaragoza Eckhard B (2000) The parsing system "Palavras": automatic grammatical analysis of Portuguese in a constraint grammar framework. Dr.phil. thesis, Aarhus University Press, Aarhus Bisani M, Ney H (2002) Investigations on joint-multigram models for grapheme-to-phoneme conversion.
In: Proceedings of the 7th international conference on spoken language processing (ICSLP'02), Denver, USA, pp 105–108 Bisani M, Ney H (2008) Joint-sequence models for grapheme-to-phoneme conversion. Speech Commun 50(5):434–451 Braga D, Coelho L (2006) A rule-based grapheme-to-phone converter for TTS systems in European Portuguese. VI International Telecommunications Symposium, Fortaleza Braga D (2008) Algoritmos de Processamento da Linguagem Natural para Sistemas de Conversão Texto-Fala em Português. PhD thesis, Universidade da Coruña Caseiro D, Trancoso I, Oliveira L, Viana C (2002) Grapheme-to-phone using finite-state transducers. In: Proceedings of the IEEE 2002 workshop on speech synthesis, California, USA, pp 215–218 Chen S, Goodman J (1998) An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Center for Research in Computing Technology, Harvard University Chotimongkol A, Black A (2000) Statistically trained orthographic to sound models for Thai. In: Proceedings of ICSLP, vol 2. Beijing, China, pp 551–554 Crystal D (2002) A dictionary of linguistics and phonetics, 5th edn. Blackwell, Oxford Demberg V (2006) Letter-to-phoneme conversion for a German text-to-speech system. Stuttgart University; published as a book by Verlag Dr. Müller (VDM), ISBN 978-3-8364-6428-4 Demberg V, Schmid H, Möhler G (2007) Phonological constraints and morphological preprocessing for grapheme-to-phoneme conversion. In: Proceedings of the 45th annual meeting of the association for computational linguistics (ACL-07), Prague, Czech Republic, pp 96–103 Galescu L, Allen J (2001) Bi-directional conversion between graphemes and phonemes using a joint N-gram model. In: Proceedings of the 4th ISCA workshop on speech synthesis, Perthshire, Scotland Good I (1953) The population frequencies of species and the estimation of population parameters. Biometrika 40(3,4):237–264 Jiampojamarn S, Kondrak G, Sherif T (2007) Applying many-to-many alignments and hidden Markov models to letter-to-phoneme conversion. In: HLT-NAACL, Rochester, New York, pp 372–379 Jiampojamarn S, Kondrak G (2009) Online discriminative training for grapheme-to-phoneme conversion. In: Proceedings of INTERSPEECH, Brighton, UK, pp 1303–1306 Kaplan RM, Kay M (1994) Regular models of phonological rule systems. Computational Linguistics 20(3):331–378 Katz S (1987) Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Trans Acoust Speech Signal Process 35(3):400–401 Kneser R, Ney H (1995) Improved backing-off for M-gram language modeling. In: Proceedings of ICASSP, vol 1, pp 181–184 Mateus MH, d'Andrade E (2000) The phonology of Portuguese. Cambridge University Press, USA Navarro G (2001) A guided tour to approximate string matching. ACM Comput Surveys 33(1):31–88 Ney H, Essen U, Kneser R (1994) On structuring probabilistic dependences in stochastic language modelling. Computer Speech Lang 8(1):1–38 Oliveira C, Moutinho L, Teixeira A (2004) Um Novo Sistema de Conversão Grafema-Fone para PE Baseado em Transdutores. Actas do II Congresso Internacional de Fonética e Fonologia, Maranhão, Brazil Oliveira LC, Viana MC, Trancoso IM (1992) A rule-based text-to-speech system for Portuguese. In: Proceedings of ICASSP, vol 2. San Francisco, USA, pp 73–76 Santos D, Rocha P (2001) Evaluating CETEMPúblico, a Free Resource for Portuguese.
In: Proceedings of the 39th annual meeting of the association for computational linguistics, Toulouse, France, pp 442–449 SpeechDAT (1998) Portuguese SpeechDat(II) FDB-4000, European Language Resources Association. http://www.elda.org/catalogue/en/speech/S0092.html Taylor P (2005) Hidden markov models for grapheme to phoneme conversion. In: Proceedings of INTERSPEECH, Lisbon, Portugal, pp 1973–1976 Teixeira JP (2004) A prosody model to TTS systems. PhD Thesis, Faculdade de Engenharia da Universidade do Porto Wells JC (1997) SAMPA computer readable phonetic alphabet. In: Gibbon D, Moore R, Winski R (eds) Handbook of standards and resources for spoken language systems, Part IV. Berlin, Mouton de Gruyter Witten I, Bell T (1991) The zero-frequency problem: estimating the probabilities of novel events in adaptive text compression. IEEE Trans Inf Theory 37(4):1085–1094 Ribeiro R, Oliveira LC, Trancoso I (2003) Using morphossyntactic information in TTS systems: comparing strategies for European Portuguese. In: PROPOR'2003—6th workshop on computational processing of the Portuguese Language. Springer, Heidelberg, pp 143–150 Ribeiro, R, Oliveira, LC, Trancoso I (2002) Morphossyntactic Disambiguation for TTS Systems. In: Proceedings of the 3rd international conference on language resources and evaluation, vol V. pp 1427–1431 (ELRA) Braga D, Marques MA (2007) Desambiguação homógrafos para Sistemas de conversão Texto-Fala em Português", Diacrítica, 21.1 (Série Ciências da Linguagem) Braga: CEHUM/Universidade do Minho, pp 25–50 Seara I, Kafka S, Klein S, Seara R (2001) "Considerações sobre os problemas de alternância vocálica das formas verbais do Português falado no Brasil para aplicação em um sistema de conversão Texto-Fala", SBrT 2001—XIX. Simpósio Brasileiro de Telecomunicações, Fortaleza, Brazil Seara I, Kafka S, Klein S, Seara R (2002) Alternância vocálica das formas verbais e nominais do Português Brasileiro para aplicação em conversão Texto-Fala. Revista da Sociedade Brasileira de Telecomunicações 17(1):79–85 Barbosa F, Ferrari L, Resende F Jr (2003) A methodology to analyze homographs for a Brazilian Portuguese TTS system. In: PROPOR'2003— 6th workshop on computational processing of the Portuguese Language. Springer, Heidelberg Ferrari L, Barbosa F, Resende F Jr (2003) Construções gramaticais e sistemas de conversão texto-fala: o caso dos homógrafos. In: Proceedings of the international conference on cognitive linguistics, Braga Silva D, Braga D, Resende F Jr (2009) Conjunto de Regras para Desambiguação de Homógrafos Heterófonos no Português Brasileiro. In: XXVII Simpósio Brasileiro de Telecomunicações — SBrT 2009, September 29–October 2, Blumenau, Santa Catarina, Brazil, vol 1. pp 1–6 The two first authors acknowledge Instituto de Telecomunicações (Arlindo Veiga) and Science and Technology Foundation-FCT (Sara Candeias, SFRH/ BPD/36584/2007) for their scholarships. This work was also fundedby FCT under the Project (PTDC/CLE-LIN/11 2411/2009) and partially supported by FCT (Instituto de Telecomunicações multiannual funding PEst-OE/EEI/ LA0008/2011). Instituto de Telecomunicações-polo de Coimbra, Coimbra, Portugal Arlindo Veiga, Sara Candeias & Fernando Perdigão Department of Electrical and Computer Engineering, FCTUC, Universidade de Coimbra, Coimbra, Portugal Arlindo Veiga & Fernando Perdigão Arlindo Veiga Sara Candeias Fernando Perdigão Correspondence to Sara Candeias. 
This is a revised and extended version of a previous paper that appeared at STIL 2011, the 8th Brazilian Symposium in Information and Human Language Technology http://www.ufmt.br/stil2011/. Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Veiga, A., Candeias, S. & Perdigão, F. Generating a pronunciation dictionary for European Portuguese using a joint-sequence model with embedded stress assignment. J Braz Comput Soc 19, 127–134 (2013). https://doi.org/10.1007/s13173-012-0088-0 G2P conversion Grapheme–phoneme converter Stress assignment rules
Research Letter High-resolution calculation of the urban vegetation fraction in the Pearl River Delta from the Sentinel-2 NDVI for urban climate model parameterization Michael Mau Fung Wong1, Jimmy Chi Hung Fung1 & Peter Pak Shing Yeung1 Geoscience Letters volume 6, Article number: 2 (2019) Cite this article The European Space Agency recently launched the Sentinel mission to perform terrestrial observations in support of tasks such as monitoring forests, detecting land-cover changes, and managing natural disasters. The resolution of these satellite images can be as high as 10 m depending on the bands. In this study, we used the red and near-infrared bands in 10-m resolution from Sentinel-2 images to calculate the Normalized Difference Vegetation Index (NDVI) and estimate of the green vegetation fraction in urban areas within the Pearl River Delta region (PRD). We used vegetation coverage obtained from high-resolution Google satellite images as a reference to validate the vegetation estimates derived from the Sentinel-2 images, and found the correlation between the two to be as high as 0.97. As such, information from the Sentinel-2 imagery can supplement the urban canopy parameters (UCPs) derived from the World Urban Database and Access Portal Tools (WUDAPT) level-0 dataset, which is used in urban meteorological models. The rapid retrieval and open-source nature of the methodology supports high-resolution urban climate modeling studies. The fraction of green vegetation in an urban environment is a key parameter in the study of urban climate, as it influences an area's microclimates, including moisture levels and temperature (Weng et al. 2004). In recent years, the development of global land-cover datasets (Chen et al. 2015) from MODIS and LANDSAT images and the World Urban Database and Access Portal Tools (WUDAPT) level-0 dataset (Ching et al. 2014) for local climate zones/LCZs has improved the accuracy of urban meteorological modeling. The Landsat/MODIS satellite images have been used to derive global land-cover datasets (Chen et al. 2015; Gong et al. 2013), and are also very useful for monitoring land-use changes, such as the change of agricultural land to urban areas in the Pearl River Delta region (Seto et al. 2002). In such land-use datasets, urban areas are usually classified as a land-use category; detailed morphologies are not distinguished from one location to another in climate modeling. For example, mesoscale models tend to parameterize an urban area's momentum drag as a representative roughness length for the entire urban class in mesoscale modeling. In addition to these datasets, many urban meteorological models, such as the Weather Research and Forecasting model with different urban canopy parametrization schemes (e.g., Kusaka et al. 2001; Martilli et al. 2002; Salamanca et al. 2009), need to quantify the urban fraction, usually defined as the fraction of impervious surface in an urban area. Therefore, a good-quality dataset of green/urban fractions is desirable, such as the National Land Cover Database (NLCD) (Homer et al. 2007) adopted in the U.S. However, such data are not available in the public domain for the Pearl River Delta (PRD) region. In recent years, the WUDAPT project has proved valuable for estimating landscape characteristics based on satellite images and machine learning. This project aims to develop a straightforward LCZ classification scheme using free and open data, such as Landsat images and training samples from Google Earth. 
Once the LCZ classification is in place, building morphology parameters can be estimated for various urban classes using machine learning (Ching et al. 2018). The WUDAPT dataset with estimated building morphology parameters has been applied in urban heat island (UHI) studies in Madrid, with promising results (e.g., Brousse et al. 2016). Hammerberg et al. (2018) also carried out a study comparing the improvements in WRF BEP/BEM performance in Vienna using GIS-extracted building morphology data and WUDAPT level-0 data. Calculating green/urban fraction for urban climate modeling purposes with such LCZ information would typically require the use of look-up tables, which are highly dependent on the area's geolocation and climatic situation. Obtaining locally available tree data is usually difficult or costly. The European Space Agency's recently launched Sentinel-2 mission (Drusch et al. 2012) provides open-source 10-m resolution data, including the red (visible) and near-infrared (NIR) regions. This imagery has significant potential for estimating the urban green fraction based on the Normalized Difference Vegetation index (NDVI) (Carlson and Ripley 1997), which has been used to create the global land-cover datasets. It has also been widely used in vegetation fraction detection (e.g., Elmore et al. 2000) and in tracking their changes over time (e.g., Eckert et al. 2015). These estimates have been found to impact evapotranspiration modeling, which influences the accuracy of mesoscale weather simulations (Vahmani and Ban-Weiss 2016). Higher-resolution data on urban vegetation fraction would, thus, be beneficial for urban climate modeling. This study estimates the vegetation fraction in the urban areas of Pearl River Delta region by calculating the NDVI from the 10-m resolution Sentinel-2 images. We obtained the visible red and NIR bands from four tiles of the Sentinel-2 images and merged them at a resolution of 10 m. Additional file 1: Figure S1 shows the corresponding Sentinel-2 RGB image tiles and coverage for this study, which were captured on a clear-sky day (2017-12-31). We then used Google satellite images to validate the urban vegetation fraction estimated from the Sentinel-2 images, as there are no field data available for the target area. Google satellite images have a high resolution (pixel size of 0.59716 m) and are in RGB format, making it possible for the human eye to detect whether a certain region in the imagery has vegetation or not. However, manually identifying vegetation for the whole Pearl River Delta region is highly labor intensive. Therefore, to reduce validation costs, 200 Google satellite images over the study area were randomly sampled and a color detection algorithm was used to assist in estimating vegetation fractions. These calculations were combined with subjective adjustments to identify and quantify green coverage, and then used the estimates as a reference to validate the Sentinel-2-derived vegetation fraction. After validating the Sentinel-2-derived green fraction, we calculate the region's urban fraction in the 100-m urban grids used in the WUDAPT level-0 dataset. For simplification, we consider the urban fraction to be defined as impermeable surfaces and assume that any area without vegetation is impermeable. We then compare our urban fraction estimates to those derived using the WUDAPT level-0 dataset, with assigned look-up table values for different LCZs, and quantify the new method's benefits. 
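As a rough illustration of this processing chain, the sketch below computes the NDVI from co-registered red and NIR reflectance arrays, applies the NDVI > 0.2 vegetation criterion described in the next section, and aggregates the resulting mask to a coarser grid to obtain vegetation and urban fractions. The function names, the use of plain numpy arrays (the bands could equally be read with any GIS library) and the 10 × 10-pixel aggregation block (10 m to 100 m) are assumptions made for the example only.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel on
    co-registered reflectance arrays (e.g., Sentinel-2 bands 8 and 4)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

def vegetation_mask(nir, red, threshold=0.2):
    """Binary vegetation mask using the NDVI > 0.2 criterion."""
    return ndvi(nir, red) > threshold

def fractions(mask, block=10):
    """Aggregate a 10-m vegetation mask to a coarser grid (block x block
    pixels, i.e. 100 m for block=10) and return (vegetation_fraction,
    urban_fraction), with the urban fraction taken as 1 - vegetation fraction."""
    h, w = mask.shape
    h, w = h - h % block, w - w % block            # crop to whole blocks
    m = mask[:h, :w].reshape(h // block, block, w // block, block)
    veg = m.mean(axis=(1, 3))                      # vegetated share per cell
    return veg, 1.0 - veg
```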
The new urban green fraction dataset should help urban climate researchers/modelers in terms of high-resolution microclimate models and urban air quality/health assessments. The framework developed in this study could be repeated in other regions for similar purposes. Generation of a Sentinel-2-resolution green cover dataset The PRD region's green cover was estimated with the commonly used NDVI. The calculation is as follows: $${\text{NDVI}} = \left( {{\text{NIR}}{-}{\text{Red}}} \right)/\left( {{\text{NIR}} + {\text{Red}}} \right),$$ where Red and NIR are the spectral reflectance measurements acquired in the red (visible) and near-infrared regions, respectively. These spectral reflectances are themselves ratios of reflected over incoming radiation in each individual spectral band; hence, they have values between 0.0 and 1.0. Accordingly, the NDVI varies between − 1.0 and + 1.0. Numerous studies have established the threshold for vegetation as 0.2 (e.g., Sobrino et al. 2004). For computation, we merged the Sentinel-2 images' NIR and red bands from four tiles at a resolution of 10 m. Additional file 1: Figure S1 shows the corresponding Sentinel-2 RGB image tiles and their coverage. These data were captured on a clear-sky day (2017-12-31) to ensure the accuracy of the NDVI estimation. Due to the absence of field data in the target area, we used high-resolution (pixel size of 0.59716 m) Google satellite images from 2016 as field data to validate the results. These images were obtained from the Google Maps Static API with a zoom level of 19. We assume that the influence of several months' worth of differences between the field data (Google satellite images) and the Sentinel-2 data would not be significant for the PRD, which is located in a sub-tropical region. Figure 1a shows an example of a high-resolution Google Maps satellite image. Trees can be recognized, and the area that they occupy can be manually measured; these measurements serve as field data or ground truth for validation. However, examining hundreds of sampled images is a highly labor-intensive process. Therefore, we apply a very simple algorithm for color detection (Cheng et al. 2001) to the RGB Google satellite images from the MATLAB image processing toolbox to detect the color green, which usually signals the presence of vegetation. This process automatically generates images with reasonably well-recognized vegetation as an intermediate dataset requiring manual adjustment. However, green areas on the images are not always vegetation; for example, some buildings are green. Therefore, subjective adjustments (step 4) were made to the reference images using brushing tools from ARCGIS to reduce the estimation error. The steps for image processing are summarized as follows. a Example of a Google Maps satellite image of an urban area in Shenzhen. b Vegetation fraction extracted by applying a color detection algorithm to an RGB version of the image in Fig. 1a High-resolution satellite RGB images are downloaded from Google Static Map API. The RGB images are converted into HSV space for color detection. Green areas in the RGB images are detected by setting an optimum threshold (manually determined after a few iterations) for hue value, ranging 61–210. 
This green-range image is then filtered with image processing tools in MATLAB to (i) remove noise using the "bwareaopen" command, (ii) enhance the green signal in the presence of shadows between trees using the dilation command "imdilate," and (iii) connect trees that lie close to one another using the command "imclose." Procedures 1–3 are automatically repeated for 200 randomly selected locations throughout the PRD dataset to provide georeferenced images containing vegetation information. Finally, these images are overlaid with Google Maps images in ARCGIS to fine-tune the results by removing objects (e.g., green buildings) and adding trees that were not identified by the color detection scheme. The vegetation fraction of the resulting image (an example is shown in Fig. 1b) was then calculated as follows: $${\text{Vegetation fraction}} = \frac{\text{Number of green pixels}}{\text{Total number of pixels in the sampled image}}.$$ Note that the vegetation data could be sampled entirely manually; steps 1–3, however, automate the tedious job of sampling the vegetation component in RGB satellite images by generating a set of initial images that capture the majority of the green cover. The subjective adjustment in step 4 then maximizes the accuracy of the estimated vegetation fraction. In this study, the 200 images obtained through this procedure were used as a reference to validate the vegetation data obtained from the Sentinel-2 10-m imagery. We subsampled to a 360 × 360 m grid size (suitable for fine-scale urban climate modeling). Two hundred points were selected, and the corresponding 200 Google images of the same size were processed through steps 1–5 to obtain the reference vegetation fraction. These values are compared to determine the accuracy of the Sentinel-2-derived green fraction for urban climate modeling purposes. Comparison with WUDAPT level-0 After validation, the 10-m vegetation product for the PRD region was compared to the WUDAPT level-0 (100 m) data (Cai et al. 2016) to demonstrate the benefits of the new method. Specifically, we compared the urban fraction (defined as the percentage of ground covered by impervious surfaces) estimated by these two datasets. For the WUDAPT level-0 data, the urban fraction was assigned according to the local climate zones 1–10 (urban categories from Brousse et al. 2016). For the Sentinel images, the urban fraction was estimated as follows: $${\text{Urban fraction}} = 1- {\text{vegetation fraction}}.$$ To facilitate comparison, we subsampled the Sentinel-2-derived vegetation fraction to the WUDAPT grid resolution (100 m). Figure 2 shows a flow chart depicting the process used in this study. Table 1 shows the default look-up table values used to assign the urban fraction in each LCZ in the WUDAPT level-0 estimation. Flowchart of tree cover retrieval algorithm. The arrows represent the directionality of the process, which flows generally from the top to the bottom of the figure Table 1 Assigned urban fraction values for different LCZs. Accuracy of generated vegetation fraction in urban areas Spatial comparison Figure 3 compares selected examples of the vegetation fraction calculated using the NDVI and Google static map images, respectively, for areas around Hong Kong with urban fractions ranging from about 0.23 to 0.97. As can be seen in Fig.
3, the color detection algorithm with subjective adjustment based on the Google Satellite images (reference data) produces finer details than the Sentinel-2 NDVI method, likely due to the relatively higher image resolution and the manual image adjustments. Nevertheless, the NDVI calculated from the Sentinel-2 images still identifies comparable sizes of green fractions in urban areas with a large range of urban densities. Spatial comparisons of several locations in Hong Kong. Column 1: vegetation fraction retrieved from Sentinel-2. Column 2: vegetation fraction retrieved from Google Images. Column 3: Google static satellite images Random sample correlation This sampling process was repeated for a smaller sample of images with vegetation fractions between 0 and 1 with a grid size of 360 × 360 m2. The aim is to validate the Sentinel-2's ability to retrieve urban vegetation fractions by comparing its results with those of a WUDAPT dataset sampled 200 times. Figure 4 shows a scatter plot comparing vegetation fractions calculated using Google images and the Sentinel-2 images. The correlation coefficient R for the two samples is as high as 0.97, thereby quantitatively demonstrating the quality of the Sentinel-2 data. The 95% confidence interval around R ranges from 0.97 to 0.98. Scatter plot of vegetation fraction retrieved from Google images vs. vegetation fraction retrieved from Sentinel-2 images Cross-comparison with WUDAPT level-0 After validating the methodology for retrieving green fractions from Sentinel images, we used the estimated vegetation fraction product at a resolution of 100 m to estimate the urban fraction and compare it with that derived from the WUDAPT level-0 data using a default look-up table for each LCZ (Brousse et al. 2016). As shown in Fig. 5 (white represents non-urban regions, according to the WUDAPT dataset), our urban fraction estimate differs significantly from the urban fraction derived from the WUDAPT level-0 default look-up table (more than 0.5 absolute difference). As an example, the blue and purple circles in Fig. 5a show relatively high (Kowloon) and relatively low (Tsing Yi) density urban areas in Hong Kong, respectively. A comparison of the corresponding areas in Fig. 5c (white denoting a difference less than 0.02) shows that in high-density urban areas (Kowloon), the WUDAPT level-0 data tend to overestimate the urban fraction, because the resolution (100 m) cannot resolve the tree clusters, courts, or parks that are partially resolved in Figs. 1 and 3. In contrast, for less dense urban areas (in this example, Tsing Yi), the WUDAPT level-0 data tend to underestimate the urban fraction, as there are container terminals, oil depots, and large parking lots where vegetation coverage is rare. Spatial comparison of WUDAPT look-up table values vs. Sentinel-2-retrieved values in Hong Kong: a WUDAPT with default look-up table from Brousse et al. (2016); b Sentinel-2-retrieved values; and c WUDAPT minus Sentinel-2 Figure 6 further emphasizes the discrepancy between the urban fractions calculated using each dataset for the entire PRD region. The Y-axis represents the proportion of urban grids (100 m resolution) in the whole study region (PRD) with a given difference (indicated by the x axis) between the estimates derived from the WUDAPT look-up table and the Sentinel-2 images, respectively. For example, a Y-axis value of 0.1 denotes 10% of the urban area throughout the entire region. 
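The normalized histogram just described can be reproduced with a few lines of code; the array names, the urban-grid mask and the 0.1 bin width below are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def difference_histogram(uf_wudapt, uf_sentinel, urban_mask, bin_width=0.1):
    """Histogram of (WUDAPT - Sentinel-2) urban-fraction differences over
    urban grid cells, normalized so that each bar gives the proportion of
    urban grids in the study region (the quantity on the Y-axis of Fig. 6)."""
    diff = (uf_wudapt - uf_sentinel)[urban_mask]
    bins = np.arange(-1.0, 1.0 + bin_width, bin_width)
    counts, edges = np.histogram(diff, bins=bins)
    return edges, counts / diff.size

# Share of urban grids whose difference lies within +/- 0.2:
# np.mean(np.abs((uf_wudapt - uf_sentinel)[urban_mask]) <= 0.2)
```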
The differences between the two methods were found to be as high as − 0.9 to 1 depending on the location. This suggest that the Sentinel-based method estimates high impervious coverage in some regions where WUDAPT level-0 data indicate coverage of pervious surfaces and vice versa. Nevertheless, the approximate normal distribution of the histogram shows that for almost half of the domain, the differences (46% of the urban area in PRD) center at − 0.2 to 0.2; this suggests that the datasets actually have a reasonable level of agreement, despite the coarser resolution of the WUDAPT level-0 data. Histogram of differences between the urban fractions based on WUDAPT look-up tables and Sentinel images in PRD Figure 7 shows a spatial comparison of selected locations in Hong Kong (Tsing Yi and Kowloon), where the urban fraction retrieved from the WUDAPT level-0 is either underestimated or overestimated by more than 0.2 relative to our estimates. Selected spatial comparison of Google satellite images for locations with a more than 0.2 overestimation by WUDAPT look-up tables (Kowloon) and b more than 0.2 underestimation by WUDAPT look-up tables (Tsing Yi) The Sentinel images successfully resolve individual tree clusters and our estimates suggest a much lower urban fraction when aggregated to a resolution of 100 m (down to 0.2, compared to 0.8 for WUDAPT). WUDAPT identified urban areas that have relatively larger vegetation coverage (the open low-/high-rise classes) but the data were less accurate or detailed than the coverage retrieved from the Google Satellite images (Fig. 7a). For Tsing Yi, WUDAPT underestimated the urban fraction despite the correct identification of the LCZs (large low-rise or heavy industry) are correctly identified, which owes to the way that the look-up table values for the urban canopy parameters (UCP) are assigned (in this case the urban fraction). When the look-up table is used, the median value is usually assigned to the whole study area, the Sentinel-2 retrieval method, in contrast, explicitly specifies a sub-grid scale for the vegetation/urban fraction, dependent on the geolocation. Therefore, the urban fraction retrieved from the Sentinel-2 images can provide information to represent the heterogeneity of the landscape at a sub-grid scale that is typically unavailable when using WUDAPT level-0 data for urban climate model parameterization, in which a single value (usually the mean or median) represents each LCZ. Our method, thus, offers an improvement for determining urban fractions for climate modeling research. Global land-cover and local climate zone mapping is very useful for urban meteorological modeling. The urban fraction is a particularly important parameter as it affects the accuracy of urban meteorological simulations (e.g., Cui and De Foy 2012). This study uses relatively new and high-resolution satellite images from Sentinel-2 (in the red and near-infrared bands) to estimate the urban vegetation fraction based on the commonly used NDVI. The results show a high correlation (0.97) with reference estimates derived from a random sample of 200 high-resolution true color Google satellite images of the Pearl River Delta region. 
We compared the urban fractions estimated from this dataset with those derived from WUDAPT level-0 LCZ and default look-up table values, and found that the discrepancies between the two datasets was less than 0.2 for about half of the sampled urban areas in the PRD; however, the differences were also found to be as high as 0.8 for some sampled locations, either due to the WUDAPT data's inability to resolve individual parks or tree clusters in highly dense urban areas or because of the limited range in the median values for certain study areas. Overall, the vegetation fraction estimated using the Sentinel-2 images complements the existing WUDAPT level-0 data for urban meteorological modeling. NDVI: Normalized Difference Vegetation Index WUDAPT: World Urban Database and Access Portal Tools PRD: UHI: urban heat island WRF BEP/BEM: Weather Research and Forecasting model with multi-layer building effect parameterization and the building energy model NLCD: National Land Cover Database red green blue hue, saturation, value Brousse O, Martilli A, Foley M, Mills G, Bechtel B (2016) WUDAPT, an efficient land use producing data tool for mesoscale models? integration of urban LCZ in WRF over madrid. Urban Clim 17:116–134 Cai M, Ren C, Xu Y, Dai W, Wang XM (2016) Local climate zone study for sustainable megacities development by using improved WUDAPT methodology–a case study in Guangzhou. Proc Environ Sci 36:82–89 Carlson TN, Ripley DA (1997) On the relation between NDVI, fractional vegetation cover, and leaf area index. Remote Sens Environ 62(3):241–252 Chen J, Chen J, Liao A, Cao X, Chen L, Chen X et al (2015) Global land cover mapping at 30 m resolution: a POK-based operational approach. ISPRS J Photogr Remote Sens 103:7–27 Cheng H, Jiang XH, Sun Y, Wang J (2001) Color image segmentation: advances and prospects. Pattern Recogn 34(12):2259–2281 Ching J, See L, Mills G, Alexander P, Bechtel B, Feddema J et al (2014) WUDAPT: facilitating advanced urban canopy modeling for weather, climate and air quality applications. In: Proc. Amer. Meteorol. Soc. Symp. Urban Environ., pp 1–7 Ching J, Mills G, Bechtel B, See L, Feddema J, Wang X et al (2018) WUDAPT: an urban weather, climate and environmental modeling infrastructure for the anthropocene. Bull Am Meteorol Soc 99(9):1907–1924 Cui YY, De Foy B (2012) Seasonal variations of the urban heat island at the surface and the near-surface and reductions due to urban vegetation in Mexico City. J Appl Meteorol Climatol 51(5):855–868 Drusch M, Del Bello U, Carlier S, Colin O, Fernandez V, Gascon F et al (2012) Sentinel-2: ESA's optical high-resolution mission for GMES operational services. Remote Sens Environ 120:25–36 Eckert S, Hüsler F, Liniger H, Hodel E (2015) Trend analysis of MODIS NDVI time series for detecting land degradation and regeneration in Mongolia. J Arid Environ 113:16–28 Elmore AJ, Mustard JF, Manning SJ, Lobell DB (2000) Quantifying vegetation change in semiarid environments: precision and accuracy of spectral mixture analysis and the normalized difference vegetation index. Remote Sens Environ 73(1):87–102 Gong P, Wang J, Yu L, Zhao Y, Zhao Y, Liang L et al (2013) Finer resolution observation and monitoring of global land cover: first mapping results with landsat TM and ETM data. Int J Remote Sens 34(7):2607–2654 Hammerberg K, Brousse O, Martilli A, Mahdavi A (2018) Implications of employing detailed urban canopy parameters for mesoscale climate modelling: a comparison between WUDAPT and GIS databases over Vienna, Austria. 
Int J Climatol 38:e1241–e1257 Homer C, Dewitz J, Fry J, Coan M, Hossain N, Larson C et al (2007) Completion of the 2001 national land cover database for the counterminous united states. Photogr Eng Remote Sens 73(4):337 Kusaka H, Kondo H, Kikegawa Y, Kimura F (2001) A simple single-layer urban canopy model for atmospheric models: comparison with multi-layer and slab models. Bound-Layer Meteorol 101(3):329–358 Martilli A, Clappier A, Rotach MW (2002) An urban surface exchange parameterisation for mesoscale models. Bound-Layer Meteorol 104(2):261–304 Salamanca F, Krpo A, Martilli A, Clappier A (2009) A new building energy model coupled with an urban canopy parameterization for urban climate simulations—part I. Formulation, verification, and sensitivity analysis of the model. Theor Appl Climatol 99(3):331. https://doi.org/10.1007/s00704-009-0142-9 Seto KC, Woodcock C, Song C, Huang X, Lu J, Kaufmann R (2002) Monitoring land-use change in the pearl river delta using landsat TM. Int J Remote Sens 23(10):1985–2004 Sobrino JA, Jimenez-Munoz JC, Paolini L (2004) Land surface temperature retrieval from LANDSAT TM 5. Remote Sens Environ 90(4):434–440 Vahmani P, Ban-Weiss GA (2016) Impact of remotely sensed albedo and vegetation fraction on simulation of urban climate in WRF-urban canopy model: a case study of the urban heat island in Los Angeles. J Geophys Res Atmos 121(4):1511–1531 Weng Q, Lu D, Schubring J (2004) Estimation of land surface temperature–vegetation abundance relationship for urban heat island studies. Remote Sens Environ 89(4):467–483 MMFW: 40%, JCHF: 30%, PPSY: 30%. All authors read and approved the final manuscript. This work was supported by NSFC-FD Grant U1033001, and RGC Grants 16303416 and 16300715. We acknowledge Prof. Ren Chao and her group from CUHK for providing the WUDAPT dataset. Division of Environment and Sustainability, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China Michael Mau Fung Wong , Jimmy Chi Hung Fung & Peter Pak Shing Yeung Search for Michael Mau Fung Wong in: Search for Jimmy Chi Hung Fung in: Search for Peter Pak Shing Yeung in: Correspondence to Jimmy Chi Hung Fung. 40562_2019_132_MOESM1_ESM.docx Additional file 1: Figure S1. The study region and the corresponding four tiles of Sentinel-2 images (in RGB band). Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Wong, M.M.F., Fung, J.C.H. & Yeung, P.P.S. High-resolution calculation of the urban vegetation fraction in the Pearl River Delta from the Sentinel-2 NDVI for urban climate model parameterization. Geosci. Lett. 6, 2 (2019) doi:10.1186/s40562-019-0132-4 Accepted: 19 February 2019 Urban vegetation fraction Urban climate Urban fraction Sentinel-2 satellite image Asian Urban Meteorology and Climate
Reconstruction of surface-pressure fluctuations using deflectometry and the virtual fields method R. Kaufmann ORCID: orcid.org/0000-0001-6248-70271, B. Ganapathisubramani1 & F. Pierron1 Experiments in Fluids volume 61, Article number: 35 (2020) Cite this article This study presents an approach for obtaining full-field dynamic surface-pressure reconstructions with low differential amplitudes. The method is demonstrated in a setup where an air jet is impinging on a flat plate. Deformations of the flat plate under dynamic loading of the impinging jet were obtained using a deflectometry setup that allows measurement of surface slopes with high accuracy and sensitivity. The measured slope information was then used as input for the virtual fields method to reconstruct pressure. Pressure fluctuations with amplitudes of down to \({\mathcal {O}}(1)~\text {Pa}\) were extracted from time-resolved deflectometry data using temporal band-pass filters. Pressure transducer measurements allowed comparisons of the results with an established measurement technique. Even though the identified uncertainties in fluctuations were found to be as large as 50%, the spatial distributions of dynamic pressure events were captured well. Dynamic mode decomposition was used to identify relevant spatial information that correspond to specific frequencies. These dynamically important spatio-temporal events could be observed despite their low differential amplitudes. Finally, the limitations of the proposed pressure determination method and strategies for future improvements are discussed. Graphic abstract The measurement of dynamic surface pressure distributions is crucial for a range of applications in fluid dynamics, material design and testing, as well as for the investigation of impinging jets for heat and mass transfer. Time-resolved measurements that provide a large number of data points for low-range differential pressures are challenging for current techniques, which are discussed in the following. Microphones have high sensitivities for differential pressure amplitudes of \((1)~\text {Pa}\) and well below, depending on the type of microphone and experimental noise sources, but they allow only point-wise measurements. However, fitting these microphones requires drill-holes in the investigated specimen, which change the material response. Further, the achievable spatial resolution is generally limited by the size of the sensors, see, e.g., Corcos (1963, 1964). Small-amplitude vortices may thus still be detected, but with low spatial accuracy. The spatial resolution may be increased using pinhole-mounting, e.g., (Robin et al. 2012; Van Blitterswyk and Rocha 2017). This requires a large amount of sensors when measuring pressure over even a moderately large area. Larger differential pressure amplitudes of \({\mathcal {O}}(100)~\text {Pa}\) and above can be measured optically in full-field using pressure-sensitive paints (PSP), e.g., in transonic and supersonic scenarios (Engler et al. 2000; Liu et al. 2008). As PSP is a technique for the measurement of absolute pressure, it is generally difficult to resolve small differential amplitudes (Beverley and McKeon 2007, chapter 4.4). Using phase averaging to address the restricting factor of camera noise, as well as a porous paint formulation, acoustic pressure amplitudes of down to \(500~\text {Pa}\) were measured in Gregory et al. (2006). 
Using phase averaging in combination with proper orthogonal decomposition to measure the interaction between a rotor wake and a cylinder, PSP measurements of fluctuations within 100 Pa were achieved with good agreement to microphone measurements in Jiao et al. (2019). Particle image velocimetry (PIV) and particle tracking velocimetry (PTV) are well-established techniques that allow full-field pressure reconstructions from flow field measurements. These methods generally perform best when dense Lagrangian velocity and acceleration information can be achieved close to the investigated surface. In Jeon et al. (2016), tomographic time-resolved PIV was used to measure the pressure field around an inclined airfoil. 2D and 3D reconstructions for 2 and 3 components were conducted and found to yield similar results. The reconstructed differential pressures of \({\mathcal {O}}(100)~\text {Pa}\) agreed reasonably well with pressure tap measurements and further allowed identifying vortex shedding frequencies. Time-resolved measurements with high speed cameras are typically restricted by the limited amount of pixels on the imaging device. Since a certain number of pixels is required for each interrogation window, this limits the achievable accuracy in pressure and the size of the field of view. In Schneiders et al. (2016a) helium-filled soap bubbles were tested as tracer particles for tomo-PIV to increase the size of the field of view for practical engineering applications. The authors reconstruct the pressure field with differential amplitudes of \({\mathcal {O}}(1)~\text {Pa}\) around a cylinder and find good agreement with transducer measurements on the cylinder surface. They achieved a measurement volume of \(20 \times 17 \times 18 ~\text {cm}^3\). In Gesemann et al. (2016) B-Spline based methods are used to improve the accuracy and resolution of velocity, acceleration and pressure fields in the presence of noise. Synthetic data are used to quantify the achieved improvements in reconstruction performance for each method. Tomographic PTV was used in Schneiders et al. (2016b) for pressure volume reconstructions with four high-speed cameras for a wind tunnel flow around a cylinder. The accuracy was estimated by comparing the reconstructed pressure values along the surface of the cylinder to microphone measurements. Discrepancies between microphone and tomo-PTV results of only 0.5 Pa were found. In Huhn et al. (2018) 3D pressure fields were reconstructed within the flow field of an impinging jet using dense Lagrangian particle tracking (LPT) by Shake-The-Box (STB) processing with six high-speed cameras. Pressure reconstructions obtained along the impingement wall were compared with microphone measurements and yielded an uncertainty of \(0.3~\text {Pa}\). Tomo-PIV and LPT with STB processing were compared on simulated measurements of a flow around a step change in a wind tunnel in van Gent et al. (2017). The authors also investigate different available pressure reconstruction techniques. Accuracies of \({\mathcal {O}}(1)~\text {Pa}\) were achieved for all of the investigated methods. Using time-resolved Lagrangian particle tracking with the Vortex-in-Cell+ technique, an accuracy of approximately \(0.5~\text {Pa}\) was achieved. Generally, PIV and PTV approaches require optical access to the flow field, as well as suitable seeding particles and a sufficiently transparent fluid. 
Near the surface, the techniques are often restricted by shadowing effects, reflections and the finite size of the reconstruction window. The computational effort for processing the acquired images is relatively high due to the large amount of data points from potentially multiple cameras. If only the surface pressure distribution is of interest, this leads to a significant overhead and to a large amount of potentially unnecessary flow field data. Alternatively, surface deformation measurements can be used to calculate the pressure acting on a specimen by solving the mechanical equilibrium equations. For thin plates in pure bending, which will be discussed in the following, the local equilibrium equation can be obtained using the Love–Kirchhoff theory, which involves fourth-order derivatives of the surface deflections, (Timoshenko and Woinowsky-Krieger 1959). Such high-order derivatives lead to significant noise amplification and therefore require regularization. In Pezerat and Guyader (2000) numerical simulations were used to obtain local deflections of a thin-plate specimen under a given load. Noise was added numerically. Wave number filters were then applied prior to solving the equilibrium equation locally, which allowed an identification of the external vibration sources. In the same study, the technique is also applied to an experiment where an aluminium plate of \(0.5~\text {mm}\) thickness is excited with a shaker with pseudo-random noise. The reconstructed force amplitude of \(0.15~\text {N}\) was reasonably close to the directly measured force of \(0.17~\text {N}\). The acoustic component of a turbulent boundary layer flow at an air flow speed of \(25~\text {ms}^{-1}\) with a parallelepiped as turbulence source was identified on an aluminium plate with \(0.6~\text {mm}\) thickness with a similar approach in Lecoq et al. (2014). The amplitudes identified using the equilibrium equation were, however, found to underestimate the pressure levels when compared to microphone measurements. The virtual fields method (VFM) is an alternative identification method. It is based on the principle of virtual work. For the purpose of pressure identification, the VFM requires full-field kinematic data, the mechanical constitutive material parameters of the specimen, and suitable virtual fields. The latter need to be selected with respect to the theoretical and practical requirements of the investigated problem, such as boundary conditions and continuity. The virtual fields can also be selected to provide different levels of regularization adapted to a given signal-to-noise ratio. In the case of thin plates in pure bending, the principle of virtual work yields an equation that only involves second-order deflection derivatives. The VFM is described in detail with a range of applications in Pierron and Grédiac (2012). It was used in combination with scanning Laser Doppler Vibrometer (LDV) measurements for investigating acoustic loads on thin plates. This allowed reconstructing spatially averaged sound pressure levels in Robin and Berry (2018) and transverse loads and vibrations in Berry et al. (2014), as well as random external wall pressure excitations in Berry and Robin (2016). In an investigation of the sound transmission of thin plates, comparisons of spatially averaged pressure auto-spectra with microphone array measurements showed good agreement except at structural resonance frequencies in Robin and Berry (2018). 
A limitation of these studies was that full-field data could not be obtained simultaneously with LDV measurements. This can be addressed using alternative measurement techniques. Deflectometry is an optical full-field technique for the measurement of surface slopes, (Surrel et al. 1999), which can be combined with high-speed cameras to investigate dynamic events. It can achieve very high slope sensitivities, in the present study \({\mathcal {O}}(1)~\text {mm km}^{-1}\). In Devivier et al. (2016), deflections of ultrasonic Lamb waves were imaged using deflectometry on thin vibrating mirror glass and carbon/epoxy plates. Since deflectometry measurements yield surface slopes, the required order of derivatives of experimental data in the VFM is reduced to one. In Giraudeau et al. (2010) this was used to identify the Young's modulus, Poisson's ratio and the associated damping parameter on vibrating, thin, polycarbonate plates. A combination of deflectometry and the VFM was also employed in O'Donoughue et al. (2017) to reconstruct dynamics of mechanical point loads of several \({\mathcal {O}}(1)~\text {N}\) on an aluminium plate with \(3~\text {mm}\) thickness. Spatially averaged random excitations were identified with this method in O'Donoughue et al. (2019). In Kaufmann et al. (2019) it was used to measure mean pressure distributions of an impinging air jet with differential pressure amplitudes of several \({\mathcal {O}}(100)~\text {Pa}\) on thin glass plates of \(1~\text {mm}\) thickness. The study also proposes a methodology to assess the accuracy of pressure reconstructions and to select optimal reconstruction parameters. However, in several aerodynamic and hydrodynamic applications, it is important to obtain surface-pressure fluctuations (both broadband as well as at certain frequencies). In the present study, the work of Kaufmann et al. (2019) is extended to measure the spatio-temporal evolution of low differential pressure events which are generated by the flow on a surface. The method is demonstrated in a canonical flow problem: a jet impinging on a flat surface. This work is specifically concerned with reconstructing and extracting the pressure footprint of large-scale vortices impinging on the plate. These pressure fluctuations will have very low differential pressure compared to the mean flow, typically of \({\mathcal {O}}(10)~\text {Pa}\) for broad band events and well below for single-frequency events. This specific flow problem is chosen to highlight the pros and cons of the proposed surface pressure determination technique. Deflectometry Deflectometry is an optical technique that allows full-field slope measurements on specular reflective surfaces using a periodic spatial signal (Surrel et al. 1999). A schematic of the setup is shown in Fig. 1. \(p_{\text {G}}\) is the pitch of the spatial signal, here a cross-hatched grid, and \(h_{\text {G}}\) the distance between grid and specimen surface. The camera is placed next to the grid, such that a pixel directed at point M on the specimen surface records the reflected grid at point P. Applying a load deforms the surface locally, resulting in a change in surface slope, d\(\alpha\), such that the same pixel will now record the reflected point \(P^{\prime }\). Rigid body movements and out-of-plane deflections are neglected here, as the specimen bending stiffness is sufficiently large and the investigated loads small. 
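As a rough numerical sketch of this measurement principle, the following code turns a measured phase difference between a deformed and a reference grid image into an apparent displacement field by fixed-point iteration and then into a slope map, anticipating the relations given in Eqs. (1) and (2) below. The phase-extraction step itself (the windowed discrete Fourier transform) is not shown, the handling of units is simplified (all lengths are assumed to be expressed in pixels of the grid image), and the iteration count and interpolation settings are placeholders.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def slope_from_phase(phi_ref, phi_def, p_grid, h_grid, axis=1, n_iter=5):
    """Iteratively solve u = -p_grid/(2*pi) * (phi_def(x + u) - phi_ref(x))
    for the apparent grid displacement u along one image axis, then convert
    it to a surface-slope map via d_alpha = u / (2 * h_grid).  p_grid and
    h_grid are the grid pitch and grid-to-specimen distance, here taken in
    the same (pixel) units as u so that the slope is dimensionless."""
    rows, cols = np.indices(phi_ref.shape, dtype=float)
    u = np.zeros_like(phi_ref)
    for _ in range(n_iter):
        # sample the deformed phase map at the displaced positions x + u
        coords = (rows, cols + u) if axis == 1 else (rows + u, cols)
        phi_shifted = map_coordinates(phi_def, coords, order=1, mode="nearest")
        u = -p_grid / (2.0 * np.pi) * (phi_shifted - phi_ref)
    return u / (2.0 * h_grid)      # slope map (small-angle approximation)
```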
Phase maps are extracted from grid images using a spatial phase-stepping algorithm featuring a windowed discrete Fourier transform algorithm with triangular weighting and using a detection kernel size of two periods (Surrel 2000; Badulescu et al. 2009). This algorithm suppresses some harmonics resulting from the use of a non-sinusoidal signal and mitigates the effects of miscalibration. The displacement, u, between P and \(P^{\prime }\) is calculated iteratively from the obtained phase maps, \(\phi\), (Grédiac et al. 2016, section 4.2): $$\begin{aligned} {u} _{n+1} ({x}) = -\frac{\text {p}_{\text {G}}}{2\pi }(\phi _{{\text {def}}}({x}+{u}_n({x})) - \phi _{{\text {ref}}}({x})), \end{aligned}$$ where the subscripts def and ref refer to a deformed and a reference configuration, respectively, as the phase difference between a loaded and unloaded configuration is of interest. For sufficiently small d\(\alpha\), \(h_{\text {G}} \gg {u}\), small angle \(\theta\) and assuming that the camera records images in normal incidence, geometric considerations yield a simplified, linear relationship between the change in surface slope d\(\alpha\) and u (e.g., Ritter 1982): $$\begin{aligned} {\text {d}}\alpha _{x} = \frac{u_{x}}{2h_{\text {G}}}, \quad {\text {d}}\alpha _y = \frac{u_y}{2h_{\text {G}}}, \end{aligned}$$ where \(u_{x}\) and \(u_{y}\) are the components of u in x- and y-direction, respectively. If this hypothesis is not valid, then full calibration needs to be performed, which is more complex, see Balzer and Werling (2010) and Surrel and Pierron (2019). The printed grid pitch \(p_{\text {G}}\) drives the spatial resolution. The slope resolution depends on measurement noise as well as \(p_{\text {G}}\) and \(h_{\text {G}}\). [redrawn from (Kaufmann et al. 2019)] Top view of deflectometry setup and working principle Pressure reconstruction Assuming that the plate material is linear elastic, isotropic and homogeneous, the principle of virtual work is expressed by (e.g., Pierron and Grédiac 2012, chapter 3): $$\begin{aligned} \begin{aligned}&\int \limits _{S} p w^{*} {\text {d}}S \\&=D_{xx} \int \limits _{S} \left( \kappa _{xx} \kappa ^* _{xx} + \kappa _{yy} \kappa ^* _{yy} + 2~\kappa _{xy} \kappa ^* _{xy} \right) {\text {d}}S \\&\quad +D_{xy} \int \limits _{S} \left( \kappa _{xx} \kappa ^* _{yy} + \kappa _{yy} \kappa ^* _{xx} - 2~\kappa _{xy} \kappa ^* _{xy} \right) {\text {d}}S\\&\quad +\rho t_S \int \limits _{S} a_z w^{*} {\text {d}}S, \end{aligned} \end{aligned}$$ where S denotes the surface area, p the investigated pressure, \(D_{xx}\) and \(D_{xy}\) the plate bending stiffness matrix components, \(\kappa\) the curvatures, \(\rho\) the plate material density, \(t_S\) the plate thickness, a the acceleration, \(w^{*}\) the virtual deflections and \(\kappa ^{*}\) the virtual curvatures. In the present study \(D_{xx}\), \(D_{xy}\), \(\rho\) and \(t_S\) were known from the plate manufacturer. \(\kappa\) and a were obtained from deflectometry measurements. The virtual fields \(w^{*}\) and \(\kappa ^{*}\) have to be chosen with respect to theoretical as well as practical restrictions of the problem such as continuity and boundary conditions. They can further be optimized to minimize the effects of noise (Pierron and Grédiac 2012, chapter 3.7). Assuming constant pressure within a piecewise virtual field and approximating the integrals in Eq. 
Pressure reconstruction
Assuming that the plate material is linear elastic, isotropic and homogeneous, the principle of virtual work is expressed by (e.g., Pierron and Grédiac 2012, chapter 3): $$\begin{aligned} \begin{aligned}&\int \limits _{S} p w^{*} {\text {d}}S \\&=D_{xx} \int \limits _{S} \left( \kappa _{xx} \kappa ^* _{xx} + \kappa _{yy} \kappa ^* _{yy} + 2~\kappa _{xy} \kappa ^* _{xy} \right) {\text {d}}S \\&\quad +D_{xy} \int \limits _{S} \left( \kappa _{xx} \kappa ^* _{yy} + \kappa _{yy} \kappa ^* _{xx} - 2~\kappa _{xy} \kappa ^* _{xy} \right) {\text {d}}S\\&\quad +\rho t_S \int \limits _{S} a_z w^{*} {\text {d}}S, \end{aligned} \end{aligned}$$ where S denotes the surface area, p the investigated pressure, \(D_{xx}\) and \(D_{xy}\) the plate bending stiffness matrix components, \(\kappa\) the curvatures, \(\rho\) the plate material density, \(t_S\) the plate thickness, a the acceleration, \(w^{*}\) the virtual deflections and \(\kappa ^{*}\) the virtual curvatures. In the present study \(D_{xx}\), \(D_{xy}\), \(\rho\) and \(t_S\) were known from the plate manufacturer. \(\kappa\) and a were obtained from deflectometry measurements. The virtual fields \(w^{*}\) and \(\kappa ^{*}\) have to be chosen with respect to theoretical as well as practical restrictions of the problem such as continuity and boundary conditions. They can further be optimized to minimize the effects of noise (Pierron and Grédiac 2012, chapter 3.7). Assuming constant pressure within a piecewise virtual field and approximating the integrals in Eq. (3) with discrete sums, one obtains a simplified expression for the pressure: $$\begin{aligned} \begin{aligned} p&= \Biggl ( ~{D_{xx}} \sum \limits _{i = 1} ^{N} \kappa ^{i} _{xx} \kappa ^{*i} _{xx} +\kappa _{yy}^{i} \kappa ^{*i} _{yy} + 2~ \kappa ^{i}_{xy} \kappa ^{*i} _{xy} \\&\quad +~{D_{xy}} \sum \limits _{i = 1} ^{N} \kappa ^{i}_{xx} \kappa ^{*i} _{yy} +\kappa ^{i}_{yy} \kappa ^{*i} _{xx} - 2~\kappa ^{i}_{xy} \kappa ^{*i} _{xy} \\&\quad +~\rho t_S \sum \limits _{i = 1} ^{N} a_z^{i} w^{*i} \Biggr ) \cdot \left( \sum \limits _{i = 1} ^{N} w^{*i} \right) ^{-1}, \end{aligned} \end{aligned}$$ where N is the total number of discrete surface elements.
Virtual fields
In this study, 4-node Hermite 16 element shape functions were used to define virtual fields over subdomains of the plate surface S. The formulation of these fields can be found in Pierron and Grédiac (2012, chapter 14). They provide \(\hbox {C}^{1}\) continuous virtual deflections, which yield the required continuous virtual slopes. They also allow defining virtual displacements and slopes that are zero over the edges of each element, which eliminates the unknown contributions of virtual work along the plate boundaries. The definition of virtual fields over subdomains, also called piecewise virtual fields, provides more flexibility than globally defined virtual fields, in particular when unknown and complex load distributions are investigated. In the following, the subdomain over which a virtual field was defined is referred to as the pressure reconstruction window (PRW). Nine nodes were defined for one PRW. All degrees of freedom were set to zero except for the virtual deflection of the center node, which was set to 1. One pressure value was calculated for each window. Example fields are shown in Fig. 2. The size of the PRW has to be chosen according to the signal-to-noise ratio as well as the spatial distribution of the signal. Larger windows filter noise more efficiently as they effectively average over a larger area, but they can lead to a loss of signal amplitude and limit the spatial resolution. Example Hermite 16 virtual fields with superimposed virtual elements and nodes (black). \(\xi _1\), \(\xi _2\) are parametric coordinates. The example window size is 24 points in each direction. Full equations can be found in Pierron and Grédiac (2012, chapter 14)
The experiment consisted of a deflectometry setup with a reflective specimen and an impinging air jet (see Fig. 3). The cross-hatched grid used for deflectometry was printed on transparency at \(600~\text {dpi}\) using a Konica Minolta bizhub C652 printer. A custom-made panel with \(9\times 100~\text {W}\) LEDs was used as white light source. The camera was placed at an angle beside the printed grid to record the reflected grid in normal incidence. Table 1 lists the relevant experimental parameters. 5400 images were recorded per deflectometry measurement series due to limited camera storage, which corresponds to \(1.35~\text {s}\). A \(1~\text {mm}\) thick first-surface glass mirror served as specimen. Since the camera is not focused on the plate surface, but on the reflected grid, small deformations and imperfections quickly lead to a lack of depth of field. This was addressed by closing the aperture. The achieved slope resolution, defined here as the standard deviation of a slope map calculated from two grid images in an unloaded configuration, was approximately \(3~\text {mm km}^{-1}\).
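For illustration, the discrete expression above (Eq. 4) can be evaluated over a single pressure reconstruction window as in the following sketch. Variable names are assumptions and the virtual deflection and curvature maps are assumed to have been evaluated beforehand from the Hermite 16 shape functions of Pierron and Grédiac (2012, chapter 14); this is not the authors' implementation.

```python
import numpy as np

def prw_pressure(k_xx, k_yy, k_xy, a_z,
                 w_star, ks_xx, ks_yy, ks_xy,
                 D_xx, D_xy, rho, t_s):
    """Pressure assumed uniform over one PRW (Eq. 4); all arrays share the PRW shape.

    k_xx, k_yy, k_xy : measured curvature maps;  a_z : acceleration map
    w_star, ks_*     : virtual deflection and virtual curvature maps of the PRW
    """
    bending = D_xx * np.sum(k_xx * ks_xx + k_yy * ks_yy + 2.0 * k_xy * ks_xy) \
            + D_xy * np.sum(k_xx * ks_yy + k_yy * ks_xx - 2.0 * k_xy * ks_xy)
    inertia = rho * t_s * np.sum(a_z * w_star)  # dropped later under the quasi-static assumption
    return (bending + inertia) / np.sum(w_star)
```

Shifting such a window across the field of view by one data point at a time yields the spatially oversampled pressure maps used later in the processing.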
A fan-driven, round air jet was used to generate the investigated flow impinging on the flat plate specimen. This flow was chosen as a canonical flow problem to demonstrate the measurement technique and highlight its pros and cons. Figure 4 shows a schematic of such an impinging jet, which can be divided into a free jet, stagnation and wall region, see, e.g., Kalifa et al. (2016) and Zuckerman and Lior (2006). Its properties are governed by the ratio between downstream distance and nozzle diameter, \(h_{\text {N}}/D\), the nozzle geometry and the Reynolds number, Re. The free jet develops if \(h_{\text {N}}/D \gtrsim 2\) (Zuckerman and Lior 2006). Downstream from the nozzle exit, a shear layer forms around the jet core, leading to entrainment and causing the mean velocity profile to spread. So-called primary vortices form in the shear layer around the jet core, propagate downstream, impinge on the specimen and are then deflected to propagate radially outward along the wall (Zuckerman and Lior 2006). Their strength scales with the Reynolds number. A stagnation region forms as the jet approaches the impingement plate. Here, the static pressure rises up to the stagnation point and the resulting pressure gradients divert the flow radially away from the jet center line. This laterally diverted flow forms the wall region. In this region, merging of primary vortices occurs (Pieris et al. 2019). Further, counter-rotating secondary vortices form along the impingement surface as a result of impinging primary vortices (Walker et al. 1987). These can deflect the primary vortices away from the wall. Tertiary structures resulting from interactions between primary and secondary ones are also sometimes observed. The shedding frequencies of the secondary vortices have a higher variability than those of primary vortices (Pieris et al. 2019). The mean pressure distribution on the impingement surface is approximately Gaussian (Beltaos 1976). The spectrum of frequencies at which primary vortices are shed typically has a maximum that depends on the nozzle exit velocity and the nozzle diameter, characterized by the Strouhal number (Becker and Massaro 1968): $$\begin{aligned} \text {St} = f_{\text {shed}}\, D\, u_{\text {exit}}^{-1} \approx 0.5. \end{aligned}$$ For the given setup parameters, it follows that the shedding frequency spectrum has an expected maximum at about \(f_{{\text {shed}}} = 800~\text {Hz}\). Thus, the 5400 images captured for one measurement series at 4000 frames per second correspond to \(1080~f^{-1}_{{\text {shed}}}\). The jet parameters, also given in Table 1, were chosen to generate dynamic events, particularly primary vortices, that could likely be resolved with the available deflectometry grid pitch and camera frame rate at full spatial resolution. Experimental setup Table 1 Setup parameters Impinging jet setup and flow features
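A quick arithmetic check of the quoted shedding frequency and of the recording length, expressed in shedding periods, is given below. The nozzle diameter and exit velocity are illustrative placeholders only (Table 1 is not reproduced in this excerpt); they are chosen so that the Strouhal relation above returns the 800 Hz stated in the text.

```python
St = 0.5            # Strouhal number, St = f_shed * D / u_exit (Becker and Massaro 1968)
D = 0.02            # nozzle diameter [m] -- assumed placeholder value
u_exit = 32.0       # nozzle exit velocity [m/s] -- assumed placeholder value
f_shed = St * u_exit / D          # -> 800 Hz

frames, fps = 5400, 4000.0        # recording length and frame rate from the text
print(f_shed)                     # 800.0
print(frames / fps * f_shed)      # 1080.0 shedding periods in one 1.35 s series
```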
Reference pressure measurements were conducted using Endevco 8507C-2 type pressure microphones with piezoresistive transducer elements. These microphones have a diameter of \(2.5~\text {mm}\). According to the manufacturer, they have an amplitude linearity of \(\pm 1~\text {dB}\) in a range between 100 and 173 dB and a frequency response of \(\pm 5~\text {dB}\) to more than \(20~\text {kHz}\). Here, the experimentally observed noise level had a standard deviation of up to \(1.5~\text {Pa}\). The microphones were placed along a line starting from the stagnation point and extending radially outwards with a spacing of \(5~\text {mm}\). An NI PXIe-4330 module was used for data acquisition. Data were acquired over \(20~\text {s}\) at \(20~\text {kHz}\). The microphones were placed along an aluminium plate of \(1~\text {cm}\) thickness at the same distance from the jet as the glass specimen used in the deflectometry measurements. The positioning of the jet relative to the plate was the main error source in the microphone measurements, since no high-precision positioning equipment, e.g., micro-stages, was available. Eight separate measurements were conducted to account for this repeatability error. Accelerations were measured using a Polytec PDV 100 Laser Doppler Vibrometer (LDV). The acquisition frequency was \(4~\text {kHz}\) and the acquisition time \(20~\text {s}\). One phase map was calculated from each grid image. A windowed discrete Fourier transform phase detection algorithm using a rectangular window of two grid pitches in length was employed for phase detection (Surrel 2000). One data point per full grid pitch was calculated in the x- and y-direction, respectively. Curvatures were calculated from the slope maps with three-point centered finite differences. Deflections were obtained from the slope maps using an inverse (integrated) gradient based on a sparse approximation (D'Errico 2009). The second time derivative of these deflections yields accelerations; central differences were used here. However, this requires knowledge of the integration constant, i.e., of the deflection at one reference point at each time step. These were not measured in the present setup. Instead, plate deformations were assumed to be quasi-static, which allows neglecting the acceleration term in Eq. (3). While this assumption is unlikely to be entirely accurate, it is reasonable to assume that for thin plates in bending sufficient relevant information about the spatial shape of the pressure distribution is contained in the curvature information. Separate LDV acceleration measurements were used to obtain an estimate of the resulting error. VFM pressure reconstructions were conducted as described in Sect. 2.2. The pressure reconstructions were oversampled in space by shifting the PRW by one data point in either direction until the entire surface was covered. Figure 5 shows a flow chart of the main processing steps. For the calculation of standard deviations, data from 20 runs were used to improve convergence. Main data processing steps The mean pressure distributions obtained from time-resolved measurements in the present study are compared to the mean distributions obtained from uncorrelated snapshots in Kaufmann et al. (2019). Figure 6 shows the azimuthally averaged mean pressure distribution for both methods and pressure transducer data for comparison. Even though the presented time-resolved measurements suffer from a lower signal-to-noise ratio due to the much higher shutter speed employed here, the results agree well. The mean surface pressure distribution was captured well using the VFM in both the snapshot data and the time-resolved data, but the amplitude is underestimated by approximately 30%. It was shown that this discrepancy between VFM and transducer data is largely due to the systematic processing error of the VFM. Further, it could be reduced to approximately 10% using a finite element correction procedure in Kaufmann et al. (2019). For amplitudes of this magnitude, this is comparable to other optics-based pressure determination techniques such as PSP, see, e.g., Beverley and McKeon (2007) and Jiao et al. (2019).
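The slope-to-curvature and deflection-to-acceleration steps described above can be summarized in a short sketch. This is an assumed implementation for illustration (the inverse-gradient integration of D'Errico (2009) is not reproduced); np.gradient uses three-point central differences in the interior, matching the processing described in the text.

```python
import numpy as np

def curvatures(slope_x, slope_y, dx):
    """Curvature maps from the two measured slope maps; dx is the spatial step [m].
    Convention assumed here: axis 0 is y, axis 1 is x."""
    k_xx = np.gradient(slope_x, dx, axis=1)            # d(slope_x)/dx
    k_yy = np.gradient(slope_y, dx, axis=0)            # d(slope_y)/dy
    k_xy = 0.5 * (np.gradient(slope_x, dx, axis=0)     # symmetrised twist curvature
                  + np.gradient(slope_y, dx, axis=1))
    return k_xx, k_yy, k_xy

def acceleration(w, dt):
    """Second central time difference of the deflection maps w(t, y, x);
    only needed if the quasi-static assumption is dropped."""
    return (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dt**2
```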
Further details of the mean surface pressure comparison can be found in Kaufmann et al. (2019), where the influence of the processing parameters as well as of noise is investigated in experimental and numerical data. The study also investigates the systematic error using finite element simulations and artificial grid deformation. Given the agreement in the mean distributions, the time-resolved data can now be further examined to obtain surface-pressure fluctuations. These dynamic events were investigated here with two different filtering approaches. First, a temporal bandpass filter was implemented by calculating the Fourier transform of the slope maps and setting the amplitudes of the filtered frequencies to zero. This filter has a poor impulse response, i.e., a poor response to finite-length input signals, but yields the best achievable frequency resolution, which allows an application down to very narrow frequency bands. Second, dynamic mode decomposition (DMD) (Schmid 2010) was applied to instantaneous pressure reconstructions. The technique introduced in Hemati et al. (2014) for applying DMD to large and streaming data sets was employed here. Comparison between transducer measurements and VFM pressure reconstructions for different processing parameters. r denotes the radial distance from the stagnation point. Snapshot and transducer data from Kaufmann et al. (2019) Processing parameters Simulated experiment input and results A methodology for selecting optimal processing parameters based on simulated experiments and artificial grid deformation was proposed in Kaufmann et al. (2019, section 5). It requires an estimate of the expected load distribution to obtain model slope maps. These allow calculating the deformations of a corresponding, artificial reflected grid image, which is used as input to the processing algorithm. By systematically varying the PRW size, its influence on the pressure reconstruction can be evaluated to identify its optimal value. This simulation also allows an estimation of the systematic processing error for the chosen reconstruction parameters by comparing the input and reconstructed pressure distributions. Since this study aims at identifying dynamic events which are governed by the primary vortices impinging on the specimen, a circular load distribution with a peak amplitude of \(10~\text {Pa}\) was chosen as model input, see Fig. 7a. Here, \(r_x\) and \(r_y\) denote the radial distance from the stagnation point in the x- and y-direction, respectively. D is the nozzle diameter. The results of this analysis are shown in Fig. 7b. The error estimate, \(\epsilon\), is defined as the absolute difference between the reconstructed and input pressure amplitudes, \(p_{{\text {rec}}, i}\) and \(p_{{\text {in}}, i}\), divided by the local amplitude of the input pressure distribution at each point i: $$\begin{aligned} \epsilon = \frac{1}{N}\sum _{i=1}^{N} \left| \frac{p_{{\text {rec}},i}-p_{{\text {in}},i}}{p_{{\text {in}},i}} \right| , \end{aligned}$$ where N is the number of points. With an estimated average error of ca. 20%, \(\text {PRW} = 24\) was identified as the optimal size and will therefore be employed in the following. Note that a PRW side length of 24 data points corresponds to a physical distance of 0.6 D, or \(12~\text {mm}\). Despite the PRW size of 0.6 D, it will be shown later that the smaller spatial structures of the impinging vortices are well captured. The outcomes can be further improved by optimizing the setup.
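The error metric defined above and the PRW-size sweep can be written compactly as below. This is a sketch with assumed names; in practice the metric is only meaningful where the input pressure is non-negligible, so a mask excluding near-zero values of the model load is assumed here.

```python
import numpy as np

def reconstruction_error(p_rec, p_in, threshold=0.5):
    """Mean absolute relative error between reconstructed and input pressure,
    evaluated where |p_in| exceeds a threshold [Pa] (threshold is an assumption)."""
    mask = np.abs(p_in) > threshold
    return np.mean(np.abs((p_rec[mask] - p_in[mask]) / p_in[mask]))

# Typical use in the simulated experiment: sweep the PRW size and keep the value
# minimising the error, e.g. (reconstruct_with_prw is a placeholder for the full chain)
# errors = {n: reconstruction_error(reconstruct_with_prw(n), p_model) for n in (12, 16, 24, 32)}
```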
Pressure fluctuations obtained through VFM reconstructions from unfiltered slope data and comparison to microphone data Instantaneous pressure maps were reconstructed from each measured slope map. The corresponding standard deviations, \(s_p\), obtained from these maps allowed a first evaluation of the captured pressure fluctuations. Figure 8a shows that the spatial distribution obtained for \(s_p\) appears to resemble noise patterns. These data were compared to microphone measurements. VFM standard deviation results were averaged over all data points with the same radial distance from the stagnation point at \(rD^{-1} = 0\). Figure 8b shows that the VFM results significantly overestimate the standard deviations when compared to microphone data. To investigate the reason for the poor agreement of these results, the power spectral densities (PSDs) of the slope maps obtained from deflectometry measurements were compared with those of the pressure amplitudes from microphone measurements. The amplitude spectrum obtained from microphone data is shown in Fig. 9a. Slope data (Fig. 9b) were averaged over 21 data points, which corresponds to an area of \(5.3~\text {mm}^2\), approximately matching the \(4.9~\text {mm}^2\) surface area of the microphone. The comparison shows that the slope measurements capture relevant information on the dynamics of the impinging jet in the observed slope amplitude spectrum between ca. \(0.25~f_{{\text {shed}}}\) and \(1.88~f_{{\text {shed}}}\). Both figures show a maximum of the respective amplitude spectra close to the expected frequency \(f_{{\text {shed}}}\). However, the pressure amplitude spectrum obtained from VFM pressure reconstructions (Fig. 9c) shows mostly random noise for all frequencies above ca. \(0.06~f_{{\text {shed}}}\) (\(50~\text {Hz}\)). This suggests that low frequencies contain sufficient noise sources to overwhelm the signal when the slopes are converted to pressure using the VFM. This is likely because the VFM requires processing the slopes to accelerations (obtained from slopes by spatial integration and subsequent double temporal differentiation) and curvatures (computed from slopes by spatial differentiation). Obtaining these quantities appears to amplify noise to the extent that the low differential pressures are masked. However, the VFM is a linear method and therefore the methodology can be applied either to the entire range of frequencies or to specific frequency bands. If the method were applied to specific bands, then it might be possible to obtain the sought low differential pressure events, since the lower-frequency noise sources could be effectively filtered out. It is therefore necessary to employ further processing steps to extract dynamic pressure information from the slope measurements. Qualitative comparison of PSDs from different measurement sources on the specimen surface at position \(rD^{-1} = 0\)
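The PSD comparison of Fig. 9 can be reproduced along the following lines. This is a sketch with assumed array names; only the sampling rates (4 kHz for the camera, 20 kHz for the microphones) are taken from the text, and Welch's method with an arbitrary segment length is assumed for the spectral estimate.

```python
import numpy as np
from scipy.signal import welch

def point_psds(slope_ts, patch_mask, mic_pressure,
               fs_cam=4000.0, fs_mic=20000.0, nperseg=1024):
    """PSD of the spatially averaged slope signal over a microphone-sized patch,
    and PSD of the microphone pressure signal, for a qualitative comparison.

    slope_ts     : slope maps, shape (n_frames, ny, nx)
    patch_mask   : boolean mask (ny, nx) selecting ~21 points around the probe location
    mic_pressure : microphone time series [Pa]
    """
    slope_point = slope_ts[:, patch_mask].mean(axis=1)
    f_s, psd_slope = welch(slope_point, fs=fs_cam, nperseg=nperseg)
    f_m, psd_mic = welch(mic_pressure, fs=fs_mic, nperseg=4 * nperseg)
    return (f_s, psd_slope), (f_m, psd_mic)   # frequencies can be normalised by f_shed = 800 Hz
```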
Temporal filter
Instantaneous VFM pressure reconstructions from slope data filtered with a bandpass range from \(0.25~f_{{\text {shed}}}\) to \(1.88~f_{{\text {shed}}}\) (200–1500 Hz) PSD of VFM reconstruction from slope data filtered with a bandpass range from \(0.25~f_{{\text {shed}}}\) to \(1.88~f_{{\text {shed}}}\) (200–1500 Hz). Noise spectrum for comparison Pressure fluctuations obtained using VFM reconstructions from slope data filtered with a bandpass range from \(0.25~f_{{\text {shed}}}\) to \(1.88~f_{{\text {shed}}}\) (200–1500 Hz) and comparison to filtered microphone data A temporal band-pass filter was applied to the slope maps (see Sect. 3.3) to extract information on dynamic flow events in the identified relevant frequency range between \(0.25~f_{{\text {shed}}}\) and \(1.88~f_{{\text {shed}}}\) (200–1500 Hz). Figure 10 shows instantaneous VFM pressure reconstructions from the band-pass filtered slope maps at two different points in time. The observed pressure distributions agree qualitatively with the distributions expected from primary vortices impinging on the flat surface: they form around the jet core upstream of the plate, i.e., with a radius of approximately 0.5 D, and spread radially outward along the impingement plate. Figure 11 shows the corresponding PSD. The PSD of pressure reconstructions from a series of noise images which were processed in exactly the same way is shown in the same figure for comparison. While the signal-to-noise ratio is relatively low, the signal is still clearly above the noise level. The PSD further shows low-amplitude noise in the frequency bands that were previously set to zero in the slope spectra, which is a result of the VFM processing steps. The standard deviations of the VFM pressure reconstructions from the broad band-pass filter (see Fig. 12a) are compared with microphone data, which are filtered within the same frequency band as the slope maps (see Fig. 12b). The VFM reconstructions still overestimate the standard deviations when compared to the microphone measurements, by up to 50%. The shape of the distribution, however, appears to be captured. The distribution found in Fig. 12a agrees reasonably well with expectations, as the fluctuations increase for radii above 0.5 D around the stagnation point. This is where primary vortices are expected to impinge on the plate and propagate radially outwards, with secondary vortices forming in the wall flow. The results show that information on surface-pressure fluctuations was captured by the deflectometry measurements and that it is possible to reconstruct the corresponding pressure distributions qualitatively. The corresponding amplitudes of the standard deviations are, however, overestimated, likely due to the assumption of quasi-static bending, the systematic processing error and the low signal-to-noise ratio. To further investigate the pressure fluctuations, methods to address the low signal-to-noise ratio and to extract additional dynamic information from the data are employed in the following section.
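The brick-wall temporal filter used above (Fourier transform of each slope time series, zeroing of all bins outside the pass band, inverse transform) can be sketched as follows; array names are assumptions.

```python
import numpy as np

def bandpass_slopes(slope_ts, fs, f_lo=200.0, f_hi=1500.0):
    """Zero all temporal Fourier components of the slope maps outside [f_lo, f_hi].

    slope_ts : slope maps, shape (n_frames, ny, nx); fs : frame rate [Hz].
    """
    n = slope_ts.shape[0]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(slope_ts, axis=0)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0   # brick-wall band pass (poor impulse response)
    return np.fft.irfft(spec, n=n, axis=0)
```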
Dynamic mode decomposition
Dynamic mode decomposition (DMD) is a suitable tool for extracting relevant dynamic information from a data sequence. It allows identifying spatially coherent features and their temporal behaviour within the entire observed frequency spectrum. The main challenge in applying this technique to the present case is that the computational cost and the required resources increase with the number of data points in space as well as with the number of snapshots. The approach introduced in Hemati et al. (2014) allows processing all instantaneous pressure maps by incrementally updating the calculated POD basis as well as the DMD modes and coefficients. It is therefore employed here. DMD was applied to VFM pressure reconstructions from slopes which were filtered within a band-pass range from \(0.25~f_{{\text {shed}}}\) to \(1.88~f_{{\text {shed}}}\) (200–1500 Hz). This eliminated modes stemming from the mean surface pressure and from low-frequency experimental noise sources. The calculated POD basis was truncated at 200 modes, since higher modes primarily stem from random noise. Figure 13a, b show the amplitude and damping coefficients. The modes identified for frequencies below \(0.25~f_{{\text {shed}}}\) (200 Hz) stem from noise introduced by data processing, since this frequency range was filtered out from the slope information. These modes resemble noise patterns, have low amplitudes and high damping coefficients. The most relevant modes, corresponding to the impinging primary vortices, were identified to be around \(0.97~f_{{\text {shed}}}\), which is reasonably close to the maximum expected from theoretical considerations in the frequency spectrum (see Sect. 3.1). They have the highest amplitudes and the lowest damping coefficients of all identified modes. Examples of pressure reconstructions from two modes are shown in Fig. 13c, d. The modal shapes are coherent with only small amounts of noise and resemble the expected spatial distributions. A video of the \(0.97~f_{{\text {shed}}}\) (776 Hz) mode with the damping coefficient set to zero can be found in the supplementary material of this paper. This clearly shows that the low differential pressure events generated by impinging vortices were captured by the new surface pressure determination method. It should be noted that the differential pressure amplitudes extracted here are below \({\mathcal {O}}(1)~\text {Pa}\). Despite the uncertainty in amplitude identified above, capturing these events in full-field is beyond the reach of other existing measurement techniques. More importantly, the method allows the spatio-temporal evolution of the pressure footprint of flow structures on a surface to be identified, which is otherwise currently only possible with tomographic PIV or PTV; these would require optical access for several cameras to the flow side of the experiment and to the entire area over the impingement surface. It should, however, be noted that these results could only be extracted using the filter techniques detailed above, which require time redundancy and are only applicable to flows with events occurring at discrete frequencies. Combining deflectometry and the VFM with flow measurements will allow investigations of flow-structure interactions in more detail in the future. Improving this methodology for further applications requires consideration of the error sources and of the ways in which they can be minimized. This is discussed in the next section. DMD results obtained with \(108 \times 10^3\) snapshots (from 20 runs with 5400 snapshots each) from VFM pressure reconstructions using slope maps which were bandpass filtered within the range from \(0.25~f_{{\text {shed}}}\) to \(1.88~f_{{\text {shed}}}\). 200 POD modes were calculated
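For illustration, a standard batch (exact) DMD of the reconstructed pressure snapshots is sketched below; the study itself used the incremental, streaming formulation of Hemati et al. (2014), which avoids holding all snapshots in memory. The rank-200 truncation mirrors the description in the text; everything else (names, layout) is an assumption.

```python
import numpy as np

def dmd(snapshots, dt, rank=200):
    """Exact DMD of a snapshot matrix.

    snapshots : flattened pressure maps, shape (n_points, n_times)
    dt        : time step between snapshots [s] (here 1/4000 s)
    Returns mode frequencies [Hz], growth/damping rates [1/s] and the spatial modes.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)            # POD basis of the snapshots
    r = min(rank, len(s))
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)   # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W              # exact DMD modes
    omega = np.log(eigvals) / dt                                # continuous-time eigenvalues
    return omega.imag / (2.0 * np.pi), omega.real, modes
```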
Error sources
This section discusses the systematic error sources encountered in the experimental setup as well as in the processing technique. In terms of experimental error sources, several elements of the deflectometry setup should be considered. Irregularities in the recorded grid as well as miscalibration, i.e., non-integer numbers of pixels per grid pitch, can lead to errors in the detected phases. The main factors causing these irregularities are damage to the specimen surface and defects in the printed grid. Miscalibration can be caused by misalignments between the camera sensor and the printed grid, as well as by irregularities and harmonics in the printed grid. Misalignment can also result in fringes. These error sources and their effects are highly dependent on the precise alignment between grid, specimen surface and camera. They can further be time dependent, because the specimen deforms under the dynamic load. LDV measurements of the camera vibrations also showed that the camera cooling fans caused vibrations at several frequencies below 100 Hz, which were also identified in the measured slope spectrum. Based on comparisons with numerical data presented in Kaufmann et al. (2019), the effect of these experimental errors could amount to up to 10% of the peak pressure value for the mean surface pressure. An additional issue concerning experimental bias is that the mechanical constitutive material parameters provided by the plate manufacturer, in particular the Young's modulus, may not be accurate. In Kaufmann et al. (2019) it was estimated that the resulting error on pressure amplitudes was up to 10%. The systematic error resulting from the processing approach employed in this study was investigated in Kaufmann et al. (2019). VFM pressure reconstructions were found to underestimate the local amplitudes because the virtual fields act as a low-pass spatial filter over the area of a reconstruction window. The exact error value depends on the chosen reconstruction parameters as well as on the investigated load distribution and signal-to-noise ratio. In the present study, the error associated with data processing can be estimated using the analysis presented in Sect. 3.4. It is approximately 20% for instantaneous pressure maps when using a PRW with a side length of 24 data points. Note that this estimate does not take into account the influence of random noise or the filtering techniques. The error resulting from assuming quasi-static behaviour was assessed using LDV measurements to obtain accelerations at discrete points along the specimen surface. Values of up to \(1.4~\text {ms}^{-2}\) were found for the standard deviation of the accelerations. A worst-case estimate for the resulting error in pressure was obtained by assuming an acceleration value of \(a=1.4~\text {ms}^{-2}\) over the entire specimen. The resulting dynamic pressure value was calculated using the acceleration term in Eq. (4). For pressure reconstructions from unfiltered slopes, it yields a value of \(p_{{\text {dyn}}} = 84~\text {Pa}\), which corresponds to approximately 13% of the estimated peak pressure amplitude of 630 Pa. For reconstructions from bandpass filtered slopes, significantly lower error values were identified. Using a bandpass filter range of \(0.25~f_{{\text {shed}}}\) to \(1.88~f_{{\text {shed}}}\) (200–1500 Hz) for the slope maps, worst-case estimates of \(a=0.1~\text {ms}^{-2}\) and \(p_{{\text {dyn}}}=6~\text {Pa}\) were obtained. Based on the low amplitude of noise patterns in the extracted dynamic pressure distributions, it appears that experimental errors from the deflectometry setup were filtered efficiently. The systematic processing errors as well as the assumption of quasi-static behaviour are likely to result in an underestimation of the instantaneous pressure reconstructions, which can be estimated at up to 30%.
The latter is also the most likely reason for the increasing discrepancy between pressure fluctuations identified from transducer data and from VFM reconstructions when moving into the region in which vortices first impinge on the plate.
Limitations and future work
Low-amplitude differential pressure fluctuations were extracted from VFM pressure reconstructions using two techniques. To address the large discrepancy between microphone data and VFM reconstructions, the present setup could be improved to obtain accurate acceleration information. This could potentially be achieved by simultaneously measuring deflections at a known point in the field of view using an LDV. Further, to achieve convergence and due to the low signal-to-noise ratio, a large number of snapshots was required. This is a challenge for available high-speed cameras due to limited storage and data transfer rates. Experiments based on phase-locked measurements can address this issue. This would also allow the use of cameras with higher resolutions, which can be combined with smaller grid pitches to increase the slope resolution. Slope resolution can also be improved by increasing the distance between grid and sample, though the quality and availability of suitable camera lenses is an issue. Another limitation of the approach presented here is that the specimen is required to be of optical mirror quality. Non-mirror-like, but reasonably smooth surfaces were successfully used for deflectometry measurements using an infrared camera and heated grids (Toniuc and Pierron 2019). Due to the relatively long wavelength of infrared light, sufficiently specular reflection for slope measurements was achieved using unpolished metal plates as well as perspex with approximately 1.5\(~\upmu \text {m}\) surface RMS roughness. An approach for applications of deflectometry measurements to curved surfaces is proposed in Surrel and Pierron (2019). Though the results were promising, a sophisticated calibration was required and time-resolved measurements are currently not possible. Finally, improved virtual fields and higher-order pressure reconstruction approaches could reduce the systematic processing error of the VFM. This study presents an approach for obtaining full-field dynamic pressure information from surface slope measurements. Surface slopes were measured using a highly sensitive deflectometry setup. Pressure reconstructions were obtained using the VFM. The extracted differential pressure amplitudes range down to \({\mathcal {O}}(1)~\)Pa. \(85 \times 85\) data points were obtained, corresponding to a field of view of \(4.25~\text {cm}\times 4.25~\text {cm}\). For band-pass filtered data, VFM results were found to capture the expected pressure distribution well, but to overestimate the standard deviations of the pressure amplitudes by up to 50% when compared to microphone data. DMD was used to extract relevant dynamic information. Error sources associated with experimental limitations and the processing technique were identified and discussed. Despite the low accuracy in amplitude, the achieved high data point density and the low magnitude of the extracted pressure amplitudes make the presented technique highly relevant for a large range of applications in engineering and science.
Data provision
All relevant data produced in this study are available under the DOI https://doi.org/10.5258/SOTON/D1165.
Badulescu C, Grédiac M, Mathias JD (2009) Investigation of the grid method for accurate in-plane strain measurement.
Meas Sci Technol 20(9):095102 Balzer J, Werling S (2010) Principles of shape from specular reflection. Measurement 43(10):1305–1317 Becker HA, Massaro TA (1968) Vortex evolution in a round jet. J Fluid Mech 31(3):435–448 Beltaos S (1976) Oblique impingement of circular turbulent jets. J Hydraul Res 14(1):17–36 Berry A, Robin O (2016) Identification of spatially correlated excitations on a bending plate using the virtual fields method. J Sound Vib 375:76–91 Berry A, Robin O, Pierron F (2014) Identification of dynamic loading on a bending plate using the virtual fields method. J Sound Vib 333(26):7151–7164 Beverley J, McKeon RHE (2007) Springer handbook of experimental fluid mechanics, chapter 4, vol 4. Springer, Berlin, pp 188–208 Corcos GM (1963) Resolution of pressure in turbulence. J Acoust Soc Am 35(2):192–199 Corcos GM (1964) The structure of the turbulent pressure field in boundary-layer flows. J Fluid Mech 18(3):353–378 D'Errico J (2009) Inverse (integrated) gradient. https://de.mathworks.com/matlabcentral/fileexchange/9734-inverse-integrated-gradient. Accessed on 12 June 2019 Devivier C, Pierron F, Glynne-Jones P, Hill M (2016) Time-resolved full-field imaging of ultrasonic Lamb waves using deflectometry. Exp Mech 56:345–357 Engler RH, Klein C, Trinks O (2000) Pressure sensitive paint systems for pressure distribution measurements in wind tunnels and turbomachines. Meas Sci Technol 11(7):1077–1085 Gesemann S, Huhn F, Schanz D, Schroeder A (2016) From noisy particle tracks to velocity, acceleration and pressure fields using b-splines and penalties. In: Proceedings of the 18th international symposium on applications of laser techniques to fluid mechanics, July 4–7, Lisbon, Portugal, pp 2684–2694 Giraudeau A, Pierron F, Guo B (2010) An alternative to modal analysis for material stiffness and damping identification from vibrating plates. J Sound Vib 329(10):1653–1672 Grédiac M, Sur F, Blaysat B (2016) The grid method for in-plane displacement and strain measurement: a review and analysis. Strain 52(3):205–243 Gregory JW, Sullivan JP, Wanis SS, Komerath NM (2006) Pressure-sensitive paint as a distributed optical microphone array. J Acoust Soc Am 119(1):251–261 Hemati M, Williams M, Rowley C (2014) Dynamic mode decomposition for large and streaming datasets. Phys Fluids 26(11):111701 Huhn F, Schanz D, Manovski P, Gesemann S, Schröder A (2018) Time-resolved large-scale volumetric pressure fields of an impinging jet from dense lagrangian particle tracking. Exp Fluids 59(5):81 Jeon Y, Earl T, Braud P, Chatellier L, David L (2016) 3D pressure field around an inclined airfoil by tomographic TR-PIV and its comparison with direct pressure measurements. In: Proceedings of the 18th international symposium on the application of laser and imaging techniques to fluid mechanics, July 4–7, Lisbon, Portugal, pp 1060–1069 Jiao L, Chen Y, Wen X, Peng D, Liu Y, Gregory JW (2019) Resolving vortex-induced pressure fluctuations on a cylinder in rotor wake using fast-responding pressure-sensitive paint. Phys Fluids 31(5):055106 Kalifa RB, Habli S, Saïd NM, Bournot H, Palec GL (2016) The effect of coflows on a turbulent jet impacting on a plate. Appl Math Model 40(11):5942–5963 Kaufmann R, Pierron F, Ganapathisubramani B (2019) Full-field surface pressure reconstruction using the Virtual Fields Method.
Exp Mech 59(8):1203–1221 Lecoq D, Pézerat C, Thomas JH, Bi W (2014) Extraction of the acoustic component of a turbulent flow exciting a plate by inverting the vibration problem. J Sound Vib 333(12):2505–2519 Liu Q, Sleiti A, Kapat J (2008) Application of pressure and temperature sensitive paints for study of heat transfer to a circular impinging air jet. Int J Therm Sci 47(6):749–757 O'Donoughue P, Robin O, Berry A (2017) Time-resolved identification of mechanical loadings on plates using the virtual fields method and deflectometry measurements. Strain 54(3):e12258 O'Donoughue P, Robin O, Berry A (2019) Inference of random excitations from contactless vibration measurements on a panel or membrane using the virtual fields method. In: Ciappi E, De Rosa S, Franco F, Guyader JL, Hambric SA, Leung RCK, Hanford AD (eds) Flinovia-flow induced noise and vibration issues and aspects-II. Springer International Publishing, Cham, pp 357–372 Pezerat C, Guyader JL (2000) Force analysis technique: reconstruction of force distribution on plates. Acta Acust United Acust 86(2):322–332 Pieris S, Zhang X, Yarusevych S, Peterson SD (2019) Vortex dynamics in a normally impinging planar jet. Exp Fluids 60(5):84 Pierron F, Grédiac M (2012) The virtual fields method. Extracting constitutive mechanical parameters from full-field deformation measurements. Springer, New York Ritter R (1982) Reflection moire methods for plate bending studies. Opt Eng 21:21–9 Robin O, Berry A (2018) Estimating the sound transmission loss of a single partition using vibration measurements. Appl Acoust 141:301–306 Robin O, Moreau S, Padois T, Berry A (2012) Measurement of the wavenumber-frequency spectrum of wall pressure fluctuations: spiral-shaped rotative arrays with pinhole-mounted quarter inch microphones. In: 19th AIAA/CEAS Aeroacoustics Conference, May 27–29, Berlin, Germany, pp 881–898 Schmid PJ (2010) Dynamic mode decomposition of numerical and experimental data. J Fluid Mech 656:5–28 Schneiders JFG, Caridi GCA, Sciacchitano A, Scarano F (2016a) Large-scale volumetric pressure from tomographic PTV with HFSB tracers. Exp Fluids 57:1–8 Schneiders JFG, Caridi GCA, Sciacchitano A, Scarano F (2016b) Large-scale volumetric pressure from tomographic PTV with HFSB tracers. Exp Fluids 57(11):164 Surrel Y (2000) Photomechanics, chap fringe analysis. Springer, Berlin, pp 55–102 Surrel Y, Pierron F (2019) Deflectometry on curved surfaces. In: Lamberti L, Lin MT, Furlong C, Sciammarella C, Reu PL, Sutton MA (eds) Advancement of optical methods and digital image correlation in experimental mechanics, vol 3. Springer International Publishing, Cham, pp 217–221 Surrel Y, Fournier N, Grédiac M, Paris PA (1999) Phase-stepped deflectometry applied to shape measurement of bent plates. Exp Mech 39(1):66–70 Timoshenko S, Woinowsky-Krieger S (1959) Theory of plates and shells. Engineering societies monographs. McGraw-Hill, New York Toniuc H, Pierron F (2019) Infrared deflectometry for slope deformation measurements. Exp Mech 59(8):1187–1202 Van Blitterswyk J, Rocha J (2017) An experimental study of the wall-pressure fluctuations beneath low reynolds number turbulent boundary layers. J Acoust Soc Am 141(2):1257–1268 van Gent PL, Michaelis D, van Oudheusden BW, Weiss PÉ, de Kat R, Laskari A, Jeon YJ, David L, Schanz D, Huhn F, Gesemann S, Novara M, McPhaden C, Neeteson NJ, Rival DE, Schneiders JFG, Schrijer FFJ (2017) Comparative assessment of pressure field reconstructions from particle image velocimetry measurements and lagrangian particle tracking.
Exp Fluids 58(4):33 Walker JDA, Smith CR, Cerra AW, Doligalski TL (1987) The impact of a vortex ring on a wall. J Fluid Mech 181:99–140 Zuckerman N, Lior N (2006) Jet impingement heat transfer: physics, correlations, and numerical modeling. Adv Heat Transf 39(C):565–631 This work was funded by the Engineering and Physical Sciences Research Council (EPSRC). F. Pierron acknowledges support from the Wolfson Foundation through a Royal Society Wolfson Research Merit Award (2012–2017). University of Southampton, Highfield, Southampton, SO17 1BJ, UK R. Kaufmann, B. Ganapathisubramani & F. Pierron Correspondence to R. Kaufmann. A video of a DMD reconstruction showing the well-defined spatial structure of an impinging vortex can be found in the supplementary material of this publication. It shows a reconstruction of the \(0.97~f_{\text {shed}}\) (\(776~\text {Hz}\)) DMD mode identified in section 4.3. This mode corresponds to the highest identified amplitude coefficient, which is reasonably close to the theoretically expected maximum at \(f_{\text {shed}}\) (\(800~\text {Hz}\)) in the amplitude spectrum. The video shows 100 frames at a rate of 33 fps. The acquisition frequency was \(4~\text {kHz}\). Damping was set to zero for this video. (mp4 1359 KB) Kaufmann, R., Ganapathisubramani, B. & Pierron, F. Reconstruction of surface-pressure fluctuations using deflectometry and the virtual fields method. Exp Fluids 61, 35 (2020) doi:10.1007/s00348-019-2850-y Revised: 11 November 2019
Instability of pressure-driven gas–liquid two-layer channel flows in two and three dimensions Lennon Ó Náraigh, Peter D. M. Spelt Journal: Journal of Fluid Mechanics / Volume 849 / 25 August 2018 Published online by Cambridge University Press: 15 June 2018, pp. 1-34 Print publication: 25 August 2018 We study unstable waves in gas–liquid two-layer channel flows driven by a pressure gradient, under stable stratification, not assumed to be set in motion impulsively. The basis of the study is direct numerical simulation (DNS) of the two-phase Navier–Stokes equations in two and three dimensions for moderately large Reynolds numbers, accompanied by a theoretical description of the dynamics in the linear regime (Orr–Sommerfeld–Squire equations). The results are compared and contrasted across a range of density ratios $r=\rho_{liquid}/\rho_{gas}$. Linear theory indicates that the growth rate of small-amplitude interfacial disturbances generally decreases with increasing $r$; at the same time, the cutoff wavenumbers in both streamwise and spanwise directions increase, leading to an ever-increasing range of unstable wavenumbers, albeit with diminished growth rates. The analysis also demonstrates that the most dangerous mode is two-dimensional in all cases considered. The results of a comparison between the DNS and linear theory demonstrate a consistency between the two approaches: as such, the route to a three-dimensional flow pattern is direct in these cases, i.e. through the strong influence of the linear instability. We also characterize the nonlinear behaviour of the system, and we establish that the disturbance vorticity field in two-dimensional systems is consistent with a mechanism proposed previously by Hinch (J. Fluid Mech., vol. 144, 1984, p. 463) for weakly inertial flows. A flow-pattern map constructed from two-dimensional numerical simulations is used to describe the various flow regimes observed as a function of density ratio, Reynolds number and Weber number. Corresponding simulations in three dimensions confirm that the flow-pattern map can be used to infer the fate of the interface there also, and show strong three-dimensionality in cases that exhibit violent behaviour in two dimensions, or otherwise the development of nearly two-dimensional behaviour, possibly with the formation of a capillary ridge. The three-dimensional vorticity field is also analysed, thereby demonstrating how streamwise vorticity arises from the growth of otherwise two-dimensional modes. The effective diffusivity of ordered and freely evolving bubbly suspensions Aurore Loisy, Aurore Naso, Peter D. M. Spelt Journal: Journal of Fluid Mechanics / Volume 840 / 10 April 2018 Print publication: 10 April 2018 We investigate the dispersion of a passive scalar such as the concentration of a chemical species, or temperature, in homogeneous bubbly suspensions, by determining an effective diffusivity tensor. Defining the longitudinal and transverse components of this tensor with respect to the direction of averaged bubble rise velocity in a zero mixture velocity frame of reference, we focus on the convective contribution thereof, this being expected to be dominant in commonly encountered bubbly flows. We first extend the theory of Koch et al. (J. Fluid Mech., vol. 200, 1989, pp.
173–188) (which is for dispersion in fixed beds of solid particles under Stokes flow) to account for weak inertial effects in the case of ordered suspensions. In the limits of low and of high Péclet number, including the inertial effect of the flow does not affect the scaling of the effective diffusivity with respect to the Péclet number. These results are confirmed by direct numerical simulations performed in different flow regimes, for spherical or very deformed bubbles and from vanishingly small to moderate values of the Reynolds number. Scalar transport in arrays of freely rising bubbles is considered by us subsequently, using numerical simulations. In this case, the dispersion is found to be convectively enhanced at low Péclet number, like in ordered arrays. At high Péclet number, the Taylor dispersion scaling obtained for ordered configurations is replaced by one characterizing a purely mechanical dispersion, as in random media, even if the level of disorder is very low. Buoyancy-driven bubbly flows: ordered and free rise at small and intermediate volume fraction Published online by Cambridge University Press: 03 March 2017, pp. 94-141 Various expressions have been proposed previously for the rise velocity of gas bubbles in homogeneous steady bubbly flows, generally a monotonically decreasing function of the bubble volume fraction. For suspensions of freely moving bubbles, some of these are of the form expected for ordered arrays of bubbles, and vice versa, as they do not reduce to the behaviour expected theoretically in the dilute limit. The microstructure of weakly inhomogeneous bubbly flows not being known generally, the effect of microstructure is an important consideration. We revisit this problem here for bubbly flows at small to moderate Reynolds number values for deformable bubbles, using direct numerical simulation and analysis. For ordered suspensions, the rise velocity is demonstrated not to be monotonically decreasing with volume fraction due to cooperative wake interactions. The fore-and-aft asymmetry of an isolated ellipsoidal bubble is reversed upon increasing the volume fraction, and the bubble aspect ratio approaches unity. Recent work on rising bubble pairs is used to explain most of these results; the present work therefore forms a platform of extending the former to suspensions of many bubbles. We adopt this new strategy also to support the existence of the oblique rise of ordered suspensions, the possibility of which is also demonstrated analytically. Finally, we demonstrate that most of the trends observed in ordered systems also appear in freely evolving suspensions. These similarities are supported by prior experimental measurements and attributed to the fact that free bubbles keep the same neighbours for extended periods of time. Non-isothermal droplet spreading/dewetting and its reversal Yi Sui, Peter D. M. Spelt Axisymmetric non-isothermal spreading/dewetting of droplets on a substrate is studied, wherein the surface tension is a function of temperature, resulting in Marangoni stresses. A lubrication theory is first extended to determine the drop shape for spreading/dewetting limited by slip. It is demonstrated that an apparent angle inferred from a fitted spherical cap shape does not relate to the contact-line speed as it would under isothermal conditions. Also, a power law for the thermocapillary spreading rate versus time is derived. 
Results obtained with direct numerical simulations (DNS), using a slip length down to $O(10^{-4})$ times the drop diameter, confirm predictions from lubrication theory. The DNS results further show that the behaviour predicted by the lubrication theory – that a cold wall promotes spreading, and a hot wall promotes dewetting – is reversed at sufficiently large contact angles and/or viscosity of the surrounding fluid. This behaviour is summarized in a phase diagram, and a simple model that supports this finding is presented. Although the key results are found to be robust when accounting for heat conduction in the substrate, a critical thickness of the substrate is identified above which wall conduction significantly modifies wetting behaviour. Linear instability, nonlinear instability and ligament dynamics in three-dimensional laminar two-layer liquid–liquid flows Lennon Ó Náraigh, Prashant Valluri, David M. Scott, Iain Bethune, Peter D. M. Spelt Journal: Journal of Fluid Mechanics / Volume 750 / 10 July 2014 Print publication: 10 July 2014 We consider the linear and nonlinear stability of two-phase density-matched but viscosity-contrasted fluids subject to laminar Poiseuille flow in a channel, paying particular attention to the formation of three-dimensional waves. A combination of Orr–Sommerfeld–Squire analysis (both modal and non-modal) with direct numerical simulation of the three-dimensional two-phase Navier–Stokes equations is used. For the parameter regimes under consideration, under linear theory, the most unstable waves are two-dimensional. Nevertheless, we demonstrate several mechanisms whereby three-dimensional waves enter the system, and dominate at late time. There exists a direct route, whereby three-dimensional waves are amplified by the standard linear mechanism; for certain parameter classes, such waves grow at a rate less than but comparable to that of the most dangerous two-dimensional mode. Additionally, there is a weakly nonlinear route, whereby a purely spanwise wave grows according to transient linear theory and subsequently couples to a streamwise mode in weakly nonlinear fashion. Consideration is also given to the ultimate state of these waves: persistent three-dimensional nonlinear waves are stretched and distorted by the base flow, thereby producing regimes of ligaments, 'sheets' or 'interfacial turbulence'. Depending on the parameter regime, these regimes are observed either in isolation, or acting together. Validation and modification of asymptotic analysis of slow and rapid droplet spreading by numerical simulation Journal: Journal of Fluid Mechanics / Volume 715 / 25 January 2013 Print publication: 25 January 2013 Using a slip-length-based level-set approach with adaptive mesh refinement, we have simulated axisymmetric droplet spreading for a dimensionless slip length down to $O(10^{-4})$. The main purpose is to validate, and where necessary improve, the asymptotic analysis of Cox (J. Fluid Mech., vol. 357, 1998, pp. 249–278) for rapid droplet spreading/dewetting, in terms of the detailed interface shape in various regions close to the moving contact line and the relation between the apparent angle and the capillary number based on the instantaneous contact-line speed, $\mathit{Ca}$. Before presenting results for inertial spreading, simulation results are compared in detail with the theory of Hocking & Rivers (J. Fluid Mech., vol. 121, 1982, pp.
425–442) for slow spreading, showing that these agree very well (and in detail) for such small slip-length values, although limitations in the theoretically predicted interface shape are identified; a simple extension of the theory to viscous exterior fluids is also proposed and shown to yield similar excellent agreement. For rapid droplet spreading, it is found that, in principle, the theory of Cox can predict accurately the interface shapes in the intermediate viscous sublayer, although the inviscid sublayer can only be well presented when capillary-type waves are outside the contact-line region. However, $O(1)$ parameters taken to be unity by Cox must be specified and terms be corrected to ${\mathit{Ca}}^{+1}$ in order to achieve good agreement between the theory and the simulation, both of which are undertaken here. We also find that the apparent angle from numerical simulation, obtained by extrapolating the interface shape from the macro region to the contact line, agrees reasonably well with the modified theory of Cox. A simplified version of the inertial theory is proposed in the limit of negligible viscosity of the external fluid. Building on these results, we investigate the flow structure near the contact line, the shear stress and pressure along the wall, and the use of the analysis for droplet impact and rapid dewetting. Finally, we compare the modified theory of Cox with a recent experiment for rapid droplet spreading, the results of which suggest a spreading-velocity-dependent dynamic contact angle in the experiments. The paper is closed with a discussion of the outlook regarding the potential of using the present results in large-scale simulations wherein the contact-line region is not resolved down to the slip length, especially for inertial spreading. Absolute linear instability in laminar and turbulent gas–liquid two-layer channel flow Lennon Ó Náraigh, Peter D. M. Spelt, Stephen J. Shaw Published online by Cambridge University Press: 02 January 2013, pp. 58-94 We study two-phase stratified flow where the bottom layer is a thin laminar liquid and the upper layer is a fully developed gas flow. The gas flow can be laminar or turbulent. To determine the boundary between convective and absolute instability, we use Orr–Sommerfeld stability theory, and a combination of linear modal analysis and ray analysis. For turbulent gas flow, and for the density ratio $r=1000$, we find large regions of parameter space that produce absolute instability. These parameter regimes involve viscosity ratios of direct relevance to oil and gas flows. If, instead, the gas layer is laminar, absolute instability persists for the density ratio $r=1000$, although the convective/absolute stability boundary occurs at a viscosity ratio that is an order of magnitude smaller than in the turbulent case. Two further unstable temporal modes exist in both the laminar and the turbulent cases, one of which can exclude absolute instability. We compare our results with an experimentally determined flow-regime map, and discuss the potential application of the present method to nonlinear analyses. Turbulent flow over a liquid layer revisited: multi-equation turbulence modelling Lennon Ó Náraigh, Peter D. M. Spelt, Tamer A. Zaki Journal: Journal of Fluid Mechanics / Volume 683 / 25 September 2011 Published online by Cambridge University Press: 25 August 2011, pp.
357-394 Print publication: 25 September 2011 The mechanisms by which turbulent shear flow causes waves on a gas–liquid interface are studied analytically, with a critical assessment of the possible role played by wave-induced Reynolds stresses (WIRSs). First, turbulent flow past a corrugated surface of a small slope is analysed; the surface can either be stationary or support a travelling wave. This problem serves as a useful model because direct numerical simulation (DNS) and experimental data are available to test the analysis, and because this picture is itself a model for the fully coupled two-layer problem. It is demonstrated that the WIRSs play no significant role in shear-driven turbulent flow past a moving wavy wall, and that they alter the structure of the flow only in a quantitative fashion in the pressure-driven case. In the shear-driven case in particular, excellent agreement is obtained with previously reported DNS results. Two closure assumptions are made in our model: the first concerns the wave-induced dissipation of turbulent kinetic energy; the second concerns the importance of rapid distortion. The results of our calculations are sensitive to the assumptions used to close the wave-induced dissipation but are insensitive to the details of the rapid-distortion modelling. Finally, the fully coupled two-layer problem is addressed in the setting of waves of small amplitude, where it is demonstrated that the WIRSs do not play a significant role in the growth of interfacial waves, even at relatively high Reynolds numbers. Again, good agreement is obtained between data from experiments and DNS. Sliding, pinch-off and detachment of a droplet on a wall in shear flow HANG DING, MOHAMMAD N. H. GILANI, PETER D. M. SPELT Journal: Journal of Fluid Mechanics / Volume 644 / 10 February 2010 Print publication: 10 February 2010 We investigate here what happens beyond the onset of motion of a droplet on a wall by the action of an imposed shear flow, accounting for inertial effects and contact-angle hysteresis. A diffuse-interface method is used for this purpose, which alleviates the shear stress singularity at a moving contact line, resulting in an effective slip length. Various flow regimes are investigated, including steadily moving drops, and partial or entire droplet entrainment. In the regime of quasi-steadily moving drops, the drop speed is found to be linear in the imposed shear rate, but to exhibit an apparent discontinuity at the onset of motion. The results also include the relation between a local maximum angle between the interface and the wall and the instantaneous value of the contact-line speed. The critical conditions for the onset of entrainment are determined for pinned as well as for moving drops. The corresponding critical capillary numbers are found to be in a rather narrow range, even for quite substantial values of a Reynolds number. The approach to breakup is then investigated in detail, including the growth of a ligament on a drop, and the reduction of the radius of a pinching neck. A model based on an energy argument is proposed to explain the results for the rate of elongation of ligaments. The paper concludes with an investigation of detachment of a hydrophobic droplet from the solid wall. Onset of motion of a three-dimensional droplet on a wall in shear flow at moderate Reynolds numbers HANG DING, PETER D. M. SPELT Journal: Journal of Fluid Mechanics / Volume 599 / 25 March 2008 Published online by Cambridge University Press: 06 March 2008, pp. 
341-362 We investigate the critical conditions for the onset of motion of a three-dimensional droplet on a wall in shear flows at moderate Reynolds number. A diffuse-interface method is used for this purpose, which also circumvents the stress singularity at the moving contact line, and the method allows for a density and viscosity contrast between the fluids. Contact-angle hysteresis is represented by the prescription of a receding contact angle θR and an advancing contact angle value θA. Critical conditions are determined by tracking the motion and deformation of a droplet (initially a spherical cap with a uniform contact angle θ0). At sufficiently low values of a Weber number, We (based on the applied shear rate and the drop volume), the drop deforms and translates for some time, but subsequently reaches a stationary position and attains a steady-state shape. At sufficiently large values of We no such steady state is found. We present results for the critical value of We as a function of Reynolds number Re for cases with the initial value of the contact angle θ0=θR as well as for θ0=θA. A scaling argument based on a force balance on the drop is shown to represent the results very accurately. Results are also presented for the static shape, transient motion and flow structure at criticality. It is shown that at low Re our results agree (with some qualifications) with those of Dimitrakopoulos & Higdon (1998, J. Fluid Mech. vol. 377, p. 189). Overall, the results indicate that the critical value of We is affected significantly by inertial effects at moderate Reynolds numbers, whereas the steady shape of droplets still shows some resemblance to that obtained previously for creeping flow conditions. The paper concludes with an investigation into the complex structure of a steady wake behind the droplet and the occurrence of a stagnation point at the upstream side of the droplet. Inertial effects in droplet spreading: a comparison between diffuse-interface and level-set simulations Axisymmetric droplet spreading is investigated numerically at relatively large rates of spreading, such that inertial effects become important. Results from two numerical methods that use different means to alleviate the stress singularity at moving contact lines (a diffuse interface, and a slip-length-based level-set method) are shown to agree well. An initial inertial regime is observed to yield to a regime associated with Tanner's law at later times. The spreading rate oscillates during the changeover between these regimes. This becomes more significant for a fixed (effective) slip length when decreasing the value of an Ohnesorge number. The initial, inertia-dominated regime is characterized by a rapidly extending region affected by the spreading, giving the appearance of a capillary wave travelling from the contact line. The oscillatory behaviour is associated with the rapid collapse that follows the point at which this region extends to the entire droplet. Results are presented for the apparent contact angle as a function of dimensionless spreading rate for various values of Ohnesorge number, slip length and initial conditions. The results indicate that there is no such universal relation when inertial effects are important. Shear flow past two-dimensional droplets pinned or moving on an adhering channel wall at moderate Reynolds numbers: a numerical study PETER D. M. SPELT Numerical simulations are presented of shear flow past two-dimensional droplets adhering to a wall, at moderate Reynolds numbers. 
The results were obtained using a level-set method to track the interface, with measures to eliminate any errors in the conservation of mass of droplets. First, the case of droplets whose contact lines are pinned is considered. Data are presented for the critical value of the dimensionless shear rate (Weber number, $\hbox{\it We}$), beyond which no steady state is found, as a function of Reynolds number, $\hbox{\it Re}$. $\hbox{\it We}$ and $\hbox{\it Re}$ are based on the initial height of the droplet and shear rate; the range of Reynolds numbers simulated is $\hbox{\it Re} \leq 25$. It is shown that, as $\hbox{\it Re}$ is increased, the critical value $\hbox{\it We}_c$ changes from $\hbox{\it We}_c\propto \hbox{\it Re}$ to $\hbox{\it We}_c\approx$ const., and that the deformation of droplets at $\hbox{\it We}$ just above $\hbox{\it We}_c$ changes fundamentally from a gradual to a sudden dislodgement. In the second part of the paper, drops are considered whose contact lines are allowed to move. The contact-line singularity is removed by using a Navier-slip boundary condition. It is shown that macroscale contact angles can be defined that are primarily functions of the capillary number based on the contact-line speed, instead of the value of $\hbox{\it We}$ of the shear flow. It is shown that a Cox–Voinov-type expression can be used to describe the motion of the downstream contact line. A qualitatively different relation is tested for the motion of the upstream contact line. In a third part of this paper, results are presented for droplets moving on a wall with position-dependent sliplength or contact-angle hysteresis window, in an effort to stabilize or destabilize the drop. Finite-Weber-number motion of bubbles through a nearly inviscid liquid VOLODYMYR I. KUSHCH, ASHOK S. SANGANI, PETER D. M. SPELT, DONALD L. KOCH Journal: Journal of Fluid Mechanics / Volume 460 / 10 June 2002 A method is described for computing the motion of bubbles through a liquid under conditions of large Reynolds and finite Weber numbers. Ellipsoidal harmonics are used to approximate the shapes of the bubbles and the flow induced by the bubbles, and a method of summing flows induced by groups of bubbles, using a fast multipole expansion technique is employed so that the computational cost increases only linearly with the number of bubbles. Several problems involving one, two and many bubbles are examined using the method. In particular, it is shown that two bubbles moving towards each other in an impurity-free, inviscid liquid touch each other in a finite time. Conditions for the bubbles to bounce in the presence of non-hydrodynamic forces and the time for bounce when these conditions are satisfied are determined. The added mass and viscous drag coefficients and aspect ratio of bubbles are determined as a function of bubble volume fraction and Weber number. Attenuation of sound in concentrated suspensions: theory and experiments PETER D. M. SPELT, MICHAEL A. NORATO, ASHOK S. SANGANI, MARGARET S. GREENWOOD, LAWRENCE L. TAVLARIDES Published online by Cambridge University Press: 22 June 2001, pp. 51-86 Ensemble-averaged equations are derived for small-amplitude acoustic wave propagation through non-dilute suspensions. The equations are closed by introducing effective properties of the suspension such as the compressibility, density, viscoelasticity, heat capacity, and conductivity. 
These effective properties are estimated as a function of frequency, particle volume fraction, and physical properties of the individual phases using a self-consistent, effective-medium approximation. The theory is shown to be in excellent agreement with various rigorous analytical results accounting for multiparticle interactions. The theory is also shown to agree well with the experimental data on concentrated suspensions of small polystyrene particles in water obtained by Allegra & Hawley and for glass particles in water obtained in the present study.
Business Economics Perfect competition A monopolist faces the following demand: P = 1,418 - 15Q The monopolist's cost function is: C... A monopolist faces the following demand: {eq}P = 1,418 - 15Q {/eq} The monopolist's cost function is: {eq}C = 10Q2 + 667Q + 326 {/eq} Find the equilibrium price, P, if this market was perfectly competitive. Round your answer to one decimal. Perfectly Competitive Market: A perfectly competitive market is a market that has many buyers and many sellers in the market. The sellers sell goods that are homogeneous and sell at the same market price. Each firm in the market faces a perfectly elastic demand curve, which means that firms in a competitive market are price takers. Answer and Explanation: 1 Firms in a perfectly competitive market maximize their profits at the point where the market price is equal to their marginal cost of production. That is: {eq}P = MC {/eq} The firm has an inverse demand curve given by: And the total cost curve given as: The marginal cost function for this firm is: {eq}MC = \frac{\Delta C}{\Delta Q}= 20Q + 667 {/eq} Equating the marginal cost to the inverse demand curve and solving for Q: {eq}1,418 - 15Q= 20Q + 667 {/eq} {eq}35Q= 751 {/eq} Solving for Q: {eq}Q^*= \frac{751}{35} = 21.5 {/eq} units. To get the price charged by the firm, we will substitute the profit maximizing level of output into the inverse demand curve. {eq}P^* = 1,418 - 15(21.5) = $1,095.5 {/eq} Perfectly Competitive Market: Definition, Characteristics & Examples Chapter 3 / Lesson 63 Learn the definition of perfect competition and understand how a perfectly competitive market works. Study the characteristics of a perfectly competitive market with examples. A monopolist faces the following demand: P = 1,890 - 16Q The monopolist's cost function is: C = 15Q^2 + 581Q + 505 Find the equilibrium price, P, if this market was perfectly competitive. Round y A monopolist has the following cost function. C = 4.9 Q^2 + 144Q + 1,531 Market demand is given by: P = 1,770 - 47Q Find the market price, P, if this market was perfectly competitive. Round your ans A monopolist faces the following demand curve: P= 12- .3Q with marginal costs of $3. What is the monopolistic price? a. P=$5.50 b. P=$6.50 c. P=$7.50 d. P=$8.50 e. P=$9.50 A monopolist has the following cost and inverse demand functions: C = 400 - 20Q + Q^2 P = 100 - 4Q What would have been the competitive price in this market? If this competitive price is imposed as a price cap on the non-discriminating monopolist, w A monopolist faces the following demand: Q = 592 - 0.4P The monopolist's cost function is: C = 0.69Q^3 - 6Q^2 + 105Q + 1,716 Find the price that maximizes the monopolist's profit. Round your answe Suppose that there is a monopolist that faces the following linear demand in the market: Q_D = 100 - (1/4)P and has the following cost function: C(Q) = 16Q^2 + 10 What quantity will the monopolist A monopolist faces the following demand curve: P = 140 - 0.3Q , its total cost is given by: TC = 300 + 0.2Q^2 and its marginal cost is given by: MC = 0.4Q. (a) If it is a single-price monopolist, what is its profit-maximizing price and quantity? Show A monopolist has the following cost and inverse demand functions: C = 400 - 20Q + Q^2 P = 100 - 4Q Determine output, profit and consumer surplus in the case where the monopolist can perfectly price discriminate. Suppose a monopolist faces the following demand curve: P = 200 - 6Q The marginal cost of production is constant and equal to $20, and there are no fixed costs. 
(a) How much consumer surplus would there be if this market was perfectly competitive? (b) What A monopolist faces market demand given by Q_D = 65 - P and cost of production given by C = 0.5Q^2 + 5Q + 300. A. Calculate the monopolist's profit-maximizing output and price. B. Graph the monopolist's demand, marginal revenue, and marginal cost curves. S Suppose a monopolist faces the following demand curve: P = 440 - 7Q. The long-run marginal cost of production is constant and equal to $20, and there are no fixed costs. a) What is the monopolist's profit-maximizing level of output? b) What price will A monopolist faces a demand curve P = 50 - 5Q where P is the product price and Q is the output. The monopolists cost function is C(Q) = 10Q. What are the monopolist's profit maximizing price, output, and profit? What are the consumer surplus and dead-weig Suppose a monopolist faces the following demand curve: P = 100 - 3Q. Marginal cost of production is constant and equal to $10, and there are no fixed costs. What price will the profit maximizing monopolist charge? a. $100 b. $55 c. $45 d. $15 e. $10 f. No Suppose a monopolist faces the following demand curve: P = 88 - 3Q. The long-run marginal cost of production is constant and equal to $4, and there are no fixed costs. A) What is the monopolist's prof A monopolist faces a demand curve D(p)=100-2p and has a cost function c(y)=2y. What is the monopolist's optimal level of output and price? A monopolist faces the following demand curve: Price Quantity demanded $10 5 $9 10 $8 16 $7 23 $6 31 $5 49 $4 52 $3 60 The monopolist has total fixed costs of $40 and a constant marginal cost of $5. At the profit-maximizing level of output, the monopolist A monopolist has the following cost and inverse demand functions: C = 400 - 20Q + Q^2 P = 100 - 4Q Compute the DWL from monopoly power and in the case where the monopolist can perfectly price discriminate. Suppose a monopolist faces the following demand curve: P = 100 - 3Q. Marginal cost of production is constant and equal to $10, and there are no fixed costs. What is the value of consumer surplus? a. $300 b. $100 c. $412.50 d. $337.50 e. $750 f. None of t A monopolist faces a market demand curve given by Demand: Q = 70 - P. The monopolist faces the following cost structure: C = 0.25 Q^2 - 5 Q + 200. a. What output level will the monopolist choose in order to maximize profits? b. What is the price at this o A monopolist faces the following demand: P = 2265 - 17Q. The monopolist's cost function is: C = 2Q^3 - 13 Q^2 + 134Q + 1509. How much profit does the monopolist earn when it produces the quantity, Q, A monopolist faces the following demand: Q = 207 - 0.3P The monopolist's cost function is: C = 0.4 Q 3 ? 11 Q 2 + 238 Q + 1 , 949 How much profit does this monopolist earn when it Consider a monopolist that faces the following demand curve: P=150-Q. The total cost curve for this monopolist is given by the following: TC=100+10Q+Q2. Which of the following is true? a. The monopolist will set price equal to 115 and sell 35 units. b. Th Question One: A monopolist faces the following markets with the following demands functions: X1=100- P1 and X2=100- 2P2 Assume that the monopolist marginal cost is constant at KES 20. a) Find the pr Suppose a monopolist faces the following demand curve: P = 100 - 3Q. Marginal cost of production is constant and equal to $10, and there are no fixed costs. What is the monopolist's profit maximizing level of output? a. 10 b. 15 c. 16 d. 30 e. 33 f. 
None A monopolist faces demand P = 10 - Q. It has costs C(Q) = 2Q. It can perfectly price discriminate. a. What is its marginal revenue curve? Graph the demand curve. b. Derive the profit maximizing outpu A one-price monopolist faces a demand of P = 107 - 0.015Q and has a total cost function C(Q) = 5000ln(Q) + 30Q. Calculate the profit of the monopolist. 1. Your firm's cost function is: C = 2.5Q3 - 29Q2 + 545Q + 3,001 Your firm faces the following demand: P = 3,682 - 32Q Suppose your firm is a monopoly. Find the price that maximizes the monopolist' Suppose a monopolist faces the following demand curve: P = 314 - 7Q. The long-run marginal cost of production is constant and equal to $20. a. What is the monopolist's profit-maximizing level of output? b. What price will the profit-maximizing monopolist A monopolist sells a product with the following total cost function: TC = 1200 + 0.5Q^2 And the market demand curve is given by: P = 300 - Q (a) Find the profit-maximizing output and price for this monopolist. (b) Calculate the price elasticity of demand The monopolist faces a demand curve given by D(p) = 100 - 2p. Its cost function is c(y) = 2y. a. What is its optimal level of output and price? b. If the demand curve facing the monopolist has a constant elasticity of 2, then what will be the monopolist's Suppose that a monopolist faces market demand of Q = 200 - 0.5P and a cost function of C= Q^2 + 40Q + 50. What is the profit-maximizing price and quantity for the monopolist? A monopolist faces a demand function approximated by Qd = 125 - p. The monopolist total cost is C(Q) = 10Q. Which of the following statements is true? A. The optimal price for this monopolist is p* = $130. B. The optimal quantity is for this monopolist Consider a monopolist that faces the following demand curve: P = 150 - Q. The total cost curve for this monopolist is given by the following: TC = 100 + 10Q + Q^2. Which of the following is true? A) The monopolist will set the price equal to 115 and sell A monopolist faces the following demand and marginal cost data over the relevant range of output. {Price}&{Quantity Demanded}&{Marginal Cost} $7250&2&$450 $5000&3&$500 $3875&4&$575 $3200&5&$675 $2750& A monopolist faces the demand curve P = 100 - 2Q, where P is the price and Q is the quantity demanded. If the monopolist has a total cost of C = 50 + 20Q, determine its profit-maximizing price and output. Suppose the monopolist faces the demand P=100-3Q, which means MR(Q)=100-6Q, and marginal cost is given by MC(Q)=4Q. What price will the monopolist charge the consumers in equilibrium? a. $10 b. $20 c. $30 d. $70 Suppose a monopolist faces a demand equation given by P = 20 - Q, and MC = AVC = ATC = $6. What is the profit maximizing price for the monopolist? A monopolist's cost function is C(y) = y(y/50 - 2) + y. It faces a demand function y = 100 - 25p. Solve for monopolist's output, price, profit and consumer surplus when the price is unregulated. Suppose a monopolist faces the following demand curve: P = 180 - 4Q. The marginal cost of production is constant and equal to $20, and there are no fixed costs. What price will the profit-maximizing monopolist charge? A. P = $100 B. P = $20 C. P = $60 D. Firm D faces the following demand: 40-P from male consumers and 30-3P from female consumers. The firm's cost function is C(Q)=25+5Q. a) Determine the profit of the monopolist if it can price discriminate between both demand functions. Monopoly pricing. b) A monopolist faces a market demand curve given by Q = 53 - P. 
Its cost function is given by C = 5Q + 50, i.e. its MC = $5. a. Calculate the profit-maximizing price and quantity for this monopolist. Also, calculate its optimal profit. b. Suppose a second Question 8 A monopolist faces the following demand: P = 2,141 - 13Q The monopolist's cost function is: C = 1Q3 - 12Q2 + 180Q + 1,308 How much profit does this monopolist earn when it produces the Suppose that, each period, a monopolist faces market demand P = 100 - 10Q and has constant marginal cost MC= 20 (with no fixed costs). A) If the monopolist can perfectly discriminate, how much d Tinysoft is a monopolist that faces demand curve D(p) = 7p^-3, has constant marginal costs MC = 30 and no fixed cost. What price should the monopolist charge? (a) p = 45 (b) p = 22.5 (c) p = 0.77 The monopolist faces a demand curve given by D(p)=100-2p. Its cost function is c(y)=2y. What is its optimal level of output and price? If the demand curve facing the monopolist has a constant elastici Suppose that a monopolist faces a demand curve given by P = 100 - 2Q and cost function given by C = 500 + 10Q + 0.5Q^2. 9) What is the monopoly's profit-maximizing output level? A) 15 B) 18 C) 20 D) 3 A monopolist faces a demand curve of Q = 400 - 2P and MR = 200 - Q. The monopolist has a constant MC and ATC of $30. a. Find the monopolist's profit-maximizing output and price. b. Calculate the monopolist's profit. c. What is the Lerner Index for this in A monopolist has the following cost function: C(q) = 800 + 8q + 6q^2. It faces the following demand from consumers: P = 200 - 2Q. There is another firm, with the same cost function, that may consider entering the industry. If it does, the equilibrium pric Suppose that a monopolist faces a demand curve: Q^ D = 3375P^{ -3} They have constant marginal costs: MC = 10. a) What is the price elasticity of demand? b) What is the monopoly price? c) What is the markup over marginal cost? How is this related to th Suppose a monopolist faces the following demand curve: P=200-6Q. Marginal cost of production is constant and equal to $20, and there are no fixed costs. A) What is the monopolist's profit-maximizing l Consider a monopolist with constant marginal cost c(c1) which faces the following demand function: Q(p,A)=1+\sqrt{A}-p, where A is the advertising expenditure and p is a relevant price. a) Derive the monopolist's optimal price and advertising expenditu A monopolist's cost function is TC(y) = y(\frac{y}{50} - 2)^2 +y. It faces a demand function y = 100 - 25p. Solve for monopolist's output, price, profit, and consumer surplus when the price is unregulated. Suppose a monopolist has costs to produce output of TC=1/6 Q^2+10 and faces the demand curve Q=3000-3P. Find equilibrium quantity, equilibrium price, and monopoly profit. A natural monopolist faces the following demand: P = 715 - 7Q The monopolist has the following cost function: C = 319Q + 736 How much output will this firm produce to maximize profit? Round our answe Suppose a monopolist faces the market demand function P = a - bQ. Its marginal cost is given by MC = c + eQ. Assume that a > c and 2b + e > 0. a) Derive an expression for the monopolist's optimal quantity and price in terms of a, b, c, and e. b) Show that A monopolist faces a market demand curve given by Q=53-P. Its cost function is given by Q=53-P, i.e. its MC=$5. a) Calculate the profit-maximizing price and quantity for this monopolist. Also calculate its optimal profit. Suppose a monopolist faces the following demand curve: P = 596-6Q. 
Marginal cost of production is constant and equal to $20, and there are no fixed costs. a) What is the monopolists profit maximizing level of output? b) What price will the profit maximi A monopolist faces a demand curve: P = 100 - Q for its product. The monopolist has fixed costs of 1000 and a constant marginal cost of 4 on all units. Find the profit maximizing price, quantity, and p Suppose a monopolist faces the following demand curve: P = 200 - 6Q The marginal cost of production is constant and equal to $20, and there are no fixed costs. (a) How much profit will the monopolist make if she maximizes her profit? (b) What would be t A natural monopolist faces the following demand: P = 8 - 0.01Q The monopolist has the following cost function: C = 0.018Q + 8 How much output will this firm produce to maximize profit if A natural monopolist faces the following demand: P = 19 - 0.024Q The monopolist has the following cost function: C = 0.039Q + 964 How much output will this firm produce to maximize profit? A monopolist faces the following demand curve P = 222 - 2Q. The monopolist's cost is given by C = 2Q. Calculate the profit-maximizing quantity and the corresponding price. What is the resulting profit/loss? Calculate the monopolist's markup. Suppose a monopolist faces the following demand curve: P = 180 - 4Q. Marginal cost of production is constant and equal to $20, and there are no fixed costs. What is the value of the deadweight loss created by this monopoly? A) 200 B) 400 C) 800 D) 512 Suppose a monopolist faces the following demand curve: P = 100 - 3Q. Marginal cost of production is constant and equal to $10, and there are no fixed costs. What is the value of the deadweight loss created by this monopoly? a. $250 b. $675 c. $412.50 d. A monopolist faces the following demand: Q = 284 - 0.2P. The monopolist's cost function is: C = 0.5Q3 - 6Q2 + 235Q + 1,891. How much profit does this monopolist earn when it produces the quantity, Q, Suppose a monopolist faces the following demand curve: P = 200 - 6Q The marginal cost of production is constant and equal to $20, and there are no fixed costs. (a) What is the monopolist's profit-maximizing level of output? (b) What price will the profit- Suppose a monopolist faces the following demand curve: P = 180 - 4Q. The marginal cost of production is constant and equal to $20, and there are no fixed costs. What is the monopolist's profit-maximizing level of output? A. Q = 45 B. Q = 40 C. Q = 30 D. Q 1) A monopolist and competitive firm face the following demand: Q = 106 - 0.12P This firm's cost function is: C = 2Q^2 + 80Q + 1,375 Find the quantity, Q, that maximizes profit. Round your answer to 1. A natural monopolist faces the following demand: P = 12.6 - 0.03Q The natural monopolist has the following cost function: C = 0.039Q + 948 What price will this firm produce to maximize profit? (Rou For the following question, consider a monopolist. Suppose the monopolist faces the following demand curve: P = 100 - 3Q. Marginal cost of production is constant and equal to $10, and there are no fixed costs. What is the monopolist's profit maximizing l Suppose the monopolist faces the following demand curve: P = 180 - 4 q. Marginal cost of production is constant and equal to $20, and there are no fixed costs. What is the value of consumer surplus? a. 400.00. b. 150.00. c. 1600. d. 600. e. 512.5. f. None Suppose a monopolist faces the demand curve P = 164 - 1Q. The monopolist's marginal costs are a constant $22 and they have fixed costs equal to $132. 
Given this information, what will the profit-maximizing price be for this monopolist? Round answer to two Suppose a monopolist faces the following demand curve: P = 180 - 4Q. The marginal cost of production is constant and equal to $20, and there are no fixed costs. What is the value of consumer surplus? A. CS = $400 B. CS = $150 C. CS = $1,600 D. CS = $600 E A monopolist faces a demand curve given by P = 10 - Q and has constant marginal (and average cost) of 2. What is the output and the price that maximizes profit for this monopolist? (a) Q = 0, P = 10. (b) Q = 2, P = 8. (c) Q = 4, P = 6. (d) Q = 8, P = 2. ( A monopolist faces a demand curve given by P = 10 - Q and has constant marginal (and average cost) of 2. What is the output and the price that maximizes profit for the monopolist? A) Q = 0, P = 10 B) Q = 2, P = 8 C) Q = 4, P = 6 D) Q = 8, P = 2 E) None of A monopolist faces the following demand curve: Price Quantity demanded $51 1 $47 2 $42 3 $36 4 $29 5 $21 6 $12 7 The monopolist has total fixed costs of $60 and has a constant marginal cost of $15. What is the profit-maximizing level of production? A) 5 u A natural monopolist faces the following demand P = 12.6 - 0.03 Q The natural monopolist has the following cost function: C = 0.039Q + 948 What price will this produce to maximize profit? Round your a Suppose a monopolist faces the demand curve P = 162 - 2Q. The monopolist's marginal costs are a constant $27 and they have fixed costs equal to $55. Given this information, what will the profit-maximizing price be for this monopolist? A monopolist faces the following market demand: Q=200-2P Suppose the firms total cost is given by: TC=50Q 1) Absent the ability to price disseminate, the monopolist, wanting to maximize profit, will A monopolist faces the following demand curve. Suppose the monopolist has total fixed costs equal to $5 and a variable cost equal to $4 per unit for all units produced. What is the total profit if she operates at her profit-maximizing price? A. $9 B. $7 C For the following question, consider a monopolist. Suppose the monopolist faces the following demand curve: P = 100 - 3Q. The marginal cost of production is constant and equal to $10, and there are no fixed costs. What is the value of consumer surplus? Consider a monopolist who faces a market demand curve given by QD = 200 - p and produces at a constant marginal cost of MC = 2. Assume that a monopolist faces a demand curve given by Q = 10 p^-3. His cost function is c (y) = 2 y. a. What is the optimal level of output and price? b. What would be the optimal level and price if the firm behaved as a perfectly competitive industry? Consider a monopolist that faces a linear inverse demand of: P(q) = 304 - 4q. The firm has the cost function of: C(q) = 100 + 4q + 2q^2. What are the monopolist market price (p^M), quantity (q^M), and A monopolist with total cost function T C = 30Q + Q2 is facing a market demand given by P = 150 - Q. a) What is the optimal quantity and price the monopolist will set on this market? (Q=30, P=120) b A monopolist faces inverse demand P = 300 - 2Q. It has total cost TC = 60Q + 2Q2 and marginal cost MC = 60 + 4Q. What is the maximum profit the monopolist can earn in this market? Suppose a monopolist faces the demand curve P = 250 - 2Q. The marginal cost of production is constant and equal to $10, and there are no fixed costs. A. What is the monopolist's profit-maximizing level of output? B. 
What price will the profit-maximizing m A monopolist firm faces the following cost curve: C(Q) = Q^2 + 12, where 'Q' is the output produced. The demand for its product is given by P = 24 - Q. A) Derive the MR for this firm. B) Find the equi Suppose that monopolist's inverse demand on market 1 is given by P 1 = 100 - x 1 , and monopolist's inverse demand on market 2 is given by P 2 = 50 - 0.5x 2 . Monopolist's cost function is c(x 1 + x 2 A monopolist faces the demand curve P = 11 - Q. The firm's cost function is C = 6Q. a. Draw the demand and marginal revenue curves, and the average and marginal cost curves. What are the monopolist's A monopolist faces a market demand curve given by P(y) = 100 - y. Its cost function is c(y) = y^2 + 20. a. Find its profit-maximizing output level y* and the market price p(y*). b. Calculate its total revenue, total cost, and profit at y*. c. Calculate th A monopolist with total cost function c(Q) = 4 + 3 Q + 1/2Q^2 faces a market demand function of QD(P) = 60-4P. a) Calculate the monopolist's profit-maximizing price and quantity sold, and the monopol 1. Consider a monopoly that faces a market demand curve given as Q=100-P. the marginal cost of production for the monopolist is MC=$10. The monopolist faces total cost given by the following equation: A monopolist is seeking to price discriminate by segregating the market. The demand in each market is given as follows: Market A: P = 111 - 4Q Market B: P = 156 - 1Q The monopolist faces a marginal cost of $17 and has no fixed costs. Given this informatio A monopolist faces a demand curve given by P=10-Q and has constant marginal (and average cost) of 2. What is the output and the price that maximizes profit for this monopolist? a. Q = 0, P = 10 b. Q = 2, P = 8 c. Q = 4, P = 6 d. Q = 8, P = 2 e. None of th A monopolist faces an inverse demand P = 300 - 2Q and has total cost TC = 60Q + 2Q2 and marginal cost MC = 60 + 4Q. What is the maximum profit the monopolist can earn in this market? A) 60 B) 240 Suppose a monopolistic competitor in long-run equilibrium has a constant marginal cost of $4 and faces the demand curve given in the following table. ||Price ($)||Quantity |8|0 |7|1 |6|2 |5|3 |4|4 |3 Suppose a monopolist faces consumer demand given by P = 400 - 2Q with a constant marginal cost of $80 per unit (where marginal cost equals average total cost. Assume the firm has no fixed costs). A. If the monopoly can only charge a single price, what wil
I have read recently again the auto-biography of Ulam entitled Adventures of a Mathematician. Stanislaw Marcin Ulam (1909 – 1984) is a famous Polish – American mathematician, just like Mark Kac (1914 – 1984). Ulam and Kac come roughly from the same part of Poland, were interested by fundamental and applied mathematics, probability, and physics, moved from Poland to the United States of America, lived during the same period, and wrote an interesting auto-biography. Both died in 1984. Here is an excerpt from pages 269 – 270: Mark Kac had also studied in Lwów, but since he was several years younger than I (and I had left when only twenty ­six myself), I knew him then only slightly. He told me that as a young student he had been present at my doctorate ceremony and had been impressed by it. He added that these first impressions usually stay, and that he still considers me "a very senior and advanced person," even though the ratio of our ages is now very close to one. He came to America two or three years after I did. I remembered him in Poland as very slim and slight, but here he became rather rotund. I asked him, a couple of years after his arrival, how it had happened. With his characteristic good humor he replied: "Prosperity!" His ready wit and almost constant joviality make him extremely congenial. After the war he visited Los Alamos, and we developed our scientific collaboration and friendship. After a number of years as a professor at Cornell he became a professor of mathematics at The Rockefeller Institute in New York (now The Rockefeller University.) He and the physicist George Uhlenbeck have established mathematics and physics groups at this Institute, where biological studies were the principal and almost exclusive subject before. Mark is one of the very few mathematicians who possess a tremendous sense of what the real applications of pure mathematics are and can be; in this respect he is comparable to von Neumann. He was one of Steinhaus's best students. As an undergraduate he collaborated with him on applications of Fourier series and transform techniques to probability theory. They published several joint papers on the ideas of "independent functions." Along with Antoni Zygmund he is a great exponent and true master in this field. His work in the United States is prolific. It includes interesting results on probability methods in number theory. In a way, Kac, with his superior common sense, as a mathematician is comparable to Weisskopf and Gamow as physicists in their ability to select topics of scientific research which lie at the heart of the matter and are at the same time of conceptual simplicity. In addition — and this is perhaps related — they have the ability to present to a wider scientific audience the most recent and modern results and techniques in an understandable and often very exciting manner. Kac is a wonderful lecturer, clear, intelligent, full of sense and avoidance of trivia. Among the mathematicians of my generation who influenced me the most in my youth were Mazur and Borsuk. Mazur I have described earlier. As for Borsuk, he represented for me the essence of geometrical intuition and truly meaningful topology. I gleaned from him, without being able to practice it myself, the workings of n­ dimensional imagination. Today Borsuk is continuing his creative work in Warsaw. 
… Ulam's auto-biography contains information and anecdotes on the personalities of great scientists of the twentieth century, such as Stefan Banach, Hugo Steinhaus, and Stanisław Mazur, from Lwów, but also Kazimierz Kuratowski, from Warsaw, and later on Paul Erdős, George David Birkhoff, John von Neumann, Richard Feynman, Niels Bohr, Enrico Fermi, and George Gamow. Many of the last ones were involved, like Ulam, in the Manhattan project in Los Alamos. Ulam was an open, deep, and creative mind, interested by all aspects of applied and fundamental mathematics, and beyond! In Los Alamos, Ulam invented, in collaboration with von Neumann, the Monte-Carlo method and also particle numerical methods in fluid mechanics. He moreover discovered chaos in non-linear vibrations with Fermi and Pasta. Last but not least, Ulam played an essential role in the design of the hydrogen nuclear bomb together with Edward Teller. Ulam's work on nuclear chain reactions led him to the study of what we call now branching processes, and the discovery of the generating function method. Here is below an excerpt taken from pages 159-160. [I forgot to mention this interesting historical fact at the end of the chapter on branching processes in my book Recueil de modèles aléatoires with my old friend Florent Malrieu – A shame!] We discussed problems of neutron chain reactions and the probability problems of branching processes, or multiplicative processes, as we called them in 1944. I was interested in the purely stylized problem of a branching tree of progeny from one neutron which may multiply, into zero (that is, the death of a neutron by absorption), or one (that just continues itself), or two or three or four (that is, causes the emergence of new neutrons), each possibility with a given probability. The problem is to follow the future course and the chain of possibilities through many generations. Very early Hawkins and I detected a fundamental trick to help study such branching chains mathematically. The so­ called characteristic function, a device invented by Laplace and useful for normal "addition" of random variables, turned out to be just the thing to study "multiplicative" processes. Later we found that observations to this effect had been made before us by the statistician Lotka, but the real theory of such processes, based on the operation of iteration of a function or of operators allied to the function (a more general process), was begun by us in Los Alamos, starting with a short report. This work was strongly generalized and broadened in 1947, after the war, by Everett and myself after he joined me in Los Alamos. Some time later, Eugene Wigner brought up a question of priorities. He was eager to note that we did this work quite a bit before the celebrated mathematician Andrei N. Kolmogoroff and other Russians and some Czechs had laid claim to having obtained similar results. In modern terms, if $Z_n$ is the number of neutrons at generation $n$, with $Z_0:=1$, then the branching process ${(Z_n)}_{n\geq0}$ modeling the neutron chain reaction can be written as $$Z_{n+1}=\sum_{k=1}^{Z_n}X_{n+1,k}$$ where ${(X_{n,k})}_{n,k\geq1}$ are independent and identically distributed random variables with offspring distribution $P:=p_0\delta_0+p_1\delta_1+\cdots$. For any discrete random variable $X$, we denote by $$g_X(s):=\mathbb{E}(s^X)=\sum_{k=0}^\infty\mathbb{P}(X=k)s^k$$ its generating function at point $s\in[0,1]$. 
We have then, with $X\sim P$ and $g:=g_X$, $$g_{Z_{n+1}}(s)=\mathbb{E}(\mathbb{E}(s^{X_1+\cdots+X_{Z_n}}\mid Z_n))=\mathbb{E}(\mathbb{E}(s^X)^{Z_n})=g_{Z_n}(g_X(s))=\cdots=g^{\circ (n+1)}(s).$$ It remains to use fixed point analysis to get the behavior of the extinction probability $$\mathbb{P}(\exists n:Z_n=0)=\lim_{n\to\infty}\mathbb{P}(Z_n=0)=\lim_{n\to\infty}g^{\circ n}(0).$$ But you may prefer the aristocratic British families of Francis Galton and Henry William Watson. John von Neumann, Richard Feynman, and Stan Ulam on the porch of Bandelier lodge in Frijoles Canyon, New Mexico, during a picnic, ca 1949 (Nicholas Metropolis). Both photo and legend appear in Ulam's auto-biography. George S. 2016-09-12 Gian-Carlo Rota's book, "Indiscrete Thoughts", pp. 60-86, has some biographical details on Stanislaw Ulam, some of which are whimsical.
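To see the fixed-point characterization of the extinction probability in action, here is a minimal Python sketch. The offspring law on {0, 1, 2, 3, 4} used below is an arbitrary illustrative choice (it is not taken from the post), echoing Ulam's description of a neutron producing zero to four offspring; the script iterates g on 0 and cross-checks the limit against a direct simulation of the branching process.

```python
import random

# Offspring law on {0, 1, 2, 3, 4}; these particular probabilities are an
# assumption made for the illustration (mean 1.8, so the process is
# supercritical and extinction is not certain).
P = [0.2, 0.2, 0.3, 0.2, 0.1]

def g(s):
    """Generating function g(s) = sum_k p_k s^k of the offspring law."""
    return sum(p * s ** k for k, p in enumerate(P))

def extinction_probability(iterations=200):
    """Approximate lim_n g^{(n)}(0), i.e. P(extinction), by fixed-point iteration."""
    s = 0.0
    for _ in range(iterations):
        s = g(s)
    return s

def simulated_extinction(runs=5000, max_gen=100, max_pop=500, seed=0):
    """Monte Carlo check: fraction of lineages of Z_n that die out.

    A lineage reaching max_pop individuals is counted as surviving, which is
    a harmless shortcut for a supercritical process."""
    rng = random.Random(seed)
    values = list(range(len(P)))
    extinct = 0
    for _ in range(runs):
        z = 1
        for _ in range(max_gen):
            if z == 0 or z >= max_pop:
                break
            # Next generation: sum of z i.i.d. offspring counts drawn from P.
            z = sum(rng.choices(values, weights=P, k=z))
        extinct += (z == 0)
    return extinct / runs

if __name__ == "__main__":
    print("fixed point of g :", extinction_probability())   # roughly 0.29 for these weights
    print("simulation       :", simulated_extinction())      # should be close
```

For a subcritical or critical offspring law (mean at most 1) the same iteration converges to 1, recovering the classical Galton–Watson dichotomy.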
Markov Processes

Most processes in applications are Markov processes or can be viewed as components of multivariate Markov processes. As in discrete time the term Markov refers to a certain lack of memory.

5.1 (Reflection Principle) For a Wiener process W set \(S(t):=\sup_{s\leq t}W(s)\). Use the strong Markov property of W in order to show that P(W(t) ≤ a − x, S(t) ≥ a) = P(W(t) ≥ a + x) for any a > 0, x ≥ 0, t ≥ 0. Deduce that the joint law of (W(t), S(t)) has density $$\displaystyle \begin{aligned}\varrho(x,y)=\sqrt{2\over\pi t^3}(2y-x)\exp\bigg(-{(2y-x)^2\over 2t}\bigg)1_{\mathbb R _+}(y-x)1_{\mathbb R _+}(y).\end{aligned}$$ Hint: Show that $$\displaystyle \begin{aligned}\widetilde W(t):=\left\{\begin{array}{ll} W(t) & \mbox{ for }t\leq\tau,\\ 2a-W(t)& \mbox{ for }t>\tau \end{array}\right.\end{aligned}$$ is a Wiener process for a > 0 and the stopping time \(\tau :=\inf \{t\geq 0:W(t)\geq a\}\).

5.2 Show that X is a time-inhomogeneous Markov process if and only if the space-time process \(\overline X(t)=(t,X(t))\) is a Markov process.

5.3 Show that the resolvent mapping \(R_\lambda\) of standard Brownian motion satisfies $$\displaystyle \begin{aligned}R_\lambda f(x)=\int_{-\infty}^\infty {1\over\sqrt{2\lambda}}e^{-\sqrt{2\lambda}|x-y|}f(y)dy\end{aligned}$$ for \(f\in C_0(\mathbb R)\).

5.4 Derive the characteristic function of X(t) for the square-root process solving \(X(t)=x+\sqrt {X}\bullet W(t)\) with initial value x > 0 and some Wiener process W. Hint: Try the ansatz \(u(t,x)=\exp (\Psi _0(t,v)+\Psi _1(t,v)x)\) with \(\Psi _0,\Psi _1:\mathbb R _+\to \mathbb R\) for the backward equation, where \(v\in \mathbb R\) denotes the argument of the characteristic function.

5.5 Derive the probability density function of X(t) for an Ornstein–Uhlenbeck process as in (3.87) with L(t) = μt + σW(t) for \(\mu \in \mathbb R,\sigma >0\) and some Wiener process W. Suppose that the initial distribution \(P^{X(0)}\) is Gaussian with mean \(\mu_0\) and variance \(\sigma_0^2\). Hint: Try the ansatz that X(t) is Gaussian and derive ODEs for its mean and variance.

5.6 Let X denote a solution process to the martingale problem related to (β, γ, 0) for some bounded continuous functions \(\beta ,\gamma :\mathbb R\to \mathbb R\) such that γ > 0. Define the process \(W=\gamma(X)^{-1/2}\bullet X^{c}\). Show that W is a Wiener process and that X solves the SDE \(X=X(0)+\beta (X)\bullet I+\sqrt {\gamma (X)}\bullet W\).

5.7 Show that \(|W|^{1/4}\) is not a semimartingale for a Wiener process W. Hint: If \(X=|W|^{1/4}\) is a semimartingale, apply Itō's formula to \(X^4=|W|\) in order to show that the local time of W in 0 equals \(L^{0}=1_{\{W=0\}}\bullet L^{0}=0\), which contradicts the fact that |W| is not a local martingale.

5.8 Verify the implication 2 ⇒ 1 in Proposition 5.34 and the growth condition in Example 5.44.

For a thorough treatment of Markov process theory one may consult [100, 241, 251, 252]. For Example 5.10 see also [2, Theorem 3.1.9]. Concerning Sect. 5.2 we refer to [241, Exercise III.1.10]. Details on Remark 5.13(2) can be found in [241, Propositions III.2.4 and VII.1.4]. Our definition of the extended generator is close to the one in [241, Definition VII.1.8] and [152, Remarque 13.45]. For Proposition 5.20 see also [241, Definition VIII.3.2 and Proposition VIII.3.3] as well as [152, Remarque 13.46]. A result in the spirit of Remark 5.26 is stated in [226, Theorem 8.2.1]. For a version of Theorem 5.28 for continuous processes see [226, Exercise 8.3]. The approach to studying processes by their symbol is advocated in [36, 151]. Theorem 5.43 and the counterexample in Example 5.44 can be found in [201].
For Problem 5.1 see also [279, Section 3.7.3]. Problem 5.7 is based on [238, Theorem IV.71].

D. Applebaum, Lévy Processes and Stochastic Calculus, 2nd edn. (Cambridge Univ. Press, Cambridge, 2009)
B. Böttcher, R. Schilling, J. Wang, Lévy matters. III. Lévy-type Processes: Construction, Approximation and Sample Path Properties (Springer, Cham, 2013)
E. Çinlar, J. Jacod, P. Protter, M. Sharpe, Semimartingales and Markov processes. Z. Wahrsch. verw. Gebiete 54(2), 161–219 (1980)
S. Ethier, T. Kurtz, Markov Processes: Characterization and Convergence (Wiley, New York, 1986)
N. Jacob, R. Schilling, Lévy-type processes and pseudodifferential operators, in Lévy Processes (Birkhäuser, Boston, 2001), pp. 139–168
J. Jacod, Calcul Stochastique et Problèmes de Martingales, volume 714 of Lecture Notes in Math (Springer, Berlin, 1979)
J. Kallsen, P. Krühner, On uniqueness of solutions to martingale problems—counterexamples and sufficient criteria. arXiv preprint arXiv:1607.02998 (2016)
I. Karatzas, S. Shreve, Brownian Motion and Stochastic Calculus, 2nd edn. (Springer, New York, 1991)
F. Kühn, On martingale problems and Feller processes. Electron. J. Probab. 23(13), 1–18 (2018)
P. Protter, Stochastic Integration and Differential Equations, 2nd edn. (Springer, Berlin, 2004)
D. Revuz, M. Yor, Continuous Martingales and Brownian Motion, 3rd edn. (Springer, Berlin, 1999)
C. Rogers, D. Williams, Diffusions, Markov Processes, and Martingales: Volume 1, Foundations, 2nd edn. (Cambridge Univ. Press, Cambridge, 1994)
C. Rogers, D. Williams, Diffusions, Markov Processes, and Martingales: Volume 2, Itô Calculus (Cambridge Univ. Press, Cambridge, 1994)
S. Shreve, Stochastic Calculus for Finance II: Continuous-Time Models (Springer, New York, 2004)
D. Stroock, Diffusion processes associated with Lévy generators. Z. Wahrsch. verw. Gebiete 32(3), 209–244 (1975)
D. Stroock, S. Varadhan, Multidimensional Diffusion Processes (Springer, Berlin, 1979)
D. Werner, Funktionalanalysis, 3rd edn. (Springer, Berlin, 2000)

Eberlein E., Kallsen J. (2019) Markov Processes. In: Mathematical Finance. Springer Finance. Springer, Cham. https://doi.org/10.1007/978-3-030-26106-1_5
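As a quick numerical companion to Problem 5.1 above, the following sketch (not part of the chapter) estimates both sides of the reflection identity by simulating discretized Brownian paths; the chosen values of a, x, t and the simulation sizes are arbitrary.

```python
import math
import random

def endpoint_and_running_max(t, steps, rng):
    """One discretized Brownian path on [0, t]; returns (W(t), max_k W(t_k))."""
    dt = t / steps
    sd = math.sqrt(dt)
    w, s = 0.0, 0.0
    for _ in range(steps):
        w += rng.gauss(0.0, sd)
        if w > s:
            s = w
    return w, s

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

if __name__ == "__main__":
    t, a, x = 1.0, 1.0, 0.5            # arbitrary test values with a > 0, x >= 0
    paths, steps = 10_000, 400         # arbitrary simulation sizes
    rng = random.Random(42)
    lhs = rhs = 0
    for _ in range(paths):
        w, s = endpoint_and_running_max(t, steps, rng)
        lhs += (w <= a - x) and (s >= a)
        rhs += (w >= a + x)
    # Both estimates should be close to 1 - Phi((a + x)/sqrt(t)); the discrete
    # running maximum slightly underestimates S(t), so the first estimate tends
    # to sit a touch below the exact value.
    print("P(W(t) <= a - x, S(t) >= a) ~", lhs / paths)
    print("P(W(t) >= a + x)            ~", rhs / paths)
    print("1 - Phi((a + x)/sqrt(t))    =", 1.0 - normal_cdf((a + x) / math.sqrt(t)))
```

With the parameters above the exact value is about 0.067, so both Monte Carlo estimates should land near it.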
The possibility of circular references in human language is totally independent of the consistency of mathematics. It's common sense and it can also be proved. "This statement about axiomatic set theory is false." "The class of all classes that do not include themselves does not include itself." "Next sentence is false. Previous sentence is true." Statements with circular references may conflict with the Aristotelian logic of terms, but do not interfere with mathematical deduction, because there is a logically equivalent and consistent three-valued logic which admits circular references. The trick is to split 'false' into two logically equivalent but different alternatives, 'false' and 'absurd', in a systematic algebraic way, which results in a unique commutative semiring that can be interpreted as a three-valued logic preserving all tautologies and rules of inference. All theorems can still be proved, and absurd statements exist alongside false statements and are harmless.
http://forthmath.blogspot.com/2020/07/the-paradox-of-russell.html
https://forthmath.blogspot.se
https://iesho.blogspot.se/2015/02/21-murder-of-swedish-prime-minister.html
https://en.m.wikipedia.org/wiki/Bologna_massacre
https://en.m.wikipedia.org/wiki/Operation_Gladio
https://en.m.wikipedia.org/wiki/Stay-behind#/
https://iesho.blogspot.com/2019/02/49-dead-woman-in-isdalen-norway.html
Self-tuning of scheduling parameters for balancing the quality of experience among services in LTE Pablo Oliver-Balsalobre ORCID: orcid.org/0000-0002-9249-65111, Matías Toril1, Salvador Luna-Ramírez1 & José María Ruiz Avilés2 EURASIP Journal on Wireless Communications and Networking volume 2016, Article number: 7 (2016) Cite this article Improving the Quality of Experience (QoE) offered to subscribers has become a major issue for mobile network operators. In this paper, a self-tuning algorithm for adjusting parameters in a multi-service packet scheduler of a radio base station based on network statistics is proposed to balance QoE across services in Long Term Evolution (LTE). The aim of the algorithm is to ensure that all users achieve the same average QoE regardless of the type of service. For this purpose, the proposed heuristic algorithm iteratively changes service priority parameters to re-prioritize services so that those services with the lowest QoE increase their priority. Unlike previous approaches, the proposed algorithm takes QoE (and not Quality of Service) into account. Method assessment is carried out in a dynamic system-level LTE simulator. Simulation results in a typical scenario show that tuning service priority parameters can significantly increase the QoE of the worst service without affecting the overall network QoE. With the success of smartphones and tablets, traffic in mobile broadband networks has dramatically changed due to the introduction of new services. Although recent radio access technologies, such as Worldwide Interoperability for Microwave Access (WiMAX) and Long Term Evolution (LTE), are prepared to offer a wide range of services, the launch of new services poses new challenges for network operators [1]. Likewise, continuous advances in terminals and, most importantly, in user expectations force operators to update the way they manage their networks. To provide the best end user experience, mobile operators are changing their management processes, currently focused on the network performance, to a more modern approach focused on user opinion. As a result, Customer Experience Management (CEM) has now become a key factor to differentiate among operators offering similar networks and services [2]. In such a user-centric approach, traditional objective Quality-of-Service (QoS) metrics are substituted by subjective Quality-of-Experience (QoE) metrics. In parallel, the explosive growth of the size and complexity of mobile networks makes it very difficult for operators to manage their network. Such a need for increasing operational efficiency has stimulated intense research and standardization activity in the field of Self-Organizing Networks (SON) [3–5]. Most SON use cases in the literature only deal with basic radio aspects, such as radio network coverage, connection quality or capacity and power consumption [6, 7]. Although multi-layer, multi-vendor, and multi-technology issues have been addressed recently [8], less attention has been paid to the problems originated by the co-existence of multiple services in the same network and how these problems can be solved by SON. Traffic and service management in current mobile networks is done by dynamic packet scheduling (PS) algorithms [9–11]. PS algorithms dynamically assign radio resources (i.e., frequency, time slot, and power) to user data requests based on their QoS constraints [9, 12]. Basic schedulers only deal with multiple users of the same service [13]. 
More sophisticated schedulers allocate more resources to users experiencing worse QoS to satisfy some fairness constraint [14, 15]. Such a QoS balance between users is evaluated from a theoretical perspective in several studies (e.g., [16–19]). However, these studies do not specify how the balance situation is accomplished. To deal with the different service requirements, the 3rd Generation Partnership Project (3GPP) has defined several QoS Class Identifiers (QCI) to differentiate among service classes [20]. Based on QCIs, schedulers can prioritize among services. Some scheduling algorithms are proposed to provide differentiated services, QoS and fairness by assigning appropriate weights to each user queue (e.g., weighted and deficit round robin and weighted fair queuing). However, these schemes do not exploit multi-user diversity gain and hence do not achieve optimal system performance. More advanced scheduling algorithms combine both multi-service and multi-user diversity gain capabilities [19, 21, 22]. In [22], a scheduling algorithm is proposed to deal with real-time and non-real-time traffic in a proportional fair manner. More recent works [23–28] propose QoE-aware schedulers whose aim is to optimize the overall QoE while ensuring a minimum QoE for all users. All of them decide the exact resources assigned to every single user in real time, which makes them suitable for minimum QoS/QoE assurance. However, the aim of most schedulers is to ensure a minimum QoS/QoE for the worst users, rather than equalizing the average QoS/QoE per service. Thus, QoE balance between users or services is not guaranteed. Moreover, implementing these advanced schedulers would require upgrading network equipment, which is not desired by network operators that have already made an important investment to upgrade to the latest radio access technology. Alternatively, tuning parameters of existing schedulers can be done to optimize the overall QoE. In [29], a self-tuning algorithm for the contention window parameter is proposed that does not differentiate between services. Closer to this work, an adaptive proportional and integrative controller is used in [30] to adjust application priorities in order to ensure end-to-end delay requirements. Similarly, an adaptive controller is proposed in [31] for adjusting flow priorities to ensure a certain QoS level for multimedia services in terms of delay. In that proposal, each service has its own controller, whose decisions only depend on the QoS of that flow. This might cause instabilities when each flow tries to increase its priority individually in real time. More importantly, the aim of the controller is not to balance QoS among flows but to ensure that all flows reach their QoS target. Likewise, in congestion situations, when no flow fulfills its required QoS and all priority values reach their limits, it is not ensured that all services have the same QoS. Thus, its aim is not to equalize QoS among services, but to increase the overall system throughput. To the authors' knowledge, no method has been proposed to adjust service priority parameters in a multi-service multi-user scheduler of a radio base station with the aim of balancing the overall QoE per service under different traffic load conditions. In this paper, a self-tuning algorithm for adjusting parameters in a multi-service packet scheduler of a radio base station based on network statistics is proposed to balance QoE across services in LTE. 
The aim of the algorithm is to ensure that all users achieve the same average QoE regardless of the type of service. For this purpose, the proposed heuristic algorithm iteratively changes service priority parameters to re-prioritize services so that those with the lowest QoE increase their priority. Unlike previous approaches, the proposed self-tuning algorithm takes QoE (and not QoS) into account. Method assessment is carried out in a dynamic system-level LTE simulator implementing a regular macrocellular scenario. The main contributions of this work are as follows: (a) a self-tuning algorithm for scheduler parameters to equalize QoE among services in LTE with any network load conditions and (b) simulation results that quantify the impact of equalizing QoE among services in a realistic multi-service LTE scenario. The rest of the paper is organized as follows. Section 2 describes the LTE system model, including the considered scheduling algorithm. Then, Section 3 presents the proposed self-tuning algorithm for scheduler parameters. Section 4 describes the simulation tool used to assess the algorithm, and Section 5 presents the results of the simulations. Finally, Section 6 presents the conclusions of the study. In this section, a system model for a multi-service LTE system is presented. First, a multi-service scheduling algorithm is outlined, identifying its key parameters. Then, traffic models for the services included in the traffic mix of current mobile networks are presented. Finally, user QoE models relating QoS performance indicators to end user experience are explained for each service. Scheduling algorithm The multi-service PS scheme considered in this work is a modified version of the classical exponential/proportional fair (EXP/PF) scheduler [21]. The original EXP/PF scheme is designed to support multimedia applications in a system with Adaptive Modulation and Coding (AMC) and Time Division Multiplexing (TDM). For this purpose, service requests are classified into real time (RT) or non-real time (NRT). Then, each request is given a priority value, itK, depending on its service type, with the following expressions: $$ K=\left\{ \begin{aligned} & \exp \left(\frac{{{a}_{i}}{{W}_{i}}(t)-a\overline{W(t)}}{1+\sqrt{a\overline{W(t)}}} \right)\cdot P{{F}_{\text{factor}}} & {i\in RT} \\ & \\ & \frac{\omega (t)}{M(t)} \cdot P{{F}_{\text{factor}}} & i\in \textit{NRT} \end{aligned} \right. $$ $$ a\overline{W(t)}=\frac{1}{{{N}_{RT}}}\sum\limits_{i\in RT}{{{a}_{i}}}{{W}_{i}}(t) \quad, $$ $$ \omega (t)=\left\{ \begin{aligned} & \omega (t-1)-\varepsilon & {{W}_{\max}}>{{\tau}_{\max}} \\ & \omega (t-1)+\frac{\varepsilon}{\kappa} & {{W}_{\max}}<{{\tau}_{\max}} \end{aligned} \right. $$ $$ {{a}_{i}}=-\frac{\log({{\delta}_{i}})}{{{\tau}_{\max}}} \quad. $$ In (1), W i (t) is the Head-Of-Line (HOL) packet delay of user i at time t, \(a\overline {W(t)}\) represents the average delay of RT users, a i is related to delay constraints and P F factor is the fairness factor. ω(t) is a weight factor associated with NRT users and M(t) is the average number of RT packets waiting at the eNodeB buffer at time t. In (2), N RT is the number of RT users. In (3), W max is the maximum HOL packet delay of all RT service users in the cell considered, τ max is the maximum delay constraint of RT services (in milliseconds), whereas ε and k are constants defining how ω(t) is updated depending on W max and τ max. 
Specifically, ω(t) is increased when W_max < τ_max (i.e., when delay constraints are being met by RT users), giving NRT users a higher priority through Eq. (1). Finally, in (4), δ_i is the maximum probability for the HOL packet delay of user i to exceed its delay threshold (in this case, δ_i and τ_max are shared by all RT users). The fairness factor, PF_factor, is computed as in the classical Proportional Fair (PF) algorithm [15], $$ PF_{\text{factor}}=\frac{r_{i}(t)}{R_{i}(t)} \quad, $$ $$ R_{i}(t)=\left(1-\frac{1}{t_{c}}\right)\cdot R_{i}(t-1)+\frac{1}{t_{c}}\cdot r_{i}(t-1) \quad, $$ where r_i(t) is the achievable data rate of user i, R_i(t) is its average data rate, and t_c is the averaging time constant, which is used to prioritize either throughput maximization or fairness. The reader is referred to [21] for a more detailed explanation of the behavior of the EXP/PF scheduler. In the EXP/PF scheme, RT users always have a higher priority than NRT users when their HOL packet delays approach the maximum delay constraint, regardless of the experienced QoE. To change this behavior, and to allow services to be re-prioritized to some extent, the EXP/PF is modified here by adding a new parameter, referred to as the Service Priority Index (SPI), as was already done in [32]. The new priority value, K', is computed from the previous value, K, as $$ K' = \min(\max(K,1),10) \cdot SPI_{i} \quad, $$ where SPI_i is a real value between 1 and 15 reflecting the service priority associated with user i, which can be used to gently re-prioritize services. For convenience, in (7), the value of K (i.e., the priority value computed by the classical EXP/PF) is limited to between 1 and 10. Such limits ensure that a service with the highest SPI value (=15) always has a priority higher than a service with the lowest SPI value (=1), regardless of the value of K. Thus, the sensitivity of priority values to SPI changes is increased, which improves the ability of the self-tuning algorithm proposed later to equalize QoE among services. Without that limitation, K might be arbitrarily high, e.g., for users with an extremely high achievable data rate compared to their average data rate. For those users with a large K value, an SPI reduction might not be enough to decrease their priority, so that SPI changes would not have an impact on the re-prioritization process. In LTE, each service is associated with one QCI, which defines its performance objectives. In general, lower QCI values imply more restrictive services in terms of performance. This paper considers both RT and NRT services and thereby takes into account the whole QCI range [20]. Table 1 shows the main parameters of each service included in this work. Table 1 Service model parameters A first service is Voice over Internet Protocol (VoIP). As a conversational RT service, it is defined as a Guaranteed Bit Rate (GBR) service whose QCI value (=1) corresponds to the highest priority value for user data services [20]. In this work, the VoIP service is modeled as a data source generating packets of 20 bytes every 10 ms, with a bit rate of 16 kbps [33]. A call dropping model is also included, where a VoIP call is terminated when a user does not receive enough resources for one consecutive second. A second service is buffered video streaming (hereafter, VIDEO for short). This service is defined as non-GBR with a less restrictive QCI value (=6). In this work, a simple model of the player's buffer at the client side is considered.
The amount of video data in the buffer dynamically changes with the download bandwidth, video bit rate, and video playing rate. Initially, the buffer is filled with data. The larger the buffer size, the larger the initial delay. Later, if the buffer runs out (download bandwidth < video rate), the video stops (an event known as stalling) and the player waits until the buffer is re-filled again. To avoid the use of an analytical traffic model, real video traces are used (http://trace.eas.asu.edu/tracemain.html) [34]. Such traces include frame arrival times and frame sizes of real video sequences obtained with an H.264/MPEG-4 AVC codec. Video duration is randomly defined on a per-user basis with a uniform distribution up to a maximum of 3 min. A video session drop model is also included, where a session is terminated if the session time is more than twice the video duration. The other two services are NRT services, namely web browsing with the Hypertext Transfer Protocol (referred to as WEB) and file downloading with the File Transfer Protocol (FTP). Both of them are best-effort non-GBR services and are assigned the lowest QCI value (=9). The WEB model in this work is inspired by [35]. For simplicity, a WEB session is modeled as the download of several web pages with inactivity time between them. FTP is a typical file download service, where session time is determined by the time spent downloading the file at the maximum allowed data rate [35]. QoE models A QoE model reflects the impact of QoS on end user experience. A common approach to build a QoE model is by means of utility functions. Utility functions are mathematical functions expressing some kind of preference relation. In the context of mobile networks, utility functions describe the relationship between the value of key QoS network performance statistics and the QoE perceived by users of a service. Since each service has different QoS performance targets, each service has a different utility function. In this work, the output of any QoE model is an estimate of the Mean Opinion Score (MOS) ranging from 1 (bad experience) to 5 (ideal experience) [36]. The following paragraphs describe the utility functions used for each service. The E-model [37] can be used to obtain an estimate of the voice quality, R (∈[0,100]), from the average mouth-to-ear (i.e., one-way) delay. In this work, only the delay in the downlink (DL) is considered to reduce the computational load of simulations. All other E-model parameters are set to their default values, described in [38]. Then, the voice quality R is translated into MOS with the formula: $$ \text{MOS}_{\text{VoIP}}=1+0.035\cdot R+R\cdot(R-60)\cdot(100-R)\cdot 7\cdot 10^{-6}. $$ Note that MOS_VoIP is upper limited to 4.5 when the R parameter reaches its maximum value, showing that, even in ideal test conditions, some individuals may not rate the experience with the maximum score. In a buffered video-streaming service, such as YouTube or Netflix, the key indicators defining the QoE of a user are the initial delay and the number and duration of stallings. In this work, the utility function for video streaming is [39]: $$ \text{MOS}_{\text{VIDEO}}=4.23-0.0672 L_{ti}-0.742 L_{fr}-0.106 L_{tr} \quad, $$ where L_ti denotes the initial buffering time (in seconds), L_fr is the average frequency of stallings (in seconds^{-1}), and L_tr is the average stalling duration (in seconds) [39]. As in the previous case, the model is upper limited to 4.23.
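For illustration, the two utility functions above can be written as short Python helpers. The function and variable names are ours, and the example input values are hypothetical; the formulas themselves follow Eqs. (8) and (9).

```python
def mos_voip(r):
    """Eq. (8): map the E-model rating factor R (0..100) to a VoIP MOS estimate."""
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

def mos_video(l_ti, l_fr, l_tr):
    """Eq. (9): video MOS from initial buffering time (s), stalling frequency (1/s)
    and average stalling duration (s)."""
    return 4.23 - 0.0672 * l_ti - 0.742 * l_fr - 0.106 * l_tr

# Example: R = 93 (good voice quality), and a video session with 2 s of initial
# buffering, one stalling every 100 s lasting 3 s on average.
print(round(mos_voip(93), 2))           # ~4.41
print(round(mos_video(2, 0.01, 3), 2))  # ~3.77
```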
The QoE associated with the FTP service depends on the average user throughput during the connection. The formula used to obtain the MOS value is [40]: $$ \text{MOS}_{\text{FTP}}=\max(1,\min(5,\,6.5\cdot T-0.54)) \quad, $$ where T is the average throughput of a user (in Mbps). Web browsing Similar to the FTP case, the level of satisfaction in web browsing is measured on the basis of user throughput. In this case, the MOS is calculated as [40] $$ \text{MOS}_{\text{WEB}}=5-\frac{578}{1+\left(\frac{T+541.1}{45.98} \right)^{2}} \quad, $$ where T is the average throughput of a user (in kbps). The shape of this utility function makes the web browsing service less restrictive than FTP. Thus, a web browsing user needs fewer resources than an FTP user to receive a satisfactory service. Balancing algorithm SON optimization algorithms aim to solve network problems by modifying Radio Access Network (RAN) parameters [4, 5]. In this work, a self-tuning algorithm is proposed to balance the QoE among services by adjusting the service priority parameters in the scheduler of the eNodeB. The aim of tuning is to re-prioritize services so that the average QoE per service is the same in the long term. For this purpose, SPIs are modified based on statistical QoS measurements collected per service in the network management system. As a result, users of services with worse QoE increase their priority and receive more radio resources. The algorithm is conceived as an iterative process that decides the new SPI values based on an estimate of the QoE per service in the previous iteration (hereafter referred to as an optimization loop). To avoid abrupt parameter changes, the controller is designed with an incremental structure, where SPI parameters are modified progressively. Thus, the output of the decision-making process is the positive or negative step added to the current SPI values. For simplicity, it is assumed here that all eNodeBs in the network share the same set of SPI values (i.e., tuning is done on a network basis). Thus, the algorithm is divided into a set of controllers (one per service) in charge of modifying the corresponding SPI parameter. The inputs to each controller are the QoE for the optimized service and that of the other services, measured across the whole network. The output of each controller is the new value of the SPI parameter for the optimized service, which is used in all schedulers in the network. In the following paragraphs, two variants of the algorithm are described, differing in the drivers used to guide the tuning process. Unweighted strategy In a first option, referred to as the unweighted strategy, the aim is to equalize the average QoE of all services, i.e., the arithmetic mean of the QoE experienced by connections of a service, $$ \overline{\text{QoE}}_{j}=\frac{1}{N_{j}}\sum\limits_{i=1}^{N_{j}}\text{QoE}_{i} \quad, $$ where j is the evaluated service, N_j is the number of users of service j, and QoE_i is the QoE perceived by user i, estimated from QoS statistics. This formulation assumes that all users of the same service have the same target QoE. With this aim, the input to each controller is the average QoE of that service and the average of the average QoE of the other services, computed as $$ \overline{\text{QoE}}_{k\ne j}=\frac{1}{N_{s}-1}\sum\limits_{k\ne j}\overline{\text{QoE}}_{k} \quad, $$ where N_s stands for the number of active services in the network.
Then, the QoE difference is calculated as $$ \Delta\overline{\text{QoE}}_{j}=\overline{\text{QoE}}_{j}-\overline{\text{QoE}}_{k\ne j} \quad, $$ where \(\overline{\text{QoE}}_{k\ne j}\) is the average QoE of the services other than j. Such a difference is used as a measure of the distance and direction from the balance situation. A classical proportional controller is used to modify SPIs. The response of the controller is represented in Fig. 1. It can be observed that SPI changes are inversely proportional to \(\Delta\overline{\text{QoE}}_{j}\). Thus, if a service experiences a QoE larger than that of the other services, its SPI is decreased. The change is more aggressive if \(\Delta\overline{\text{QoE}}_{j}\) is higher than 1. The two slopes provide a gain scheduling mechanism to achieve an adequate trade-off between speed of response and system stability. The lower slope for low QoE differences reduces system sensitivity, ensuring stability and fine granularity at the end of the balancing process. The higher slope for larger QoE differences ensures fast convergence to equilibrium. Finally, upper and lower limits ensure that the largest variation of the SPI between consecutive loops is 2. Controller in the unweighted strategy Weighted strategy For network operators, the total number of satisfied users is a key driver. In this case, the percentage of users of each service becomes a very important parameter, since services with more users should be prioritized over those with fewer users. To favor services with more users, the indicator used to measure the QoE of a service is modified to include a weight dependent on the number of users of the service, as $$ \overline{\text{QoE}}_{j}^{W}=\frac{\overline{N}}{N_{j}}\cdot\overline{\text{QoE}}_{j} \quad, $$ where the superscript W denotes weighted, N_j is the number of users of service j in the network, and the weight factor \(\overline{N}\) represents the average number of users per service. Note that the larger the value of N_j, the lower the value of \(\overline{\text{QoE}}_{j}^{W}\), reflecting a worse value of the weighted QoE indicator for that service. The lower weighted QoE of more populated services is compensated for by the controller, which increases their priority so that those services receive more resources. As in the unweighted strategy, no distinction of target QoE is made among users within the same service. In the weighted strategy, the balancing process aims to reduce the difference between the weighted QoE indicator of each service and the mean value of the other services, computed as $$ \Delta\overline{\text{QoE}}_{j}^{W}=\overline{\text{QoE}}_{j}^{W}-\overline{\text{QoE}}_{k\ne j}^{W} \quad, $$ $$ \overline{\text{QoE}}_{k\ne j}^{W}=\frac{1}{N_{s}-1}\sum\limits_{k\ne j}\overline{\text{QoE}}_{k}^{W} \quad, $$ where \(\Delta\overline{\text{QoE}}_{j}^{W}\) is the difference between the weighted QoE of a service and that of the other services, \(\overline{\text{QoE}}_{k\ne j}^{W}\) is the average weighted QoE of the other services, and N_s is the number of services in the network. As shown in Fig. 2, the shape of the controller is exactly the same as in the unweighted case. Controller in the weighted strategy It should be pointed out that, in both strategies, equalizing QoE across services does not necessarily increase the overall QoE.
However, it is expected that, in normal situations, increasing the priority of services with the lowest QoE should improve the overall system QoE. Such an expectation is based on the shape of the utility functions, shown in (8)–(11). With them, the QoE increase obtained by reassigning more resources to an under-prioritized service is often larger than the loss of QoE caused by taking those resources from over-prioritized services that are receiving more resources than strictly needed. In the previous section, two balancing algorithms based on re-prioritizing services have been presented. Several tests are now carried out to assess their value. For clarity, the simulation setup is first introduced and results are presented later. Analysis setup In the absence of an analytical model or a live LTE system, performance assessment is done in a dynamic system-level LTE simulator [33]. The considered macrocellular scenario, shown in Fig. 3, consists of 19 tri-sectorized sites with 57 cells evenly distributed in space. Table 2 shows the main parameters of the simulator. The system bandwidth is configured to 6 Physical Resource Blocks (PRBs) to reduce the number of users needed to achieve a high network load, and thus reduce the computational load of simulations. Likewise, a hexagonal cellular layout and a uniform spatial user distribution have been selected to ease the analysis of results. The reader is referred to [33] for a more detailed explanation of the configuration parameters and the tool itself. Simulation scenario Table 2 Simulation parameters Table 3 shows the traffic mix used in the simulations, which is inspired by [35] and [3]. The average number of users per cell is large enough to ensure that the PRB utilization ratio is close to 100 %, so that services compete for radio resources and service priority has an impact on end user performance. Table 3 Traffic mix For repeatability, random variables are pre-generated to ensure that every optimization loop is carried out under exactly the same conditions. Thus, performance differences between loops are only due to changes in the SPI configuration. Table 4 shows the values of the internal scheduler parameters used in this work. The most important of those parameters, presented in Section 2.1, is t_c, which has a direct influence on PF_factor and therefore on the scheduling process. The value t_c = 1.25 in (6) means that the weight of past history (previous average data rate) is 0.2 and the weight of the present (instantaneous achievable data rate) is 0.8. Table 4 Scheduler internal parameters Three experiments are carried out. The aim of the first experiment is to show how the basic balancing algorithm manages to equalize the QoE across services. For this purpose, the unweighted algorithm is tested with an initial SPI configuration where all services begin with the same intermediate SPI value (=7). The aim of the second experiment is to check the impact of the initial SPI configuration. For this purpose, the unweighted algorithm is tested with an initial SPI configuration where the SPI is different for each service. Specifically, SPI_VoIP=2, SPI_VIDEO=13, SPI_FTP=5, and SPI_WEB=3. The last experiment aims to show how the weighted algorithm manages to prioritize the most populated services. Each experiment consists of 24 optimization loops (5 min per loop). Thus, 2 h of network time are simulated. For illustration, a minimal sketch of one optimization loop, combining the per-service QoE indicators of Section 3 with the SPI update, is given below.
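The following Python sketch is our own illustration of one optimization loop: it computes the (optionally weighted) average QoE per service, compares each service with the others, and applies a two-slope proportional step to the SPI, clipped to the range [1, 15] and limited to ±2 per loop. The slope values and the example inputs are assumptions made here for illustration; they are not the controller settings used in the paper.

```python
def service_qoe(per_user_qoe, users_per_service, weighted=False):
    """Average QoE per service (Eq. (12)); if weighted, apply the N_avg/N_j factor of Eq. (15)."""
    qoe = {s: sum(v) / len(v) for s, v in per_user_qoe.items()}
    if weighted:
        n_avg = sum(users_per_service.values()) / len(users_per_service)
        qoe = {s: (n_avg / users_per_service[s]) * q for s, q in qoe.items()}
    return qoe

def spi_step(delta_qoe, low_slope=0.5, high_slope=2.0, max_step=2.0):
    """Two-slope proportional controller: the SPI change has the opposite sign to the
    QoE difference, is steeper when |delta| > 1, and is limited to +/- max_step.
    Slope values are illustrative assumptions, not the paper's settings."""
    d = abs(delta_qoe)
    step = low_slope * d if d <= 1 else low_slope + high_slope * (d - 1)
    step = min(step, max_step)
    return -step if delta_qoe > 0 else step

def optimization_loop(per_user_qoe, users_per_service, spi, weighted=False):
    """One loop: recompute per-service QoE, compare each service with the others
    (Eqs. (13)-(14) or (16)-(17)), and update its SPI within [1, 15]."""
    qoe = service_qoe(per_user_qoe, users_per_service, weighted)
    new_spi = {}
    for j in qoe:
        others = [qoe[k] for k in qoe if k != j]
        delta = qoe[j] - sum(others) / len(others)
        new_spi[j] = min(15.0, max(1.0, spi[j] + spi_step(delta)))
    return new_spi

# Example with hypothetical QoE samples per service
qoe_samples = {"VoIP": [2.1, 2.4], "VIDEO": [3.0, 3.2], "FTP": [2.8], "WEB": [4.3, 4.1]}
users = {"VoIP": 50, "VIDEO": 25, "FTP": 10, "WEB": 15}
print(optimization_loop(qoe_samples, users, {s: 7.0 for s in users}))
```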
It is checked a posteriori that, with the proposed controller, and for the initial QoE imbalance between services in the simulated scenario, such a number of loops is enough to ensure that the system reaches equilibrium. To assess the value of an SPI configuration, several network performance indicators are used. The main figure of merit from the network perspective is the average QoE of the worst service, \(\min\{\overline{\text{QoE}}_{j}\}\) or \(\min\{\overline{\text{QoE}}_{j}^{W}\}\). Such a choice is consistent with the way operators monitor network QoE, where the worst users receive most of the attention to reduce churn rates. Note that the worst service in equilibrium (i.e., at the end of the optimization process) is not necessarily the same as in the first loop. The selection of the unweighted or weighted variant of the figure of merit depends on the aim of the balancing process. If the aim is to improve fairness among services, regardless of the number of users per service, the unweighted variant must be selected. In contrast, if the aim is to benefit the most populated service, the weighted variant is the proper choice. Other important network performance indicators are the global service QoE, defined in the unweighted version as the arithmetic mean of the average QoE of all services, $$ \text{QoE}_{\text{global}}=\frac{1}{N_{s}}\sum\limits_{j=1}^{N_{s}}\overline{\text{QoE}}_{j} \quad, $$ and the maximum QoE difference among services, defined as $$ \Delta\overline{\text{QoE}}_{\max}=\max\left\{\overline{\text{QoE}}_{j}\right\}-\min\left\{\overline{\text{QoE}}_{j}\right\} \quad. $$ For the weighted version, the overall QoE is obtained from the average user QoE, computed as $$ \text{QoE}_{\text{global}}^{W}=\frac{1}{N_{T}}\sum\limits_{i=1}^{N_{T}}\text{QoE}_{i} \quad, $$ where QoE_i is the quality experienced by each user and N_T is the total number of connections, and the maximum service QoE difference is computed as $$ \Delta\overline{\text{QoE}}^{W}_{\max}=\max\left\{\overline{\text{QoE}}_{j}^{W}\right\}-\min\left\{\overline{\text{QoE}}_{j}^{W}\right\} \quad. $$ Although the above-described network performance indicators are computed in every optimization loop, the focus is on the values obtained at the end of the tuning process. To assess the algorithm from the control perspective, the whole trajectory is evaluated by checking convergence speed and stability. The former is given by the number of loops needed to reach equilibrium, while the latter is based on the absence of fluctuations in system parameters. The results of the unweighted algorithm are presented first, since they are easier to analyze. The results of the weighted variant are discussed later. Unweighted strategy/equal initial SPI In the first experiment, the initial SPI value for all services is set to an intermediate level (i.e., 7). Figure 4 presents the evolution of the QoE and SPI of each service. In Fig. 4a, it is observed that, with the initial SPI settings (i.e., loop 1), the RT service (VoIP) experiences the lowest QoE, whereas the WEB service has the largest QoE. It is inferred that, with the initial configuration, the scheduler benefits WEB by allocating enough resources to download web pages when needed. This is just a consequence of the low throughput threshold in the utility function of WEB, presented in (11), which makes it the least restrictive service. Results in the unweighted strategy with equal initial SPI settings. a Average service QoE. b SPI parameter In Fig. 4b, it is observed that, in only three loops, the algorithm already manages to balance the QoE of VIDEO, VoIP, and FTP by reducing the SPI of services with QoE above the average (i.e., WEB) and increasing the SPI of those below the average (i.e., VoIP, VIDEO, and FTP). Thereafter, the algorithm tries to increase the QoE of VoIP, VIDEO, and FTP at the expense of WEB. Even when the SPI of WEB reaches its lower limit, the SPIs of all other services keep increasing. Peaks in the QoE_FTP curve are easily explained by observing the evolution of the SPI parameters, shown in Fig. 4b. A comparison of both figures reveals that abrupt changes of QoE occur when SPI_FTP has just become greater than SPI_VIDEO, i.e., when the priority of the video service becomes lower than that of the FTP service. A closer analysis shows that this happens whenever SPI_VIDEO falls below the SPI of another service. This is due to the fact that the video service occupies more than 50 % of the PRBs in the network, which makes the system quite sensitive to changes in the priority of the video service. Figure 5 compares the evolution of the global QoE against that of the minimum service QoE (primary axis) and the maximum QoE difference (secondary axis). As expected, the balancing algorithm achieves nearly a fourfold reduction of the maximum QoE difference, from 1.74 to 0.47. As a result, the QoE of the worst service (VoIP, VIDEO, or FTP, depending on the loop) is increased from 1.8 to 2.25. Such an improvement is obtained without changing QoE_global, except for the positive peaks observed in QoE_FTP. Figures of merit in the unweighted strategy with equal initial SPI settings Unweighted strategy/different initial SPI To check the influence of initial parameter settings, the unweighted algorithm is initialized with an uneven SPI configuration selected at random, where VIDEO has a larger SPI, WEB and FTP have lower SPIs, and VoIP has the lowest SPI, close to the minimum. Figure 6 shows the evolution of QoE and SPI for each service. Again, it is observed that the balancing algorithm manages to reduce QoE differences among services. Nonetheless, WEB remains the best service despite reaching the minimum SPI value (=1) at the beginning of the process. This is because WEB is the least demanding service. From the comparison of both figures, it is deduced that every time SPI_VoIP crosses the SPI of another service, the QoE of VoIP increases, especially when crossing that of the VIDEO service. This was expected since VIDEO is the service demanding the largest amount of resources. Results in the unweighted strategy with uneven initial SPI settings. a Average service QoE. b SPI parameter Figure 7 shows the evolution of the three figures of merit. In the figure, it is observed that \(\Delta\overline{\text{QoE}}_{\max}\) on the secondary axis decreases by a factor of more than six (from 1.93 to 0.31) after tuning the SPIs. Likewise, \(\min\{\overline{\text{QoE}}_{j}\}\) on the primary axis improves by 90 % (from 1.18 to 2.25). In addition, the global QoE on the primary axis improves as a result of the balancing process. Figures of merit in the unweighted strategy with uneven initial SPI settings Weighted strategy/equal initial SPI The last experiment shows how the weighted algorithm improves the average end user experience by prioritizing the most populated services. User ratios in Table 3 show that, in the considered case, VoIP is the service with the most users (50 %) and FTP is the one with the fewest users (10 %).
Figure 8a shows the evolution of the indicator balanced by the weighted algorithm (i.e., the average service QoE divided by the number of users per service), while Fig. 8b presents the SPI configuration trend. It is observed that the SPI of VoIP reaches the maximum value (=15) almost immediately, since VoIP is the most populated service. After that, \(\overline{\text{QoE}}_{\text{VoIP}}^{W}\) barely changes. In fact, the situation is almost stable after loop number 13, since thereafter SPI_FTP and SPI_WEB stagnate at 1 (i.e., the minimum value) and only SPI_VIDEO varies very slowly. It is observed that the balancing process reaches saturation in this case. Results in the weighted strategy. a Average service QoE. b SPI parameter Figure 9 shows the evolution of the global service QoE on the primary axis, and the maximum weighted QoE difference, \(\Delta\overline{\text{QoE}}^{W}_{\max}\), and the minimum weighted QoE, \(\min\{\overline{\text{QoE}}_{j}^{W}\}\), on the secondary axis. As expected, \(\Delta\overline{\text{QoE}}^{W}_{\max}\) is halved at the end of the balancing process. Likewise, the minimum weighted service QoE improves by 70 % as a result of increasing the priority of the most populated service (i.e., VoIP). A beneficial side effect is that QoE_global^W improves by 15 %, especially in the first iterations. Figures of merit in the weighted strategy The case of the weighted strategy with uneven initial SPI settings (not shown here for brevity) leads to the same conclusions. In this paper, a self-tuning algorithm for adjusting parameters in a multi-service packet scheduler of an LTE base station based on network statistics has been proposed. The aim of the algorithm is to ensure that all users achieve the same average QoE regardless of the type of service. For this purpose, the proposed iterative algorithm changes service priority parameters to re-prioritize services so as to equalize the QoE among services. Controlling QoE instead of QoS makes it easier to compare services with very different QoS constraints. Two variants of the algorithm have been presented, depending on whether the aim is to improve the average service QoE (unweighted approach) or the average user QoE (weighted approach). Method assessment has been carried out in a dynamic system-level LTE simulator implementing the downlink of a regular scenario. Results have shown that the unweighted version of the algorithm can equalize the QoE of services by changing the service priority parameter from different initial settings. Thus, the QoE of the worst service is doubled by re-prioritizing services properly. Likewise, the weighted version of the algorithm improves the average user QoE by 15 % by increasing the priority of the most populated services. The unweighted strategy is the preferred one if fairness among services is desired, regardless of the number of users per service. On the other hand, the weighted strategy should be selected when the aim is to favor the most populated service. It should be pointed out that, if different utility functions were considered, a different situation might be reached at the end of the SPI adjustment process. Generally, a more optimistic utility function for a service, showing a higher MOS for the same QoS, would lead to a decrease in the SPI of that service and, hence, a lower service priority. However, it is worth noting that the balancing algorithm itself would remain the same, regardless of the utility functions.
It is left for future work to design more sophisticated controllers that ensure optimal network performance by applying classical optimization techniques instead of simple balancing rules. It is also intended to analyze how the proposed controller influences QoS metrics.
D Soldani, SK Das, M Hassan, JA Hassan, GD Mandyam, Traffic management for mobile broadband networks. IEEE Commun. Mag. 49(10), 98–100 (2011).
A Banerjee, Revolutionizing CEM with subscriber-centric network operations and QoE strategy. White paper, Heavy Reading (2014).
Next Generation Mobile Networks, Recommendation on SON and O&M requirements. Technical report, NGMN (2008).
J Ramiro, K Hamied, Self-Organizing Networks (SON): Self-Planning, Self-Optimization and Self-Healing for GSM, UMTS and LTE (Wiley, UK, 2011).
S Hämäläinen, H Sanneck, C Sartori, LTE Self-Organizing Networks (SON): Network Management Automation for Operational Efficiency (Wiley, UK, 2012).
Use Cases related to Self Organising Network, Overall Description. Technical report, NGMN (2007).
3GPP TR 36.902, LTE; Evolved Universal Terrestrial Radio Access Network (E-UTRAN); Self-configuring and self-optimizing network (SON) use cases and solutions (V9.2.0, ETSI, Sophia Antipolis Cedex, France, 2010).
4G Americas, Self-optimizing networks in 3GPP Release 11: The benefits of SON in LTE. Technical report (2013).
H Holma, A Toskala, LTE for UMTS-OFDMA and SC-FDMA Based Radio Access (Wiley, UK, 2009).
KI Pedersen, TE Kolding, F Frederiksen, IZ Kovács, D Laselva, PE Mogensen, An Overview of Downlink Radio Resource Management for UTRAN Long-Term Evolution. IEEE Commun. Mag. 47(7), 86–93 (2009).
FRM Lima, TF Maciel, WC Freitas, FRP Cavalcanti, Resource Assignment for rate maximization with QoS guarantees in multiservice wireless systems. IEEE Trans. Veh. Technol. 61(3), 1318–1332 (2012).
S Sesia, I Toufik, M Baker, LTE, the UMTS Long Term Evolution: from Theory to Practice (Wiley, USA, 2009).
R Kwan, C Leung, J Zhang, Proportional fair multiuser scheduling in LTE. IEEE Signal Proc. Lett. 16(6), 461–464 (2009).
R Kwan, C Leung, J Zhang, Multiuser scheduling on the downlink of an LTE cellular system. Res. Lett. Commun. 2008 (2008). doi:10.1155/2008/323048.
RK Almatarneh, MH Ahmed, OA Dobre, in IEEE 72nd Vehicular Technology Conference Fall (VTC 2010-Fall). Performance Analysis of Proportional Fair Scheduling in OFDMA Wireless Systems (IEEE, Ottawa, ON, Canada, 2010), pp. 1–5.
L Wang, AH Aghvami, in IEEE Global Telecommunications Conference (GLOBECOM '99), 5. Optimal power allocation based on QoS balance for a multi-rate packet CDMA system with multimedia traffic (IEEE, Rio de Janeiro, Brazil, 1999), pp. 2778–2782.
N Bansal, KR Pruhs, Server scheduling to balance priorities, fairness, and average quality of service. SIAM J. Comput. 39(7), 3311–3335 (2010).
H Ackermann, S Fischer, M Hoefer, M Schöngens, Distributed algorithms for QoS load balancing. Distrib. Comput. 23(5–6), 321–330 (2011).
T Farkhondeh, YS Chan, JJ Lee, Scheduling with Quality of Service Support in Wireless System. Google Patents. EP Patent 2,277,329 (2014). http://www.google.com/patents/EP2277329B1?cl=en. Access date: October 2014.
3GPP TS 23.203, Technical Specification Group Services and System Aspects; Policy and charging control architecture (V13.1.0, ETSI, Sophia Antipolis Valbonne, France, 2014).
J-H Rhee, JM Holtzman, D-K Kim, in IEEE 57th Semiannual Vehicular Technology Conference (VTC 2003-Spring), 1. Scheduling of Real/Non-real Time Services: Adaptive EXP/PF Algorithm (IEEE, Jeju, Korea, 2003), pp. 462–466.
X Li, Y Zaki, Y Dong, N Zahariev, C Goerg, in IEEE 6th Joint IFIP Wireless and Mobile Networking Conference (WMNC). SON Potential for LTE Downlink MAC Scheduler (IEEE, Kyoto, Japan, 2013), pp. 1–7.
S Khan, S Duhovnikov, E Steinbach, W Kellerer, MOS-Based Multiuser Multiapplication Cross-Layer Optimization for Mobile Multimedia Communication. Adv. Multimed. 2007(1) (2007). doi:10.1155/2007/94918.
P Ameigeiras, JJ Ramos-Munoz, J Navarro-Ortiz, P Mogensen, JM Lopez-Soler, QoE oriented cross-layer design of a resource allocation algorithm in beyond 3G systems. Comput. Commun. 33(5), 571–582 (2010).
S Thakolsri, W Kellerer, E Steinbach, in IEEE International Conference on Communications (ICC). QoE-based cross-layer optimization of wireless video with unperceivable temporal video quality fluctuation (2011).
M Shehada, S Thakolsri, Z Despotovic, W Kellerer, in IEEE 14th International Symposium on Wireless Personal Multimedia Communications (WPMC). QoE-based Cross-Layer Optimization for Video Delivery in Long Term Evolution Mobile Networks (IEEE, Brest, France, 2011), pp. 1–5.
A El Essaili, L Zhou, D Schroeder, E Steinbach, W Kellerer, in IEEE 13th International Workshop on Multimedia Signal Processing (MMSP). QoE-driven Live and On-demand LTE Uplink Video Transmission (IEEE, Hangzhou, China, 2011), pp. 1–6.
F Wamser, D Staehle, J Prokopec, A Maeder, P Tran-Gia, in Proceedings of the 24th International Teletraffic Congress (ITC), no. 15. Utilizing Buffered Youtube Playtime for QoE-Oriented Scheduling in OFDMA Networks (International Teletraffic Congress, Kraków, Poland, 2012).
P Patras, A Banchs, P Serrano, A control theoretic approach for throughput optimization in IEEE 802.11e EDCA WLANs. Mob. Netw. Appl. 14(6), 697–708 (2009).
W He, K Nahrstedt, X Liu, End-to-end delay control of multimedia applications over multihop wireless links. ACM Trans. Multimed. Comput. Commun. Appl. (TOMCCAP) 5(2) (2008). doi:10.1145/1413862.1413869.
H Luo, M-L Shyu, Quality of service provision in mobile multimedia - a survey. Human-centric Comput. Inf. Sci. 1(1), 1–15 (2011).
D Soldani, HX Jun, B Luck, in IEEE 73rd Vehicular Technology Conference (VTC 2011-Spring). Strategies for Mobile Broadband Growth: Traffic Segmentation for Better Customer Experience (IEEE, Budapest, Hungary, 2011), pp. 1–5.
P Muñoz, I de la Bandera, F Ruiz, S Luna-Ramírez, R Barco, M Toril, P Lázaro, J Rodríguez, Computationally-efficient design of a dynamic system-level LTE simulator. Int. J. Electron. Telecommun. 57, 347–358 (2011).
P Seeling, M Reisslein, Video transport evaluation with H.264 video traces. IEEE Commun. Surv. Tutor. 14(4), 1142–1165 (2012).
3GPP TSG-RAN1#48, LTE physical layer framework for performance verification (R1-070674, St. Louis, MI, USA, 2007).
G Gómez, J Lorca, R García, Q Pérez, Towards a QoE-Driven Resource Control in LTE and LTE-A Networks. J. Comput. Netw. Commun. (2013). doi:10.1155/2013/505910.
International telephone connections and circuits – General Recommendations on the transmission quality for an entire international telephone connection; One-way transmission time (ITU-T Recommendation G.114, Geneva, Switzerland, 2003).
International telephone connections and circuits – General definitions; The E-model, a computational model for use in transmission planning (ITU-T Recommendation G.107, Geneva, Switzerland, 1998).
RK Mok, EW Chan, RK Chang, in IFIP/IEEE International Symposium on Integrated Network Management (IM). Measuring the Quality of Experience of HTTP Video Streaming (IEEE, Dublin, Ireland, 2011), pp. 485–492.
J Navarro-Ortiz, JM Lopez-Soler, G Stea, in IEEE European Wireless Conference (EW). Quality of Experience Based Resource Sharing in IEEE 802.11e HCCA (IEEE, Lucca, Italy, 2010), pp. 454–461.
This work has been funded by the Spanish Ministry of Economy and Competitiveness (TIN2012-36455), and Optimi-Ericsson, Agencia IDEA (Consejería de Ciencia, Innovación y Empresa, Junta de Andalucía, ref. 59288) and FEDER.
Ingeniería de Comunicaciones, Universidad de Málaga, Campus de Teatinos S/N, Málaga, 29071, Spain: Pablo Oliver-Balsalobre, Matías Toril & Salvador Luna-Ramírez. Ericsson, Severo Ochoa 51, Málaga, 29590, Spain: José María Ruiz Avilés.
Correspondence to Pablo Oliver-Balsalobre.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Oliver-Balsalobre, P., Toril, M., Luna-Ramírez, S. et al. Self-tuning of scheduling parameters for balancing the quality of experience among services in LTE. J Wireless Com Network 2016, 7 (2016). doi:10.1186/s13638-015-0508-x
Keywords: Long term evolution; Quality of experience; Self-organizing networks; Re-prioritization
Principle of parallel plate capacitors - Definition, Capacitance - eSaral Hey, do you want to learn about the principle of the parallel plate capacitor? If yes, then keep reading. Principle of the parallel plate capacitor Let an insulated metal plate A be given a positive charge until its potential becomes maximum. When another insulated plate B is brought near A, the inner face of B becomes negatively charged by induction and the outer face becomes positively charged. The negative charge tries to reduce the potential of A and the positive charge tries to increase it. When the outer surface of B is earthed, the positive charge flows to the earth while the negative charge stays on, causing a reduction in the potential of A. Thus, a larger amount of charge can be given to A to raise it to the maximum potential. The capacitance of an insulated conductor is increased by bringing an uncharged earthed conductor near it. An arrangement of two conductors carrying equal and opposite charges separated by a dielectric medium is said to form a capacitor. The capacitor is an arrangement for storing a large amount of charge, and hence electrical energy, in a small space. The capacity of a capacitor is defined as the ratio of the charge Q on the plates to the potential difference between the plates, i.e., $C=\frac{Q}{V}$. Capacitors are used in various electrical circuits, such as oscillators, tuning circuits, filter circuits, and electric fans and motors. The shape of the conductors can be plane, spherical, or cylindrical, making a parallel plate, spherical, or cylindrical capacitor. Capacitors in parallel Capacitors are said to be connected in parallel between two points if it is possible to proceed from one point to the other along different paths. Capacitors are said to be in parallel if the potential across each individual capacitor is the same and equal to the applied potential. The charge on each capacitor is different and is proportional to the capacity of the capacitor, $Q \propto C$, so $Q_{1}=C_{1}V$, $Q_{2}=C_{2}V$, $Q_{3}=C_{3}V$. The parallel combination obeys the law of conservation of charge, so $Q=Q_{1}+Q_{2}+Q_{3}=C_{1}V+C_{2}V+C_{3}V=\left(C_{1}+C_{2}+C_{3}\right)V$. The equivalent capacitance is $C_{p}=\frac{Q}{V}=C_{1}+C_{2}+C_{3}$. The equivalent capacitance may be defined as the capacitance of a single capacitor that would acquire the same total charge Q with the same potential difference V. The equivalent capacitance in parallel is equal to the sum of the individual capacitances. The equivalent capacitance is greater than the largest of the individual capacitances. Capacitors are connected in parallel (a) to increase the capacitance and (b) when a larger capacitance is required at low potential. If n identical capacitors are connected in parallel, then the equivalent parallel capacitance is $C_{p}=nC$. The total energy stored in the parallel combination is $U=\frac{1}{2} C_{p} V^{2}=\frac{1}{2}\left(C_{1}+C_{2}+C_{3}+\ldots\right)V^{2}$, or $U=\frac{1}{2} C_{1} V^{2}+\frac{1}{2} C_{2} V^{2}+\ldots = U_{1}+U_{2}+U_{3}+\ldots$ The total energy stored in the parallel combination is equal to the sum of the energies stored in the individual capacitors.
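As a quick worked example (with illustrative values chosen here, not taken from the article), consider three capacitors of 2 μF, 3 μF, and 5 μF connected in parallel across a 10 V battery:
$$ C_{p}=2+3+5=10\ \mu\text{F},\qquad Q=C_{p}V=10\ \mu\text{F}\times 10\ \text{V}=100\ \mu\text{C}, $$
$$ U=\tfrac{1}{2}C_{p}V^{2}=\tfrac{1}{2}\times 10\times 10^{-6}\ \text{F}\times(10\ \text{V})^{2}=5\times 10^{-4}\ \text{J}=0.5\ \text{mJ}. $$
Each capacitor carries a charge $Q_{i}=C_{i}V$ (20, 30, and 50 μC, respectively), and the individual energies (0.1, 0.15, and 0.25 mJ) indeed add up to 0.5 mJ.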
If n plates are arranged as shown, they constitute (n–1) capacitors in parallel, each of capacitance $\frac{\varepsilon_{0} A}{d}$. The equivalent capacitance is $C_{P}=(n-1) \frac{\varepsilon_{0} A}{d}$. Capacitance of a parallel plate capacitor with a conducting slab The original uniform field $E_{0}$ exists over a distance d − t, so the potential difference between the plates is $V=E_{0}(d-t)=\frac{\sigma}{\varepsilon_{0}}(d-t)=\frac{Q}{\varepsilon_{0} A}(d-t)$. The capacitance is $C=\frac{Q}{V}=\frac{\varepsilon_{0} A}{d(1-t/d)}=\frac{C_{0}}{1-t/d}$. Since $C>C_{0}$, the capacitance increases on introducing a metallic slab between the plates. Capacitance of a parallel plate capacitor with a dielectric slab When a dielectric is introduced between the plates, the field $E_{0}$ is present outside the dielectric and the field E exists inside the dielectric. The potential difference between the plates is $V=E_{0}(d-t)+Et=E_{0}(d-t)+\frac{E_{0} t}{K}=E_{0}\left[d-t\left(1-\frac{1}{K}\right)\right]$, i.e., $V=\frac{\sigma}{\varepsilon_{0}}\left[d-t\left(1-\frac{1}{K}\right)\right]=\frac{Qd}{\varepsilon_{0} A}\left[1-\frac{t}{d}\left(1-\frac{1}{K}\right)\right]$. The capacitance is $C=\frac{Q}{V}=\frac{\varepsilon_{0} A}{d\left[1-\frac{t}{d}\left(1-\frac{1}{K}\right)\right]}=\frac{C_{0}}{1-\frac{t}{d}\left(1-\frac{1}{K}\right)}$. Since $C>C_{0}$, the capacitance increases on introducing a dielectric slab between the plates of the capacitor. The capacitance is independent of the position of the dielectric slab between the plates. If the whole space is filled with the dielectric, then t = d and $C=K C_{0}$. Energy stored in a capacitor The charging of a capacitor involves the transfer of electrons from one plate to another. The battery transfers a positive charge from the negative to the positive plate. Some work is done in transferring this charge, which is stored as electrostatic energy in the field. If a charge dq is given to the capacitor at potential V, then dW = V dq, so $W=\int_{0}^{Q} \frac{q}{C}\, dq=\frac{Q^{2}}{2 C}=\frac{1}{2} C V^{2}=\frac{1}{2} Q V$. The energy is stored in the electric field between the plates of the capacitor. The energy stored depends on the capacitance, charge, and potential difference; it does not depend on the shape of the capacitor. The energy is obtained at the cost of the chemical energy of the battery. So, that's all from this article. If you liked this article on the principle of parallel plate capacitors then please share it with your friends. If you have any confusion related to this topic then feel free to ask in the comments section down below. For a better understanding of this chapter, please check the detailed notes of Electric charge and field. To watch free learning videos on physics by Saransh Gupta sir, install the eSaral App.
Medicinal value of asiaticoside for Alzheimer's disease as assessed using single-molecule-detection fluorescence correlation spectroscopy, laser-scanning microscopy, transmission electron microscopy, and in silico docking Shahdat Hossain1,2, Michio Hashimoto1, Masanori Katakura1, Abdullah Al Mamun1 & Osamu Shido1 BMC Complementary and Alternative Medicine volume 15, Article number: 118 (2015) Cite this article Identifying agents that inhibit amyloid beta peptide (Aβ) aggregation is the ultimate goal for slowing Alzheimer's disease (AD) progression. This study investigated whether the glycoside asiaticoside inhibits Aβ1–42 fibrillation in vitro. Fluorescence correlation spectroscopy (FCS), which evaluates the Brownian diffusion times of moving particles in a small confocal volume at the single-molecule level, was used. If asiaticoside inhibits the early steps of Aβ1–42 fibrillation, more Aβs would remain free and diffuse rapidly in the confocal volume. In contrast, weaker or no inhibition permits a greater number of Aβs to polymerize into oligomers, leading to fibers, and gives rise to slow diffusion times in the solution. Trace amounts of 5-carboxytetramethylrhodamine (TAMRA)-labeled Aβ1–42 in the presence of excess unlabeled Aβ1–42 (10 μM) were used as a fluorescent probe. Steady-state and kinetic thioflavin T (ThT) fluorospectroscopy, laser-scanning fluorescence microscopy (LSM), and transmission electron microscopy (TEM) were also used to monitor fibrillation. Binding of asiaticoside with Aβ1–42 at the atomic level was computationally examined using the Molegro Virtual Docker and PatchDock. With 1 h of incubation time for aggregation, FCS data analysis revealed that the diffusion time of TAMRA-Aβ1–42 was 208 ± 4 μs, which decreased to 164 ± 8.0 μs in the presence of asiaticoside, clearly indicating that asiaticoside inhibited the early stages of Aβ1–42 fibrillation, leaving more free Aβs in the solution and permitting their rapid diffusion in the confocal volume. The inhibitory effects were also evidenced by reduced fiber formation as assessed by steady-state and kinetic ThT fluorospectroscopy, LSM, and TEM. Asiaticoside elongated the lag phase of Aβ1–42 fibrillation, indicating that the formation of smaller amyloid species was impaired in the presence of asiaticoside. Molecular docking revealed that asiaticoside binds with intra- and inter-molecular amino acid residues of the amyloid, which are responsible for β-sheet formation and the longitudinal extension of fibrils. Finally, asiaticoside prevents the amyloidogenesis that precedes neurodegeneration in patients with Alzheimer's disease. Alzheimer's disease (AD) is pathologically characterized by the fibrillar deposition of amyloid beta peptides (Aβ) in the brain [1]. Aβ1–40 prevails in the cerebrospinal fluid of patients with AD [2]. However, Aβ1–42 deposition prevails in the brains of patients with AD [3], thereby implicating its involvement in the initiation of fibrillation. However, the mechanisms of Aβ fibrillation remain elusive. Several key events are considered to be crucial, including the mechanism by which Aβ1–42 abandons its random coil-alpha helix conformation and adopts a β-sheet conformation, leading to oligomerization and finally to matured fiber formation. These α-to-β conformational transitions are crucial for understanding the mechanisms of fiber formation and of inhibition by agents capable of inhibiting the fibrillation process. Asiaticoside, a highly polyphenolic compound, is one of the major triterpene glycosides in Centella asiatica (CA).
CA is an extremely important medicinal herb used in Java and other Indonesian islands, China [4], and other Asian countries [5], and is also becoming popular in Western countries [6]. Asiaticoside retains inherent properties to combat oxidative species, which are frequently observed in patients with AD-associated memory impairments [7,8]. CA increases intelligence and memory [9]. Asiaticoside also has been reported to protect against Aβ-induced neurotoxicity [10]. More importantly, asiaticoside has been patented as a dementia-treating agent and cognitive enhancer [11]. However, the mechanisms of action of asiaticoside have remained largely unknown. In this study, we primarily used fluorescence correlation spectroscopy (FCS) to examine whether asiaticoside inhibits Aβ1–42 fibrillation and its mechanism. FCS is a new method that can detect molecular motion at the nanomolar level in small sample volumes. This technique has recently attracted interest for investigating the molecular interactions of proteins [12,13]. FCS allows real-time monitoring of protein–protein interactions in a reaction solution without separation of the free and bound forms [14,15]. For example, FCS can be successfully used in an aggregating system using trace amounts of 5-carboxytetramethylrhodamine (TAMRA)-labeled Aβ1–42 in the presence of a large excess of unlabeled Aβ1–42 in a solution [16]. During aggregation, the fluorescent species will remain constant (because of the large excess of unlabeled molecules), and the diffusion time will gradually increase. Fluctuations in the fluorescence signal in a detection volume of approximately 1 fl (femtoliter) are analyzed using an autocorrelation function, revealing information about the diffusion properties of the fluorescent complexes; larger average complex sizes are associated with longer diffusion times. Changes in the average diffusion time reflect changes in the complex size and/or the ratio of free fluorescently labeled molecules in the complexes. In addition, steady-state and kinetic thioflavin T (ThT) fluorospectroscopy, transmission electron microscopy (TEM), and laser-scanning fluorescence microscopy (LSM) were used to elucidate the mechanism of asiaticoside-induced inhibition of Aβ1–42 fibrillation. In the field of molecular modeling, docking is a method that predicts the preferred orientation of one molecule to a second when bound to each other as a stable complex [17]. At present, the use of computers to predict the binding of small molecules to known target protein structures has been an important component in the drug discovery process [18,19]. However, there is no conclusive report regarding whether the asiaticoside docks onto Aβ1–42, and if so, the amino acid specificity with which it binds as ligand to inhibit amyloid aggregation is unclear. We, therefore, investigated whether asiaticoside binds with amyloidogenic hot spots, i.e., the amino acid residues involved in β-aggregation, which may further support the use of asiaticoside as an amyloidogenesis-inhibitory agent. Aβ1–42 (human, 1–42) was purchased from the Peptide Institute (Osaka, Japan). Asiaticoside was purchased from Sigma-Aldrich. The reference dye 5-carboxytetramethylrhodamine (TAMRA) was purchased from Olympus America Inc, whereas TAMRA-Aβ1–42 was obtained from ANASPEC Inc. CA. Other chemicals were of analytical grade. Uranyl acetate was obtained from BDH. 
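As a rough numerical illustration of the FCS autocorrelation analysis outlined above (our own sketch, not part of the original study), the Python snippet below fits a standard one-component 3D diffusion model to synthetic G(τ) curves whose true diffusion times mimic the fast (free probe) and slow (partly aggregated) cases; the model form, parameter values, and use of NumPy/SciPy are assumptions made here for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def g_3d(tau, n, tau_d, s=5.0):
    """One-component 3D diffusion autocorrelation model.
    n: mean number of fluorescent molecules in the confocal volume,
    tau_d: diffusion time, s: structure parameter (axial/lateral beam ratio)."""
    return (1.0 / n) / ((1.0 + tau / tau_d) * np.sqrt(1.0 + tau / (s**2 * tau_d)))

# Synthetic G(tau) curves (tau in microseconds) with small added noise:
# one mimicking fast diffusion and one mimicking slower, aggregate-like diffusion.
tau = np.logspace(0, 4, 60)
rng = np.random.default_rng(1)
for true_tau_d in (164.0, 208.0):
    g_obs = g_3d(tau, n=5.0, tau_d=true_tau_d) + rng.normal(0, 0.002, tau.size)
    (n_fit, tau_fit), _ = curve_fit(g_3d, tau, g_obs, p0=[1.0, 100.0])
    print(f"true tau_D = {true_tau_d} us, fitted tau_D = {tau_fit:.0f} us")
```

In real measurements, G(τ) is computed from the recorded intensity fluctuations by the instrument's correlator, and a longer fitted diffusion time indicates larger average complex sizes, as described above.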
All experiments were carried out with the approval of the appropriate ethics committee of Shimane University, in accordance with the Guidelines for Animal Experimentation of the Japanese Association for Laboratory Animal Science. Preparation of asiaticoside, TAMRA-Aβ1–42, and unlabeled Aβ1–42 Asiaticoside was dissolved in ethanol, diluted, N2-dried to remove the ethanol, and then mixed with assembly buffer to obtain final concentrations of 5, 10, and 20 μM. TAMRA-Aβ1–42 and unlabeled Aβ1–42 were dissolved in hexafluoroisopropanol (HFIP), aliquoted, and stored at −80°C until use. HFIP was also blown off with N2 prior to use in the fibrillation assay. Fluorescence Correlation Spectroscopy (FCS) In an FCS experiment, fluctuations of the fluorescence δF(t) around the average fluorescence <F(t)> are measured, yielding information on molecular processes or motions. The fluctuations of the fluorescence signal, δF(t), stem from changes in either the number of fluorescent particles or the fluorescence quantum yield of the particles in the open probe volume, which is defined by the confocal volume of a tightly focused laser beam. To analyze these fluctuations, the autocorrelation function G(τ) of the fluorescence intensity is calculated using the following equation: $$ G(\tau)=\frac{\left\langle \delta F(t)\,\delta F(t+\tau)\right\rangle}{\left\langle F(t)\right\rangle^{2}} $$ where τ is the correlation time. F(t) signifies the detected fluorescence intensity, δF(t) is its variable, fluctuating part, and <F(t)> denotes the mean. The angular brackets indicate a time average; <δF(t) δF(t + τ)> is the average product of a fluctuation's amplitude at time t and at a later time t + τ. F denotes the fluorescence signal as a function of time (see Figure 1). Schema of fluorescence correlation spectroscopy (FCS) measurement. (A). Confocal volume of FCS measurement. A pinhole provides axial confinement, resulting in a tiny (sub-femtoliter) detection volume (confocal volume). The fluorescence from the confocal volume is detected and processed using a digital correlator. (B). Principle of autocorrelation analysis of FCS measurements. δF(t), the fluorescence intensity fluctuations; <F(t)>, the time-averaged (< >) fluorescence intensity; G(τ), the autocorrelation curve constructed from δF(t). (C). Mechanism of asiaticoside-induced inhibition at early steps of Aβ1–42 fibrillation. FCS measurements were performed on a Fluoro Point Light system (Olympus, Tokyo, Japan) at room temperature using the onboard 543-nm helium–neon laser at a power of 100 μW for excitation, as previously described [20]. TAMRA-Aβ1–42 and unlabeled Aβ1–42 dissolved in HFIP were blown with N2 gas and re-dissolved in the assembly buffer with or without asiaticoside. Free TAMRA (rhodamine) was used as a reference dye. The final concentrations of TAMRA-Aβ1–42 and unlabeled Aβ1–42 were 5 nM and 10 μM, respectively. The measurements were performed in a sample volume of 50 μl in a 384-well glass-bottomed microplate. The samples were sequentially and automatically loaded into the device. FCS measurements were conducted at 0 and 1 h with a data acquisition time of 10 s per measurement, and measurements were repeated five times per sample. Steady-state Aβ1–42 fibrillation analysis using ThT fluorospectroscopy Aβ1–42 fibrillation was induced as previously described [8,21].
After blowing off the HFIP from the Aβ1–42 stock aliquot, the dried peptide (50 μM) was suspended in the desired volume of assembly buffer (100 μl of 50-mM Tris–HCl buffer, pH 7.4, containing 100 mM NaCl and 0.01% sodium azide) with or without asiaticoside (final concentrations of asiaticoside were 0, 5, 10, and 20 μM). The reaction mixture was placed in oil-free PCR tubes (Takara Shuzo, Otsu, Japan) flushed with N2 gas to obviate any effect of atmospheric oxygen, and incubated at 37°C for 24 h in a DNA thermal cycler (PJ480; Perkin Elmer Cetus, Emeryville, CA). After 24 h, the incubation was stopped by placing the tubes on ice, and 40-μl aliquots from each tube were mixed with 210 μl of 5-μM ThT in 50 mM glycine–NaOH buffer (pH 8.5) and subjected to fluorospectroscopy (Hitachi F-2500 fluorescence spectrophotometer) at excitation and emission wavelengths of 448 and 487 nm, respectively. Effects of asiaticoside on the kinetics of Aβ1–42 fibrillation The effect of asiaticoside on the kinetics of fibrillation was evaluated using 10 μM Aβ1–42. Briefly, the reaction mixture containing 1.0 ml of 50 mM Tris–HCl buffer, pH 7.4, 100 mM NaCl, 0.01% sodium azide, and the desired amount of Aβ1–42 with or without asiaticoside (20 μM) was placed in Eppendorf tubes and incubated at 37°C. At the desired time intervals, a 40-μl aliquot of the peptide mixture was gently removed and added to 210 μl of 5 μM ThT in 50 mM glycine–NaOH buffer (pH 8.5) for the fluorescence assay. Aβ1–42 fibrillation analysis using Laser Scanning Microscopy (LSM) A 2.5-μl aliquot of the fibrillated Aβ1–42 peptide (50 μM) sample from the ThT-Aβ1–42 assay with or without asiaticoside (20 μM) was diluted 2-fold with 5 μM ThT in 50 mM glycine–NaOH buffer (pH 8.5), transferred onto slides, and photographed using a confocal laser microscope system (CLSM FV300; Olympus). Fibrillation of 5 nM TAMRA-Aβ1–42 + 20 μM unlabeled Aβ1–42 with or without 20 μM asiaticoside was also conducted. Aggregates of TAMRA-Aβ1–42 were directly visualized under the microscope at an excitation wavelength of 542 nm. Aβ1–42 fibrillation analysis using TEM Fibrillation of Aβ1–42 (50 μM) was induced as described for ThT fluorospectroscopy. A 4-μl aliquot was used for electron microscopy. In brief, a droplet of the reaction mixture was spread onto carbon-coated grids, negatively stained with 1% uranyl acetate (pH 7.0), and examined under a Hitachi H-7000 electron microscope with an acceleration voltage of 75 kV. Docking study: Preparation of docking materials (ligand and receptor) The canonical SMILES string of asiaticoside (PubChem CID 108062; Figure 2) was submitted to Marvin 5.7.0 [22] to generate the 3D structure of this ligand molecule. The 3D structure of asiaticoside was subsequently energy-minimized and converted to the Protein Data Bank (PDB) file format using the Molegro Virtual Docker (MVD) [23]. Aβ1–42 (PDB ID: 2BEG) was downloaded from the Protein Data Bank as the receptor for asiaticoside docking. 2BEG is a 3D NMR structure of Aβ1–42, consisting of a homopentamer (chains A, B, C, D, and E). Each monomer of the 2BEG pentamer comprises 10 coordinate models [24]. All-atom molecular structure of asiaticoside (PubChem CID 108062). Dimer formation At least two amyloid molecules are required to achieve the repeating structure of a protofilament fibril; therefore, the coordinates of model 1 of the A monomer (A1) and model 1 of the B monomer (B1) were split from the composite 2BEG PDB file by MVD.
Subsequently, the monomer (A1)–monomer (B1) dimer (A1–B1) was generated by feeding the A1 and B1 monomers to RosettaDock [25], which runs 1000 independent decoy simulations and returns the A1–B1 dimer on the basis of energy minimization. Analysis of the unstructured and aggregation-prone regions of Aβ1–42 The FASTA format of the primary amino acid sequence of Aβ1–42 (>seq DAEFRHDSGYEVHHQKLVFFAEDVGSNKGAIIGLMVGGVVIA) was tested by ANCHOR [26] to elucidate the intrinsically unstructured region of this peptide. In addition, we used three state-of-the-art sequence-based computational methods, namely FoldAmyloid [27], AGGRESCAN [28], and ProA [29], to predict the amyloidogenic regions and aggregation-prone amino acid residues of the Aβ1–42 peptide chains. Computational analysis of binding sites of Aβ1–42 monomers and dimers The binding sites or pockets of the A1 monomer and the A1–B1 dimer were determined by GHECOM, which detects grid-based pockets/cavities on the surface of the protein [30]. The program produces a graph of residue-based pocketness. The presence of binding sites was also cross-checked using Q-SiteFinder [31], which uses the interaction energy between the protein and a simple van der Waals probe to locate energetically favorable binding sites. Intersurface interaction site analysis of the A1–B1 dimer Protein–protein (i.e., monomer–monomer, A1–B1 dimer) intersurface interaction sites were analyzed by feeding the A1–B1 dimer to the cons-PPISP server [32], which predicts the residues that are likely to form the binding site for partner proteins. To further validate the presence of hot spots among the dimer interface binding contacts, the dimer was fed to hot spot prediction servers, including KFC2 [33,34]. Docking simulation of asiaticoside onto the Aβ1–42 monomer (A1) and dimer (A1–B1) The molecular docking simulations were performed using the Molegro Virtual Docker (MVD) [23] and PatchDock [35]. Molegro Virtual Docker MVD is an automated docking software program with fast processing that automatically adds any missing hydrogen atoms of the ligand and receptor molecules. The software also has a module to create a surface over the receptor molecule and identify potential binding sites for its activity. The program generates 10 conformational positions, or so-called poses, for the ligand and returns the five best poses with the MolDock score (equivalent to the energy of binding/docking energy) and other thermodynamically calculated values. The MolDock score is a value by which one can identify the best-docked ligand and its conformation. MVD also presents hydrogen bond information together with other thermodynamic values that suggest the formation of stable complexes between ligands and receptor molecules. MVD performs flexible ligand docking with optimization of the ligand geometry during docking, meaning that the bond angles, bond lengths, and torsional angles of the ligand are modified during the generation of the receptor-ligand complex. PatchDock PatchDock is a shape complementarity/geometry-based molecular docking algorithm. It is aimed at finding docking transformations that yield good molecular shape complementarity. Such transformations induce both wide interface areas and small amounts of steric clashes. A wide interface is ensured to include several matched local features of the docked molecules that have complementary characteristics. The output of PatchDock is a list of candidate complexes between the receptor and ligand molecules.
The list is sorted according to the geometric shape complementarity score; the approximate interface area of the receptor–ligand complex; the atomic contact energy (ACE) between ligand and receptor; and the 3D transformation. Finally, the server provides an option to download the ligand–receptor complexes in the PDB format.
Results were expressed as the mean ± S.E. (standard error of the mean). For intergroup differences, the data were analyzed by the unpaired Student's t-test and one-way analysis of variance (ANOVA; for more than 2 groups). ANOVA was followed by Fisher's Protected Least Significant Difference (PLSD) for post hoc comparisons. Kinetic data of nucleation-dependent fibrillation were fitted to a non-linear, variable-slope sigmoidal equation. The statistical programs used were StatView® 4.01 (MindVision Software, Abacus Concepts, Inc., Berkeley, CA, USA) and GRAPHPAD PRISM (version 4.00; GraphPad Software Inc., San Diego, CA, USA). A level of P < 0.05 was considered statistically significant.
Effect of asiaticoside on the diffusion time of Aβ1–42
The diffusion time of the reference TAMRA dye was 100 ± 5 μs and that of TAMRA-Aβ1–42 alone was 150 ± 5 μs. The autocorrelation function of TAMRA-Aβ1–42 in the (TAMRA-Aβ1–42 + unlabeled Aβ1–42) sample was best fitted with a one-component analysis model, which resulted in a diffusion time of 208 ± 4 μs (Figure 3A). However, the diffusion time of TAMRA-Aβ1–42 in the (TAMRA-Aβ1–42 + unlabeled Aβ1–42 + 20 μM asiaticoside) samples was reduced to 164 ± 8.0 μs (Figure 3B).
Autocorrelation curves G(τ) of fluorescence fluctuations. Representative autocorrelation curves G(τ) of fluorescence fluctuations δF(t) of TAMRA-Aβ1–42 (5 nM) mixed with excess (20 μM) unlabeled Aβ1–42 without (A, control) or with 20 μM asiaticoside (B). The blue lines represent the raw data, whereas the red lines represent fitted data. The curve fit was good, as seen in the residuals, which are the differences between the data and the fitted curve. The residuals of a good fit fluctuate randomly and tightly about zero. (C) Diffusion times of TAMRA-Aβ1–42 during Aβ1–42 fibrillation in the absence (Aβ1–42 alone, control) and presence of different concentrations of asiaticoside (5–20 μM). Results are presented as the mean ± SE of quadruplicate determinations (n = 5).
Effect of asiaticoside on steady-state fibrillation of Aβ1–42
The fibrillation of Aβ1–42 only, as measured by the steady-state ThT fluorescence intensity, was set as 100% (control). Asiaticoside at 5, 10, and 20 μM significantly (P < 0.05) decreased fibril formation by 20, 29, and 33%, respectively, compared with that of the control (Figure 4A). After examining the inhibitory effect of the three concentrations of asiaticoside on Aβ1–42 fibrillogenesis, we also studied the effect of 20 μM asiaticoside on the kinetics of Aβ1–42 fibrillation.
Effects of asiaticoside on the fibrillation of Aβ1–42. (A) Steady-state fibrillation. ThT, thioflavin T; Aβ1–42, 50 μM. Bars with different letters indicate a significant difference (P < 0.05). Mean ± SE, n = 3. (B) Kinetics of fibrillation. Aβ1–42, 10 μM; Bar, mean ± SE, n = 3.
Fibrillation kinetics of Aβ1–42 at 50 μM (the concentration used for the steady-state ThT assay) displayed a very short lag phase (<10 min) under our experimental conditions (data not shown). This led us to assess the effect of asiaticoside on the kinetics of Aβ1–42 fibrillation at a concentration of 10 μM.
At this low concentration, fibrillation appeared to be highly cooperative, reminiscent of first-order phase transitions, and the fibrillation kinetics exhibited the characteristics of a typical nucleation-and-growth process (Figure 4B). The time course of fibrillation included a lag phase followed by a rapid exponential growth (elongation) of fibrils. The experimental data were fitted well by a variable-slope sigmoidal equation: Y = Bottom + (Top − Bottom)/(1 + 10^((LogT50 − X)·HillSlope)), where Bottom is the Y value at the bottom plateau, Top is the Y value at the top plateau, LogT50 is the X value at which the ThT fluorescence is halfway between Bottom and Top, and HillSlope describes the steepness of the curve. Lag time (LT) was defined as the time at which the tangent at the point of the maximum fibrillation rate intersects the abscissa. At the 95% confidence interval, T50 (half-life) was increased by asiaticoside (T50: 10 μM Aβ1–42 vs. 10 μM Aβ1–42 + asiaticoside = 17.9 ± 0.07 vs. 20.5 ± 0.13 h, with corresponding LTs of 13.4 and 15.4 h, respectively). Asiaticoside thus lengthened both the lag phase and the growth phase of fibrillation by 2–3 h.
Laser Scanning Microscopy (LSM)
The fibrillations of the unlabeled Aβ1–42 and asiaticoside + unlabeled Aβ1–42 samples were also examined by ThT laser-scanning fluoromicroscopy (LSM) (Figure 5A, B), which gave green fluorescence. The fibrillations of the TAMRA-Aβ1–42 + unlabeled Aβ1–42 and asiaticoside + TAMRA-Aβ1–42 + unlabeled Aβ1–42 samples were also examined by laser-scanning microscopy (LSM). TAMRA-Aβ1–42 evoked characteristic orange fluorescence (Figure 5C, D). The results disclosed the precipitation of large aggregates on the glass slides. In addition, the images were processed by ImageJ to calculate the areas of the green- or red-fluorescent Aβ aggregates. Areas < 5.0 μm² were not included in the image processing. The Aβ1–42 + asiaticoside samples had a lower number of green-fluorescent areas than did the untreated Aβ1–42 controls. Total green-fluorescent areas were 768.7 ± 9.9 and 92.1 ± 3.4 μm², respectively, in the controls and asiaticoside-treated samples (Figure 5A, B), suggesting that the aggregation of Aβs was inhibited by asiaticoside. The red spots (Figure 5C, D) are amyloid deposits of labeled Aβ1–42 (TAMRA-Aβ1–42). Again, the number of red-spot areas was significantly lower in the presence of asiaticoside, indicating that asiaticoside inhibited the fibrillation and consequently the amyloid deposits. Total red-fluorescence areas were, respectively, 2513 ± 9.4 and 481 ± 18.7 μm² for the control and asiaticoside-treated samples.
Visualization of amyloid aggregates by laser-scanning microscopy (LSM). Images were taken after overnight aggregation at 37°C with a concentration of 50 μM unlabeled Aβ1–42 (A) and 5 nM TAMRA-Aβ1–42 + 20 μM unlabeled Aβ1–42 (C). Aggregates of (50 μM unlabeled) Aβ1–42 displayed characteristic green fluorescence after their binding with ThT. TAMRA-Aβ1–42 itself exhibited orange fluorescence. Asiaticoside-incubated samples had fewer and smaller green (ThT) (B) and orange (TAMRA-Aβ1–42) (D) fluorescent areas than those of the controls, suggesting that Aβ1–42 fibrillation was inhibited by asiaticoside.
Effect of asiaticoside on Aβ1–42 morphology
The control samples (Aβ1–42 alone) contained abundantly aggregated amyloid fibrils in the assembly buffer, whereas the Aβ1–42 + asiaticoside samples contained only very small amounts of aggregates.
The former fibrils exhibited a typical filamentous and branching morphology (Figure 6A and its inset). The lengths of the fibers differed from grid to grid, and the fibers were nearly inconspicuous because of the presence of extensive branching. The fibers of the Aβ1–42 + asiaticoside samples (Figure 6B) displayed spaced, beaded and spheroidal structures, leading to necklace-like diffused proto-fibrillar filaments. The protofibrillar structures were found to align along the long axis of the fibers as if, they were 'on pathway' to complete the long fibers (arrow head). In the presence of 20 μM asiaticoside, these fibers exhibited as more diffused, oligomeric and cluster-like structures. Representative transmission electron microscopic (TEM) views of the effects of asiaticoside on fibrillation. A: Control (Aβ1–42 alone), Insets of A show complete fibers, protofibrills, and oligomeric globular amyloid species appeared to arrange in lines (red arrow) to form long fibers; B: Aβ1–42 + 10 μM asiaticoside; and C: Aβ1–42 + 20 μM asiaticoside. The fibers (B, C) became unstructured, diffused, and irregular, sometimes they appeared as globular structures (insets of B and C). The globular structures appeared more dispersed at 20 μM of concentrations of asiaticoside than those at 10 μM. These results indicate that asiaticoside clearly inhibited Aβ1–42 fibrillation. Salient features of A1 monomers and A1–B1 dimers The ANCHOR server (http://iupred.enzim.hu/) identified amino acids 1–18 that remained in a generally disordered region. These amino acids do not adopt a stable structure and cannot engage in sufficient numbers of favorable intrachain interactions (Figure 7A). The A1 monomer split from 2BEG is shown in Figure 7B, while the A1-B1 dimer generated by RosettaDock is shown in Figure 7C. The RosettaDock server gave the 10 best-scoring structures in rank order by energy after 1000 independent simulations as well as a plot of the energies of all 1000 structures created. Each point on the plot represents a structure created by the server (Figure 7D). The x-axis is a distance from the starting position, and the y-axis is the score (energy) of the structure. A hallmark of a successful run is an energetic "funnel" of low-energy structures clustered around a single position. Lührs et al. [24] also reported that residues 1–17 of the Aβ1–42 are disordered; however, residues 18–42 form a beta strand–turn–beta strand motif that contains two parallel, in-register beta sheets that are formed by residues 18–26 (β1) and 31–42 (β2). Intermolecular side chain contacts are formed between the odd-numbered residues of strand β1 of the nth molecule and the even-numbered residues of strand β2 of the (n − 1)th molecule. This interaction pattern leads to partially unpaired β-strands at the fibrillar ends, which explains the sequence selectivity, cooperativity, and apparent unidirectionality of Aβ fibril growth [24]. Features of the primary amino acid sequence of Aβ1–42. A: Prediction of the intrinsically unstructured amino acid region of Aβ1–42 by the ANCHOR server. ANCHOR identifies segments in a generally disordered region that cannot form sufficient numbers of favorable intrachain interactions. The amino acid residues from Asp1–Val18 exhibited unstructured characteristics. B: A1 monomer model of the Protein Data Bank (PDB) file of Aβ1–42 (2BEG), split by the Molegro Virtual Docker (MVD). C: Dimer (A-B1) of the A1 and B1 monomer of 2BEG. The dimer was formed by feeding the A1 and B1 monomers to RosettaDock. 
D: Validation graph of dimer (A1-B1) formation by RosettaDock. A hallmark of a successful dimer run is an energetic "funnel" of low-energy structures clustered around a single position. E,G: Grid-based pocketness cluster of the A1 monomer (E) and dimer (A1-B1) (G) by GHECOM. F,H: Contributions of amino acids of the A1 monomer (F) and A1–B1 dimer (H) to cluster pocketness. The line shows the value of pocketness [%] for each residue. A residue in a deeper and larger pocket has a larger value of pocketness. The color of the pocketness bar indicates the cluster number of pockets. I,J: The views were generated in Jmol viewer after the docking of asiaticoside onto the (A1) monomer and dimer (A1–B1) to demonstrate that the ligand (asiaticoside) truly bound with the pockets and/or binding sites. Binding sites/pocketness of the monomer and dimer The binding and active sites of proteins are often associated with structural pockets and cavities. The results of analyses of pocketness of the monomer and dimer are shown in Figure 7E and G, respectively. In the A1 monomer, the cluster with the highest degree of pocketness was located between residues 17–20 and 35–42 (Figure 7F). The degree of pocketness was higher in the A1–B1 dimer (Figure 7H). The Q-site finder also identified pocketness in similar regions of the monomers and dimer (data not shown). Aggregation-prone amino acid residues of Aβ1–42 The hotspots for aggregation are shown in Figure 8. The FoldAmylod analysis revealed that residues 17–21 and 33–36 of Aβ1–42 were aggregation-prone, whereas the AGGRESCAN analysis identified residues 17–22 and 29–42 as the aggregation-prone regions. Residues 17–20 and 35–42 of Aβ1–42 also exhibited a propensity for aggregation upon cross-examination using the ProA server. Analyses of aggregation-prone amino acid residues of Aβ1–42 peptide chain and monomer(A1)-monomer(B1) (A1B1dimer) interaction-sites, and interaction-sites between asiaticoside vs. monomer (A1) and dimer (A1B1) after docking. 1FoldAmyloid describes a method of prediction of amyloidogenic regions from protein sequence [27], while 2AGGRESCAN predicts and evaluates of "hot spots" of aggregation in polypeptides. By identifying aggregation-prone segments in proteins, AGGRESCAN facilitates the identification of possible therapeutic targets for anti-depositional strategies in conformational diseases [28]. 3ProA: Protein Aggregation Prediction Server, which provides access to two protein aggregation propensity prediction algorithms, based on the amino acid physicochemical properties important to protein aggregation [29]. 4Cons-PPISP is a consensus neural network method for predicting protein-protein interaction sites. Given the structure of a protein, cons-PPISP predicts the residues that likely form the binding site for another protein. The inputs to the neural network include position-specific sequence profiles and solvent accessibilities of each residue and its spatial neighbors. The neural network is trained on known structures of protein-protein complexes [32]. 5KFC2 (Knowledge-based FADE and Contacts) server predicts binding "hot spots" within protein-protein interfaces by recognizing structural features indicative of important binding contacts. The server analyzes several chemical and physical features surrounding an interface residue and predicts the classification of the residue using a model trained on prior experimental data [33,34]. 
6,7For comparison, whether aisaticoside-binding site overlaps with the aggregation-prone hot-spot amino acids and/or the monomer-monomer (A1-B1dimer) intersurface interaction sites, the docking results, derived from the Molegro Virtual Docker (MVD)6 [23] and PatchDock7 [35] of Figure 9 are also shown here. Docking poses of asiaticoside onto Aβ1–42 monomer and dimer. Best docking poses of the ligand (asiaticoside) and receptor (monomer and dimer) by the Molegro Virtual Docker (A,B,C,D) and PatchDock (E,F,G,H). A, C: Snapshot of asiaticoside in the best pose 1 docked onto the A1 monomer and A1-B1 dimer. The interaction (binding) energy (−ΔG), as the MolDock score, is shown in the figures. PatchDock (E,F,G,H): Snapshot of the Patch docking of asiaticoside on to monomer (E) and dimer (G) on the geometry-based molecular docking algorithm. The best-scored complex is shown here by the Marvin molecular viewer. Score: Geometric shape complementarity score. Area: Approximate interface area of the complex (receptor-ligand), ACE: Atomic contact energy. Atomic contact maps the receptor-ligand (Aβ1–42-asiaticoside) complexes (B, D of Molegro docker; F, H of PatchDocker) and hydrogen maps were delineated by feeding them to Molecular Viewer. Red line indicates hydrophobic interactions, Blue line indicates hydrogen bonds. Monomer–monomer and dimer intersurface interaction sites The amino acid residues involved in intersurface interactions are also shown in Figure 8. The intersurface interaction sites of the A1–B1 dimer included Phe19, Phe20, Met35, Val36, Gly37, Gly38, and Val40 of the A1 monomer and Val18, Phe19, Phe20, Ile32, Gly33, Leu34, Val36, Gly37, Val39, and Ile41 of the B1 monomer. The intersurface analysis also revealed that amino acids Gly25, Gly29, and Gly38 of B1 monomer remained buried in the interior of the dimer. The intersurface interaction sites were also rechecked using the KFC2 server, and the sites were located on both the A1 and B1 monomers (Figure 8). Docking results Molegro Virtual Docker (MVD) Docking of asiaticoside with the monomers generated five poses with unique chemical arrangements. In MVD, the best pose is assessed on the basis of scoring function (MolDock score). The score (Kcal/mol) mimics the potential energy change, when the protein and ligand come together. This means that a very negative score corresponds to a strong binding and a less negative or even positive score corresponds to a weak or non-existing binding. Water molecules were not included in the docking setup, thus the 'water-ligand interaction' did not contribute to the MolDock score. The MolDock score of pose 1 was lower than that of the other four poses (pose 1 = −117.85, pose 2 = −117.10, pose 3 = −107.27, pose 4 = −105.83, and pose 5 = −104.92 Kcal/mol), indicating the structure in pose 1 was superior. The MolDock scores of the five poses for docking with the dimer were as follows: pose 1 = −134.48, pose 2 = −118.74, pose 3 = −118.64, pose 4 = −113.89, and pose 5 = −109.09 Kcal/mol. The results of the computational docking of asiaticoside with the A1monomer and A1-B1dimer are shown in Figure 9. Figure 9A presents the findings for pose 1 docked onto the A1 monomer, and Figure 9C presents data for pose-1 docked onto the dimer (A1-B1). The amino acids with which asiaticoside bound when it docked onto the monomer (A1) and dimer (A1–B1) were visualized by the contact maps between the atoms of asiaticoside and the Aβ1–42 amino acid residues using MVD (Figures 8 and 9B and 9D). 
Residues Ala21, Glu22, Asp23, Gly25, Lys28, Gly33, Leu34, and Met35 of the A1 monomer bound with asiaticoside (Figure 9B). The amino acid residues of the A1–B1 dimer that sterically interacted with asiaticoside were Leu17, Phe19, Phe20, Ala21, Glu22, Asp23 (from the A1 monomer), and Gly38 (from the B1 monomer) (Figure 9D). The larger (more negative) binding energy of asiaticoside docked to the dimer, compared with that of asiaticoside docked to the monomer, suggests stronger binding to the dimer.
PatchDock
The PatchDock server provided a web page presenting the top 20 solutions. The best-scored receptor–ligand complex (Top 1) was visualized by Marvin Space (Figure 9E, 9G). Afterwards, the Top 1 complex was subjected to the Molegro Virtual Viewer (MVV) to show the atomic contact maps of the receptor–ligand complex (Figure 9F, 9H). The PatchDock results revealed that amino acids Asp23, Val24, Gly25, Ser26, Asn27, Lys28, Ile32, Gly33, Leu34, Met35, Val36, and Gly37 of the monomer bound (contacted) with asiaticoside (Figure 9F). When the Aβ1–42 dimer was docked with asiaticoside, the amino acid residues Leu17, Val18, Phe19, Val36, Val40, Ile41, and Ala42 of the A1 monomer, and Val40 and Ala42 of the B1 monomer, bound with asiaticoside (Figure 9H). The common features between the Molegro Virtual Docker and PatchDock were displayed in their shared amino acid binding sites for asiaticoside: Asp23, Gly25, Lys28, Gly33, Leu34, and Met35 of the monomer were bound with asiaticoside in both docking systems. Consistently, the amino acids Leu17 and Phe19 of the A1 monomer were common when the dimer (A1-B1) was docked with asiaticoside by both the Molegro Virtual Docker and PatchDock.
Fluorescence correlation spectroscopy (FCS) is a method based on the fluorescence intensity fluctuations that result from the dynamic movement and fluorescence quantum yield of a fluorescent molecule while it diffuses in and out of a small detection area. In this study we recorded the effect of asiaticoside on the dynamic properties of Aβ1–42 fibrillation by FCS. The initial steps of Aβ aggregation are preceded by the transformation of α-sheets to β-sheets, through lateral stacking of dimers, trimers, tetramers, and oligomers, leading to matured amyloid fibers [21]. The autocorrelation function of the FCS analysis revealed a diffusion time of 100 ± 5 μs for the reference dye TAMRA, vs. 150 ± 5 μs for TAMRA-Aβ1–42 alone. The increased diffusion time of the latter is consistent with its increased molecular mass. At time zero, the diffusion time of TAMRA-Aβ1–42 was not significantly different in the presence or absence of asiaticoside, demonstrating that the amyloids were in their monomeric state at the start and/or early steps of fibrillation. After 1 h of incubation, the diffusion time of TAMRA-Aβ1–42 in the TAMRA-Aβ1–42 + unlabeled Aβ1–42 samples increased to 208 ± 4 μs, indicating that amyloid species/units of relatively greater molecular mass appeared during the early steps of the Aβ fibrillation process, including oligomerization. This resulted in a concurrent slowing of the movement of the amyloid units as they entered and left the confocal volume of the FCS. In FCS, larger amyloid aggregates (of Aβ1–42) in solution diffuse less frequently through the confocal volume, thereby resulting in longer diffusion times.
Thus, the decreased diffusion time in the presence of asiaticoside reflects an inhibition of Aβ1–42 aggregation by this glycoside, allowing more Aβ1–42 in the confocal volume to freely diffuse in and out of the confocal volume. Repeated FCS measurements of 10-s sampling times illustrated an increase in the amplitude (y-axis). The amplitude reflects the inverse of the average number of fluorescent TAMRA-Aβ1–42 molecules in the focal volume. In other words, the correlation amplitude increases in size as the intensity fluctuation grows larger. This again demonstrates that asiaticoside inhibited amyloid fibrillation, allowing an increasing number of free TAMRA-Aβ1–42 molecules to freely diffuse in the confocal volume. The diffusion time decreased as the asiaticoside concentration increased (Figure 3C). Recently, C. asiatica extract, which contains two major triterpene glycosides (asiaticoside and madecassoside) as active components, is increasingly being used to enhance memory [9], which is severely impaired in patients with AD. Very recently, we have reported that the oral administration of madecassoside improves the memory loss of AD model rats, with concurrent inhibition of Aβ1–42 fibrillation [36]. In the present study, asiaticoside inhibited Aβ1–42 fibrillation at the early stages of the process, hinting at the basis of the neuroprotective effects of C. asiatica extract and/or its active components (asiaticoside and/or madecassoside) against Aβ1–42–induced AD. Steady-state ThT fluorospectroscopy also revealed that asiaticoside displayed the strongest inhibition of fibrillation at 20 μM. The results evolved from steady-state ThT fluorospectroscopy, however, did not directly disclose about the stage(s) at which asiaticoside exerted the inhibitory effect on fibrillation. Therefore, we evaluated the effect of asiaticoside on the kinetics of Aβ1–42 fibrillation. The ThT growth curve for Aβ1–42 was sigmoidal showing a dependence on time, which suggests that the amyloid assembly process is cooperative. The presence of asiaticoside led to a significant increase in the lag phase of Aβ1–42 fibrillation with a concurrent deceleration of the exponential growth of fibrils. Thus it would appear that the asiaticoside acted to reduce the rate of nucleation, and, however, once critical nuclei were formed elongation was also affected by the presence of the asiaticoside. Once the FCS measurements and kinetic and/or steady-state ThT-fluorospectroscopic measurements proved that asiaticoside delayed/inhibited fibrillation, we also examined the effect of asiaticoside on amyloid fibrillation using Laser Scanning Microscopy (LSM). The ThT-bound amyloidal aggregates (green) were clearly visible in the confocal images. The red fluorescence areas indicate the larger aggregates of TAMRA-Aβ1–42-intercalated fibers of Aβ1–42 (5 nM TAMRA-Aβ1–42 was used in excess of unlabeled Aβ1–42). Total numbers of fluorescent aggregates of Aβ1–42 control samples were much more frequent and larger than those of the Aβ1–42 + asiaticoside samples (Figure 5). Therefore, the LSM results are consistent with those of the FCS and steady-state/kinetics of ThT fluorospectroscopic studies, illustrating that asiaticoside exhibits a strong anti-fibrillation effect. 
In the TEM grid fields, we directly observed high-resolution images of dense aggregates of matured fibers; long or short, single or a few scattered matured fibrils; typical branched fibrils; protofibrils; and oligomer-like and, in some instances, amorphous and other unstructured smaller aggregates of Aβ1–42 (Figure 6). This suggests that fibrillation of Aβ1–42 goes through a diverse, dynamic exchange of conformational stages and includes continuous shuffling and growth, particularly during the first stages of aggregation: Aβ1–42 can adopt, at the least, a globular structure or so-called nucleating center. These globular structures form oligomers leading to protofibrils, which then associate into elongated fibers, and finally into the most highly ordered, matured Aβ1–42 fibers. Asiaticoside-induced interference with Aβ1–42 fibrillation was also evident from the morphology of the Aβ1–42 fibrils in the transmission electron micrographs. Consistent with the fact that pretreatment of Aβ1–42 with asiaticoside kinetically delayed the formation of amyloid nucleating centers and reduced the protofibrillar structures, the TEM grids of the Aβ1–42 + asiaticoside samples were also devoid of complete fibers, and only a small number of fibrils exhibited a diffuse, shorter, and sometimes spheroid-type morphology. Structural characterization of these spheroidal species has remained elusive. Soluble amyloid β-oligomeric intermediates, emerging during amyloid fibril assembly, represent the main molecular species responsible for toxicity to cells and tissues [15]. The presence of these indefinable spheroidal amyloid species in TEM may thus raise the question of whether asiaticoside would confer toxicity to cells and deteriorate Alzheimer's symptoms. However, there are reports that proteins can assemble into ordered aggregates without invoking the cross-β motif [37]. Ehrnhoefer et al. [38] showed that the polyphenol epigallocatechin gallate redirected Aβ1–42 into unstructured and nontoxic off-pathway oligomers. The asiaticoside-induced decreases in the formation and quantity of amyloid fibers (Figures 4, 5 and 6) are thus qualitatively consistent with these reports. Therefore we speculate that the spheroidal amyloid species seen in the TEMs of the asiaticoside + Aβ1–42 samples are not toxic oligomer species. Otherwise, asiaticoside-containing Centella asiatica extract could not have beneficial effects on memory [9], dementia and cognition [11], neuronal damage [39], neurite outgrowth [40], and Aβ1–42-induced neurotoxicity in AD model animals [10,41]. Our previous results with madecassoside are also consistent with this speculation [36]. Because asiaticoside kinetically decreased the diffusion time and inhibited the ThT-determined lag and growth phases of fibrillogenesis, the morphological transformation of Aβ1–42 may be mediated by the effects of this glycoside. Structure-based drug design has made tremendous contributions to the fields of cancer chemotherapy and drug-resistant infections. Computational structure-based drug design may, therefore, facilitate the development of novel treatments for AD. Therefore, the docking of asiaticoside with the Aβ1–42 monomers and dimer was used in combination with the experimental data to obtain information on the binding domain and the mechanism of inhibition. The ability of Aβ1–42 to bind to itself and to different ligands in a highly specific manner is an important feature of amyloidogenesis. Kirschner et al. [42] reported that β-sheets run parallel and β-strands run perpendicular to the fibril axis.
Sheet β1 is formed by approximately residues 18–26, and sheet β2 is formed by approximately residues 31–42. Thus, the corresponding residues of adjacent monomers line up along the fibril [24]. The aggregation-prone amino acids, i.e., the hotspots of aggregation of Aβ1–42, were located in the regions of residues 17–22 and 32–42 (Figure 8), which matched the β1 and β2 sheets of 2BEG and the interaction sites of asiaticoside, respectively. When docked with the monomer in MVD (Figure 9A), asiaticoside displayed steric interactions with Ala21, Glu22, Asp23, and Gly25 of the β1 sheet and Lys28, Gly33, Leu34, and Met35 of the β2 sheet, thereby indicating that the formation of β-sheets (from the native α-helix) may have been, at least partially, perturbed by asiaticoside. PatchDock docking of asiaticoside onto Aβ1–42 also revealed that asiaticoside bound with the amino acid sites located in the β1 and β2 domains of Aβ1–42 (Figure 9E–H). Remarkably, both the β1 (residues 17–21) and β2 (residues 31–42) sheet-forming amino acids were engaged in the monomer–monomer (dimer) intersurface interactions (Figure 8). These results, therefore, suggest that the intersurface interaction sites could be used as the ultimate target sites of ligands to inhibit these interactions. We previously reported that the dimer may act as one of the seeding units of Aβ1–42 protofibrils, leading to matured fibers [21]. This stimulated us to dock the dimer (Figure 9C, G) with asiaticoside to clarify whether the docking affects the intersurface interaction sites. As expected, asiaticoside bound with the amino acids of these sensitive regions. These results again suggest that asiaticoside both impedes the β-sheet-forming amino acid residues and binds with the amino acids that are located in the monomer–monomer intersurface and buried in the loop region, such as Gly38 (Figure 8). Studies of Aβ1–42 aggregation and of the interactions between small molecules such as asiaticoside and Aβ1–42 may contribute to the development of inhibitors against Alzheimer's disease. In this study, by means of a combined approach of ThT, FCS, LSM, and TEM techniques, we suggest a model of how asiaticoside interacts with Aβ1–42. Our experimental results allow us to conclude that asiaticoside inhibits the early stages of fibrillogenesis through interactions with 'nucleating' amyloid species and by decelerating the growth phase. In addition, the in silico docking of asiaticoside with Aβ1–42 appears to be consistent with the ability of this glycoside to inhibit fibrillation by acting at both the intra- and inter-β-sheet interaction sites. Finally, the results obtained support further development of asiaticoside for clinical use in Aβ1–42-induced neurodegenerative diseases, such as Alzheimer's disease.
Selkoe DJ. The molecular pathology of Alzheimer's disease. Neuron. 1991;6(4):487–98.
Seubert P, Vigo-Pelfrey C, Esch F, Lee M, Dovey H, Davis D, et al. Isolation and quantification of soluble Alzheimer's β-peptide from biological fluids. Nature. 1992;359:325–7.
Iwatsubo T, Odaka A, Suzuki N, Mizusawa H, Nukina N, Ihara Y. Visualization of Aβ42(43) and Aβ40 in senile plaques with end-specific Aβ monoclonals: evidence that an initially deposited species is Aβ42(43). Neuron. 1994;13(1):45–53.
Diwan PC, Karwande I, Singh AK. Anti-anxiety profile of mandukparni Centella asiatica Linn in animals. Fitoterapia. 1991;62:255–7.
Bown D. Encyclopaedia of Herbs and their Uses. London, UK: Dorling Kindersley. p. 361–65.
Chevallier A. The encyclopedia of medicinal plants.
London, UK: Dorling Kindersley; 1996. p. 257. Hashimoto M, Hossain S, Shimada T, Sugioka K, Yamasaki H, Fujii Y, et al. Docosahexaenoic acid provides protection from impairment of learning ability in Alzheimer's disease model rats. J Neurochem. 2002;81:1084–91. Hashimoto M, Shahdat HM, Yamashita S, Katakura M, Tanabe Y, Fujiwara H, et al. Docosahexaenoic acid disrupts in vitro amyloid beta(1–40) fibrillation and concomitantly inhibits amyloid levels in cerebral cortex of Alzheimer's disease model rats. J Neurochem. 2008;107:1634–46. Kapoor L. Handbook of Ayurvedic medicinal plants. Boca Raton, Fla, USA: CRC Press; 1990. Mook-Jung I, Shin JE, Yun SH, Huh K, Koh JY, Park HK, et al. Protective effects of asiaticoside derivatives against beta-amyloid neurotoxicity. J Neurosci Res. 1999;58:417–25. De Souza ND, Shah V, Desai PD, Inamdar PK, AD'Sa A, Ammonamanchi R, et al. 2, 3, 23-Trihydroxy-urs-12-ene and its Derivatives, Processes for their Preparation and their Use (1990); European Patent 383, A2. Edelstein SJ, Schaad O, Changeux JP. Single binding versus single channel recordings: a new approach to study ionotropic receptors. Biochemistry. 1997;36:13755–60. Haupts U, Maiti S, Schwille P, Webb WW. Dynamics of fluorescence fluctuations in green fluorescent protein observed by fluorescence correlation spectroscopy. Proc Natl Acad Sci U S A. 1998;95:13573–8. Piehler J. New methodologies for measuring protein interaction in vivo and in vitro. Curr Opin Struct Biol. 2005;15:4–14. Hossain S, Grande M, Ahmadkhanov G, Pramanik A. Binding of the Alzheimer amyloid beta-peptide to neuronal cell membranes by fluorescence correlation spectroscopy. Exp Mol Pathol. 2007;82:169–74. Tjernberg L, Pramanik A, Björling S, Thyberg P, Thyberg J, Nordsted C, et al. Amyloid β-peptide polymerization studied by fluorescence correlation spectroscopy. Chem Biol. 1999;6:53–62. Cavasotto CN, A bagyan RA. Protein flexibility in ligand docking and virtual screening to protein kinases. J Mol Biol. 2004;12:209–25. Schoichet BK. Virtual screening of chemical libraries. Nature. 2004;43:862–5. Koppen H. Virtual screening – what does it us? Curr Opin Drug Disc Dev. 2009;12:397–407. Miwa K, Hashimoto M, Hossain S, Katakura M, Shido O. Evaluation of the inhibitory effect of docosahexaenoic acid and arachidonic acid on the initial stage of amyloid β1-42 polymerization by fluorescence correlation spectroscopy. Adv Alzheimers Dis. 2013;2:66–72. http://dx.doi.org/10.4236/aad.2013.22009. Hossain S, Hashimoto M, Katakura M, Miwa K, Shimada T, Shido O. Mechanism of docosahexaenoic acid-induced inhibition of in vitro Abeta1-42 fibrillation and Abeta1-42-induced toxicity in SH-S5Y5 cells. J Neurochem. 2009;111(2):568–79. Marvin, "Marvin was used for drawing, displaying and characterizing chemical structures, substructures and reactions, Marvin 5.7, 2011; ChemAxon (http://www.chemaxon.com)" Thomsen R, Christensen MH. MolDock: a new technique for high-accuracy molecular docking. J Med Chem. 2006;49(11):3315–21. Lührs T, Ritter C, Adrian M, Riek-Loher D, Bohrmann B, Döbeli H, et al. 3D structure of Alzheimer's amyloid-beta (1–42) fibrils. Proc Natl Acad Sci U S A. 2005;102(48):17342–7. Lyskov S, Gray JJ. The RosettaDock server for local protein-protein docking. Nucleic Acids Res. 2008;36(Web Server Issue):W233–8. Dosztányi Z, Mészáros B, Simon I. ANCHOR: web server for predicting protein binding regions in disordered proteins. Bioinformatics. 2009;25(20):2745–6. Garbuzynskiy SO, Lobanov MY, Galzitskaya OV. 
Fold Amyloid: a method of prediction of amyloidogenic regions from protein sequence. Bioinformatics. 2010;26(Oxford, England):326–32. Conchillo-Sole O, de Groot NS, Aviles FX, Vendrell J, Daura X, Ventura S. AGGRESCAN: a server for the prediction and evaluation of "hot spots" of aggregation in polypeptides. BMC Bioinformatics. 2007;8(65):1–17. Fang Y, Gao S, Ta D, Middaugh CR, Fang J. Identification of properties important to protein aggregation using feature selection. BMC Bioinformatics. 2013;14:314. doi:10.1186/1471-2105-14-314. Kawabata T. Detection of multi-scale pockets on protein surfaces using mathematical morphology. Proteins. 2010;78(5):1195–221. Laurie ATR, Jackson RM. Q-SiteFinder: an energy-based method for the prediction of protein-ligand binding sites. Bioinformatics. 2005;21(9):1908–16. Chenm H-L, Zhou H-X. Prediction of interface residues in protein-protein complexes by a consensus neural network method: test against NMR data. Proteins. 2005;61(1):21–35. Darnell SJ, Page D, Mitchell JC. Automated decision-tree approach to predicting protein protein interaction hot spots. Proteins. 2007;68(4):813–23. Zhu X, Mitchell JC. KFC2: a knowledge-based hot spot prediction method based on interface solvation, atomic density and plasticity features. Proteins. 2011;79(9):2671–83. Duhovny D, Nussinov R, Wolfson HJ. Efficient unbound docking of rigid molecules. In: Guigó R, Gusfield D, et al., editors. Proceedings of the 2′nd Workshop on Algorithms in Bioinformatics(WABI) Rome, Italy, Lecture Notes in Computer Science (LNCS) 2452. Berlin Heidelberg: Springer Verlag; 2002. p. 185–200. Mamun AA, Hashimoto M, Katakura M, Matsuzaki K, Hossain S, Arai H, et al. Neuroprotective effect of madecassoside evaluated using amyloid Β1-42-mediated in vitro and in vivo alzheimer's disease models. Intl J Indigenous Med Plants. 2014;47(2):1669–82. Greenwald J, Riek R. Biology of Amyloid: structure, function, and regulation. Structure (Review). 2010;8(10):1244–60. Ehrnhoefer D, Bieschke J, Boeddrich A, Herbst M, Masino L, Lurz R, et al. EGCG redirects amyloidogenic polypeptides into unstructured off-pathway oligomers. Nat Struct Mol Biol. 2008;15:558–66. Soumyanath A, Zhong YP, Gold SA, Yu X, Koop DR, Bourdette D, et al. Centella asiatica accelerates nerve regeneration upon oral administration and contains multiple active fractions increasing neurite elongation in-vitro. J Pharm Pharmacol. 2005;57(3):1221–9. Wanakhachornkrai O, Pongrakhananon V, Chunhacha P, Wanasuntronwong A, Vattanajun A, Tantisira B, et al. Neuritogenic effect of standardized extract of Centella asiatica ECa233 on human neuroblastoma cells. BMC Complement Altern Med. 2013;13:204. Dhanasekaran M, Holcomb LA, Hitt AR, Tharakan B, Porter JW, Young KA, et al. Centella asiatica extract selectively decreases amyloid β levels in hippocampus of Alzheimer's disease animal model. Phytother Res. 2009;23(1):14–9. Kirschner DA, Abraham C, Selkoe DJ. X-ray diffraction from intraneuronal paired helical filaments and extraneuronal amyloid fibers in Alzheimer disease indicates cross-beta conformation. Proc Natl Acad Sci U S A. 1986;83(2):503–7. This work was supported in part by a Grant-in-Aid for Scientific Research (C) from the Ministry of Education Culture, Sports, Science and Technology, Japan (23500955, M.H.).The authors gratefully acknowledge the supports of Koji Miwa, Ph.D. student of the Dept. of Environmental Physiology, Shimane University Faculty of Medicine, Izumo Japan, for helping in acquisition of FCS data. No conflict of interest exists. 
Department of Environmental Physiology, Shimane University Faculty of Medicine, Izumo, 693-8501, Japan Shahdat Hossain, Michio Hashimoto, Masanori Katakura, Abdullah Al Mamun & Osamu Shido Department of Biochemistry and Molecular Biology, Jahangirnagar University, Savar, Dhaka, Bangladesh Shahdat Hossain Michio Hashimoto Masanori Katakura Abdullah Al Mamun Osamu Shido Correspondence to Michio Hashimoto. SH conducted the analysis. MH conducted a critical review of the manuscript and provided final editing to the manuscript. SH and MH contributed to the conception and design of the study. All authors made substantial contributions to drafting the manuscript. All authors read and approved the final manuscript. Hossain, S., Hashimoto, M., Katakura, M. et al. Medicinal value of asiaticoside for Alzheimer's disease as assessed using single-molecule-detection fluorescence correlation spectroscopy, laser-scanning microscopy, transmission electron microscopy, and in silico docking. BMC Complement Altern Med 15, 118 (2015). https://doi.org/10.1186/s12906-015-0620-9 Asiaticoside Amyloid fibrillation Fluorescence correlation spectroscopy
Populations and Samples
Features of Populations and Samples
Estimating population size from sample
Suppose a statistic is to be calculated from numerical data obtained from a sample drawn from a population. It could be the mean, it could be a number that gives a measure of the spread of the data, or it could be some other numerical result derived from the sample data. It is clear that the same statistic calculated from several different samples drawn from a population could be different for each sample. In the questions accompanying this chapter, you are asked to explore the variation that can occur for the mean in some simple cases, and also to see how the sample mean relates to the population mean.
For a population of a known size, we can calculate how many possible samples of a given size can be drawn from it. This is a problem in combinatorics. For moderate-sized populations and samples, the number of possible samples is usually very large. If from a population of 500 it is desired to draw a sample of 5 units, there would be $\frac{500!}{495!5!}\approx2.55\times10^{11}$ possibilities. It can be shown that the sample means are always clustered symmetrically around the population mean with many of them close to it and progressively fewer of them as the distance from the true mean increases. If the sample size is increased, we find that the sample means are brought closer to the true mean. Techniques exist that enable us to choose a sample size that will be large enough to ensure that the sample mean is within a given distance of the true mean with a specified probability.
Numbers derived from a sample are routinely used for estimating the corresponding quantities in the population from which the sample was drawn. The sample mean, for example, is said to be an estimator for the population mean. Sometimes a proportion in a sample can be used to estimate a proportion in the population and hence, the population size.
Membership of a club in a certain locality is open to anyone over the age of eighteen years living within the local geographic region. A survey of people eligible to be members finds that 2.125% of potential members actually are members. Club records show that there are currently 312 members. From this information, it is possible to deduce approximately the number of people over the age of eighteen who live in the local area. We take the proportion of members in the sample to be an estimator for the proportion of members in the population. If the population size is $n$, then we have $\frac{2.125}{100}=\frac{312}{n}$. From this, we see that $n\approx14682$. This example should be compared with a similar one from ecology, presented in another chapter.
More Worked Examples
The heights (in cm) of a population of 3 people are $A$, $B$ and $C$. List all possible samples of size 2 without replacement. For example if the first 2 are selected we can write that as $AB$. Use commas to separate different samples.
If $A=171$, $B=153$ and $C=162$, complete the following table:
Sample Values (cm) | Sample Mean
$171$, $153$ | $\editable{}$
What is the mean of the distribution of all possible sample means? What is the population mean? Is the mean of the sample means equal to the population mean?
The weights (in kg) of a population of 5 people are $F$, $G$, $H$, $I$ and $J$. List all possible samples of size 1 without replacement. For example if the first weight is selected we can write that as $F$.
If $F=98$, $G=136$, $H=116$, $I=94$, and $J=130$, complete the following table.
Sample Values (kg) | Sample Mean
$98$ | $\editable{}$
$136$ | $\editable{}$
Write your answer as a decimal.
A survey asked $144$ randomly chosen students if they were going to attend the school play. $18$ said yes. If there are $204$ tickets sold for the play, predict the number of students who attend the school, $x$.
S7-2 Make inferences from surveys and experiments:
A. making informal predictions, interpolations, and extrapolations
B. using sample statistics to make point estimates of population parameters
C. recognising the effect of sample size on the variability of an estimate
Use statistical methods to make an inference
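To make the arithmetic behind these exercises concrete, here is a short Python sketch (my addition, not part of the original worksheet) that enumerates the size-2 samples from the heights example, checks that the mean of the sample means equals the population mean, and reproduces the two proportion-based estimates: the club membership figure and the school-play prediction, the latter assuming each ticket corresponds to one attending student.

```python
from itertools import combinations

# Heights example: population of three people
heights = {"A": 171, "B": 153, "C": 162}

samples = list(combinations(heights.values(), 2))   # all size-2 samples, no replacement
sample_means = [sum(s) / 2 for s in samples]
print(samples)                                      # [(171, 153), (171, 162), (153, 162)]
print(sample_means)                                 # [162.0, 166.5, 157.5]
print(sum(sample_means) / len(sample_means))        # 162.0 = mean of the sample means
print(sum(heights.values()) / len(heights))         # 162.0 = population mean (they agree)

# Club example: 2.125% of the eligible population corresponds to 312 members
print(312 / 0.02125)                                # ≈ 14682 people over eighteen

# School-play example: 18 of 144 surveyed said yes; 204 tickets were sold
print(204 / (18 / 144))                             # 1632 students predicted at the school
```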
Tagged: unique solution Are Coefficient Matrices of the Systems of Linear Equations Nonsingular? (a) Suppose that a $3\times 3$ system of linear equations is inconsistent. Is the coefficient matrix of the system nonsingular? (b) Suppose that a $3\times 3$ homogeneous system of linear equations has a solution $x_1=0, x_2=-3, x_3=5$. Is the coefficient matrix of the system nonsingular? (c) Let $A$ be a $4\times 4$ matrix and let \[\mathbf{v}=\begin{bmatrix} 1 \\ \end{bmatrix} \text{ and } \mathbf{w}=\begin{bmatrix} \end{bmatrix}.\] Suppose that we have $A\mathbf{v}=A\mathbf{w}$. Is the matrix $A$ nonsingular? Read solution The Possibilities For the Number of Solutions of Systems of Linear Equations that Have More Equations than Unknowns Determine all possibilities for the number of solutions of each of the system of linear equations described below. (a) A system of $5$ equations in $3$ unknowns and it has $x_1=0, x_2=-3, x_3=1$ as a solution. (b) A homogeneous system of $5$ equations in $4$ unknowns and the rank of the system is $4$. (The Ohio State University, Linear Algebra Midterm Exam Problem) In this post, we summarize theorems about the possibilities for the solution set of a system of linear equations and solve the following problems. Determine all possibilities for the solution set of the system of linear equations described below. (a) A homogeneous system of $3$ equations in $5$ unknowns. (b) A homogeneous system of $5$ equations in $4$ unknowns. (c) A system of $5$ equations in $4$ unknowns. (d) A system of $2$ equations in $3$ unknowns that has $x_1=1, x_2=-5, x_3=0$ as a solution. (e) A homogeneous system of $4$ equations in $4$ unknowns. (f) A homogeneous system of $3$ equations in $4$ unknowns. (g) A homogeneous system that has $x_1=3, x_2=-2, x_3=1$ as a solution. (h) A homogeneous system of $5$ equations in $3$ unknowns and the rank of the system is $3$. (i) A system of $3$ equations in $2$ unknowns and the rank of the system is $2$. (j) A homogeneous system of $4$ equations in $3$ unknowns and the rank of the system is $2$. Infinite Cyclic Groups Do Not Have Composition Series For Which Choices of $x$ is the Given Matrix Invertible? Find a Basis For the Null Space of a Given $2\times 3$ Matrix Group of $p$-Power Roots of 1 is Isomorphic to a Proper Quotient of Itself Express a Vector as a Linear Combination of Other Vectors Prove that $\{ 1 , 1 + x , (1 + x)^2 \}$ is a Basis for the Vector Space of Polynomials of Degree $2$ or Less 12 Examples of Subsets that Are Not Subspaces of Vector Spaces Basis of Span in Vector Space of Polynomials of Degree 2 or Less
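These are problem statements only (the worked answers sit behind the "Read solution" links), but the reasoning for part (c) of the first problem is short enough to sketch here. The specific entries of $\mathbf{v}$ and $\mathbf{w}$ did not survive extraction, so the sketch below uses only the fact, stated in the original problem, that they are two distinct vectors.

```latex
% Sketch for part (c): A is a 4x4 matrix and v, w are distinct vectors with Av = Aw.
\[
  A\mathbf{v} = A\mathbf{w}
  \;\Longrightarrow\;
  A(\mathbf{v}-\mathbf{w}) = \mathbf{0},
  \qquad
  \mathbf{v}-\mathbf{w} \neq \mathbf{0},
\]
% so the homogeneous system Ax = 0 has the nontrivial solution x = v - w,
% and therefore A is singular (not nonsingular).
```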
When Uranus Lets Out A Toot
To all my readers, I apologize in advance for the goofy title. While it may take some brains to write a space blog, maturity is purely optional! :) As a die-hard space fan, I get a peculiar satisfaction dropping random tidbits of space trivia on people without them even asking for it. Weird flex, I know, but I like seeing the look of intrigue on their faces when people learn something about the cosmos they'd never pondered before. One fact I tell people that often surprises them: even after decades of planetary exploration and all the famous probes we've sent out, only one spacecraft has ever visited the distant worlds of Uranus and Neptune: Voyager 2, in 1986 and 1989!
Why explore space, you ask? For the best memes in the galaxy, of course!!
This means that even in 2020, for any planetary scientist hoping to find a breakthrough about the two ice giants, their best bet is still to delve into data collected over 30 years ago, when Ronald Reagan was president, the Soviet Union still existed, and the first Apple Macintosh and its 128KB of RAM were state of the art! And sometimes, they discover hidden treasure in all that data - in this recent announcement from NASA, scientists discovered that when Voyager 2 flew by Uranus, it actually flew through what's known as a plasmoid, a giant bubble of the atmosphere that got ejected into space due to interactions in the planet's magnetic field!
Artist's impression of Voyager 2 zooming by Uranus
According to the researchers, while magnetic fields are crucial in preventing solar wind from stripping away a planet's atmosphere (just look what happened to Mars!), sometimes the complex magnetic field lines get tangled and pinch off a little bubble. It turns out Uranus is particularly susceptible to this phenomenon, as it's the only planet that spins on its side, causing its magnetosphere to "wobble like a poorly thrown football." We're fortunate that Voyager 2 happened to fly by just as Uranus was letting off a little steam! Sharing this visualization from the NASA announcement showing Uranus' magnetic field:
The yellow arrow denotes Uranus' direction of orbit, while the light blue arrow marks the planet's magnetic axis
So this all begs the question: why not pay Uranus and Neptune another visit? And on top of that, given Voyager 2 already nailed a flyby of both planets, why not send an orbiter to at least one of them so we can maintain constant observation? In fact, there are a number of ambitious proposals being submitted, both for NASA's Planetary Science Decadal Survey and through the European Space Agency, that would answer many longstanding mysteries about the nature of the gas giants. Unfortunately, missions to Uranus and Neptune are far less likely to be chosen than missions to inner planets like Mars and Venus, meaning astronomers will probably be left guessing and combing through Voyager's old data for decades to come.
Some proposed missions to Uranus and Neptune. Think any of them will ever see the light of day?
So I'm faced with this harsh reality, fine then. But nothing can stop me from dreaming of a world where NASA has an infinite budget, and in my imagination I'm going to propose an even more ambitious planetary mission: a Uranus/Neptune double orbiter!! By this I mean a single probe that reaches Uranus first, orbits it for a period of time, then departs for Neptune and establishes a new orbit there as well (unlike ODINUS, which would send two separate probes)!
Obviously this would require an enormous amount of energy, but how impossible would it be? I take comfort knowing that my hypothetical mission wouldn't be totally without precedent. The Dawn spacecraft launched in 2007 was the first ever to orbit two different bodies, orbiting the asteroid Vesta for 14 months before departing for Ceres, where it orbited for several years! But Uranus and Neptune would be significantly more challenging, being so far away... Animation of Dawn's mission trajectory, from Wikipedia Pink = Dawn | Dark Blue = Earth | Yellow = Mars | Light Blue = Vesta | Green = Ceres Bust out the calculators! In investment banking jargon, this is what we call a "back-of-the-envelope calculation" (remember I don't have a PhD in astrophysics, just my simple bachelors in finance!). But the exercise I'll demonstrate here should roughly give the change in velocity (i.e. delta-v) needed to travel from an orbit around Uranus to an orbit around Neptune, using a Hohmann transfer orbit. It's essentially the same calculation I demonstrate for my trajectory to Mars, so check that out here for additional detail! Here's the general diagram of what we're trying to achieve Some constants we're going to need: Gravitational constant: $G = 6.674 \cdot 10^{-11} \thinspace m^3\cdot kg^{-1}\cdot s^{-2}$ Mass of the Sun: $M = 1.989 \cdot 10^{30} \thinspace kg$ Radius of Uranus' orbit: $r_{Uranus} = 2.963 \cdot 10^{9} \thinspace km$ Radius of Neptune' orbit: $r_{Neptune} = 4.477 \cdot 10^{9} \thinspace km$ Some useful formulas: Vis-viva equation (used for calculating orbital velocities): $V = \sqrt{GM(\frac{2}{r} - \frac{1}{a})}$ Special case (circular orbits): $V = \sqrt{\frac{GM}{r}}$ Great, now we're ready to begin. Since our probe is starting off in a stable orbit around Uranus, we need to calculate its initial velocity relative to the Sun. In other words, what's Uranus' orbital velocity as it orbits the Sun? Assuming the orbit of Uranus is circular (a reasonable approximation; we'll do the same for Neptune later), we can calculate it as follows: $V_{Uranus} = \sqrt{\frac{GM}{2.963 \cdot 10^{9}}} = 6.693 \thinspace km/s$ Now in order to start our probe on its path to Neptune, we need to accelerate our probe with an engine burn right at the point where Uranus' orbit intersects the Hohmann transfer orbit. Since this point is the periapsis of the elliptical orbit, we can calculate it as follows: $a = \frac{2.963 \cdot 10^{9} + 4.477 \cdot 10^{9}}{2} = 3.720 \cdot 10^{9} \thinspace km$ $V_{periapsis} = \sqrt{GM(\frac{2}{2.963\cdot 10^{9}} - \frac{1}{3.720 \cdot 10^{9}})} = 7.343 \thinspace km/s$ Using this, we can determine our first delta-v: $\Delta V_1 = 7.343 - 6.693 = 0.650 \thinspace km/s$ Awesome, so we've started our probe on its path to Neptune. But when it finally reaches Neptune's orbit, how fast will it be going? 
With the same vis-viva formula we used to calculate the velocity at periapsis, we can do the same at apoapsis:
$V_{apoapsis} = \sqrt{GM(\frac{2}{4.477\cdot 10^{9}} - \frac{1}{3.720 \cdot 10^{9}})} = 4.860 \thinspace km/s$
But remember, our goal is to get into orbit around Neptune, so we have to match Neptune's orbital velocity around the Sun:
$V_{Neptune} = \sqrt{\frac{GM}{4.477 \cdot 10^{9}}} = 5.445 \thinspace km/s$
Now we can determine the second delta-v:
$\Delta V_2 = 5.445 - 4.860 = 0.585 \thinspace km/s$
So upon reaching the orbit of Neptune, the probe needs to fire its engine again and accelerate to match Neptune's orbital speed.
There you have it: two engine burns generating a total delta-v of 1.235 km/s to get us from an orbit around Uranus to an orbit around Neptune. I've done the math for you NASA, let's get this mission approved!!
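For anyone who wants to poke at these numbers, here is a small Python sketch (my addition, not from the original post) that reruns the whole back-of-the-envelope Hohmann calculation. It reproduces the 0.650 and 0.585 km/s burns and the 1.235 km/s total, and also spits out the half-ellipse transfer time, which comes out to roughly 62 years (better pack snacks).

```python
import math

# Constants and orbital radii used in the post, converted to SI units
G = 6.674e-11                 # m^3 kg^-1 s^-2
M = 1.989e30                  # kg, mass of the Sun
mu = G * M
r_uranus  = 2.963e9 * 1e3     # m  (2.963e9 km)
r_neptune = 4.477e9 * 1e3     # m  (4.477e9 km)
a_transfer = (r_uranus + r_neptune) / 2   # semi-major axis of the Hohmann ellipse

def vis_viva(r, a):
    """Orbital speed at distance r on an orbit with semi-major axis a."""
    return math.sqrt(mu * (2 / r - 1 / a))

v_uranus    = vis_viva(r_uranus, r_uranus)      # circular orbital speed of Uranus
v_periapsis = vis_viva(r_uranus, a_transfer)    # transfer orbit speed at Uranus' distance
v_apoapsis  = vis_viva(r_neptune, a_transfer)   # transfer orbit speed at Neptune's distance
v_neptune   = vis_viva(r_neptune, r_neptune)    # circular orbital speed of Neptune

dv1 = v_periapsis - v_uranus
dv2 = v_neptune - v_apoapsis
print(f"dV1   = {dv1 / 1e3:.3f} km/s")            # ~0.650 km/s
print(f"dV2   = {dv2 / 1e3:.3f} km/s")            # ~0.585 km/s
print(f"total = {(dv1 + dv2) / 1e3:.3f} km/s")    # ~1.235 km/s

# Bonus: time of flight is half the transfer ellipse's period
tof_years = math.pi * math.sqrt(a_transfer**3 / mu) / (365.25 * 24 * 3600)
print(f"transfer time ≈ {tof_years:.0f} years")   # ~62 years
```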
DOI: 10.2140/GT.2020.24.1381

A gluing formula for families Seiberg–Witten invariants
David Baraglia, Hokuto Konno
arXiv: Differential Geometry

@article{Baraglia2018AGF,
  title={A gluing formula for families Seiberg–Witten invariants},
  author={David Baraglia and Hokuto Konno},
  journal={arXiv: Differential Geometry}
}

We prove a gluing formula for the families Seiberg-Witten invariants of families of $4$-manifolds obtained by fibrewise connected sum. Our formula expresses the families Seiberg-Witten invariants of such a connected sum family in terms of the ordinary Seiberg-Witten invariants of one of the summands, under certain assumptions on the families. We construct some variants of the families Seiberg-Witten invariants and prove the gluing formula also for these variants. One variant incorporates a…

Rigidity of the mod 2 families Seiberg–Witten invariants and topology of families of spin 4-manifolds. Tsuyoshi Kato, Hokuto Konno, N. Nakamura. Compositio Mathematica. We show a rigidity theorem for the Seiberg–Witten invariants mod 2 for families of spin 4-manifolds. A mechanism of this rigidity theorem also gives a family version of a 10/8-type inequality. As an…

Family Bauer--Furuta invariant, Exotic Surfaces and Smale conjecture. Jianfeng Lin, Anubhav Mukherjee. We establish the existence of a pair of exotic surfaces in a punctured K3 which remains exotic after one external stabilization and have diffeomorphic complements. A key ingredient in the proof is a…

A cohomological Seiberg-Witten invariant emerging from the adjunction inequality. Hokuto Konno. We construct an invariant of closed ${\rm spin}^c$ 4-manifolds. This invariant is defined using families of Seiberg-Witten equations and formulated as a cohomology class on a certain abstract…

Symplectic mapping class groups of K3 surfaces and Seiberg-Witten invariants. G. Smirnov. The purpose of this note is to prove that the symplectic mapping class groups of many K3 surfaces are infinitely generated. Our proof makes no use of any Floer-theoretic machinery but instead follows…

Positive scalar curvature and higher‐dimensional families of Seiberg–Witten equations. Journal of Topology. We introduce an invariant of tuples of commutative diffeomorphisms on a 4-manifold using families of Seiberg-Witten equations. This is a generalization of Ruberman's invariant of diffeomorphisms…

Non-trivial smooth families of $K3$ surfaces. David Baraglia. Let X be a complex K3 surface, Diff(X) the group of diffeomorphisms of X and Diff(X)0 the identity component. We prove that the fundamental group of Diff(X)0 contains a free abelian group of…

An adjunction inequality obstruction to isotopy of embedded surfaces in 4-manifolds. Consider a smooth $4$-manifold $X$ and a diffeomorphism $f : X \to X$. We give an obstruction in the form of an adjunction inequality for an embedded surface in $X$ to be isotopic to its image under…

Constraints on families of smooth 4-manifolds from $\mathrm{Pin}^{-}(2)$-monopole. Hokuto Konno, N. Nakamura. Using the Seiberg-Witten monopole equations, Baraglia recently proved that for most simply-connected closed smooth $4$-manifolds $X$, the inclusions $\mathrm{Diff}(X) \hookrightarrow…

A note on the Nielsen realization problem for K3 surfaces. We will show the following three theorems on the diffeomorphism and homeomorphism groups of a $K3$ surface. The first theorem is that the natural map $\pi_{0}(Diff(K3)) \to Aut(H^{2}(K3;\mathbb{Z}))$…

Pseudo-isotopies and diffeomorphisms of 4-manifolds. O. Singh. A diffeomorphism f of a compact manifold X is pseudo-isotopic to the identity if there is a diffeomorphism F of X×I which restricts to f on X×1, and which restricts to the identity on X×0 and ∂X×I. We…

Obstructions to smooth group actions on 4-manifolds from families Seiberg-Witten theory. Advances in Mathematics. Let X be a smooth, compact, oriented 4-manifold. Building upon work of Li-Liu, Ruberman, Nakamura and Konno, we consider a families version of Seiberg-Witten theory and obtain obstructions…

Positive scalar curvature, diffeomorphisms and the Seiberg-Witten invariants. Daniel Ruberman. We study the space of positive scalar curvature (psc) metrics on a 4-manifold, and give examples of simply connected manifolds for which it is disconnected. These examples imply that concordance of…

Homotopy K3 surfaces and mod 2 Seiberg-Witten invariants. J. Morgan, Z. Szabó. After writing this paper we learned that this result was also known to Kronheimer and Furuta. This theorem is closely related to the work of Fintushel and Stern [FS1], [FS2]. They proved that any…

Scalar curvature, covering spaces, and Seiberg-Witten theory. C. Lebrun. The Yamabe invariant Y(M) of a smooth compact manifold is roughly the supremum of the scalar curvatures of unit-volume constant-scalar-curvature Riemannian metrics g on M. (To be precise, one only…

Seiberg-Witten Theory and Z/2^p actions on spin 4-manifolds. J. Bryan. Furuta's "10/8ths" theorem gives a bound on the magnitude of the signature of a smooth spin 4-manifold in terms of the second Betti number. We show that, in the presence of a Z/2^p action, this bound…

The Seiberg-Witten equations and 4-manifold topology. S. Donaldson. Since 1982 the use of gauge theory, in the shape of the Yang-Mills instanton equations, has permeated research in 4-manifold topology. At first this use of differential geometry and differential…

Family Seiberg-Witten invariants and wall crossing formulas. Tian-Jun Li, A. Liu. In this paper we set up the family Seiberg-Witten theory. It can be applied to the counting of nodal pseudo-holomorphic curves in a symplectic 4-manifold (especially a Kahler surface). A new feature…

Bounds on genus and geometric intersections from cylindrical end moduli spaces. S. Strle. In this paper we present a way of computing a lower bound for the genus of any smooth representative of a homology class of positive self-intersection in a smooth four-manifold $X$ with second positive…

Isotopy of 4-manifolds. F. Quinn. The principal result of this paper is that the group of homeomorphisms mod isotopy (the "homeotopy" group) of a closed simply-connected 4-manifold is equal to the automorphism group of the quadratic…

Family Blowup Formula, Admissible Graphs and the Enumeration of Singular Curves, I. A. Liu. In this paper, we discuss the scheme of enumerating the singular holomorphic curves in a linear system on an algebraic surface. Our approach is based on the usage of the family Seiberg-Witten…
Are there Conservation Laws in Complexity Theory?

Let me start with some examples. Why is it so trivial to show that CVP is in P but so hard to show that LP is in P, when both are P-complete problems? Or take primality: it is easier to show that composites are in NP than that primes are in NP (which required Pratt), and eventually in P. Why did it have to display this asymmetry at all?

I know Hilbert, the need for creativity, proofs are in NP, etc. But that has not stopped me from having a queasy feeling that there is more to this than meets the eye. Is there a quantifiable notion of "work", and is there a "conservation law" in complexity theory that shows, for example, that even though CVP and LP are both P-complete, they hide their complexities in "different places": one in the reduction (is CVP simple because all the work is done in the reduction?) and the other in the expressibility of the language?

Anyone else queasy as well, and with some insights? Or do we shrug and accept that this is the nature of computation? This is my first question to the forum: fingers crossed.

Edit: CVP is the Circuit Value Problem and LP is Linear Programming. Thanks, Sadeq, for pointing out the confusion.

cc.complexity-theory big-picture

V Vinay

At first, I mistook CVP for the Closest Vector Problem (which is NP-hard). Then I noted that it is the Circuit Value Problem. I thought it would be helpful to mention this. – M.S. Dousti Nov 16 '10 at 20:05
Interesting question. Not sure there's an interesting answer though :) – Suresh Venkat Nov 16 '10 at 23:29
Just an observation: the difficulty of proving membership in NP (say) is not a property of a language, but a property of a description of a language. For example, it requires some effort to prove that the set of primes is in NP, but it is trivial that the set of integers having a Pratt certificate is in NP. – Tsuyoshi Ito Nov 17 '10 at 4:05
Are time-space tradeoff lower bounds not applicable as a conservation law in the sense of the wording of this question? – Maverick Woo Nov 17 '10 at 4:39
Charles Bennett's notion of computational depth (originally "logical depth") may capture part of the intuition of "work required to demonstrate a complexity fact." – Aaron Sterling Nov 17 '10 at 13:38

This is a question that has run across my mind many times. I think one place to look is information theory. Here is a speculation of mine: given a problem, maybe we can assign some sort of entropy value to the information given as input and to the information received from the algorithm. If we could do that, then there would be some minimum amount of information gain required by an algorithm to solve that problem.

There's one related thing I've wanted to figure out. In some NP-complete problems you can find a constrained version in P; with Hamiltonian path, if you specify that the graph is a DAG then there is a p-time algorithm to solve it (a short sketch follows below). With other problems like TSP, there are often p-time algorithms that will approximate the optimum. It seems to me that, for constrained p-time algorithms, there should be some proportional relationship between the additional information assumed and the reduction in run-time complexity. In the case of the TSP we aren't assuming additional information, we're relaxing the precision, which I expect to have a similar effect on any sort of algorithmic information gain.
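As a concrete illustration of the constrained-version remark above, here is a short Python sketch (an editorial addition, not part of the original thread) of the polynomial-time test for a Hamiltonian path in a DAG. It uses the standard fact that a DAG has a Hamiltonian path exactly when consecutive vertices of a topological order are always joined by an edge; in that case the topological order is unique and is the path itself.

```python
from collections import deque

def dag_hamiltonian_path(n, edges):
    """Decide in polynomial time whether a DAG on vertices 0..n-1 has a Hamiltonian path."""
    adj = [set() for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        if v not in adj[u]:
            adj[u].add(v)
            indeg[v] += 1
    # Kahn's algorithm: produce some topological order.
    queue = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    if len(order) != n:
        raise ValueError("input graph is not acyclic")
    # A Hamiltonian path exists iff every consecutive pair in the order is an edge.
    return all(order[i + 1] in adj[order[i]] for i in range(n - 1))

# 0 -> 1 -> 2 plus a shortcut 0 -> 2: the Hamiltonian path 0,1,2 exists.
print(dag_hamiltonian_path(3, [(0, 1), (1, 2), (0, 2)]))  # True
# 0 -> 1 and 0 -> 2 only: no path visits all three vertices.
print(dag_hamiltonian_path(3, [(0, 1), (0, 2)]))          # False
```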
Note on Conservation Laws

In the early 1900s there was a little-known German mathematician named Emmy Noether. Among other things, she was described by Einstein and Hilbert as the most important woman in the history of mathematics. In 1915 she proved what is now known as Noether's First Theorem. The theorem concerns physical conservation laws, and says that every differentiable symmetry of a physical system has a corresponding conservation law: conservation of angular momentum comes from rotational symmetry in space, conservation of linear momentum from translational symmetry in space, and conservation of energy from translation in time. Given that, for there to be some law of conservation of complexity in a formal sense, there would need to be some corresponding differential symmetry in a Lagrangian function.

MattRS

+1 Great answer! I have often had similar musings (@MattRS: send me an email). By the way, I don't think Emmy Noether is "little-known," but in fact quite the opposite, although maybe she's not well-known in TCS. Noether's First Theorem is well-known to physicists, and Noetherian rings are a central object of study in commutative algebra and algebraic geometry. Several other important theorems, mostly in those areas, also bear her name. – Joshua Grochow Nov 18 '10 at 3:22
Yeah, that's what I meant; not well known to comp sci. I always thought abstract algebra should be more widely taught in CS. – MattRS Nov 18 '10 at 4:33
Even though this argument is compelling, I wonder if it is compatible with many problems having a sharp approximability threshold. (By this, I mean a problem such that achieving an approximation factor $\alpha > 1$ is easy, but $\alpha - \epsilon$ is hard for all $\epsilon > 0$.) Why is the relation between the precision required and the algorithmic information gain so dramatically discontinuous? – Srivatsan Narayanan Jul 28 '11 at 5:40

I think the reason lies within the logical system we use. Each formal system has a set of axioms and a set of rules of inference. A proof in a formal system is just a sequence of formulae such that each formula in the sequence either is an axiom or is obtained from earlier formulae in the sequence by applying a rule of inference. A theorem of the formal system is just the last formula in a proof. The length of the proof of a theorem, assuming it is decidable in the logical system, depends entirely on the sets of axioms and rules of inference. For instance, consider propositional logic, for which there exist several axiomatizations: Frege (1879), Nicod (1917), and Mendelson (1979). (See this short survey for more info.) The latter system (Mendelson) has three axioms and one rule of inference (modus ponens). Given this short characterization, it is really hard to prove even the most trivial theorems, say $\varphi \to \varphi$. Here, by hard, I mean that the minimum length of the proof is high. This problem is termed proof complexity. To quote Beame & Pitassi: One of the most basic questions of logic is the following: given a universally true statement (tautology), what is the length of the shortest proof of the statement in some standard axiomatic proof system? The propositional logic version of this question is particularly important in computer science for both theorem proving and complexity theory. Important related algorithmic questions are: Is there an efficient algorithm that will produce a proof of any tautology?
Is there an efficient algorithm to produce the shortest proof of any tautology?

Such questions of theorem proving and complexity inspired Cook's seminal paper on NP-completeness, notably entitled "The complexity of theorem-proving procedures", and were contemplated even earlier by Gödel in his now well-known letter to von Neumann.

I was thinking about this same question the other day, when I was replaying some of Feynman's Lectures on Physics and came to lesson 4, on the conservation of energy. In the lecture Feynman uses the example of a simple machine which (through some system of levers or pulleys or whatever) lowers a weight of one unit by some distance x, and uses that to lift a second weight of 3 units. How high can the weight be lifted? Feynman makes the observation that if the machine is reversible, then we don't need to know anything about the mechanism of the machine--we can treat it like a black box--and it will always lift the weight the maximum distance possible (x/3 in this case).

Does this have an analogue in computation? The idea of reversible computation brings to mind the work of Landauer and Bennett, but I'm not sure this is the sense of the term in which we are interested. Intuitively, if we have an algorithm for some problem that is optimal, then there isn't any wasted "work" being done churning bits, while a brute-force approach to the same problem would be throwing away CPU cycles left and right. However, I imagine one could construct a physically reversible circuit for either algorithm.

I think the first step in approaching a conservation law for computational complexity is to figure out exactly what should be conserved. Space and time are each important metrics, but it is clear from the existence of space/time trade-offs that neither one by itself is going to be adequate as a measure of how much "work" is being done by an algorithm. There are other metrics, such as TM head reversals or tape cell crossings, that have been used. None of these really seems to be close to our intuition of the amount of "work" required to carry out a computation. The flip side of the problem is figuring out just what that work gets converted into. Once you have the output from a program, what exactly is it that you have gained?

Kurt

Some observations suggesting the existence of a conservation law: if we consider polynomial-time (or log-space) computable reductions $<_p$ as transformations between computational problems, then the following definitions of known complexity classes suggest the existence of some property conserved under "efficient" transformations. Assuming $P \ne NP$, "hardness" seems to be the conserved property.
$P=\{L \mid L<_p HornSAT \}$
$NP=\{L \mid L<_p 3SAT \}$
$CoNP=\{L \mid \bar L<_p 3SAT \}$
$NPC=\{L \mid L<_p 3SAT,\ 3SAT<_p L \}$
$PC=\{L \mid L<_p HornSAT,\ HornSAT<_p L \}$
EDIT: $P$ is more accurately defined as $P=\{L \mid L<_p HornSAT,\ \bar L <_p HornSAT \}$, suggesting that the hardness of problems in $P$ is invariant under complementation, while it is not known that complementation preserves the hardness of $NP$ problems (unless $P=NP$). (A minimal polynomial-time HornSAT solver is sketched at the end of this thread.)

Mohammad Al-Turkistany

Tao suggests the existence of a law of conservation of difficulty in mathematics: "in order to prove any genuinely non-trivial result, some hard work has to be done somewhere". He argues that the difficulty of some mathematical proofs suggests a lower bound on the amount of effort needed in the theorem-proving process.
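To make the $HornSAT$ anchor used in the class definitions above concrete, here is a minimal polynomial-time Horn-satisfiability decision procedure based on unit propagation. This is an editorial sketch with an ad hoc clause encoding, not code from the original answers.

```python
def horn_sat(clauses):
    """
    Decide satisfiability of a Horn formula in polynomial time.
    Each clause is a pair (head, body):
      head: the single positive literal (a variable name), or None if the clause
            is purely negative; body: the variables that occur negatively,
            so the clause reads  body1 & body2 & ... -> head.
    Returns True iff the formula is satisfiable.
    """
    true_vars = set()
    changed = True
    while changed:                       # at most one new variable per pass: O(n*m)
        changed = False
        for head, body in clauses:
            if all(v in true_vars for v in body):
                if head is None:
                    return False         # a purely negative clause is violated
                if head not in true_vars:
                    true_vars.add(head)  # unit propagation forces head to be true
                    changed = True
    return True                          # fixpoint reached without contradiction

# Example: (x) & (x -> y) & (y -> z) & (!x | !z) is unsatisfiable.
print(horn_sat([("x", []), ("y", ["x"]), ("z", ["y"]), (None, ["x", "z"])]))  # False
# Dropping the last clause makes the formula satisfiable.
print(horn_sat([("x", []), ("y", ["x"]), ("z", ["y"])]))                      # True
```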
Bound entangled states fit for robust experimental verification

Gael Sentís1,2, Johannes N. Greiner3, Jiangwei Shang4,1, Jens Siewert5,6, and Matthias Kleinmann1,2
1Naturwissenschaftlich-Technische Fakultät, Universität Siegen, 57068 Siegen, Germany
2Departamento de Física Teórica e Historia de la Ciencia, Universidad del País Vasco UPV/EHU, E-48080 Bilbao, Spain
33rd Institute of Physics, University of Stuttgart and Institute for Quantum Science and Technology, IQST, Pfaffenwaldring 57, D-70569 Stuttgart, Germany
4Beijing Key Laboratory of Nanophotonics and Ultrafine Optoelectronic Systems, School of Physics, Beijing Institute of Technology, Beijing 100081, China
5Departamento de Química Física, Universidad del País Vasco UPV/EHU, E-48080 Bilbao, Spain
6IKERBASQUE Basque Foundation for Science, E-48013 Bilbao, Spain
Published: 2018-12-18, volume 2, page 113
DOI: https://doi.org/10.22331/q-2018-12-18-113
Citation: Quantum 2, 113 (2018).

Preparing and certifying bound entangled states in the laboratory is an intrinsically hard task, due both to the fact that they typically form narrow regions in state space and to the fact that a certificate requires a tomographic reconstruction of the density matrix. Indeed, the previous experiments that have reported the preparation of a bound entangled state relied on such tomographic reconstruction techniques. However, the reliability of these results crucially depends on the extra assumption of an unbiased reconstruction. We propose an alternative method for certifying the bound entangled character of a quantum state that leads to a rigorous claim within a desired statistical significance, while bypassing a full reconstruction of the state. The method comprises a search for bound entangled states that are robust for experimental verification, and a hypothesis test tailored to the detection of bound entanglement that is naturally equipped with a measure of statistical significance. We apply our method to families of states of $3\times 3$ and $4\times 4$ systems, and find that the experimental certification of bound entangled states is well within reach.

Featured image: Schematic picture of the state space, where the boundaries of the PPT, CCNR and physical sets enclose a bound entangled region (BE). The parameter r is the radius of a Hilbert–Schmidt ball around a target state ρ such that all physical states τ within the ball are bound entangled.

@article{Sentis2018boundentangled,
  doi = {10.22331/q-2018-12-18-113},
  url = {https://doi.org/10.22331/q-2018-12-18-113},
  title = {Bound entangled states fit for robust experimental verification},
  author = {Sent{\'{i}}s, Gael and Greiner, Johannes N. and Shang, Jiangwei and Siewert, Jens and Kleinmann, Matthias},
  journal = {{Quantum}},
  issn = {2521-327X},
  publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}},
  volume = {2},
  pages = {113},
  month = dec,
  year = {2018}
}
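For readers who want to screen candidate states against the two boundaries named in the figure caption, the following NumPy sketch implements the PPT and CCNR criteria under their standard definitions. It is an editorial illustration with hypothetical function names, not the authors' code: a state that remains positive under partial transposition (PPT) yet violates the CCNR (realignment) criterion is certified entangled but undistillable, i.e., bound entangled.

```python
import numpy as np

def partial_transpose(rho, d1, d2):
    # Transpose the second subsystem of a (d1*d2) x (d1*d2) density matrix.
    return rho.reshape(d1, d2, d1, d2).transpose(0, 3, 2, 1).reshape(d1 * d2, d1 * d2)

def realign(rho, d1, d2):
    # Realignment map R(rho)_{(i,k),(j,l)} = rho_{(i,j),(k,l)} used by the CCNR criterion.
    return rho.reshape(d1, d2, d1, d2).transpose(0, 2, 1, 3).reshape(d1 * d1, d2 * d2)

def is_ppt(rho, d1, d2, tol=1e-9):
    # True if the partial transpose has no negative eigenvalue.
    return np.linalg.eigvalsh(partial_transpose(rho, d1, d2)).min() > -tol

def violates_ccnr(rho, d1, d2, tol=1e-9):
    # True if the trace norm of the realigned matrix exceeds 1, which certifies entanglement.
    return np.linalg.svd(realign(rho, d1, d2), compute_uv=False).sum() > 1 + tol

# Sanity check on the maximally entangled two-qutrit state (NPT, hence not bound entangled):
d = 3
psi = np.identity(d).reshape(d * d) / np.sqrt(d)    # (1/sqrt(3)) * sum_i |ii>
rho = np.outer(psi, psi)
print(is_ppt(rho, d, d), violates_ccnr(rho, d, d))  # False True
# A bound entangled candidate of the kind targeted in the paper would give True True.
```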
Patient stratification for determining optimal second-line and third-line therapy for type 2 diabetes: the TriMaster study

Beverley M. Shields1, John M. Dennis ORCID: orcid.org/0000-0002-7171-732X1, Catherine D. Angwin1, Fiona Warren2,3, William E. Henley3, Andrew J. Farmer ORCID: orcid.org/0000-0002-6170-44024, Naveed Sattar ORCID: orcid.org/0000-0002-1604-25935, Rury R. Holman ORCID: orcid.org/0000-0002-1256-874X6, Angus G. Jones ORCID: orcid.org/0000-0002-0883-75991, Ewan R. Pearson ORCID: orcid.org/0000-0001-9237-85857, Andrew T. Hattersley ORCID: orcid.org/0000-0001-5620-473X1 & TriMaster Study group
Nature Medicine (2022)

Precision medicine aims to treat an individual based on their clinical characteristics. A differential drug response, critical to using these features for therapy selection, has never been examined directly in type 2 diabetes. In this study, we tested two hypotheses: (1) individuals with body mass index (BMI) > 30 kg/m2, compared to BMI ≤ 30 kg/m2, have greater glucose lowering with thiazolidinediones than with DPP4 inhibitors, and (2) individuals with estimated glomerular filtration rate (eGFR) 60–90 ml/min/1.73 m2, compared to eGFR >90 ml/min/1.73 m2, have greater glucose lowering with DPP4 inhibitors than with SGLT2 inhibitors. The primary endpoint for both hypotheses was the achieved HbA1c difference between strata for the two drugs. In total, 525 people with type 2 diabetes participated in this UK-based randomized, double-blind, three-way crossover trial of 16 weeks of treatment with each of sitagliptin 100 mg once daily, canagliflozin 100 mg once daily and pioglitazone 30 mg once daily added to metformin alone or metformin plus sulfonylurea. Overall, the achieved HbA1c was similar for the three drugs: pioglitazone 59.6 mmol/mol, sitagliptin 60.0 mmol/mol and canagliflozin 60.6 mmol/mol (P = 0.2).
Participants with BMI > 30 kg/m2, compared to BMI ≤ 30 kg/m2, had a 2.88 mmol/mol (95% confidence interval (CI): 0.98, 4.79) lower HbA1c on pioglitazone than on sitagliptin (n = 356, P = 0.003). Participants with eGFR 60–90 ml/min/1.73 m2, compared to eGFR >90 ml/min/1.73 m2, had a 2.90 mmol/mol (95% CI: 1.19, 4.61) lower HbA1c on sitagliptin than on canagliflozin (n = 342, P = 0.001). There were 2,201 adverse events reported, and 447/525 (85%) randomized participants experienced an adverse event on at least one of the study drugs. In this precision medicine trial in type 2 diabetes, our findings support the use of simple, routinely available clinical measures to identify the drug class most likely to deliver the greatest glycemic reduction for a given patient. (ClinicalTrials.gov registration: NCT02653209; ISRCTN registration: 12039221.) Precision medicine aims to tailor treatment to an individual based on their clinical characteristics1. The most successful examples of precision medicine to date have been in cancer and monogenic disease, where genetic sequencing has indicated molecularly distinct subtypes that could benefit from specific treatment strategies2,3. This approach, however, is not suitable for common polygenic complex diseases, so other strategies are needed. Type 2 diabetes is an attractive candidate for a precision medicine approach as it is a heterogeneous disease with varying underlying pathophysiology, and there are many different options for glucose-lowering treatment available that have differing mechanisms of action4. Identifying clinical characteristics or biomarkers robustly associated with differential treatment responses could allow the targeting of specific glucose-lowering agents to those most likely to benefit. In the 2022 American Diabetes Association (ADA)/European Association for the Study of Diabetes (EASD) international guidelines, the targeting of therapy based on a person's clinical features is limited5. In patients with established atherosclerotic cardiovascular disease, glucagon-like peptide 1 receptor agonists (GLP-1RA) or sodium–glucose co-transporter 2 inhibitors (SGLT2i) are recommended5. Patients with either heart failure or chronic kidney disease are recommended to receive SGLT2i. However, these recommendations apply to only 15–20% of individuals6. For most individuals with type 2 diabetes, current guidelines include a broad choice of potential therapies with differentiation between treatment classes based predominantly on costs and side effect profiles, rather than efficacy. Simple clinical features, such as a person's sex; surrogate markers of insulin resistance, such as body mass index (BMI) and triglycerides; or markers of renal function, such as estimated glomerular filtration rate (eGFR), can be used to stratify people with type 2 diabetes into subgroups showing differential responses to glucose-lowering therapies7. Individuals with obesity have been shown to have a greater glycemic reduction on thiazolidinediones compared to individuals without obesity, whereas a higher BMI is associated with a smaller glycemic reduction on DPP4 inhibitors (DPP4i)8,9. For SGLT2i, which acts through inhibiting the active reabsorption of glucose in the proximal tubule, impaired renal function (lower eGFR) is associated with reduced glucose-lowering efficacy10,11. 
In contrast, with some DPP4i, impaired renal function is associated with increased glucose-lowering efficacy, likely due to the drug pharmacokinetics where reduced renal clearance can lead to increased plasma DPP4i concentrations12. These associations, to date, have been observed in independent treatment groups in electronic healthcare records and in post hoc analyses of individual participant data in parallel group randomized controlled trials (RCTs)8,9,10,11. The precision medicine approach to using these data-derived strata needs to be tested in a clinical trial. To date, no trials have directly examined a precision medicine approach to prescribing in type 2 diabetes. The effectiveness of any stratified approach for choosing between therapies will depend upon the extent to which differential responses can be predicted, and, therefore, the true test of a precision medicine approach would be to assess the within-person differential responses to therapy. We carried out a three-drug, three-period, randomized crossover trial to assess two specific hypotheses (Figs. 1 and 2) in people with type 2 diabetes treated with metformin alone or with metformin plus sulfonylurea: Individuals with BMI > 30 kg/m2, compared to those with BMI ≤ 30 kg/m2, will have a greater glycemic reduction with a thiazolidinedione (pioglitazone) than with a DPP4i (sitagliptin). Individuals with eGFR 60–90 ml/min/1.73 m2, compared to those with eGFR > 90 ml/min/1.73 m2, will have a greater glycemic reduction with a DPP4i (sitagliptin) than with an SGLT2i (canagliflozin). Fig. 1: Study design for the TriMaster three-treatment, three-period crossover trial of pioglitazone, sitagliptin and canagliflozin. Six sequences represent the six possible treatment orders for pioglitazone (P), canagliflozin (C) and sitagliptin (S). There was no washout between treatment periods. Fig. 2: The two main hypotheses being tested in TriMaster. Flow diagram showing the comparisons and outcomes for each of the hypotheses: differential response to pioglitazone and sitagliptin between BMI strata, and differential response to sitagliptin and canagliflozin between eGFR strata. Participant retention and baseline characteristics Figure 3 shows participant flow throughout the study and the numbers on each drug at each stage. Fig. 3: Trial profile (CONSORT diagram): patient flow through the stages of the crossover trial and eligibility for primary analysis. Numbers presented for each visit are the numbers assigned to each drug at that time. For an HbA1c value to be valid for primary analysis, it needed to be taken after at least 12 weeks of therapy (exclusions indicated by '<12wks'), and participants needed to have at least 80% adherence on the therapy (exclusions indicated by 'adh<80%'). C, canagliflozin; P, pioglitazone; S, sitagliptin. In total, 742 patients were screened for eligibility between 22 November 2016 and 24 January 2020. Of these, 210 patients did not meet eligibility criteria, and seven withdrew before being randomized. Overall, 525 participants were randomized to one of the six sequences of drug allocations (see Table 1 for participant characteristics). Of these, 20 patients withdrew before the baseline visit (four for health reasons, ten changed their mind, two were ineligible, one moved out of the area and three were unable to be contacted), and two withdrew at the baseline visit due to difficulties taking blood, leaving 503 receiving their first study drug. Overall, 45 participants subsequently withdrew (Fig. 
3), leading to 458 participants (87% of those randomized) who completed all three study periods. In total, there were 1,417 instances of people taking drugs: 469 pioglitazone, 474 sitagliptin and 474 canagliflozin. Table 1 Characteristics of 525 randomized participants For hypothesis 1, 356 participants (68%) had HbA1c results that could be included in primary analysis (that is, took therapy for at least 12 weeks with >80% adherence based on pill count). For hypothesis 2, 342 participants (65%) had HbA1c results that could be included in primary analysis. No participants were missing eGFR or BMI results. There was no evidence of any HbA1c carryover effect but some evidence of a period effect with participants having a mean (95% CI) 1.38 (0.23, 2.54) mmol/mol lower HbA1c in period 2 compared to period 1. There was no difference in period 3 compared to period 1, suggesting that this was not a sustained reduction over the year (Supplementary Table 1). Period effect was adjusted for in subsequent analysis. Before stratification, there was no difference in achieved HbA1c among the three therapies: pioglitazone 59.6 mmol/mol (95% CI 58.5, 60.7), sitagliptin 60.0 mmol/mol (95% CI: 59.0, 61.1) and canagliflozin 60.6 mmol/mol (95% CI: 59.7, 61.6) (P = 0.2) (Supplementary Table 2). Pioglitazone was associated with the lowest rates of discontinuation; sitagliptin was associated with the lowest mean number of side effects; and canagliflozin was associated with the lowest weight on therapy (Supplementary Table 2). The distribution of side effects on the three therapies is shown in Extended Data Fig. 1. Primary analysis The five components of the estimand for both hypotheses are shown in Extended Data Table 1. For hypothesis 1 (BMI-dependent differential glycemic responses to pioglitazone and sitagliptin), 356 (68% of randomized participants) had valid HbA1c values for both pioglitazone and sitagliptin and so were eligible for hypothesis 1 primary analysis (BMI strata). Eligible participants were slightly older and had a slightly lower HbA1c at baseline compared to those without valid HbA1c values but were similar with respect to other characteristics (Supplementary Table 3). Characteristics of patients in the two BMI strata are shown in Supplementary Table 4. Participants with BMI ≤ 30 kg/m2 had a lower mean 1.48 (95% CI: 0.04, 2.91) mmol/mol achieved HbA1c on sitagliptin compared to pioglitazone. Participants with BMI > 30 kg/m2 had a lower mean 1.44 (95% CI: 0.19, 2.70) mmol/mol achieved HbA1c on pioglitazone compared to sitagliptin (Fig. 4a and Extended Data Table 2). This led to a 2.92 (95% CI: 0.99, 4.85) mmol/mol overall difference between BMI strata. Results were similar in a full mixed effects model, adjusting for period (2.88 (95% CI: 0.98, 4.79) mmol/mol; P = 0.003) (Supplementary Table 5). Fig. 4: Effect of stratification on treatment response. Point estimates represent the mean difference in HbA1c values between the two therapies for hypothesis 1—pioglitazone and sitagliptin, with stratification by obesity (n = 141 BMI ≤ 30; n = 215 BMI > 30) (a) and hypothesis 2—canagliflozin and sitagliptin, with stratification by renal function (eGFR) (n = 163 eGFR 60–90; n = 179 eGFR > 90) (b). Error bars represent 95% CIs. Overall difference between strata was determined from drug × strata interaction in mixed effects analysis adjusting for period. 
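The drug × strata interaction analysis described above can be written compactly as a random-intercept mixed model. The sketch below (Python/statsmodels, with hypothetical variable and file names; it illustrates the form of such a model for the BMI hypothesis and is not the trial's Statistical Analysis Plan code) shows how the between-strata difference in drug effect would be estimated while adjusting for study period.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per completed treatment period.
# Illustrative column names: patient_id, hba1c (achieved HbA1c, mmol/mol),
# drug ('pioglitazone' or 'sitagliptin'), bmi_strata ('<=30' or '>30'), period (1, 2 or 3).
data = pd.read_csv("trimaster_long.csv")  # placeholder file name

# Random intercept per participant; the drug x strata interaction term estimates the
# between-strata difference in the drug effect, adjusting for period.
model = smf.mixedlm(
    "hba1c ~ C(drug) * C(bmi_strata) + C(period)",
    data=data,
    groups=data["patient_id"],
)
result = model.fit()
print(result.summary())
```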
A tipping point analysis suggested that the missing data would need to show a 3.1 mmol/mol difference in HbA1c in the opposite direction to the trial results to change the statistical significance of the findings. The association between BMI and difference in response between pioglitazone and sitagliptin was linear on a continuous scale, indicating that there would be an even greater benefit for pioglitazone at higher BMIs and greater benefit for sitagliptin at lower BMIs (Extended Data Fig. 2a). For hypothesis 2 (eGFR and differential responses to sitagliptin and canagliflozin), 342 (65% of randomized participants) had valid HbA1c values for both sitagliptin and canagliflozin and so were eligible for primary analysis for hypothesis 2 (eGFR strata). There were no differences in characteristics between those eligible and ineligible for hypothesis 2 analysis (Supplementary Table 6). Characteristics of patients in the two eGFR strata are shown in Supplementary Table 7. Participants with eGFR 60–90 ml/min/1.73 m2 had a lower mean (95% CI) 1.74 (0.65, 2.85) mmol/mol achieved HbA1c on sitagliptin compared to canagliflozin. Participants with eGFR >90 ml/min/1.73 m2 had a lower mean 1.08 (−0.24, 2.41) mmol/mol achieved HbA1c on canagliflozin compared to sitagliptin (Fig. 4b and Extended Data Table 3). In a full mixed effects model, adjusting for period, this translated into a difference of 2.90 (1.19, 4.61) mmol/mol between eGFR strata (P = 0.001) (Supplementary Table 8). The association between eGFR and difference in response between sitagliptin and canagliflozin was linear on a continuous scale, indicating that there would be an even greater benefit for canagliflozin at higher eGFR values and greater benefit for sitagliptin at lower eGFR values (Extended Data Fig. 2b) Sensitivity analyses show that results did not differ for either of the tested hypotheses when adjusting for study period, when restricted to only those with HbA1c values when on therapy for at least 15 weeks, when adjusting for differences in time intervals between measurements or when adjusting for those who had >18 weeks of therapy (Supplementary Tables 9–11). Secondary outcomes There was no difference in tolerability between BMI strata for pioglitazone compared to sitagliptin (odds ratio (OR) (95% CI) 2.11 (0.66, 6.76)) for drug × BMI strata interaction in a mixed effects logistic regression analysis (P = 0.2; Supplementary Tables 12 and 13 and Extended Data Table 4) or between eGFR strata for canagliflozin compared to sitagliptin (OR (95% CI) 0.424 (0.158, 1.135)) for drug × eGFR strata interaction in a mixed effects logistic regression analysis (P = 0.09; Supplementary Tables 13 and 14 and Extended Data Table 4). There was no difference in the odds of experiencing at least one side effect for either of the drug/strata combinations of interest (OR (95% CI) 0.68 (0.31, 1.45), P = 0.3 for drug × BMI strata interaction; OR (95% CI) 1.46 (0.70, 3.04), P = 0.3 for drug × eGFR strata interaction) (Extended Data Table 5 and Supplementary Tables 15 and 16). There was evidence of period and carryover effects for weight, with participants being heavier on average as the trial progressed and with a carryover effect (P < 0.001), with either canagliflozin or sitagliptin treatment in the previous period associated with lower weight compared to pioglitazone treatment in the previous period (Supplementary Table 17). This means that absolute weight differences observed between drugs need to be treated with caution. 
When analyzing by strata, pioglitazone was associated with a higher weight compared to sitagliptin in both BMI categories, and this was more pronounced in those with BMI > 30 kg/m2 (Extended Data Table 6). There was no difference in weight between eGFR strata for canagliflozin and sitagliptin (Extended Data Table 6). There was no evidence of any difference in the odds of experiencing hypoglycemia by BMI strata for pioglitazone and sitagliptin or by eGFR strata for sitagliptin and canagliflozin (Extended Data Table 7 and Supplementary Tables 18 and 19). Participant drug preference was a pre-specified secondary analysis and is reported in a separate publication13. There was no difference in drug preference by strata. Pioglitazone was ranked higher than sitagliptin in 131/265 (49%) participants in the BMI > 30 kg/m2 strata compared to 78/183 (43%) in the BMI < 30 kg/m2 strata (P = 0.2; ten participants expressed no preference). Sitagliptin was ranked higher than canagliflozin in 112/214 (52%) in the eGFR 60–90 ml/min/1.73 m2 strata compared to 105/235 (45%) in the eGFR > 90 ml/min/1.73 m2 strata (P = 0.1; nine participants expressed no preference). There were 2,201 adverse events reported throughout the study: 56 pre-trial, one post-trial and 2,144 while on therapy in the trial. Table 2 summarizes the adverse events on therapy reported throughout the trial. In total, 447/525 (85%) randomized participants experienced adverse events on at least one of the study drugs. Forty-five events were classed as serious (three participants died), but none of these was related to the study drugs. Table 2 Adverse events on each of the three study drugs This randomized crossover study provides prospective trial evidence to directly support a stratified approach for therapy to manage glycemia in type 2 diabetes. Our results demonstrate that, for second-line and third-line therapy in type 2 diabetes, simple predefined stratification using BMI and renal function can determine the choice of the drug most likely to be effective for glucose lowering. We show here that, among patients with type 2 diabetes on background metformin or combination metformin and sulfonylurea therapy, stratification based on BMI and eGFR is associated with differential glucose-lowering responses to canagliflozin, sitagliptin and pioglitazone. For a population of people with type 2 diabetes, treating patients with the drug proposed best for their strata rather than the alternative drug could potentially lead to an overall mean improvement of ~3 mmol/mol in those who are able to tolerate the therapy. This stratification could be used to help select glucose-lowering therapies for individuals in clinical practice. For participants with BMI > 30 kg/m2, a lower HbA1c was achieved on pioglitazone compared to sitagliptin, whereas, for those with BMI < 30 kg/m2, a lower HbA1c was achieved with sitagliptin. For participants with impaired renal function (eGFR between 60 ml/min/1.73 m2 and 90 ml/min/1.73 m2), a lower HbA1c was achieved on sitagliptin compared to canagliflozin, whereas, for those with normal renal function (eGFR > 90 ml/min/1.73 m2), a lower HbA1c was achieved on canagliflozin. These findings are concordant with our original study hypothesis. There was no evidence by strata in reported drug tolerability or overall rates of side effects. We found that using different strata led to clinically meaningful differences (~3 mmol/mol) in achieved HbA1c among glucose-lowering therapies. 
This equates to approximately 3 years without requiring additional therapy, given that the median deterioration in HbA1c in people with type 2 diabetes is 1 mmol/mol per year14. In contrast, without stratification, all three therapies were, on average, equivalent in achieved HbA1c. Although these differences are of a smaller magnitude than the benefits seen with targeted therapy in monogenic diabetes (for example, a ~30 mmol/mol difference in response between metformin and gliclazide treatment for patients with HNF1A_MODY15), the overall improvement through stratification would likely have a pronounced effect at the population level, as type 2 diabetes is far more common (90% of all diabetes for type 2 compared to <1% for MODY)16,17. A lack of difference in tolerability or overall incidence of side effects between strata suggests that, if choice of therapy were to be based solely on the optimal strata for glycemic response, this would not likely lead to any overall increase in these detrimental effects. However, consideration would need to be made regarding the weight gain associated with pioglitazone, which was greater in individuals with obesity and would need to be balanced against the greater HbA1c improvement. Further work is needed to determine the effect of this on other non-glycemic effects, such as blood pressure. These findings, based on binary, free-to-implement strata, establish the principle of stratification helping to target type 2 diabetes treatment to those most likely to benefit, and they represent a step forward in the translation of type 2 diabetes precision medicine into clinical practice. However, ultimately, a more sophisticated 'precision' approach using models that integrate multiple individual-level clinical features (for example, BMI, HbA1c and eGFR) on a continuous scale will have the greatest utility for clinical practice7,18. Using individual-level features will likely enable the identification of more 'extreme' patient phenotypes with large differences in HbA1c reduction than we demonstrated with binary strata based on clinically defined cutoffs. For example, when we look at the impact of BMI or renal function on a continuous scale rather than two dichotomous groups, it is clear that those with more extreme values have greater differential response to the treatment. Such models could potentially be optimized to incorporate more advanced biomarkers and genetics and to evaluate additional outcomes beyond HbA1c19. Our findings are consistent with previous research from trial subgroup and observational data that has suggested that higher BMI may be associated with increased glucose lowering to thiazolidinediones9 and modestly reduced glucose lowering to DPP4 inhibtors8 and with research suggesting that lower eGFR is associated with reduced response to SGLT2 inhibitors10,11. Pioglitazone acts through altering the transcription of genes influencing carbohydrate and lipid metabolism in adipocytes20, which could lead to a greater glycemic effect in those with higher BMI. For sitagliptin, which reduces degradation of incretin hormones, including GLP-1, thereby potentiating insulin secretion, the association of greater HbA1c reduction in those with lower BMI is less clear. 
Potential mechanisms include the impact of high insulin resistance on the action of a drug that acts predominantly through potentiating insulin secretion, impaired GLP-1 secretion in obesity or direct effects of lipotoxicity on GLP-1 receptor expression, which have been demonstrated in animal models21,22,23. For SGLT2i, the drug mechanism of action to lower glucose levels per se (as opposed to its other effects) is through inhibition of renal tubular glucose reabsorption, and a low eGFR might, therefore, be expected to lead to reduced filtration of glucose and, subsequently, reduced glycosuria with SGLT2i therapy24. A key strength of this RCT is that we have shown that these differences are observed in the crossover setting, allowing robust assessment of differential response to therapy within individuals and, therefore, direct assessment of stratified treatment that cannot be undertaken from existing trials with a parallel group design. The crossover design also requires a much smaller sample size compared to parallel group trials. Our RCT had several limitations. The crossover design, although more powerful for assessing within-person differences, does require careful design to avoid period and carryover effects. We did see a period effect with a reduction in HbA1c in the second period, but, in line with our Statistical Analysis Plan, we adjusted for this in our analysis, and this was not a sustained reduction over the year, which would indicate a more general decline in glycemic control. We did not see a carryover effect for HbA1c, our primary outcome, but there was carryover with weight limiting the interpretability of the effect sizes for the associations seen with weight. In addition, the crossover design enabled an assessment of only short-term outcomes, meaning that we did not evaluate durability of HbA1c reduction, cardiovascular outcomes or development of diabetes complications. However, our previous work using parallel group trial data and observational data suggests that differences in response associated with strata are maintained over time, with early HbA1c response representative of long-term effects8,25. Most of our study population was male (73%) and self-reported White ethnicity (94%), which limits conclusions about the relative benefits and risks of these therapies in females and in other ethnic groups. We assessed only specific glucose-lowering agents, and findings cannot be assumed to reflect class effects of SGLT2 inhibitors, DPP4 inhibitors and thiazolidinediones. We chose a per-protocol analysis rather than an intention-to-treat approach for our primary analysis, as we could not obtain a valid HbA1c value when participants had not taken the therapy for at least 12 weeks, and imputation with baseline measures was deemed inappropriate due to the pre-study baseline not being representative for later study periods. This means that the inferences from this study apply only to those who can tolerate the therapies of interest. There were some minor differences between individuals included and excluded from the BMI-defined strata (hypothesis 1), but there was no difference in tolerability between the study drug and/or strata combinations. In addition, tipping point analysis indicated that the missing data would have to show large differences in the opposite direction to change the statistical significance of our findings. 
Therefore, we are confident that our findings are not artifacts of our analytical approach and are reflective of the effects seen in those who are able to tolerate the respective therapies. It should be recognized that we studied only patients treated with metformin (with or without a sulphonylurea) at baseline and that the glycemia-related and tolerability-related outcomes that we studied are not the only factors considered by clinicians and patients when choosing a glucose-lowering therapy for a patient with type 2 diabetes. In patients with established atherosclerotic cardiovascular disease (or those at elevated risk), or with chronic kidney disease or heart failure, SGLT2 inhibitors are the recommended drugs in international guidelines, and GLP1-RAs are recommended for those with atherosclerotic cardiovascular disease26. In addition, despite still being a low-cost treatment option proposed in guidelines, prescribing of thiazolidinediones is declining27,28. Any precision medicine approach based on short-term outcomes, such as glycemia, will need to be embedded in existing treatment pathways based on the longer-term cardiorenal risk benefits of specific therapies. In patients without specific cardiorenal indications (~80% of patients6), the 2022 ADA/EASD updated guidelines offer many treatment options, so considering likely glycemic response (based on participant characteristics), alongside other factors considered in current practice (such as cost and side effect profile), may offer a low-cost approach to improving treatment response and patient outcomes. We show here, in a randomized crossover study, that clinically relevant differences in glycemic responses to therapy in type 2 diabetes can be seen when stratifying a patient population based on BMI and eGFR, leading to benefits in those who tolerate these therapies that would not be observed if considering overall glycemic response to the three drugs in the population as a whole. This study represents a prospective demonstration of a potential stratified approach to type 2 diabetes treatment. This study was approved by the UK Health Research Authority Research Ethics Committee South Central—Oxford A (16/SC/0147). This trial was conducted and analyzed in line with the previously published protocol29 and the Statistical Analysis Plan (the full TriMaster Statistical Analysis Plan is freely available and can be downloaded from https://ore.exeter.ac.uk/repository/handle/10871/125162). The trial was registered at ClinicalTrials.gov (NCT02653209) and the ISRCTN registry (12039221). Major protocol amendments were approved by the Royal Devon University Healthcare NHS Foundation Trust as sponsor, the UK Health Research Authority (HRA) Research Ethics Committee South Central—Oxford A and the UK Medicines and Healthcare products Regulatory Agency (where relevant). Details of all 12 major amendments are included in Extended Data Table 7: Protocol Amendments in the TriMaster randomized three-way crossover trial. We conducted a double-blind, randomized crossover trial of three glucose-lowering therapies (pioglitazone 30 mg once daily, sitagliptin 100 mg once daily and canagliflozin 100 mg once daily) in 24 UK centers (Supplementary Table 20). The three-way crossover trial was undertaken as an efficient, faster and more cost-effective approach to address both hypotheses, requiring fewer participants than performing two two-way crossover studies. 
In addition, this study design allows a unique opportunity to compare the effects of these three medications within a single person, including participant tolerance and therapy preference13 Study participants Participants were adults aged ≥30 years and ≤80 years, with a clinical diagnosis of type 2 diabetes for at least 12 months and treated with either metformin alone or two classes of oral glucose-lowering therapy (given either as separate or combined medications) that do not include a DPP4 inhibitor, an SGLT2 inhibitor or a thiazolidinedione. This was likely to be metformin and sulphonylurea but included prandial glucose regulators nateglinide or repaglinide. No change of diabetes treatment (new therapy or dose change) was permitted in the previous 3 months. Participants had HbA1c > 58 mmol/mol (7.5%) and ≤110 mmol/mol (12.2%) and eGFR ≥ 60 ml/min/1.73 m2, both results confirmed at a screening visit, and were able and willing to give informed consent. Patients were excluded if screening blood tests identified alanine transaminase (ALT) > 2.5× upper limit of the assay normal range (ULN) or known liver disease, specifically bilirubin >30 μmol/L associated with other evidence of liver failure; HbA1c ≤ 58 mmol/mol (7.5%) or >110 mmol/mol (12.2%); or eGFR <60 ml/min/1.73 m2. Treatment with insulin in the previous 12 months or with any of the study drugs within the previous 3 months were exclusion criteria, as was current treatment with corticosteroids, rifampicin, gemfibrozil, phenytoin and carbamazepine, loop diuretics (furosemide or bumetanide) or antibiotics for active infection. Presence of limb ischemia shown by absence of both pulses in one or both feet at screening, a foot ulcer requiring antibiotics in the previous 3 months or any active infection requiring antibiotics were also exclusions. Patients could not be recruited if they were undergoing current/ongoing investigation for macroscopic hematuria, had recent (within 3 months) or planned major surgery or had experienced an acute cardiovascular episode (angina, myocardial infarction, stroke, or transient ischemic episode) within the previous 3 months. Also excluded were patients with any history of heart failure, bladder carcinoma, diabetic ketoacidosis or pancreatitis. Patients were not recruited if they were pregnant, breastfeeding or planning a pregnancy over the study period, and concurrent participation on another clinical trial of investigational medicinal product (CTIMP) where the investigational medicinal product (IMP) was currently being taken, without a sufficient washout period (5× half-life of the IMP/potential IMP), was also not permitted. Participants were identified in primary care and from existing research cohorts. People with type 2 diabetes were eligible if aged 30–80 years on stable doses of metformin alone or metformin plus a sulfonylurea, with HbA1c > 58 mmol/mol (>7.5%) and ≤110 mmol/mol (≤12.2%). Figure 1 shows the design of the trial. Participants provided written informed consent. Ethnicity was self-reported by participants against standard 2011 UK Office for National Statistics coding. Those meeting screening criteria and consenting to take part were randomized to one of the six possible therapy sequences and asked to take each allocated therapy in turn for 16 weeks, with both participants and investigators blinded to therapy allocation. There was no washout between therapies. 
The 16-week treatment period was designed to minimize any carryover (the effects of the previous treatment on HbA1c in the subsequent period): all three drugs have half-lives between 7 hours and 14 hours, and HbA1c measurement reflects the previous 8–12 weeks of glycemia. Therefore, the end-of-treatment-period HbA1c represented the initial glycemic response to the drug for that individual.

Randomization and blinding

Randomization was carried out at the baseline visit as described in the study protocol and Statistical Analysis Plan. The three therapies were allocated in random order according to six possible treatment orders: ABC, ACB, BAC, BCA, CAB and CBA. Drugs were blinded by over-encapsulation (Tayside Pharmaceuticals), with allocations blinded to the participants, study team, study researchers and study statistician.

Study procedure

Within 2 weeks of screening, participants attended a baseline fasting visit. Subsequent research visits were scheduled to take place after 16–18 weeks of study treatment, but participants were offered the opportunity to stop a treatment early and move on to the next treatment period if they were unable to tolerate the therapy. At the baseline and end-of-therapy visits, blood samples were collected for measurement of HbA1c; weight and blood pressure were measured; and the participant's experiences of the therapy and potential side effects were recorded. Participants were compensated for travel expenses only.

Biochemistry measures

Recruiting centers used local results to confirm eligibility, but all biochemical tests used in analysis, except HbA1c, were centrally analyzed at Exeter Clinical Laboratory. These included albumin, aspartate aminotransferase (AST), bilirubin, NT-pro-BNP, cholesterol, C-peptide, creatinine, fructosamine, glucose, HDL cholesterol, islet autoantibodies (GAD, IA2 and ZnT8), insulin, LDL cholesterol and triglycerides. To ensure standardization across centers, eGFR was calculated using the CKD-EPI equation by the central database, based on serum creatinine, sex, ethnicity and age as collected at baseline. All HbA1c assessment was performed by recruiting center NHS laboratories to ensure that results were available for screening and to inform final patient preference. HbA1c assays were CE marked, fully validated and accredited by the UK Accreditation Service.

Measurement of adherence

Participants were asked to return their medication bottle and all unused capsules at the end of the study, with adherence in each treatment period expressed as a percentage: the number of tablets taken divided by the expected number of tablets to be taken (the number of days between study visits). Where a pill count was not available, adherence was based on four questions around self-reported compliance (whether the patient ever forgot to take their medicine, whether they were careless about taking their medicine, whether they stopped taking their medicine if they felt unwell and whether they stopped taking their medicine if they felt better). Participants were considered to be non-adherent if they answered yes to at least three of the four questions.

Primary outcome

The primary outcome was the HbA1c value achieved after each treatment period, as long as the participant had taken the study drug for at least 12 weeks and had at least 80% adherence on therapy. The following secondary outcomes were assessed: Tolerability, defined as taking the drug for at least 12 weeks.
Participant-reported side effects, assessed at the end of each treatment period (see Supplementary Table 21 for the full list). For analysis by strata, these were summarized into a binary variable 'any' or 'none' for each drug for each participant. We defined side effects as any experienced in the treatment periods, including those where they were also reported at baseline. Weight on each therapy, measured at the end of each treatment period. Participant-reported experience of hypoglycemia at the end of each treatment period (binary variable: experienced at least one episode of hypoglycemia versus none). Low blood glucose was defined as either 'episodes of hypoglycemia where you felt confused, disorientated or lethargic and were unable to treat yourself' or 'hypoglycemic episodes where you were unconscious or had a seizure and needed glucagon or intravenous glucose'. At both baseline and subsequent timepoints, number of episodes or experience of hypoglycemia was self-reported and collected on data collection forms. Patient preference of therapy. Participants ranked the three drugs in overall preference: 1 for most preferred, 3 for least preferred. In line with a change to the Statistical Analysis Plan that we specified before data lock, analysis and unblinding, we did not analyze gender differences as a secondary outcome as our study was powered for a 60:40 split in strata, whereas 73% of our cohort were male. Adverse event recording Adverse events or reactions were recorded as they presented or at research visits and reported to the sponsor and Data Monitoring Committee at regular intervals. Adverse events were rated in terms of severity, seriousness and causality and coded according to MedDRA dictionary terms. Changes to protocol All protocol amendments are detailed in Extended Data Table 8. All analyses were carried out in line with the TriMaster Statistical Analysis Plan, which was approved before data lock and drug allocations being provided. Investigation of participant preference, including additional exploratory analysis, is reported separately13. All analysis was carried out using a validated version of Stata 16.1. In line with the Statistical Analysis Plan, statistical significance was defined as P < 0.05, based on two-sided tests of significance. Effect of stratification by clinical features Figure 4 shows the overall approach for the primary analysis for the two hypotheses. For each hypothesis and corresponding drug comparison, the aim was to assess whether the difference in achieved HbA1c for the two drugs differed for the two strata (either BMI above or below 30 kg/m2 or eGFR above or below 90 ml/min/1.73 m2), the null hypothesis being that the difference in HbA1c between drugs will be the same between strata. Rationale for a per-protocol approach Analysis was carried out using a per-protocol approach. For a participant to be included in the primary analysis, it was necessary to have a valid HbA1c value. For intention-to-treat analysis, in the absence of a valid HbA1c value, some form of imputation of missing values would be required. This is more challenging in a crossover setting30, as parallel group approaches, such as imputing with the baseline, are not valid, as the pre-treatment baseline is an appropriate baseline only for the first period. However, we recognize that the missing data could be informative. 
Therefore, to address this issue, we proposed two further analyses to explore the extent to which the missing HbA1c values could affect the final results: a tipping point analysis (see 'sensitivity analysis') and a secondary analysis of tolerability.

Carryover and period effects

Carryover and period effects were checked before the main analysis. In line with the Statistical Analysis Plan, we examined first-order carryover effects (that is, carryover from the preceding period only) using mixed effects models with drug, period and a carryover variable (that is, drug in previous period) as fixed effects, participant as a random effect and HbA1c as the outcome. The carryover variable used the same coding as the drug variable, or 0 if in the first period (adjustment for period removes this part of the carryover term in analysis). We adjusted for period in the primary analysis by adding it as a fixed effect in the mixed effects models.

For each hypothesis, the mean (95% CI) for the difference in HbA1c between the two drugs of interest was calculated, as was the mean (95% CI) difference of these differences (treatment contrasts) between the two strata of interest (Fig. 4). The distribution of the HbA1c difference was checked and confirmed to be normal. For the main analysis, a mixed effects model was used for each hypothesis to allow adjustment for study period, with HbA1c as the outcome, participant as the random effect and drug, period, stratum and drug × stratum interaction as fixed effects. The drug × stratum interaction represented the effect size of interest.

Pre-specified sensitivity analyses

We examined whether substantial amendment to protocol SA6 (expanding the inclusion criteria to include participants treated with metformin alone as well as metformin and sulfonylureas) affected the main findings by adding an 'epoch' term to the model, where 'epoch' was a binary variable representing before or after the change in inclusion criteria. We repeated the main analysis but included only participants who completed the full treatment period (at least 15 weeks, to allow for flexibility in arranging study visits). We examined whether receiving the study drug for >18 weeks (substantial amendment to protocol SA12 in relation to the COVID-19 pandemic) affected the main findings by adding a binary variable to the model for those with treatment periods greater or less than 18 weeks.

Tipping point analysis

As we were analyzing using a per-protocol approach, a tipping point analysis was used to explore what change in treatment contrast would be required, as a result of the missing data, to change the statistical significance of the outcome31. The tipping point, Δ, was defined as the change that would alter the outcome of the study at the 5% significance level, calculated by: $$\Delta = \frac{(1.96 \times \mathrm{SE}) - \tau }{f}$$ where τ is the main effect size from the analysis (difference in treatment contrasts between strata), SE is the standard error of this effect size and f is the fraction of the cohort with missing data.

Secondary analyses relating to stratification hypotheses

Tolerability: We tested whether the odds of each of tolerability, side effects (any versus none) and hypoglycemia (any versus none) differed by the two hypothesized drug/stratum combinations.
Each of these secondary outcomes was a binary variable, so analysis followed the same approach as the primary analysis, using the same predictors but with mixed effects logistic regression models instead. As before, the drug × stratum interaction represented the effect size of interest, but this time the output was an OR, as the data were binary.

Weight: We assessed differences in weight by drug/stratum as in the primary analysis hypotheses, using mixed effects models similar to those used in the primary analysis but with weight as the outcome.

Patient preference: For each hypothesis, we examined whether a patient's preferred drug differed by strata. All other analysis relating to patient preference is reported in an additional paper submitted separately13. For each hypothesis, we compared whether the proportions preferring each of the two drugs of interest differed by strata using the chi-squared test.

Secondary analyses of overall differences in outcomes

Overall weight and HbA1c: Mean and s.d. for weight and HbA1c for each of the three drugs were examined, with statistical differences across all three determined using mixed effects models with drug (three-level factor) as the fixed effect and participant ID as the random effect.

Overall side effects: This analysis was descriptive, examining the proportions reporting each of the 16 side effects that the patients were asked about for each of the three drugs.

Overall tolerability: We report proportions not tolerating therapy (that is, not completing at least 12 weeks of therapy) for each drug. As specified in the Statistical Analysis Plan, we compared tolerability using both a Mantel–Haenszel approach and a mixed effects model. Results were similar using both approaches, but, for clarity, we present only the P values based on the mixed effects model with tolerability as the outcome, drug and period as fixed effects and participant ID as the random effect.

For each hypothesis, to detect a difference of 0.35 s.d. (equivalent to a 3.0-mmol/mol difference between the two strata on the two different therapies) with 90% power and α = 0.05, we required 172 participants in each stratum. To allow for the possibility of uneven numbers in each stratum (up to a 60%:40% split), the sample size was increased to 358. To allow for a withdrawal rate of 15% and exclusion from primary analysis due to discontinuing at least one study drug (estimated at 19%), the sample size was increased to 520.

Differences to Statistical Analysis Plan

For side effects, in the Statistical Analysis Plan we stated that we would examine only new side effects (that is, not previously experienced), but we changed to any side effects after discussion of the presentation of findings. Examining only new side effects would not have allowed us to show the change in side effects from baseline: it was apparent that the proportion experiencing some side effects went down on treatment compared to baseline, whereas for others it went up, and analyzing all side effects allowed us to demonstrate this. It also meant that participants could record the same side effect on two different drugs, which would not otherwise have been possible. The full distribution of participants reporting side effects at baseline and on each of the therapies is presented for completeness.
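The sample-size figures and the tipping-point formula quoted above can be reproduced with standard normal-approximation calculations. The short Python sketch below is our illustration only (the trial analyses themselves were performed in Stata 16.1); the specific inflation steps for the 60%:40% split, the 15% withdrawal rate and the 19% exclusion rate are one plausible reading of the text, not the trial team's code.

```python
# Illustrative re-derivation of the quoted sample-size figures (172, 358, 520) and of the
# tipping-point formula; assumptions are noted inline, and this is not the trial's own code.
import math
from scipy.stats import norm

alpha, power, d = 0.05, 0.90, 0.35                    # two-sided alpha, power, effect in s.d. units
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)

# Two-sample comparison of treatment contrasts between strata, equal allocation per stratum.
n_per_stratum = math.ceil(2 * (z_a + z_b) ** 2 / d ** 2)                  # -> 172

# Same power with an uneven 60%:40% split between strata (assumed reading of the text).
n_uneven = math.ceil((1 / 0.6 + 1 / 0.4) * (z_a + z_b) ** 2 / d ** 2)     # -> 358

# Inflate for 15% withdrawal and 19% exclusion from primary analysis (assumed reading).
n_recruited = math.ceil(n_uneven / (0.85 * 0.81))                         # -> 520

print(n_per_stratum, n_uneven, n_recruited)

def tipping_point(tau, se, f):
    """Delta = ((1.96 * SE) - tau) / f: the shift in treatment contrast among participants
    with missing data needed to move the result across the 5% significance threshold."""
    return ((1.96 * se) - tau) / f
```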
Additional analyses to original Statistical Analysis Plan There were no major changes to the analysis proposed compared to the original Statistical Analysis Plan, but some minor additional analysis was carried out to explore differences in side effects (any versus none) between drugs and strata, as a way of capturing the overall burden of side effects. A strata-specific analysis for each side effect would entail multiple testing without prior hypotheses and increased likelihood of type 1 errors, so this was deemed inappropriate. We also report numbers of adverse events for each drug, split by severity and relatedness and whether they were associated with withdrawal or non-tolerability. A further sensitivity analysis was conducted to explore the impact of residual autocorrelation arising from time trends or treatment carryover on the main effect sizes (drug × strata interactions). The mixed effects models for the primary analysis were extended by defining an exponential autocorrelation structure for the residual errors. This allowed for the pairwise correlation between HbA1c measurements to decrease systematically as the time gap increased and could account for irregularly spaced intervals. Finally, we added in scatter plots to show the association on a continuous scale between each of BMI and eGFR against the difference in HbA1c for the two drugs of interest for each hypothesis. We present Pearson correlation coefficients to show the strength of associations alongside these. No imputation was carried out, and missing data were minimal. Participants required eGFR and BMI to be included in primary analysis, but this was available on all randomized participants. We report n for each analysis and in the tables of results throughout. Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. To minimize the risk of patient re-identification, de-identified individual patient-level clinical data are available under restricted access. Requests for access to anonymized individual participant data and study documents should be made to the corresponding author and will be reviewed by the Peninsula Research Bank Steering Committee. Access to data through the Peninsula Research Bank will be granted for requests with scientifically valid questions by academic teams with the necessary skills appropriate for the research. Data that can be shared will be released with the relevant transfer agreement. Requests for access to code should be made to the corresponding author and will be reviewed by the Peninsula Research Bank Steering Committee. Access to code through the Peninsula Research Bank will be granted for requests with scientifically valid questions by academic teams with the necessary skills appropriate for the research. Code will be released by the lead statistician. Bell, J. Stratified medicines: towards better treatment for disease. Lancet 383, S3–S5 (2014). Jackson, S. E. & Chester, J. D. Personalised cancer medicine. Int. J. Cancer 137, 262–266 (2015). Hattersley, A. T. & Patel, K. A. Precision diabetes: learning from monogenic diabetes. Diabetologia 60, 769–777 (2017). Chung, W. K. et al. Precision Medicine in Diabetes: a consensus report from the American Diabetes Association (ADA) and the European Association for the Study of Diabetes (EASD). Diabetes Care 43, 1617–1635 (2020). Davies, M. J. et al. Management of Hyperglycemia in Type 2 Diabetes, 2022. 
A consensus report by the American Diabetes Association (ADA) and the European Association for the Study of Diabetes (EASD). Diabetes Care 45, 2753–2786 (2022). Hinton, W., Feher, M., Munro, N., Walker, M. & de Lusignan, S. Real-world prevalence of the inclusion criteria for the LEADER trial: data from a national general practice network. Diabetes Obes. Metab. 21, 1661–1667 (2019). Dennis, J. M. Precision medicine in type 2 diabetes: using individualized prediction models to optimize selection of treatment. Diabetes 69, 2075–2085 (2020). Dennis, J. M. et al. Precision medicine in type 2 diabetes: clinical markers of insulin resistance are associated with altered short- and long-term glycemic response to DPP-4 inhibitor therapy. Diabetes Care 41, 705–712 (2018). Dennis, J. M. et al. Sex and BMI alter the benefits and risks of sulfonylureas and thiazolidinediones in type 2 diabetes: a framework for evaluating stratification using routine clinical and individual trial data. Diabetes Care 41, 1844–1853 (2018). Gilbert, R. E. et al. Impact of age and estimated glomerular filtration rate on the glycemic efficacy and safety of canagliflozin: a pooled analysis of clinical studies. Can. J. Diabetes 40, 247–257 (2016). Cherney, D. Z. I. et al. Pooled analysis of phase III trials indicate contrasting influences of renal function on blood pressure, body weight, and HbA1c reductions with empagliflozin. Kidney Int. 93, 231–244 (2018). Goldenberg, R. M. Choosing dipeptidyl peptidase-4 inhibitors, sodium-glucose cotransporter-2 inhibitors, or both, as add-ons to metformin: patient baseline characteristics are crucial. Clin. Ther. 39, 2438–2447 (2017). Shields, B. M. et al Patient preference for second and third line therapies in type 2 diabetes: a prespecified secondary analysis of the TriMaster study. Nat. Med. (2022). Donnelly, L. A. et al. Rates of glycaemic deterioration in a real-world population with type 2 diabetes. Diabetologia 61, 607–615 (2018). Pearson, E. R. et al. Genetic cause of hyperglycaemia and response to treatment in diabetes. Lancet 362, 1275–1281 (2003). Xu, G. et al. Prevalence of diagnosed type 1 and type 2 diabetes among US adults in 2016 and 2017: population based study. BMJ 362, k1497 (2018). Shields, B. M. et al. Population-based assessment of a biomarker-based screening pathway to aid diagnosis of monogenic diabetes in young-onset patients. Diabetes Care 40, 1017–1025 (2017). Dennis, J. M., Shields, B. M., Henley, W. E., Jones, A. G. & Hattersley, A. T. Disease progression and treatment response in data-driven subgroups of type 2 diabetes compared with models based on simple clinical features: an analysis using clinical trial data. Lancet Diabetes Endocrinol. 7, 442–451 (2019). Gloyn, A. L. & Drucker, D. J. Precision medicine in the management of type 2 diabetes. Lancet Diabetes Endocrinol. 6, 891–900 (2018). Smith, U. Pioglitazone: mechanism of action. Int. J. Clin. Pract. Suppl. 13–18 (2001). Alssema, M. et al. Preserved GLP-1 and exaggerated GIP secretion in type 2 diabetes and relationships with triglycerides and ALT. Eur. J. Endocrinol. 169, 421–430 (2013). Kang, Z. F. et al. Pharmacological reduction of NEFA restores the efficacy of incretin-based therapies through GLP-1 receptor signalling in the beta cell in mouse models of diabetes. Diabetologia 56, 423–433 (2013). Matikainen, N. et al. GLP-1 responses are heritable and blunted in acquired obesity with high liver fat and insulin resistance. Diabetes Care 37, 242–251 (2014). Ferrannini, E., Veltkamp, S. 
A., Smulders, R. A. & Kadokura, T. Renal glucose handling: impact of chronic kidney disease and sodium-glucose cotransporter 2 inhibition in patients with type 2 diabetes. Diabetes Care 36, 1260–1265 (2013). Dennis, J. M. et al. Evaluating associations between the benefits and risks of drug therapy in type 2 diabetes: a joint modeling approach. Clin. Epidemiol. 10, 1869–1877 (2018). Buse, J. B. et al. 2019 update to: Management of hyperglycaemia in type 2 diabetes, 2018. A consensus report by the American Diabetes Association (ADA) and the European Association for the Study of Diabetes (EASD). Diabetologia 63, 221–228 (2020). Le, P. et al. Use of antihyperglycemic medications in U.S. adults: an analysis of the National Health and Nutrition Examination Survey. Diabetes Care 43, 1227–1233 (2020). Dennis, J. M. et al. Time trends in prescribing of type 2 diabetes drugs, glycaemic response and risk factors: a retrospective analysis of primary care data, 2010–2017. Diabetes Obes. Metab. 21, 1576–1584 (2019). Angwin, C. et al. TriMaster: randomised double-blind crossover study of a DPP4 inhibitor, SGLT2 inhibitor and thiazolidinedione as second-line or third-line therapy in patients with type 2 diabetes who have suboptimal glycaemic control on metformin treatment with or without a sulfonylurea—a MASTERMIND study protocol. BMJ Open 10, e042784 (2020). Senn, S. Cross-over Trials in Clinical Research (Wiley, Chichester, 2002). Yan, X., Lee, S. & Li, N. Missing data handling methods in medical device clinical trials. J. Biopharm. Stat. 19, 1085–1098 (2009). We thank all study participants. We gratefully acknowledge the TriMaster central coordinating team, all members of the TriMaster study group, the MASTERMIND consortium, the Data Monitoring Committee and the Trial Steering Committee (Supplementary Information). In particular, we thank S. Senn for invaluable guidance on the analysis for this trial. In addition, we thank the Exeter NIHR Clinical Research Facility and the Exeter Clinical Trials Unit (CTU), particularly L. Quinn and S. Creanor, for their support with the study, and the CTU Data Team. We thank A. Kerridge and S. Todd of the R&D and Pharmacy Departments at the Royal Devon and Exeter NHS Foundation Trust for support and sponsorship. This trial is part of the MASTERMIND (MRC APBI Stratification and Extreme Response Mechanism in Diabetes) consortium and is supported by UK Medical Research Council study grant number MR/N00633X/1 (B.M., J.D., C.A., W.H., A.F., N.S., R.H., A.J., E.P. and A.H.). The TriMaster trial was supported by the National Institute for Health and Care Research Exeter Biomedical Research Centre and the National Institute for Health and Care Research Exeter Clinical Research Facility. The funders had no role in study design, data collection, data analysis, data interpretation, decision to publish or preparation of the manuscript. The views expressed are those of the author(s) and not necessarily those of the MRC, the NIHR or the Department of Health and Social Care. For the purpose of open access, the corresponding author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising. Department of Clinical and Biomedical Sciences, University of Exeter, Exeter, UK Beverley M. Shields, John M. Dennis, Catherine D. Angwin, Angus G. Jones & Andrew T. 
Hattersley

Clinical Trials Unit, University of Exeter Medical School, Exeter, UK Fiona Warren

Institute of Health Research, University of Exeter Medical School, Exeter, UK Fiona Warren & William E. Henley

Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK Andrew J. Farmer

School of Cardiovascular & Metabolic Health, University of Glasgow, Glasgow, UK Naveed Sattar

Diabetes Trials Unit, Radcliffe Department of Medicine, University of Oxford, Oxford, UK Rury R. Holman

Population Health & Genomics, School of Medicine, University of Dundee, Dundee, UK Ewan R. Pearson

B.S. helped design the study, wrote the Statistical Analysis Plan, performed analysis and drafted the manuscript. J.D. discussed the results and helped write the manuscript. C.A. was the trial manager and helped design the study and edited the manuscript. F.W. was the second analyst for the trial and edited the manuscript. W.H. helped design the study, advised on the Statistical Analysis Plan and edited the manuscript. N.B. advised on design of the patient preference questionnaires and edited the manuscript. A.J.F. advised on study design and edited the manuscript. N.S. advised on study design and edited the manuscript. R.H. advised on study design and statistical analysis and edited the manuscript. A.J. advised on study design and edited the manuscript. E.P. helped design the study and edited the manuscript. A.H. was Chief Investigator on the study, led the study design, advised on analysis and edited the manuscript. Correspondence to Andrew T. Hattersley.

N.S. is supported by a BHF Centre of Excellence Award (RE/18/6/34217). R.H. is an Emeritus National Institute for Health Research Senior Investigator. A.H. is a Wellcome Senior Investigator (098395/Z/12/Z) and a Senior Investigator at the NIHR. A.H., B.S., A.J. and C.A. are supported by the NIHR Exeter Clinical Research Facility. A.H., B.S., J.D., A.G. and C.A. are supported by the National Institute for Health and Care Research Exeter Biomedical Research Centre. A.F. receives support from the NIHR Oxford Biomedical Research Centre. E.P. has received honoraria from Eli Lilly, Sanofi and Illumina. N.S. has consulted for and/or received speaker honoraria from Abbott Laboratories, Afimmune, Amgen, AstraZeneca, Boehringer Ingelheim, Eli Lilly, Hanmi Pharmaceuticals, Janssen, Merck Sharp & Dohme, Novo Nordisk, Novartis, Sanofi and Pfizer and received grant funding paid to his university from AstraZeneca, Boehringer Ingelheim, Novartis and Roche Diagnostics. R.R.H. reports research support from AstraZeneca, Bayer and Merck Sharp & Dohme and personal fees from Anji Pharmaceuticals, AstraZeneca, Novartis and Novo Nordisk. W.H. has received grant funding from IQVIA and travel funds from Eisai. The remaining authors declare no competing interests. Nature Medicine thanks Ronald Ma, Victor Volovici, Nisa Maruthur and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary handling editor: Jennifer Sargent, in collaboration with the Nature Medicine team.

Extended data
Extended Data Fig. 1 Distribution of side effects experienced on each of the three study drugs (pioglitazone represented by blue bars, sitagliptin by yellow bars, and canagliflozin by red bars) for all instances where people tried the therapy (n = 469 pioglitazone, n = 474 sitagliptin, n = 474 canagliflozin). Proportions experiencing the side effects at baseline shown by black bars.

Scatterplots showing a) difference in on-treatment HbA1c between pioglitazone and sitagliptin (negative values favour pioglitazone, positive values favour sitagliptin) against BMI, and b) difference in on-treatment HbA1c between sitagliptin and canagliflozin (negative values favour sitagliptin, positive values favour canagliflozin) against eGFR. Line of best fit shown for each plot.

Extended Data Table 1 Five components of the estimand for both study hypotheses

Extended Data Table 2 Primary analysis hypothesis 1. Absolute unadjusted values for HbA1c on pioglitazone and sitagliptin split by BMI strata and the corresponding mean difference between drugs, and between strata. P value assessed by a t test comparing the difference between drugs between strata. *Negative values favour pioglitazone

Extended Data Table 3 Primary analysis hypothesis 2. Absolute unadjusted values for HbA1c on sitagliptin and canagliflozin split by eGFR strata and the corresponding mean difference between drugs, and between strata. P value assessed by a t test comparing the difference between drugs between strata. *Negative values favour sitagliptin

Extended Data Table 4 Tolerability by hypothesised drug/strata combinations. Proportions tolerating therapy (remaining on therapy for at least 12 weeks) for each of the drug/strata combinations

Extended Data Table 5 Side effects by hypothesised drug/strata combinations. Proportions experiencing at least one side effect for each of the hypothesised drug/strata combinations

Extended Data Table 6 Weight difference by drug and strata. P value assessed by a t test comparing the difference between drugs between strata

Extended Data Table 7 Hypoglycemia by hypothesised drug/strata combinations. Proportions experiencing hypoglycemia for each of the hypothesised drug/strata combinations

Extended Data Table 8 Protocol Amendments in the TriMaster randomised three-way crossover trial

Supplementary Tables 1–21 and Supplementary Information

Shields, B.M., Dennis, J.M., Angwin, C.D. et al. Patient stratification for determining optimal second-line and third-line therapy for type 2 diabetes: the TriMaster study. Nat Med (2022). https://doi.org/10.1038/s41591-022-02120-7
A computer-assisted proof of symbolic dynamics in Hyperion's rotation

Anna Gierzkiewicz (ORCID: orcid.org/0000-0003-2714-6352)1 & Piotr Zgliczyński2

Celestial Mechanics and Dynamical Astronomy volume 131, Article number: 33 (2019)

Hyperion is a moon of Saturn, known for its non-round shape. Its rotation is often modelled by the equations of motion of an ellipsoidal satellite. The model is expected to be chaotic for a large range of parameters. The paper contains a rigorous proof of the existence of symbolic dynamics in the model. In other words, there exist infinitely many periodic orbits of arbitrary periods, coded by sequences of two symbols. The proofs are computer assisted, based on interval arithmetic and the CAPD C++ library.

A motivation of our study was the widely known example of chaotic motion in the Solar System, i.e. the tumbling of Hyperion, one of Saturn's moons. The shape of Hyperion differs significantly from spherical: it is roughly an ellipsoid with the ratio of principal moments of inertia \(\frac{\varTheta _2-\varTheta _1}{\varTheta _3}\approx 0.26\), where \(\varTheta _3>\varTheta _2>\varTheta _1\). Hyperion's Keplerian elliptic orbit (semi-major axis a = 1,500,933 km, period T = 21.276 d) is non-circular with eccentricity \(e\approx 0.1\) (Jay Klavetter 1989). The analysis of Voyager and Voyager 2 observational data could not fit Hyperion's rotation into any definite period, which was the basis of the supposition that its tumbling may be chaotic. The natural attempt (Wisdom et al. 1984) to explain this phenomenon was to compare it to the classical Danby (1962) model of rotation of an oblate satellite, see (Greiner 2009, Example 27.5). The model assumes that the satellite is of ellipsoidal shape and orbits a massive distant body in a Keplerian ellipse orbit with significant eccentricity. It also assumes that the longest (or the shortest) axis of the ellipsoid is always perpendicular to the orbit's plane, which is crucial to the simplicity of the model: it implies only one axis of rotation. This last assumption is justified, as a basic analysis of the Euler equations of rigid body motion shows that this 'normal' state is stable (Danby 1962). Unfortunately, the model does not fit Hyperion sufficiently well: the key assumption that the rotation axis is perpendicular to the orbit plane is not true in Hyperion's case. It is tumbling, and its rotation rate is more than 4 times too fast for Hyperion to be synchronous (Harbison et al. 2011). Nevertheless, the model is interesting and applicable in many other cases, such as the Moon's or Mercury's libration. Proving the existence of chaos in it can also be helpful in the more general modelling of Hyperion's motion. This is the reason why in this paper we fix the parameters to Hyperion's case: \(\omega = 0.89 \pm 0.22\) and \(e=0.1042\) (Jay Klavetter 1989).

The model with the above parameters was, as mentioned, explored in several articles, such as Black et al. (1995), Jay Klavetter (1989), Wisdom et al. (1984) and Tarnopolski (2015). The statement of chaotic rotation is based there on the picture of the Poincaré section \(S:\{f=0\}\), which visibly contains a large chaotic region (see also Fig. 2). The Lyapunov Characteristic Exponents were also numerically calculated. A rigorous proof of chaoticity sets their theses on firm mathematical ground and also presents an elegant application of topological methods combined with rigorous numerics (CAPD Group 2017).
In this paper, we understand the existence of chaos in a dynamical system as the semi-conjugacy of its dynamics onto the shift dynamics on the space of bi-infinite sequences of two symbols. Such a phenomenon is known in the literature as symbolic dynamics (Morse and Hedlund 1938; Moser 1973). It means, in particular, the existence of a compact invariant subset of the phase space with typical chaotic phenomena: periodic orbits of arbitrary period, dense orbit or sensitivity to the initial conditions. The paper is organized as follows: in Sect. 2, we present the model, the system of ODEs and its basic properties. Sections 3 and 4 contain description of main topological tools used in our work. Section 5 presents our results for the symbolic dynamics in our model. The equations We shortly recollect the derivation of the model (Danby 1962, Eq. 14.3.1). Illustration of the model An ellipsoidal satellite S orbits a massive body F in a Kepler ellipse orbit. Therefore, its true anomaly f fulfils the equation $$\begin{aligned} f'=\frac{(1+e \cos f)^2}{(1-e^2)^{3/2}}. \end{aligned}$$ Equation (1) has a symmetry: if \(t\mapsto f(t)\) is a solution, then \(t\mapsto -f(-\,t)\) is a solution. Also, solutions are strictly increasing. The shortest axis of the satellite is perpendicular to the plane of the orbit. The rotation is expressed by the angle \(\theta \) (see Fig. 1) between the longest axis of S and the long axis of the orbit. Then \(\theta \) fulfils the second-order ordinary differential equation (Greiner 2009, Eq. 27.97) $$\begin{aligned} \theta ''=-\frac{\omega ^2}{2r^3} \sin 2(\theta -f) ,\qquad \text { where } \qquad r=\frac{1-e^2}{1+e\cos f}. \end{aligned}$$ The parameter $$\begin{aligned} \omega ^2 = \frac{\varTheta _2-\varTheta _1}{\varTheta _3}\in [0,1] \end{aligned}$$ may be understood as a measure of oblateness of the satellite. The dynamical system In general, Eqs. (1) and (2) induce a three-dimensional dynamical system $$\begin{aligned} {\left\{ \begin{array}{ll} \theta '=\phi \\ \phi '=-\frac{\omega ^2}{2r^3} \sin 2(\theta -f)\\ f'=\frac{(1+e \cos f)^2}{(1-e^2)^{3/2}} \end{array}\right. } \end{aligned}$$ with parameters e and \(\omega ^2\). The rotation angle \(\theta \in [0,\pi ]\) and \(f\in [0,2\pi ]\), so the phase space for the system (4) is \((\theta ,\phi ,f)\in {\mathbb {R}}_{/\pi \mathbb {Z}}\times {\mathbb {R}}\times {\mathbb {R}}_{/2\pi \mathbb {Z}}\). Poincaré map We study the Poincaré map P of the system (4) on the two-dimensional section \(S:\{f=0\}\), i.e. the map $$\begin{aligned} P(\theta ,\phi ) = \varPhi \left( T(\theta ,\phi ), (\theta ,\phi ,f=0)\right) , \end{aligned}$$ where \(\varPhi \) is the dynamical system induced by (4) and \(T=T(\theta ,\phi )\) is a first recurrence time. Note that the domain of so-defined map is \(\mathrm{Dom}_P = {\mathbb {R}}_{/\pi \mathbb {Z}}\times {\mathbb {R}}\), because f is strictly increasing and of bounded variation. The main fragment of the Poincaré section S with twelve orbits marked in different colours is depicted in Fig. 2. Poincaré section \(S: \{f=0\}\), \(e=0.1\), \(\omega ^2=0.79\) What can be immediately noticed is a probable symmetry of the map P, because the section seems to have reflection symmetries with respect to \(\theta =0=\pi \) and \(\theta =\frac{\pi }{2}\) lines. 
Indeed, setting \(t\mapsto -t\) to the equations we notice that if \((\theta (t), \phi (t), f(t))\) is a solution, then so is \((-\,\theta (-\,t), \phi (-\,t), -f(-\,t))\), and consequently, $$\begin{aligned} \pi _\theta P(\theta ,\phi ) = -\pi _\theta P^{-1}(-\,\theta ,\phi ) {, \qquad } \pi _\phi P(\theta ,\phi ) = \pi _\phi P^{-1}(-\,\theta ,\phi ). \end{aligned}$$ Then, using the periodicity of the phase space: \(-\,\theta = 0-\theta = \pi -\theta \), we explain the two symmetries of the Poincaré section. For further consideration, we denote the \(\{\theta =\frac{\pi }{2}\}\)-hyperplane reflectional time-reversing symmetry of the extended phase space by R, so $$\begin{aligned} R\left( \theta ,\phi ,f,t\right) = \left( \pi -\theta ,\phi ,-f,-t\right) . \end{aligned}$$ We will also, if it is understandable, denote by R its restriction to S: \(R(\theta ,\phi ) = (\pi -\theta ,\phi )\). Chaos in the system (4) The other natural observation based on Fig. 2 is a large region of probable chaos for (more or less) \(0< \phi =\theta ' < 2.0\), with some elliptic islands. The chaotic behaviour in terms of stability and tidal evolution was studied by Wisdom et al. (1984). The Lyapunov Characteristic Exponents (LCE) occurred to be positive, which was the reason to treat Hyperion's motion as chaotic in the subsequent literature. LCE of the system with a wide range of e and \(\omega ^2\) were also explored by Tarnopolski (2015). Periodic orbits via topological covering Topological tools that we used in detecting periodic orbits for (4) were introduced in detail in Wilczak and Zgliczyński (2003) and Wilczak and Zgliczyński (2006). Here we recollect them shortly and present some intuition. By \(\mathbb {B}_k\), we will denote the k-dimensional unit ball, that is, $$\begin{aligned} \mathbb {B}_k = \{\mathbf {x}\in \mathbb {R}^k :\Vert \mathbf {x}\Vert <1\}. \end{aligned}$$ For a topological set \(A\subset \mathbb {R}^k\), we will denote by \(\overline{A}\), \({{\,\mathrm{int}\,}}A\) and \(\partial A\) its closure, interior and boundary, respectively. H-sets The basic notion is Definition 1 (Wilczak and Zgliczyński 2003, Definition 3.1) An h-set is a quadruple \(N=(|N|, u(n), s(N), c_N)\), where |N| is a compact subset of \(\mathbb {R}^n\), which we will call a support of a h-set (or simply an h-set), and two numbers u(N) and \(s(N) \in \mathbb {N}\cup \{0\}\) complement the dimension of space: $$\begin{aligned} u(N) + s(N)=n; \end{aligned}$$ we will call them the exit and entry dimension, respectively; the homeomorphism \(c_N : \mathbb {R}^n\rightarrow \mathbb {R}^n=\mathbb {R}^{u(N)}\times \mathbb {R}^{s(N)}\) is such that $$\begin{aligned} c_N(|N|)=\overline{\mathbb {B}_{u(N)}}\times \overline{\mathbb {B}_{s(N)}}, \end{aligned}$$ where \(\overline{\mathbb {B}_{k}}\) denotes a closed unit ball of dimension k. We set also some useful notions: $$\begin{aligned} \dim N&= n,\\ N_{\mathrm{c}}&= \overline{\mathbb {B}_{u(N)}}\times \overline{\mathbb {B}_{s(N)}},\\ N_{\mathrm{c}}^-&= \partial \mathbb {B}_{u(N)}\times \overline{\mathbb {B}_{s(N)}},\\ N_{\mathrm{c}}^+&= \overline{\mathbb {B}_{u(N)}}\times \partial \mathbb {B}_{s(N)},\\ N^-&= c_N^{-1}(N_{\mathrm{c}}^-), \qquad N^+ = c_N^{-1}(N_{\mathrm{c}}^+). \end{aligned}$$ As one can notice, the notions with the subscript \(_{\mathrm{c}}\) refer to the 'straight' coordinate system in the image of \(c_N\). The last two sets \(N^-\) and \(N^+\) defined above are often called the exit set and the entrance set, respectively. 
Therefore, we can assume that an h-set is a product of two unit balls moved to some coordinate system, with the exit and entrance sets distinguished.

Covering and back-covering

We define the notion of topological covering:

(Wilczak and Zgliczyński 2003, Definition 3.4, simplified) Let \(f:|M|\rightarrow \mathbb {R}^n\) be a continuous map and let M and N be two h-sets such that \(u(M)=u(N)=u\) and \(s(M)=s(N) = s\). Denote \(f_{\mathrm{c}}=c_N \circ f \circ c_M^{-1} : M_{\mathrm{c}} \rightarrow \mathbb {R}^u\times \mathbb {R}^s\). We say that M f-covers the h-set N, if there exists a continuous homotopy \(h:[0,1]\times M_{\mathrm{c}} \rightarrow \mathbb {R}^u\times \mathbb {R}^s\), such that $$\begin{aligned} h_0&= f_{\mathrm{c}} ,&\\ h([0,1],M_{\mathrm{c}}^-)\cap N_{\mathrm{c}}&= \varnothing&\text {(the exit condition),}\\ h([0,1],M_{\mathrm{c}})\cap N_{\mathrm{c}}^+&= \varnothing&\text {(the entry condition).}\\ \end{aligned}$$ If \(u>0\), then there exists a linear map \(A:\mathbb {R}^u \rightarrow \mathbb {R}^u\) such that $$\begin{aligned} h_1(x,y)&= (A(x),0) \quad \text { for } x\in \overline{\mathbb {B}_u} \text { and } y\in \overline{\mathbb {B}_s} ,\\ A(\partial \mathbb {B}_u)&\subset \mathbb {R}^u{\setminus }\mathbb {B}_u. \end{aligned}$$ If M f-covers N, we simply denote it by \(M {\mathop {\Longrightarrow }\limits ^{f}}N\). See Fig. 3 for an illustration of covering in some low-dimensional cases.

Fig. 3 Examples of topological self-covering \(N {\mathop {\Longrightarrow }\limits ^{f}}N\) in \(\mathbb {R}^2\) (left) and \(\mathbb {R}^3\): with one exit direction (middle) and two exit directions (right). The exit sets and their images are marked in red

Sometimes, it is more convenient to use backward covering, which one may understand as covering backwards in time. If N is an h-set, then we define an h-set \(N^{\mathrm{T}}\) as follows: \(|N^{\mathrm{T}}| = |N|\); \(u(N^{\mathrm{T}}) = s(N)\) and \(s(N^{\mathrm{T}}) = u(N)\); \(c_{N^{\mathrm{T}}}:\mathbb {R}^n \ni x \longmapsto j(c_N(x)) \in \mathbb {R}^{u(N^{\mathrm{T}})}\times \mathbb {R}^{s(N^{\mathrm{T}})}=\mathbb {R}^n\), where \(j:\mathbb {R}^{u(N^{\mathrm{T}})}\times \mathbb {R}^{s(N^{\mathrm{T}})}\ni (p,q) \longmapsto (q,p) \in \mathbb {R}^{s(N^{\mathrm{T}})}\times \mathbb {R}^{u(N^{\mathrm{T}})}\). As we can see, the h-set \(N^{\mathrm{T}}\) is just the h-set N with the entrance and exit sets swapped.

Let M and N be two h-sets such that \(u(M)=u(N)=u\) and \(s(M)=s(N) = s\). Let \(f:\mathrm{Dom}_f\subset {\mathbb {R}}^n \rightarrow {\mathbb {R}}^n\) be such that \(f^{-1}:|N|\rightarrow {\mathbb {R}}^n\) is well defined and continuous. Then we say that M back-covers N, denoted by \(M{\mathop {\Longleftarrow }\limits ^{f}}N\), iff \(N^{\mathrm{T}}{\mathop {\Longrightarrow }\limits ^{f^{-1}}} M^{\mathrm{T}}\). If either \(M{\mathop {\Longleftarrow }\limits ^{f}}N\) or \(M{\mathop {\Longrightarrow }\limits ^{f}} N\), then we will write \(M{\mathop {\Longleftrightarrow }\limits ^{f}} N\).

In general, if \(N_0 {\mathop {\Longrightarrow }\limits ^{f}}N_1\) and \(N_1 {\mathop {\Longrightarrow }\limits ^{f}}N_2\), then not necessarily \(N_0 {\mathop {\Longrightarrow }\limits ^{f^2}}N_2\), but covering has the property of tracking orbits. The basic application of topological covering is the following theorem, stating the existence of a periodic orbit related to a sequence of coverings.
Theorem 1 (Wilczak and Zgliczyński 2003, Theorem 3.6, simplified) Suppose there exists a sequence of h-sets \(N_0, \ldots , N_n=N_0\), such that $$\begin{aligned} N_0 {\mathop {\Longleftrightarrow }\limits ^{f}} N_1 {\mathop {\Longleftrightarrow }\limits ^{f}}\ldots {\mathop {\Longleftrightarrow }\limits ^{f}} N_n = N_0, \end{aligned}$$ then there exists a point \(x\in {{\,\mathrm{int}\,}}|N_0|\), such that \(f^k(x) \in {{\,\mathrm{int}\,}}|N_{k}|\) for \(k=0,1,\ldots ,n\) and \(f^n(x)=x\).

In particular, if \(N_0{\mathop {\Longrightarrow }\limits ^{f}} N_0\), then in \(N_0\) we have a stationary point for the map f. Note also that if the map is a Poincaré map P, then a stationary point for P or \(P^k\) lies on a periodic orbit for the dynamical system.

Periodic orbits in Hyperion's rotation

Using Theorem 1, we find some stationary points for P, i.e. periodic orbits for the system (4). Their existence is proved rigorously via the interval Newton method (Moore 1966; Neumaier 1991), implemented in C++ with the use of the CAPD library (CAPD Group 2017). Using this method, one can also estimate the eigenvalues of the derivative at a stationary point, so it is possible to prove rigorously whether the points are hyperbolic. Those periodic points are depicted on the Poincaré map P in Fig. 4. Three of them, denoted as \(P_1\), \(P_2\) and \(P_3\), will be important in further considerations.

Fig. 4 Periodic points of P, found via the interval Newton method. The points marked by red dots are hyperbolic; black asterisks '\(\star \)' are elliptic. The points marked by black 'x' are the stationary points for \(P^2\)

The list of small intervals on the \(\{\theta =\frac{\pi }{2}\}\) line containing \(P_1\), \(P_2\), \(P_3\) is shown in Table 1. From now on, we will denote by \(P_1\), \(P_2\), \(P_3\) the stationary points as well as the small sets containing them, described in this table.

Table 1 Localization of three stationary points of P, found via the interval Newton method

The hyperbolic points \(P_1\), \(P_2\), \(P_3\), presented in Fig. 4, can also be detected up to a small neighbourhood using covering relations. It is sufficient to find a self-covering compact set. This method, however, proves neither the uniqueness of the stationary point inside the set nor its hyperbolicity. Examples of the self-covering sets for \(P_1\), \(P_2\), \(P_3\) are presented in Fig. 5.

Fig. 5 Self-covering sets (the rectangles) proving existence of stationary points \(P_1\), \(P_2\), \(P_3\). The exit sets and their images are marked in red

Symbolic dynamics detecting via covering

For the next study, we assume that the continuous map \(f : \mathbb {R}^2 \rightarrow \mathbb {R}^2\) and all h-sets \(N_i\) contained in \(\mathbb {R}^2\) have entry and exit dimensions equal to 1, that is, \(s(N_i)=u(N_i)=1\). Under these assumptions, the notions related to an h-set become simpler, because the balls are just the closed intervals \(\overline{\mathbb {B}_{u(N)}} = \overline{\mathbb {B}_{s(N)}}=[-\,1,1]\): \(N_{\mathrm{c}} = [-\,1,1]^2\), \(N_{\mathrm{c}}^- = \{-\,1,1\}\times [-\,1,1]\), \(N_{\mathrm{c}}^+ = [-\,1,1]\times \{-\,1,1\}\), and \(N^-\) and \(N^+\) are topologically a sum of two disjoint intervals.
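All of the above statements are verified in the paper with interval arithmetic in C++ using the CAPD library. As a non-rigorous companion, the following Python sketch (our illustration, not the authors' code) shows the corresponding floating-point computation: the flow of system (4) integrated with SciPy, the Poincaré map P on {f = 0} realized as the time-2π flow, and an ordinary Newton iteration for fixed points of P. The interval Newton method used in the paper replaces the last step and, in addition, yields rigorous existence, uniqueness and hyperbolicity; the parameter values are those of Fig. 2 and the initial guess is purely illustrative (it is not taken from Table 1).

```python
# Non-rigorous, floating-point sketch of system (4), its Poincaré map on {f = 0},
# and a plain Newton search for stationary points of P.
import numpy as np
from scipy.integrate import solve_ivp

e, omega2 = 0.1, 0.79          # eccentricity and oblateness parameter omega^2 (as in Fig. 2)

def rhs(t, y):
    """Right-hand side of system (4); y = (theta, phi, f)."""
    theta, phi, f = y
    r = (1.0 - e**2) / (1.0 + e * np.cos(f))
    return [phi,
            -omega2 / (2.0 * r**3) * np.sin(2.0 * (theta - f)),
            (1.0 + e * np.cos(f))**2 / (1.0 - e**2)**1.5]

def poincare(x):
    """P(theta, phi): integrate from f = 0 over one orbital period t = 2*pi, after which
    the true anomaly f has increased by exactly 2*pi, so the orbit is back on {f = 0}."""
    sol = solve_ivp(rhs, (0.0, 2.0 * np.pi), [x[0], x[1], 0.0],
                    method="DOP853", rtol=1e-12, atol=1e-12)
    return sol.y[:2, -1]

def F(x):
    """P(x) - x with the theta-component wrapped into (-pi/2, pi/2], since theta is an
    angle defined modulo pi in the phase space of (4)."""
    d = poincare(x) - x
    d[0] = (d[0] + np.pi / 2.0) % np.pi - np.pi / 2.0
    return d

def newton_fixed_point(x0, steps=15, h=1e-7):
    """Ordinary Newton iteration with a central finite-difference Jacobian; the interval
    Newton method of the paper performs the rigorous counterpart of this step."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        Fx = F(x)
        if np.max(np.abs(Fx)) < 1e-12:
            break
        J = np.empty((2, 2))
        for j in range(2):
            dx = np.zeros(2)
            dx[j] = h
            J[:, j] = (F(x + dx) - F(x - dx)) / (2.0 * h)
        x = x - np.linalg.solve(J, Fx)
    return x

# Depending on the initial guess, the iteration settles on one of the fixed points of P,
# i.e. on a periodic orbit of the flow; the guess below is only an example.
print(newton_fixed_point([np.pi / 2, 1.0]))
```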
Symbolic dynamics for a two-dimensional map A model chaotic behaviour for our purposes is the shift map on the set of bi-infinite sequences of two symbols, that is, the space \(\varSigma _2=\{0,1\}^{\mathbb {Z}}\) as a compact metric space with the metric $$\begin{aligned} \text {for }c=\{ c_n\}_{n\in \mathbb {Z}},\quad c'=\{ c'_n\}_{n\in \mathbb {Z}}, \quad \mathrm{dist}\left( c,c'\right) = \sum _{n=-\infty }^{+\infty } \frac{|c_n-c'_n|}{2^{|n|}}, \end{aligned}$$ which induces the product topology. The shift map \(\sigma :\varSigma _2\rightarrow \varSigma _2\), given by $$\begin{aligned} (\sigma (c))_n =c_{n+1}, \end{aligned}$$ is a homeomorphism of \(\varSigma _2\) with well-known chaotic properties like the existence of a dense orbit, existence of orbit of any given period or that the set of periodic orbits is dense in the whole space. In our study, by the chaotic behaviour of a dynamical system we understand the existence of a compact set I invariant for the Poincaré map P (or sometimes its higher iteration) and a continuous surjection \(g:I\rightarrow \varSigma _2\) such that \(P|_I\) is semi-conjugated to \(\sigma \), that is, $$\begin{aligned} g\circ P|_I = \sigma \circ g. \end{aligned}$$ Then one may say that P admits on I at least as rich dynamics as \(\sigma \) on \(\varSigma _2\). The system \((\varSigma _2,\sigma )\) or any system (semi-)conjugated to it is sometimes described in the literature as symbolic dynamics (Morse and Hedlund 1938). For better understanding the nature of symbolic dynamics, let us recall shortly that the discrete system \((\varSigma _2,\sigma )\) defined above shows all the typical chaotic phenomena as transitivity, density of periodic orbits or sensitivity to the initial conditions. There exists a periodic orbit of any prescribed period. The system \((\varSigma _2,\sigma )\) can be also imagined as a model of ideal randomness, obviously related to the probability space of an infinite experiment of coin tossing. Topological horseshoe A simple example of symbolic dynamics semi-conjugated to \(\sigma \) is a horseshoe: Let \(N_0\) and \(N_1 \subset \mathbb {R}^2\) be two disjoint h-sets. We say that a continuous map \(f:\mathbb {R}^2 \rightarrow \mathbb {R}^2\) is a topological horseshoe for \(N_0\) and \(N_1\) if (see Fig. 6) $$\begin{aligned} \begin{array}{cc} N_0 \overset{f}{\Longrightarrow }N_0, &{}\quad N_0 \overset{f}{\Longrightarrow }N_1,\\ N_1 \overset{f}{\Longrightarrow }N_0 , &{}\quad N_1 \overset{f}{\Longrightarrow }N_1. \end{array} \end{aligned}$$ A topological horseshoe: each \(N_{0,1}\) covers itself and the other set. The exit sets of \(N_0\) and \(N_1\) are marked in red and green, respectively It can be shown that for any topological horseshoe we obtain symbolic dynamics. (Zgliczyński and Gidea 2004, Theorem 18) Let f be a topological horseshoe for \(N_0\) and \(N_1\). Denote by \(I = \mathrm{Inv}(N_0\cup N_1)\) the invariant part of the set \(N_0\cup N_1\) under f, and define a map \(g: I \rightarrow \varSigma _2\) by $$\begin{aligned} g(x)_k = j \in \{0,1\} \quad \text { iff } \quad f^k(x)\in N_j. \end{aligned}$$ Then g is a surjection satisfying \(g\circ f|_I = \sigma \circ g\), and therefore, f is semi-conjugated to the shift map \(\sigma \) on \(\varSigma _2\). The conjugacy with the model space \(\varSigma _2\) may be understood as follows: for any sequence of the symbols 0 and 1, there exists an orbit of the discrete system generated by f passing through the sets \(N_0\) and \(N_1\) in the order given by the sequence. 
Moreover, if the sequence is periodic, then so is the orbit. Corollary 1 Let f be a topological horseshoe for \(N_0\) and \(N_1\). Then it follows from Theorem 1 that for any finite sequence of zeros and ones \((a_0, a_1,\ldots , a_{n-1})\), \(a_i\in \{0,1\}\), there exists \(x\in N_{a_0}\) such that $$\begin{aligned} f^i(x)\in {{\,\mathrm{int}\,}}N_{a_i} \quad \text{ and } \quad f^n(x)=x. \end{aligned}$$ Note also that the following two chains of coverings or back-coverings: $$\begin{aligned}&N_0 \overset{f}{\Longleftrightarrow } N_0 \overset{f}{\Longleftrightarrow } N_1 \overset{f}{\Longleftrightarrow } \dots \overset{f}{\Longleftrightarrow } N_k = M_0,\\&M_0 \overset{f}{\Longleftrightarrow } M_0 \overset{f}{\Longleftrightarrow } M_1 \overset{f}{\Longleftrightarrow } \dots \overset{f}{\Longleftrightarrow } M_k = N_0 \end{aligned}$$ indicate symbolic dynamics for \(f^k\) in the sets \(N_0\) and \(M_0\), with the use of Theorem 1. Symbolic dynamics in Hyperion's rotation system Numerical calculations suggest that the three hyperbolic points \(P_1\), \(P_2\) and \(P_3\) detected above (see Table 1) have an interesting property: their stable and unstable manifolds intersect (for the same point and also pairwise), see Fig. 7. The intersections seem to be transversal, which is a clue for searching for the symbolic dynamics in the Poincaré map P. Fragments of stable (orange) and unstable (blue) manifolds of stationary points \(P_1\), \(P_2\), \(P_3\). Some points of intersections of the manifolds are marked in red, namely \(P_{11}\), \(P_{12}\), \(P_{13}\), \(P_{23}\) We found six topological horseshoes related to the intersections mentioned above. Below we present the h-sets and their covering relations. Similarly as in Wilczak and Zgliczyński (2003), the h-sets are parallelograms of the form \(N = p + A \cdot b\), which are the base cubes b transformed to some affine coordinate system, where p are some small interval vectors containing base points (such as \(P_1\), \(P_2\) and \(P_3\)); interval matrices A are usually one of the eigenvectors matrices: $$\begin{aligned} \mathbf {M}_1&= \begin{bmatrix} 0.7064565371_{793425}^{812066}&\quad 0.7077564277_{852203}^{908148} \\ 0.7064565371_{793425}^{812066}&\quad -\,0.7077564277_{908146}^{852203} \end{bmatrix},\\ \mathbf {M}_2&=\begin{bmatrix} 0.7338248323_{735296}^{818986}&\quad 0.679338733_{8943307}^{9188653} \\ 0.7338248323_{735297}^{818987}&\quad -\,0.679338733_{9188652}^{8943306} \end{bmatrix},\\ \mathbf {M}_3&=\begin{bmatrix} 0.8831685403_{716123}^{830619}&\quad 0.4690557848_{18765}^{524839} \\ 0.8831685403_{716135}^{830631}&\quad -\,0.4690557848_{524816}^{187629} \end{bmatrix}, \end{aligned}$$ or, in the case of connecting \(P_3\) to itself, some matrices corrected to point directions of the dynamics on the stable or unstable manifolds. Outline of the computer-assisted proofs In the following six cases, we find some chains of h-sets N which are rectangles transformed to some affine coordinate systems (parallelograms on the section S). The computer-assisted proof finds their overestimated images through the Poincaré map P and encloses them in rectangles (interval closure of P(N), denoted often by [P(N)]). 
Then the covering relations \(N_0 \overset{P}{\Longrightarrow } N_1\) are verified in a simplified version: using interval arithmetic, we check with a series of '<' and '>' comparisons that \([P(N_0)]\) lies in the stripe between the top and bottom borders of \(N_1\); that the image of the left part of \(N_0^-\) lies to the left of \(N_1\); and that the image of the right part of \(N_0^-\) lies to the right of \(N_1\). These conditions are sufficient for the covering \(N_0 \overset{P}{\Longrightarrow } N_1\). We use the CAPD library for C++ (CAPD Group 2017), which contains, in particular, modules for interval arithmetic, linear algebra and Taylor integration.

\(P_1\) and \(P_2\)

The simplest situation occurs in the case of the points \(P_1\) and \(P_2\), because we found a direct topological horseshoe for the first iteration of the Poincaré map P (see Fig. 8). Horseshoe proving symbolic dynamics for P, connecting the points \(P_1\) and \(P_2\). The exit sets of the h-sets and their images are marked in orange. Let \(N_0\) and \(N_1\) be h-sets of the form \(p + A \cdot b\cdot 10^{-3}\), where $$\begin{array}{c|c|c|c} \text{h-set} & \text{base point } p & \text{matrix } A & \text{box } b\\ \hline N_0 & P_1 & \mathbf {M}_1 & [-0.8,0.8]\times [-180,80]\\ N_1 & P_2 & \mathbf {M}_2 & [-10,5]\times [-75,200] \end{array}$$ Then the following chain of covering relations occurs: $$\begin{aligned} N_0 \overset{P}{\Longrightarrow } N_0 \overset{P}{\Longrightarrow } N_1 \overset{P}{\Longrightarrow } N_1 \overset{P}{\Longrightarrow } N_0, \end{aligned}$$ which proves the existence of symbolic dynamics for P. The proof is computer-assisted (Gierzkiewicz and Zgliczyński 2018). \(\square \)

\(P_1\) and \(P_1\), and \(P_2\) and \(P_2\)

To prove symbolic dynamics between \(P_1\) and itself, or between \(P_2\) and itself, with our simple tools, we need the second iteration of the Poincaré map P. Figure 9 illustrates the situation. Horseshoes proving symbolic dynamics for the second iteration of the Poincaré map \(P^2\), connecting the point \(P_1\) to itself (to the left) or \(P_2\) to itself (to the right). The exit sets and their images are marked in orange. The h-sets are of the same form \(p + A \cdot b\cdot 10^{-3}\): for the \(P_1\) case, $$\begin{array}{c|c|c|c} \text{h-set} & \text{base point } p & \text{matrix } A & \text{box } b\\ \hline N_0 & P_1 & \mathbf {M}_1 & [-1,1]\times [-100,100]\\ N_1 & (1.58669;\ 1.10102) & \mathbf {M}_1 & [-0.1,0.1]\times [-60,60] \end{array}$$ and for the \(P_2\) case, $$\begin{array}{c|c|c|c} \text{h-set} & \text{base point } p & \text{matrix } A & \text{box } b\\ \hline N_0 & P_2 & \mathbf {M}_2 & [-10,10]\times [-200,200]\\ N_1 & (1.62953;\ 1.35174) & \mathbf {M}_2 & [-10,10]\times [-150,150] \end{array}$$ Then, in both cases we have the following sequence of covering relations: $$\begin{aligned} N_0 \overset{P}{\Longrightarrow } N_0 \overset{P}{\Longrightarrow } N_1 \overset{P^2}{\Longrightarrow } N_1 \overset{P^2}{\Longrightarrow } N_0, \end{aligned}$$ which proves the existence of symbolic dynamics for \(P^2\) (see Remark 1). To construct the horseshoe connecting \(P_3\) to \(P_3\), we will need five iterations of P (see Fig. 10). Note that for \(N_2\) and \(N_3\) the direction matrices are corrected to make them compatible with the dynamics along the stable or unstable manifold. Sequence of covering relations proving symbolic dynamics for \(P^5\), connecting the point \(P_3\) with itself.
The exit sets and their images are marked in orange. Let \(N_i\), \(i=0,\ldots , 4\), be h-sets of the form \(p + A \cdot b\cdot 10^{-3}\), where $$\begin{array}{c|c|c|c} \text{h-set} & \text{base point } p & \text{matrix } A & \text{box } b\\ \hline N_0 & P_3 & \mathbf {M}_3 & [-20,20]\times [-30,30]\\ N_1 & (1.51877;\ 1.68699) & \mathbf {M}_3 & [-6,6]\times [-30,30]\\ N_2 & (1.32082;\ 1.62293) & \begin{bmatrix} 0.734429&\quad 0.734429 \\ 0.678686&\quad -\,0.678686 \end{bmatrix} & [-0.5,0.5]\times [-30,30]\\ N_3 & (1.82077;\ 1.62293) & \begin{bmatrix} 0.866025&\quad -0.5 \\ 0.5&\quad 0.866025 \end{bmatrix} & [-15,15]\times [-15,15]\\ N_4 & (1.62282;\ 1.68699) & \mathbf {M}_3 & [-20,20]\times [-20,20] \end{array}$$ Then we have the following sequence of covering relations: $$\begin{aligned} N_0 \overset{P}{\Longrightarrow } N_0 \overset{P}{\Longrightarrow } N_1 \overset{P}{\Longrightarrow } N_2 \overset{P^2}{\Longrightarrow } N_3 \overset{P}{\Longrightarrow } N_4 \overset{P}{\Longrightarrow } N_0 \text {, and also } N_4 \overset{P}{\Longrightarrow } N_1, \end{aligned}$$ which proves the existence of symbolic dynamics in \(N_0\cup N_1\) for \(P^5\) (see Remark 1). To connect \(P_1\) and \(P_3\), we need the fourth iteration of P (see Fig. 11), but this time we find only a one-way chain of covering relations from \(P_1\) to \(P_3\). Thanks to the time-reversing symmetry, and to the fact that we choose \(N_0\) and \(N_4\) symmetric with respect to the line \(\theta = \frac{\pi }{2}\), we will be able to close this chain. To the left: the right half of the horseshoe proving symbolic dynamics for \(P^4\), connecting the points \(P_1\) and \(P_3\). To the right: the left half of the horseshoe proving symbolic dynamics for \(P^5\), connecting the points \(P_3\) and \(P_2\). The exit sets and their images are marked in orange. Let \(N_i\), \(i=0,\ldots ,4\), be h-sets of the form \(p + A \cdot b\cdot 10^{-3}\), where $$\begin{array}{c|c|c|c} \text{h-set} & \text{base point } p & \text{matrix } A & \text{box } b\\ \hline N_0 & P_1 & \mathbf {M}_1 & [-0.65,0.65]\times [-0.65,0.65]\\ N_2 & (1.93186;\ 1.62049) & \mathbf {M}_1 & [-20,20]\times [-8,8]\\ N_3 & (1.6471;\ 1.67581) & \mathbf {M}_3 & [-10,10]\times [-15,15] \end{array}$$ Then the following covering relations occur: $$\begin{aligned} N_0 \overset{P}{\Longrightarrow } N_0 \overset{P}{\Longrightarrow } N_1 \overset{P}{\Longrightarrow } N_2 \overset{P}{\Longrightarrow } N_3\overset{P}{\Longrightarrow } N_4 \overset{P}{\Longrightarrow } N_4 . \end{aligned}$$ Let \(N_i\), \(i=0,\ldots,4\), be the h-sets defined in Theorem 6. Then there is symbolic dynamics in \(N_0 \cup N_4\) for \(P^4\). Consider the h-sets \(R(N_i)^{\mathrm{T}}\), \(i=0,\ldots,4\). Using the time-reversing symmetry R, it is clear that $$\begin{aligned} R(N_0) \overset{P^{-1}}{\Longrightarrow } R(N_1) \overset{P^{-1}}{\Longrightarrow } R(N_2) \overset{P^{-1}}{\Longrightarrow } R(N_3)\overset{P^{-1}}{\Longrightarrow } R(N_4) ; \end{aligned}$$ hence, from the definition of back-covering, $$\begin{aligned} R(N_4)^{\mathrm{T}} \overset{P}{\Longleftarrow } R(N_3)^{\mathrm{T}} \overset{P}{\Longleftarrow } R(N_2)^{\mathrm{T}} \overset{P}{\Longleftarrow } R(N_1)^{\mathrm{T}}\overset{P}{\Longleftarrow } R(N_0)^{\mathrm{T}}. \end{aligned}$$ We have chosen \(N_0\) and \(N_4\) to be symmetric about the line \(\theta =\frac{\pi }{2}\), so \(R(N_0)^{\mathrm{T}}=N_0\) and \(R(N_4)^{\mathrm{T}}=N_4\).
Therefore, we get the full chain of covering and back-covering relations in the form: $$\begin{aligned} N_0 \overset{P}{\Rightarrow } N_0 \overset{P}{\Rightarrow } N_1 \overset{P}{\Rightarrow } N_2 \overset{P}{\Rightarrow } N_3\overset{P}{\Rightarrow } N_4 \overset{P}{\Rightarrow } N_4 \overset{P}{\Leftarrow } R(N_3)^{\mathrm{T}} \overset{P}{\Leftarrow } R(N_2)^{\mathrm{T}} \overset{P}{\Leftarrow } R(N_1)^{\mathrm{T}}\overset{P}{\Leftarrow } N_0 , \end{aligned}$$ which proves the existence of symbolic dynamics in \(N_0\cup N_4\) for \(P^4\), with the use of Remark 1. \(\square \) To connect \(P_2\) and \(P_3\) with symbolic dynamics, we need the fifth iteration of P and the time-reversing symmetry; see Fig. 11 (right) for an illustration. The h-sets used here include $$\begin{array}{c|c|c|c} \text{h-set} & \text{base point } p & \text{matrix } A & \text{box } b\\ \hline N_0 & P_3 & \mathbf {M}_3 & [-8,8]\times [-8,8]\\ N_1 & (1.5569;\ 1.70419) & \mathbf {M}_3 & [-5,5]\times [-20,20] \end{array}$$ and the covering relations are $$\begin{aligned} N_0 \overset{P}{\Longrightarrow } N_0 \overset{P}{\Longrightarrow } N_1 \overset{P}{\Longrightarrow } N_2 \overset{P}{\Longrightarrow } N_3\overset{P}{\Longrightarrow } N_4 \overset{P}{\Longrightarrow } N_5\overset{P}{\Longrightarrow } N_5. \end{aligned}$$ As in the proof of Corollary 2, the missing h-sets on the right half-plane are simply \(R(N_i)^{\mathrm{T}}\), \(i=0,\ldots,5\). Using the time-reversing symmetry and the definition of back-covering, $$\begin{aligned} R(N_5)^{\mathrm{T}} \overset{P}{\Longleftarrow } R(N_4)^{\mathrm{T}} \overset{P}{\Longleftarrow } R(N_3)^{\mathrm{T}} \overset{P}{\Longleftarrow } R(N_2)^{\mathrm{T}} \overset{P}{\Longleftarrow } R(N_1)^{\mathrm{T}}\overset{P}{\Longleftarrow } R(N_0)^{\mathrm{T}}. \end{aligned}$$ Again, we have chosen \(N_0\) and \(N_5\) to be symmetric about the line \(\theta =\frac{\pi }{2}\), so \(R(N_0)^{\mathrm{T}}=N_0\) and \(R(N_5)^{\mathrm{T}}=N_5\). Therefore, we get the full chain of covering relations: $$\begin{aligned}&N_0 \overset{P}{\Rightarrow } N_1 \overset{P}{\Rightarrow } N_2 \overset{P}{\Rightarrow } N_3\overset{P}{\Rightarrow } N_4 \overset{P}{\Rightarrow } N_5 \overset{P}{\Leftarrow } R(N_4)^{\mathrm{T}} \overset{P}{\Leftarrow } R(N_3)^{\mathrm{T}} \overset{P}{\Leftarrow } R(N_2)^{\mathrm{T}} \overset{P}{\Leftarrow } R(N_1)^{\mathrm{T}}\overset{P}{\Leftarrow } N_0 ,\\&\quad \text {and }\quad N_0 \overset{P}{\Rightarrow } N_0\text {, }\quad N_5 \overset{P}{\Rightarrow }N_5 , \end{aligned}$$ which proves the existence of symbolic dynamics in \(N_0\cup N_5\) for \(P^5\). \(\square \) We would like to underline that what we have proved is purely topological symbolic dynamics; studying the hyperbolicity of the model is planned as future work. Since the proofs are based on interval arithmetic, our investigation is valid for a small interval of parameters containing the values for Hyperion. The methods can easily be applied to other (but precisely specified) values of \(\omega \) and e. A more general question for future work is the relation between the parameter values and the size and structure of the chaotic region on the section S, which could be applied to modelling the rotation of other celestial bodies. Another question is how to generalize the model itself. If it cannot describe the tumbling of Hyperion, then the complete set of Euler equations could be investigated. This extends the phase space from 3 to 7 dimensions and requires the use of more complex methods.
Another way to make the model more accurate is to include the influence of Titan on the Saturn–Hyperion system, either as a correction in the rotation equation (Tarnopolski 2016) or by letting the parameter e become an independent variable.

References

Black, G.J., Nicholson, P.D., Thomas, P.C.: Hyperion: Rotational dynamics. Icarus 117, 149–161 (1995). https://doi.org/10.1006/icar.1995.1148
CAPD Group: Computer assisted proofs in dynamics, C++ library ver. 5.0.6 (2017). http://capd.ii.uj.edu.pl
Danby, J.M.A.: Fundamentals of Celestial Mechanics. Macmillan, New York (1962)
Gierzkiewicz, A., Zgliczyński, P.: C++ source code (2018). http://www.ii.uj.edu.pl/~zgliczyn/
Greiner, W.: Classical Mechanics: Systems of Particles and Hamiltonian Dynamics. Classical Theoretical Physics. Springer, Berlin (2009)
Harbison, R.A., Thomas, P.C., Nicholson, P.C.: Rotational modeling of Hyperion. Celest. Mech. Dyn. Astron. 110(1), 1–16 (2011). https://doi.org/10.1007/s10569-011-9337-3
Klavetter, J.J.: Rotation of Hyperion. II—Dynamics. Astron. J. 98, 1855–1874 (1989)
Moore, R.E.: Interval Analysis. Prentice-Hall Series in Automatic Computation. Prentice-Hall, Englewood Cliffs (1966)
Morse, M., Hedlund, G.: Symbolic dynamics. Am. J. Math. 60, 815–866 (1938)
Moser, J.: Stable and Random Motions in Dynamical Systems: With Special Emphasis on Celestial Mechanics (AM-77), revised edn. Princeton University Press, Princeton (1973)
Neumaier, A.: Interval Methods for Systems of Equations. Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge (1991). https://doi.org/10.1017/CBO9780511526473
Tarnopolski, M.: Nonlinear time-series analysis of Hyperion's lightcurves. Astrophys. Space Sci. 357(2), 160 (2015). https://doi.org/10.1007/s10509-015-2379-3
Tarnopolski, M.: Influence of a second satellite on the rotational dynamics of an oblate moon. Celest. Mech. Dyn. Astron. (2016). https://doi.org/10.1007/s10569-016-9719-7
Wilczak, D., Zgliczyński, P.: Heteroclinic connections between periodic orbits in planar restricted circular three-body problem—a computer assisted proof. Commun. Math. Phys. 234(1), 37–75 (2003). https://doi.org/10.1007/s00220-002-0709-0
Wilczak, D., Zgliczyński, P.: Heteroclinic connections between periodic orbits in planar restricted circular three body problem. Part II. Commun. Math. Phys. 261(2), 547–547 (2006). https://doi.org/10.1007/s00220-005-1471-x
Wisdom, J., Peale, S.J., Mignard, F.: The chaotic rotation of Hyperion. Icarus 58(2), 137–152 (1984). https://doi.org/10.1016/0019-1035(84)90032-0
Zgliczyński, P., Gidea, M.: Covering relations for multidimensional dynamical systems. J. Differ. Equ. 202(1), 32–58 (2004). https://doi.org/10.1016/j.jde.2004.03.013

Affiliations: Department of Applied Mathematics, University of Agriculture in Kraków, ul. Balicka 253c, 30-198 Kraków, Poland (Anna Gierzkiewicz); Institute of Computer Science, Jagiellonian University, ul. Łojasiewicza 6, 30-348 Kraków, Poland (Piotr Zgliczyński). Correspondence to Anna Gierzkiewicz. We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the paper submitted. The work of A.G. and P.Z. was supported by the National Science Center (NCN) of Poland under Project No. 2015/19/B/ST1/01454.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite as: Gierzkiewicz, A., Zgliczyński, P.: A computer-assisted proof of symbolic dynamics in Hyperion's rotation. Celest. Mech. Dyn. Astron. 131, 33 (2019). https://doi.org/10.1007/s10569-019-9910-8

Keywords: Computer-assisted proof; Symbolic dynamics; Interval Newton method. Mathematics Subject Classification: 37N05.
Computing continuous and piecewise affine Lyapunov functions for nonlinear systems

Sigurdur F. Hafstein 1, Christopher M. Kellett 2, and Huijuan Li 3
1. School of Science and Engineering, Reykjavik University, Menntavegi 1, Reykjavik, IS-101
2. School of Electrical Engineering and Computer Science, University of Newcastle, Callaghan, New South Wales 2308
3. School of Mathematics and Physics, China University of Geosciences (Wuhan), 430074, Wuhan

Received June 2015; Revised March 2016; Published May 2016

We present a numerical technique for the computation of a Lyapunov function for nonlinear systems with an asymptotically stable equilibrium point. The proposed approach constructs a partition of the state space, called a triangulation, and then computes values at the vertices of the triangulation using a Lyapunov function from a classical converse Lyapunov theorem due to Yoshizawa. A simple interpolation of the vertex values then yields a Continuous and Piecewise Affine (CPA) function. Verification that the obtained CPA function is a Lyapunov function is shown to be equivalent to verification of several simple inequalities. Numerical examples are presented demonstrating different aspects of the proposed method.

Keywords: Lyapunov functions, continuous and piecewise affine functions, computational techniques, stability theory, ordinary differential equations.

Mathematics Subject Classification: Primary: 93D05, 93D30, 93D20; Secondary: 93D1.

Citation: Sigurdur F. Hafstein, Christopher M. Kellett, Huijuan Li. Computing continuous and piecewise affine Lyapunov functions for nonlinear systems. Journal of Computational Dynamics, 2015, 2 (2) : 227-246. doi: 10.3934/jcd.2015004

References:
R. Baier, L. Grüne and S. Hafstein, Linear programming based Lyapunov function computation for differential inclusions, Discrete and Continuous Dynamical Systems Series B, 17 (2012), 33. doi: 10.3934/dcdsb.2012.17.33.
H. Ban and W. Kalies, A computational approach to Conley's decomposition theorem, Journal of Computational and Nonlinear Dynamics, 1 (2006), 312. doi: 10.1115/1.2338651.
J. Björnsson, P. Giesl and S. Hafstein, Algorithmic verification of approximations to complete Lyapunov functions, in Proceedings of the 21st International Symposium on Mathematical Theory of Networks and Systems, (2014), 1181.
J. Björnsson, P. Giesl, S. Hafstein, C. M. Kellett and H. Li, Computation of continuous and piecewise affine Lyapunov functions by numerical approximations of the Massera construction, in Proceedings of the 53rd IEEE Conference on Decision and Control, (2014), 5506.
P. Giesl, Construction of Global Lyapunov Functions Using Radial Basis Functions, Number 1904 in Lecture Notes in Mathematics, Springer, (2007).
P. Giesl and S. Hafstein, Existence of piecewise affine Lyapunov functions in two dimensions, J. Math. Anal. Appl., 371 (2010), 233. doi: 10.1016/j.jmaa.2010.05.009.
P. Giesl and S. Hafstein, Construction of Lyapunov functions for nonlinear planar systems by linear programming, Journal of Mathematical Analysis and Applications, 388 (2012), 463. doi: 10.1016/j.jmaa.2011.10.047.
P. Giesl and S. Hafstein, Existence of piecewise affine Lyapunov functions in arbitrary dimensions, Discrete and Contin. Dyn. Syst., 32 (2012), 3539. doi: 10.3934/dcds.2012.32.3539.
P. Giesl and S. Hafstein, Revised CPA method to compute Lyapunov functions for nonlinear systems, Journal of Mathematical Analysis and Applications, 410 (2014), 292. doi: 10.1016/j.jmaa.2013.08.014.
P. Giesl and S. Hafstein, Computation and verification of Lyapunov functions, SIAM J. Appl. Dyn. Syst., 14 (2015), 1663. doi: 10.1137/140988802.
P. Giesl and S. Hafstein, Review on computational methods for Lyapunov functions, Discrete and Continuous Dynamical Systems Series B, 20 (2015), 2291. doi: 10.3934/dcdsb.2015.20.2291.
S. Hafstein, An Algorithm for Constructing Lyapunov Functions, Electronic Journal of Differential Equations Monographs, (2007).
S. Hafstein, C. M. Kellett and H. Li, Continuous and piecewise affine Lyapunov functions using the Yoshizawa construction, in Proceedings of the American Control Conference, (2014), 548. doi: 10.1109/ACC.2014.6858660.
W. Hahn, Stability of Motion, Springer-Verlag, (1967).
T. Johansen, Computation of Lyapunov functions for smooth nonlinear systems using convex optimization, Automatica, 36 (2000), 1617. doi: 10.1016/S0005-1098(00)00088-1.
W. Kalies, K. Mischaikow and R. VanderVorst, An algorithmic approach to chain recurrence, Foundations of Computational Mathematics, 5 (2005), 409. doi: 10.1007/s10208-004-0163-9.
C. M. Kellett, A compendium of comparison function results, Mathematics of Control, Signals, and Systems, 26 (2014), 339. doi: 10.1007/s00498-014-0128-8.
C. M. Kellett, Classical converse theorems in Lyapunov's second method, Discrete and Continuous Dynamical Systems Series B, 20 (2015), 2333. doi: 10.3934/dcdsb.2015.20.2333.
J. Kurzweil, On the inversion of Ljapunov's second theorem on stability of motion, Czechoslovak Mathematical Journal, 81 (1956), 217.
H. Li, S. Hafstein and C. M. Kellett, Computation of continuous and piecewise affine Lyapunov functions for discrete-time systems, J. Differ. Equ. Appl., 21 (2015), 486. doi: 10.1080/10236198.2015.1025069.
A. M. Lyapunov, The general problem of the stability of motion, Math. Soc. of Kharkov, 55 (1992), 521. doi: 10.1080/00207179208934253.
S. Marinosson, Lyapunov function construction for ordinary differential equations with linear programming, Dynamical Systems, 17 (2002), 137. doi: 10.1080/0268111011011847.
J. L. Massera, On Liapounoff's conditions of stability, Annals of Mathematics, 50 (1949), 705. doi: 10.2307/1969558.
A. Papachristodoulou and S. Prajna, The construction of Lyapunov functions using the sum of squares decomposition, in Proceedings of the 41st IEEE Conference on Decision and Control, 3 (2002), 3482. doi: 10.1109/CDC.2002.1184414.
M. Peet and A. Papachristodoulou, A converse sum of squares Lyapunov result with a degree bound, IEEE Transactions on Automatic Control, 57 (2012), 2281. doi: 10.1109/TAC.2012.2190163.
N. Rouche, P. Habets and M. Laloy, Stability Theory by Liapunov's Direct Method, Springer-Verlag, (1977).
E. D. Sontag, Comments on integral variants of ISS, Systems and Control Letters, 34 (1998), 93. doi: 10.1016/S0167-6911(98)00003-6.
A. R. Teel and L. Praly, A smooth Lyapunov function from a class-$\mathcal{KL}$ estimate involving two positive semidefinite functions, ESAIM Control Optim. Calc. Var., 5 (2000), 313. doi: 10.1051/cocv:2000113.
T. Yoshizawa, On the stability of solutions of a system of differential equations, Memoirs of the College of Science, 29 (1955), 27.
T. Yoshizawa, Stability Theory by Liapunov's Second Method, Mathematical Society of Japan, (1966).
Slugging Percentage (SLG)

$$\mathrm{SLG} = \frac{\mathit{1B} + 2\times\mathit{2B} + 3\times\mathit{3B} + 4\times\mathit{HR}}{\mathit{AB}}$$

Total bases divided by at bats, using the formula above, where 1B, 2B, 3B, and HR are the numbers of singles, doubles, triples, and home runs, and AB is the number of at bats. Slugging percentage (SLG) is a measure of the batting productivity of a hitter. Unlike batting average (AVG), slugging percentage gives more weight to extra-base hits such as doubles and home runs, relative to singles. Walks are specifically excluded, as a plate appearance that ends in a walk is not counted as an at bat. The name is a misnomer, as the statistic is not a percentage but a scale of measure whose computed value is a rational number in the interval [0, 4]. A slugging percentage is always expressed as a decimal to three decimal places, and is generally spoken as if multiplied by 1,000 (a slugging percentage of .589 is spoken as "five eighty nine"). The maximum possible slugging percentage is 4.000. Babe Ruth holds the MLB career slugging percentage record of .690. The single season record SLG is held by Barry Bonds, who achieved a .863 SLG with the San Francisco Giants in 2001. In 2017, the average SLG among all batters in the Majors was .426.
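As a quick worked example, here is a small Python sketch of the formula; the season line below is made up purely for illustration.

```python
def slugging(singles, doubles, triples, home_runs, at_bats):
    """Slugging percentage: total bases divided by at bats."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

# Hypothetical season line: 100 singles, 30 doubles, 3 triples, 40 home runs in 500 at bats.
print(round(slugging(100, 30, 3, 40, 500), 3))   # 0.658, spoken as "six fifty eight"
```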
Vector and complex variables
1 Where do matrices come from?
2 Transformations of the plane
3 Linear operators
4 Examples of linear operators
5 The determinant of a matrix
6 It's a stretch: eigenvalues and eigenvectors
7 Linear operators with real eigenvalues
8 How complex numbers emerge
9 Classification of quadratic polynomials
10 The complex plane ${\bf C}$ is the Euclidean space ${\bf R}^2$
11 Multiplication of complex numbers: ${\bf C}$ isn't just ${\bf R}^2$
12 Complex functions
13 Complex linear operators
14 Linear operators with complex eigenvalues
15 Complex calculus
16 Series and power series
17 Solving ODEs with power series

Where do matrices come from?

Matrices appear in systems of linear equations. Problem 1. Suppose we have coffee that costs $\$3$ per pound. How much do we get for $\$60$? Solution: $$3x=60\ \Longrightarrow\ x=\frac{60}{3}.$$ Problem 2. Given: Kenyan coffee - $\$2$ per pound, Colombian coffee - $\$3$ per pound. How much of each do you need to have $6$ pounds of blend with the total price of $\$14$? The setup is the following. Let $x$ be the weight of the Kenyan coffee and let $y$ be the weight of the Colombian coffee. Then the total weight of the blend is $6$ pounds and its total price is $\$ 14$. Therefore, we have a system: $$\begin{cases} x&+&y &= 6 \\ 2x&+&3y &= 14 \end{cases}.$$ Solution: From the first equation, we derive: $y=6-x$. Then substitute into the second equation: $2x+3(6-x)=14$. Solve the new equation: $-x=-4$, or $x=4$. Substitute this back into the first equation: $(4)+y=6$, then $y=2$. But it was so much simpler for Problem 1! How can we mimic this equation and get a single equation for the system in Problem 2? What if we think of the former problem as if it is about a blend of one ingredient? This is how we progress: $$\begin{array}{l|cc} &\text{ Problem 1 }&\text{ Problem 2 }\\ \hline \text{the unknown }&x&X=<x,y>\\ \text{the coefficient }&3&?\\ \text{the product }&60&?\\ \end{array}$$ We transition from numbers to vectors. But what are the operations? Let's collect the data of the equations in these tables: $$\begin{array}{|ccc|} \hline 1\cdot x&+1\cdot y &= 6 \\ \hline 2\cdot x&+3\cdot y &= 14\\ \hline \end{array}\leadsto \begin{array}{|c|c|c|c|c|c|c|} \hline 1&\cdot& x&+&1&\cdot& y &=& 6 \\ 2&\cdot& x&+&3&\cdot&y &=& 14\\ \hline \end{array}\leadsto \begin{array}{|c|c|c|c|c|c|c|} \hline 1& & & &1& & & & 6 \\ &\cdot& x&+& &\cdot& y &=& \\ 2& & & &3& & & & 14\\ \hline \end{array}$$ We see tables starting to appear... We call these tables matrices. The four coefficients of $x,y$ form the first table: $$A = \left[\begin{array}{} 1 & 1 \\ 2 & 3 \end{array}\right].$$ It has two rows and two columns. In other words, this is a $2 \times 2$ matrix. The second table is on the right; it consists of the two "free" terms (free of $x$'s and $y$'s) in the right hand side: $$B = \left[\begin{array}{} 6 \\ 14 \end{array}\right].$$ This is a $2$-dimensional vector. The third table is less visible; it is made of the two unknowns: $$X = \left[\begin{array}{} x \\ y \end{array}\right].$$ This is also a $2$-dimensional vector. The following combination of $A$ and $B$ is called the augmented matrix of the system: $$[A|B]= \left[\begin{array}{ll|l} 1 & 1 &6\\ 2 & 3 &14 \end{array}\right].$$ It's an abbreviation. Is there more to this than just abbreviations? How does this matrix interpretation of the system help? Both $X$ and $B$ are column-vectors in dimension $2$ and matrix $A$ makes the latter from the former. (A quick numerical check of this setup is sketched below.)
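Here is a minimal numerical sketch of Problem 2 in matrix form, assuming Python with the numpy library (which is not part of the original discussion):

```python
import numpy as np

A = np.array([[1.0, 1.0],      # weights:  x +  y = 6
              [2.0, 3.0]])     # prices:  2x + 3y = 14
B = np.array([6.0, 14.0])

X = np.linalg.solve(A, B)      # solve AX = B
print(X)                       # [4. 2.] : 4 lb of Kenyan, 2 lb of Colombian
print(A @ X)                   # [ 6. 14.] : the matrix A indeed takes X to B
```

The last line applies the matrix to the vector, which is exactly the product $AX$ defined next.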
This is very similar to multiplication of numbers; after all they are vectors of dimension $1$... Let's match the two problems and their solutions: $$\begin{array}{ccc} \dim 1:&a\cdot x=b& \Longrightarrow & x = \frac{b}{a}& \text{ provided } a \ne 0,\\ \dim 2:&A\cdot X=B& \Longrightarrow & X = \frac{B}{A}& \text{ provided } A \ne 0, \end{array}$$ Now if we can just make sense of the new algebra... Here $AX=B$ is a matrix equation and it's supposed to capture the system of equations in Problem 2. Let's compare the original system of equations to $AX=B$: $$\begin{array}{} x&+y &= 6 \\ 2x&+3y &=14 \end{array}\ \leadsto\ \left[ \begin{array}{} 1 & 1 \\ 2 & 3 \end{array} \right] \cdot \left[ \begin{array}{} x \\ y \end{array} \right] = \left[ \begin{array}{} 6 \\ 14 \end{array} \right].$$ We can see these equations in the matrices. First: $$1 \cdot x + 1 \cdot y = 6\ \leadsto\left[ \begin{array}{} 1 & 1 \end{array} \right] \cdot\left[ \begin{array}{} x \\ y \end{array} \right] = \left[ \begin{array}{} 6 \end{array} \right].$$ Second: $$2 \cdot x + 3 \cdot y = 14\ \leadsto\ \left[ \begin{array}{} 2 & 3 \end{array} \right] \cdot \left[ \begin{array}{} x \\ y \end{array} \right] = \left[ \begin{array}{} 14 \end{array} \right].$$ This suggests what the meaning of $AX$ should be. We "multiply" either row in $A$, as a vector, by the vector $X$ -- via the dot product! Definition. The product $AX$ of a matrix $A$ and a vector $X$ is defined to be the following vector: $$A=\left[ \begin{array}{} a & b \\ c & d \end{array} \right],\ X=\left[ \begin{array}{} x \\ y \end{array} \right]\ \Longrightarrow\ AX=\left[ \begin{array}{} a & b \\ c & d \end{array} \right] \cdot \left[ \begin{array}{} x \\ y \end{array} \right]=\left[ \begin{array}{} ax+by \\ cx+dy \end{array} \right].$$ Before we study matrices as functions in the next section, let's see what insight our new approach provides for our original problem... The solution considered initially has the following geometric meaning. We can think of the two equations as equations about the coordinates of points, $(x,y)$, in the plane: $$\begin{array}{ll} x&+y &= 6 ,\\ 2x&+3y &= 14. \end{array}$$ In fact, either equation is a representation of a line on the plane. Then the solution $(x,y)=(4,2)$ is the point of their intersection: The new point of view is very different: instead of the locations, we are after the directions. We can think of the two equations as equations about the coefficients, $x$ and $y$, of vectors in the plane: $$x\left[\begin{array}{c}1\\2\end{array}\right]+y\left[\begin{array}{c}1\\3\end{array}\right]=\left[\begin{array}{c}6\\14\end{array}\right].$$ In other words, we need to find a way to stretch these two vectors so that the resulting combination is the vector on the right: Example. What if we want the blend to contain another, a third, type of coffee? Given three prices per pound: $\$2,\ \$3,\ \$5$. How much of each do you need to have $6$ pounds of blend with the total price of $\$14$? Let $x$, $y$, and $z$ be the weights of the three types of coffee respectively. Then the total weight of the blend is $6$ pounds and its total price is $\$14$. Therefore, we have a system: $$\begin{cases} x &+ &y &+& z &= 6 \\ 2x &+ &3y &+& 5z &= 14 \end{cases}.$$ We know that these equations represent two planes in ${\bf R}^3$. The solution is then their intersection:
There are, of course, infinitely many solutions. An additional restriction in the form of another linear equation may reduce the number to one... or none. The variety of possible outcomes is far greater than in the $2$-dimensional case; they are not discussed in this chapter. The algebra, however, is the same! The three prices per pound can be written in a vector, $<2,3,5>$, and the second equation becomes the dot product: $$<2,3,5>\cdot <x,y,z>=14.$$ And the whole system can be written in the form of exactly the same matrix equation: $$AX=B.$$ This equation will re-appear no matter what the number of variables, $m$, or the number of equations, $n$; we will have an $n\times m$ matrix: $$\begin{array}{ll}\begin{array}{lllllllll} && 1 & 2 & 3 & ... & m \end{array}\\ \begin{array}{c} 1 \\ 2 \\ \vdots\\ n \end{array} \left[\begin{array}{rrrrrrrrrrrr} 2 & 0 & 3 &... & 2 \\ 0 & 6 & 2 &... & 0 \\ \vdots & \vdots & \vdots &... &\vdots \\ 3 & 1 & 0 &... & 12 \end{array}\right]\end{array}. $$ $\square$ Warning: What is the difference between tables of numbers and matrices? The algebraic operations discussed here.

Transformations of the plane

We saw in Chapter 18 real-valued functions of two variables. Consider $u=f(x,y)=2x-3y$, such a function: $f : {\bf R}^2 \to {\bf R}$, meaning $f : (x,y) \mapsto u$. Consider another such function: $v=g(x,y)=x+5y$ is also a real-valued function of two variables: $g : {\bf R}^2 \to {\bf R}$, meaning $g : (x,y) \mapsto v$. Let's build a new function from these. We take the input to be the same -- a point in the plane -- and we combine the two outputs into a single point $(u,v)$ -- in another plane. Then what we have is a single function: $F : {\bf R}^2 \to {\bf R}^2$, meaning $F : (x,y) \mapsto (u,v)$. Then this is the formula for this function: $$F(x,y) = (u,v) = (2x-3y,x+5y).$$ This is an example of a transformation of the plane: $$F(x,y) = (u,v) = \big( f(x,y),\ g(x,y) \big),$$ for any pair of $f,g$. We saw examples of such transformations in Chapter 3. For example, if $$F(x,y)=(x,y+3),$$ this is a vertical shift: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \begin{array}{ccc} (x,y)& \ra{ \text{ up } k}& (x,y+k). \end{array}$$ We visualize these transformations by drawing something on the original plane (the domain) and then seeing what it looks like in the new plane (the co-domain): Predictably, $$F(x,y)=(x+a,y+b),$$ is the shift by the vector $<a,b>$. Next, consider the vertical flip. We lift, then flip the sheet of paper with the $xy$-plane on it, and finally place it on top of another such sheet so that the $x$-axes align. If $$F(x,y)=(x,-y),$$ then we have: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \begin{array}{ccc} (x,y)& \ra{ \text{ vertical flip } }& (x,-y). \end{array}$$ Now the horizontal flip. We lift, then flip the sheet of paper with the $xy$-plane on it, and finally place it on top of another such sheet so that the $y$-axes align. If $$F(x,y)=(-x,y),$$ then we have: Below we illustrate the fact that the parabola's left branch is a mirror image of its right branch: How about the flip about the origin? This is the formula, $$F(x,y)=(-x,-y),$$ of what is also known as the central symmetry: It is also a rotation of $180$ degrees around the origin: It can be represented as a flip about the $y$-axis and then about the $x$-axis (or vice versa): $$(x,y)\mapsto (-x,y)\mapsto (-x,-y).$$ Now the vertical stretch.
We grab a rubber sheet by the top and the bottom and pull them apart in such a way that the $x$-axis doesn't move. Here, $$F(x,y)=(x,ky).$$ Similarly, the horizontal stretch is given by: $$F(x,y)=(kx,y).$$ How about the uniform stretch (same in all directions)? This is its formula: $$F(x,y)=(kx,ky).$$ There are, however, many more transformations that aren't on this list, such as the rotations. Below, we visualize the $90$-degree rotation combined with a stretch by a factor of $2$: These have been bijections. If we exclude stretches from the list, we are left with the motions. Among others, we may consider folding the plane: $$F(x,y)=(|x|,y).$$ There are also projections, such as this: $$F(x,y)=(x,0),$$ when the sheet is rolled into a thin scroll: Some transformations of the plane cannot be represented in terms of the vertical and horizontal transformations; such is, for example, a $90$-degree rotation: Another is the flip about the line $x=y$, which appeared in the context of finding the graph of the inverse function. It is given by $$(x,y)\mapsto (y,x).$$ Finally, we have the collapse, a constant function: $$F(x,y)=(x_0,y_0),$$ when the sheet is crushed into a tiny ball: Of course, any Euclidean space ${\bf R}^n$ can be -- in a similar manner -- rotated (around various axes), stretched (in various directions), projected (onto various lines or planes), or collapsed.

Linear operators

What if we think of these points on the plane as vectors: $$<x,y>\text{ and }<u,v>?$$ Then this is the formula for our function: $$F\big(<x,y> \big) = <u,v> = <2x-3y,x+5y>.$$ Here $<x,y>$ is the input and $<u,v>$ is the output: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccccc} \text{input} & & \text{function} & & \text{output} \\ <x,y> & \mapsto & \begin{array}{|c|}\hline\quad f \quad \\ \hline\end{array} & \mapsto & <u,v> \end{array}$$ Both are vectors; it's a vector function! Warning: We could interpret $F$ as a vector field, but we won't. Some of the benefits are immediate. For example, the uniform stretch of the plane is simply scalar multiplication: $$F:<x,y>\mapsto k<x,y>.$$ But the real benefit is a better representation of the function: $$F(X) = \left[ \begin{array}{} 2 & -3 \\ 1 & 5 \end{array} \right] \left[ \begin{array}{} x \\ y \end{array} \right] = FX.$$ It turns out that this is the matrix product from the first section! This approach works out but only because $f$ and $g$ happen to be linear. Exercise. Does it work when the function isn't linear? Try $F(x,y)=(e^x,\frac{1}{y})$. Conversely, if we do have a matrix, we can always understand it as a function, as follows: $$\left[ \begin{array}{} a & b \\ c & d \end{array} \right] \left[ \begin{array}{} x \\ y \end{array} \right] = \left[ \begin{array}{} ax+by \\ cx+dy \end{array} \right]=<ax+by,cx+dy>,$$ for some $a,b,c,d$ fixed. So the matrix $A$ contains all the information about the function $A$. One can think of $A$ (a table) as an abbreviation of $AX$ (a formula). Warning: We use the same letter for the function and the matrix. Now, what happens to the vector operations under $A$? Suppose we have addition and scalar multiplication carried out in the domain space of $A$: Once $A$ has transformed the plane, what do these operations look like now? At its simplest, an addition diagram rotated will remain an addition diagram: In other words, addition is preserved under $A$.
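Here is a quick numerical spot check of both properties for the matrix $F$ above (a minimal sketch, assuming Python with numpy); the general argument follows below.

```python
import numpy as np

A = np.array([[2.0, -3.0],
              [1.0,  5.0]])
U = np.array([1.0, 2.0])
V = np.array([-4.0, 0.5])
r = 7.0

# A(U + V) = A(U) + A(V) and A(rV) = r A(V), up to rounding
print(np.allclose(A @ (U + V), A @ U + A @ V))   # True
print(np.allclose(A @ (r * V), r * (A @ V)))     # True
```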
The same conclusion is quickly reached for the flip and other motions: the triangles of the new diagram are identical to the original. What about the general case? Consider $A : {\bf R}^2 \to {\bf R}^2$ and two input vectors: $$X=\left[ \begin{array}{} x \\ y \end{array} \right]\text{ and } X'=\left[ \begin{array}{} x' \\ y' \end{array} \right]\ \Longrightarrow\ A(X+X')=?$$ The example suggests the answer: $$A(X+X')=A(X)+A(X').$$ Let's compare: $$\begin{array}{l|l} \text{ LHS: }&\text{ RHS: }\\ A \left( \left[ \begin{array}{} x \\ y \end{array} \right] + \left[ \begin{array}{} x' \\ y' \end{array} \right] \right)&A \left[ \begin{array}{} x \\ y \end{array} \right] + A\left[ \begin{array}{} x' \\ y' \end{array} \right]\\ \begin{array}{} &= \left[ \begin{array}{} a & b \\ c & d \end{array} \right] \left( \left[ \begin{array}{} x \\ y \end{array} \right] + \left[ \begin{array}{} x' \\ y' \end{array} \right] \right) \\ &= \left[ \begin{array}{} a & b \\ c & d \end{array} \right] \left[ \begin{array}{} x + x' \\ y + y' \end{array} \right] \\ &= \left[ \begin{array}{} a(x+x')+b(y+y') \\ c(x+x')+d(y+y') \end{array} \right] \end{array}& \begin{array}{} &= \left[ \begin{array}{} a & b \\ c & d \end{array} \right] \left[ \begin{array}{} x \\ y \end{array} \right] + \left[ \begin{array}{} a & b \\ c & d \end{array} \right] \left[ \begin{array}{} x' \\ y' \end{array} \right] \\ &= \left[ \begin{array}{} ax+by \\ cx+dy \end{array} \right] + \left[ \begin{array}{} ax'+by' \\ cx'+dy' \end{array} \right] \\ &=\left[ \begin{array}{} ax+by+ax'+by' \\ cx + dy + cx' + dy' \end{array} \right] \end{array} \end{array}$$ These are the same, after factoring. Indeed, the example of a stretch below shows that the triangles of the diagram, if not identical, are similar to the original: The diagram of scalar multiplication is much simpler. It is a stretch; rotated or stretched, it remains a stretch. To prove, consider an input vector $X=<x,y>$ and a scalar $r$. Then, $$\left[ \begin{array}{} a & b \\ c & d \end{array} \right] \left( r\left[ \begin{array}{} x \\ y \end{array} \right] \right)=\left[ \begin{array}{} a & b \\ c & d \end{array} \right] \left( \left[ \begin{array}{} rx \\ ry \end{array} \right] \right)= \left[ \begin{array}{} arx+bry \\ crx+dry \end{array} \right]= r\left[ \begin{array}{} ax+by \\ cx+dy \end{array} \right]=r\left[ \begin{array}{} a & b \\ c & d \end{array} \right] \left[ \begin{array}{} x \\ y \end{array} \right].$$ In other words, scalar multiplication is preserved under $A$. Now, this is the general case... Definition. A function $A:{\bf R}^n\to {\bf R}^m$ that satisfies the following two properties is called a linear operator: $$\begin{array}{ll} A(U+V) &= A(U)+A(V), \\ A(rV) &= rA(V). \end{array}$$ What about dimension $1$? It's $f(x)=ax$. Warning: Previously, $ax+b$ has been called a "linear function". Now, $ax$ is called a "linear operator". This is the summary. Theorem. The function $A:{\bf R}^2\to {\bf R}^2$ defined via multiplication by a $2\times 2$ matrix $A$, $$A(X)=AX,$$ is a linear operator. Conversely, every linear operator $A:{\bf R}^2\to {\bf R}^2$ is defined via multiplication by some $2\times 2$ matrix. Examples of linear operators We will visualize some examples of linear operators on the plane: $$F:{\bf R}^2\to {\bf R}^2.$$ We illustrate them via compositions. We plot a parametric curve or curves, $X=X(t)$, in the domain and then plot its image in the resulting plane under the transformation $Y=F(X)$. 
In other words, we plot the image of the function: $$Y=F(X(t)),$$ which is also a parametric curve. Example (collapse on axis). Let's consider this very simple function: $$\begin{cases}u&=2x\\ v&=0\end{cases} \ \leadsto\ \left[ \begin{array}{} u \\ v \end{array}\right]= \left[\begin{array}{} 2 & 0 \\ 0 & 0 \end{array} \right] \cdot \left[ \begin{array}{} x \\ y \end{array} \right].$$ Below, one can see how this function collapses the whole plane to the $x$-axis: In the meantime, the $x$-axis is stretched by a factor of $2$. $\square$ Example (stretch-shrink along axes). Let's consider this function: $$\begin{cases}u&=2x\\ v&=4y\end{cases}.$$ Here, this function is given by the matrix: $$F=\left[ \begin{array}{ll}2&0\\0&4\end{array}\right].$$ What happens to the rest of the plane? Since the stretching is non-uniform, the vectors turn. However, since the basis vectors $E_1$ and $E_2$ don't turn, this is not a rotation but rather "fanning out" of the vectors: $\square$ Example (stretch-shrink along axes). A slightly different function is: $$\begin{cases}u&=-x\\ v&=4y\end{cases}. $$ It is simple because the two variables are fully separated. The slight change to the function produces a similar but different pattern: we see the reversal of the direction of the ellipse around the origin. Here, the matrix of $F$ is diagonal: $$F=\left[ \begin{array}{ll}-1&0\\0&4\end{array}\right].$$ $\square$ Example (collapse). Let's consider a more general function: $$\begin{cases}u&=x&+2y\\ v&=2x&+4y\end{cases} \ \Longrightarrow\ F=\left[ \begin{array}{cc}1&2\\2&4\end{array}\right].$$ It appears that the function is stretching the plane in one direction and collapsing in another. That's why there is a whole line of points $X$ with $FX=0$. To find it, we solve this equation: $$\begin{cases}x&+2y&=0\\ 2x&+4y&=0\end{cases} \ \Longrightarrow\ x=-2y.$$ $\square$ Exercise. Describe what is happening with this matrix $F$: $$F=\left[ \begin{array}{cc}-1&-2\\1&-4\end{array}\right].$$ Exercise. Describe what is happening with this matrix $F$: $$F=\left[ \begin{array}{cc}1&2\\3&2\end{array}\right].$$ Example (skewing-shearing). Consider a matrix with repeated (and, therefore, real) eigenvalues: $$F=\left[ \begin{array}{cc}-1&2\\0&-1\end{array}\right].$$ Below, we replace a circle with an ellipse to see what happens to it under such a function: There is still angular stretch-shrink but this time it is between the two ends of the same line. To see clearer, consider what happens to a square: The plane is skewed, like a deck of cards: Another example is when wind blows at the walls with its main force at their top pushing in the direction of the wind while their bottom is held by the foundation. Such a skewing can be carried out with any image editing software such as MS Paint. $\square$ Example (rotation). Consider a rotation through $90$ degrees: $$\begin{cases}u&=& -y\\ v&=x&&\end{cases} \ \leadsto\ \left[\begin{array}{} u \\ v \end{array}\right] = \left[\begin{array}{} 0&1\\-1&0 \end{array}\right] \cdot \left[\begin{array}{} x \\ y \end{array}\right].$$ Consider a rotation through $45$ degrees: $$\begin{cases}u&=\cos \tfrac{\pi}{4} x& -\sin \tfrac{\pi}{4} y\\ v&=\sin \tfrac{\pi}{4} x&+\cos \tfrac{\pi}{4} y& \end{cases} \ \leadsto\ \left[\begin{array}{} u \\ v \end{array}\right] = \left[\begin{array}{rrr} \cos \tfrac{\pi}{4} y & -\sin \tfrac{\pi}{4} y \\ \sin \tfrac{\pi}{4} y & \cos \tfrac{\pi}{4} y \end{array}\right] \cdot \left[\begin{array}{} x \\ y \end{array}\right].$$ Example (rotation with stretch-shrink). 
Example (rotation with stretch-shrink). Let's consider a more complex function: $$\begin{cases}u&=3x&-13y\\ v&=5x&+y\end{cases} .$$ Here, the matrix of $F$ is not diagonal: $$F=\left[ \begin{array}{cc}3&-13\\5&1\end{array}\right].$$ We have a combination of stretching, flipping, and rotating... $\square$

The determinant of a matrix

Let's, once and for all, solve the system of linear equations: $$\begin{cases} ax&+&by&=0,&\quad (1)\\ cx&+&dy&=0.&\quad (2) \end{cases}$$ The system has unspecified coefficients $a,b,c,d$ and the free terms are chosen to be $0$ for simplicity. From (1), we derive: $$y=-ax/b, \text{ provided }b\ne 0.\quad (3)$$ Substitute this into (2): $$cx+d(-ax/b)=0.$$ Then $$x(c-da/b)=0,$$ or, alternatively, $$x(cb-da)=0, \text{ when }b\ne 0.$$ One possibility is $x=0$; it follows from (3) that $y=0$ too. Then, we have two cases for $b\ne 0$: Case 1: $x=0,\ y=0$, or Case 2: $ad-bc=0$. Now, we apply this analysis to $y$ in (1) instead of $x$; for $a\ne 0$ we arrive at the same two cases: either $x=y=0$ or $ad-bc=0$. The result is the same! Furthermore, if we apply this analysis for $x$ and $y$ in (2) instead of (1), we have the same two cases. Thus, whenever one of the four coefficients, $a,b,c,d$, is non-zero, we have these cases: Case 1: $x=0,\ y=0$, or Case 2: $ad-bc=0$. But when $a=b=c=d=0$, Case 2 is satisfied... and we can have any values for $x$ and $y$! Definition. The determinant of a $2\times 2$ matrix is defined to be: $$\det \left[\begin{array}{} a & b \\ c & d \end{array}\right]=ad-bc.$$ What does the determinant determine? According to the analysis above: $$\det A\ne 0\ \Longrightarrow\ x=y=0.$$ The converse is also true. Indeed, let's consider our system of linear equations again: $$\begin{cases} ax&+&by&=0,&\quad (1)\\ cx&+&dy&=0.&\quad (2) \end{cases}$$ We multiply (1) by $c$ and (2) by $a$. Then we have: $$\begin{cases} cax&+&cby&=0&\quad c(1)\\ acx&+&ady&=0&\quad a(2)\\ \hline (ca-ac)x&+&(cb-ad)y&=0\\ 0\cdot x&-&\det A\cdot y&=0\\ \end{cases}$$ The third equation is the result of subtraction of the first two. The whole equation is zero when $\det A=0$! This means that equations (1) and (2) represent two identical lines. It follows that the original system has infinitely many solutions. Exercise. What if $a=0$ or $c=0$? The following is the summary of the above analysis. Theorem. Suppose $A$ is a $2\times 2$ matrix. Then, $\det A\ne 0$ if and only if the solution set of the matrix equation $AX=0$ consists of only $0$. Thus, the flip, the stretch, and the rotation have non-zero determinants while the projections and the collapse have a zero determinant. The following is the contra-positive form of the theorem. Corollary. Suppose $A$ is a $2\times 2$ matrix. Then, there is $X\ne 0$ such that $AX=0$ if and only if $\det A=0$. Since $A(0)=0$, this indicates that $A$ isn't one-to-one. There is more... Corollary. A linear operator $A$ is a bijection if and only if $\det A\ne 0$. Exercise. Prove the rest of the theorem. So, a zero determinant indicates that some non-zero vector $X$ is taken to $0$ by $A$. It follows that all the multiples, $\lambda X$, of $X$ are also taken to $0$: $$A(\lambda X)=\lambda A(X)=\lambda 0=0.$$ In other words, the whole line is collapsed to $0$.
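As a quick numerical illustration of these two cases (a minimal sketch, assuming Python with numpy):

```python
import numpy as np

def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

print(det2(1, 1, 2, 3))            # 1 : the coffee-blend matrix, so AX = 0 only for X = 0
print(det2(1, 2, 2, 4))            # 0 : the collapsing matrix from the examples above

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(A @ np.array([-2.0, 1.0]))   # [0. 0.] : a non-zero vector taken to 0, as expected
```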
The following may seem less dramatic than that. Definition. A $2\times 2$ matrix $A$ is called singular when its two columns are multiples of each other. So, for $$A = \left[\begin{array}{} a & b \\ c & d \end{array}\right],$$ there is such an $x$ that: $$\left[\begin{array}{} a \\ c \end{array}\right] = x\left[\begin{array}{} b \\ d \end{array}\right].$$ Theorem. A $2\times 2$ matrix $A$ is singular if and only if $\det A=0$. Proof. ($\Rightarrow$) Suppose $A$ is singular, then $$\left[\begin{array}{} a \\ c \end{array}\right] = x \left[\begin{array}{} b \\ d \end{array}\right]\ \Longleftrightarrow\ \begin{cases}a=xb\\ c=xd\end{cases} \Longleftrightarrow\ \det A = ad-bc = (xb)d - b (xd)=0.$$ ($\Leftarrow$) Suppose $ad-bc=0$, then let's find $x$, the multiple. Case 1: assume $b \neq 0$, then choose $x = \frac{a}{b}$. Then $$\begin{array}{} xb &= \frac{a}{b}b &= a, \\ xd &= \frac{a}{b}d &= \frac{ad}{b} = \frac{bc}{b} = c. \end{array}$$ So $$x \left[\begin{array}{} b \\ d \end{array}\right] = \left[\begin{array}{} a \\ c \end{array}\right].$$ Case 2: assume $a \neq 0$... $\blacksquare$ Exercise. Finish the proof. We make the following observation about determinants, which will reappear in the case of $n\times n$ matrices: the determinant is an alternating sum of terms each of which is the product of $n$ of the matrix's entries, exactly one from each row and one from each column. The following important fact is accepted without proof. Corollary. The determinant of a linear operator remains the same in any Cartesian coordinate system.

It's a stretch: eigenvalues and eigenvectors

Consider a linear operator given by the matrix: $$A=\left[\begin{array}{} 2 & 0 \\ 0 & 3 \end{array}\right].$$ What exactly does this transformation of the plane do to it? To answer, just consider where $A$ takes the standard basis vectors: $$\begin{array}{} A:& E_1 = \left[\begin{array}{} 1 \\ 0 \end{array}\right] &\mapsto \left[\begin{array}{} 2 \\ 0 \end{array}\right]&=2E_1 \\ A:&E_2 = \left[\begin{array}{} 0 \\ 1 \end{array}\right] &\mapsto \left[\begin{array}{} 0 \\ 3 \end{array}\right]&=3E_2 \end{array}.$$ In other words, what happens to them is scalar multiplication: $$A(E_1)=2E_1\text{ and }A(E_2)=3E_2.$$ So, we can say that $A$ stretches the plane horizontally by a factor of $2$ and vertically by $3$, in either order. Even though we speak of stretching the plane, this is not to say that vectors are stretched, except for the vertical and horizontal ones. Indeed, any other vector is rotated; for example, the value of $<1,1>$ isn't a multiple of it: $$A\left[\begin{array}{} 1 \\ 1 \end{array}\right]=\left[\begin{array}{} 2 \\ 3 \end{array}\right]\ne \lambda \left[\begin{array}{} 1 \\ 1 \end{array}\right],$$ for any real $\lambda$. In fact, any stretch of this kind will have this form: $$A = \left[\begin{array}{} h & 0 \\ 0 & v \end{array}\right],$$ where $h$ is the horizontal coefficient and $v$ the vertical coefficient. What about a stretch along other axes? For example, we've seen this: This time the matrix is not diagonal: $$A=\left[ \begin{array}{cc}-1&-2\\1&-4\end{array}\right].$$ The plane is visibly stretched but in what direction or directions? It is hard to tell because the result simply looks skewed. We know that $A$ rotates some vectors but $A$ acts as scalar multiplication along some -- still unknown -- axes! In other words, if vectors $V_1$ and $V_2$ point along these axes, we have: $A(V_1)=\lambda_1 V_1$ and $A(V_2)=\lambda_2 V_2$, for some numbers $\lambda_1$ and $\lambda_2$. Definition. Given a linear operator $A : {\bf R}^2 \to {\bf R}^2$, a (real) number $\lambda$ is called an eigenvalue of $A$ if $$A(V)=\lambda V,$$ for some non-zero vector $V$ in ${\bf R}^2$. Then, $V$ is called an eigenvector of $A$ corresponding to $\lambda$. Vector $V=0$ is excluded because we always have $A(0)=0$. Now, how do we find these? Example. If this is the identity matrix, $A=I_2$, the equation is easy to solve: $$\lambda V=AV =I_2V=V.$$
So, $\lambda=1$. This is the only eigenvalue. What are its eigenvectors? All vectors but $0$. $\square$ Example. Let's revisit this diagonal matrix: $$A = \left[\begin{array}{ll} 2 & 0 \\ 0 & 3 \end{array}\right].$$ Then our matrix equation becomes: $$\left[\begin{array}{ll} 2 & 0 \\ 0 & 3 \end{array}\right] \left[\begin{array}{ll} x \\ y \end{array}\right] = \lambda \left[\begin{array}{ll} x \\ y \end{array}\right].$$ Let's rewrite: $$\left[\begin{array}{ll} 2x \\ 3y \end{array}\right] = \left[\begin{array}{} \lambda x \\ \lambda y \end{array}\right] \Longrightarrow\ \begin{cases} 2x &=\lambda x \\ 3y &= \lambda y\\ \end{cases} \Longrightarrow\ \begin{cases} x(2-\lambda ) &=0,&\quad (1) \\ y(3-\lambda ) &=0.&\quad (2) \\ \end{cases}$$ Now, we have $V=<x,y>\ne 0$, so either $x\ne 0$ or $y\ne 0$. Let's use the above equations to consider these two cases: Case 1: $x \ne 0$, then from (1), we have: $2-\lambda =0 \Longrightarrow \lambda =2$, Case 2: $y \ne 0$, then from (2), we have: $3-\lambda =0 \Longrightarrow \lambda =3$. These are the only two possibilities. So, if $\lambda = 2$, then $y=0$ from (2). Therefore, the corresponding eigenvectors are: $$\left[\begin{array}{} x \\ 0 \end{array}\right],\ x \neq 0.$$ If $\lambda=3$, then $x=0$ from (1). Therefore, the corresponding eigenvectors are: $$\left[\begin{array}{ll} 0 \\ y \end{array}\right],\ y \neq 0.$$ These two sets are almost equal to the two axes! If we append $0$ to these sets of eigenvectors, we have the following. For $\lambda = 2$, the set is the $x$-axis: $$\left\{ \left[\begin{array}{ll} x \\ 0 \end{array}\right] :\ x \text{ real } \right\}.$$ And for $\lambda=3$, the set is the $y$-axis: $$\left\{\left[\begin{array}{ll} 0 \\ y \end{array}\right] :\ y \text{ real } \right\}.$$ $\square$ Definition. For a (real) eigenvalue $\lambda$ of $A$, the eigenspace of $A$ corresponding to $\lambda$ is defined and denoted by the following: $$E(\lambda) = \{ V :\ A(V)=\lambda V \}.$$ It's all eigenvectors of $\lambda$ plus $0$... Example. For the identity matrix, we have: $$E_{I_2}(1) = {\bf R}^2.$$ $\square$ Example. For $$A = \left[ \begin{array}{} 2 & 0 \\ 0 & 3 \end{array}\right],$$ we have: $E(2)$ is the $x$-axis. $E(3)$ is the $y$-axis. Example. For the zero matrix, $A=0$, we have $$AV=\lambda V, \text{ or } 0 = \lambda V.$$ Therefore, $\lambda =0$. Hence $$E(0)={\bf R}^2.$$ $\square$ Example. Consider now the projection on the $x$-axis, $$P = \left[\begin{array}{} 1 & 0 \\ 0 & 0 \end{array}\right].$$ Then our matrix equation is solved as follows: $$\left[\begin{array}{} 1 & 0 \\ 0 & 0 \end{array} \right] \left[ \begin{array}{} x \\ y \end{array} \right] = \lambda \left[\begin{array}{} x \\ y \end{array}\right] \Longrightarrow\ \left[\begin{array}{} x \\ 0 \end{array}\right] = \left[\begin{array}{} \lambda x \\ \lambda y \end{array}\right] \Longrightarrow\ \begin{cases} x=\lambda x\\ 0 =\lambda y\end{cases}.$$ So, the only possible cases are: $$\lambda =0 \text{ and } \lambda=1.$$ Next, in order to find the corresponding eigenvectors, we now go back to the system of linear equations for $x$ and $y$. We consider these two cases. Case 1: $\lambda = 0 \Longrightarrow $ $x=0 \cdot x \Longrightarrow x=0$, and $0=0 \cdot y \Longrightarrow y$ any. Hence, $$E(0) = \left\{ \left[\begin{array}{lll} 0 \\ y \end{array}\right] :\ y \text{ real} \right\}.$$ This is the $y$-axis. Case 2: $x=1 \cdot x \Longrightarrow x$ any, and $0=1 \cdot y \Longrightarrow y=0$.
Hence, $$E(1) = \left\{ \left[\begin{array}{lll} x \\ 0 \end{array}\right] :\ x \text{ real} \right\}.$$ This is the $x$-axis. $\square$ Example. Suppose $A$ is the $90$ degree rotation: $$A = \left[\begin{array}{} 0 & -1 \\ 1 & 0 \end{array}\right].$$ Then, to find the eigenvalues, we solve this system of linear equations: $$\left[\begin{array}{} 0 & -1 \\ 1 & 0 \end{array}\right] \left[\begin{array}{} x \\ y \end{array}\right] = \lambda \left[\begin{array}{} x \\ y \end{array} \right]\ \Longrightarrow\ \begin{cases} -y &= \lambda x \\ x &= \lambda y \end{cases} \Longrightarrow\ \begin{cases} -xy &= \lambda x^2 \\ xy &= \lambda y^2 \end{cases} \ \Longrightarrow\ \lambda x^2 = -\lambda y^2\ \Longrightarrow\ x^2=-y^2 \text{ or }\lambda=0.$$ A direct examination reveals that $\lambda = 0$ is not an eigenvalue. In the meantime, the equation $x^2=-y^2$ is impossible for a non-zero vector! There seem to be no eigenvalues, certainly not real ones... $\square$ Example. Let's revisit this linear operator: $$A=\left[ \begin{array}{cc}-1&-2\\1&-4\end{array}\right].$$ Our equation becomes: $$\left[\begin{array}{} -1 & -2 \\ 1 & -4 \end{array}\right] \left[\begin{array}{} x \\ y \end{array}\right] = \lambda \left[\begin{array}{} x \\ y \end{array}\right].$$ Let's rewrite: $$\left[\begin{array}{} -x-2y \\ x-4y \end{array}\right] = \left[\begin{array}{} \lambda x \\ \lambda y \end{array}\right] \Longrightarrow \begin{cases} (-1-\lambda)x&-&2y &=0 \\ x&+&(-4-\lambda )y &= 0\\ \end{cases}.$$ This is another system of linear equations to be solved, again. Is there an easier way? $\square$ Let's try an alternative approach... Suppose $\lambda$ is an eigenvalue for matrix $A$. This means that $AV=\lambda V$ for some non-zero vector $V$ in ${\bf R}^2$. Let's try to rewrite this matrix equation: $$AV=\lambda V\ \Longleftrightarrow\ AV-\lambda V = 0 \ \Longleftrightarrow\ (A-\lambda)V=0 ?$$ But $\lambda$ is a number not a matrix! The last equation is better rewritten as: $$ AV-\lambda V=0\ \Longleftrightarrow\ AV-\lambda I_2V=0\ \Longleftrightarrow\ (A-\lambda I_2)V=0.$$ This is a new matrix equation to be solved: the matrix is $F=A - \lambda I_2$, $V$ is the unknown, the right hand side is $B=0$. But do we really need to solve it? Our strategy is to go after the eigenvalues first and then to find the eigenvectors. Then, all we need is the following familiar theorem: there is a non-zero solution of $FX=0$ if and only if the determinant of the matrix is zero, $\det F = 0$. Theorem. Suppose $A$ is a linear operator on ${\bf R}^2$. Then an eigenvalue $\lambda$ of $A$ is a solution to the following (quadratic) equation: $$\det (A-\lambda I_2)=0.$$ Note: "Eigen" means "characteristic" in German. Definition. The characteristic polynomial of a $2\times 2$ matrix $A$ is defined to be: $$\chi_A(\lambda) = \det(A - \lambda I_2).$$ Meanwhile, the equation is called the characteristic equation. Example. Consider again: $$A = \left[\begin{array}{} 2 & 0 \\ 0 & 3 \end{array}\right],$$ then $$\chi_A(\lambda) = \det \left[\begin{array}{} 2-\lambda & 0 \\ 0 & 3-\lambda \end{array}\right] = (2-\lambda)(3-\lambda) = 0.$$ Therefore, we have: $$\lambda=2,\ 3.$$ $\square$ Example. If $$A=\left[\begin{array}{} 1 & 0 \\ 0 & 0 \end{array}\right],$$ the characteristic equation is: $$\chi_A(\lambda) = \det \left[\begin{array}{} 1-\lambda & 0 \\ 0 & -\lambda \end{array}\right] = (1-\lambda)(-\lambda) = 0.$$ Then, $$\lambda=1,\ 0.$$ $\square$ Example. Take $A$ the $90$ degree rotation again.
Then, $$\chi_A(\lambda) = \det \left[\begin{array}{} -\lambda & -1 \\ 1 & -\lambda \end{array}\right] = \lambda^2+1 = 0.$$ No real solutions! So, no vector is taken by $A$ to its own multiple. $\square$ A broader view is provided by the theorem below. Theorem. If $A$ is a $2 \times 2$ matrix, then the eigenvalues are the real roots of the (quadratic) characteristic polynomial $\chi_A$, and, therefore, the number of eigenvalues is less than or equal to $2$, counting their multiplicities. Linear operators with real eigenvalues We have shown how one can understand the way a linear operator transforms the plane: by examining what happens to the various curves. This time, we would like to learn how to predict the outcome by examining only its matrix. Below is a familiar fact that will take us down that road. Theorem. When $\det F\ne 0$, the vector $X=0$ is the only vector that gives the zero value under the operator $Y=FX$; otherwise, there are infinitely many such vectors. In other words, what matters is whether $F$, as a function, is or is not one-to-one. Example (collapse on axis). The latter is the "degenerate" case such as the following: $$\begin{cases}u&=2x\\ v&=0\end{cases} \ \leadsto\ \left[ \begin{array}{} u \\ v \end{array}\right]= \left[\begin{array}{} 2 & 0 \\ 0 & 0 \end{array} \right] \, \left[ \begin{array}{} x \\ y \end{array} \right].$$ Below, one can see how this function collapses the whole plane to the $x$-axis while the $x$-axis is stretched by a factor of $2$: The illustrations are produced by a spreadsheet. By mapping various curves, one can see how the linear operator given by this matrix transforms the plane: stretching, shrinking in various directions, rotations, etc. The spreadsheet also shows eigenspaces as two (or one, or none) straight lines; they remain in place under the transformation. Running a few examples demonstrates a classification of matrices, which in turn will give us -- in Chapter 24 -- a classification of systems of ODEs... Example (stretch-shrink along axes). Let's consider this linear operator and its matrix $$\begin{cases}u&=2x\\ v&=4y\end{cases} \text{ and } F=\left[ \begin{array}{ll}2&0\\0&4\end{array}\right].$$ To see the transformation of the whole plane, we see first the transformations of the two axes. In other words, we track the values of the basis vectors and the rest of the values are seen as a linear combination of these. The vectors turn non-uniformly, i.e., they "fan out": Algebraically, we have: $$FX=F\left[ \begin{array}{ll}u\\v\end{array}\right]=u \left[ \begin{array}{ll}2\\0\end{array}\right] +v\left[ \begin{array}{ll}0\\4\end{array}\right]=2u \left[ \begin{array}{ll}1\\0\end{array}\right] +4v\left[ \begin{array}{ll}0\\1\end{array}\right]=2uE_1+4vE_2.$$ The last expression is a linear combination of the values of the two standard basis vectors. The middle, however, is also a linear combination but with respect to two "non-standard" basis vectors: $$V_1=\left[ \begin{array}{ll}2\\0\end{array}\right] \text{ and } V_2=\left[ \begin{array}{ll}0\\4\end{array}\right].$$ $\square$ Example (stretch-shrink along axes). A slightly different function is: $$\begin{cases}u&=-x\\ v&=4y\end{cases} \text{ and } F=\left[ \begin{array}{ll}-1&0\\0&4\end{array}\right]. $$ It is still simple because the two variables remain fully separated.
We can think of either of these two transformations as a transformation of the whole plane that is limited to one of the two axes: The slight change to the function produces a similar but different pattern: we see the reversal of the direction of the ellipse around the origin. Algebraically, we have as before: $$FX=F\left[ \begin{array}{ll}u\\v\end{array}\right]= u\left[ \begin{array}{ll}-1\\0\end{array}\right] +v\left[ \begin{array}{ll}0\\4\end{array}\right]=- u\left[ \begin{array}{ll}1\\0\end{array}\right] +4v\left[ \begin{array}{ll}0\\1\end{array}\right]=-uE_1+4vE_2.$$ Once again, the last expression is a linear combination of the values of the two standard basis vectors while the middle is a linear combination but with respect to another basis: $$V_1=\left[ \begin{array}{ll}-1\\0\end{array}\right] \text{ and } V_2=\left[ \begin{array}{ll}0\\4\end{array}\right].$$ $\square$ What if the two variables aren't separated? In the examples, the standard basis vectors happen to be the eigenvectors of the two (diagonal) matrices. The insight provided by the examples is that the two variables can be separated -- along the eigenvectors of the matrix that serve as an alternative basis. To see how this happens, just imagine that the two pictures above are skewed (the latter also reverses the orientation of the curve): The idea is uncomplicated: the linear operator within the eigenspace is $1$-dimensional. It can then be represented the usual way -- by a single number. If we know this number, how do we find the rest of the values of the linear operator? In two steps. First, every $X$ that lies within the eigenspace, which is a line, is a multiple of the eigenvector and its value under $F$ can be easily computed: $$X=rV\ \Longrightarrow\ U=F(rV)=rFV=r\lambda V.$$ In fact, we have the following result. Theorem (Eigenspaces). If $\lambda$ is a (real) eigenvalue and $V$ a corresponding eigenvector of a matrix $F$, then every multiple of $V$ is also an eigenvector; i.e., $$V \text{ belongs to } E(\lambda) \ \Longrightarrow\ U=r V \text{ belongs to } E(\lambda) \text{ for any real } r.$$ Second, the rest of values is covered by the following idea: try to express the value as a linear combination of two values found so far. In fact, we have the following result. Theorem (Representation). Suppose $V_1$ and $V_2$ are two eigenvectors of a matrix $F$ that correspond to two (possibly equal) eigenvalues $\lambda_1$ and $\lambda_2$. Suppose also that $V_1$ and $V_2$ aren't multiples of each other. Then, all values of the linear operator $Y=FX$ are represented as linear combinations of its values on the eigenvectors: $$X=cV_1+kV_2\ \Longrightarrow\ FX=c\lambda_1V_1+k\lambda_2V_2,$$ with some real coefficients $C$ and $K$. Exercise. Show that when all eigenvectors are multiples of each other, the formula won't give us all the values of the linear operator. Let's consider a few functions with non-diagonal matrices. Example (collapse). Let's consider a more general linear operator: $$\begin{cases}u&=x&+&2y\\ v&=2x&+&4y\end{cases} \ \Longrightarrow\ F=\left[ \begin{array}{cc}1&2\\2&4\end{array}\right].$$ It appears that the function has a stretching in one direction and a collapse in another. What are those directions? Linear algebra gives the answer. First, the determinant is zero: $$\det F=\det\left[ \begin{array}{cc}1&2\\2&4\end{array}\right]=1\cdot 4-2\cdot 2=0.$$ That's why there is a whole line of points $X$ with $FX=0$. 
To find it, we solve this equation: $$\begin{cases}x&+&2y&=0\\ 2x&+&4y&=0\end{cases} \ \Longrightarrow\ x=-2y.$$ We have, then, eigenvectors corresponding to the zero eigenvalue $\lambda_1=0$: $$V_1=\left[\begin{array}{cc}2\\-1\end{array}\right]\ \Longrightarrow\ FV_1=0 .$$ So, second, there is only one non-zero eigenvalue: $$\det (F-\lambda I)=\det\left[ \begin{array}{cc}1-\lambda&2\\2&4-\lambda\end{array}\right]=\lambda^2-5\lambda=\lambda(\lambda-5).$$ Let's find the eigenvectors for $\lambda_2=5$. We solve the equation: $$FV=\lambda_2 V,$$ as follows: $$FV=\left[ \begin{array}{cc}1&2\\2&4\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=5\left[\begin{array}{cc}x\\y\end{array}\right].$$ This gives us the following system of linear equations: $$\begin{cases}x&+&2y&=5x\\ 2x&+&4y&=5y\end{cases} \ \Longrightarrow\ \begin{cases}-4x&+&2y&=0\\ 2x&-&y&=0\end{cases} \ \Longrightarrow\ y=2x.$$ This line is the eigenspace. We choose the eigenvector to be: $$V_2=\left[\begin{array}{cc}1\\2\end{array}\right].$$ Every value is a linear combination of the values on the two eigenvectors: $$X=c\, V_1+k\, V_2 \ \Longrightarrow\ FX=c\cdot 0\cdot\left[\begin{array}{cc}2\\-1\end{array}\right]+k\cdot 5\cdot\left[\begin{array}{cc}1\\2\end{array}\right]=5k\left[\begin{array}{cc}1\\2\end{array}\right].$$ $\square$ Exercise. Find the line of the projection. Example (stretch-shrink). Let's consider this function: $$\begin{cases}u&=-x&-&2y,\\ v&=x&-&4y.\end{cases} $$ Here, the matrix of $F$ is not diagonal: $$F=\left[ \begin{array}{cc}-1&-2\\1&-4\end{array}\right].$$ The analysis starts with the characteristic polynomial: $$\det (F-\lambda I)=\det \left[ \begin{array}{cc}-1-\lambda&-2\\1&-4-\lambda\end{array}\right]=\lambda^2+5\lambda+6.$$ Therefore, the eigenvalues are: $$\lambda_1=-3,\ \lambda_2=-2.$$ To find the eigenvectors, we solve the two equations: $$FV_i=\lambda_i V_i,\ i=1,2.$$ The first: $$FV_1=\left[ \begin{array}{cc}-1&-2\\1&-4\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=-3\left[\begin{array}{cc}x\\y\end{array}\right].$$ This gives us the following system of linear equations: $$\begin{cases}-x&-&2y&=-3x\\ x&-&4y&=-3y\end{cases} \ \Longrightarrow\ \begin{cases}2x&-&2y&=0\\ x&-&y&=0\end{cases} \ \Longrightarrow\ x=y.$$ We choose: $$V_1=\left[\begin{array}{cc}1\\1\end{array}\right].$$ The second eigenvalue: $$FV_2=\left[ \begin{array}{cc}-1&-2\\1&-4\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=-2\left[\begin{array}{cc}x\\y\end{array}\right].$$ We have the following system: $$\begin{cases}-x&-&2y&=-2x\\ x&-&4y&=-2y\end{cases} \ \Longrightarrow\ \begin{cases}x&-&2y&=0\\ x&-&2y&=0\end{cases}\ \Longrightarrow\ x=2y.$$ We choose: $$V_2=\left[\begin{array}{cc}2\\1\end{array}\right].$$ Then, $$X=c\, V_1+k\, V_2 \ \Longrightarrow\ FX=-3c\, \left[\begin{array}{cc}1\\1\end{array}\right]-2k\,\left[\begin{array}{cc}2\\1\end{array}\right].$$ $\square$ Example (stretch-shrink). Let's consider this linear operator: $$\begin{cases}u&=x&+&2y,\\ v&=3x&+&2y.\end{cases} $$ Here, the matrix of $F$ is not diagonal: $$F=\left[ \begin{array}{cc}1&2\\3&2\end{array}\right].$$ Let's find the eigenvalues: $$\det (F-\lambda I)=\det \left[ \begin{array}{cc}1-\lambda&2\\3&2-\lambda\end{array}\right]=\lambda^2-3\lambda-4.$$ Therefore, the eigenvalues are: $$\lambda_1=-1,\ \lambda_2=4.$$ Now we find the eigenvectors.
We solve the two equations: $$FV_i=\lambda_i V_i,\ i=1,2.$$ The first: $$FV_1=\left[ \begin{array}{cc}1&2\\3&2\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=-1\left[\begin{array}{cc}x\\y\end{array}\right].$$ This gives us the following system of linear equations: $$\begin{cases}x&+&2y&=-x\\ 3x&+&2y&=-y\end{cases} \ \Longrightarrow\ \begin{cases}2x&+&2y&=0\\ 3x&+&3y&=0\end{cases} \ \Longrightarrow\ x=-y.$$ We choose: $$V_1=\left[\begin{array}{cc}1\\-1\end{array}\right].$$ Every vector within this eigenspace (the line $y=-x$) is a multiple of this eigenvector, and its value is found by scalar multiplication: $$FV_1=\lambda_1V_1=-\left[\begin{array}{cc}1\\-1\end{array}\right].$$ The second eigenvalue: $$FV_2=\left[ \begin{array}{cc}1&2\\3&2\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=4\left[\begin{array}{cc}x\\y\end{array}\right].$$ We have the following system: $$\begin{cases}x&+&2y&=4x\\ 3x&+&2y&=4y\end{cases} \ \Longrightarrow\ \begin{cases}-3x&+&2y&=0\\ 3x&-&2y&=0\end{cases}\ \Longrightarrow\ x=2y/3.$$ We choose: $$V_2=\left[\begin{array}{cc}1\\3/2\end{array}\right].$$ Every vector within this eigenspace (the line $y=3x/2$) is a multiple of this eigenvector, and its value is found by scalar multiplication: $$FV_2=\lambda_2V_2=4\left[\begin{array}{cc}1\\3/2\end{array}\right].$$ Then, $$X=c\, V_1+k\, V_2 \ \Longrightarrow\ U=FX=-c\left[\begin{array}{cc}1\\-1\end{array}\right]+4k\left[\begin{array}{cc}1\\3/2\end{array}\right].$$ $\square$ Let's summarize the results. Theorem (Classification of linear operators -- real eigenvalues). Suppose matrix $F$ has two real non-zero eigenvalues $\lambda _1$ and $\lambda_2$. Then, the function $U=FX$ stretches/shrinks each eigenspace by a factor of $|\lambda _1|$ and $|\lambda_2|$ respectively, and if $\lambda _1$ and $\lambda_2$ have the same sign, it preserves the orientation of a closed curve around the origin; and if $\lambda _1$ and $\lambda_2$ have the opposite signs, it reverses the orientation of a closed curve around the origin. Example (skewing-shearing). Consider a matrix with repeated (and, therefore, real) eigenvalues: $$F=\left[ \begin{array}{cc}1&1\\0&1\end{array}\right].$$ Below, we replace a circle with an ellipse to see what happens to it under such a function: There is still angular stretch-shrink but this time it is between the two ends of the same line. We see "fanning out" again: To see this more clearly, consider what happens to a square: This is the characteristic polynomial: $$\det (F-\lambda I)=\det \left[ \begin{array}{cc}1-\lambda&1\\0&1-\lambda\end{array}\right]=(1-\lambda)^2.$$ Therefore, the eigenvalues are $$\lambda_1=\lambda_2=1.$$ What are the eigenvectors? $$FV_{1,2}=\left[ \begin{array}{cc}1&1\\0&1\end{array}\right]\left[\begin{array}{cc}x\\y\end{array}\right]=1\left[\begin{array}{cc}x\\y\end{array}\right].$$ This gives us the following system of linear equations: $$\begin{cases}x&+&y&=x\\ &&y&=y\end{cases} \ \Longrightarrow\ x \text{ any},\ y=0.$$ The only eigenvectors are horizontal! Therefore, our classification theorem doesn't apply... $\square$ Example. There are even more outcomes that the theorem doesn't cover. Recall the characteristic polynomial of the matrix $A$ of the $90$ degree rotation: $$\chi_A(\lambda) = \det \left[\begin{array}{} -\lambda & -1 \\ 1 & -\lambda \end{array}\right] = \lambda^2+1.$$ But the characteristic equation, $$x^2+1=0,$$ has no real solutions! Are we done then?.. $\square$ How complex numbers emerge The equation $$x^2+1=0$$ has no solutions.
Indeed, if we try to solve it the usual way, we get these: $$x=\sqrt{-1} \text{ and } x=-\sqrt{-1}.$$ There are no such real numbers! However, let's ignore this fact for a moment. Let's substitute what we have back into the equation and -- blindly -- follow the rules of algebra. We "confirm" that this "number" is a "solution": $$x^2+1=(\sqrt{-1})^2+1=(-1)+1=0.$$ We call this entity the imaginary unit, denoted by $i$. We just add this "number" to the set of numbers we do algebra with: And see what happens... The only thing we need to know about $i$ is this four-part convention: $i$ is not a real number, but $i$ can participate in the (four) algebraic operations with real numbers by following the same rules; $i^2=-1$ and, consequently, $i\ne 0$. We use these fundamental facts and replace $\sqrt{-1}$ with $i$ any time we want. Using these rules, we discover that our irreducible (Chapter 4) polynomial can now be factored: $$(x-i)(x+i)=x^2-ix+ix-i^2=x^2+1.$$ Warning: $i$ is not an $x$-intercept of $f(x)=x^2+1$ as the $x$-axis ("the real line") consists of only (and all) real numbers. Definition. The multiples of the imaginary unit, $2i,-i,...$, are called imaginary numbers. We have created a whole class of non-real numbers! There are as many of them as the real numbers: Except $0i=0$ is real! Example. The imaginary numbers may also come from solving the simplest quadratic equations. For example, the equation $$x^2+4=0$$ gives us via our substitution: $$x=\pm\sqrt{-4}=\pm \sqrt{4}\sqrt{-1}=\pm 2i.$$ Indeed, if we substitute $2i$ into $$x^2+4=0,$$ we have: $$(2i)^2+4=-4+4=0.$$ $\square$ Imaginary numbers obey the laws of algebra as we know them! If we need to simplify the expression, we try to manipulate it in such a way that real numbers are combines with real while $i$ is pushed aside. For example, we can just factor $i$ out of all addition and subtraction: $$5i+3i=(5+3)i=8i.$$ It looks exactly like middle school algebra: $$5x+3x=(5+3)x=8x.$$ After all, $x$ could be $i$. The nature of the unit doesn't matter (if we can push it aside): $$5\text{ in.}+3\text{ in.}=(5+3)\text{ in.}=8\text{ in.},$$ or $$5\text{ apples }+3\text{ apples }=(5+3)\text{ apples }=8\text{ apples }.$$ It's "$8$ apples" not "$8$"! And so on. This is how we multiply an imaginary number by a real number: $$2\cdot(3i)=(2\cdot 3)i=6i.$$ How do we multiply two imaginary numbers? In contrast to the above, even though multiplication and division follow the same rule as always, we can, when necessary, and often have to, simplify the outcome of our algebra using our fundamental identity: $$i^2=-1.$$ For example: $$(5i)\cdot(3i)=(5\cdot 3) (i\cdot i)=15i^2=15(-1)=-15.$$ We also simplify the outcome using the other fundamental fact about the imaginary unit: $$i\ne 0.$$ We can divide by $i$! For example, $$\frac{5i}{3i}=\frac{5}{3}\frac{i}{i}=\frac{5}{3}\cdot 1=\frac{5}{3}.$$ As you can see, doing algebra with imaginary numbers will often bring us back to real numbers. These two classes of numbers cannot be separated from each other! They aren't. Let's take another look at quadratic equations. The equation $$ax^2+bx+c=0, \ a\ne 0,$$ is solved with the familiar Quadratic Formula: $$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}.$$ Let's consider $$x^2+2x+10=0.$$ Then the roots are supposed to be: $$\begin{array}{ll} x&=\frac{-2\pm\sqrt{2^2-4\times 10}}{2}\\ &=\frac{-2\pm\sqrt{-36}}{2}\\ &=-1\pm\sqrt{-9}\\ &=-1\pm\sqrt{9}\sqrt{-1}\\ &=-1\pm 3i. \end{array}$$ We end up adding real and imaginary numbers! Warning: there is no way to simplify this. 
This addition is not literal. It's like "adding" apples to oranges: $$5\text{ apples }+3\text{ oranges }=...$$ It's not $8$ and it's not $8$ fruit because we wouldn't be able to read this equality backwards. The algebra is still possible: $$(5a+3o)+(2a+4o)=(5+2)a+(3+4)o=7a+7o.$$ Or, something familiar: $$(5+3x)+(2+4x)=(5+2)+(3+4)x=7+7x,$$ and something new: $$(5+3i)+(2+4i)=(5+2)+(3+4)i=7+7i.$$ The numbers we are facing consist of both real numbers and imaginary numbers. This fact makes them "complex"... Definition. Any sum of real and imaginary numbers is called a complex number. The set of all complex numbers is denoted by ${\bf C}$. Warning: all real numbers are complex! Addition and subtraction are easy; we just combine similar terms just like in middle school. For example, $$(1+5i)+(3-i)=1+5i+3-i=(1+3)+(5i-i)=4+4i.$$ To simplify multiplication of complex numbers, we expand and then use $i^2=-1$, as follows: $$\begin{array}{ll} (1+5i)\cdot (3-i)&=1\cdot 3+5i\cdot 3+1\cdot(-i)+5i\cdot (-i)\\ &=3+15i-i-5i^2\\ &=(3+5)+(15i-i)\\ &=8+14i. \end{array}$$ It's a bit trickier with division: $$\begin{array}{ll} \frac{1+5i}{3-i}&=\frac{1+5i}{3-i}\frac{3+i}{3+i}\\ &=\frac{(1+5i)(3+i)}{(3-i)(3+i)}\\ &=\frac{-2+16i}{3^2-i^2}\\ &=\frac{-2+16i}{3^2+1}\\ &=\frac{1}{10}(-2+16i)\\ &=-.2+1.6i.\\ \end{array}$$ The simplification of the denominator is made possible by the trick of multiplying by $3+i$. It is the same trick we used in Chapters 5 and 6 to simplify fractions with roots to compute their limits: $$\frac{1}{1-\sqrt{x}}=\frac{1}{1-\sqrt{x}}\frac{1+\sqrt{x}}{1+\sqrt{x}}=\frac{1+\sqrt{x}}{1-x}.$$ Definition. The complex conjugate of $z=a+bi$ is defined and denoted by: $$\bar{z}=\overline{a+bi}=a-bi.$$ The following is crucial. Theorem (Algebra of complex numbers). The rules of the algebra of complex numbers are identical to those of real numbers: Commutativity of addition: $z+u=u+z$; Associativity of addition: $(z+u)+v=z+(u+v)$; Commutativity of multiplication: $z\cdot u=u\cdot z$; Associativity of multiplication: $(z\cdot u)\cdot v=z\cdot (u\cdot v)$; Distributivity: $z\cdot (u+ v)=z\cdot u+ z \cdot v$. This is the complex number system; it follows the rules of the real number system but also contains it. This theorem will allow us to build calculus for complex functions that is almost identical to that for real functions and also contains it. Definition. Every complex number $z$ has the standard representation: $$z=a+bi,$$ where $a$ and $b$ are two real numbers. The two components are named as follows: $a$ is the real part of $z$, with notation $a=\operatorname{Re}(z)$, and $b$ is the imaginary part of $z$, with notation $b=\operatorname{Im}(z)$. Then, the purpose of the computations above was to find the standard form of a complex number that comes from algebraic operations with other complex numbers. So, complex numbers are linear combinations of the real unit, $1$, and the imaginary unit, $i$. This representation helps us understand that two complex numbers are equal if and only if both their real and imaginary parts are equal. Thus, $$z=\operatorname{Re}(z)+\operatorname{Im}(z)i.$$ In order to see the geometric representation of complex numbers we need to combine the real number line and the imaginary number line. How? We realize that they have nothing in common... except $0=0i$ belongs to both! We then unite them in the same manner we built the $xy$-plane in Chapter 2: It is called the complex plane, ${\bf C}$.
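Before moving on to the geometric picture, a quick numerical check of the arithmetic above may be useful. The sketch below is purely illustrative and assumes nothing beyond Python's built-in complex numbers, where the suffix `j` plays the role of the imaginary unit $i$:

```python
# Illustrative check of the complex arithmetic worked out above;
# Python's built-in complex type follows exactly the same rules.
z = 1 + 5j
u = 3 - 1j

print(z + u)           # (4+4j)     : combine the real parts and the imaginary parts
print(z * u)           # (8+14j)    : expand, then use i*i = -1
print(z / u)           # (-0.2+1.6j): multiply through by the conjugate of u
print(u.conjugate())   # (3+1j)     : the complex conjugate of 3 - i
```

The point is not the code itself but the fact that the rules listed in the theorem are exactly the rules a computer uses.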
If $z=a+bi$, then $a$ and $b$ are thought of as the components of vector $z$ in the plane. We have a one-to-one correspondence: $${\bf C}\longleftrightarrow {\bf R}^2,$$ given by $$a+bi\longleftrightarrow <a,b>.$$ Then the $x$-axis of this plane consists of the real numbers and the $y$-axis of the imaginary numbers. Then the complex conjugate of $z$ is the complex number with the same real part as $z$ and the imaginary part with the opposite sign: $$\operatorname{Re}(\bar{z})=\operatorname{Re}(z) \text{ and } \operatorname{Im}(\bar{z})=-\operatorname{Im}(z).$$ Warning: all numbers we have encountered in this book so far are real non-complex, and so are all quantities one can encounter in day-to-day life or science: time, location, length, area, volume, mass, temperature, money, etc. Classification of quadratic polynomials The general quadratic equation with real coefficients, $$ax^2+bx+c=0, \ a\ne 0,$$ can be simplified. Let's divide by $a$ and study the resulting quadratic polynomial: $$f(x)=x^2+px+q,$$ where $p=b/a$ and $q=c/a$. The Quadratic Formula then provides the $x$-intercepts of this function: $$x=-\frac{p}{2}\pm\frac{\sqrt{p^2-4q}}{2}.$$ Of course, the $x$-intercepts are the real solutions of this equation and that is why the result only makes sense when the discriminant of the quadratic polynomial, $$D=p^2-4q,$$ is non-negative. Now, increasing the value of $q$ makes the graph of $y=f(x)$ shift upward and, eventually, pass the $x$-axis entirely. We can observe how its two $x$-intercepts start to get closer to each other, then merge, and finally disappear: This process is explained by what is happening, with the growth of $q$, to the roots given by the Quadratic Formula: $$x_{1,2}=-\frac{p}{2}\pm\frac{\sqrt{D}}{2}.$$ Starting with a positive value, $D$ decreases and $\frac{\sqrt{D}}{2}$ decreases; then $D=0$ and $\frac{\sqrt{D}}{2}=0$; then $D$ becomes negative and $\frac{\sqrt{D}}{2}$ becomes imaginary (but $\frac{\sqrt{-D}}{2}$ is real). The roots are, respectively: $$\begin{array}{l|ll} \text{discriminant }&\text{ root }\# 1&\quad&\text{ root }\# 2\\ \hline D>0&x_{1}=-\frac{p}{2}-\frac{\sqrt{D}}{2} && x_2=-\frac{p}{2}+\frac{\sqrt{D}}{2}\\ D=0&x_{1}=-\frac{p}{2} && x_2=-\frac{p}{2}\\ D<0&x_{1}=-\frac{p}{2}-\frac{\sqrt{-D}}{2}i && x_2=-\frac{p}{2}+\frac{\sqrt{-D}}{2}i \end{array}$$ Observe that the real roots ($D>0$) are unrelated while the complex ones ($D<0$) are linked so much that knowing one tells us what the one is: just flip the sign; they are conjugate of each other: They always come in pairs! As a summary, we have the following classification the roots of quadratic polynomials in terms of the sign of the discriminant. Theorem (Classification of Roots I). The two roots of a quadratic polynomial with real coefficients are: distinct real when its discriminant $D$ is positive; equal real when its discriminant $D$ is zero; complex conjugate of each other when its discriminant $D$ is negative. To understand ODEs, we will need a more precise way to classify the polynomials: according to the signs of the real parts of their roots. The signs will determine increasing and decreasing behavior of certain solutions. Once again, these are the possibilities: Theorem (Classification of Roots II). Suppose $x_1$ and $x_2$ are the two roots of a quadratic polynomial $f(x)=x^2+px+q$ with real coefficients. 
Then the signs of the real parts $\operatorname{Re}(x_1)$ and $\operatorname{Re}(x_2)$ of $x_1$ and $x_2$ are: same when $p^2> 4q$ and $q\ge 0$; opposite when $p^2> 4q$ and $q<0$; same and opposite of that of $p$ when $p^2\le 4q$. Proof. The condition $p^2\le 4q$ is equivalent to $D\le 0$. We can see in the table above that, in that case, we have $\operatorname{Re}(x_1)=\operatorname{Re}(x_2)=-\frac{p}{2}$. We are left with the case $D>0$ and real roots. The case of equal signs is separated from the case of opposite signs of $x_1$ and $x_2$ by the case when both are equal to zero: $x_1=x_2=0$. We solve: $$-\frac{p}{2}-\frac{\sqrt{D}}{2}=0\ \Longrightarrow\ p=-\sqrt{p^2-4q}\ \Longrightarrow\ p^2=p^2-4q\ \Longrightarrow\ q=0.$$ $\blacksquare$ Let's visualize our conclusion. We would like to show the main scenarios of what kinds of roots the polynomial might have depending on the values of its two coefficients, $p$ and $q$. First, how do we visualize pairs of numbers? As points on a coordinate plane of course... but only when they are real. Suppose for now that they are. We start with a plane, the $x_1x_2$-plane to be exact, as a representation of all possible pairs of real roots (left). Then the diagonal of this plane represents the case of equal (and still real) roots, $x_1=x_2$, i.e., $D=0$. Since the order of the roots doesn't matter -- $(x_1,x_2)$ is as good as $(x_2,x_1)$ -- we need only half of the plane. We fold the plane along the diagonal (middle). The diagonal -- represented by the equation $D=0$ -- exposed this way can now serve its purpose of separating the case of real and complex roots. Now, let's go to the $pq$-plane. Here, the parabola $p^2=4q$ also represents the equation $D=0$. Let's bring them together! We take our half-plane and bend its diagonal edge into the parabola $p^2=4q$ (right). Classifying polynomials this way allows one to classify matrices and understand what each of them does as a transformation of the plane, which in turn will help us understand systems of ODEs. The complex plane ${\bf C}$ is the Euclidean space ${\bf R}^2$ We will consider ODE of functions over complex variables. The issues we need to address in order to develop calculus of such functions are familiar from early part of this book. However, we will initially look at them through the lens of vector algebra presented in Chapter 16. A complex number $z$ has the standard representation: $$z=a+bi,$$ where $a$ and $b$ are two real numbers. These two can be seen in the geometric representation of complex numbers: Therefore, $a$ and $b$ are thought of as the coordinates of $z$ as a point on the plane. But any complex number is not only a point on the complex plane but also a vector. We have a one-to-one correspondence: $${\bf C}\longleftrightarrow {\bf R}^2,$$ given by $$a+bi\ \longleftrightarrow\ <a,b>.$$ There is more to this than just a match; the algebra of vectors in ${\bf R}^2$ applies! Warning: In spite of this fundamental correspondence, we will continue to think of complex numbers as numbers (and use the lower case letters). Let's see how this algebra of numbers works in parallel with the algebra of $2$-vectors. First, the addition of complex numbers is done component-wise: $$\begin{array}{ll} (a+bi)&+&(c+di)&=&(a+c)&+&(b+d)i,\\ <a,b>&+&<c,d>&=&<a+c&,&b+d>. \end{array}$$ Second, we can easily multiply complex numbers by real ones: $$\begin{array}{ll} (a+bi)&c&=&(ac)&+&(bc)i,\\ <a,b>&c &=&<ac&,&bc>. \end{array}$$ Aren't complex numbers just vectors? 
No, not with the possibility of multiplication taken into account... Our study of calculus of complex numbers starts with the study of the topology of the complex plane. This topology is the same as that of the Euclidean plane ${\bf R}^2$! Just as before, every function $z=f(t)$ with an appropriate domain creates a sequence: $$z_k=f(k).$$ A function with complex values defined on a ray in the set of integers, $\{p,p+1,...\}$, is called an infinite sequence, or simply sequence with the abbreviated notation: $$z_k=\{z_k\}.$$ Example. A good example is that of the sequence made of the reciprocals: $$z_k=\frac{\cos k}{k}+\frac{\sin k}{k}i.$$ It tends to $0$ while spiraling around it. The starting point of calculus of complex numbers is the following. The convergence of a sequence of complex numbers is the convergence of its real and imaginary parts or, which is equivalent, the convergence of points (or vectors) on the complex plane seen as any plane: the distance from the $k$th point to the limit is getting smaller and smaller. We use the definition of convergence for vectors on the plane from Chapter 16 simply replacing vectors with complex numbers and "magnitude" with "modulus". Definition. Suppose $\{z_k:\ k=1,2,3...\}$ is a sequence of complex numbers, i.e., points in ${\bf C}$. We say that the sequence converges to another complex number $z$, i.e., a point in ${\bf C}$, called the limit of the sequence, if: $$||z_k- z||\to 0\text{ as }k\to \infty,$$ denoted by: $$z_k\to z \text{ as }k\to \infty,$$ or $$z=\lim_{k\to \infty}z_k.$$ If a sequence has a limit, we call the sequence convergent and say that it converges; otherwise it is divergent and we say it diverges. In other words, the points start to accumulate in smaller and smaller circles around $z$. A way to visualize a trend in a convergent sequence is to enclose the tail of the sequence in a disk: Theorem (Uniqueness). A sequence can have only one limit (finite or infinite); i.e., if $a$ and $b$ are limits of the same sequence, then $a=b$. Definition. We say that a sequence $z_k$ tends to infinity if the following condition holds: for each real number $R$, there exists a natural number $N$ such that, for every natural number $k > N$, we have $$||z_k|| >R;$$ we use the following notation: $$z_k\to \infty \text{ as } k\to \infty .$$ The following is another analog of a familiar theorem about the topology of the plane. Theorem (Component-wise Convergence of Sequences). A sequence of complex numbers $z_k$ in ${\bf C}$ converges to a complex number $z$ if and only if both the real and the imaginary parts of $z_k$ converge to the real and the imaginary parts of $z$ respectively; i.e., $$z_k\to z \ \Longleftrightarrow\ \operatorname{Re}(z_k)\to \operatorname{Re}(z) \text{ and } \operatorname{Im}(z_k)\to \operatorname{Im}(z).$$ The algebraic properties of limits of sequences of complex numbers also look familiar... Theorem (Sum Rule). If sequences $z_k ,u_k$ converge then so does $z_k + u_k$, and $$\lim_{k\to\infty} (z_k + u_k) = \lim_{k\to\infty} z_k + \lim_{k\to\infty} u_k.$$ Theorem (Constant Multiple Rule). If sequence $z_k $ converges then so does $c z_k$ for any complex $c$, and $$\lim_{k\to\infty} c\, z_k = c \cdot \lim_{k\to\infty} z_k.$$ Wouldn't calculus of complex numbers be just a copy of calculus on the plane? No, not with the possibility of multiplication taken into account. 
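As a small numerical illustration of the definitions above (before we turn to multiplication), the spiraling sequence from the example can be tabulated; its modulus is exactly $1/k$ and visibly tends to $0$, and so do its real and imaginary parts, in line with the Component-wise Convergence theorem. A minimal Python sketch, purely illustrative:

```python
# Illustrative only: the spiraling sequence z_k = cos(k)/k + i*sin(k)/k
# from the example above converges to 0, component-wise and in modulus.
import math

for k in [1, 10, 100, 1000]:
    z_k = complex(math.cos(k) / k, math.sin(k) / k)
    print(k, z_k.real, z_k.imag, abs(z_k))   # abs(z_k) is the modulus ||z_k||, equal to 1/k
```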
Multiplication of complex numbers: ${\bf C}$ isn't just ${\bf R}^2$ Just like in ${\bf R}^2$, multiplication by a real number $r$ will stretch/shrink all vectors and, therefore, the complex plane ${\bf C}$. However, multiplication by a complex number $c$ will also rotate each vector. The imaginary part of $c$ is responsible for rotation. For example, multiplication by $i$ rotates the number by $90$ degrees: $1$ becomes $i$, while $i$ becomes $-1$ etc. How does multiplication affect topology? Theorem (Product Rule). If sequences $z_k ,u_k$ converge then so does $z_k \cdot u_k$, and $$\lim_{k\to\infty} (z_k \cdot u_k) = \lim_{k\to\infty} z_k \cdot \lim_{k\to\infty} u_k.$$ Proof. Suppose $$z_k=a_k+b_ki\to a+bi \text{ and } u_k=p_k+q_ki\to p+qi.$$ Then, according to the Component-wise Convergence Theorem above, we have: $$a_k\to a,\ b_k \to b \text{ and } p_k\to p,\ q_k\to q.$$ Then, by the Product Rule for numerical sequences, we have: $$a_kp_k\to ap,\ a_kq_k\to aq,\ b_kp_k\to bp,\ b_kq_k\to bq.$$ Then, as we know, $$z_k \cdot u_k=(a_kp_k-b_kq_k)+(a_kq_k+b_kp_k)i\to (ap-bq)+(aq+bp)i=(a+bi)(p+qi),$$ by the Sum Rule for numerical sequences. $\blacksquare$ Theorem (Quotient Rule). If sequences $z_k ,\ u_k$ converge (with $u_k\ne 0$) then so does $z_k / u_k$, and $$\lim_{k\to\infty} \frac{z_k}{ u_k} = \frac{\lim_{k\to\infty} z_k }{ \lim_{k\to\infty} u_k},$$ provided $$\lim_{k\to\infty} u_k\ne 0.$$ Just like real numbers! Exercise. Prove the last theorem. In addition to the standard, Cartesian, representation, a complex number $z=a+bi$ can be defined in terms of its magnitude and its angle with the $x$-axis via the polar coordinates. The former is called the modulus of $z$, denoted by: $$||z||=\sqrt{a^2+b^2}.$$ The latter is called the argument of $z$, denoted by: $$\operatorname{arg}(z)=\arctan{\frac{b}{a}}.$$ Any two real numbers $r\ge 0$ and $0\le \alpha< 2\pi$ can serve as those: $$z=r\big[ \cos \alpha+i\sin \alpha \big].$$ The algebra takes a new form too. We don't need the new representation to compute addition and multiplication by real numbers, but we need it for multiplication. What is the product of two complex numbers: $$z_1 = r_1 \big[\cos\varphi_1 + i \sin\varphi_1\big] \text{ and } z_2 = r_2 \big[\cos\varphi_2 + i \sin\varphi_2\big]?$$ Consider: $$\begin{array}{lll} z_1z_2 &= r_1 \big[\cos\varphi_1 + i \sin\varphi_1 \big]\cdot r_2 \big[\cos\varphi_2 + i \sin\varphi_2 \big]\\ &=r_1r_2\big[ \cos\varphi_1 + i \sin\varphi_1\big]\cdot \big[\cos\varphi_2 + i \sin\varphi_2\big]\\ &=r_1r_2\big( \cos\varphi_1\cos\varphi_2 + i \sin\varphi_1\cos\varphi_2+ i\cos\varphi_1\sin\varphi_2 + i^2 \sin\varphi_1\sin\varphi_2\big)\\ \end{array}$$ We utilize the following trigonometric identities $$\cos a\cos b - \sin a \sin b = \cos(a + b)$$ and $$\cos a \sin b + \sin a\cos b = \sin(a + b).$$ Then, $$z_1 z_2 = r_1 r_2 \big[ \cos(\varphi_1 + \varphi_2) + i \sin(\varphi_1 + \varphi_2) \big].$$ In other words, the moduli are multiplied and the arguments are added. Exercise. Prove that the two sequences comprised of the moduli and the arguments of the terms of a sequence converge when the sequence converges. Complex functions A complex function is simply a function with both input and output complex numbers: $$F:{\bf C}\to{\bf C}.$$ How do we visualize these functions? The graph of a function $F$ lies in the $4$-dimensional space and isn't of much help! To begin with, we can recast some of the transformations of the plane presented in this chapter as complex functions. Example.
The shift by vector $V=<a,b>$ becomes addition of a fixed complex number: $$F(x,y)=(x+a,y+b) \ \leadsto\ F(z)=z+z_0,$$ where $z_0=a+bi$. Example. The vertical flip becomes conjugation: $$F(x,y)=(x,-y)\ \leadsto\ F(z)=\bar{z}.$$ Exercise. Find a complex formula for the horizontal flip: $$F(x,y)=(-x,y).$$ Example. The uniform stretch becomes multiplication by a real number: $$F(x,y)=(kx,ky)\ \leadsto\ F(z)=kz.$$ Of course, the flip about the origin is just multiplication by $-1$. $\square$ Exercise. Find complex formulas for the vertical stretch: $$F(x,y)=(x,ky)$$ and the horizontal stretch: $$F(x,y)=(kx,y).$$ Example. A rotation is carried out via a multiplication by a fixed complex number: $$F(z)=z_0z.$$ Specifically, it has to be a number with modulus equal to $1$: $$z_0=\cos \alpha +i\sin \alpha.$$ Meanwhile, with $z_0=2i$, we have the $90$ degrees rotation with a stretch with a factor of $2$: For any complex function, we represent both the independent variable and the dependent variable in terms of their real and imaginary parts, just as vector functions: $x = u + iv$, and $z = F(x) = f (u,v) + ig (u,v)$, where $u,\ v$ are real numbers and $f(u,v),\ g(u,v)$ are real-valued functions. The two component functions, $f$ and $g$, can be plotted as well as the argument, $\operatorname {arg}F$, and the module, $||F||$. Example. The projections on the $x$- and $y$-axes are these: $$F(z)=\operatorname{Re} z \text{ and }F(z)=\operatorname{Im} z.$$ Example. Example. Let's consider a quadratic polynomial again: $$f(x)=x^2+px+q.$$ Recall that increasing the value of $q$ makes the graph of $y=f(x)$ shift upward: its two $x$-intercepts start to get closer to each other, then merge, and finally disappear: Note that an identical result is seen in a seemingly different situation. Suppose we have a paraboloid, then it produces a parabola as it is cut by a vertical plane. Suppose the paraboloid is moving horizontally. If it is fading away, the parabola is moving upward. This illustrates what happens when our quadratic polynomial is seen as a function of a complex variable. We are plotting the real part of the function. $\square$ Warning: visualizing $F(x)$ via those functions, or as a vector field, may be misleading. Complex functions are transformations of the complex plane! Example. This is a visualization of the power function over complex numbers. For several values of $z$, the values $$z,z^2,z^3,...$$ are plotted as sequences. One can see how the real part of $z$ makes the multiplication by $z$ stretch or shrink the number while the imaginary part of $x$ is responsible for rotating the number around $0$. A special, square path is produced by $z=i$. $\square$ The definition is a copy of the one from Chapter 6. Definition. The limit of a function $z=F(x)$ at a point $x=a$ is defined to be the limit $$\lim_{n\to \infty} F(x_n)$$ considered for all sequences $\{x_n\}$ within the domain of $F$ excluding $a$ that converge to $a$, $$a\ne x_n\to a \text{ as } n\to \infty,$$ when all these limits exist and are equal to each other. In that case, we use the notation: $$\lim_{x\to a} F(x).$$ Otherwise, the limit does not exist. Theorem (Locality). Suppose two functions $f$ and $g$ coincide in the vicinity of point $a$: $$f(x)=g(x) \text{ for all }x \text{ with } ||x-a|| <\varepsilon,$$ for some $\varepsilon >0$. Then, their limits at $a$ coincide too: $$\lim_{x\to a} f(x) =\lim_{x\to a} g(x).$$ Limits under algebraic operations... 
We will use the algebraic properties of the limits of sequences to prove virtually identical facts about limits of functions. Let's re-write the main algebraic properties using the alternative notation. Theorem (Algebra of Limits of Sequences). Suppose $a_n\to a$ and $b_n\to b$. Then $$\begin{array}{|ll|ll|} \hline \text{SR: }& a_n + b_n\to a + b& \text{CMR: }& c\cdot a_n\to ca& \text{ for any complex }c\\ \text{PR: }& a_n \cdot b_n\to ab& \text{QR: }& a_n/b_n\to a/b &\text{ provided }b\ne 0\\ \hline \end{array}$$ Each property is matched by its analog for functions. Theorem (Algebra of Limits of Functions). Suppose $f(x)\to F$ and $g(x)\to G$ as $x\to a$ . Then $$\begin{array}{|ll|ll|} \hline \text{SR: }& f(x)+g(x)\to F+G & \text{CMR: }& c\cdot f(x)\to cF& \text{ for any complex }c\\ \text{PR: }& f(x)\cdot g(x)\to FG& \text{QR: }& f(x)/g(x)\to F/G &\text{ provided }G\ne 0\\ \hline \end{array}$$ Definition. A function $f$ is called continuous at point $a$ if $f(x)$ is defined at $x=a$, the limit of $f$ exists at $a$, and the two are equal to each other: $$\lim_{x\to a}f(x)=f(a).$$ Thus, the limits of continuous functions can be found by substitution. Equivalently, a function $f$ is continuous at $a$ if $$\lim_{n\to \infty}f(x_n)=f(a),$$ for any sequence $x_n\to a$. A typical function we encounter is continuous at every point of its domain. The most important class of continuous functions is the following. Theorem. Every polynomial is continuous at every point. Unlike vector functions, complex functions have more operations to worry about. The theorem follows from the following algebraic result. Theorem (Algebra of Continuity). Suppose $f$ and $g$ are continuous at $x=a$. Then so are the following functions: $$\begin{array}{|ll|ll|} \hline \text{SR: }& f+g & \text{CMR: }& c\cdot f& \text{ for any complex }c\\ \text{PR: }& f\cdot g& \text{QR: }& f/g &\text{ provided }g(a)\ne 0\\ \hline \end{array}$$ Complex linear operators Let's consider linear operators again. Suppose $A$ is a linear operator with a $2\times 2$ matrix with real entries. Suppose $D$ is the discriminant of the characteristic polynomial of $A$. When $D>0$, we have two distinct real roots covered by the Classification Theorem of Linear Operators with real eigenvalues. We also saw the transitional case when $D=0$. Thus, we transition to the case $D<0$ when the eigenvalues -- as roots of the characteristic polynomial -- are complex. We already know that complex numbers are just as good (or better!) than the real, so why not include this possibility? Example. This is how we find the characteristic polynomial for the rotation and find the eigenvalues by solving it: $$A = \left[\begin{array}{} 0 & -1 \\ 1 & 0 \end{array}\right]\ \Longrightarrow\ \det \left( \left[\begin{array}{} 0 & -1 \\ 1 & 0 \end{array}\right] - \lambda \left[\begin{array}{} 1 & 0 \\ 0 & 1 \end{array}\right] \right) = \det \left[\begin{array}{} -\lambda & -1 \\ 1 & -\lambda \end{array}\right] = \lambda^2 + 1 = 0\ \Longrightarrow\ \lambda = \pm i.$$ The eigenvalues are imaginary! Let's notice though that no real vector multiplied by an imaginary number can produce a real vector... Indeed, let's try to find the eigenvectors; solve the matrix equation $AV=\lambda V$ for $V$ in ${\bf R}^2$. In other words, we need to find real(!) 
$x, y$ such that: $$\left[\begin{array}{} 0 & -1 \\ 1 & 0 \end{array}\right] \left[\begin{array}{} x \\ y \end{array}\right] = i \left[ \begin{array}{} x \\ y \end{array}\right]\ \Longrightarrow\ \begin{cases} -y &= ix\\ x &= iy \end{cases}.$$ Unless both zero, $x$ and $y$ can't be both real... $\square$ So, when the eigenvalues aren't real, there are no real eigenvectors. But why stop here? Why not have all the numbers and vectors and matrices complex? Just as a real $2$-vector is a pair of real numbers, a complex $2$-vector is a pair of complex numbers; for example, $$V=\left[\begin{array}{cc} 2&+&i \\ -1&+&2i \end{array}\right].$$ This representation is illustrated below via the real and imaginary parts of either of the two components: Here, we see the complex plane ${\bf C}$ for the first and then the complex plane ${\bf C}$ for the second component of the vector $V$. This is why we denote the set of all complex $2$-vectors by ${\bf C}^2$. Furthermore, this vector $V$ can be rewritten in terms of its real and imaginary parts: $$V=\left[\begin{array}{cc} 2&+&i \\ -1&+&2i \end{array}\right]=\left[\begin{array}{cc} 2 \\ -1 \end{array}\right]+i\left[\begin{array}{cc} 1 \\ 2 \end{array}\right].$$ Each of these is a real vector and they are illustrated accordingly: The former is of the main interest and it is located in the familiar real plane ${\bf R}^2$. Next, a complex $2\times 2$ matrix $A$ is simply a $2\times 2$ table of complex numbers; for example: $$A=\left[\begin{array}{} 0 & 1+i \\ i & 2-3i \end{array}\right].$$ The algebra of complex numbers -- addition and multiplication -- presented above allows us to carry out the operations -- addition, scalar multiplication, and matrix multiplication -- on these vectors and these matrices! Then, a complex matrix $A$ defines a function: $$A : {\bf C}^2 \to {\bf C}^2,$$ through matrix multiplication: $A(X)=AX$. Moreover, this function is a linear operator in the following sense: $$A(\alpha X+\beta Y)=\alpha A(X)+\beta A(Y),$$ for any complex numbers $\alpha$ and $\beta$ and any complex vectors $X$ and $Y$. But how can we use these functions to understand real linear operators? Its inputs and outputs are complex vectors and they have real parts as discussed above. So, we can restrict the domain of a complex linear operator to the real plane first and then take the real part of the output. The result is a familiar real linear operator: $$B : {\bf R}^2 \to {\bf R}^2,$$ the real part of the complex linear operator $A$. Let's review our theory generalized this way. Definition. The determinant of a complex $2\times 2$ matrix is defined to be the following complex number: $$\det \left[\begin{array}{} a & b \\ c & d \end{array}\right]=ad-bc.$$ Theorem. Suppose $A$ is a complex $2\times 2$ matrix. Then, $\det A\ne 0$ if and only if the solution set of the matrix equation $AX=0$ consists of only $0$. Definition. Given a linear operator $A : {\bf C}^2 \to {\bf C}^2$, a (complex) number $\lambda$ is called an eigenvalue of $A$ if $$A(V)=\lambda V,$$ for some non-zero vector $V$ in ${\bf C}^2$. Then, $V$ is called an eigenvector of $A$ corresponding to $\lambda$. Definition. For a (complex) eigenvalue $\lambda$ of $A$, the eigenspace of a complex linear operator $A$ corresponding to $\lambda$ is defined and denoted by the following set in ${\bf C}^2$: $$E(\lambda) = \{ V :\ A(V)=\lambda V \}.$$ It is a very important fact that all the computations that we have performed on real matrices and vectors remain valid! 
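The same computations can be carried out numerically. Below is a minimal sketch, assuming the NumPy library is available (it is not part of the text's development): the eigenvalue computation for the $90$ degree rotation, carried out over the complex numbers.

```python
# A minimal sketch (assuming NumPy): the real rotation matrix acquires
# complex eigenvalues and complex eigenvectors once we work over C.
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])            # the rotation through 90 degrees

values, vectors = np.linalg.eig(A)     # complex eigenvalues and eigenvectors
print(values)                          # approximately [0.+1.j, 0.-1.j], i.e. +i and -i

for k in range(2):
    V = vectors[:, k]                  # the k-th eigenvector, a vector in C^2
    print(np.allclose(A @ V, values[k] * V))   # verifies A V = lambda V
```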
Among the results that remain valid is the Classification of Linear Operators with real eigenvalues. Linear operators represented by real matrices however will remain our exclusive interest. Every linear operator represented by a real matrix $A$ is still just a special case of a complex operator $A : {\bf C}^2 \to {\bf C}^2$. In fact, it will always have some complex (non-real) vectors among its values, unless $A=0$. Its characteristic polynomial has real coefficients and, therefore, the Classification Theorem of Quadratic Polynomials presented in this chapter applies, as follows. Theorem (Classification Theorem of Eigenvalues). Suppose $A$ is a linear operator represented by a real $2\times 2$ matrix $A$ and $D$ is the discriminant of its characteristic polynomial. Then the eigenvalues $\lambda_1,\ \lambda_2$ of $A$ fall into one of the following three categories: if $D>0$, then $\lambda_1,\ \lambda_2$ are distinct real; if $D=0$, then $\lambda_1,\ \lambda_2$ are equal real; if $D<0$, then $\lambda_1,\ \lambda_2$ are complex conjugate. We only need to address the last case. Linear operators with complex eigenvalues All numbers below are complex unless stated otherwise. The first thing we notice about the case when the eigenvalues of a linear operator aren't real is that there are no real eigenvectors. There will be no eigenspaces shown in the real plane shown in the examples below. Example (rotations). Consider a rotation through $90$ degrees counterclockwise again: $$\left[ \begin{array}{} u \\ v \end{array}\right] = \left[\begin{array}{} 0&-1\\1&0 \end{array} \right] \, \left[ \begin{array}{} x \\ y \end{array} \right]\ \Longrightarrow\ \lambda_{1,2}=\pm i.$$ To find the first eigenvector, we solve: $$FV_1=\left[ \begin{array}{ccc} 0&-1\\ 1&0 \end{array}\right]\left[\begin{array}{cc} x\\ y \end{array}\right]=i\left[\begin{array}{cc} x\\ y \end{array}\right].$$ This gives as the following system of linear equations: $$\begin{cases} &-y&=ix\\ x&&=iy \end{cases} \ \Longrightarrow\ y=-ix.$$ We choose a complex eigenvector: $$V_1=\left[\begin{array}{cc} 1\\ -i \end{array}\right],$$ and similarly an eigenvector for the second eigenvalue: $$V_2=\left[\begin{array}{cc} 1\\ i \end{array}\right].$$ The rest are (complex) multiples of these. So, under $F$, complex vector $V_k$ is multiplied by $\lambda_k$, and so is every of its multiples, $k=1,2$. Since these multiples are complex, this multiplication rotates (the vector of the geometric representation of) either component of this vector. Furthermore, the real part of this vector is also rotated -- on the real plane. It is this rotation that we are interested in. 
It is shown in the second row below: The general value is a linear combination -- over the complex numbers -- of our two eigenvectors: $$X=\alpha \, V_1+\beta\, V_2\ \Longrightarrow\ FX =\alpha i\left[\begin{array}{cc}1\\-i\end{array}\right]+\beta(-i)\left[\begin{array}{cc}1\\i\end{array}\right].$$ $\square$ Let's consider a rotation through an arbitrary angle $\theta$: $$\left[\begin{array}{} u \\ v \end{array}\right] = \left[\begin{array}{rrr} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array} \right] \, \left[\begin{array}{} x \\ y \end{array}\right].$$ Then, we have $$\chi_A(\lambda)=(\cos\theta-\lambda)^2+\sin^2\theta =\cos^2\theta-2\cos\theta\,\lambda+\lambda^2+\sin^2\theta =\lambda^2-2\cos\theta\,\lambda+1.$$ Therefore, $$\lambda_{1,2}=\frac{2\cos\theta \pm \sqrt{(2\cos\theta)^2-4}}{2} =\frac{2\cos\theta \pm 2\sqrt{\cos^2\theta-1}}{2} =\cos\theta \pm \sqrt{-\sin^2\theta} =\cos\theta \pm i\sin\theta.$$ So, the argument of the complex eigenvalues is equal to the angle of rotation (up to a sign): $$|\operatorname{arg}\lambda_{1,2}|=|\theta|.$$ The eigenvectors are the same as in the last example: $$V_1=\left[\begin{array}{cc}1\\-i\end{array}\right] \text{ and }V_2=\left[\begin{array}{cc}1\\i\end{array}\right].$$ Indeed, they are rotated through multiplication by the eigenvalues: $$\lambda_k V_k=(\cos\theta \pm i\sin\theta)\left[\begin{array}{cc} 1\\ \mp i \end{array}\right] =\left[\begin{array}{cc} \cos\theta \pm i\sin\theta\\ \sin\theta \mp i\cos\theta \end{array}\right]=\left[\begin{array}{rrr} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array}\right]\left[\begin{array}{cc} 1\\ \mp i \end{array}\right].$$ Once again, we see how the real part of a complex eigenvector is rotated via complex multiplication: Warning: this does not apply to vectors in ${\bf C}^2$ that aren't eigenvectors. Example (rotation with stretch-shrink). Let's consider this linear operator: $$\begin{cases} u&=3x&-13y,\\ v&=5x&+y\end{cases} \text{ and }F=\left[ \begin{array}{cc} 3&-13\\ 5&1 \end{array}\right].$$ Our analysis starts with the characteristic polynomial: $$\chi(\lambda)=\det (F-\lambda I)=\det \left[ \begin{array}{cc}3-\lambda&-13\\5&1-\lambda\end{array}\right]=\lambda^2-4\lambda+68.$$ We find the eigenvalues from the Quadratic Formula: $$\lambda_{1,2}=2\pm 8i.$$ Now we find the eigenvectors.
We solve the two equations: $$FV_k=\lambda_k V_k,\ k=1,2.$$ The first: $$FV_1=\left[ \begin{array}{cc} 3&-13\\ 5&1 \end{array}\right]\left[\begin{array}{cc} x\\ y \end{array}\right]=(2+8i)\left[\begin{array}{cc} x\\ y \end{array}\right].$$ This gives us the following system of linear equations: $$\begin{cases} 3x&-13y&=(2+8i)x\\ 5x&+y&=(2+8i)y \end{cases} \ \Longrightarrow\ \begin{cases} (1-8i)x&-13y&=0\\ 5x&+(-1-8i)y&=0 \end{cases} \ \Longrightarrow\ x=\frac{(1+8i)}{5}y.$$ We choose the first eigenvector to be: $$V_1=\left[\begin{array}{cc} 1+8i\\ 5 \end{array}\right].$$ The second eigenvalue satisfies: $$FV_2=\left[ \begin{array}{cc} 3&-13\\ 5&1 \end{array}\right]\left[\begin{array}{cc} x\\ y \end{array}\right]=(2- 8i)\left[\begin{array}{cc} x\\ y \end{array}\right].$$ We have the following system: $$\begin{cases} 3x&-13y&=(2-8i)x\\ 5x&+y&=(2-8i)y \end{cases} \ \Longrightarrow\ \begin{cases} (1+8i)x&-13y&=0\\ 5x&+(-1+8i)y&=0 \end{cases}\ \Longrightarrow\ x=\frac{(1-8i)}{5}y.$$ We choose the second eigenvector to be: $$V_2=\left[\begin{array}{cc} 1-8i\\ 5 \end{array}\right].$$ The general complex value is a linear combination of the two: $$X=\alpha\, V_1+\beta\, V_2\ \Longrightarrow\ FX=\alpha(2+8i)\left[\begin{array}{cc}1+8i\\5\end{array}\right]+\beta(2-8i)\left[\begin{array}{cc}1-8i\\5\end{array}\right].$$ We know the effect of $F$ on these two vectors: they are rotated and stretched. As you can see, stretching is the same for both components and both basis vectors. That is why we have -- in addition to the rotation -- a uniform stretch (re-scaling) for the real plane. This is shown in the second row. This is the algebra for the above illustration: $$\lambda_1V_1=(2+8i)\left[\begin{array}{cc}1+8i\\5\end{array}\right]=\left[\begin{array}{cc}-62+24i\\10+40i\end{array}\right] \text{ and } \lambda_2V_2=(2-8i)\left[\begin{array}{cc}1-8i\\5\end{array}\right]=\left[\begin{array}{cc}-62-24i\\10-40i\end{array}\right].$$ $\square$ According to the Classification Theorem of Quadratic Polynomials, when the discriminant $D<0$, the two roots are conjugate: $$\lambda_{1,2}=a\pm bi.$$ They have the same modulus: $$||\lambda_{1,2}||=\sqrt{a^2+ b^2}.$$ This is why multiplying any complex number by either of the two numbers will produce the same rate of stretch. We conclude that ${\bf C}^2$ and, especially, the real plane ${\bf R}^2$ are stretched uniformly! This is the summary of our analysis. Theorem (Classification of linear operators with complex eigenvalues). Suppose a real matrix $F$ has two complex conjugate eigenvalues $\lambda_1$ and $\lambda_2$. Then, the operator $U=FX$ 1. rotates the real plane through the angle $\theta$ that satisfies the following: $$|\theta|=|\operatorname{arg} \lambda_1|=|\operatorname{arg} \lambda_2|,$$ 2. stretches-shrinks the plane uniformly by the following factor: $$s=||\lambda_1||=||\lambda_2||.$$ We can also recast this theorem in exclusively "real" terms. We will need the following result. Theorem. If a quadratic polynomial, $$x^2+px+q,$$ has a non-positive discriminant, $$D=p^2-4q \le 0,$$ then either of its roots, $$x=\frac{1}{2}\big(-p \pm i\sqrt{-D}\, \big),$$ has its argument satisfy: $$\sin\big( \operatorname{arg}x \big)=\frac{1}{2}\sqrt{ 4-\frac{p^2}{q} },$$ and its modulus satisfy: $$||x||^2=q.$$ Proof.
The modulus of the roots is the following: $$||x||^2=\big(\operatorname{Re}x\big)^2+\big(\operatorname{Im}x\big)^2=\frac{1}{4}( p^2 + |D| )=\frac{1}{4}( p^2 -D )=\frac{1}{4}\big( p^2 -( p^2-4q )\big)=q.$$ And their argument satisfies the following: $$\sin\big( \operatorname{arg}x \big)=\frac{ \operatorname{Im}x} {||x||}=\frac{ \frac{1}{2}\sqrt{-D} } { \sqrt{q} }=\frac{1}{2}\sqrt{ \frac{-D}{q} }=\frac{1}{2}\sqrt{ -\frac{p^2-4q}{q} }=\frac{1}{2}\sqrt{ 4-\frac{p^2}{q} }.$$ $\blacksquare$ We define the trace of a matrix as the sum of its diagonal elements: $$\operatorname{tr}\left[ \begin{array}{ll}a&b\\c&d\end{array}\right]=a+d.$$ Then the characteristic polynomial takes this form: $$\begin{array}{lll} \chi(\lambda)&=\det\left[ \begin{array}{cc} a-\lambda & b \\ c & d-\lambda \end{array}\right]\\ &=ad-a\lambda-\lambda d+\lambda^2 -bc\\ &=\lambda^2-(a+d)\lambda+(ad-bc)\\ &=\lambda^2-\operatorname{tr} F\,\lambda+\det F.\\ \end{array}$$ We match this to the theorem above: $$p=-\operatorname{tr} F,\ q=\det F,\ D=(\operatorname{tr} F)^2-4\det F.$$ The roots are $$\lambda_{1,2}=\frac{1}{2}\big(\operatorname{tr} F \pm \sqrt{D} \big).$$ When the roots are complex, the modulus is the following: $$||\lambda_{1,2}||=\sqrt{\det F }.$$ And their argument satisfies the following: $$\sin\big( \operatorname{arg}\lambda_{1,2} \big)=\frac{1}{2} \sqrt{ 4-\frac{(\operatorname{tr} F)^2}{\det F} }.$$ This is the final result. Corollary. Suppose a real matrix $F$ satisfies: $$D=(\operatorname{tr} F)^2-4\det F \le 0.$$ Then, the operator $U=FX$ 1. rotates the real plane through the following angle: $$\theta=\sin^{-1}\left( \frac{1}{2}\sqrt{ 4-\frac{(\operatorname{tr} F)^2}{\det F} } \right);$$ 2. stretches/shrinks the plane uniformly by the following factor: $$s=\sqrt{\det F }.$$ Yes, we have included the transitional case $D=0$! Indeed, it has the same stretch but no rotation. We finally put together our two classification theorems: the behavior of linear operators in terms of the eigenvalues -- real or complex -- of their matrices. We illustrate this classification below in the context of the Classification of Roots of Quadratic Polynomials: Complex calculus In spite of the representation of complex numbers by plane vectors, the formula for the derivative of a complex function doesn't follow the idea of the gradient but rather follows, and is identical to, the definition of the derivative of a real function. We rely on the fact that the algebra is the same even though the numbers are different. Definition. The derivative of a complex function $u=f(x)$ at $x=a$ is defined to be the limit of the difference quotients at $x=a$ as the increment $\Delta x$ is approaching $0$, denoted by: $$f'(a)=\lim_{x\to a}\frac{f(x)-f(a)}{x-a}=\lim_{||x-a||\to 0}\frac{f(x)-f(a)}{x-a};$$ in that case the function is called differentiable at $x=a$. This formula is made possible by the availability of multiplication and division. Warning: The formula is not the same as this: $$\lim_{x\to a}\frac{f(x)-f(a)}{||x-a||}.$$ Warning: The idea of the derivative isn't about the slope, rise over the run, anymore. The derivative is still the instantaneous rate of change of the output relative to the input. Many results are familiar, such as the following. Theorem. If a function is differentiable, it is also continuous. Example. Computations work out in exactly the same manner. Let's compute $f'(2)$ from the definition for $$f(x) = -x^{2} - x. $$ Definition: $$f'(2) = \lim_{h \to 0} \frac{f(2 + h) - f(2)}{h}.
$$ To compute the difference quotient, we need to substitute twice: $$f(2 + h) = -(2 + h)^{2} - (2 + h),\ f(2) = -2^{2} - 2.$$ Now, we substitute into the definition: $$\begin{aligned} f'(2) &= \lim_{h \to 0} \frac{\left[ -(2 + h)^{2} - (2+h) \right] - \left[ -2^{2} - 2 \right]}{h} \\ &= \lim_{h \to 0} \frac{-4 - 4h - h^{2} - 2 - h + 4 + 2}{h} \\ &= \lim_{h \to 0} \frac{-5h - h^{2}}{h} \\ &= \lim_{h \to 0} (-5 -h ) \\ &= -5 - 0 \\ &= -5. \end{aligned}$$ $\square$ Theorem (Integer Power Formula). The derivative of $x^n$ is given by $$\begin{array}{ll} (x^n)'= n x^{n - 1}\\ \end{array}$$ Proof. The proof relies entirely on the formula: $$a^n-b^n=(a-b)(a^{n-1}+a^{n-2}b+...+ab^{n-2}+b^{n-1}).$$ $\blacksquare$ There is a counterpart for each rule of differentiation! Theorem (Algebra of Derivatives). Wherever complex functions $f$ and $g$ are differentiable, we have the following: $$\begin{array}{|ll|ll|} \hline \text{SR: }& (f+g)'=f'+g' & \text{CMR: }& (cf)'=cf'& \text{ for any complex }c\\ \text{PR: }& (fg)'=f'g+fg'& \text{QR: }& (f/g)'=\frac{f'g-fg'}{g^2} &\text{ provided }g\ne 0\\ \hline \end{array}$$ We can differentiate any polynomial easily now. Theorem. For any positive integer $n$ and any complex numbers $a_0,...,a_n$, we have the following: $$\begin{array}{lllll} &\big(a_nx^{n} + & a_{n-1}x^{n-1}& + a_{n-2}x^{n-2} &+...&+ a_2x^2&+a_1x&+a_0\big)'\\ =&a_nnx^{n-1} + & a_{n-1}(n-1)x^{n-2}& + a_{n-2}(n-2)x^{n-3} &+...&+ a_22x&+a_1&&. \end{array}$$ As far as applications are concerned, there is no such relation as "less" or "more" among complex numbers! That's why we don't have to worry about: monotonicity, extreme points, concavity, etc. Just as before, reversing differentiation is called integration and the resulting functions are called antiderivatives; $F$ is an antiderivative of $f$ if $F'=f$. There is a counterpart for each rule of integration! Theorem (Algebra of Anti-Derivatives). Wherever $f$ and $g$ are integrable, we have the following: $$\begin{array}{|ll|ll|} \hline \text{SR: }& \int (f+g)\, dx=\int f\, dx+\int g\, dx & \text{CMR: }& \int (cf)\, dx=c\int f\, dx& \text{ for any complex }c\\ \text{PR: }& \int f\, dg=fg-\int g\, df& \text{LCR: }& \int f(mx+b)\, dx=\tfrac{1}{m}\int f(t)\, dt\Big|_{t=mx+b} &\text{ provided }m\ne 0\\ \hline \end{array}$$ Series and power series All the definitions and theorems continue to be identical or virtually identical to the ones in Chapter 15. Definition. For a given sequence $\{z_n\}=\{z_n:\ n=s,s+1,s+2,...\}$ of complex numbers, its sequence of partial sums $\{p_n:\ n=s,s+1,s+2,...\}$ is a sequence defined by the following recursive formula: $$p_s=z_s,\quad p_{n+1}=p_n+z_{n+1},\ n=s,s+1,s+2,....$$ Definition. For a sequence $\{z_n\}$, the limit $S$ of its sequence of partial sums $\{p_n\}$ is called the sum of the sequence or, more commonly, the sum of the series, denoted by: $$S=\sum_{i=s}^\infty z_i=\lim_{n \to \infty} \sum_{i=s}^n z_i.$$ This limit might also be infinite. From the Uniqueness of Limit we derive the following. Theorem (Uniqueness of Sum). A series can have only one limit (finite or infinite); i.e., if $a$ and $b$ are sums of the same series, then $a=b$. From the Component-wise Convergence of Sequences we derive the following. Theorem (Component-wise Convergence of Series). 
A series of complex numbers $\sum_{i=s}^\infty z_i$ converges to a complex number $z$ if and only if both the real and the imaginary parts of the sequence of partial sums of $z_k$ converge to the real and the imaginary parts of $z$ respectively; i.e., $$\sum_{i=s}^\infty z_i= z \ \Longleftrightarrow\ \operatorname{Re}\sum_{i=s}^\infty z_i= \operatorname{Re}(z) \text{ and } \operatorname{Im}\sum_{i=s}^\infty z_i= \operatorname{Im}(z).$$ Definition. For a sequence $\{z_n\}$, the series: $$\sum_{i=s}^\infty z_i,$$ converges absolutely if the series of its moduli, $$\sum_{i=s}^\infty ||z_i||,$$ converges. Now, the algebra of series. Just as before, we can multiply a convergent series by a number term by term. Theorem (Constant Multiple Rule for Series). Suppose $\{s_n\}$ is a sequence. For any integer $a$ and any complex $c$, we have: $$ \sum_{a}^{\infty} (c\cdot s_n) = c \cdot \sum_{a}^{\infty} s_n,$$ provided the series converges. Just as before, we can add two convergent series term by term. Theorem (Sum Rule for Series). Suppose $\{s_n\}$ and $\{t_n\}$ are sequences. For any integer $a$, we have: $$\sum_{a}^{\infty} \left( s_n + t_n \right) = \sum_{a}^{\infty} s_n + \sum_{a}^{\infty} t_n, $$ provided the two series converge. Exercise. Prove these theorems. Definition. The series produced by the geometric progression $a_n=a\cdot r^n$ with ratio $r$ (a complex number), $$\sum_{n=0}^\infty ar^n=a+ar+ar^2+ar^3+ar^4+...+ar^n+...,$$ is called the geometric series with ratio $r$. Its sum is found the same way as for a real variable. Theorem (Sum of Geometric Series). The geometric series with ratio $r$ converges absolutely when $||r||<1$ and diverges when $||r||>1$. In the former case, the sum is: $$\sum_{n=0}^\infty ar^n=\frac{a}{1-r}.$$ The only difference is that the absolute value is replaced with the modulus. Example. This is a visualization of a power series over complex numbers. We consider the series: $$z+z^2+z^3+...$$ For several values of $z$, the sequence and then the partial sums of the series are plotted. We can see how starting from a point inside the disk $||z||< 1$ produces convergence and outside produces divergence: Indeed, the series $$1+z+z^2+z^3+...$$ is a geometric series with the ratio $r=z$. Therefore, it converges for all $||z||<1$, according to the theorem, and diverges for all $||z||>1$. In other words, it converges on the disk of radius $1$ centered at $0$ in ${\bf C}$ and diverges outside of it. This disk is the domain of the function defined by the series. We even have a formula for this function: $$1+z+z^2+z^3+...=\frac{1}{1-z}.$$ The difference between the two is in the domain. $\square$ Example. There is a hint at the idea of complex power series in Chapter 16. Let's compare the Taylor series of sine, cosine and the exponential function. The sine is odd and its Taylor series only includes odd terms: $$\sin x=\sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}x^{2k+1}.$$ The cosine is even and its Taylor series only includes even terms: $$\cos x=\sum_{k=0}^\infty \frac{(-1)^k}{(2k)!}x^{2k}.$$ If we write them one under the other, we see how they complement each other: $$\begin{array}{r|ccccccc} n&0&1&2&3&4&5&...\\ \hline \sin x&&x&&-\frac{x^3}{3!}&&\frac{x^5}{5!}&...\\ \cos x&1&&-\frac{x^2}{2!}&&\frac{x^4}{4!}&&... \end{array}$$ What if we add the exponential function to this? 
$$\begin{array}{r|cccccccc} n&0&1&2&3&4&5&...\\ \hline \sin x&0&x&0&-\frac{x^3}{3!}&0&\frac{x^5}{5!}&...\\ \cos x&1&0&-\frac{x^2}{2!}&0&\frac{x^4}{4!}&0&...\\ \hline e^x&1&x&\frac{x^2}{2!}&\frac{x^3}{3!}&\frac{x^4}{4!}&\frac{x^5}{5!}&... \end{array}$$ This looks almost like addition!.. except for those minus signs. That's where $i$ comes in. Let's substitute $x=it$: $$\begin{align} e^{it} &{}= 1 + it + \frac{(it)^2}{2!} + \frac{(it)^3}{3!} + \frac{(it)^4}{4!} + \frac{(it)^5}{5!} + \cdots \\[8pt] &{}= 1 + it - \frac{t^2}{2!} - \frac{it^3}{3!} + \frac{t^4}{4!} + \frac{it^5}{5!} - \cdots \\[8pt] &{}= \left( 1 - \frac{t^2}{2!} + \frac{t^4}{4!} - \cdots \right) + i\left( t - \frac{t^3}{3!} + \frac{t^5}{5!} - \cdots \right) \\[8pt] &{}= \cos t + i\sin t . \end{align}$$ $\square$ It is called Euler's formula. More general is the following: $$e^{a+bi} = e^a(\cos b + i\sin b) .$$ In complex calculus, functions are more interrelated! Exercise. Show that $\sin^2 x+\cos^2 x=1$. Definition. A sequence $\{q_n\}$ of polynomials given by a recursive formula: $$q_0(x)=c_0,\quad q_{k+1}(x)=q_k(x)+c_{k+1}(x-a)^{k+1},\ k=0,1,2,...,$$ for some fixed (complex) number $a$ and a sequence of (complex) numbers $\{c_k\}$, is called a power series centered at $a$. The function represented by the limit of $q_n$ is called the sum of the series, written as: $$f(x)=c_0+c_1(x-a)+c_2(x-a)^2+...=\sum_k c_k(x-a)^k=\lim_{k\to \infty} q_k(x),$$ for all $x$ for which the limit exists. Then the three power series in the example above may serve as definitions of these three functions of a complex variable: $$\begin{array}{r|cccccccc} n&0&1&2&3&4&5&&...\\ \hline \sin x=&&x&&-\frac{x^3}{3!}&&+\frac{x^5}{5!}&+&...\\ \cos x=&1&&-\frac{x^2}{2!}&&+\frac{x^4}{4!}&&-&...\\ e^x=&1&+x&+\frac{x^2}{2!}&+\frac{x^3}{3!}&+\frac{x^4}{4!}&+\frac{x^5}{5!}&+&..., \end{array}$$ pending the proof of their convergence. We will accept the following without proof. Theorem (The Weierstrass M-Test). Consider the power series $$\sum_{n=0}^{\infty} c_n(z-a)^n.$$ If there exists a sequence of non-negative real numbers $\{M_n\}$ such that $$|c_n(z-a)^n|\le M_n,\ n=0,1,2,...,$$ for all $z$ in an open disk $D$ around $a$ and such that the series $\sum_{n=0}^{\infty}M_n$ (of real numbers) converges, then the power series converges uniformly on $D$. Convergence of the three series above follows. Definition. The greatest lower bound $r$ (that could be infinite) of the distances from $a$ to a point for which a power series centered at $a$ diverges is called the radius of convergence of the series. This definition is legitimate according to the Existence of $\sup$ Theorem from Chapter 5. We finally can see where the word "radius" comes from. Theorem. Suppose $r$ is the radius of convergence of a power series $$\sum_{n=0}^\infty c_n(x-a)^n.$$ Then, 1. when $r<\infty$, the domain of the series is a disk $B(a,r)$ in ${\bf C}$ of radius $r$ centered at $a$ with some points on its boundary possibly included, and 2. when $r=\infty$, the domain of the series is the whole ${\bf C}$. Example. $$\begin{array}{ll|c} \text{series}&\text{sum}&\text{domain}\\ \hline \sum_{k=0}^\infty x^k&=\frac{1}{1-x}&B(0,1)\\ \sum_{k=0}^\infty \frac{1}{k!}x^k&=e^x&{\bf C}\\ \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}x^{2k+1}&=\sin x&{\bf C}\\ \sum_{k=0}^\infty \frac{(-1)^k}{(2k)!}x^{2k}&=\cos x&{\bf C} \end{array}$$ $\square$ Solving ODEs with power series Recall from Chapter 16 the following important results. Theorem (Uniqueness of Power Series). 
If two power series are equal, as functions, on an open interval $(a-r,a+r),\ r>0$, then their corresponding coefficients are equal, i.e., $$\sum_{n=0}^{\infty} c_n(x-a)^n=\sum_{n=0}^{\infty} d_n(x-a)^n \text{ for all } a-r<x<a+r\ \Longrightarrow\ c_n=d_n \text{ for all } n=0,1,2,3...$$ Theorem (Term-by-Term Differentiation and Integration). Suppose $R>0$ is the radius of convergence of a power series $$f(x)=\sum_{n=0}^{\infty} a_n(x-a)^n.$$ Then the function $f$ represented by this power series is differentiable (and, therefore, integrable) on the open disk $|x-a|<R$ and the power series representations of its derivative and its antiderivative converge on this disk and are found by term by term differentiation and integration of the power series of $f$, i.e., $$f'(x)=\left( \sum_{n=0}^{\infty} c_n(x-a)^n \right)'=\sum_{n=0}^{\infty} \left( c_n(x-a)^n \right)'=\sum_{n=1}^{\infty} nc_n(x-a)^{n-1},$$ and $$\int f(x)\, dx=\int \left(\sum_{n=0}^{\infty} c_n(x-a)^n\right)\, dx=\sum_{n=0}^{\infty} \int c_n(x-a)^n\, dx= \sum_{n=0}^{\infty} \frac{c_n}{n+1}(x-a)^{n+1}.$$ Example (ODEs of first order). Suppose we need to solve this initial value problem (we pretend we don't know the answer): $$y'=ky, \ y(0)=y_0.$$ The solution is the same as the one presented in Chapter 16. We assume that the unknown function $y=y(x)$ is differentiable and, therefore, is represented by a term-by-term differentiable power series. We differentiate the series and then match the terms according to the equation: $$\begin{array}{ccc} y&=&c_0&+&c_1x&+&c_2x^2&+&c_3x^3&+&...&+&c_nx^n&+&...\\ y'&=&\ &&c_1&+&2c_2x&+&3c_3x^2&+&...&+&nc_nx^{n-1}&+&...\\ \Longrightarrow&&&\swarrow&&\swarrow&&\swarrow&&...&&\swarrow&&\\ y'&=&c_1&+&2c_2x&+&3c_3x^2&+&...&+&nc_nx^{n-1}&+&(n+1)c_{n+1}x^{n}&+...\\ ||&&||&&||&&||&&&&||&&||\\ k\cdot y&=&kc_0&+&kc_1x&+&kc_2x^2&+&...&+&kc_{n-1}x^{n-1}&+&kc_{n}x^n&+...\\ \end{array}$$ According to the Uniqueness of Power Series, the coefficients have to match! Thus, we have a sequence of equations: $$\begin{array}{ccccc} &c_1&2c_2&3c_3&...&(n+1)c_{n+1}&+...\\ &||&||&||&&||\\ &kc_0&kc_1&kc_2&...&kc_n&+...\\ \end{array}$$ We can start solving these equations from left to right: $$\begin{array}{llllllll} &c_1&\Longrightarrow\ c_1=kc_0 & &2c_2&\Longrightarrow\ c_2=k^2c_0/2& &3c_3&...\\ &||&&&||&&&||&&\\ &kc_0& &\Longrightarrow&kc_1=k^2c_0& &\Longrightarrow&kc_2=k^3c_0/2&...\\ \end{array}$$ The condition $y(0)=y_0$ means that $c_0=y_0$. Therefore, $$c_n=y_0\frac{k^n}{n!}.$$ We recognize the resulting series: $$y=\sum_{n=0}^\infty y_0\frac{k^n}{n!}x^n=y_0\sum_{n=0}^\infty \frac{1}{n!}(kx)^n=y_0e^{kx}.$$ Note to get the $n$th term we solve a system of linear equations with the following the augmented matrix: $$\left[ \begin{array}{cccccccc|c} k&-1&0&0&0&...&0&0&0\\ 0&k&-2&0&0&...&0&0&0\\ 0&0&k&-3&0&...&0&0&0\\ ...&...&...&...&...&...&...&...&...\\ 0&0&0&0&0&...&k&-n&0\\ \end{array}\right].$$ $\square$ Warning: recognizing the resulting series isn't to be expected. Exercise. Solve the initial value problem: $$y'=ky+1, \ y(0)=y_0.$$ Example (ODEs of second order). Suppose we need to solve this initial value problem: $$y' '=-y, \ y(0)=y_0, \ y'(0)=v_0.$$ Again, we assume that the unknown function $y=y(x)$ is represented by a term-by-term differentiable power series. 
We differentiate the series twice and then match the terms according to the equation: $$\begin{array}{ccccc} y&=&c_0&+&c_1x&+&c_2x^2&+&c_3x^3&+&c_4x^4&+&c_5x^5&+&...\\ y'&=&\ &&c_1&+&2c_2x&+&3c_3x^2&+&4c_4x^{3}&+&5c_5x^{4}&+&...\\ y' '&=&\ &&\ &+&2c_2&+&3\cdot 2c_3x&+&4\cdot 3c_4x^{2}&+&5\cdot 4c_5x^{3}&+&...\\ \Longrightarrow\\ y' '&=&2c_2&+&3\cdot 2c_3x&+&4\cdot 3c_4x^{2}&+&5\cdot 4c_5x^{3}&+&...\\ ||&&||&&||&&||&&&&\\ -y&=&-c_0&+&-c_1x&+&-c_2x^2&+&-c_3x^3&+&...\\ \end{array}$$ The coefficients have to match: $$\begin{array}{ccccc} &2c_2&3\cdot 2c_3&4\cdot 3c_4&5\cdot 4c_5&...&n(n-1)c_n&...\\ &||&||&||&||\\ &-c_0&-c_1&-c_2&-c_3&...&-c_{n-2}&...\\ \end{array}$$ We can start solving these equations from left to right, odd separate from even. First, for even $n$: $$\begin{array}{rlllllll} &2c_2&\Longrightarrow\ c_2=-c_0/2 & &4\cdot 3c_4&\Longrightarrow\ c_4=c_0/(4\cdot 3\cdot 2)& ...\Longrightarrow &c_n=\pm c_0/n!&...\\ &||&&&||&&&\\ &-c_0& &\Longrightarrow&-c_2=c_0/2& &\\ \end{array}$$ The condition $y(0)=y_0$ means that $c_0=y_0$. Therefore, $$n\text{ even }\Longrightarrow\ c_n=(-1)^{n/2} \frac{y_0}{n!}.$$ We recognize the resulting series: $$\sum_{\text{ even }n=0}^\infty (-1)^{n/2} \frac{y_0}{n!}x^n =y_0\sum_{\text{ even }n=0}^\infty (-1)^{n/2} \frac{1}{n!}x^n= y_0\cos x .$$ Second, for odd $n$: $$\begin{array}{rlllllll} &3\cdot 2c_3&\Longrightarrow\ c_3=-c_1/(3\cdot 2) & &5\cdot 4c_5&\Longrightarrow\ c_5=c_1/(5\cdot 4\cdot 3\cdot 2)&... &\Longrightarrow &c_n=\pm c_1/n!&...\\ &||&&&||&&&\\ &-c_1& &\Longrightarrow&-c_3=c_1/(3\cdot 2)& \\ \end{array}$$ The condition $y'(0)=v_0$ means that $c_1=v_0$. Therefore, $$n\text{ odd }\Longrightarrow\ c_n=(-1)^{(n-1)/2} \frac{v_0}{n!}.$$ We recognize the resulting series: $$\sum_{\text{ odd }n=1}^\infty (-1)^{(n-1)/2} \frac{v_0}{n!}x^n=v_0\sum_{\text{ odd }n=1}^\infty (-1)^{(n-1)/2} \frac{1}{n!}x^n=v_0\sin x.$$ This is the result: $$y=y_0\cos x+v_0\sin x.$$ $\square$ Exercise. Provide the augmented matrix for this system of linear equations. Exercise. Solve the initial value problem: $$y' '+xy'+y=0, \ y(0)=1, \ y'(0)=1.$$ Example. Now let's consider something less recognizable.... When the resulting series isn't recognizable, it is used to approximate the answer via its partial sums. The accuracy of this approximation is given by the error bound for Taylor polynomials given in Chapter 16, as follows. Theorem (Error bound). Suppose a function $y=y(x)$ is $(n+1)$ times differentiable at $x=a$. Suppose also that for each $i=0,1,2,...,n+1$, we have $$|y^{(i)}(t)|<K_i \text{ for all } t \text{ between } a \text{ and } x,$$ and some real number $K_i$. Then $$E_n(x)=|y(x)-T_n(x)|\le K_{n+1}\frac{|x-a|^{n+1}}{(n+1)!},$$ where $T_n$ is the $n$th Taylor polynomial of $y$.
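The coefficient recurrences above are easy to check numerically. Below is a minimal sketch (assuming Python with NumPy, which is not part of the text above) that builds the coefficients for $y''=-y$ from the recurrence $n(n-1)c_n=-c_{n-2}$ and compares the resulting partial sum against $y_0\cos x+v_0\sin x$.

```python
import numpy as np

def series_coefficients(y0, v0, num_terms=30):
    """Coefficients c_n for y'' = -y with y(0) = y0, y'(0) = v0,
    built from the recurrence n(n-1) c_n = -c_{n-2}."""
    c = np.zeros(num_terms)
    c[0], c[1] = y0, v0
    for n in range(2, num_terms):
        c[n] = -c[n - 2] / (n * (n - 1))
    return c

y0, v0 = 2.0, -1.0
c = series_coefficients(y0, v0)
x = np.linspace(-3.0, 3.0, 7)
partial_sum = sum(c[n] * x**n for n in range(len(c)))
exact = y0 * np.cos(x) + v0 * np.sin(x)
print(np.max(np.abs(partial_sum - exact)))  # close to machine precision on this interval
```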
Proving $ 1+\frac{1}{4}+\frac{1}{9}+\cdots+\frac{1}{n^2}\leq 2-\frac{1}{n}$ for all $n\geq 2$ by induction Let $P(n)$ be the statement that $1+\dfrac{1}{4}+\dfrac{1}{9}+\cdots +\dfrac{1}{n^2} <2- \dfrac{1}{n}$. Prove by mathematical induction. Use $P(2)$ for base case. Attempt at solution: So I plugged in $P(2)$ for the base case, providing me with $\dfrac{1}{4} < \dfrac{3}{2}$ , which is true. I assume $P(n)$ is true, so I need to prove $P(k) \implies P(k+1)$. So $\dfrac{1}{(k+1)^2} < 2 - \dfrac{1}{k+1}$. I don't know where to go from here, do I assume that by the Inductive hypothesis that it's true? discrete-mathematics inequality summation induction Donald DangDonald Dang $\begingroup$ In $P(n)$, do you mean that the sum is $< n - 1/n$, rather than $<1/n$? Although as I look closer, it's probably $2 - 1/n$ on the right. $\endgroup$ – pjs36 Apr 4 '15 at 19:34 $\begingroup$ Yes it is. Sorry for the typo. Its 2 - 1/n $\endgroup$ – Donald Dang Apr 4 '15 at 19:37 $\begingroup$ The logic in the line "I assume $P(n)$ is true, so I need to prove $P(k)\Rightarrow P(k+1)$." is awkward. It should read: "I need to prove $P(k)\Rightarrow P(k+1)$, so I assume that $P(n)$ is true for $n=k$. $\endgroup$ – Michael Burr Apr 4 '15 at 19:57 $\begingroup$ The line "So $\frac{1}{(k+1)^2}<2-\frac{1}{k+1}$" doesn't seem to follow from the previous steps (even though it is true). Can you show how you conclude that? $\endgroup$ – Michael Burr Apr 4 '15 at 20:05 For $n\geq 2$, let $S(n)$ denote the statement $$ S(n) : 1+\frac{1}{4}+\frac{1}{9}+\cdots+\frac{1}{n^2}\leq 2-\frac{1}{n}. $$ Base step ($n=2$): $S(2)$ says that $1+\frac{1}{4}=\frac{5}{4}\leq\frac{3}{2}= 2-\frac{1}{2}$, and this is true. Inductive step: Fix some $k\geq 2$ and suppose that $S(k)$ is true. It remains to show that $$ S(k+1) : 1+\frac{1}{4}+\frac{1}{9}+\cdots+\frac{1}{k^2}+\frac{1}{(k+1)^2}\leq 2-\frac{1}{k+1} $$ holds. Starting with the left-hand side of $S(k+1)$, \begin{align} 1+\frac{1}{4}+\cdots+\frac{1}{k^2}+\frac{1}{(k+1)^2} &\leq 2-\frac{1}{k}+\frac{1}{(k+1)^2}\quad\text{(by $S(k)$)}\\[1em] &= 2-\frac{1}{k+1}\left(\frac{k+1}{k}-\frac{1}{k+1}\right)\\[1em] &= 2-\frac{1}{k+1}\left(\frac{k^2+k+1}{k(k+1)}\right)\tag{simplify}\\[1em] &< 2-\frac{1}{k+1}.\tag{$\dagger$} \end{align} we see that the right-hand side of $S(k+1)$ follows. Thus, $S(k+1)$ is true, thereby completing the inductive step. By mathematical induction, for any $n\geq 2$, the statement $S(n)$ is true. Addendum: How did I get from the "simplify" step to the $(\dagger)$ step? Well, the numerator is $k^2+k+1$ and the denominator is $k^2+k$. We note that, $k^2+k+1>k^2+k$ (this boils down to accepting that $1>0$). Since $\frac{1}{k+1}$ is being multiplied by something greater than $1$, this means that what is being subtracted from $2$ in the "simplify" step is larger than what is being subtracting from $2$ in the $(\dagger)$ step. Note: It really was unnecessary to start your base case at $n=2$. Starting at $n=1$ would have been perfectly fine. Also, note that this exercise shows that the sum of the reciprocals of the squares converges to something at most $2$; in fact, the series converges to $\frac{\pi^2}{6}$. Daniel W. FarlowDaniel W. Farlow 18k1111 gold badges4848 silver badges9090 bronze badges Although OP asks for proof by induction, other answers cover it, so I will add solution through integral estimation. What we will use is integral test for convergence of series, more precisely, the last line in the proof section of Wiki. 
We have estimate $$\sum_{k=1}^n\frac 1{k^2} \leq 1 + \int_1^n \frac{dx}{x^2} = 1 + \left(-\left.\frac 1x\ \right|_1^n \right) = 2 - \frac 1 n$$ (see this for visualization) EnnarEnnar $\begingroup$ I will add that this idea is useful also for estimations for other sums. Some nice pictures related to the harmonic series $\sum_{k=1}^n \frac1k$ can be seen here. $\endgroup$ – Martin Sleziak Oct 25 '15 at 9:31 $\begingroup$ @MartinSleziak, thank you for the link. It serves as much better visual aid than my attempt with Alpha. $\endgroup$ – Ennar Oct 25 '15 at 9:36 Hint: You assume that the statement is true for $n=k$. In other words, $$ 1+\frac{1}{4}+\frac{1}{9}+\cdots+\frac{1}{k^2}<2-\frac{1}{k}. $$ Now, add $\frac{1}{(k+1)^2}$ to both sides to get $$ 1+\frac{1}{4}+\frac{1}{9}+\cdots+\frac{1}{k^2}+\frac{1}{(k+1)^2}<2-\frac{1}{k}+\frac{1}{(k+1)^2}. $$ What you would really like is that $$ 2-\frac{1}{k}+\frac{1}{(k+1)^2}<2-\frac{1}{k+1} $$ because then, by transitivity, your result would hold. So, can you prove that? Michael BurrMichael Burr Your base case is wrong. You should realize it's true since $\frac{5}{4}<\frac{3}{2}$ obtained from $$1+\frac{1}{2^2} = \frac{5}{4} < \frac{3}{2} = 2-\frac{1}{2}$$ For the induction step, suppose $P(n)$ is true for all $n \in \{1,2,\ldots,k\}$. Then $$\begin{align}1+\frac{1}{2^2}+\ldots+\frac{1}{k^2}+\frac{1}{(k+1)^2} = \left(1+\frac{1}{2^2}+\ldots+\frac{1}{k^2}\right)+\frac{1}{(k+1)^2} \\ < \left(2-\frac{1}{k} \right)+\frac{1}{(k+1)^2}\end{align}$$ Now if you can show that $$\left(2-\frac{1}{k} \right)+\frac{1}{(k+1)^2}<2-\frac{1}{k+1}$$ you are done. It is possible to get to that inequality simply starting with the fact that $\frac{1}{k+1}<\frac{1}{k}$ graydadgraydad Notice that for $k\ge2$ you have $$\frac1{k^2} \le \frac1{k(k-1)} = \frac1{k-1} - \frac{1}{k}.$$ Using this we can get $$\sum_{k=1}^{n} \frac1{k^2} = 1+\frac1{2^2}+\frac1{3^2}+\dots+\frac1{n^2} \le 1+\frac1{2\cdot 1}+\frac1{3\cdot 2}+\dots+\frac1{n(n-1)} \le\\\le 1+\left(1-\frac12\right)+\left(\frac12-\frac13\right)+\dots+\left(\frac1{n-1}-\frac1n\right).$$ Notice that many terms cancel out and in the end we get $$\sum_{k=1}^n \frac1{k^2} \le 2-\frac1n.$$ This is called telescoping series. (In fact, it is probably the best known telescoping series - it is also mentioned in the Wikipedia article I linked to.) It is not difficult to rewrite this to induction proof. Induction step would be $$\sum_{k=1}^n \frac1{k^2}+\frac1{(n+1)^2} \le \left(2-\frac1n\right) + \frac1{(n+1)^2} \le \left(2-\frac1n\right) + \left(\frac1n -\frac1{n+1}\right) =\\= 2-\frac1{n+1}.$$ (The first inequality is from the induction hypothesis. The second one is from the equality given at the beginning of this post.) As an exercise you might try to prove that $\sum\limits_{k=2}^n \dfrac1{k(k-1)} =1-\dfrac1n$ in a similar way. See also this post and some other posts about the same sum. Martin SleziakMartin Sleziak $\begingroup$ Your cancellation while telescoping isn't correct. The partial sum should be bounded by $2-\frac 1 n$. This is probably the reason why someone downvoted this answer. I will +1 after correction. $\endgroup$ – Ennar Oct 25 '15 at 9:09 $\begingroup$ @Ennar You are right. (Thanks for noticing it.) There were also some other typos (which I have edited before). I have no problem with the downvote, and I can think of plenty other reasons why the post might have been downvoted. $\endgroup$ – Martin Sleziak Oct 25 '15 at 9:26 Not the answer you're looking for? 
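As a quick numerical sanity check of the inequality $\sum_{k=1}^n \frac{1}{k^2}\leq 2-\frac{1}{n}$ discussed above — a sketch in plain Python, not part of any of the answers:

```python
def partial_sum(n):
    return sum(1.0 / k**2 for k in range(1, n + 1))

# 1 + 1/4 + ... + 1/n^2 <= 2 - 1/n, with equality only at n = 1
assert all(partial_sum(n) <= 2.0 - 1.0 / n for n in range(1, 10001))
print(partial_sum(10000), 2.0 - 1.0 / 10000)
```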
Mathematics Project Proposals In the real world, math is about solving unsolved problems, analyzing unfamiliar situations, devising new mathematical systems and tools, exploring unknown landscapes and more — all the while reasoning with clear, rigorous logic. Students of math must ask their own questions and search for their own directions. These skills are essential outside math, but I believe that math research is a good way to train them. With proper guidance, students can potentially discover something truly original. This page proposes math projects of various difficulties to be investigated by high school students. I try to propose unsolved questions; investigators might as well put their efforts into something actually useful. Completely solving unsolved questions may be too much to expect, but I think partial progress (e.g. only works for special cases), even by intuitive means (i.e. not perfectly rigorous) or computer programs, would already be impressive. Top priority for any project would be a thorough literature review to check what has been analyzed and what results are known. That arms the investigator with tools developed by those in the field, avoids reinventing the wheel, and learns what the professionals consider to be promising or important. Each proposal ends with a few suggested readings if it came from existing ideas, but not if I thought it up myself. In both cases, investigators should search further on their own, and ask: What has been done before? What can I try? How can I generalize this? Each proposal ends with a few possible methods of attack or possible directions of inquiry, but I would prefer the investigators find their own way forward independently. Investigators who are lost or stuck can send a request for further suggestions or more advanced directions to [email protected]. Unfortunately I cannot guarantee that my suggestions will lead anywhere at all, but in any case investigators are recommended to think beyond them. Good luck, and I'll be glad to learn of any progress in these projects! Keep me posted, or give any feedback, through the contact form at the bottom. "Good" Dissections Geometric dissections concern the cutting of polygonal shapes into a finite number of pieces and rearranging the pieces into another shape. Here are some sample dissections: *Some diagrams of geometric dissections by Gavin Theobald; click to see many more. Do you like some dissections more than others? What is a "good" dissection? Does it depend on the number of pieces used? Symmetry? Flipping over pieces? Does it depend on the starting and ending shapes? What criteria, or combination of criteria, for "good'' dissections can be formalized and stated using a mathematical language? Using those criteria, write a mathematical procedure (possibly a computer program?) that can tell if dissections are "good" or not. Find large, general families of "good" dissections. Challenging: how do we tell if a dissection uses the fewest number of pieces? Note that the Wallace-Bolyai-Gerwien Theorem guarantees that any two polygons of equal area are equidecomposable, that is, one can be dissected and rearranged into the other. (Frederickson, 1997) and (Lindgren, 1964) survey several types of dissections and the techniques used to generate them, as well some of the history behind these puzzles. (Frederickson, 2002) does the same for special dissections whose pieces can be hinged together. There is probably more literature on this topic that can be found. (Frederickson, 1997) Greg N. 
Frederickson. Dissections: Plane and Fancy. Cambridge University Press (1997). (Frederickson, 2002) Greg N. Frederickson. Hinged Dissections: Swinging and Twisting. Cambridge University Press (2002). (Lindgren, 1964) Harry Lindgren. Geometric Dissections. Princeton, N.J.: D. Van Nostrand (1964). Dissecting Polyhedra The Wallace-Bolyai-Gerwien Theorem guarantees that any two polygons of equal area are equidecomposable, that is, one can be dissected into a finite number of pieces and rearranged into the other. However, the 3D version of this doesn't work; for example, a cube is not equidecomposable with a tetrahedron of the same volume. But some dissections of polyhedra do work, such as the one illustrated below and many more described in (Frederickson, 1997) and (Frederickson, 2002). But can you find large, general families of polyhedron pairs that are equidecomposable? Dissecting a parallelepiped (with rectangular base) into a cuboid. Are right prisms with equal base area and height equidecomposable? Find large, general families of polyhedron pairs that are equidecomposable. Several techniques for creating dissections listed in (Frederickson, 1997), (Frederickson, 2002), (Lindgren, 1964) and further literature will be useful. Light Ray Prediction Suppose a light ray travelling in the plane is trapped in a rectangular box whose walls are mirrors that the light bounces off against. If its speed and initial position and direction are known, how do we predict where the light would have reached after a certain amount of time has passed? Working it out seems hard at first because each bounce must take into account the wall involved, which affects the direction of the deflected light ray, which affects the next wall-bounce, which affects the new direction... the potential scenarios to be accounted for seem to explode. However, the following diagram shows a possible trick: A blue light ray travels within the yellow box, bouncing off the mirrored walls. It starts from the black square and ends at the black circle. Dashed black lines mark out reflected copies of the box, while the red dotted line marks a virtual light ray which goes from the back square to the white triangle. The red virtual light ray marks the path that the light ray would have taken if the box wasn't there. The real light ray and virtual light ray reach the single arrowhead at the same time, then the double arrowhead at the same time, and so on. But while it's hard to find the position of the blue arrowheads, the red arrowheads are rather straightforward to locate! It remains to deduce the positions of the blue arrowheads from the red ones. The key is to realize that each bounce of the real light ray corresponds to the virtual light ray striking a virtual wall (the dashed black lines). The virtual light ray strikes lines A, B, C then D. The following process can transform the path of the virtual light ray into that of the real light ray: Take the part of the path of the virtual light ray on the right of D and reflect it about D. Take the part of the new path above C and reflect it about C. Take the part of the new path to the right of B and reflect it about B. Take the part of the new path to the right of A and reflect it about A. How can this process be used to predict where the light ray has reached after a certain amount of time? In the first place, what variables (e.g. initial position and direction) need to be fixed and known before the prediction can happen? 
What if instead of a light ray, we had a billiard ball bouncing around inside a rectangular table? (Ignore friction and whatnot.) Can the same method be used? If yes, why? If no, how can it be adapted? What if the light ray was bouncing in 3D, inside a cuboid box with mirrored walls? Stretch your imagination and consider the strangest types of boxes in any dimension or shape, and try to adapt the reflection method to them. Note that I have seen the reflection trick inside some book before, so it is probably well-known — all the more important that investigators do a thorough literature review to see what has been tried. Chapter XI of (Berger, 2010) discusses other dimensions of this problem, but does not appear to use the trick. (Berger, 2010) Marcel Berger. Geometry Revealed: A Jacob's Ladder to Modern Higher Geometry. Springer-Verlag Berlin Heidelberg (2010). Determinants in Terms of Matrix Multiplication This came from a previous post. The determinant of a $2 \times 2$ matrix $\mathbf{A}$ has a simple formula:\begin{equation} \det\mathbf{A} = \det\begin{bmatrix} a & b\\ c & d \end{bmatrix} = ad - bc \end{equation}It can be verified that we could also express it as \begin{equation} \det\mathbf{A} = \begin{bmatrix} 0 & -1 \end{bmatrix}\mathbf{A}^\mathsf{T}\begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix}\mathbf{A}\begin{bmatrix} 1\\0 \end{bmatrix}. \end{equation} Verify the formula. Formulate a general version of this kind of formula for a determinant of an $n \times n$ matrix. Laplace expansion would probably be useful. Prove the generalized formula. Mathematical induction would probably be useful. Game of Life and other Cellular Automata Conway's Game of Life is a mathematical simulation that produces "lifelike" results from very simple input and simulation rules. In an infinite 2D grid of squares, some are "alive" while the others are "dead". Some rules are used to calculate from the current distribution of live and dead squares a new distribution, or "pattern", of live and dead squares, and the same rules are used to take this new pattern and calculate the next pattern, and so on. The Wikipedia article on this "game" provides a good introduction, featuring: Patterns that never change (still lifes) Patterns that change but repeat themselves after a number of steps (oscillators) Patterns that rapidly explode into chaos Patterns that "travel" in a certain direction (spaceships) Patterns that repeat themselves but shoot out spaceships at regular intervals (see the illustration below) Patterns that can replicate themselves somewhere else on the grid Patterns that, when you look from far away, simulate the Game of Life itself (metacells) *A single Gosper's Glider Gun creating "gliders" What other interesting patterns can you come up with? Can you change the rules or setup of the game, yet still define some new, interesting patterns? How about a higher-dimensional grid? How about a triangular or hexagonal grid? How about letting each cell have more than two states? This proposal is probably the most open-ended one, but this game is very famous so it's extremely important to find out what interesting structures have been defined by other researchers already. The content of Mathematics Research Proposals by Cheng Herng Yi, except the figures whose captions are prefixed by (*), is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
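As a concrete starting point for the Game of Life proposal above, here is a minimal sketch (assuming Python with NumPy; none of this code is from the original page) of one update step under the standard rules — a live cell survives with 2 or 3 live neighbours, a dead cell becomes alive with exactly 3:

```python
import numpy as np

def life_step(grid):
    """One Game of Life update on a 2D array of 0/1 cells (toroidal wrap-around)."""
    neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A "glider": a small spaceship that travels one cell diagonally every four steps.
grid = np.zeros((8, 8), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1
for _ in range(4):
    grid = life_step(grid)
print(grid)  # the same glider shape, shifted by one row and one column
```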
Search papers Search references Mat. Sb.: Personal entry: Mat. Sb., 2013, Volume 204, Number 1, Pages 47–78 (Mi msb8076) This article is cited in 10 scientific papers (total in 10 papers) A family of Nikishin systems with periodic recurrence coefficients S. Delvauxa, A. Lópeza, G. López Lagomasinob a Department of Mathematics, KU Leuven, Belgium b Departamento de Matemáticas, Universidad Carlos III de Madrid, Spain Abstract: Suppose we have a Nikishin system of $p$ measures with the $k$th generating measure of the Nikishin system supported on an interval $\Delta_k\subset\mathbb R$ with $\Delta_k\cap\Delta_{k+1}=\varnothing$ for all $k$. It is well known that the corresponding staircase sequence of multiple orthogonal polynomials satisfies a $(p+2)$-term recurrence relation whose recurrence coefficients, under appropriate assumptions on the generating measures, have periodic limits of period $p$. (The limit values depend only on the positions of the intervals $\Delta_k$.) Taking these periodic limit values as the coefficients of a new $(p+2)$-term recurrence relation, we construct a canonical sequence of monic polynomials $\{P_{n}\}_{n=0}^\infty$, the so-called Chebyshev-Nikishin polynomials. We show that the polynomials $P_n$ themselves form a sequence of multiple orthogonal polynomials with respect to some Nikishin system of measures, with the $k$th generating measure being absolutely continuous on $\Delta_k$. In this way we generalize a result of the third author and Rocha [22] for the case $p=2$. The proof uses the connection with block Toeplitz matrices, and with a certain Riemann surface of genus zero. We also obtain strong asymptotics and an exact Widom-type formula for functions of the second kind of the Nikishin system for $\{P_{n}\}_{n=0}^\infty$. Bibliography: 27 titles. Keywords: multiple orthogonal polynomial, Nikishin system, block Toeplitz matrix, Hermite-Padé approximant, strong asymptotics, ratio asymptotics. Funding Agency Grant Number Fonds Wetenschappelijk Onderzoek Ministerio de Ciencia e Innovación de España MTM 2009-12740-C03-01 DOI: https://doi.org/10.4213/sm8076 Full text: PDF file (723 kB) References: PDF file HTML file Sbornik: Mathematics, 2013, 204:1, 43–74 Bibliographic databases: UDC: 517.53 MSC: Primary 42C05; Secondary 41A21 Received: 16.10.2011 and 13.07.2012 Citation: S. Delvaux, A. López, G. López Lagomasino, "A family of Nikishin systems with periodic recurrence coefficients", Mat. Sb., 204:1 (2013), 47–78; Sb. Math., 204:1 (2013), 43–74 Citation in format AMSBIB \Bibitem{DelLopLop13} \by S.~Delvaux, A.~L\'opez, G.~L\'opez Lagomasino \paper A~family of Nikishin systems with periodic recurrence coefficients \jour Mat. Sb. \pages 47--78 \mathnet{http://mi.mathnet.ru/msb8076} \crossref{https://doi.org/10.4213/sm8076} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=3060076} \zmath{https://zbmath.org/?q=an:06197055} \jour Sb. Math. \crossref{https://doi.org/10.1070/SM2013v204n01ABEH004291} Linking options: http://mi.mathnet.ru/eng/msb8076 https://doi.org/10.4213/sm8076 http://mi.mathnet.ru/eng/msb/v204/i1/p47 This publication is cited in the following articles: Delvaux S., López A., "Abey High-order three-term recursions, Riemann–Hilbert minors and Nikishin systems on star-like sets", Constr. Approx., 37:3 (2013), 383–453 R. K. Kovacheva, S. P. Suetin, "Distribution of zeros of the Hermite–Padé polynomials for a system of three functions, and the Nuttall condenser", Proc. Steklov Inst. Math., 284 (2014), 168–191 V. I. Buslaev, S. P. 
Suetin, "On equilibrium problems related to the distribution of zeros of the Hermite–Padé polynomials", Proc. Steklov Inst. Math., 290:1 (2015), 256–263 S. P. Suetin, "Distribution of the zeros of Padé polynomials and analytic continuation", Russian Math. Surveys, 70:5 (2015), 901–951 A. V. Komlov, S. P. Suetin, "Distribution of the zeros of Hermite–Padé polynomials", Russian Math. Surveys, 70:6 (2015), 1179–1181 W. Van Assche, "Ratio asymptotics for multiple orthogonal polynomials", Modern trends in constructive function theory, Contemp. Math., 661, ed. D. Hardin, D. Lubinsky, B. Simanek, Amer. Math. Soc., Providence, RI, 2016, 73–85 A. Martinez-Finkelshtein, E. A. Rakhmanov, S. P. Suetin, "Asymptotics of type I Hermite-Padé polynomials for semiclassical functions.", Modern trends in constructive function theory, Contemp. Math., 661, ed. D. Hardin, D. Lubinsky, B. Simanek, Amer. Math. Soc., Providence, RI, 2016, 199–228 A. Lopez-Garcia, G. Lopez Lagomasino, "Nikishin systems on star-like sets: ratio asymptotics of the associated multiple orthogonal polynomials", J. Approx. Theory, 225 (2018), 1–40 D. Barrios Rolanía, J. S. Geronimo, G. López Lagomasino, "High-order recurrence relations, Hermite-Padé approximation and Nikishin systems", Sb. Math., 209:3 (2018), 385–420 Lopez-Garcia A. Lopez Lagomasino G., "Nikishin Systems on Star-Like Sets: Ratio Asymptotics of the Associated Multiple Orthogonal Polynomials, II", J. Approx. Theory, 250 (2020), UNSP 105320 Full text: 122 First page: 16 What is a QR-code?
Digital modelling of circuits with diode (i.e. guitar distortion) For the last few days I was struggling to come up with an idea of how to model a guitar overdrive effect that works in realtime. One of the simplest circuit diagrams that produce it looks as follows: The plot depicts the output for a 400 Hz, 1 V sine wave at the input. Normally I'd derive the transfer function of the circuit, but in this case I think it cannot be done, because the diode is a nonlinear component and transfer functions exist only for linear differential equations. Nevertheless, the program I used for the simulation (Falstad Circuit Simulator) generates a valid solution somehow... I was thinking of maybe deriving a differential equation for the whole circuit and then applying some numerical scheme to find the solution - this however could easily become unstable and it would be considerably harder for me to analyze. Are there some tools that could readily generate equations (be it difference, differential or transfer functions) for such a circuit? EDIT I already tried with waveshaping. I came up with the idea of finding coefficients of a polynomial to generate the best-fitting waveshaping function using a genetic algorithm. The algorithm converged nicely, but unfortunately, using a polynomial as the function model (tried up to 5th order) I cannot get anything that would resemble the desired output waveform. Max Walczak $\begingroup$ Google search is your friend there. Aalto University LC has some good Open Access papers you could check ... such as Dias de Paiva, Rafael : Circuit modeling studies related to guitars and audio processing (check "publication 4" as for an example). $\endgroup$ – Juha P Jul 6 '17 at 9:29
keeping the polarities defined in the same direction but reversing the diode results in $$ i_D(t) = -I_D \left(e^{\frac{-q \ v_D(t)}{k T}} -1\right) $$ putting the two in parallel (assuming the same $I_D$), the volt-amp description is $$\begin{align} i_D(t) & = I_D \left(e^{\frac{q \ v_D(t)}{k T}} -1\right) - I_D \left(e^{\frac{-q \ v_D(t)}{k T}} -1\right) \\ \\ & = I_D \left(e^{\frac{q \ v_D(t)}{k T}} - e^{\frac{-q \ v_D(t)}{k T}}\right) \\ \\ & = 2 I_D \sinh\left(\frac{q \ v_D(t)}{k T}\right) \\ \end{align} $$ inverting the volt-amp equation: $$\begin{align} v_D(t) &= \frac{kT}{q}\operatorname{arcsinh}\left(\frac{i_D(t)}{2 I_D} \right) \\ &= \frac{kT}{q}\ln\left(\frac{i_D(t)+\sqrt{i_D(t)^2 + (2I_D)^2}}{2 I_D} \right) \\ \end{align} $$ probably the diodes have some contact resistance, and there is a fudge factor $\eta \approx 1$ or $2$ scaling the voltage, which will make the equation look like $$ v_D(t) = \frac{\eta kT}{q}\operatorname{arcsinh}\left(\frac{i_D(t)}{2 I_D} \right) + R_D \, i_D(t) $$ now, probably, rather than trying to find (from measured curves) the values of $I_D$ and $R_D$ and, the $\eta\frac{kT}{q}$ factor, you will probably just have a power series to represent the diodes. since the equation has odd-symmetry, your power series will have only odd-order terms. $$ i_D(t) = a_1 v_D(t) + a_3 (v_D(t))^3 + a_5 (v_D(t))^5 + \dots $$ you might not even need the inner terms and might be able to get away with just two terms, like $$ i_D(t) = a_1 v_D(t) + \dots a_9 (v_D(t))^9 $$ ignoring terms in between. this will make your computations cheaper. that's all i want to say about the diodes. you will have some memoryless model for the diode pair: $$ i_D(t) = f\big( \, v_D(t) \,\big) $$ or the inverse function $$ v_D(t) = g\big( \, i_D(t) \,\big) $$ that you will need to nail down a decent approximation of. Euler's backward method: the capacitor has fundamental volt-amp characteristics: $$\begin{align} i_C(t) &= C \frac{d \, v_C(t)}{dt} \\ &= C \lim_{\Delta t \to 0} \frac{v_C(t) - v_C(t-\Delta t)}{\Delta t} \\ \\ &\approx C \frac{v_C(t) - v_C(t-\Delta t)}{\Delta t} \qquad \text{for very small } \Delta t \\ \end{align} $$ or looking at it from the perspective of capacitor voltage $$\begin{align} v_C(t) &= \frac{1}{C} \int\limits_{-\infty}^{t} i_C(u) \, du \\ &= \frac{1}{C} \left( \int\limits_{-\infty}^{t-\Delta t} i_C(u) \, du + \int\limits_{t-\Delta t}^{t} i_C(u) \, du \right) \\ &= v_C(t-\Delta t) + \frac{1}{C} \int\limits_{t-\Delta t}^{t} i_C(u) \, du \\ &\approx v_C(t-\Delta t) + \frac{1}{C} \int\limits_{t-\Delta t}^{t} i_C(t) \, du \\ \\ &= v_C(t-\Delta t) + \frac{1}{C} i_C(t) \, \Delta t \qquad \text{for very small } \Delta t \\ \end{align} $$ both approximations say exactly the same thing and demonstrate the meaning we have when we differentiate between memoryless devices and those with memory. the ideal diode is memoryless because, in the ideal, the relationship between the voltage and current at its terminals depends only on their values at the present. the ideal capacitor is a device with memory because the relationship between the present voltage and current at its terminals depends on a voltage the capacitor remembers from the very recent past. so let the tiny "differential" time $\Delta t$ be our sampling period or $\Delta t = \frac{1}{f_\text{s}}$. 
then we need evaluate $t$ at only integer sampling periods: $$ t = n \cdot \Delta t $$ $$\begin{align} v_C(t) &= v_C(t-\Delta t) + \frac{\Delta t}{C} i_C(t) \\ \\ v_C(n \Delta t) &= v_C(n \Delta t-\Delta t) + \frac{\Delta t}{C} i_C(n \Delta t) \\ \\ &= v_C((n-1) \Delta t) + \frac{\Delta t}{C} i_C(n \Delta t) \\ \\ v_C[n] &= v_C[n-1] + \frac{\Delta t}{C} i_C[n] \\ \end{align}$$ at a specific time $t$, for every capacitor, you can represent that capacitor as a (briefly) constant voltage source (the previous sample's voltage) in series with a resistor having value $R_C=\tfrac{\Delta t}{C}$. and we can express that voltage source in series with a resistor as a Thevenin source having its own Norton equivalent whenever it is convenient to express it as such. (it is convenient to do that for the 470 pF cap in the feedback path.) let $C_1$=100nF, $R_1$=1K, $C_2$=470pF, $R_2$=10K, assuming the op-amp is ideal, the load resistance is not salient. the op-amp in negative feedback maintains virtual equality across its input terminals. the current flowing through $R_1$ into $C_1$ is $$\begin{align} i_1(t) &= \frac{1}{R_1}(v_\text{in}(t) - v_{C1}(t)) \\ &= \frac{1}{R_1+\tfrac{\Delta t}{C_1}}(v_\text{in}(t) - v_{C1}(t-\Delta t)) \\ \\ i_1[n] &= \frac{1}{R_1+\tfrac{\Delta t}{C_1}}(v_\text{in}[n] - v_{C1}[n-1]) \\ \end{align}$$ now the feedback current must equal $i_1(t)$. here, you want to express the source/resistance model of $C_2$ as a Norton equivalent. then the two diodes, the 10K resistor $R_2$, and the Norton resistance ($\tfrac{\Delta t}{C_2}$ are in parallel with the Norton current source of $v_{C2}(t-\Delta t)\tfrac{C_2}{\Delta t}$) must be placed in parallel and an aggregate volt-amp characteristic must be known. you want voltage as a function of current. $$ v_D(t) = g\big( \, i_D(t) \, \big) $$ that is the volt-amp characteristic of two diodes in parallel with two known resistances. so the feedback current will be divided into two currents, the known Norton equivalent current from the known $C_2$ voltage of the previous sample period and whatever flows into your non-linear device which is two diodes in parallel with two resistances. 
$$ i_1(t) = -v_{C2}(t-\Delta t)\tfrac{C_2}{\Delta t} + i_D(t) $$ or $$\begin{align} i_D(t) &= i_1(t) + v_{C2}(t-\Delta t)\tfrac{C_2}{\Delta t} \\ &= \frac{1}{R_1+\tfrac{\Delta t}{C_1}}(v_\text{in}(t) - v_{C1}(t-\Delta t)) + v_{C2}(t-\Delta t)\tfrac{C_2}{\Delta t} \\ \end{align}$$ $$\begin{align} v_D(t) &= g\big( \, i_D(t) \, \big) \\ &= g\big( i_1(t) + v_{C2}(t-\Delta t)\tfrac{C_2}{\Delta t} \big) \\ &= g\bigg( \frac{1}{R_1+\tfrac{\Delta t}{C_1}}(v_\text{in}(t) - v_{C1}(t-\Delta t)) + v_{C2}(t-\Delta t)\tfrac{C_2}{\Delta t} \bigg) \\ \end{align}$$ in discrete time, it's $$v_D[n] = g\bigg( \frac{1}{R_1+\tfrac{\Delta t}{C_1}}(v_\text{in}[n] - v_{C1}[n-1]) + v_{C2}[n-1]\tfrac{C_2}{\Delta t} \bigg) $$ and the output is $$\begin{align} v_\text{out}[n] &= v_D[n] + v_\text{in}[n] \\ &= g\bigg( i_1[n] + v_{C2}[n-1]\tfrac{C_2}{\Delta t} \bigg) + v_\text{in}[n] \\ &= g\bigg( \frac{1}{R_1+\tfrac{\Delta t}{C_1}}(v_\text{in}[n] - v_{C1}[n-1]) + v_{C2}[n-1]\tfrac{C_2}{\Delta t} \bigg) + v_\text{in}[n] \\ \end{align}$$ and you must update your two capacitor states for the next sampling period: $$\begin{align} v_{C1}[n] &= v_{C1}[n-1] + \frac{\Delta t}{C_1} i_1[n] \\ &= v_{C1}[n-1] + \frac{\Delta t}{C_1} \left( \frac{1}{R_1+\tfrac{\Delta t}{C_1}}(v_\text{in}[n] - v_{C1}[n-1]) \right) \\ \\ v_{C2}[n] &= v_D[n] \\ \end{align}$$ robert bristow-johnsonrobert bristow-johnson $\begingroup$ If I understand correctly, you mean to first represent the whole circuit as a differential equation of voltage in time and then discretize it using euler scheme? Can you elaborate on this? $\endgroup$ – Max Walczak Jul 7 '17 at 7:21 $\begingroup$ only the capacitors need be described by a differential equation, which can then be discretized with the simplest Euler method. i'll try to lay out some equations into my answer in a bit. $\endgroup$ – robert bristow-johnson Jul 7 '17 at 15:18 $\begingroup$ depends on the sample rate, Jazz. forward Euler doesn't require iterations and makes for efficient code. and looking at the circuit, with parallel diodes and a parallel 10K resistor, i might guess that the non-linear function of those three memoryless devices in parallel will be reasonably mild. i would bet that, picking two good coefficients that: $$ i_D(t) = \left(a_1 + a_9 \bigg( \Big( \big( v_D(t) \big)^2 \Big)^2 \bigg)^2 \right) v_D(t) $$ will model the nonlinearity pretty well for the sake of capturing the sound of the given circuit. $\endgroup$ – robert bristow-johnson Jul 7 '17 at 20:37 $\begingroup$ @Jazzmaniac, do you mean decreasing the stepsize makes things worse because of numerical issues?? (i haven't been worrying about quantization errors in this answer.) $ v_C(t) - v_C(t - \Delta t) $ gets closer and closer to zero as $\Delta t \to 0$. i can certainly imagine that the difference might round to zero when it is necessary that $ v_C(t) \ne v_C(t - \Delta t) $ $\endgroup$ – robert bristow-johnson Jul 8 '17 at 18:57 $\begingroup$ Thanks a lot for the approach you suggested because it gave me a good insight into how similar problems can be attacked and possibly solved. From engineering standpoint I'd agree with Robert - if it sounds good and doesn't "glitch" then it's good enough. I cannot think of any other numerical method that would be faster than Euler I (provided that Euler I will generate decent sound). The choice of the numerical method is just a detail for me because I can pick among many - what I really value about this answer is the approach presented as a whole. For me it's much simpler than i.e. WDF. 
$\endgroup$ – Max Walczak Jul 17 '17 at 9:06 One possible tool is Wave Digital Filter analysis which is a type of physical modeling that represents signals as travelling waves. It can also be extended to non-linear elements such as diodes. However, for distortion unit, you could instead of trying to digitalize the analog circuit try to extract a waveshaper from its behaviour. Common waveshapers are 3rd order polynomial and atan function. EDIT : here is an intersting paper about wave digital domain modeling of guitar distortion EDIT2 : and here a DAFx lecture about existing methods for non linearities in virtual analog edited Jul 6 '17 at 9:07 FlorentFlorent $\begingroup$ Thank you for your valuable answer! The waveshaping approach seems more practical for me, but there's still one problem I cannot figure out: Are there some general schemes I could use to derive waveshaping function for a circuit of for a frequency response of a circuit? I can imagine I could for example use a genetic algorithm to find polynomial coefficients for frequency response, but maybe there's some more deterministic way? $\endgroup$ – Max Walczak Jul 6 '17 at 9:17 $\begingroup$ Well if your circuit is simply a distortion, you could find the caracteristic function (Vout = f(Vin) ) and then match a polynomial (or other forms). If it's a more complex circuit it might not be as "trivial" $\endgroup$ – Florent Jul 6 '17 at 9:25 $\begingroup$ Also I should warn you that obviously, waveshaping will introduce a lot of harmonics and will probably alias your signal... You could either take the easy way of "a distorted signal sounds harsh anyway so I don't mind", or you could oversample your signal to bandlimit it. Julius Orion Smith has an interesting chapter on non-linearities in his physical modeling book $\endgroup$ – Florent Jul 6 '17 at 9:28 $\begingroup$ Thanks a lot for all your answers! Now I will have a good bit of reading ahead! :) $\endgroup$ – Max Walczak Jul 6 '17 at 9:38 Not the answer you're looking for? Browse other questions tagged distortion or ask your own question. Digital Distortion effect algorithm Total Harmonic Distortion calculation and its origins What is this kind of distortion called? Spectral baseline distortion simulation - Bruker smile Identify the Type of Image Distortion (On Lena Image) Linear vs Non linear distortion How to prevent distortion after down-sampling? Signal distortion under non-linear channel Exponential Swept Sine Distortion