12:00 - 15:00 Student Posters Setup, Cartier / International Foyer, Centre Mont Royal
15:30 - 16:30 Stephanie van Willigenburg (University of British Columbia), CMS Krieger-Nelson Prize, An introduction to quasisymmetric Schur functions, Symposia (Auditorium), Centre Mont Royal
16:30 - 20:00 Registration, Level 1 - Hall Symposia, Centre Mont Royal
17:00 - 19:00 Welcome Reception, Level 4, Centre Mont Royal
19:00 - 20:00 Robert McCann (University of Toronto, Canada), CMS Jeffery-Williams Prize, Optimal transport between unequal dimensions, Symposia (Auditorium), Centre Mont Royal
08:00 - 17:00 Registration, University of Montreal - Hall d'Honneur, next to the Cormier (K-500) Auditorium
09:00 - 18:00 AARMS-CMS Student Poster Session, Cartier / International Foyer, Centre Mont Royal
09:45 - 11:00 Opening, Prizes, University of Montreal, Salle Cormier, K-500
11:15 - 12:15 Shafrira Goldwasser (MIT, USA), Plenary Lectures, Pseudo Deterministic Algorithms and Proofs, Cormier Hall K-500
12:15 - 13:45 Box Lunch, University of Montreal
13:45 - 14:45 Jeremy Quastel (University of Toronto, Canada), Invited Speakers, Asymptotic fluctuations in the KPZ universality class, Cormier Hall K-500
Lia Bronsard (McMaster University, Canada), Invited Speakers, Droplet phase in a nonlocal isoperimetric problem under confinement, E-310
Maria Chudnovsky (Columbia University, USA), Invited Speakers, Induced Subgraphs and Coloring, Z-110
14:45 - 15:15 Break, University of Montreal
15:15 - 16:15 Yuval Peres (Microsoft Research, USA), Plenary Lectures, Gravitational allocation on the sphere and overhanging blocks, Cormier Hall K-500
16:30 - 17:30 Manuel del Pino (Universidad de Chile), Plenary Lectures, Singularity formation and bubbling in nonlinear diffusions, Cormier Hall K-500
20:00 - 21:00 Erik Demaine (MIT), Public Lectures, Folding Paper: Visual Art Meets Mathematics, Auditorium, CMR
09:00 - 10:00 Rustum Choksi (McGill University, Canada), Invited Speakers, Geometric Variational Problems with Nonlocal Interactions: Gamow's Liquid Drop Problem and Beyond, Symposia Auditorium, CMR
Yakov Eliashberg (Stanford University, USA), Invited Speakers, Weinstein manifolds revisited, Cartier I & II, CMR
Héctor H. Pastén Vásquez (Harvard University, USA), MCA Prize Lectures, The abc Conjecture and the d(abc) Theorem, International, Centre Mont-Royal
10:00 - 10:30 Break, Mezzanines, Centre Mont Royal
10:30 - 11:30 Jeremy Kahn (Brown University, USA), Invited Speakers, Applications and Frontiers in Surface Subgroups, Symposia Auditorium, CMR
Paolo Piccione (University of São Paulo, Brazil), Invited Speakers, Teichmüller theory, collapse of flat manifolds and applications to the Yamabe problem, Cartier I & II, CMR
Vlad Vicol (Princeton University, USA), MCA Prize Lectures, Turbulent weak solutions of the Euler equations, International, Centre Mont-Royal
11:45 - 12:05 Damaris Schindler (Utrecht University, The Netherlands), Advances in Algebraic and Analytic Number Theory, On integral points on degree four del Pezzo surfaces, McGill U., Rutherford Physics Building, Room 114
Erik Talvila (University of the Fraser Valley, Canada), Advances in Analysis, PDE's and Related Applications, The heat equation with the continuous primitive integral, McGill U., Burnside Hall, Room 920
Hiroaki Terao (Hokkaido University), Advances in Arrangement Theory, On the Exponents of Restrictions of Weyl Arrangements, Centre Mont Royal, Room Cartier I & II
Sarah Koch (University of Michigan), Arithmetic Dynamics, Realizing postcritical combinatorics, McGill U., Rutherford Physics Building, Room 118
Gustavo Buscaglia (University of São Paulo, Brazil), Applied Math and Computational Science across the Americas, Domain decomposition with Robin interface conditions for reservoir simulation, McGill U, Bronfman Building, Room 1
Thomas Church (Stanford University), Cohomology of Groups, Asymptotic representation theory over $\mathbb{Z}$ and cohomology of arithmetic groups, McGill U., Trottier Building (Engineering), Room 70
Luz Angelica Caudillo Mata (University of British Columbia), Computational Inverse Problems: From Multiscale Modeling to Uncertainty Quantification, Multiscale and Upscaling Methods to Simulate Geophysical Electromagnetic Responses, McGill U., Bronfman Building, Room 45
Abdullahi Adem (North-West University), Contributed Papers, Multiple wave solutions and conservation laws of the Date-Jimbo-Kashiwara-Miwa (DJKM) equation via symbolic computation, McGill U., Burnside Hall, Room 1B45
Maya Stein (Universidad de Chile), Current Trends in Combinatorics, Partitions into monochromatic subgraphs, Centre Mont Royal, Salon International I & II
Geordie Richards (University of Rochester), Finite and Infinite Dimensional Hamiltonian Systems, On invariant Gibbs measures for generalized KdV, McGill U., Burnside Hall, Room 1205
Victor Perez Abreu (Centro de Investigación en Matemáticas, Guanajuato, Mexico), Free Probability and its Applications, On new noncommutative processes arising from matricial random processes, McGill U., Arts Building, Room 260
Yuri Bahturin (Memorial University of Newfoundland, Canada), Groups and Algebras, Real graded division algebras, McGill U., Trottier Building (Engineering), Room 60
Kenneth Meyer (University of Cincinnati, USA), Geometry of Differential Equations, Real and Complex, Asymptotic Stability Estimates near an Equilibrium Point, McGill U., Arts Building, Room W-120
David Futer (Temple University), Geometric Group Theory, Geometrizing generic few-relator groups, McGill U., Trottier Building (Engineering), Room 100
Andres Larrain-Hubach (University of Dayton), Gauge Theory and Special Geometry, The Nahm Transform on Taub-NUT Space, McGill U., Birks Building, Room 203
Edward Bierstone (Univ. of Toronto - Canada), Holomorphic Foliations and Singularities of Mappings and Spaces, Global smoothing of a subanalytic set, McGill U., Burnside Hall, Room 1214
Andrés Navas (Universidad de Santiago de Chile), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, Some problems concerning orders on groups of geometric origin, McGill U., Trottier Building (Engineering), Room 1100
Michael Makkai (McGill University), Interactions Between Model Theory and Analysis and Topology, A survey of first-order logic with dependent sorts (FOLDS), McGill U., Trottier Building (Engineering), Room 2110
Mary Beth Ruskai (University of Vermont), Mathematics of Quantum Phases of Matter and Quantum Information, Extreme Points of Unital Quantum Channels, McGill U., Bronfman Building, Room 46
Luis Vega (BCAM and Universidad del País Vasco, Spain), Nonlinear Dispersive Equations, The Talbot effect and the dynamics of vortex filaments: transfer of energy and momentum, McGill U., Burnside Hall, Room 1B23
Alejandro Jofré (Departamento de Ingeniería Matemática, Universidad de Chile, Santiago, Chile), Optimization and Control, Variance-based stochastic extragradient methods with linear search for stochastic variational inequalities, McGill U., Bronfman Building, Room 151
Dmitry Khavinson (University of South Florida, USA), Operator Theory on Function Spaces, Vanishing of reproducing kernels in spaces of analytic functions, McGill U., Arts Building, Room W-215
Augusto Teixeira (Instituto Nacional de Matemática Pura e Aplicada), Probability Theory, Sharpness of the phase transition for continuum percolation in $\mathbb{R}^2$, McGill U., Arts Building, Room W-20
Fedor Manin (University of Toronto), Quantitative Geometry and Topology, Counting thick embeddings, McGill U., McConnell Engineering Building, Room 11
Alberto Grünbaum (University of California, Berkeley, USA), Quantum Walks, Open Quantum Walks, Quantum Computation and Related Topics, A generalization of Schur functions, classical random walks, unitary and open quantum walks, McGill U., Bronfman Building, Room 178
Vyacheslav Futorny (University of São Paulo), Symmetry in Algebra, Topology, and Physics, Algebras of invariant differential operators and their representations, McGill U., Birks Building, Room 205
Peter Sternberg (Indiana University Bloomington - USA), Singularities and Phase Transitions in Condensed Matter, Phase transitions in a model for nematic liquid crystals, McGill U., Burnside Hall, Room 1B39
Rebecca Goldin (George Mason University, USA), Symmetries of Symplectic Manifolds and Related Topics, On equivariant structure constants for G/B, McGill U., McConnell Engineering Building, Room 13
Toni Bluher (Department of Defense, United States), Theory and Applications of Finite Fields, Dickson polynomials, McGill U., Trottier Building (Engineering), Room 2120
11:45 - 12:25 Miguel Xicoténcatl (CINVESTAV, México), Groups in Geometry and Topology, On the cohomology of mapping class groups and configuration spaces of non-orientable surfaces, McGill U., Birks Building, Room 111
11:45 - 12:30 Ariel Pacetti (Universidad de Buenos Aires), Galois Representations and Automorphic Forms, Non-paritious Hilbert modular forms, McGill U., Rutherford Physics Building, Room 115
11:45 - 12:45 Yuri Tschinkel (NYU), Calabi-Yau Manifolds and Calabi-Yau Algebra, McGill U., McConnell Engineering Building, 204
Nikita Karpenko (University of Alberta, Canada), Recent Trends in Algebraic Cycles, Algebraic K-Theory and Motives, Chow ring of generic flag varieties, McGill U., McConnell Engineering Building, Room 304
Alejandro Adem (University of British Columbia), Stringy Geometry, Twisted K-theory for Actions with Maximal Rank Isotropy, McGill U., McConnell Engineering Building, Room 12
12:15 - 12:35 Amalia Pizarro Madariaga (Universidad de Valparaíso, Chile), Advances in Algebraic and Analytic Number Theory, Irreducible characters with bounded root Artin conductor, McGill U., Rutherford Physics Building, Room 114
Sonia Mazzucchi (Università degli Studi di Trento, Italy), Advances in Analysis, PDE's and Related Applications, Projective systems of functionals and generalized Feynman-Kac formulae, McGill U., Burnside Hall, Room 920
Masahiko Yoshinaga (Hokkaido University), Advances in Arrangement Theory, Remarks on characteristic quasi-polynomials of deformed Weyl arrangements, Centre Mont Royal, Room Cartier I & II
Joseph Silverman (Brown University), Arithmetic Dynamics, A Dynamical Shafarevich Conjecture with Portraits, McGill U., Rutherford Physics Building, Room 118
Daniel Szyld (Temple University), Applied Math and Computational Science across the Americas, Asynchronous Optimized Schwarz, Theory and Experiments, McGill U, Bronfman Building, Room 1
Rita Jiménez (Instituto de Matemáticas, Universidad Nacional Autónoma de México, Oaxaca), Cohomology of Groups, On the cohomology of spaces of maximal tori in classical Lie groups, McGill U., Trottier Building (Engineering), Room 70
Jie Sun (Clarkson University), Computational Inverse Problems: From Multiscale Modeling to Uncertainty Quantification, Kernel-based Reconstruction of Spatially Embedded Complex Networks, McGill U., Bronfman Building, Room 45
Karim Samei (Bu Ali Sina University, Hamedan, Iran), Contributed Papers, Singleton Bounds for R-additive Codes, McGill U., Burnside Hall, Room 1B45
Amanda Montejano (Universidad Nacional Autónoma de México), Current Trends in Combinatorics, The existence of zero-sum subgraphs in $\{-1, 1\}$-weightings of $E(K_n)$, Centre Mont Royal, Salon International I & II
Robert L. Jerrard (University of Toronto), Finite and Infinite Dimensional Hamiltonian Systems, Dynamics of nearly-parallel vortex filaments in the Gross-Pitaevskii equations, McGill U., Burnside Hall, Room 1205
Jonathan Novak (University of California, San Diego), Free Probability and its Applications, Semiclassical asymptotics of $\mathrm{GL}_N(\mathbb{C})$ tensor products, McGill U., Arts Building, Room 260
Alexander Cardona (Universidad de los Andes, Colombia), Geometry of Differential Equations, Real and Complex, Spectral invariants and global pseudo-differential calculus on homogeneous spaces, McGill U., Arts Building, Room W-120
Eduardo Martinez-Pedroza (Memorial University of Newfoundland), Geometric Group Theory, Subgroups of relatively hyperbolic groups of relative dimension 2, McGill U., Trottier Building (Engineering), Room 100
Mark Stern (Duke University), Gauge Theory and Special Geometry, Instantons on ALF spaces, McGill U., Birks Building, Room 203
Felipe Cano (Univ. Valladolid - Spain), Holomorphic Foliations and Singularities of Mappings and Spaces, Local Uniformization of Singular Codimension One Foliations, McGill U., Burnside Hall, Room 1214
Adam Clay (University of Manitoba), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, The number of circular orderings of a group, McGill U., Trottier Building (Engineering), Room 1100
Bradd Hart (McMaster University), Interactions Between Model Theory and Analysis and Topology, Practical definability in the model theory of operator algebras, McGill U., Trottier Building (Engineering), Room 2110
Francisco Delgado (Tecnológico de Monterrey, Campus Estado de México), Mathematics of Quantum Phases of Matter and Quantum Information, SU(2) decomposition for complex quantum information hamiltonians generating entanglement, McGill U., Bronfman Building, Room 46
Didier Pilod (Federal University of Rio de Janeiro (UFRJ), Brazil), Nonlinear Dispersive Equations, Construction of a minimal mass blow up solution of the modified Benjamin-Ono equation, McGill U., Burnside Hall, Room 1B23
Mario Bravo (Departamento de Matemática y Ciencias de la Computación, Universidad de Santiago de Chile, Santiago, Chile), Optimization and Control, Sharp convergence rates for averaged nonexpansive maps, McGill U., Bronfman Building, Room 151
Raul Curto (University of Iowa, USA), Operator Theory on Function Spaces, A New Necessary Condition for the Hyponormality of Toeplitz Operators on the Bergman Space, McGill U., Arts Building, Room W-215
Alexander Fribergh (Université de Montréal), Probability Theory, The ant in the labyrinth, McGill U., Arts Building, Room W-20
Jae Choon Cha (POSTECH, Republic of Korea), Quantitative Geometry and Topology, Bordism, chain homotopy, and Cheeger-Gromov $\rho$-invariants, McGill U., McConnell Engineering Building, Room 11
Luis Velazquez (Universidad de Zaragoza, Spain), Quantum Walks, Open Quantum Walks, Quantum Computation and Related Topics, Quantum walks: recurrence & topological phases, McGill U., Bronfman Building, Room 178
Mikhail Khovanov (Columbia University), Symmetry in Algebra, Topology, and Physics, How to categorify the ring of integers localized at two, McGill U., Birks Building, Room 205
Radu Ignat (Institut de Mathématiques de Toulouse - France), Singularities and Phase Transitions in Condensed Matter, A DeGiorgi type conjecture for divergence-free maps, McGill U., Burnside Hall, Room 1B39
Jeffrey Carlson (University of Toronto, Canada), Symmetries of Symplectic Manifolds and Related Topics, Equivariant formality beyond Hamiltonian actions, McGill U., McConnell Engineering Building, Room 13
Antonio Cafure (Universidad de Buenos Aires (UBA), Buenos Aires, Argentina), Theory and Applications of Finite Fields, Cyclotomic polynomials over finite fields, McGill U., Trottier Building (Engineering), Room 2120
12:30 - 13:15 Gonzalo Tornaría (Universidad de la República, Montevideo, Uruguay), Galois Representations and Automorphic Forms, Waldspurger formula for Hilbert modular forms, McGill U., Rutherford Physics Building, Room 115
12:45 - 14:00 Early Career Panel, Mont Royal II, Centre Mont-Royal
14:15 - 14:35 Emanuel Carneiro (IMPA, Brazil), Advances in Algebraic and Analytic Number Theory, Fourier optimization problems in number theory, McGill U., Rutherford Physics Building, Room 114
Pauline Bailet (University of Bremen), Advances in Arrangement Theory, A vanishing result for the first twisted cohomology of affine varieties and applications to line arrangements, Centre Mont Royal, Room Cartier I & II
John Doyle (University of Rochester), Arithmetic Dynamics, Reduction of dynatomic curves, McGill U., Rutherford Physics Building, Room 118
Jose Castillo (San Diego State University), Applied Math and Computational Science across the Americas, High Order Compact Mimetic Differences and Discrete Energy Decay in 2D Wave Motions, McGill U, Bronfman Building, Room 1
Dev Sinha (University of Oregon, USA), Cohomology of Groups, Cohomology of symmetric and alternating groups, McGill U., Trottier Building (Engineering), Room 70
Wenke Wilhelms (University of British Columbia), Computational Inverse Problems: From Multiscale Modeling to Uncertainty Quantification, The mimetic multiscale method for electromagnetics at a borehole with casing, McGill U., Bronfman Building, Room 45
Lahcen Laayouni (Al Akhawayn University), Contributed Papers, On the efficiency of the Algebraic Optimized Schwarz Methods, McGill U., Burnside Hall, Room 1B45
Rob Morris (Instituto Nacional de Matemática Pura e Aplicada), Current Trends in Combinatorics, The typical structure of sparse graphs in hereditary properties, Centre Mont Royal, Salon International I & II
Luis Vega (Universidad del País Vasco), Finite and Infinite Dimensional Hamiltonian Systems, McGill U., Burnside Hall, Room 1205
Mario Diaz (Queen's University), Free Probability and its Applications, A New Application of Free Probability Theory: Data Privacy, McGill U., Arts Building, Room 260
Inna Bumagina (Carleton University, Canada), Groups and Algebras, McGill U., Trottier Building (Engineering), Room 60
Johanna Garcia Saldaña (Catholic University of the Most Holy Conception, Chile), Geometry of Differential Equations, Real and Complex, An approach to the period function through the harmonic balance, McGill U., Arts Building, Room W-120
Priyam Patel (University of California Santa Barbara), Geometric Group Theory, Algebraic and topological properties of big mapping class groups, McGill U., Trottier Building (Engineering), Room 100
Ian Hambleton (McMaster University, Canada), Groups in Geometry and Topology, Group actions on homology 3-spheres, McGill U., Birks Building, Room 111
Andrew B. Royston (Texas A&M), Gauge Theory and Special Geometry, Magnetic Monopoles and N=2 super Yang-Mills, McGill U., Birks Building, Room 203
Fuensanta Aroca (Univ. Nac. Aut. Mexico - Mexico), Holomorphic Foliations and Singularities of Mappings and Spaces, Parametrizations with exponents in cones and the algebraic closure of the field of rational functions in several variables, McGill U., Burnside Hall, Room 1214
Thomas Koberda (University of Virginia), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, Square roots of Thompson's group F, McGill U., Trottier Building (Engineering), Room 1100
Christopher Eagle (University of Victoria), Interactions Between Model Theory and Analysis and Topology, Topology, abstract model theory, and the omitting types theorem, McGill U., Trottier Building (Engineering), Room 2110
Alonso Botero (Universidad de los Andes), Mathematics of Quantum Phases of Matter and Quantum Information, Asymptotic entanglement concentration of multi-qubit W states, McGill U., Bronfman Building, Room 46
Andrew Bernoff (Harvey Mudd College), Nonlocal Variational Problems, Biological Aggregation Driven by Social/Environmental Factors: A Nonlocal Model and Its Degenerate Cahn-Hilliard Approximation, McGill U., Burnside Hall, Room 1B36
Héctor Ramírez (Departamento de Ingeniería Matemática, Universidad de Chile, Santiago, Chile), Optimization and Control, Bioremediation of water resources: an optimal control approach, McGill U., Bronfman Building, Room 151
Lewis Coburn (SUNY at Buffalo, USA), Operator Theory on Function Spaces, Toeplitz Quantization, McGill U., Arts Building, Room W-215
Frederi Viens (Michigan State University), Probability Theory, Applications of the analysis on Wiener space, McGill U., Arts Building, Room W-20
Slava Krushkal (University of Virginia), Quantitative Geometry and Topology, Geometric complexity of embeddings, McGill U., McConnell Engineering Building, Room 11
Radhakrishnan Balu (US Army Research Laboratory, Adelphi, MD), Quantum Walks, Open Quantum Walks, Quantum Computation and Related Topics, Group Theoretic Treatment of Quantum Walks, McGill U., Bronfman Building, Room 178
Peter Samuelson (University of Edinburgh), Symmetry in Algebra, Topology, and Physics, Hall algebras and skein theory, McGill U., Birks Building, Room 205
Takashi Kimura (Boston University), Stringy Geometry, Stringy operations in equivariant K-theory and cohomology, McGill U., McConnell Engineering Building, Room 12
Marjorie Senechal (Smith College), Soft Packings, Nested Clusters, and Condensed Matter, Icosahedral Snowflakes?, McGill U., Bronfman Building, Room 179
Dmitry Golovaty (University of Akron - USA), Singularities and Phase Transitions in Condensed Matter, On minimizers of Landau-de Gennes energy for nematic liquid crystals, McGill U., Burnside Hall, Room 1B39
Victor Guillemin (MIT, USA), Symmetries of Symplectic Manifolds and Related Topics, Torus actions with collinear weights, McGill U., McConnell Engineering Building, Room 13
Ricardo Conceição (Gettysburg College), Theory and Applications of Finite Fields, Definition and first properties of Markov polynomials, McGill U., Trottier Building (Engineering), Room 2120
14:15 - 15:00 Juan Davila (Universidad de Chile), Nonlinear Partial Differential Equations, Vortices for the 2D Euler equation, McGill U., Burnside Hall, Room 1B24
14:15 - 15:15 Irene Fonseca (Carnegie Mellon University, USA), Advances in Analysis, PDE's and Related Applications, Variational Models for Image Processing, McGill U., Burnside Hall, Room 920
Mike Roth (Queen's University), Calabi-Yau Manifolds and Calabi-Yau Algebra, Some diophantine applications of local positivity, McGill U., McConnell Engineering Building, 204
Marc Levine (University of Essen, Germany), Recent Trends in Algebraic Cycles, Algebraic K-Theory and Motives, Motivic Virtual Fundamental Classes, McGill U., McConnell Engineering Building, Room 304
14:30 - 15:15 Luis Lomelí (Pontificia Universidad Católica de Valparaíso), Galois Representations and Automorphic Forms, Globalization of supercuspidal representations over function fields and applications, McGill U., Rutherford Physics Building, Room 115
14:30 - 15:30 Student CV Writing Workshop, Mansfield 5, Centre Mont-Royal
14:45 - 15:05 Arindam Roy (Rice University, USA), Advances in Algebraic and Analytic Number Theory, On the distribution of imaginary parts of zeros of derivatives of the Riemann $\xi$-function, McGill U., Rutherford Physics Building, Room 114
Alexander I. Suciu (Northeastern University), Advances in Arrangement Theory, Arrangement complements and Milnor fibrations, Centre Mont Royal, Room Cartier I & II
David Krumm (Colby College), Arithmetic Dynamics, Galois groups in a family of dynatomic polynomials, McGill U., Rutherford Physics Building, Room 118
Miguel Dumett (San Diego State University, USA), Applied Math and Computational Science across the Americas, A high-order accurate mimetic discretization of the Eikonal equation with Soner boundary conditions, McGill U, Bronfman Building, Room 1
Gabriel Minian (Universidad de Buenos Aires), Cohomology of Groups, A new test for studying asphericity of group presentations and Whitehead's asphericity question, McGill U., Trottier Building (Engineering), Room 70
Christina Frederick (Georgia Tech), Computational Inverse Problems: From Multiscale Modeling to Uncertainty Quantification, Multiscale methods for seafloor identification in sonar imagery, McGill U., Bronfman Building, Room 45
Pedro Pablo Cárdenas Alzate (Universidad Tecnológica de Pereira), Contributed Papers, Zhou's method for solving delay differential equations applied to biological models, McGill U., Burnside Hall, Room 1B45
Juan José Montellano (Universidad Nacional Autónoma de México), Current Trends in Combinatorics, Transversals of sequences, Centre Mont Royal, Salon International I & II
Andres Contreras (New Mexico State University), Finite and Infinite Dimensional Hamiltonian Systems, Eigenvalue preservation for the Beris-Edwards system, McGill U., Burnside Hall, Room 1205
Solesne Bourguin (Boston University), Free Probability and its Applications, Some recent results on Wigner integrals, McGill U., Arts Building, Room 260
Liudmila Sabinina (Universidad Autónoma del Estado de Morelos), Groups and Algebras, On Malcev algebras with the identity $J(x_1 x_2 \cdots x_n, y, z) = 0$, McGill U., Trottier Building (Engineering), Room 60
Jesus Mucino Raymundo (National Autonomous University of Mexico, Morelia, Mexico), Geometry of Differential Equations, Real and Complex, Essential singularities of complex analytic vector fields on $\mathbb{C}$, McGill U., Arts Building, Room W-120
Balazs Strenner (Georgia Institute of Technology), Geometric Group Theory, Fast computation in mapping class groups, McGill U., Trottier Building (Engineering), Room 100
Angélica Osorno (Reed College, USA), Groups in Geometry and Topology, Equivariant Infinite Loop Space Machines, McGill U., Birks Building, Room 111
Gonçalo Oliveira (Duke University), Gauge Theory and Special Geometry, Gauge Theory and SU(3) structures, McGill U., Birks Building, Room 203
Alicia Dickenstein (Univ. Buenos Aires - Argentina), Holomorphic Foliations and Singularities of Mappings and Spaces, Sparse mixed discriminants, toric jacobians, and iterated discriminants, McGill U., Burnside Hall, Room 1214
Javier Sanchez (Universidade de São Paulo, Brazil), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, A way of obtaining free group algebras inside division rings, McGill U., Trottier Building (Engineering), Room 1100
Frank Tall (University of Toronto), Interactions Between Model Theory and Analysis and Topology, Omitting Types and the Baire Category Theorem, McGill U., Trottier Building (Engineering), Room 2110
Salvador Venegas-Andraca (Tecnológico de Monterrey), Mathematics of Quantum Phases of Matter and Quantum Information, Entanglement-tuned quantum walks, McGill U., Bronfman Building, Room 46
Andrea Nahmod (University of Massachusetts, Amherst, USA), Nonlinear Dispersive Equations, Probabilistic well-posedness for 2D wave equations with derivative null form nonlinearity, McGill U., Burnside Hall, Room 1B23
Weiran Sun (Simon Fraser University), Nonlocal Variational Problems, Radiative Transfer Equation with the Henyey-Greenstein Kernel, McGill U., Burnside Hall, Room 1B36
Michele Palladino (Penn State University, State College, USA), Optimization and Control, Growth Model for Tree Stems and Vines, McGill U., Bronfman Building, Room 151
Nikolai Vasilevski (CINVESTAV, Mexico), Operator Theory on Function Spaces, Toeplitz operators defined by sesquilinear forms, McGill U., Arts Building, Room W-215
Maria Gordina (University of Connecticut), Probability Theory, Couplings for hypoelliptic diffusions, McGill U., Arts Building, Room W-20
Alan Reid (University of Texas), Quantitative Geometry and Topology, Embedding arithmetic hyperbolic manifolds, McGill U., McConnell Engineering Building, Room 11
Debbie Leung (University of Waterloo, Canada), Quantum Walks, Open Quantum Walks, Quantum Computation and Related Topics, From embezzlement (of entanglement) to breaking any (conservation) law, McGill U., Bronfman Building, Room 178
Ben Cooper (University of Iowa), Symmetry in Algebra, Topology, and Physics, The Hall algebras of surfaces, McGill U., Birks Building, Room 205
Erin Teich (University of Michigan), Soft Packings, Nested Clusters, and Condensed Matter, Local environments in glassy hard particle systems, McGill U., Bronfman Building, Room 179
Sookyung Joo (Old Dominion University - USA), Singularities and Phase Transitions in Condensed Matter, Polarization-modulated orthogonal smectic liquid crystals, McGill U., Burnside Hall, Room 1B39
Claude Gravel (The Tutte Institute for Mathematics and Computing), Theory and Applications of Finite Fields, Permutations with one cycle of maximal length, and output bits of maximal algebraic degree, McGill U., Trottier Building (Engineering), Room 2120
14:45 - 15:15 Yunfeng Jiang (University of Kansas), Stringy Geometry, On motivic virtual signed Euler characteristics, McGill U., McConnell Engineering Building, Room 12
15:15 - 15:45 Break, McGill University, Various Buildings
15:45 - 16:05 Akshaa Vatwani (University of Waterloo, Canada), Advances in Algebraic and Analytic Number Theory, Twin primes and the parity problem, McGill U., Rutherford Physics Building, Room 114
Francisco Javier Mendoza Torres (Benemérita Universidad, México), Advances in Analysis, PDE's and Related Applications, The Convolution Theorem over a subset of bounded variation functions, McGill U., Burnside Hall, Room 920
Daniel C. Cohen (Louisiana State University), Advances in Arrangement Theory, Residual freeness of arrangement groups, Centre Mont Royal, Room Cartier I & II
Jason Bell (University of Waterloo), Arithmetic Dynamics, A Dynamical Mordell-Lang conjecture for coherent sheaves, McGill U., Rutherford Physics Building, Room 118
Juan Carlos Cabral Figueredo (National University of Asunción, Paraguay), Applied Math and Computational Science across the Americas, On adaptive control strategy for restarting GMRES, McGill U, Bronfman Building, Room 1
Daniel Juan Pineda (Centro de Ciencias Matemáticas, Universidad Nacional Autónoma de México, Morelia), Cohomology of Groups, On the geometric dimension for classifying spaces for mapping class groups, McGill U., Trottier Building (Engineering), Room 70
Susana Gomez Gomez (Universidad Nacional Autónoma de México), Computational Inverse Problems: From Multiscale Modeling to Uncertainty Quantification, Characterization of Carbonate Oil Reservoirs, with Euclidean and with Fractal Geometries, using Well Tests Data, McGill U., Bronfman Building, Room 45
Domingo Tarzia (CONICET and Universidad Austral), Contributed Papers, The one-phase Stefan problem with a latent heat of fusion depending on the position of the free boundary and its velocity, McGill U., Burnside Hall, Room 1B45
Sulamita Klein (Universidade Federal do Rio de Janeiro), Current Trends in Combinatorics, Graph Sandwich Problems and Graph Partition Problems: how do they relate?, Centre Mont Royal, Salon International I & II
Cheng Yu (University of Texas at Austin), Finite and Infinite Dimensional Hamiltonian Systems, McGill U., Burnside Hall, Room 1205
Paul Skoufranis (York University), Free Probability and its Applications, Conditional Bi-Free Independence, McGill U., Arts Building, Room 260
Jeremy MacDonald (Concordia University), Groups and Algebras, McGill U., Trottier Building (Engineering), Room 60
Regilene Delazari dos Santos Oliveira (Universidade de São Paulo, Campus de S. Carlos), Geometry of Differential Equations, Real and Complex, Singular levels and topological invariants of Morse Bott integrable systems on surfaces, McGill U., Arts Building, Room W-120
Tarik Aougab (Brown University), Geometric Group Theory, Weil-Petersson analogs for metric graphs, McGill U., Trottier Building (Engineering), Room 100
Alejandro Adem (University of British Columbia, Canada), Groups in Geometry and Topology, Homotopy Group Actions and Group Cohomology, McGill U., Birks Building, Room 111
André Belotto da Silva (Univ. Paul Sabatier Toulouse III - France), Holomorphic Foliations and Singularities of Mappings and Spaces, Local monomialization of a system of first integrals of Darboux type, McGill U., Burnside Hall, Room 1214
Azer Akhmedov (North Dakota State University), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, On (non-)embeddability of knot groups into the group of diffeomorphisms of compact 1-manifolds, McGill U., Trottier Building (Engineering), Room 1100
Eduardo Dueñez (The University of Texas at San Antonio), Interactions Between Model Theory and Analysis and Topology, Ergodic theorems and metastability: A continuous logic viewpoint, McGill U., Trottier Building (Engineering), Room 2110
Fiona Burnell (University of Minnesota), Mathematics of Quantum Phases of Matter and Quantum Information, Hamiltonians for anyon permuting symmetries, McGill U., Bronfman Building, Room 46
Alex Himonas (Notre Dame), Nonlinear Dispersive Equations, Ill-posedness for a family of nonlinear and nonlocal evolution equations, McGill U., Burnside Hall, Room 1B23
Cyrill Muratov (New Jersey Institute of Technology), Nonlocal Variational Problems, A universal thin film model for Ginzburg-Landau energy with dipolar interaction, McGill U., Burnside Hall, Room 1B36
Orestes Bueno (Departamento de Economía, Universidad del Pacífico, Lima, Perú), Optimization and Control, The pseudomonotone polar for multivalued operators, McGill U., Bronfman Building, Room 151
Trieu Le (University of Toledo, USA), Operator Theory on Function Spaces, Commutants of Separately Radial Toeplitz Operators on the Bergman Space, McGill U., Arts Building, Room W-215
Joel Hass (UC Davis), Quantitative Geometry and Topology, Comparing Surfaces of Genus Zero, McGill U., McConnell Engineering Building, Room 11
Carlos Lardizabal (Federal University of Rio Grande do Sul, Brazil), Quantum Walks, Open Quantum Walks, Quantum Computation and Related Topics, Hitting times and total variations of open quantum walks, McGill U., Bronfman Building, Room 178
Nicolás Andruskiewitsch (Universidad Nacional de Córdoba), Symmetry in Algebra, Topology, and Physics, Finite-dimensional Lie algebras arising from Nichols algebras of diagonal type, McGill U., Birks Building, Room 205
Jean-Guillaume Eon (Universidade Federal do Rio de Janeiro, Brazil), Soft Packings, Nested Clusters, and Condensed Matter, Topology and symmetry of modular compounds from the viewpoint of their labelled quotient graphs, McGill U., Bronfman Building, Room 179
Maria Aguareles (Universitat de Girona - Spain), Singularities and Phase Transitions in Condensed Matter, Laws of motion for spiral waves in the complex Ginzburg-Landau equation in bounded domains, McGill U., Burnside Hall, Room 1B39
Nasser Heydari (Memorial University of Newfoundland, Canada), Symmetries of Symplectic Manifolds and Related Topics, Equivariant Perfection and Kirwan Surjectivity in Real Symplectic Geometry, McGill U., McConnell Engineering Building, Room 13
Maria Chara (Instituto de Matemática Aplicada del Litoral (UNL-CONICET, Santa Fe, Argentina)), Theory and Applications of Finite Fields, An Artin-Schreier tower of function fields in even characteristic, McGill U., Trottier Building (Engineering), Room 2120
15:45 - 16:30 Ricardo Menares (Pontificia Universidad Católica de Valparaíso), Galois Representations and Automorphic Forms, Non-optimal levels of reducible mod l Galois representations, McGill U., Rutherford Physics Building, Room 115
Robert Jerrard (University of Toronto), Nonlinear Partial Differential Equations, Dynamics of defects in a semilinear wave equation, McGill U., Burnside Hall, Room 1B24
15:45 - 16:45 Andrea Solotar (University of Buenos Aires), Calabi-Yau Manifolds and Calabi-Yau Algebra, Hochschild cohomology of 3-dimensional Sridharan algebras, McGill U., McConnell Engineering Building, 204
Kyle Ormsby (Reed College, USA), Recent Trends in Algebraic Cycles, Algebraic K-Theory and Motives, Vanishing in motivic stable stems, McGill U., McConnell Engineering Building, Room 304
Vincent Bouchard (University of Alberta), Stringy Geometry, Quantization and Topological Recursion, McGill U., McConnell Engineering Building, Room 12
16:15 - 16:35 Nathan Ng (University of Lethbridge, Canada), Advances in Algebraic and Analytic Number Theory, The sixth moment of the Riemann zeta function and ternary additive divisor sums, McGill U., Rutherford Physics Building, Room 114
Jaqueline Godoy Mesquita (Universidade de Brasília, Brazil), Advances in Analysis, PDE's and Related Applications, Generalized ODEs and measure differential equations: results and applications, McGill U., Burnside Hall, Room 920
Barbara Gutierrez (Cinvestav), Advances in Arrangement Theory, Sequential topological complexity of the complement of complex hyperplane arrangements in general position, Centre Mont Royal, Room Cartier I & II
Araceli Bonifant (University of Rhode Island), Arithmetic Dynamics, Tongues and Tricorns in a Space of Rational Maps, McGill U., Rutherford Physics Building, Room 118
Juan Carlos Galvis (Universidad Nacional de Colombia), Applied Math and Computational Science across the Americas, Numerical methods for high-contrast multiscale problems, McGill U, Bronfman Building, Room 1
David Sprehn (University of Copenhagen, Denmark), Cohomology of Groups, Stable Homology of Classical Groups, McGill U., Trottier Building (Engineering), Room 70
Eric Chung (Chinese University of Hong Kong), Computational Inverse Problems: From Multiscale Modeling to Uncertainty Quantification, Cluster-based Generalized Multiscale Finite Element Method for elliptic PDEs with random coefficients, McGill U., Bronfman Building, Room 45
Nazish Iftikhar (National University of Computer and Emerging Sciences, Lahore Campus, Pakistan), Contributed Papers, Classifying Robertson-Walker scale factor using Noether's approach, McGill U., Burnside Hall, Room 1B45
Yoshiko Wakabayashi (Universidade de São Paulo), Current Trends in Combinatorics, Decomposing highly connected graphs into paths of any given length, Centre Mont Royal, Salon International I & II
Stefan Le Coz (Institut de Mathématiques de Toulouse), Finite and Infinite Dimensional Hamiltonian Systems, Stability of multi-solitons for the derivative nonlinear Schrödinger equation, McGill U., Burnside Hall, Room 1205
Yinzheng Gu (Queen's University), Free Probability and its Applications, Bi-monotonic independence for pairs of algebras, McGill U., Arts Building, Room 260
Svetla Vassileva (Champlain College Saint-Lambert), Groups and Algebras, McGill U., Trottier Building (Engineering), Room 60
Nabil Kahouadji (Northeastern Illinois University, USA), Geometry of Differential Equations, Real and Complex, Isometric Immersions of Pseudo-Spherical Surfaces via Differential Equations, McGill U., Arts Building, Room W-120
Jing Tao (University of Oklahoma), Geometric Group Theory, Coarse geometry of the Thurston metric, McGill U., Trottier Building (Engineering), Room 100
Andrés Ángel (Universidad de los Andes, Colombia), Groups in Geometry and Topology, Decompositions of equivariant bordism groups, McGill U., Birks Building, Room 111
Ruxandra Moraru (University of Waterloo), Gauge Theory and Special Geometry, Hermitian-Einstein equations on generalized Kähler manifolds, McGill U., Birks Building, Room 203
Alexandre Fernandes (Univ. Fed. Ceará - Brazil), Holomorphic Foliations and Singularities of Mappings and Spaces, On Lipschitz regularity at infinity of complex algebraic sets, McGill U., Burnside Hall, Room 1214
Enrique Ramirez-Losada (CIMAT Guanajuato), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, Genus of satellite tunnel number one knots, McGill U., Trottier Building (Engineering), Room 1100
Henry Towsner (University of Pennsylvania), Interactions Between Model Theory and Analysis and Topology, How uniform is provable convergence?, McGill U., Trottier Building (Engineering), Room 2110
Dominic Williamson (University of Vienna), Mathematics of Quantum Phases of Matter and Quantum Information, Hamiltonian models for topological phases of matter in three spatial dimensions, McGill U., Bronfman Building, Room 46
Claudio Muñoz (Universidad de Chile, Chile), Nonlinear Dispersive Equations, Decay of small perturbations on 1D scalar field equations, McGill U., Burnside Hall, Room 1B23
Vesa Julin (University of Jyväskylä), Nonlocal Variational Problems, Stability of the Gaussian isoperimetric inequality, McGill U., Burnside Hall, Room 1B36
John Cotrina (Departamento de Economía, Universidad del Pacífico, Lima, Perú), Optimization and Control, Remarks on $p$-cyclic monotone linear operators, McGill U., Bronfman Building, Room 151
Nina Zorboska (University of Manitoba), Operator Theory on Function Spaces, Intrinsic operators on spaces of holomorphic functions, McGill U., Arts Building, Room W-215
Serguei Popov (University of Campinas - UNICAMP), Probability Theory, Two-dimensional random interlacements, McGill U., Arts Building, Room W-20
Alexander Dranishnikov (University of Florida), Quantitative Geometry and Topology, On Topological Complexity of Nonorientable Surfaces, McGill U., McConnell Engineering Building, Room 11
Salvador Venegas-Andraca (Tecnológico de Monterrey, México), Quantum Walks, Open Quantum Walks, Quantum Computation and Related Topics, Further Advances on Quantum PageRank, McGill U., Bronfman Building, Room 178
Sean Clark (Northeastern University), Symmetry in Algebra, Topology, and Physics, Canonical bases for quantized general linear and orthosymplectic Lie superalgebras, McGill U., Birks Building, Room 205
Mikhail Bouniaev (University of Texas Rio Grande Valley, USA), Soft Packings, Nested Clusters, and Condensed Matter, On the Concept of t-bonded Sets, McGill U., Bronfman Building, Room 179
Giacomo Canevari (University of Oxford - UK), Singularities and Phase Transitions in Condensed Matter, Sets of topological singularities for vector-valued maps, McGill U., Burnside Hall, Room 1B39
Alejandro Cabrera (Universidade Federal do Rio de Janeiro, Brazil), Symmetries of Symplectic Manifolds and Related Topics, Odd symplectic supergeometry, characteristic classes and reduction, McGill U., McConnell Engineering Building, Room 13
Christine Kelley (University of Nebraska-Lincoln), Theory and Applications of Finite Fields, Multilevel coding and multistage decoding on partial erasure channels, McGill U., Trottier Building (Engineering), Room 2120
16:30 - 17:15 Claus Sorensen (University of California, San Diego), Galois Representations and Automorphic Forms, Insensitivity of deformation rings under parabolic induction, McGill U., Rutherford Physics Building, Room 115
17:00 - 17:20 Piper Harron (University of Hawaii, USA), Advances in Algebraic and Analytic Number Theory, Shapes of Galois Quartic Number Fields, McGill U., Rutherford Physics Building, Room 114
George Chen (Cape Breton University, Canada), Advances in Analysis, PDE's and Related Applications, Global Existence for a Singular Gierer-Meinhardt and Enzyme Kinetics System, McGill U., Burnside Hall, Room 920
Daniel Moseley (Jacksonville University), Advances in Arrangement Theory, The Orlik-Terao algebra and the cohomology of configuration space, Centre Mont Royal, Room Cartier I & II
Jan Kiwi (Pontificia Universidad Católica de Chile), Arithmetic Dynamics, Irreducibility of complex cubic polynomials with a periodic critical point, McGill U., Rutherford Physics Building, Room 118
Silvia Jimenez-Bolanos (Colgate University, USA), Applied Math and Computational Science across the Americas, Navier Slip Condition for Viscous Fluids on a Rough Boundary, McGill U, Bronfman Building, Room 1
Bernardo Uribe (Universidad del Norte), Cohomology of Groups, Morita Equivalence of pointed fusion categories, McGill U., Trottier Building (Engineering), Room 70
Yury Elena García Puerta (Centro de Investigación en Matemáticas, A.C.), Computational Inverse Problems: From Multiscale Modeling to Uncertainty Quantification, Bayesian analysis of a multi-pathogen model, McGill U., Bronfman Building, Room 45
Imran Naeem (Lahore University of Management Sciences (LUMS), Pakistan), Contributed Papers, A new approach to construct first integrals and closed-form solutions of dynamical systems for epidemics, McGill U., Burnside Hall, Room 1B45
Ortrud Oellermann (University of Winnipeg), Current Trends in Combinatorics, On the Mean Order of Certain Types of Substructures of Graphs, Centre Mont Royal, Salon International I & II
Walter Craig (McMaster University), Finite and Infinite Dimensional Hamiltonian Systems, A Hamiltonian and its Birkhoff normal form for water waves, McGill U., Burnside Hall, Room 1205
Pierre Tarrago (Centro de Investigación en Matemáticas, Guanajuato, Mexico), Free Probability and its Applications, Free wreath product quantum groups and free probability, McGill U., Arts Building, Room 260
Victor Petrogradsky (University of Brasilia, Brazil), Groups and Algebras, Lie identities of symmetric Poisson algebras, McGill U., Trottier Building (Engineering), Room 60
Ana Rechtman (National Autonomous University of Mexico, Mexico City, Mexico), Geometry of Differential Equations, Real and Complex, The trunkenness of a flow, McGill U., Arts Building, Room W-120
Pallavi Dani (Louisiana State University), Geometric Group Theory, Subgroup distortion in hyperbolic groups, McGill U., Trottier Building (Engineering), Room 100
Guillermo Cortiñas (Universidad de Buenos Aires, Argentina), Groups in Geometry and Topology, $G$-equivariant, bivariant algebraic K-theory, McGill U., Birks Building, Room 111
Andriy Haydys (Universität Bielefeld), Gauge Theory and Special Geometry, The Seiberg-Witten equations with multiple spinors in dimension three, McGill U., Birks Building, Room 203
Fabiola Manjarrez-Gutierrez (Instituto de Matemáticas, UNAM, Cuernavaca), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, Genus one knots and circular thin position, McGill U., Trottier Building (Engineering), Room 1100
Jon Yard (University of Waterloo and Perimeter Institute), Mathematics of Quantum Phases of Matter and Quantum Information, Topological phases and arithmetic, McGill U., Bronfman Building, Room 46
Massimiliano Morini (Università degli studi di Parma), Nonlocal Variational Problems, Nonlinear stability results for the nonlocal Mullins-Sekerka flow, McGill U., Burnside Hall, Room 1B36
Rafael Correa (Departamento de Ingeniería Matemática, Universidad de Chile, Santiago, Chile), Optimization and Control, On Brøndsted-Rockafellar's Theorem for convex lower semicontinuous epi-pointed functions in locally convex spaces, McGill U., Bronfman Building, Room 151
Javad Mashreghi (Université Laval, Canada), Operator Theory on Function Spaces, The Gleason-Kahane-Zelazko theorem for modules, McGill U., Arts Building, Room W-215
Sanchayan Sen (McGill University), Probability Theory, Novel scaling limits for random discrete structures, McGill U., Arts Building, Room W-20
Assaf Naor (Princeton University), Quantitative Geometry and Topology, A spectral gap precludes low-dimensional embeddings, McGill U., McConnell Engineering Building, Room 11
Takuya Machida (Nihon University, Japan), Quantum Walks, Open Quantum Walks, Quantum Computation and Related Topics, Quantum walk on a half line, McGill U., Bronfman Building, Room 178
Rita Jiménez Rolland (CCMM-UNAM), Symmetry in Algebra, Topology, and Physics, Representation stability and convergence of point-counting, McGill U., Birks Building, Room 205
Emily Clader (San Francisco State University), Stringy Geometry, Higher-genus wall-crossing in Gromov-Witten and Landau-Ginzburg theory, McGill U., McConnell Engineering Building, Room 12
Undine Leopold (Northeastern University, USA), Soft Packings, Nested Clusters, and Condensed Matter, Euclidean Symmetry of Closed Surfaces Immersed in 3-Space, McGill U., Bronfman Building, Room 179
Ihsan Topaloglu (Virginia Commonwealth University - USA), Singularities and Phase Transitions in Condensed Matter, Height-constrained nonlocal interaction energies via degenerate diffusion, McGill U., Burnside Hall, Room 1B39
Shlomo Sternberg (Harvard University, USA), Symmetries of Symplectic Manifolds and Related Topics, The Stasheff associahedron, McGill U., McConnell Engineering Building, Room 13
Ariane Masuda (New York City College of Technology (New York City, United States)), Theory and Applications of Finite Fields, Goppa Codes over Kummer extensions, McGill U., Trottier Building (Engineering), Room 2120
17:00 - 17:45 Jose Mazon (University of Valencia), Nonlinear Partial Differential Equations, Kurdyka-Łojasiewicz-Simon inequality for gradient flows in metric spaces, McGill U., Burnside Hall, Room 1B24
17:00 - 18:00 Steven Lu (UQAM), Calabi-Yau Manifolds and Calabi-Yau Algebra, Projective Kähler Manifolds with semi-negative holomorphic sectional curvature, McGill U., McConnell Engineering Building, 204
Jose Pablo Pelaez Menaldo (UNAM, Mexico City, Mexico), Recent Trends in Algebraic Cycles, Algebraic K-Theory and Motives, A triangulated approach to the Bloch-Beilinson filtration, McGill U., McConnell Engineering Building, Room 304
17:15 - 18:00 Aftab Pande (Universidade Federal do Rio de Janeiro), Galois Representations and Automorphic Forms, Reductions of Galois Representations, McGill U., Rutherford Physics Building, Room 115
17:30 - 17:50 Ari Shnidman (Boston College, USA), Advances in Algebraic and Analytic Number Theory, Quadratic twists of an elliptic curve admitting a 3-isogeny, McGill U., Rutherford Physics Building, Room 114
Mariana Smit Vega Garcia (University of Washington, USA), Advances in Analysis, PDE's and Related Applications, The singular free boundary in the Signorini problem, McGill U., Burnside Hall, Room 920
Michael J. Falk (Northern Arizona University), Advances in Arrangement Theory, Weak orders - left and right - and configuration spaces, Centre Mont Royal, Room Cartier I & II
Michelle Manes (University of Hawaii), Arithmetic Dynamics, Dynamical Belyi maps, McGill U., Rutherford Physics Building, Room 118
Carlos E. Mejía (Universidad Nacional de Colombia), Applied Math and Computational Science across the Americas, Finite difference methods for fractional advection dispersion equations, McGill U, Bronfman Building, Room 1
Christopher Drupieski (DePaul University), Cohomology of Groups, Some graded analogues of one-parameter subgroups and applications to the cohomology of graded group schemes, McGill U., Trottier Building (Engineering), Room 70
Rehana Naz (Lahore School of Economics, Pakistan), Contributed Papers, The first integrals and closed-form solutions of optimal control problems, McGill U., Burnside Hall, Room 1B45
Sergey Norin (McGill University), Current Trends in Combinatorics, Asymptotic density of graphs excluding a disconnected minor, Centre Mont Royal, Salon International I & II
Renato Calleja (Universidad Nacional Autónoma de México), Finite and Infinite Dimensional Hamiltonian Systems, Symmetries and choreographies in families that bifurcate from the polygonal relative equilibrium of the n-body problem, McGill U., Burnside Hall, Room 1205
Jiun-Chau Wang (University of Saskatchewan), Free Probability and its Applications, Multiplicative bi-free infinite divisibility, McGill U., Arts Building, Room 260
Mikhail Malakhaltsev (Universidad de los Andes, Colombia), Geometry of Differential Equations, Real and Complex, Binary differential equations and 3-webs with singularities, McGill U., Arts Building, Room W-120
Tullia Dymarz (University of Wisconsin), Geometric Group Theory, A model for random nilpotent groups, McGill U., Trottier Building (Engineering), Room 100
Daniel Juan Pineda (Centro de Ciencias Matemáticas UNAM, México), Groups in Geometry and Topology, Rigidity for high graph manifolds, McGill U., Birks Building, Room 111
Francisco Gonzalez-Acuña (Instituto de Matemáticas, UNAM, Cuernavaca), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, 2-stratifold spines of 3-manifolds, McGill U., Trottier Building (Engineering), Room 1100
Riccardo Cristoferi (Carnegie Mellon University), Nonlocal Variational Problems, Periodic critical points of the Ohta-Kawasaki functional, McGill U., Burnside Hall, Room 1B36
Pedro Pérez-Aros (Centro de Modelamiento Matemático, Santiago, Chile), Optimization and Control, Subdifferential characterization of probability functions under Gaussian distribution, McGill U., Bronfman Building, Room 151
Ruhan Zhao (SUNY at Brockport, USA), Operator Theory on Function Spaces, Closures of Hardy and Hardy-Sobolev spaces in the Bloch type space on the unit ball, McGill U., Arts Building, Room W-215
David Plaza (University of Chile), Symmetry in Algebra, Topology, and Physics, Homomorphisms between cell modules of the endomorphism ring of a Bott-Samelson bimodule, McGill U., Birks Building, Room 205
Hsian-Hua Tseng (Ohio State), Stringy Geometry, A tale of four theories, McGill U., McConnell Engineering Building, Room 12
Egon Schulte (Northeastern University), Soft Packings, Nested Clusters, and Condensed Matter, Highly Symmetric Complexes and Graphs in Ordinary Space, McGill U., Bronfman Building, Room 179
Duvan A. Henao Manrique (Pontificia Universidad Católica de Chile - Chile), Singularities and Phase Transitions in Condensed Matter, Existence of minimizers of the neo-Hookean energy in 3D, McGill U., Burnside Hall, Room 1B39
Yael Karshon (University of Toronto, Canada), Symmetries of Symplectic Manifolds and Related Topics, Classification results in equivariant symplectic geometry, McGill U., McConnell Engineering Building, Room 13
Gretchen Matthews (Clemson University, South Carolina, United States), Theory and Applications of Finite Fields, AG codes as products of Reed-Solomon codes, McGill U., Trottier Building (Engineering), Room 2120
20:00 - 22:30 Cecilia Quartet Concert, McGill University, Pollack Hall
08:30 - 09:30 Bernardo Uribe (Universidad del Norte, Barranquilla, Colombia), Invited Speakers, Decomposition of equivariant complex vector bundles on fixed point sets, Cartier I & II, CMR
Daniel T. Wise (McGill University, Canada), Invited Speakers, The Cubical Route to Understanding Groups, Symposia Auditorium, CMR
Pablo Ferrari (University of Buenos Aires, Argentina), Invited Speakers, Stationary solitons in the Box Ball System in $\mathbb{Z}$, International I (E) & II (F), CMR
10:00 - 11:00 Peter Ozsvath (Princeton University, USA), Plenary Lectures, Computing Knot Floer homology, Symposia Auditorium, CMR
11:00 - 16:00 Career Fair, Foyer, Level 4, Centre Mont Royal
11:15 - 11:35 Olga Balkanova (University of Turku, Finland), Advances in Algebraic and Analytic Number Theory, Non-vanishing of automorphic L-functions in the weight aspect, McGill U., Rutherford Physics Building, Room 114
Takuro Abe (Kyushu University), Advances in Arrangement Theory, Hyperplane arrangements and Hessenberg varieties, Centre Mont Royal, Room Cartier I & II
Juan Rivera-Letelier (University of Rochester), Arithmetic Dynamics, A trichotomy for the ergodic theory of ultrametric rational maps, McGill U., Rutherford Physics Building, Room 118
Frédéric Valentin (LNCC, Brazil), Applied Math and Computational Science across the Americas, Multiscale Hybrid-Mixed Method for Fluids, McGill U, Bronfman Building, Room 1
Daniel Nakano (University of Georgia), Cohomology of Groups, Bilinear and Quadratic Forms of Rational Modules of Split Reductive Groups, McGill U., Trottier Building (Engineering), Room 70
Julianne Chung (Virginia Tech), Computational Inverse Problems: From Multiscale Modeling to Uncertainty Quantification, Computationally efficient methods for Bayesian inversion and uncertainty quantification, McGill U., Bronfman Building, Room 45
Vladislav Bukshtynov (Florida Institute of Technology), Contributed Papers, Optimal Reconstruction of Constitutive Relations for Porous Media Flows, McGill U., Burnside Hall, Room 1B45
Michela Procesi (Università di Roma Tre), Finite and Infinite Dimensional Hamiltonian Systems, Finite dimensional invariant tori in PDEs, McGill U., Burnside Hall, Room 1205
Alexandru Nica (University of Waterloo), Free Probability and its Applications, An application of free cumulants to meandric systems, McGill U., Arts Building, Room 260
Pavel Zalesskii (University of Brasilia), Groups and Algebras, The profinite completion of 3-manifold groups, McGill U., Trottier Building (Engineering), Room 60
Alessandro Portaluri (University of Torino, Italy), Geometry of Differential Equations, Real and Complex, Index and stability of closed semi-Riemannian geodesics, McGill U., Arts Building, Room W-120
Johanna Mangahas (University at Buffalo), Geometric Group Theory, Normal right-angled Artin group subgroups of mapping class groups, McGill U., Trottier Building (Engineering), Room 100
Cheol-Hyun Cho (University of Seoul, Korea), Groups in Geometry and Topology, Group actions and localized mirror functors, McGill U., Birks Building, Room 111
Lorenzo Foscolo (Stony Brook), Gauge Theory and Special Geometry, Non-compact G2 manifolds from asymptotically conical Calabi-Yau 3-folds, McGill U., Birks Building, Room 203
Liam Watson (Université de Sherbrooke), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, Bordered Floer homology via immersed curves, McGill U., Trottier Building (Engineering), Room 1100
Nate Ackerman (Harvard), Interactions Between Model Theory and Analysis and Topology, Transferring Results from Infinitary Classical Model Theory to Infinitary Continuous Model Theory, McGill U., Trottier Building (Engineering), Room 2110
Luiz Gustavo Farah (Federal University of Minas Gerais (UFMG), Brazil), Nonlinear Dispersive Equations, Instability of solitary waves in the KdV-type equations - Part I, McGill U., Burnside Hall, Room 1B23
Theodore Kolokolnikov (Dalhousie University), Nonlocal Variational Problems, The rise and fall of kings: globalization and class mobility, McGill U., Burnside Hall, Room 1B36
Valeriano Antunes de Oliveira (Universidade Estadual Paulista, S. J. do Rio Preto, Brazil), Optimization and Control, The constant rank constraint qualification in the continuous-time context, McGill U., Bronfman Building, Room 151
Raphael Clouatre (University of Manitoba, Canada), Operator Theory on Function Spaces, Annihilating ideals and spectra for commuting row contractions, McGill U., Arts Building, Room W-215
Lionel Levine (Cornell University), Probability Theory, Random Harmonic Functions, McGill U., Arts Building, Room W-20
Matthew Kahle (Ohio State University), Quantitative Geometry and Topology, Topological solid, liquid, and gas, McGill U., McConnell Engineering Building, Room 11
Chandrashekar Madaiah (Institute for Mathematical Sciences, Chennai, India), Quantum Walks, Open Quantum Walks, Quantum Computation and Related Topics, Accelerating quantum walks, McGill U., Bronfman Building, Room 178
Reimundo Heluani (IMPA), Symmetry in Algebra, Topology, and Physics, Chiral homology and rationality of vertex algebras, McGill U., Birks Building, Room 205
Oleg Musin (University of Texas Rio Grande Valley, USA), Soft Packings, Nested Clusters, and Condensed Matter, Extreme Euclidean and spherical point configurations, McGill U., Bronfman Building, Room 179
Oleg Lavrentovich (Kent State University - USA), Singularities and Phase Transitions in Condensed Matter, Singularities and Flows in Living Liquid Crystals, McGill U., Burnside Hall, Room 1B39
Alessia Mandini (Pontifícia Universidade Católica do Rio de Janeiro, Brazil), Symmetries of Symplectic Manifolds and Related Topics, Symplectic embeddings and infinite staircases - Part I, McGill U., McConnell Engineering Building, Room 13
Daniel Katz (California State University, Northridge (Los Angeles, United States)), Theory and Applications of Finite Fields, Valuations of Weil Sums of Binomials, McGill U., Trottier Building (Engineering), Room 2120
11:15 - 12:00 Ellen Eischen (University of Oregon), Galois Representations and Automorphic Forms, Automorphic forms, congruences, and p-adic L-functions, McGill U., Rutherford Physics Building, Room 115
Monica Clapp (Universidad Nacional Autónoma de México), Nonlinear Partial Differential Equations, Towers of nodal bubbles for the Bahri-Coron problem in punctured domains, McGill U., Burnside Hall, Room 1B24
11:15 - 12:15 Radu Laza (Stony Brook University), Calabi-Yau Manifolds and Calabi-Yau Algebra, (Talk Cancelled), McGill U., McConnell Engineering Building, 204
Kirsten Wickelgren (Georgia Tech, USA), Recent Trends in Algebraic Cycles, Algebraic K-Theory and Motives, Motivic Euler numbers and an arithmetic count of the lines on a cubic surface, McGill U., McConnell Engineering Building, Room 304
Bernardo Uribe Jongbloed (Universidad del Norte, Barranquilla, Colombia), Stringy Geometry, Stringy structures in cohomology and K-theory of orbifolds, McGill U., McConnell Engineering Building, Room 12
11:45 - 12:05 Luis Lomeli (Universidad de Valparaíso, Chile), Advances in Algebraic and Analytic Number Theory, Asai cube L-functions and the local Langlands correspondence, McGill U., Rutherford Physics Building, Room 114
Luis Angel Gutierrez Mendez (Benemérita Universidad Autónoma de Puebla, México), Advances in Analysis, PDE's and Related Applications, Classical sequence spaces contained in the space of Henstock-Kurzweil integrable functions, McGill U., Burnside Hall, Room 920
Jorge Pereira (IMPA), Advances in Arrangement Theory, Representations of quasi-projective groups, Centre Mont Royal, Room Cartier I & II
Jamie Juul (Amherst College), Arithmetic Dynamics, Images of Iterated Polynomials over Finite Fields, McGill U., Rutherford Physics Building, Room 118
Osni Marques (Lawrence Berkeley National Laboratory, USA), Applied Math and Computational Science across the Americas, Nonlinear Eigensolver based on Padé Approximants, McGill U., Bronfman Building, Room 1
Jon Carlson (University of Georgia, USA), Cohomology of Groups, Separable rings in the stable category over cyclic $p$-groups, McGill U., Trottier Building (Engineering), Room 70
Olivier Zahm (MIT), Computational Inverse Problems: From Multiscale Modeling to Uncertainty Quantification, Certified dimension reduction in nonlinear Bayesian inverse problems, McGill U., Bronfman Building, Room 45
Buthinah Bin Dehaish (King Abdulaziz University), Contributed Papers, Fixed Point Theorem for monotone Lipschitzian mappings, McGill U., Burnside Hall, Room 1B45
Alexandr Kostochka (University of Illinois at Urbana-Champaign), Current Trends in Combinatorics, Packing chromatic number of subcubic graphs, Centre Mont Royal, Salon International I & II
Yannick Sire (Johns Hopkins University), Finite and Infinite Dimensional Hamiltonian Systems, A posteriori KAM for PDEs, McGill U., Burnside Hall, Room 1205
James Pascoe (Washington University in St. Louis), Free Probability and its Applications, Applications of model-realization theory to inverse problems in free probability, McGill U., Arts Building, Room 260
Ivan Shestakov (University of São Paulo), Groups and Algebras, A finite representation of Jordan algebras, McGill U., Trottier Building (Engineering), Room 60
Martha P. Dussan Angulo (Universidade de São Paulo, Brazil), Geometry of Differential Equations, Real and Complex, Bjorling problem for timelike surfaces and solutions of homogeneous wave equation, McGill U., Arts Building, Room W-120
Sam Taylor (Yale University), Geometric Group Theory, Largest projections for random walks, McGill U., Trottier Building (Engineering), Room 100
Eduardo González (University of Massachusetts, USA), Groups in Geometry and Topology, Gauged Maps and quantum K-theory, McGill U., Birks Building, Room 111
Ljudmila Kamenova (Stony Brook University), Gauge Theory and Special Geometry, Hyperbolicity in hyperkaehler geometry, McGill U., Birks Building, Room 203
Tye Lidman (North Carolina State University), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, Concordance in homology spheres, McGill U., Trottier Building (Engineering), Room 1100
Alessandro Vignati (York University), Interactions Between Model Theory and Analysis and Topology, The model theory of the Jacelon algebra, McGill U., Trottier Building (Engineering), Room 2110
Svetlana Roudenko (George Washington University, USA), Nonlinear Dispersive Equations, Instability of solitary waves in the KdV-type equations - Part II, McGill U., Burnside Hall, Room 1B23
David Shirokoff (New Jersey Institute of Technology), Nonlocal Variational Problems, Conic programming of a variational inequality for self-assembly, McGill U., Burnside Hall, Room 1B36
Yboon García (Departamento de Economía, Universidad del Pacífico, Lima, Perú), Optimization and Control, Existence results for equilibrium problems, McGill U., Bronfman Building, Room 151
Michael Stessin (SUNY at Albany, USA), Operator Theory on Function Spaces, Spectral characterization of representations of symmetry groups, McGill U., Arts Building, Room W-215
Tom Hutchcroft (University of British Columbia), Probability Theory, The strange geometry of high-dimensional random forests, McGill U., Arts Building, Room W-20
Jing Tao (University of Oklahoma), Quantitative Geometry and Topology, Fine geometry of the Thurston metric, McGill U., McConnell Engineering Building, Room 11
Barry C Sanders (University of Calgary), Quantum Walks, Open Quantum Walks, Quantum Computation and Related Topics, Wavelets for quantum state generation, McGill U., Bronfman Building, Room 178
Matt Hogancamp (USC), Symmetry in Algebra, Topology, and Physics, Categorical Diagonalization, McGill U., Birks Building, Room 205
Nikolai Erokhovets (Steklov Mathematical Institute, Russia), Soft Packings, Nested Clusters, and Condensed Matter, Operations sufficient to obtain any Pogorelov polytope from barrels. Improvements for fullerenes., McGill U., Bronfman Building, Room 179
Arghir Dani Zarnescu (Basque Center for Applied Mathematics), Singularities and Phase Transitions in Condensed Matter, On the dynamical emergence of nematic defects, McGill U., Burnside Hall, Room 1B39
Ana Rita Pires (Fordham University, USA), Symmetries of Symplectic Manifolds and Related Topics, Symplectic embeddings and infinite staircases - Part II, McGill U., McConnell Engineering Building, Room 13
Andreas Weingartner (Southern Utah University), Theory and Applications of Finite Fields, The degree distribution of polynomial divisors over finite fields, McGill U., Trottier Building (Engineering), Room 2120
13:30 - 14:15 Daniel Barrera Salazar (Universitat Politècnica de Catalunya), Galois Representations and Automorphic Forms, On the exceptional zeros of p-adic L-functions of Hilbert modular forms, McGill U., Rutherford Physics Building, Room 115
13:45 - 14:05 Leo Goldmakher (Williams College, USA), Advances in Algebraic and Analytic Number Theory, The Pólya-Vinogradov Inequality, McGill U., Rutherford Physics Building, Room 114
Rita Jimenez Rolland (Universidad Nacional Autónoma de México), Advances in Arrangement Theory, Stability for hyperplane complements and statistics over finite fields, Centre Mont Royal, Room Cartier I & II
Kathryn Lindsey (University of Chicago), Arithmetic Dynamics, Convex shapes and harmonic caps, McGill U., Rutherford Physics Building, Room 118
Sandra Augusta Santos (UNICAMP, Brazil), Applied Math and Computational Science across the Americas, On the rank-sparsity decomposition problem, McGill U., Bronfman Building, Room 1
Julia Pevtsova (University of Washington), Cohomology of Groups, Detection of nilpotence and projectivity for finite unipotent group schemes, McGill U., Trottier Building (Engineering), Room 70
Chuang Xu (University of Alberta), Contributed Papers, Best finite constrained approximations of one-dimensional probabilities, McGill U., Burnside Hall, Room 1B45
Silvia Fernández (California State University, Northridge), Current Trends in Combinatorics, Rectilinear Local Crossing Numbers of Complete and Complete Bipartite Graphs, Centre Mont Royal, Salon International I & II
Slim Ibrahim (University of Victoria), Finite and Infinite Dimensional Hamiltonian Systems, Ground State Solutions of the Gross Pitaevskii Equation Associated to Exciton-Polariton Bose-Einstein Condensates, McGill U., Burnside Hall, Room 1205
Todd Kemp (University of California, San Diego), Free Probability and its Applications, Partitioned Matrices with Correlations, McGill U., Arts Building, Room 260
Olga Kharlampovich (Hunter College, CUNY), Groups and Algebras, Tarski-type problems for free Lie algebras, McGill U., Trottier Building (Engineering), Room 60
Débora Lopes da Silva (Universidade Federal do Sergipe, Brazil), Geometry of Differential Equations, Real and Complex, Codimension one partially umbilic singularities of hypersurfaces of $\mathbb{R}^4$, McGill U., Arts Building, Room W-120
Jason Behrstock (City University of New York), Geometric Group Theory, Quasiflats in hierarchically hyperbolic spaces, McGill U., Trottier Building (Engineering), Room 100
César Galindo (Universidad de los Andes, Colombia), Groups in Geometry and Topology, Isocategorical Groups and their Weil Representations, McGill U., Birks Building, Room 111
Andrew Clarke (Universidade Federal do Rio de Janeiro), Gauge Theory and Special Geometry, $G_2$ structures and the Strominger system in dimension $7$, McGill U., Birks Building, Room 203
Bruna Oréfice Okamoto (Univ. Fed. São Carlos - Brazil), Holomorphic Foliations and Singularities of Mappings and Spaces, Equisingularity of map germs from a surface to the plane, McGill U., Burnside Hall, Room 1214
Ying Hu (Université du Québec à Montréal), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, McGill U., Trottier Building (Engineering), Room 1100
Xavier Caicedo (Universidad de los Andes, Bogotá, Colombia), Interactions Between Model Theory and Analysis and Topology, A Lindström theorem for continuous logic, McGill U., Trottier Building (Engineering), Room 2110
Xie Chen (Caltech), Mathematics of Quantum Phases of Matter and Quantum Information, Group Cohomology and Symmetry Protected Topological Phases, McGill U., Bronfman Building, Room 46
Walter Craig (McMaster University, Canada), Nonlinear Dispersive Equations, Birkhoff normal form for nonlinear wave equations, McGill U., Burnside Hall, Room 1B23
Geraldo Nunes Silva (Universidade Estadual Paulista, S. J. do Rio Preto, Brazil), Optimization and Control, Minmax control problems and the Hamilton-Jacobi Equation, McGill U., Bronfman Building, Room 151
Thomas Ransford (Université Laval), Operator Theory on Function Spaces, Cyclicity in the harmonic Dirichlet space, McGill U., Arts Building, Room W-215
Eyal Lubetzky (New York University), Probability Theory, Mixing times of critical Potts models, McGill U., Arts Building, Room W-20
Boris Lishak (University of Toronto), Quantitative Geometry and Topology, The space of triangulations of compact 4-manifolds, McGill U., McConnell Engineering Building, Room 11
David Feder (University of Calgary), Quantum Walks, Open Quantum Walks, Quantum Computation and Related Topics, Detecting topological order in two dimensions by continuous-time quantum walk, McGill U., Bronfman Building, Room 178
Monica Vazirani (UC Davis), Symmetry in Algebra, Topology, and Physics, The "Springer" representation of the DAHA, McGill U., Birks Building, Room 205
Carla Farsi (University of Colorado Boulder), Stringy Geometry, The spectrum of orbifold connected sums and collapsing, McGill U., McConnell Engineering Building, Room 12
Alexey Glazyrin (University of Texas Rio Grande Valley), Soft Packings, Nested Clusters, and Condensed Matter, Average degrees in sphere packings, McGill U., Bronfman Building, Room 179
Daniele Sepe (Universidade Federal Fluminense, Brazil), Symmetries of Symplectic Manifolds and Related Topics, Integrable billiards and symplectic embeddings, McGill U., McConnell Engineering Building, Room 13
Jonathan Jedwab (Simon Fraser University (Burnaby, Canada)), Theory and Applications of Finite Fields, A strong external difference family with more than two subsets, McGill U., Trottier Building (Engineering), Room 2120
13:45 - 14:30 Yannick Sire (Johns Hopkins University), Nonlinear Partial Differential Equations, A singular perturbation problem for the fractional Allen-Cahn equation, McGill U., Burnside Hall, Room 1B24
13:45 - 14:45 Kathryn Hare (University of Waterloo, Canada), Advances in Analysis, PDE's and Related Applications, Local dimensions of self-similar measures with overlap, McGill U., Burnside Hall, Room 920
Katrin Wendland (University of Freiburg), Calabi-Yau Manifolds and Calabi-Yau Algebra, Hodge-elliptic genera and how they govern K3 theories, McGill U., McConnell Engineering Building, 204
Daniel Juan Pineda (UNAM, Morelia, Mexico), Recent Trends in Algebraic Cycles, Algebraic K-Theory and Motives, On Nil groups of the quaternion group, McGill U., McConnell Engineering Building, Room 304
14:15 - 14:35 Victor Cuauhtémoc Garcia (Metropolitan Autonomous University, Mexico), Advances in Algebraic and Analytic Number Theory, Additive basis with coefficients of newforms, McGill U., Rutherford Physics Building, Room 114
Nir Gadish (University of Chicago), Advances in Arrangement Theory, Finitely generated sequences of arrangements and representation stability, Centre Mont Royal, Room Cartier I & II
Kenneth Jacobs (Northwestern University), Arithmetic Dynamics, Archimedean Perspectives from non-Archimedean Dynamics, McGill U., Rutherford Physics Building, Room 118
Marcus Sarkis (Worcester Polytechnic Institute, USA), Applied Math and Computational Science across the Americas, On an adaptive finite element phase-field dynamic fracture model, McGill U., Bronfman Building, Room 1
Noé Bárcenas (Centro de Ciencias Matemáticas, Universidad Nacional Autónoma de México, Morelia), Cohomology of Groups, Stable Finiteness Properties of Infinite Discrete Groups, McGill U., Trottier Building (Engineering), Room 70
Alen Alexanderian (North Carolina State University), Computational Inverse Problems: From Multiscale Modeling to Uncertainty Quantification, Scalable methods for D-optimal experimental design in large-scale Bayesian linear inverse problems, McGill U., Bronfman Building, Room 45
Eric Jose Avila (Universidad Autónoma de Yucatán), Contributed Papers, Global dynamics of a periodic SEIRS model with general incidence rate, McGill U., Burnside Hall, Room 1B45
Martín Safe (Universidad Nacional del Sur, Argentina), Current Trends in Combinatorics, Characterizing and finding forbidden subgraphs for subclasses of circular-arc graphs, Centre Mont Royal, Salon International I & II
Rosa María Vargas Magaña (Universidad Nacional Auntónoma de México), Finite and Infinite Dimensional Hamiltonian Systems, Whitham-Boussinesq model for variable depth topography. Results on normal and trapped modes for non trivial geometries., McGill U., Burnside Hall, Room 1205
Emily Redelmeier (ISARA Corporation), Free Probability and its Applications, Cumulants in the finite-matrix and higher-order free cases, McGill U., Arts Building, Room 260
Dessislava Kochloukova (State University of Campinas (UNICAMP), Brazil), Groups and Algebras, Subdirect sums and fibre sums of Lie algebras, McGill U., Trottier Building (Engineering), Room 60
Jean Carlos Cortissoz (Universidad de los Andes, Colombia), Geometry of Differential Equations, Real and Complex, On Bloch's Theorem, McGill U., Arts Building, Room W-120
Talia Fernos (University of North Carolina at Greensboro), Geometric Group Theory, Regular Isometries of CAT(0) Cube Complexes are Plentiful, McGill U., Trottier Building (Engineering), Room 100
Carlos Segovia (UNAM, México), Groups in Geometry and Topology, High dimensional topological quantum field theory, McGill U., Birks Building, Room 111
Henrique Sá Earp (UNICAMP), Gauge Theory and Special Geometry, Construction of $\rm G_2$-instantons via twisted connected sums, McGill U., Birks Building, Room 203
Mauricio Corrêa Jr. (Univ. Fed. Minas Gerais - Brazil), Holomorphic Foliations and Singularities of Mappings and Spaces, Čech-Bott-Chern cohomology and refined residue theory, McGill U., Burnside Hall, Room 1214
Bruno Aaron Cisneros de la Cruz (Instituto de Matemáticas, UNAM Oaxaca), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, Normal forms on virtual braid groups, McGill U., Trottier Building (Engineering), Room 1100
Jose Iovino (The University of Texas at San Antonio), Interactions Between Model Theory and Analysis and Topology, McGill U., Trottier Building (Engineering), Room 2110
Paulo Teotonio-Sobrinho (Universidade de São Paulo), Mathematics of Quantum Phases of Matter and Quantum Information, Topological Order in Higher Gauge Theories and Cohomology, McGill U., Bronfman Building, Room 46
Mihaela Ifrim (University of California, Berkeley, USA), Nonlinear Dispersive Equations, Well-posedness and dispersive decay of small data solutions for the Benjamin-Ono equation, McGill U., Burnside Hall, Room 1B23
Almut Burchard (University of Toronto), Nonlocal Variational Problems, On a non-local shape optimization problem related with swarming, McGill U., Burnside Hall, Room 1B36
Cristopher Hermosilla (Universidad Técnica Federico Santa María, Santiago, Chile), Optimization and Control, Constrained and impulsive Linear Quadratic control problems, McGill U., Bronfman Building, Room 151
Catherine Beneteau (University of South Florida, USA), Operator Theory on Function Spaces, Zeros of optimal polynomial approximants in Dirichlet-type spaces, McGill U., Arts Building, Room W-215
Tianyi Zheng (University of California, San Diego), Probability Theory, Joint behavior of volume growth and entropy of random walks on groups, McGill U., Arts Building, Room W-20
Regina Rotman (University of Toronto), Quantitative Geometry and Topology, Short geodesics on closed Riemannian manifolds, McGill U., McConnell Engineering Building, Room 11
Marcos Villagra (National University of Asunción, Paraguay), Quantum Walks, Open Quantum Walks, Quantum Computation and Related Topics, Quantum Algorithms for Multiobjective Optimization, McGill U., Bronfman Building, Room 178
Eugene Gorsky (UC Davis), Symmetry in Algebra, Topology, and Physics, Khovanov-Rozansky homology and the flag Hilbert scheme, McGill U., Birks Building, Room 205
Evan DeCorte (McGill University), Soft Packings, Nested Clusters, and Condensed Matter, The Witsenhausen problem in dimension 3, McGill U., Bronfman Building, Room 179
Stanley Alama (McMaster University - Canada), Singularities and Phase Transitions in Condensed Matter, Energy minimizing patterns for a copolymer model with confinement, McGill U., Burnside Hall, Room 1B39
Eckhard Meinrenken (University of Toronto, Canada), Symmetries of Symplectic Manifolds and Related Topics, On the quantization of Hamiltonian loop group spaces, McGill U., McConnell Engineering Building, Room 13
Lucia Moura (University of Ottawa (Ottawa, Canada)), Theory and Applications of Finite Fields, Ordered Orthogonal Array Construction Using LFSR Sequences, McGill U., Trottier Building (Engineering), Room 2120
14:15 - 14:45 Dorette Pronk (Dalhousie University), Stringy Geometry, Mapping Groupoids for Topological Orbifolds, McGill U., McConnell Engineering Building, Room 12
14:15 - 15:00 Matías Victor Moya Giusti (Max Planck Institute for Mathematics), Galois Representations and Automorphic Forms, Ghost classes in the cohomology of the Shimura variety associated to GSp(4), McGill U., Rutherford Physics Building, Room 115
14:30 - 15:30 Building a Teaching Dossier, Mansfield 5, Centre Mont-Royal
14:45 - 15:05 Ayla Gafni (University of Rochester, USA), Advances in Algebraic and Analytic Number Theory, Pair correlation statistics in subsets of the integers, McGill U., Rutherford Physics Building, Room 114
Juan Hector Arredondo Ruiz (Universidad Autónoma Metropolitana, México), Advances in Analysis, PDE's and Related Applications, On the Factorization theorem in the space of Henstock-Kurzweil integrable functions, McGill U., Burnside Hall, Room 920
Andrew Berget (Western Washington University), Advances in Arrangement Theory, Internal zonotopal algebras and monomial reflection groups, Centre Mont Royal, Room Cartier I & II
Ricardo Menares (Pontificia Universidad Católica de Valparaíso), Arithmetic Dynamics, Equidistribution of p-adic Hecke orbits on the modular curve, McGill U., Rutherford Physics Building, Room 118
Suely Oliveira (The University of Iowa, USA), Applied Math and Computational Science across the Americas, Parallel Computing Large-scale Data Problems, McGill U., Bronfman Building, Room 1
Alejandro Adem (University of British Columbia), Cohomology of Groups, Free Finite Group Actions on Rational Homology Spheres, McGill U., Trottier Building (Engineering), Room 70
Misha Kilmer (Tufts University), Computational Inverse Problems: From Multiscale Modeling to Uncertainty Quantification, Krylov Recycling for Sequences of Shifted Systems Arising in Image Reconstruction, McGill U., Bronfman Building, Room 45
Eugen Mandrescu (Holon Institute of Technology, Israel), Contributed Papers, Shedding vertices and well-covered graphs, McGill U., Burnside Hall, Room 1B45
Penny Haxell (University of Waterloo), Current Trends in Combinatorics, Stability for matchings in regular tripartite hypergraphs, Centre Mont Royal, Salon International I & II
Livia Corsi (Georgia Institute of Technology), Finite and Infinite Dimensional Hamiltonian Systems, A non-separable locally integrable Hamiltonian system, McGill U., Burnside Hall, Room 1205
Brent Nelson (University of California, Berkeley), Free Probability and its Applications, Free Stein kernels and an improvement of the free logarithmic Sobolev inequality, McGill U., Arts Building, Room 260
Alexei Krassilnikov (University of Brasilia, Brazil), Groups and Algebras, Lie nilpotent associative algebras, McGill U., Trottier Building (Engineering), Room 60
Adolfo Guillot Santiago (National Autonomous University of Mexico, Cuernavaca, Mexico), Geometry of Differential Equations, Real and Complex, Algebraic differential equations with uniform solutions, McGill U., Arts Building, Room W-120
Jason Manning (Cornell University), Geometric Group Theory, Cubulations from improper actions, McGill U., Trottier Building (Engineering), Room 100
Jean-Pierre Magnot (Lycée Jeanne d'Arc, Clermont-Ferrand, France), Groups in Geometry and Topology, Differential geometry and well-posedness of the KP hierarchy, McGill U., Birks Building, Room 111
Florent Schaffhauser (Universidad de Los Andes), Gauge Theory and Special Geometry, Hitchin components for fundamental groups of 2-orbifolds, McGill U., Birks Building, Room 203
Jose Seade (Univ. Nac. Aut. Mexico - Mexico), Holomorphic Foliations and Singularities of Mappings and Spaces, On the Chern classes of singular varieties, McGill U., Burnside Hall, Room 1214
Hans Boden (McMaster University, Canada), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, Concordance invariants of virtual knots, McGill U., Trottier Building (Engineering), Room 1100
Isaac Goldbring (University of California, Irvine), Interactions Between Model Theory and Analysis and Topology, Boundary amenability of groups via ultrapowers, McGill U., Trottier Building (Engineering), Room 2110
Alex Bullivant (University of Leeds, UK), Mathematics of Quantum Phases of Matter and Quantum Information, Higher Symmetry Topological Phases and Loop Braid Invariants, McGill U., Bronfman Building, Room 46
Magdalena Czubak (University of Colorado at Boulder), Nonlinear Dispersive Equations, The exterior domain problem on the hyperbolic plane, McGill U., Burnside Hall, Room 1B23
Robin Neumayer (The University of Texas at Austin), Nonlocal Variational Problems, Higher regularity of the free boundary for the fractional obstacle problem, McGill U., Burnside Hall, Room 1B36
María Soledad Aronna (Escola de Matemática Aplicada, Fundação Getúlio Vargas, Rio de Janeiro, Brazil), Optimization and Control, Second order analysis of bilinear optimal control, McGill U., Bronfman Building, Room 151
Raul Quiroga-Barranco (CIMAT, Mexico), Operator Theory on Function Spaces, Toeplitz operators, special symbols and moment maps, McGill U., Arts Building, Room W-215
Mykhaylo Shkolnikov (Princeton University), Probability Theory, Edge of spiked beta ensembles, McGill U., Arts Building, Room W-20
Yevgeniy Liokumovich (MIT), Quantitative Geometry and Topology, Quantitative aspects of Min-Max Theory, McGill U., McConnell Engineering Building, Room 11
Jon Kujawa (University of Oklahoma), Symmetry in Algebra, Topology, and Physics, Webs of Type Q, McGill U., Birks Building, Room 205
Karoly Bezdek (University of Calgary), Soft Packings, Nested Clusters, and Condensed Matter, On totally separable packings of soft balls, McGill U., Bronfman Building, Room 179
Lidia Mrad (University of Arizona - USA), Singularities and Phase Transitions in Condensed Matter, Dynamic Analysis of Chevron Structures in Smectics, McGill U., Burnside Hall, Room 1B39
Leonardo Mihalcea (Virginia Tech, USA), Symmetries of Symplectic Manifolds and Related Topics, An affine quantum cohomology ring, McGill U., McConnell Engineering Building, Room 13
David Thomson (Carleton University (Ottawa, Canada)), Theory and Applications of Finite Fields, Doubly-periodic Costas arrays, McGill U., Trottier Building (Engineering), Room 2120
14:45 - 15:15 Ilya Shapiro (University of Windsor), Stringy Geometry, Some invariance properties of cyclic cohomology with coefficients, McGill U., McConnell Engineering Building, Room 12
14:45 - 15:30 Julio Rossi (Universidad de Buenos Aires), Nonlinear Partial Differential Equations, Maximal operators for the $p$-Laplacian family, McGill U., Burnside Hall, Room 1B24
14:45 - 15:45 Pedro Luis del Angel (CIMAT), Calabi-Yau Manifolds and Calabi-Yau Algebra, Variations of Hodge structures associated to some equisingular families of Calabi-Yau varieties, McGill U., McConnell Engineering Building, 204
Ben Antieau (University of Illinois at Chicago, USA), Recent Trends in Algebraic Cycles, Algebraic K-Theory and Motives, Negative and homotopy $K$-theory of ring spectra and extensions of the theorem of the heart, McGill U., McConnell Engineering Building, Room 304
15:00 - 15:45 Florence Gillibert (Pontificia Universidad Católica de Valparaíso), Galois Representations and Automorphic Forms, Abelian surfaces with quaternionic multiplication, and rational points on Atkin-Lehner quotients of Shimura curves, McGill U., Rutherford Physics Building, Room 115
15:15 - 15:35 Sun Kim (University of Illinois, USA), Advances in Algebraic and Analytic Number Theory, Sums of squares and Bessel functions, McGill U., Rutherford Physics Building, Room 114
Alejandro Vélez-Santiago (University of Puerto Rico), Advances in Analysis, PDE's and Related Applications, A quasi-linear Neumann problem of Ambrosetti-Prodi type in non-smooth domains, McGill U., Burnside Hall, Room 920
Jeremiah Bartz (University of North Dakota), Advances in Arrangement Theory, Induced and Complete Multinets, Centre Mont Royal, Room Cartier I & II
Myrto Mavraki (University of British Columbia), Arithmetic Dynamics, Quasi-adelic measures, equidistribution and preperiodic points for families of rational maps, McGill U., Rutherford Physics Building, Room 118
Cristina Turner (Universidad Nacional de Córdoba, Argentina), Applied Math and Computational Science across the Americas, Adjoint method for a tumour invasion PDE-constrained optimization problem using FEM, McGill U., Bronfman Building, Room 1
Ian Hambleton (McMaster University), Cohomology of Groups, Group cohomology with group ring coefficients, McGill U., Trottier Building (Engineering), Room 70
Tim Hoheisel (McGill University), Computational Inverse Problems: From Multiscale Modeling to Uncertainty Quantification, A New Class of Matrix Support Functionals with Applications, McGill U., Bronfman Building, Room 45
Yuan-Jen Chiang (University of Mary Washington), Contributed Papers, Leaf-wise Harmonic Maps of Manifolds with 2-dimensional Foliations, McGill U., Burnside Hall, Room 1B45
Marcos Kiwi (Universidad de Chile), Current Trends in Combinatorics, Bijective Proof of Kasteleyn's Toroidal Perfect Matching Cancellation Theorem, Centre Mont Royal, Salon International I & II
Nicholas Faulkner (University of Ontario Institute of Technology), Finite and Infinite Dimensional Hamiltonian Systems, Equivariant KAM, McGill U., Burnside Hall, Room 1205
Alexey Kuznetsov (York University), Free Probability and its Applications, Free stable distributions, McGill U., Arts Building, Room 260
Evgeny Plotkin (Bar-Ilan University, Israel), Groups and Algebras, Word equations and word equations with constants, McGill U., Trottier Building (Engineering), Room 60
Frederico Xavier (Texas Christian University, USA), Geometry of Differential Equations, Real and Complex, On the inversion of real polynomial maps, McGill U., Arts Building, Room W-120
Jingyin Huang (McGill University), Geometric Group Theory, Virtual specialness without hyperbolicity, McGill U., Trottier Building (Engineering), Room 100
Reimundo Heluani (IMPA, Brazil), Groups in Geometry and Topology, On T-duality of certain nilmanifolds, McGill U., Birks Building, Room 111
Laura Schaposnik (University of Illinois at Chicago), Gauge Theory and Special Geometry, On some singular fibres of the Hitchin fibration, McGill U., Birks Building, Room 203
Jorge Vitorio Pereira (IMPA - Brazil), Holomorphic Foliations and Singularities of Mappings and Spaces, Effective algebraic integration in bounded genus, McGill U., Burnside Hall, Room 1214
Matthieu Calvez (Universidad de Santiago de Chile - Universidad de la Frontera), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, Acylindrical hyperbolicity of Artin groups of spherical type, McGill U., Trottier Building (Engineering), Room 1100
Xingshan Cui (Stanford University), Mathematics of Quantum Phases of Matter and Quantum Information, State Sum Invariants of Three Manifolds from Spherical Multi-fusion Categories, McGill U., Bronfman Building, Room 46
Dana Mendelson (University of Chicago), Nonlinear Dispersive Equations, An infinite sequence of conserved quantities for the cubic Gross-Pitaevskii hierarchy on R, McGill U., Burnside Hall, Room 1B23
Xin Yang Lu (McGill University), Nonlocal Variational Problems, Centroidal Voronoi tessellations and Gersho's conjecture in 3D, McGill U., Burnside Hall, Room 1B36
Francisco Silva (Xlim, Université de Limoges, Limoges, France), Optimization and Control, On the variational formulation of some stationary second order MFGs, McGill U., Bronfman Building, Room 151
Yunus Zeytuncu (University of Michigan-Dearborn), Operator Theory on Function Spaces, Compactness of Hankel and Toeplitz operators on domains in $\mathbb{C}^n$, McGill U., Arts Building, Room W-215
Li-Cheng Tsai (Columbia University), Probability Theory, The Speed-$N^2$ Large Deviations of the TASEP, McGill U., Arts Building, Room W-20
Greg Chambers (University of Chicago), Quantitative Geometry and Topology, Monotone homotopies and sweepouts, McGill U., McConnell Engineering Building, Room 11
David Rose (UNC), Symmetry in Algebra, Topology, and Physics, Traces and link homology, McGill U., Birks Building, Room 205
Rémy Rodiac (Pontificia Universidad Católica de Chile - Chile), Singularities and Phase Transitions in Condensed Matter, Regularity of limiting vorticities of the Ginzburg-Landau equations, McGill U., Burnside Hall, Room 1B39
Jonathan Weitsman (Northeastern University, USA), Symmetries of Symplectic Manifolds and Related Topics, On Geometric Quantization of (some) Poisson Manifolds, McGill U., McConnell Engineering Building, Room 13
Lisa Bromberg (United States Military Academy, West Point), Theory and Applications of Finite Fields, Navigating in the Cayley graph of $\mathrm{SL}(2,\mathbb{F}_p)$ and applications to hashing, McGill U., Trottier Building (Engineering), Room 2120
15:15 - 15:45 Xiang Tang (Washington University, St Louis, USA), Stringy Geometry, McGill U., McConnell Engineering Building, Room 12
16:15 - 16:35 Amanda Tucker (University of Rochester, USA), Advances in Algebraic and Analytic Number Theory, Statistics of genus numbers of cubic fields, McGill U., Rutherford Physics Building, Room 114
Marcia Cristina A. B. Federson (Universidade de São Paulo, Brazil), Advances in Analysis, PDE's and Related Applications, The Generalized Feynman Integral and Applications, McGill U., Burnside Hall, Room 920
Emanuele Delucchi (University of Fribourg), Advances in Arrangement Theory, Fundamental polytopes of metric trees via hyperplane arrangements, Centre Mont Royal, Room Cartier I & II
Nicole Looper (Northwestern University), Arithmetic Dynamics, Arboreal Galois representations with large image, McGill U., Rutherford Physics Building, Room 118
Paula Vasquez (University of South Carolina), Applied Math and Computational Science across the Americas, Dynamical modeling of the yeast genome, McGill U., Bronfman Building, Room 1
Christopher Bendel (University of Wisconsin-Stout), Cohomology of Groups, Good filtrations of tensor products, McGill U., Trottier Building (Engineering), Room 70
Stefan Veldsman (Nelson Mandela University), Contributed Papers, Generalized complex numbers over near-fields, McGill U., Burnside Hall, Room 1B45
Jacob Fox (MIT), Current Trends in Combinatorics, Regularity methods in discrete geometry, Centre Mont Royal, Salon International I & II
Isaac Pérez Castillo (Universidad Nacional Autónoma de México), Free Probability and its Applications, Large deviation function for the number of eigenvalues of sparse random graphs inside an interval, McGill U., Arts Building, Room 260
Alexei Myasnikov (CUNY, USA), Groups and Algebras, McGill U., Trottier Building (Engineering), Room 60
Ronaldo Alves Garcia (Universidade Federal de Goiás, Brazil), Geometry of Differential Equations, Real and Complex, Darboux curves on surfaces, McGill U., Arts Building, Room W-120
Rita Jimenez-Rolland (Universidad Nacional Autónoma de México), Geometric Group Theory, Powers of the Euler class for the pure mapping class group, McGill U., Trottier Building (Engineering), Room 100
Pedram Hekmati (IMPA), Gauge Theory and Special Geometry, E-polynomials of singular character varieties, McGill U., Birks Building, Room 203
Maria Aparecida Ruas (Univ. São Paulo - Brazil), Holomorphic Foliations and Singularities of Mappings and Spaces, Lipschitz Normal Embeddings in the Space of Matrices, McGill U., Burnside Hall, Room 1214
Dale Rolfsen (University of British Columbia), Interactions Between Geometric Group Theory, Low-Dimensional Topology and Geometry, and Dynamics, Ordering fundamental groups of small hyperbolic 3-manifolds, McGill U., Trottier Building (Engineering), Room 1100
Qing Zhang (Texas A&M University), Mathematics of Quantum Phases of Matter and Quantum Information, Congruence Subgroups and Super-Modular Categories, McGill U., Bronfman Building, Room 46
Tadele Mengesha (The University of Tennessee, Knoxville), Nonlocal Variational Problems, Calderon-Zygmund theory for the spectral fractional elliptic equations, McGill U., Burnside Hall, Room 1B36
Luis Briceño Arias (Universidad Técnica Federico Santa María, Santiago, Chile), Optimization and Control, Forward-Backward-Half forward splitting for solving monotone inclusions, McGill U., Bronfman Building, Room 151
Maribel Loaiza (CINVESTAV, Mexico), Operator Theory on Function Spaces, On Toeplitz operators on the polyharmonic Bergman space, McGill U., Arts Building, Room W-215
Jessica Lin (McGill University), Probability Theory, Optimal Error Estimates in the Stochastic Homogenization for Elliptic Equations in Nondivergence Form, McGill U., Arts Building, Room W-20
Tullia Dymarz (University of Wisconsin, Madison), Quantitative Geometry and Topology, BiLipschitz equivalence of coarsely dense separated nets, McGill U., McConnell Engineering Building, Room 11
Joel Kamnitzer (University of Toronto), Symmetry in Algebra, Topology, and Physics, Crystals and monodromy of Bethe vectors, McGill U., Birks Building, Room 205
Xavier Lamy (Institut de Mathématiques de Toulouse), Singularities and Phase Transitions in Condensed Matter, Lifting line fields to director fields with bounded variation, McGill U., Burnside Hall, Room 1B39
Steven Rayan (University of Saskatchewan, Canada), Symmetries of Symplectic Manifolds and Related Topics, The quiver at the bottom of the twisted nilpotent cone on $\mathbb{CP}^1$, McGill U., McConnell Engineering Building, Room 13
Lucas Reis (Universidade Federal de Minas Gerais, Brazil), Theory and Applications of Finite Fields, On the explicit factorization of $f(x^n)$ over finite fields, McGill U., Trottier Building (Engineering), Room 2120
16:15 - 17:00 Daniel Kohen (Universidad de Buenos Aires), Galois Representations and Automorphic Forms, Heegner point constructions, McGill U., Rutherford Physics Building, Room 115
Boyan Sirakov (Pontificia Universidade Catolica do Rio de Janeiro), Nonlinear Partial Differential Equations, Exact multiplicity results for a nonlinear elliptic problem, and geometric structure of the set of solutions, McGill U., Burnside Hall, Room 1B24
16:15 - 17:05 Rui Loja Fernandes (University of Illinois at Urbana-Champaign, USA), Groups in Geometry and Topology, Symplectic Gerbes, McGill U., Birks Building, Room 111
16:15 - 17:15 Charles Doran (University of Alberta), Calabi-Yau Manifolds and Calabi-Yau Algebra, Calabi-Yau Threefolds Fibered by High Rank Lattice Polarized K3 Surfaces, McGill U., McConnell Engineering Building, 204
Inna Zakharevich (Cornell University, USA), Recent Trends in Algebraic Cycles, Algebraic K-Theory and Motives, A derived zeta-function, McGill U., McConnell Engineering Building, Room 304
Ernesto Lupercio (CINVESTAV), Stringy Geometry, McGill U., McConnell Engineering Building, Room 12
16:45 - 17:05 Jhon Jairo Bravo Grijalba (University of Cauca, Colombia), Advances in Algebraic and Analytic Number Theory, Linear forms in k-Fibonacci sequences, McGill U., Rutherford Physics Building, Room 114
Timothy Myers (Howard University, USA), Advances in Analysis, PDE's and Related Applications, Lebesgue Integration on a Banach Space with a Schauder Basis, McGill U., Burnside Hall, Room 920
Graham Denham (University of Western Ontario), Advances in Arrangement Theory, Local systems on complements of smooth hypersurface arrangements, Centre Mont Royal, Room Cartier I & II
Robert Benedetto (Amherst College), Arithmetic Dynamics, Computing arboreal Galois groups of some cubic polynomials, McGill U., Rutherford Physics Building, Room 118
Miguel Dumett (San Diego State University, USA), Applied Math and Computational Science across the Americas, L1 Norm Regularization of the Kirchhoff Standard Migrated Image, McGill U., Bronfman Building, Room 1
Jose Cantarero (CONACYT-CIMAT Mérida), Cohomology of Groups, Benson-Carlson duality for p-local finite groups, McGill U., Trottier Building (Engineering), Room 70
Ryad Ghanam (Virginia Commonwealth University in Qatar), Contributed Papers, Non-Solvable subalgebras of gl(4,R), McGill U., Burnside Hall, Room 1B45
Jacob Mostovoy (Instituto Politécnico Nacional, Mexico), Groups and Algebras, Multiplicative graphs and related algebras, McGill U., Trottier Building (Engineering), Room 60
Daniel Offin (Queen's University, Canada), Geometry of Differential Equations, Real and Complex, Multiple periodic solutions in classical Hamiltonian systems, McGill U., Arts Building, Room W-120
Bena Tshishiku (Harvard University), Geometric Group Theory, Obstructions to Nielsen realization, McGill U., Trottier Building (Engineering), Room 100
Ákos Nagy (University of Waterloo), Gauge Theory and Special Geometry, The Berry Connection of the Ginzburg–Landau Vortices, McGill U., Birks Building, Room 203
Dmitri Nikshych (University of New Hampshire), Mathematics of Quantum Phases of Matter and Quantum Information, Classifying braidings on fusion categories, McGill U., Bronfman Building, Room 46
Nestor Guillen (University of Massachusetts at Amherst), Nonlocal Variational Problems, Min-max formulas for nonlocal elliptic operators and applications, McGill U., Burnside Hall, Room 1B36
Armando Sanchez-Nungaray (Universidad Veracruzana, Mexico), Operator Theory on Function Spaces, Commutative algebras of Toeplitz operators on the Siegel domain, McGill U., Arts Building, Room W-215
Leonardo Rolla (Universidad de Buenos Aires), Probability Theory, Absorbing-State Phase Transitions 3.0, McGill U., Arts Building, Room W-20
Walter Neumann (Columbia University), Quantitative Geometry and Topology, Some applications of coarse metrics, McGill U., McConnell Engineering Building, Room 11
Vyjayanthi Chari (UC Riverside), Symmetry in Algebra, Topology, and Physics, McGill U., Birks Building, Room 205
Tiziana Giorgi (New Mexico State University - USA), Singularities and Phase Transitions in Condensed Matter, Switching mechanism in the $B_{\text{1RevTilted}}$ phase of bent-core liquid crystals, McGill U., Burnside Hall, Room 1B39
Elisheva Adina Gamse (University of Toronto, Canada), Symmetries of Symplectic Manifolds and Related Topics, Vanishing theorems in the cohomology ring of the moduli space of parabolic vector bundles, McGill U., McConnell Engineering Building, Room 13
Ivelisse Rubio (University of Puerto Rico (Río Piedras, Puerto Rico)), Theory and Applications of Finite Fields, A refinement of a theorem of Carlitz, McGill U., Trottier Building (Engineering), Room 2120
17:30 - 18:30 Juan Davila (University of Chile), Invited Speakers, Finite time singularities for the harmonic map flow in 2D with values into the sphere, Symposia Auditorium, CMR
Shmuel Weinberger (University of Chicago, USA), Invited Speakers, The (un)reasonable (in)effectiveness of algebraic topology, International I (E) & II (F), CMR
Pablo Shmerkin (Torcuato Di Tella University and CONICET), MCA Prize Lectures, Expansions in bases 2 and 3: old conjectures and new results, Cartier, Centre Mont-Royal
20:00 - 21:00 Étienne Ghys (ENS Lyon), Public Lectures, Traffic jams in cities: a mathematical problem?, Auditorium, CMR
Public Lecture - Étienne Ghys, Auditorium, Centre Mont Royal
21:15 - 22:30 Student Fireworks Outing, Entrance, CMR
23:00 - Student Social, Saint-Houblon
09:00 - 10:00 Luz de Teresa (IM UNAM, Mexico), Invited Speakers, Give me absolute control over every..., Cartier I & II, CMR
Robert Morris (IMPA, Brazil), Invited Speakers, Random graph processes, International I (E) & II (F), CMR
Robert Morris (Instituto de Matemática Pura e Aplicada, Brazil), MCA Prize Lectures, Random graph processes, International, Centre Mont-Royal
Umberto Hryniewicz (Universidade Federal do Rio de Janeiro, Brazil), MCA Prize Lectures, Symplectic dynamics: methods and results, Symposia Auditorium, Centre Mont-Royal
10:30 - 11:30 Carlos Gustavo Moreira (IMPA, Brazil), Invited Speakers, Geometric properties of the Markov and Lagrange spectra and dynamical generalizations, Cartier I & II, CMR
Krzysztof Burdzy (University of Washington, USA), Invited Speakers, On number of collisions of billiard balls, International I (E) & II (F), CMR
Matt Kerr (Washington University in St. Louis, USA), Invited Speakers, Normal functions in geometry, physics, arithmetic, Symposia Auditorium, CMR
11:45 - 12:05 Elizabeth Gross (San Jose State University, USA), Applied and Computational Algebra and Geometry, Joining and Decomposing Reaction Networks, McGill U., Birks Building, Room 205
Piper Harron (University of Hawaii at Manoa), Arithmetic Geometry and Related Topics, Equidistribution of Shapes of Number Fields of degree 3, 4, and 5, Rutherford Physics Building, Room 114
Chris Francisco (Oklahoma State University), Combinatorial Commutative Algebra, Borel ideals with two Borel generators and Koszulness, McGill U., Birks Building, Room 111
Fabienne Chouraqui (University of Haifa, Israel), Computations in Groups and Applications, The Garside groups and some of their properties, McGill U., Bronfman Building, Room 45
Michel Delfour (University of Montreal), Control of Partial Differential Equations, Shape and Topological Derivatives via One Sided Differentiation of Lagrangian Functionals, McGill U., Burnside Hall, Room 1205
Briceyda Delgado (Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional), CMS-Studc Student Research Session, Hilbert Transform associated to the main Vekua equation, Rutherford Physics Building, Room 118, McGill
Leo Rebholz (Clemson University, USA), Equations of Fluid Mechanics: Numerics, On conservation laws of Navier-Stokes Galerkin discretizations, McGill U., McConnell Engineering Building, Room 13
Cecilia Mondaini (Texas A&M University), Equations of Fluid Mechanics: Analysis, Analysis of a feedback-control data assimilation algorithm, McGill U., McConnell Engineering Building, Room 11
Eyal Lubetzky (NYU), Extremal and Probabilistic Combinatorics, Comparing mixing times on sparse random graphs, McGill U., Arts Building, W-120
Sergio Almaraz (Universidade Federal Fluminense, Brazil), Geometric Analysis, On Yamabe type problems on manifolds with boundary, McGill U., Burnside Hall, Room 1B23
Marcos Jardim (UNICAMP), Geometry and Physics of Higgs Bundles, Branes on moduli spaces of sheaves, McGill U., Burnside Hall, Room 1B24
Jean-Pierre Gabardo (McMaster University), Harmonic Analysis and Inverse Problems, Weighted Beurling densities and sampling theory, McGill U., Trottier Bldg., Rm 2110
Victor Ostrik (University of Oregon, USA), Hopf Algebras and Tensor Categories, Modular extensions group of a symmetric tensor category, McGill U., Arts Building, Room W-215
Alessandro Portaluri (University of Torino, Italy), Hamiltonian Systems and Celestial Mechanics, Index theory, Maslov index, Spectral flow, Colliding trajectories, Parabolic motions, Homothetic orbits, McGill U., Burnside Hall, Room 1B36
Anna Mazzucato (Pennsylvania State University), Incompressible Fluid Dynamics, The vanishing viscosity limit for an Oseen-type equation, McGill U., Burnside Hall, Room 920
Jaqueline G. Mesquita (University of Brasília, Brazil), Models and Methods in Evolutionary Differential Equations on Mixed Scales, Massera's Theorem for dynamic equations on time scales and applications, McGill U., McConnell Engineering Building, Room 204
Xiang Tang (Washington University, St Louis, USA), Noncommutative Geometry and Quantization, Roe C*-algebra for groupoids and generalized Lichnerowicz Vanishing theorem, McGill U., Bronfman Building, Room 46
Catherine Sulem (University of Toronto), Nonlinear and Stochastic Partial Differential Equations, Surface water waves over bathymetry, McGill U., Burnside Hall, Room 1214
Henryk Iwaniec (Rutgers University), Number Theory & Analysis, Critical zeros of L-functions, McGill U., Rutherford Physics Building, Room 115
Jianhong Wu (York University, Canada), Recent Advance in Disease Dynamics Analysis, Epidemic models with multiple delays: impact of diapause, McGill U., McConnell Engineering Building, Room 304
Kaiming Zhao (Wilfrid Laurier University), Representations of Lie Algebras, Simple Witt modules that are $U(\mathfrak{h})$-free modules of finite rank, McGill U., Arts Building, Room 260
Shiping Liu (Université de Sherbrooke), Representation Theory of Algebras, Auslander-Reiten components with bounded short cycles, McGill U., Arts Building, Room W-20
Ilya Kossovskiy (Masaryk University, Czech Republic), Several Complex Variables, Classification of 3-dimensional real-analytic CR-manifolds, McGill U., Trottier Building, Rm 2120
Omri Sarig (Weizmann Institute of Science, Israel), Symbolic Dynamics, Symbolic dynamics for surface diffeomorphisms: updates, McGill U., Trottier Building (Engineering), Room 1100
Edward Tymchatyn (University of Saskatchewan), Shape, Homotopy, and Attractors, Cell Structures, McGill U., Burnside Hall, Room 1B39
Gilles Gonçalves de Castro (Universidade Federal de Santa Catarina), Topological Dynamics and Operator Algebras, A groupoid approach to the C*-algebras of labeled graphs, McGill U., Bronfman Building, Room 151
11:45 - 12:10 Sihem Mesnager (University of Paris 8, Paris 13 and Telecom ParisTech), Mathematical Applications in Cryptography, Recent advances on bent functions for symmetric cryptography, Centre Mont Royal, Room Cartier I & II
11:45 - 12:25 Charles Doran (University of Alberta), Motives and Periods, Hodge Numbers from Picard-Fuchs Equations, McGill U., Birks Building, Room 203
Alexandru Nica (University of Waterloo, Canada), Von Neumann Algebras and their Applications, A theorem of the central limit type, in the framework of the infinite symmetric group, McGill U., Bronfman Building, Room 179
11:45 - 12:35 Ketty A. de Rezende (Universidade Estadual de Campinas, Brazil), Morse, Conley, and Forman Approaches to Smooth and Discrete Dynamics, Homological Tools for Understanding Dynamical Systems, McGill U., Trottier Building (Engineering), Room 70
Israel Michael Sigal (University of Toronto), Mathematical Physics, On the Bogolubov-de Gennes Equations of Superconductivity, McGill U., McConnell Engineering Building, Room 12
11:45 - 12:45 Louis Billera (Cornell University), Geometry and Combinatorics of Cell Complexes, On the real linear algebra of binary vectors, McGill U., Burnside Hall, Room 1B45
Alex Kontorovich (Rutgers University, USA), Spectrum and Dynamics, The SuperPAC: Superintegral Packing Arithmeticity Conjecture, McGill U., Trottier Building (Engineering), Room 60
12:15 - 12:35 Gregory G. Smith (Queen's University, Canada), Applied and Computational Algebra and Geometry, Resolutions of Toric Vector Bundles, McGill U., Birks Building, Room 205
Robert Harron (University of Hawaii at Manoa), Arithmetic Geometry and Related Topics, Equidistribution of shapes of cubic fields of fixed quadratic resolvent, Rutherford Physics Building, Room 114
Sarah Mayes-Tang (Quest University), Combinatorial Commutative Algebra, Betti tables of graded systems of ideals, McGill U., Birks Building, Room 111
Andrés Navas Flores (Universidad de Santiago de Chile, Chile), Computations in Groups and Applications, Some combinatorial problems associated to orderability properties of groups, McGill U., Bronfman Building, Room 45
Assia Benabdallah (Aix-Marseille Université), Control of Partial Differential Equations, New phenomena for the null controllability of parabolic systems, McGill U., Burnside Hall, Room 1205
Siddhi Pathak (Queen's University), CMS-Studc Student Research Session, On a Conjecture of Livingston, Rutherford Physics Building, Room 118, McGill
Michael Jolly (Indiana University), Equations of Fluid Mechanics: Analysis, A determining form for the surface quasi-geostrophic equation, McGill U., McConnell Engineering Building, Room 11
Lutz Warnke (Georgia Tech), Extremal and Probabilistic Combinatorics, Packing nearly optimal $R(3,t)$ graphs, McGill U., Arts Building, W-120
Lara Anderson (Virginia Tech), Geometry and Physics of Higgs Bundles, Elliptically Fibered CY Geometries and Emergent Hitchin Systems, McGill U., Burnside Hall, Room 1B24
Pilar Herreros (Pontificia Universidad Católica de Chile), Harmonic Analysis and Inverse Problems, Boundary Rigidity with nonpositive curvature, McGill U., Trottier Bldg., Rm 2110
István Heckenberger (University of Marburg, Germany), Hopf Algebras and Tensor Categories, Some examples of PBW deformations, McGill U., Arts Building, Room W-215
Stefanella Boatto (Universidade Federal do Rio de Janeiro, Brazil), Hamiltonian Systems and Celestial Mechanics, N-body and N-vortex dynamics on surfaces of revolution: curvature and topological contributions, McGill U., Burnside Hall, Room 1B36
Milton C. Lopes Filho (Universidade Federal do Rio de Janeiro), Incompressible Fluid Dynamics, Newtonian limit of the Euler-alpha model in domains with boundary, McGill U., Burnside Hall, Room 920
Rodrigo Ponce (Talca University, Chile), Models and Methods in Evolutionary Differential Equations on Mixed Scales, Asymptotic behavior of solutions to a Volterra equation, McGill U., McConnell Engineering Building, Room 204
Alejandro Cabrera (Federal University of Rio de Janeiro, Rio de Janeiro, Brazil), Noncommutative Geometry and Quantization, A geometric approach to some equivalent C*-algebras, McGill U., Bronfman Building, Room 46
Susan Friedlander (University of Southern California), Nonlinear and Stochastic Partial Differential Equations, Asymptotics for magnetostrophic turbulence in the Earth's fluid core, McGill U., Burnside Hall, Room 1214
John Friedlander (University of Toronto), Number Theory & Analysis, On Dirichlet $L$-functions, McGill U., Rutherford Physics Building, Room 115
Katia Vogt Geisse (Universidad Adolfo Ibáñez, Chile), Recent Advance in Disease Dynamics Analysis, Structured models and their reproduction numbers: Effect of the way of transmission, control measures and social conditions, McGill U., McConnell Engineering Building, Room 304
Reimundo Heluani (IMPA, Brazil), Representations of Lie Algebras, Cohomology of vertex algebras, McGill U., Arts Building, Room 260
Hernán Giraldo (Universidad de Antioquia, Colombia), Representation Theory of Algebras, Shapes of Auslander-Reiten Triangles, McGill U., Arts Building, Room W-20
Nordine Mir (Texas A&M University Qatar), Several Complex Variables, Convergence of formal CR transformations, McGill U., Trottier Building, Rm 2120
Dan Thompson (Ohio State University, USA), Symbolic Dynamics, Symbolic dynamics and specification for geodesic flow on CAT(-1) spaces, McGill U., Trottier Building (Engineering), Room 1100
Vesko Valov (Nipissing University), Shape, Homotopy, and Attractors, Homogeneous finite-dimensional metric compacta, McGill U., Burnside Hall, Room 1B39
Michael Schraudner (Universidad de Chile), Topological Dynamics and Operator Algebras, Automorphism groups of subshifts through group extensions, McGill U., Bronfman Building, Room 151
12:15 - 12:40 Claude Crépeau (McGill University), Mathematical Applications in Cryptography, Relativistic Commitments and Zero-Knowledge Proofs, Centre Mont Royal, Room Cartier I & II
12:45 - 14:15 Gender and Mathematics Panel, Atrium, Trottier Building, McGill University
14:15 - 14:35 Rafael Villarreal (Cinvestav, Mexico), Applied and Computational Algebra and Geometry, Minimum distance functions of complete intersections, McGill U., Birks Building, Room 205
Guillermo Mantilla-Soler (Universidad de los Andes), Arithmetic Geometry and Related Topics, A characterization of arithmetic equivalence via Galois representations, Rutherford Physics Building, Room 114
Augustine O'Keefe (Connecticut College), Combinatorial Commutative Algebra, Bounds on the regularity of toric ideals of graphs, McGill U., Birks Building, Room 111
Olga Kharlampovich (Hunter College, USA), Computations in Groups and Applications, What does a group algebra of a free group "know" about the group?, McGill U., Bronfman Building, Room 45
Ademir Pazoto (Universidade Federal Do Rio De Janeiro), Control of Partial Differential Equations, Control and Stabilization for a coupled system of linear Benjamin-Bona-Mahony type equations, McGill U., Burnside Hall, Room 1205
Jean Lagacé (Université de Montréal), CMS-Studc Student Research Session, Lattice point counting in spectral theory, Rutherford Physics Building, Room 118, McGill
Anna Mazzucato (Pennsylvania State University), Equations of Fluid Mechanics: Analysis, On the two-dimensional Kuramoto-Sivashinsky equation, McGill U., McConnell Engineering Building, Room 11
Paul Smith (Tel Aviv University), Extremal and Probabilistic Combinatorics, Towards universality in bootstrap percolation, McGill U., Arts Building, W-120
Dan Ketover (Princeton University, USA), Geometric Analysis, Free boundary minimal surfaces of unbounded genus, McGill U., Burnside Hall, Room 1B23
Anton Dochtermann (Texas State University), Geometry and Combinatorics of Cell Complexes, Co-parking functions and h-vectors of graphical matroids, McGill U., Burnside Hall, Room 1B45
Alessia Mandini (PUC Rio de Janeiro), Geometry and Physics of Higgs Bundles, Hyperpolygon spaces and parabolic Higgs bundles, McGill U., Burnside Hall, Room 1B24
Ben Adcock (Simon Fraser University), Harmonic Analysis and Inverse Problems, Robustness to unknown error in sparse regularization, McGill U., Trottier Bldg., Rm 2110
Vladislav Kharchenko (Universidad Nacional Autónoma de México, Mexico), Hopf Algebras and Tensor Categories, Free braided nonassociative Hopf algebras and Sabinin $\tau$-algebras, McGill U., Arts Building, Room W-215
Ernesto Perez-Chavela (ITAM, México), Hamiltonian Systems and Celestial Mechanics, New families of relative equilibria in the curved N-body problem, McGill U., Burnside Hall, Room 1B36
Dana Mendelson (University of Chicago), Incompressible Fluid Dynamics, An infinite sequence of conserved quantities for the cubic Gross-Pitaevskii hierarchy on R, McGill U., Burnside Hall, Room 920
Dahisy Lima (Northwestern, USA), Morse, Conley, and Forman Approaches to Smooth and Discrete Dynamics, Extracting Dynamical Information from a Morse-Novikov Spectral Sequence, McGill U., Trottier Building (Engineering), Room 70
Xiaoying Han (Auburn University, USA), Models and Methods in Evolutionary Differential Equations on Mixed Scales, Dynamical Structures in Stochastic Chemical Reaction Systems, McGill U., McConnell Engineering Building, Room 204
Joseph Varilly (University of Costa Rica, San Jose, Costa Rica), Noncommutative Geometry and Quantization, How does chirality of the Standard Model arise?, McGill U., Bronfman Building, Room 46
Helena Nussenzveig Lopes (Universidade Federal do Rio de Janeiro), Nonlinear and Stochastic Partial Differential Equations, Critical Regularity for Energy Conservation in 2D Inviscid Fluid Dynamics, McGill U., Burnside Hall, Room 1214
Kannan Soundararajan (Stanford University), Number Theory & Analysis, Value distribution of L-functions., McGill U., Rutherford Physics Building, Room 115
Zhisheng Shuai (University of Central Florida, USA), Recent Advance in Disease Dynamics Analysis, Coupled Infectious Disease Models via Asymmetric Movements, McGill U., McConnell Engineering Building, Room 304
Alistair Savage (University of Ottawa), Representations of Lie Algebras, An equivalence between truncations of categorified quantum groups and Heisenberg categories, McGill U., Arts Building, Room 260
Charles Paquette (University of Connecticut, USA), Representation Theory of Algebras, Group actions on cluster categories and cluster algebras, McGill U., Arts Building, Room W-20
Xianghong Gong (University of Wisconsin-Madison, USA), Several Complex Variables, Hölder estimates for homotopy operators on strictly pseudoconvex domains with $C^2$ boundary, McGill U., Trottier Building, Rm 2120
Godofredo Iommi (Pontificia Universidad Católica de Chile, Chile), Symbolic Dynamics, Thermodynamic properties of the Jacobi Perron algorithm, McGill U., Trottier Building (Engineering), Room 1100
Ana Rechtman (Université de Strasbourg), Shape, Homotopy, and Attractors, Variations of the Kuperberg plug with positive topological entropy, McGill U., Burnside Hall, Room 1B39
Charles Starling (Carleton University), Topological Dynamics and Operator Algebras, Bratteli-Vershik models for partial actions of $\mathbb{Z}$, McGill U., Bronfman Building, Room 151
Emily Redelmeier (ISARA Corporation), Von Neumann Algebras and their Applications, Diagrammatic techniques for real and quaternionic matrices: finite case and asymptotics, McGill U., Bronfman Building, Room 179
14:15 - 14:40 Edgar Martinez Moro (University of Valladolid), Finite Algebraic Combinatorics and Applications, New Avenues for Test Sets in Coding Theory, Centre Mont Royal, Salon International I & II
Benjamin Schlein (University of Zurich), Mathematical Physics, Dynamical and spectral properties of Bose-Einstein condensates, McGill, McConnell Engineering Building, Rm 12
14:15 - 14:55 Pedro Luis del Angel (CIMAT), Motives and Periods, Specialization of cycles and the K-theory elevator, McGill U., Birks Building, Room 203
14:15 - 15:00 Pablo Shmerkin (CONICET and Universidad Torcuato di Tella, Buenos Aires, Argentina), Fractal Geometry and Dynamical Systems, Furstenberg's intersection conjecture and the $L^q$ norm of convolutions, McGill U., Trottier Building (Engineering), Room 100
14:15 - 15:15 Steve Zelditch (Northwestern University, USA), Spectrum and Dynamics, Intersections of nodal sets and curves and geometric control, McGill U., Trottier Building (Engineering), Room 60
14:45 - 15:05 Katie Morrison (University of Northern Colorado, USA), Applied and Computational Algebra and Geometry, Algebraic Signatures of Convex and Non-Convex Neural Codes, McGill U., Birks Building, Room 205
María Chara (Instituto de Matemática Aplicada del Litoral), Arithmetic Geometry and Related Topics, Subtowers of towers of function fields, Rutherford Physics Building, Room 114
Jorge Neves (University of Coimbra, Portugal), Combinatorial Commutative Algebra, Regularity of the vanishing ideal over a parallel composition of paths, McGill U., Birks Building, Room 111
Yago Antolin Pichel (Madrid, Spain), Computations in Groups and Applications, Counting with the falsification by fellow traveller property, McGill U., Bronfman Building, Room 45
Ivonne Rivas (Universidad del Valle), Control of Partial Differential Equations, Stabilization for a type of uncontrollable systems, McGill U., Burnside Hall, Room 1205
Miriam Bocardo Gaspar (Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional), CMS-Studc Student Research Session, Local zeta functions and p-adic string amplitudes, Rutherford Physics Building, Room 118, McGill
John Bowman (University of Alberta, Canada), Equations of Fluid Mechanics: Numerics, On the Global Attractor of 2D Incompressible Turbulence with Random Forcing, McGill U., McConnell Engineering Building, Room 13
Adam Larios (University of Nebraska - Lincoln), Equations of Fluid Mechanics: Analysis, Continuous Data Assimilation: Multiphysics and Nonlinear Feedback, McGill U., McConnell Engineering Building, Room 11
Asaf Ferber (MIT), Extremal and Probabilistic Combinatorics, Spanning universality property of random graphs, McGill U., Arts Building, W-120
Andrés Angel (Universidad de los Andes), Geometry and Combinatorics of Cell Complexes, Evasiveness of graph properties and graphs on 2p vertices., McGill U., Burnside Hall, Room 1B45
Steve Bradlow (UIUC), Geometry and Physics of Higgs Bundles, Fiber products and spectral data for Higgs bundles, McGill U., Burnside Hall, Room 1B24
Qiyu Sun (University of Central Florida), Harmonic Analysis and Inverse Problems, Phaseless sampling and reconstruction of real-valued signals in shift-invariant spaces, McGill U., Trottier Bldg., Rm 2110
Mikhail Kotchetov (Memorial University, Canada), Hopf Algebras and Tensor Categories, Applications of affine group schemes to the study of gradings by abelian groups, McGill U., Arts Building, Room W-215
Jaime Andrade (Universidad del Bío-Bío, Chile), Hamiltonian Systems and Celestial Mechanics, Regularization of the restricted three-body problem on surfaces of constant curvature, McGill U., Burnside Hall, Room 1B36
Tarek Elgindi (Princeton University), Incompressible Fluid Dynamics, Finite-time singularity formation for De Gregorio's model of the 3D vorticity equation, McGill U., Burnside Hall, Room 920
Mariana Silveira (UFABC - São Paulo, Brazil), Morse, Conley, and Forman Approaches to Smooth and Discrete Dynamics, Bifurcation detected by a Spectral Sequence in the Morse-Smale Setting, McGill U., Trottier Building (Engineering), Room 70
Jiayin Jin (Georgia Institute of Technology), Models and Methods in Evolutionary Differential Equations on Mixed Scales, Dynamics near solitons of the supercritical gKDV equations, McGill U., McConnell Engineering Building, Room 204
Severino Toscano do Rego Melo (University of São Paulo, São Paulo, Brazil), Noncommutative Geometry and Quantization, K-theory of pseudodifferential operators with semiperiodic symbols on a cylinder, McGill U., Bronfman Building, Room 46
Vlad Vicol (Princeton University), Nonlinear and Stochastic Partial Differential Equations, Nonuniqueness of weak solutions to the SQG equation, McGill U., Burnside Hall, Room 1214
Emanuel Carneiro (IMPA), Number Theory & Analysis, Bandlimited approximations and estimates for the Riemann zeta-function, McGill U., Rutherford Physics Building, Room 115
Brenda Tapia-Santos (Universidad Veracruzana, Mexico), Recent Advance in Disease Dynamics Analysis, A model for the dynamics of nonsterilizing HIV vaccines, McGill U., McConnell Engineering Building, Room 304
Dimitar Grantcharov (University of Texas, Arlington), Representations of Lie Algebras, Bounded weight modules of the Lie algebra of vector fields on the affine space, McGill U., Arts Building, Room 260
Daniel Labardini (Universidad Nacional Autónoma de México, Mexico), Representation Theory of Algebras, Species with potential arising from surfaces with orbifold points, McGill U., Arts Building, Room W-20
Liz Vivas (Ohio State University, USA), Several Complex Variables, Parabolic skew-products and parametrization, McGill U., Trottier Building, Rm 2120
Zemer Kosloff (Hebrew University of Jerusalem, Israel), Symbolic Dynamics, The Krieger types of Bernoulli and Markov shifts, McGill U., Trottier Building (Engineering), Room 1100
Steve Hurder (University of Illinois at Chicago), Shape, Homotopy, and Attractors, Smooth flows with fractional entropy dimension, McGill U., Burnside Hall, Room 1B39
Marcelo Sobottka (UFSC - Universidade Federal de Santa Catarina), Topological Dynamics and Operator Algebras, The Curtis-Hedlund-Lyndon Theorem for generalized sliding block codes between Ott-Tomforde-Willis shift spaces, McGill U., Bronfman Building, Room 151
Rolando de Santiago (University of Iowa, USA), Von Neumann Algebras and their Applications, Product Rigidity for Non-Prime Group von Neumann Algebras, McGill U., Bronfman Building, Room 179
14:45 - 15:10 Ángeles Vazquez-Castro (Universidad Autónoma de Barcelona, UAB), Finite Algebraic Combinatorics and Applications, Wiretap Coset Coding in the Transform Domain, Centre Mont Royal, Salon International I & II
Francis N. Castro (Universidad de Puerto Rico, Mayagüez), Mathematical Applications in Cryptography, Diophantine Equations with Binomial Coefficients and Perturbations of Symmetric Boolean Functions, Centre Mont Royal, Room Cartier I & II
Alessandro Giuliani (University of Rome), Mathematical Physics, Haldane relation for interacting dimers, McGill, McConnell Engineering Building, Rm 12
15:15 - 15:40 Valentin Suder (UVSQ), Mathematical Applications in Cryptography, Two Notions of Differential Equivalence on Sboxes, Centre Mont Royal, Room Cartier I & II
15:45 - 16:05 Vladimir Itskov (The Pennsylvania State University, USA), Applied and Computational Algebra and Geometry, Detecting non-linear rank via the topology of hyperplane codes, McGill U., Birks Building, Room 205
Antonio Cafure (Universidad Nacional de General Sarmiento), Arithmetic Geometry and Related Topics, Cyclotomic polynomials and linear algebra, Rutherford Physics Building, Room 114
Luis Nunez-Betancourt (University of Virginia), Combinatorial Commutative Algebra, Graph connectivity via homological invariants, McGill U., Birks Building, Room 111
Christophe Hohlweg (University of Quebec, Montreal, Canada), Computations in Groups and Applications, Garside Shadows and Automata in Coxeter groups, McGill U., Bronfman Building, Room 45
Ludovick Gagnon (Jussieu, France), Control of Partial Differential Equations, Sufficient conditions for the boundary controllability of the wave equation with transmission conditions, McGill U., Burnside Hall, Room 1205
Jordan Kostiuk (University of Alberta), CMS-Studc Student Research Session, Geometrization of Supersymmetry Algebras, Rutherford Physics Building, Room 118, McGill
Andy Wan, Equations of Fluid Mechanics: Numerics, Conservative schemes for dynamical systems with application to vortex dynamics, McGill U., McConnell Engineering Building, Room 13
Susan Friedlander (University of Southern California), Equations of Fluid Mechanics: Analysis, Small parameter limits for magnetogeostrophic turbulence., McGill U., McConnell Engineering Building, Room 11
Gweneth McKinley (MIT), Extremal and Probabilistic Combinatorics, Counting H-free graphs for bipartite H, McGill U., Arts Building, W-120
Xin Zhou (MIT, USA), Geometric Analysis, Min-max minimal hypersurfaces with free boundary, McGill U., Burnside Hall, Room 1B23
Art Duval (University of Texas at El Paso), Geometry and Combinatorics of Cell Complexes, A non-partitionable Cohen-Macaulay simplicial complex, McGill U., Burnside Hall, Room 1B45
Andy Neitzke (UT Austin), Geometry and Physics of Higgs Bundles, Abelianization in classical complex Chern-Simons theory, McGill U., Burnside Hall, Room 1B24
Laura De Carli (Florida International University), Harmonic Analysis and Inverse Problems, p-Riesz bases in quasi shift invariant spaces, McGill U., Trottier Bldg., Rm 2110
Sonia Natale (Universidad Nacional de Córdoba, Argentina), Hopf Algebras and Tensor Categories, On the classification of almost square-free modular categories., McGill U., Arts Building, Room W-215
Cristina Stoica (Wilfrid Laurier University, Canada), Hamiltonian Systems and Celestial Mechanics, Remarks on the n-body dynamics on surfaces of revolution, McGill U., Burnside Hall, Room 1B36
Anne Bronzi (Universidade Estadual de Campinas), Incompressible Fluid Dynamics, Abstract framework for the theory of statistical solutions, McGill U., Burnside Hall, Room 920
Hiroshi Kokubu (Kyoto, Japan), Morse, Conley, and Forman Approaches to Smooth and Discrete Dynamics, Global dynamics of systems with steep nonlinearities, McGill U., Trottier Building (Engineering), Room 70
Márcia Federson (University of São Paulo, Brazil), Models and Methods in Evolutionary Differential Equations on Mixed Scales, Generalized ODEs: overview and trends, McGill U., McConnell Engineering Building, Room 204
Sherry Gong (MIT, Cambridge, USA), Noncommutative Geometry and Quantization, Traces on reduced group $C^*$-algebras, McGill U., Bronfman Building, Room 46
Dana Mendelson (University of Chicago), Nonlinear and Stochastic Partial Differential Equations, Probabilistic scattering for the 4D energy-critical defocusing nonlinear wave equation, McGill U., Burnside Hall, Room 1214
Lola Thompson (Oberlin College), Number Theory & Analysis, Bounded gaps between primes and the length spectra of arithmetic hyperbolic $3$-orbifolds, McGill U., Rutherford Physics Building, Room 115
Yanyu Xiao (University of Cincinnati, USA), Recent Advance in Disease Dynamics Analysis, Seasonal impact on vector-borne disease dynamics, McGill U., McConnell Engineering Building, Room 304
Ben Cox (College of Charleston), Representations of Lie Algebras, On the universal central extension of certain Krichever-Novikov algebras., McGill U., Arts Building, Room 260
Khrystyna Serhiyenko (University of California, Berkeley, USA), Representation Theory of Algebras, Mutation of Conway-Coxeter friezes, McGill U., Arts Building, Room W-20
Arturo Fernández Pérez (Universidade Federal de Minas Gerais, Brazil), Several Complex Variables, Residue-type indices, applications to holomorphic foliations and Levi-flat hypersurfaces, McGill U., Trottier Building, Rm 2120
Albert Fisher (Universidade de São Paulo, Brazil), Symbolic Dynamics, Finite and infinite measures for adic transformations, McGill U., Trottier Building (Engineering), Room 1100
Daniel Ingebretson (University of Illinois at Chicago), Shape, Homotopy, and Attractors, Hausdorff dimension of Kuperberg minimal sets, McGill U., Burnside Hall, Room 1B39
Hui Li (University of Windsor), Topological Dynamics and Operator Algebras, On The Products of Two Odometers, McGill U., Bronfman Building, Room 151
Thomas Sinclair (Purdue University, USA), Von Neumann Algebras and their Applications, Robinson forcing in C*-algebras, McGill U., Bronfman Building, Room 179
15:45 - 16:10 Padmapani Seneviratne (Texas A&M University), Finite Algebraic Combinatorics and Applications, Paley type bipartite graphs and self-dual codes, Centre Mont Royal, Salon International I & II
Giuseppe De Nittis (PUC, Chile), Mathematical Physics, Linear Response Theory: An Analytic-Algebraic Approach, McGill, McConnell Engineering Building, Rm 12
15:45 - 16:25 Colleen Robles (Duke University), Motives and Periods, Generalizing the Satake-Baily-Borel compactification, McGill U., Birks Building, Room 203
15:45 - 16:30 Krystal Taylor (Ohio State University, Columbus, United States), Fractal Geometry and Dynamical Systems, On the algebraic sum of a planar set and a smooth curve, McGill U., Trottier Building (Engineering), Room 100
15:45 - 16:45 Chris Sogge (Johns Hopkins University, USA), Spectrum and Dynamics, On the concentration of eigenfunctions, McGill U., Trottier Building (Engineering), Room 60
16:15 - 16:35 Guillermo Matera (Universidad Nacional de General Sarmiento, Argentina), Applied and Computational Algebra and Geometry, On the bit complexity of polynomial system solving, McGill U., Birks Building, Room 205
Natalia Garcia-Fritz (University of Toronto), Arithmetic Geometry and Related Topics, Curves of low genus and applications to Diophantine problems, Rutherford Physics Building, Room 114
Susan Cooper (North Dakota State University), Combinatorial Commutative Algebra, The Waldschmidt Constant For Monomial Ideals, McGill U., Birks Building, Room 111
Elisabeth Fink (University of Ottawa, Canada), Computations in Groups and Applications, Labelled geodesics in Coxeter groups, McGill U., Bronfman Building, Room 45
Cristhian Montoya (UNAM, Mexico), Control of Partial Differential Equations, Robust Stackelberg controllability for the Navier-Stokes equations, McGill U., Burnside Hall, Room 1205
Yurij Salmaniw (McMaster University), CMS-Studc Student Research Session, Bounded Solutions to a Singular Parabolic System, Rutherford Physics Building, Room 118, McGill
Slim Ibrahim (University of Victoria), Equations of Fluid Mechanics: Analysis, Analysis of a Magneto-Hydro-Dynamic (MHD) system, McGill U., McConnell Engineering Building, Room 11
Julian Sahasrabudhe (University of Memphis), Extremal and Probabilistic Combinatorics, Exponential Patterns in Arithmetic Ramsey Theory, McGill U., Arts Building, W-120
Edgardo Roldán-Pensado (Universidad Nacional Autónoma de México), Geometry and Combinatorics of Cell Complexes, Measure partitions with fixed directions, McGill U., Burnside Hall, Room 1B45
Sara Maloni (University of Virginia), Geometry and Physics of Higgs Bundles, The geometry of quasi-Hitchin symplectic Anosov representations., McGill U., Burnside Hall, Room 1B24
Sui Tang (Johns Hopkins University), Harmonic Analysis and Inverse Problems, Phase retrieval of evolving signals from space-time samples, McGill U., Trottier Bldg., Rm 2110
Julia Plavnik (Texas A&M University, USA), Hopf Algebras and Tensor Categories, On classification of super-modular categories by rank, McGill U., Arts Building, Room W-215
Antonio Hernandez-Garduno (UNAM, México), Hamiltonian Systems and Celestial Mechanics, Symmetric bifurcations of relative equilibria and isotropy for an $X_2 Y$ molecule, McGill U., Burnside Hall, Room 1B36
Javier Gomez Serrano (Princeton University), Incompressible Fluid Dynamics, Global solutions for the generalized SQG equation, McGill U., Burnside Hall, Room 920
Hiroe Oka (Ryukoku, Japan), Morse, Conley, and Forman Approaches to Smooth and Discrete Dynamics, Detecting Morse Decompositions of the Global Attractor of Regulatory Networks by Time Series Data, McGill U., Trottier Building (Engineering), Room 70
Juliana Pimentel (Federal University of ABC, Brazil), Models and Methods in Evolutionary Differential Equations on Mixed Scales, Longtime behavior of reaction-diffusion equations with infinite-time blow-up, McGill U., McConnell Engineering Building, Room 204
Eugenia Ellis (University of the Republic, Montevideo, Uruguay), Noncommutative Geometry and Quantization, Algebraic quantum kk-theory, McGill U., Bronfman Building, Room 46
Benoit Pausader (Brown University), Nonlinear and Stochastic Partial Differential Equations, Global existence for a wave-Klein-Gordon system, McGill U., Burnside Hall, Room 1214
Henry Kim (University of Toronto), Number Theory & Analysis, The least prime in a conjugacy class, McGill U., Rutherford Physics Building, Room 115
Chunhua Shan (University of Toledo, USA), Recent Advance in Disease Dynamics Analysis, Oscillations and complex dynamics in mosquito-borne diseases, McGill U., McConnell Engineering Building, Room 304
Adriano Moura (University of Campinas), Representations of Lie Algebras, Tensor Products of Integrable Modules for Affine Algebras, Demazure Flags, and Partition Identities, McGill U., Arts Building, Room 260
Diane Castonguay (Universidade Federal de Goiás, Goiânia, Brazil), Representation Theory of Algebras, Polynomial recognition of cluster algebras of finite type, McGill U., Arts Building, Room W-20
Jorge Guillermo Hounie (Universidade Federal de São Carlos, Brazil), Several Complex Variables, A Hopf lemma for holomorphic functions in Hardy spaces and applications to CR mappings, McGill U., Trottier Building, Rm 2120
Jon Chaika (University of Utah, USA), Symbolic Dynamics, Almost every 3-interval exchange transformation is not simple, McGill U., Trottier Building (Engineering), Room 1100
Jose M. Sanjurjo (Universidad Complutense de Madrid), Shape, Homotopy, and Attractors, Perturbation of global attractors and Shape Theory, McGill U., Burnside Hall, Room 1B39
Maria Isabel Cortez (Universidad de Santiago de Chile), Topological Dynamics and Operator Algebras, Strong orbit equivalence and eigenvalues, McGill U., Bronfman Building, Room 151
Isaac Goldbring (UC Irvine, USA), Von Neumann Algebras and their Applications, Explicit sentences distinguishing McDuff's II$_1$ factors, McGill U., Bronfman Building, Room 179
16:15 - 16:40 Alberto Ravagnani (University of Toronto), Finite Algebraic Combinatorics and Applications, Combinatorics and covering radius of rank-metric error-correcting codes, Centre Mont Royal, Salon International I & II
Hanne van den Bosch (PUC, Chile), Mathematical Physics, Spectrum of Dirac operators describing Graphene Quantum dots, McGill, McConnell Engineering Building, Rm 12
17:00 - 17:20 Ezra Miller (Duke University, USA), Applied and Computational Algebra and Geometry, Algebraic data structures for topological summaries, McGill U., Birks Building, Room 205
Ricardo Conceição (Gettysburg College), Arithmetic Geometry and Related Topics, Solutions of the Hurwitz-Markoff equation over polynomial rings, Rutherford Physics Building, Room 114
Jennifer Biermann (Hobart and William Smith Colleges), Combinatorial Commutative Algebra, Colorings of simplicial complexes and vertex decomposability, McGill U., Birks Building, Room 111
Kirsten Morris (University of Waterloo), Control of Partial Differential Equations, Optimal Sensor Design in Estimation, McGill U., Burnside Hall, Room 1205
Kento Osuga (University of Alberta), CMS-Studc Student Research Session, Supereigenvalue Models and Topological Recursion, Rutherford Physics Building, Room 118, McGill
Alexandre Noll Marques (MIT, USA), Equations of Fluid Mechanics: Numerics, Solving flow problems to high order of accuracy with embedded boundaries, McGill U., McConnell Engineering Building, Room 13
Hakima Bessaih (University of Wyoming), Equations of Fluid Mechanics: Analysis, Mean field limit of interacting filaments and vector valued non linear PDEs, McGill U., McConnell Engineering Building, Room 11
Jacob Fox (Stanford), Extremal and Probabilistic Combinatorics, Arithmetic progressions, regularity lemmas, and removal lemmas, McGill U., Arts Building, W-120
Davi Maximo (University of Pennsylvania, USA), Geometric Analysis, On Morse index estimates for minimal surfaces, McGill U., Burnside Hall, Room 1B23
Leticia Brambila-Paz (CIMAT), Geometry and Physics of Higgs Bundles, Coherent Higgs Systems, McGill U., Burnside Hall, Room 1B24
Kasso A. Okoudjou (University of Maryland), Harmonic Analysis and Inverse Problems, The HRT Conjecture for real-valued functions, McGill U., Trottier Bldg., Rm 2110
Eric Rowell (Texas A&M University, College Station, USA), Hopf Algebras and Tensor Categories, Metaplectic Modular Categories, McGill U., Arts Building, Room W-215
Lennard Bakker (Brigham Young University, USA), Hamiltonian Systems and Celestial Mechanics, Topological Existence of Periodic Orbits in a Two-Center Symmetric Pair Problem, McGill U., Burnside Hall, Room 1B36
Gabriela Planas (Universidade Estadual de Campinas), Incompressible Fluid Dynamics, Well-posedness for a non-isothermal flow of two viscous incompressible fluids, McGill U., Burnside Hall, Room 920
Pawel Pilarczyk (CSUCI, USA), Morse, Conley, and Forman Approaches to Smooth and Discrete Dynamics, Progress in algorithmic computation of the homological Conley index map, McGill U., Trottier Building (Engineering), Room 70
Antonín Slavík (Charles University, Czech Republic), Models and Methods in Evolutionary Differential Equations on Mixed Scales, Invariant regions for systems of lattice reaction-diffusion equations, McGill U., McConnell Engineering Building, Room 204
Joseph Migler (Ohio State University, Columbus, USA), Noncommutative Geometry and Quantization, Determinants of almost commuting operators, McGill U., Bronfman Building, Room 46
Vincent Martinez (Tulane University), Nonlinear and Stochastic Partial Differential Equations, Applications of asymptotic coupling in hydrodynamic and related equations, McGill U., Burnside Hall, Room 1214
Emilio Lauret (Universidad Nacional de Córdoba, Argentina), Number Theory & Analysis, One-norm spectrum of a lattice, McGill U., Rutherford Physics Building, Room 115
Xiaotian Wu (University of Montreal, Canada), Recent Advance in Disease Dynamics Analysis, Model-based and data-driven pharmacokinetic parameter estimation, McGill U., McConnell Engineering Building, Room 304
Arturo Pianzola (University of Alberta), Representations of Lie Algebras, Lie algebroids arising from infinite dimensional Lie theory, McGill U., Arts Building, Room 260
Pamela Suárez (Universidad Nacional de Mar del Plata, Argentina), Representation Theory of Algebras, On the global dimension of the endomorphism algebra of a $\tau$-tilting module, McGill U., Arts Building, Room W-20
Purvi Gupta (University of Western Ontario, Canada), Several Complex Variables, A nonpolynomially convex isotropic torus with no attached discs, McGill U., Trottier Building, Rm 2120
Wenbo Sun (Ohio State University, USA), Symbolic Dynamics, Symbolic counter-examples for quantitative multiple recurrence problems, McGill U., Trottier Building (Engineering), Room 1100
Francisco R. Ruiz del Portal (Universidad Complutense de Madrid), Shape, Homotopy, and Attractors, About the cohomological Conley index of isolated invariant continua, McGill U., Burnside Hall, Room 1B39
Adam Dor-On (University of Waterloo), Topological Dynamics and Operator Algebras, Representations of Toeplitz-Cuntz-Krieger algebras, McGill U., Bronfman Building, Room 151
17:00 - 17:25 Felice Manganiello (Clemson University), Finite Algebraic Combinatorics and Applications, Representations of the Multicast Network Problem, Centre Mont Royal, Salon International I & II
17:00 - 17:40 Phillip Griffiths (Institute for Advanced Study), Motives and Periods, Hodge Theory and Moduli, McGill U., Birks Building, Room 203
Marcelo Laca (University of Victoria, Canada), Von Neumann Algebras and their Applications, KMS states of the C*-algebras of quasi-lattice ordered semigroups, McGill U., Bronfman Building, Room 179
17:00 - 17:45 Balázs Bárány (Budapest University of Technology and Economics, Budapest, Hungary), Fractal Geometry and Dynamical Systems, Dimension of self-affine sets with typical linear parts, McGill U., Trottier Building (Engineering), Room 100
17:00 - 17:50 Jan Philip Solovej (University of Copenhagen), Mathematical Physics, Dirac operators with magnetic links, McGill, McConnell Engineering Building, Rm 12
17:00 - 18:00 Michael Davis (The Ohio State University), Geometry and Combinatorics of Cell Complexes, The Euler Characteristic Conjecture and the Charney-Davis Conjecture, McGill U., Burnside Hall, Room 1B45
Xiangjin Xu (SUNY Binghamton), Spectrum and Dynamics, Gradient estimates for spectral clusters and Carleson measures on compact manifolds with boundary, McGill U., Trottier Building (Engineering), Room 60
17:30 - 17:50 Amalia Pizarro-Madariaga (Universidad de Valparaíso), Arithmetic Geometry and Related Topics, Rational Products of Singular Moduli, Rutherford Physics Building, Room 114
Jay Schweig (Oklahoma State University), Combinatorial Commutative Algebra, The type defect of a simplicial complex, McGill U., Birks Building, Room 111
Marcelo Moreira Cavalcanti (Universidade Estadual de Maringá), Control of Partial Differential Equations, Uniform decay rate estimates for the semilinear wave equation in inhomogeneous media with locally distributed nonlinear damping, McGill U., Burnside Hall, Room 1205
Octavian Mitrea (University of Western Ontario), CMS-Studc Student Research Session, Open Whitney umbrellas are locally polynomially convex, Rutherford Physics Building, Room 118, McGill
Pascal Poullet (Université des Antilles, France), Equations of Fluid Mechanics: Numerics, An explicit predictor-corrector scheme for sediment transports, McGill U., McConnell Engineering Building, Room 13
Michael Dabkowski (Lawrence Technological University), Equations of Fluid Mechanics: Analysis, On the Global Stability of a Nonlinear PDE with a Nonlocal Term, McGill U., McConnell Engineering Building, Room 11
Claudio Meneses (CIMAT), Geometry and Physics of Higgs Bundles, On the Narasimhan-Atiyah-Bott metrics on moduli of parabolic bundles, McGill U., Burnside Hall, Room 1B24
Zuhair Nashed (University of Central Florida), Harmonic Analysis and Inverse Problems, Perturbation Theory of Generalized Inverse Operators and Ill-Posed Inverse Problems, McGill U., Trottier Bldg., Rm 2110
César Galindo (Universidad de los Andes, Bogotá, Colombia), Hopf Algebras and Tensor Categories, Fermionic Modular Categories, McGill U., Arts Building, Room W-215
Michele Coti-Zelati (University of Maryland), Incompressible Fluid Dynamics, Mixing and dissipation in two-dimensional fluids, McGill U., Burnside Hall, Room 920
Ewerton Vieira (UF – Goiás, Brazil), Morse, Conley, and Forman Approaches to Smooth and Discrete Dynamics, Detecting Bifurcations via Transition Matrix, McGill U., Trottier Building (Engineering), Room 70
Harbir Antil (George Mason University, Virginia, USA), Models and Methods in Evolutionary Differential Equations on Mixed Scales, Fractional Operators with Inhomogeneous Boundary Conditions: Analysis, Control, and Discretization, McGill U., McConnell Engineering Building, Room 204
Rudy Rodsphon (Vanderbilt University, Nashville, USA), Noncommutative Geometry and Quantization, Quantizations and index theory, McGill U., Bronfman Building, Room 46
Konstantin Matetski (University of Toronto), Nonlinear and Stochastic Partial Differential Equations, Convergence of general weakly asymmetric exclusion processes, McGill U., Burnside Hall, Room 1214
Alex Kontorovich (Rutgers University), Number Theory & Analysis, Beyond Expansion and Arithmetic Chaos, McGill U., Rutherford Physics Building, Room 115
Thomas Brüstle (Université de Sherbrooke, Québec, Canada), Representation Theory of Algebras, Stability conditions and torsion classes, McGill U., Arts Building, Room W-20
Jorge Vitório Pereira (Instituto de Matemática Pura e Aplicada, Brazil), Several Complex Variables, Holonomy of compact leaves, McGill U., Trottier Building, Rm 2120
Yair Hartman (Northwestern University, USA), Symbolic Dynamics, Thompson's group F is not strongly amenable, McGill U., Trottier Building (Engineering), Room 1100
Rolando Jiménez Benitez (Universidad Nacional Autónoma de México, Cuernavaca Branch), Shape, Homotopy, and Attractors, Free, proper and cellular actions of discrete groups on homotopy circles, McGill U., Burnside Hall, Room 1B39
Daniel Gonçalves (UFSC - Universidade Federal de Santa Catarina), Topological Dynamics and Operator Algebras, Infinite alphabet edge shift spaces via ultragraphs and their C*-algebras, McGill U., Bronfman Building, Room 151
17:30 - 17:55 Michael O'Sullivan (San Diego State University), Finite Algebraic Combinatorics and Applications, Maximal $t$-path traceable graphs, Centre Mont Royal, Salon International I & II
19:00 - 20:00 Banquet Reception, Centre Mont-Royal
20:00 - 22:00 Banquet, Centre Mont-Royal
Friday, July 28
09:00 - 10:00 Harald Helfgott (Université Paris Diderot (Paris VII), France), Invited Speakers, Voronoi, Sierpinski, Eratosthenes, International I (E) & II (F), CMR
Jill Pipher (Brown University, USA), Invited Speakers, Regularity of solutions to elliptic and parabolic equations, Symposia Auditorium, CMR
Nicolas Andruskiewitsch (Universidad Nacional de Córdoba, Argentina), Invited Speakers, Nichols algebras, Cartier I & II, CMR
10:30 - 11:30 Kannan Soundararajan (Stanford University, USA), Plenary Lectures, Recent progress in multiplicative number theory, Symposia Auditorium, CMR
11:45 - 12:05 Jon Hauenstein (University of Notre Dame, USA), Applied and Computational Algebra and Geometry, Equilibria of the Kuramoto model, McGill U., Birks Building, Room 205
Cecilia Salgado (Universidade Federal do Rio de Janeiro), Arithmetic Geometry and Related Topics, Rank bounds on fibrations of Jacobian varieties, Rutherford Physics Building, Room 114
Adam Van Tuyl (McMaster University), Combinatorial Commutative Algebra, Algebraic properties of circulant graphs, McGill U., Birks Building, Room 111
Dessislava Kochloukova (UNICAMP, Campinas, Brazil), Computations in Groups and Applications, Fibre products of groups, McGill U., Bronfman Building, Room 45
Valeria Cavalcanti (Universidade Estadual de Maringá), Control of Partial Differential Equations, Exponential stability for the wave model with localized memory, McGill U., Burnside Hall, Room 1205
Marie Lafrance (Université de Montréal), CMS-Studc Student Research Session, Supersymmetric sigma models and constant solutions, Rutherford Physics Building, Room 118, McGill
Andres Navas (Universidad de Santiago de Chile (USACH)), Discrete Groups and Operator Algebras, On the affine isometric actions over a dynamics., McGill U., Bronfman Building, Room 178
Edriss Titi (Texas A&M University / Weizmann Institute), Equations of Fluid Mechanics: Analysis, Recent advances concerning the Primitive Equations of oceanic and atmospheric dynamics, McGill U., McConnell Engineering Building, Room 11
Maya Stein (Universidad de Chile), Extremal and Probabilistic Combinatorics, Tree embeddings, McGill U., Arts Building, W-120
Ailana Fraser (UBC, Canada), Geometric Analysis, Minimal submanifolds in manifolds of nonnegative Ricci curvature, McGill U., Burnside Hall, Room 1B23
Steve Rayan (University of Saskatchewan), Geometry and Physics of Higgs Bundles, Asymptotics of hyperpolygons, McGill U., Burnside Hall, Room 1B24
Galina Garcia (Universidad de Santiago de Chile), Harmonic Analysis and Inverse Problems, A source reconstruction algorithm for the Stokes system from local and missing velocity measurements, McGill U., Trottier Bldg., Rm 2110
Pavel Etingof (MIT, USA), Hopf Algebras and Tensor Categories, A counterexample to the Poincare-Birkhoff-Witt theorem, McGill U., Arts Building, Room W-215
Tanya Schmah (University of Ottawa), Hamiltonian Systems and Celestial Mechanics, Controlling rigid body attitude via shape change, McGill U., Burnside Hall, Room 1B36
Susan Friedlander (University of Southern California), Incompressible Fluid Dynamics, A stochastically forced shell model for turbulence, McGill U., Burnside Hall, Room 920
Marian Mrozek (UJ Kraków, Poland), Morse, Conley, and Forman Approaches to Smooth and Discrete Dynamics, Conley index theory for sampled dynamical systems, McGill U., Trottier Building (Engineering), Room 70
Tomás Caraballo (University of Seville, Spain), Models and Methods in Evolutionary Differential Equations on Mixed Scales, Asymptotic behavior of 2D-Navier-Stokes equations with bounded or unbounded delay, McGill U., McConnell Engineering Building, Room 204
Mitsuru Wilson (Universidad de los Andes, Bogota, Colombia), Noncommutative Geometry and Quantization, Canonical group quantization of the noncommutative tori and the noncommutative spheres, McGill U., Bronfman Building, Room 46
Karen Zaya (University of Michigan), Nonlinear and Stochastic Partial Differential Equations, On Regularity Properties for Fluid Equations, McGill U., Burnside Hall, Room 1214
Michael Li (University of Alberta, Canada), Recent Advance in Disease Dynamics Analysis, Nonidentifiability Issues in Fitting Transmission Models with Disease Data, McGill U., McConnell Engineering Building, Room 304
Mikhail Kotchetov (Memorial University), Representations of Lie Algebras, Graded-simple modules via the loop construction, McGill U., Arts Building, Room 260
Maria Julia Redondo (Universidad Nacional del Sur, Bahia Blanca, Argentina), Representation Theory of Algebras, Cohomology of partial smash products, McGill U., Arts Building, Room W-20
Yunus E. Zeytuncu (University of Michigan-Dearborn, USA), Several Complex Variables, Friedrichs Operator on Pseudoconvex Domains in $\mathbb{C}^n$, McGill U., Trottier Building, Rm 2120
Samuel Petite (Université de Picardie Jules Verne, France), Symbolic Dynamics, Restrictions on the group of automorphisms preserving a subshift, McGill U., Trottier Building (Engineering), Room 1100
Alexander N. Dranishnikov (University of Florida), Shape, Homotopy, and Attractors, Cohomologically Strongly Infinite Dimensional Compacta, McGill U., Burnside Hall, Room 1B39
11:45 - 12:10 Daniel Panario (Carleton University), Mathematical Applications in Cryptography, Ambiguity, deficiency and differential spectrum of low degree normalized permutation polynomials over finite fields, Centre Mont Royal, Room Cartier I & II
11:45 - 12:15 Huaxin Lin (University of Oregon), Classification of Amenable C*-algebras, Recent results in the Elliott program, McGill U., Bronfman Building, Room 1
11:45 - 12:25 Ionut Chifan (University of Iowa, USA), Von Neumann Algebras and their Applications, Rigidity in group von Neumann algebra, McGill U., Bronfman Building, Room 179
11:45 - 12:30 Mehdi Pourbarat (Shahid Beheshti University, Tehran, Iran), Fractal Geometry and Dynamical Systems, On the arithmetic difference of middle Cantor sets, McGill U., Trottier Building (Engineering), Room 100
11:45 - 12:35 Gian Michele Graf (ETH, Zurich), Mathematical Physics, An overview on topological insulators, McGill, McConnell Engineering Building, Rm 12
11:45 - 12:45 Alex Suciu (Northeastern University), Geometry and Combinatorics of Cell Complexes, Polyhedral products, duality properties, and Cohen-Macaulay complexes, McGill U., Burnside Hall, Room 1B45
Pablo Shmerkin (Universidad Torcuato di Tella, Argentina), Spectrum and Dynamics, Normal numbers and fractal measures, McGill U., Trottier Building (Engineering), Room 60
12:00 - 15:00 Student Posters Teardown, Cartier / International Foyer, Centre Mont Royal
12:15 - 12:35 Sara Kalisnik Verovsek (Brown University, USA), Applied and Computational Algebra and Geometry, Tropical Coordinates on the Space of Persistence Barcodes, McGill U., Birks Building, Room 205
Christelle Vincent (University of Vermont), Arithmetic Geometry and Related Topics, Constructing hyperelliptic curves of genus 3 whose Jacobians have CM, Rutherford Physics Building, Room 114
Sonja Mapes (University of Notre Dame), Combinatorial Commutative Algebra, Computing projective dimension of monomial ideals via associated hypergraphs and lcm-lattices, McGill U., Birks Building, Room 111
Cristobal Rivas (Universidad de Santiago de Chile), Computations in Groups and Applications, Dynamical properties of invariant orderings on groups, McGill U., Bronfman Building, Room 45
Eduardo Cerpa (Universidad Técnica Federico Santamaría), Control of Partial Differential Equations, On the stability of some PDE-ODE systems involving the wave equation, McGill U., Burnside Hall, Room 1205
Thad Janisse (University of Toronto), CMS-Studc Student Research Session, Finding Subalgebras of Real Semisimple Lie Algebras, Rutherford Physics Building, Room 118, McGill
Bogdan Nica (McGill University and Texas A&M), Discrete Groups and Operator Algebras, C*-algebras of hyperbolic groups - To infinity and back, McGill U., Bronfman Building, Room 178
Elaine Cozzi (Oregon State University), Equations of Fluid Mechanics: Analysis, The aggregation equation with Newtonian potential, McGill U., McConnell Engineering Building, Room 11
Jan Volec (McGill University), Extremal and Probabilistic Combinatorics, The codegree threshold of $K_4^-$, McGill U., Arts Building, W-120
Victoria Hoskins (Freie Universität Berlin), Geometry and Physics of Higgs Bundles, Group actions on quiver moduli spaces and branes, McGill U., Burnside Hall, Room 1B24
Bin Han (University of Alberta), Harmonic Analysis and Inverse Problems, Directional Complex Tight Framelets with Applications to Image Processing, McGill U., Trottier Bldg., Rm 2110
Milen Yakimov (Louisiana State University, USA), Hopf Algebras and Tensor Categories, Prime spectra of 2-categories and categorification of open Richardson varieties, McGill U., Arts Building, Room W-215
Marian Gidea (Yeshiva University, New York, USA), Hamiltonian Systems and Celestial Mechanics, Computer assisted proof of Arnold diffusion in the planar elliptic restricted three-body problem, McGill U., Burnside Hall, Room 1B36
Alexander Shnirelman (Concordia University), Incompressible Fluid Dynamics, On the 2-point problem for the Euler-Lagrange equations, McGill U., Burnside Hall, Room 920
Joseph Shomberg (Providence College, USA), Models and Methods in Evolutionary Differential Equations on Mixed Scales, On non-isothermal viscous nonlocal Cahn-Hilliard equations, McGill U., McConnell Engineering Building, Room 204
Fereshteh Yazdani (University of New Brunswick, Fredericton, Canada), Noncommutative Geometry and Quantization, Hopf-Cyclic Cohomology of $\mathcal{H}_n$ with Nontrivial Coefficients, McGill U., Bronfman Building, Room 46
Magdalena Czubak (University of Colorado Boulder), Nonlinear and Stochastic Partial Differential Equations, The fluid equations on a hyperbolic space, McGill U., Burnside Hall, Room 1214
Amir Mohammadi (University of California at San Diego), Number Theory & Analysis, Effective equidistribution of certain adelic periods, McGill U., Rutherford Physics Building, Room 115
Max Oliveira de Souza (Universidade Federal Fluminense, Brazil), Recent Advance in Disease Dynamics Analysis, On Aedes, Wolbachia and the Control of Urban Arboviruses, McGill U., McConnell Engineering Building, Room 304
Georgia Benkart (University of Wisconsin, Madison), Representations of Lie Algebras, A Tangled Approach to Cross Product Algebras, Their Invariants and Centralizers, McGill U., Arts Building, Room 260
Yadira Valdivieso Diaz (Universidad Nacional de Mar del Plata, Argentina), Representation Theory of Algebras, Computing homologies of algebras from surfaces, McGill U., Arts Building, Room W-20
Loredana Lanzani (Syracuse University, USA), Several Complex Variables, Holomorphic Singular Integrals: counterexamples to the $L^p$ theory, McGill U., Trottier Building, Rm 2120
Van Cyr (Bucknell University, USA), Symbolic Dynamics, Automorphisms of zero entropy symbolic systems, McGill U., Trottier Building (Engineering), Room 1100
Danuta Kolodziejczyk (Warsaw University of Technology), Shape, Homotopy, and Attractors, Cartesian Powers of Shapes of FANR's and Polyhedra, McGill U., Burnside Hall, Room 1B39
Benjamin Itzá-Ortíz (UAEH - Universidad Autónoma del Estado de Hidalgo), Topological Dynamics and Operator Algebras, The isomorphism class of mapping tori on simple Banach algebras, McGill U., Bronfman Building, Room 151
12:15 - 12:40 Jintai Ding (University of Cincinnati), Mathematical Applications in Cryptography, Post-Quantum Key Exchange from the LWE, Centre Mont Royal, Room Cartier I & II
12:15 - 12:45 Bhishan Jacelon (University of Toronto), Classification of Amenable C*-algebras, McGill U., Bronfman Building, Room 1
12:45 - 13:15 Guihua Gong (University of Puerto Rico), Classification of Amenable C*-algebras, Classification of inductive limit C*-algebras with ideal property, McGill U., Bronfman Building, Room 1
13:15 - 13:45 Liangqing Li (University of Puerto Rico), Classification of Amenable C*-algebras, Exponential length of commutator unitaries of simple AH C*-algebras., McGill U., Bronfman Building, Room 1
13:45 - 14:15 Ping Wong Ng (University of Louisiana at Lafayette), Classification of Amenable C*-algebras, McGill U., Bronfman Building, Room 1
14:15 - 14:35 Stephen Watt (University of Waterloo, Canada), Applied and Computational Algebra and Geometry, Toward Gröbner Bases of Symbolic Polynomials, McGill U., Birks Building, Room 205
Ricardo Toledano (Universidad Nacional del Litoral), Arithmetic Geometry and Related Topics, S-minimal value set polynomials and towers of Garcia, Stichtenoth and Thomas type, Rutherford Physics Building, Room 114
Art Duval (University of Texas at El Paso), Combinatorial Commutative Algebra, A non-partitionable Cohen-Macaulay simplicial complex, and implications for Stanley depth, McGill U., Birks Building, Room 111
Jonathan Gryak (CUNY Graduate Center, USA), Computations in Groups and Applications, On the Conjugacy Problem in Certain Metabelian Groups, McGill U., Bronfman Building, Room 45
Bianca Calsavara (Campinas, Brazil), Control of Partial Differential Equations, Local exact controllability of two-phase field solidification systems with few controls, McGill U., Burnside Hall, Room 1205
Nancy Wallace (Université du Québec à Montréal), CMS-Studc Student Research Session, A generalization of the $\nabla$ operator (talk in French, slides in English), Rutherford Physics Building, Room 118, McGill
Rufus Willett (University of Hawaii), Discrete Groups and Operator Algebras, K-homology and localization algebras, McGill U., Bronfman Building, Room 178
Xinwei Yu (University of Alberta, Edmonton), Equations of Fluid Mechanics: Analysis, Some new Prodi-Serrin type conditions for the 3D Navier-Stokes equations, McGill U., McConnell Engineering Building, Room 11
Simon Griffiths (PUC-Rio), Extremal and Probabilistic Combinatorics, Multi-coloured urns on graphs, and co-existence, McGill U., Arts Building, W-120
Lu Wang (University of Wisconsin, USA), Geometric Analysis, Asymptotic structure of self-shrinkers of mean curvature flow, McGill U., Burnside Hall, Room 1B23
Sara Maloni (University of Virginia), Geometry and Combinatorics of Cell Complexes, Polyhedra inscribed in quadrics and their geometry., McGill U., Burnside Hall, Room 1B45
Qiongling Li (Caltech), Geometry and Physics of Higgs Bundles, Metric domination for Higgs bundles of quiver type, McGill U., Burnside Hall, Room 1B24
Magali Louise Marie Folch (Universidad Nacional Autónoma de México (UNAM)), Harmonic Analysis and Inverse Problems, Smoothing operators coming from the Navier equation in elasticity, McGill U., Trottier Bldg., Rm 2110
Cris Negron (MIT, USA), Hopf Algebras and Tensor Categories, Small quantum groups associated to Belavin-Drinfeld triples, McGill U., Arts Building, Room W-215
Ezequiel Maderna (Universidad de la República, Montevideo, Uruguay), Hamiltonian Systems and Celestial Mechanics, Applications of weak KAM theory to some gravitational problems, McGill U., Burnside Hall, Room 1B36
Alexei Mailybaev (Instituto Nacional de Matemática Pura e Aplicada), Incompressible Fluid Dynamics, Spontaneously stochastic solutions for the Rayleigh-Taylor instability, McGill U., Burnside Hall, Room 920
Bogdan Batko (UJ Kraków, Poland), Morse, Conley, and Forman Approaches to Smooth and Discrete Dynamics, Weak Index Pairs and the Conley Index for Discrete Multivalued Dynamical Systems, McGill U., Trottier Building (Engineering), Room 70
Lijin Wang (University of Chinese Academy of Sciences, China), Models and Methods in Evolutionary Differential Equations on Mixed Scales, Computational singular perturbation analysis of stochastic differential equation systems with stiffness, McGill U., McConnell Engineering Building, Room 204
Farzad Fathizadeh (Caltech, Pasadena, USA), Noncommutative Geometry and Quantization, The term $a_4$ in the heat kernel expansion of noncommutative tori, McGill U., Bronfman Building, Room 46
Renato Calleja (Universidad Nacional Autónoma de México), Nonlinear and Stochastic Partial Differential Equations, Construction of Quasi-Periodic Response Solutions for Forced Systems with Strong Damping, McGill U., Burnside Hall, Room 1214
Misha Belolipetsky (IMPA), Number Theory & Analysis, Lehmer's problem and triangulations of arithmetic hyperbolic 3-orbifolds, McGill U., Rutherford Physics Building, Room 115
Yijun Lou (The Hong Kong Polytechnic University), Recent Advance in Disease Dynamics Analysis, Modelling Lyme Disease Transmission, McGill U., McConnell Engineering Building, Room 304
Andrea Solotar (Universidad de Buenos Aires), Representations of Lie Algebras, Some invariants of the super Jordan plane, McGill U., Arts Building, Room 260
Birge Huisgen-Zimmermann (University of California, Santa Barbara, USA), Representation Theory of Algebras, Irreducible components of varieties of representations, McGill U., Arts Building, Room W-20
Sofía Ortega Castillo (Centro de Investigación en Matemáticas), Several Complex Variables, Strong pseudoconvexity and Cauchy-Riemann equations, McGill U., Trottier Building, Rm 2120
Sebastián Donoso (Universidad de Chile, Chile), Symbolic Dynamics, Automorphism groups of Toeplitz subshifts, McGill U., Trottier Building (Engineering), Room 1100
Sergey Antonyan (Universidad Nacional Autónoma de México), Shape, Homotopy, and Attractors, Characterizing G-A(N)R spaces by means of $H$-fixed point sets, McGill U., Burnside Hall, Room 1B39
David Kerr (Texas A&M University), Topological Dynamics and Operator Algebras, Almost finiteness and $\mathcal{Z}$-stability, McGill U., Bronfman Building, Room 151
Roman Sasyk (Universidad de Buenos Aires, Argentina), Von Neumann Algebras and their Applications, On the Classification of Free Araki Woods Factors, McGill U., Bronfman Building, Room 179
14:15 - 14:40 Tim Alderson (University of New Brunswick), Finite Algebraic Combinatorics and Applications, Constructions and bounds on 3-D Optical Orthogonal Codes, Centre Mont Royal, Salon International I & II
Petr Lisonek (Simon Fraser University), Mathematical Applications in Cryptography, Non-existence results for vectorial bent functions with Dillon-type exponents, Centre Mont Royal, Room Cartier I & II
Jeff Schenker (Michigan State University, East Lansing), Mathematical Physics, Localization in disordered polaron models, McGill, McConnell Engineering Building, Rm 12
14:15 - 14:45 Andrew Dean (Lakehead University), Classification of Amenable C*-algebras, Classification Problems Involving Real C*-algebras, McGill U., Bronfman Building, Room 1
14:15 - 14:55 Patrick Brosnan (University of Maryland), Motives and Periods, Perverse obstructions to regular flat compactifications, McGill U., Birks Building, Room 203
14:15 - 15:00 Carlos Matheus Silva Santos (CNRS/Institut Galilée, Université Paris 13, France), Fractal Geometry and Dynamical Systems, On the Lagrange and Markov spectrum, McGill U., Trottier Building (Engineering), Room 100
14:15 - 15:15 Tatyana Barron (University of Western Ontario, Canada), Spectrum and Dynamics, Vector-valued Poincaré series on G/K, McGill U., Trottier Building (Engineering), Room 60
14:45 - 15:05 Luis David Garcia Puente (Sam Houston State University, USA), Applied and Computational Algebra and Geometry, Gröbner bases of neural ideals, McGill U., Birks Building, Room 205
Pedro Luis del Ángel Rodríguez (Centro de Investigación en Matemáticas), Arithmetic Geometry and Related Topics, Eichler-Shimura and extensions of Hodge structures, Rutherford Physics Building, Room 114
Anton Dochtermann (Texas State University), Combinatorial Commutative Algebra, Trees, skeleta, and combinatorics of monomial chip-firing ideals, McGill U., Birks Building, Room 111
Keivan Mallahi-Karai (Jacobs University, Germany), Computations in Groups and Applications, Random semigroups in solvable linear groups, McGill U., Bronfman Building, Room 45
Bingyu Zhang (University of Cincinnati), Control of Partial Differential Equations, General Boundary Value Problems of the Korteweg-de Vries Equation on a Bounded Domain, McGill U., Burnside Hall, Room 1205
Nitin Chidambaram (University of Alberta), CMS-Studc Student Research Session, Topological recursion and quantum curves, Rutherford Physics Building, Room 118, McGill
Rodolfo Viera (Universidad de Santiago de Chile (USACH)), Discrete Groups and Operator Algebras, Densities non-realizable as the Jacobian of a 2-dimensional bi-Lipschitz map are generic, McGill U., Bronfman Building, Room 178
David Shirokoff (NJIT, USA), Equations of Fluid Mechanics: Numerics, Unconditional Stability for Multistep IMEX Schemes, McGill U., McConnell Engineering Building, Room 13
Aseel Farhat (University of Virginia), Equations of Fluid Mechanics: Analysis, Geometry of 3D turbulent flows and the scaling gap in the 3D Navier-Stokes regularity problem, McGill U., McConnell Engineering Building, Room 11
Peter Allen (LSE), Extremal and Probabilistic Combinatorics, Sparse hypergraph regularity, McGill U., Arts Building, W-120
Emanuele Delucchi (University of Fribourg), Geometry and Combinatorics of Cell Complexes, Combinatorial models for toric arrangements, McGill U., Burnside Hall, Room 1B45
Sergei Gukov (Caltech), Geometry and Physics of Higgs Bundles, Equivariant invariants of the Hitchin moduli space, McGill U., Burnside Hall, Room 1B24
Ahmad Zayed (DePaul University, USA), Harmonic Analysis and Inverse Problems, A New Two-Dimensional Fractional Fourier Transform and the Wigner Distribution, McGill U., Trottier Bldg., Rm 2110
Cristian Vay (Universidad Nacional de Córdoba, Argentina), Hopf Algebras and Tensor Categories, On projective modules over finite quantum groups, McGill U., Arts Building, Room W-215
Toshiaki Fujiwara (College of Liberal Arts and Sciences, Kitasato University, Japan), Hamiltonian Systems and Celestial Mechanics, Figure-eight solution and slalom solutions in function space, McGill U., Burnside Hall, Room 1B36
Nathan Glatt-Holtz (Tulane University), Incompressible Fluid Dynamics, Stochastic Models for Turbulent Convection, McGill U., Burnside Hall, Room 920
Naiara Vergian de Paulo Costa (Federal University of Santa Catarina), Morse, Conley, and Forman Approaches to Smooth and Discrete Dynamics, Multiplicity of periodic orbits and homoclinics for Hamiltonian systems in $\mathbb{R}^4$, McGill U., Trottier Building (Engineering), Room 70
Son Luu Nguyen (University of Puerto Rico, Rio Piedras Campus, USA), Models and Methods in Evolutionary Differential Equations on Mixed Scales, On the McKean-Vlasov limit for interacting diffusions with Markovian switching, McGill U., McConnell Engineering Building, Room 204
Masoud Khalkhali (Western University, London, Canada), Noncommutative Geometry and Quantization, Ricci Curvature in Noncommutative Geometry, McGill U., Bronfman Building, Room 46
Camelia Pop (University of Minnesota), Nonlinear and Stochastic Partial Differential Equations, Boundary estimates for a degenerate parabolic equation with partial Dirichlet boundary conditions, McGill U., Burnside Hall, Room 1214
Clara Aldana (Universite du Luxembourg), Number Theory & Analysis, Determinants of Laplacians on surfaces with singularities, McGill U., Rutherford Physics Building, Room 115
Xi Huo (York University, Canada), Recent Advance in Disease Dynamics Analysis, Modelling Antimicrobial De-escalation: Implications for Stewardship Programs, McGill U., McConnell Engineering Building, Room 304
Nicolas Libedinsky (Universidad de Chile), Representations of Lie Algebras, The Anti-spherical category, McGill U., Arts Building, Room 260
José A. Vélez (Valdosta State University, Georgia (USA)), Representation Theory of Algebras, Universal deformation rings of string modules over a class of self-injective special biserial algebras, McGill U., Arts Building, Room W-20
Alcides Lins Neto (Instituto de Matemática Pura e Aplicada, Brazil), Several Complex Variables, Logarithmic foliations of codimension greater than one, McGill U., Trottier Building, Rm 2120
María Isabel Cortez (Universidad de Santiago de Chile, Chile), Symbolic Dynamics, Invariant measures, monotilable groups and orbit equivalence, McGill U., Trottier Building (Engineering), Room 1100
Natalia Jonard-Perez (Universidad Nacional Autónoma de México), Shape, Homotopy, and Attractors, Groups of affine transformations acting on hyperspaces of compact convex subsets of $\mathbb R^n$, McGill U., Burnside Hall, Room 1B39
Nick Ormes (University of Denver), Topological Dynamics and Operator Algebras, Speedups of Topological Systems, McGill U., Bronfman Building, Room 151
Lauren Ruth (UC Riverside, USA), Von Neumann Algebras and their Applications, von Neumann dimension for lattices in PGL(2,F), F a local non-archimedean field, McGill U., Bronfman Building, Room 179
14:45 - 15:10 Claudio Qureshi (IMECC, Brazil), Finite Algebraic Combinatorics and Applications, Periods of iterations of mappings over finite fields with restricted preimage sizes, Centre Mont Royal, Salon International I & II
Kenza Guenda (Victoria University), Mathematical Applications in Cryptography, A Secure New Variant of the McEliece Cryptosystem, Centre Mont Royal, Room Cartier I & II
Milivoje Lukic (Rice University), Mathematical Physics, Analytic quasiperiodic Schrödinger operators at small coupling, McGill, McConnell Engineering Building, Rm 12
14:45 - 15:15 Viviane Beuter (Federal University of Santa Catarina (UFSC)), Classification of Amenable C*-algebras, Simplicity of skew inverse semigroup rings with an application to Steinberg algebras, McGill U., Bronfman Building, Room 1
15:15 - 15:45 N. Christopher Phillips (University of Oregon), Classification of Amenable C*-algebras, McGill U., Bronfman Building, Room 1
Break, McGill University, Various Buildings
15:45 - 16:05 Carlos Valencia (Cinvestav, Mexico), Applied and Computational Algebra and Geometry, Some algorithmic aspects of arithmetical structures, McGill U., Birks Building, Room 205
Alejandra Alvarado (Eastern Illinois University), Arithmetic Geometry and Related Topics, Arithmetic Progressions on Conic Sections, Rutherford Physics Building, Room 114
Maryam Ehya Jahromi (Dalhousie University), Combinatorial Commutative Algebra, Whitney's Theorem and Subideals of Monomial Ideals, McGill U., Birks Building, Room 111
Ming Ming Zhang (Carleton University, Canada), Computations in Groups and Applications, Time complexity of the word problem in relatively hyperbolic groups, McGill U., Bronfman Building, Room 45
Alberto Mercado (Universidad Técnica Federico Santa María), Control of Partial Differential Equations, Controllability of wave equations in heterogeneous media, McGill U., Burnside Hall, Room 1205
Victor I. Bravo Reyna (Centro de Investigación en Matemáticas), CMS-Studc Student Research Session, Electromagnetism on supersymmetric structures, Rutherford Physics Building, Room 118, McGill
Gisela Tartaglia (Universidad de Buenos Aires), Discrete Groups and Operator Algebras, The Farrell-Jones conjecture for Haagerup groups and $\mathcal{K}$-stable coefficients, McGill U., Bronfman Building, Room 178
Alexander Bihlo (Memorial University, Canada), Equations of Fluid Mechanics: Numerics, A well-balanced meshless tsunami propagation and inundation model, McGill U., McConnell Engineering Building, Room 13
Milton Lopes Filho (Universidade Federal do Rio de Janeiro), Equations of Fluid Mechanics: Analysis, Incompressible, ideal flows around many small obstacles, McGill U., McConnell Engineering Building, Room 11
Louigi Addario-Berry (McGill University), Extremal and Probabilistic Combinatorics, Voronoi cells in random maps, McGill U., Arts Building, W-120
Jose Espinar (IMPA, Brazil), Geometric Analysis, Fully nonlinear version of the Min-Oo Conjecture, McGill U., Burnside Hall, Room 1B23
Jens Harlander (Boise State University), Geometry and Combinatorics of Cell Complexes, On the Homotopy Classification of 2-Complexes, McGill U., Burnside Hall, Room 1B45
Laura Fredrickson (Stanford), Geometry and Physics of Higgs Bundles, Constructing solutions of Hitchin's equations near the ends of the moduli space, McGill U., Burnside Hall, Room 1B24
Matthew Hirn (Michigan State), Harmonic Analysis and Inverse Problems, Learning many body physics via wavelet scattering transforms, McGill U., Trottier Bldg., Rm 2110
Yinhuo Zhang (Hasselt, Belgium), Hopf Algebras and Tensor Categories, Reconstruction of monoidal categories from their invariants, McGill U., Arts Building, Room W-215
Ding Liang (Sichuan University, China), Hamiltonian Systems and Celestial Mechanics, Some notes on planar Newtonian four-body central configurations with adjacent equal masses, McGill U., Burnside Hall, Room 1B36
Slim Ibrahim (University of Victoria), Incompressible Fluid Dynamics, Stability of receding traveling waves in viscous thin films, McGill U., Burnside Hall, Room 920
Tae-Hyuk (Ted) Ahn (Saint Louis University, USA), Models and Methods in Evolutionary Differential Equations on Mixed Scales, McGill U., McConnell Engineering Building, Room 204
Rufus Willett (University of Hawaii, USA), Noncommutative Geometry and Quantization, Cartans and rigidity for uniform Roe algebras, McGill U., Bronfman Building, Room 46
Michele Coti-Zelati (University of Maryland), Nonlinear and Stochastic Partial Differential Equations, Stochastic perturbations of passive scalars and small noise inviscid limits, McGill U., Burnside Hall, Room 1214
Alireza Golsefidy (University of California at San Diego), Number Theory & Analysis, Super-approximation, McGill U., Rutherford Physics Building, Room 115
Hongbin Guo (University of Ottawa, Canada), Recent Advance in Disease Dynamics Analysis, Global stability for a class of epidemiological models with multiple age structures, McGill U., McConnell Engineering Building, Room 304
Vera Serganova (University of California, Berkeley), Representations of Lie Algebras, Representations of direct limits of classical Lie algebras and superalgebras, McGill U., Arts Building, Room 260
Frauke Bleher (University of Iowa, USA), Representation Theory of Algebras, Support varieties and holomorphic differentials, McGill U., Arts Building, Room W-20
Paul Gauthier (Université de Montréal), Several Complex Variables, Approximation by random holomorphic functions, McGill U., Trottier Building, Rm 2120
Reem Yassawi (IRIF, Univ. Paris-7), Symbolic Dynamics, Recognizability for sequences of morphisms, McGill U., Trottier Building (Engineering), Room 1100
Joanna Furno (Indiana University–Purdue University Indianapolis), Shape, Homotopy, and Attractors, Ultrafilter constructions for group topologies, McGill U., Burnside Hall, Room 1B39
Rufus Willett (University of Hawaii), Topological Dynamics and Operator Algebras, Cartans and rigidity for uniform Roe algebras, McGill U., Bronfman Building, Room 151
Brent Nelson (UC Berkeley, USA), Von Neumann Algebras and their Applications, Derivations on non-tracial von Neumann algebras, McGill U., Bronfman Building, Room 179
15:45 - 16:10 Nuh Aydin (Kenyon College), Finite Algebraic Combinatorics and Applications, Recent Methods of Constructing New Linear Codes over $\mathbb{Z}_4$, Centre Mont Royal, Salon International I & II
Julio Toloza (University of Córdoba), Mathematical Physics, Dispersion Estimates for Spherical Schrödinger Operators, McGill, McConnell Engineering Building, Rm 12
15:45 - 16:15 David Kerr (Texas A&M University), Classification of Amenable C*-algebras, McGill U., Bronfman Building, Room 1
15:45 - 16:25 Andreas Malmendier (Utah State University), Motives and Periods, Geometry of (1,2)-polarized Kummer surfaces, McGill U., Birks Building, Room 203
15:45 - 16:30 Anton Gorodetski (University of California, Irvine, United States), Fractal Geometry and Dynamical Systems, Separable potentials and sums of regular Cantor sets, McGill U., Trottier Building (Engineering), Room 100
15:45 - 16:35 Umberto Hryniewicz (UFRJ – Rio, Brazil), Morse, Conley, and Forman Approaches to Smooth and Discrete Dynamics, Index pairs associated to finite-energy foliations, McGill U., Trottier Building (Engineering), Room 70
15:45 - 16:45 Richard Froese (University of British Columbia, Canada), Spectrum and Dynamics, Resonances lost and found, McGill U., Trottier Building (Engineering), Room 60
16:15 - 16:35 Alan Veliz Cuba (University of Dayton, USA), Applied and Computational Algebra and Geometry, On the Perfect Reconstruction of the Structure of Dynamic Networks, McGill U., Birks Building, Room 205
Andrew Harder (University of Miami), Arithmetic Geometry and Related Topics, Calabi-Yau threefolds and modular curves, Rutherford Physics Building, Room 114
Kuei-Nuan Lin (Penn State Greater Allegheny), Combinatorial Commutative Algebra, Generalized Newton Complementary Duals of Monomial Ideals, McGill U., Birks Building, Room 111
Scott Hansen (Iowa State University), Control of Partial Differential Equations, Boundary controllability of Schrödinger and beam equation with an internal point mass, McGill U., Burnside Hall, Room 1205
Sumin Leem (University of Calgary), CMS-Studc Student Research Session, Cryptographic pairings and applications, Rutherford Physics Building, Room 118, McGill
Mitsuru Wilson (Universidad de los Andes), Discrete Groups and Operator Algebras, Pseudo-differential calculus on noncommutative special unitary groups, McGill U., Bronfman Building, Room 178
Karen Zaya (University of Michigan), Equations of Fluid Mechanics: Analysis, On Regularity Properties for Fluid Equations, McGill U., McConnell Engineering Building, Room 11
Mathias Schacht (University of Hamburg), Extremal and Probabilistic Combinatorics, The three edge theorem, McGill U., Arts Building, W-120
Stephan Rosebrock (Pädagogische Hochschule Karlsruhe, Institut für Mathematik), Geometry and Combinatorics of Cell Complexes, Labelled Oriented Trees and the Whitehead Conjecture, McGill U., Burnside Hall, Room 1B45
Michael Groechenig (FU Berlin), Geometry and Physics of Higgs Bundles, p-adic integration for the Hitchin system, McGill U., Burnside Hall, Room 1B24
Giang Tran (University of Texas Austin), Harmonic Analysis and Inverse Problems, Sparse Optimization in Learning Governing Equations for Time Varying Measurements, McGill U., Trottier Bldg., Rm 2110
Gastón Andrés García (Universidad de La Plata, Argentina), Hopf Algebras and Tensor Categories, On the determination of algebraic quantum subgroups, McGill U., Arts Building, Room W-215
Zhiqiang Wang (Chongqing University, Chongqing, China), Hamiltonian Systems and Celestial Mechanics, Central configurations formed by nested regular polygons with unequal masses, McGill U., Burnside Hall, Room 1B36
Andrei Tarfulea (University of Chicago), Incompressible Fluid Dynamics, Improved estimates for thermal fluid equations, McGill U., Burnside Hall, Room 920
Everaldo de Mello Bonotto (University of São Paulo, Brazil), Models and Methods in Evolutionary Differential Equations on Mixed Scales, McGill U., McConnell Engineering Building, Room 204
Alcides Buss (Federal University of Santa Catarina, Florianópolis, Brazil), Noncommutative Geometry and Quantization, Groupoid actions - The symmetries of noncommutative spaces, McGill U., Bronfman Building, Room 46
David Herzog (Iowa State University), Nonlinear and Stochastic Partial Differential Equations, Scaling and saturation in infinite-dimensional control problems with applications to SPDEs, McGill U., Burnside Hall, Room 1214
Corentin Perret-Gentil (CRM, Montreal), Number Theory & Analysis, Quotients of elliptic curves over finite fields, McGill U., Rutherford Physics Building, Room 115
Zhenguo Bai (Memorial University of Newfoundland, Canada), Recent Advance in Disease Dynamics Analysis, A reaction-diffusion malaria model with seasonality and incubation period, McGill U., McConnell Engineering Building, Room 304
Jonathan Kujawa (University of Oklahoma), Representations of Lie Algebras, On Cyclotomic Schur Algebras, McGill U., Arts Building, Room 260
Emine Yildirim (Université du Québec à Montréal, Québec, Canada), Representation Theory of Algebras, Periodic behavior of Auslander-Reiten translation, McGill U., Arts Building, Room W-20
Eduardo Santillan Zeron (Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, México), Several Complex Variables, A broad view of $q$-plurisubharmonicity and $q$-pseudoconvexity, McGill U., Trottier Building, Rm 2120
Kelly Yancey (Institute for Defense Analyses, USA), Symbolic Dynamics, Structure of Rigidity Sequences for Substitution Dynamical Systems, McGill U., Trottier Building (Engineering), Room 1100
James E. Keesling (University of Florida), Shape, Homotopy, and Attractors, Spaces all of whose loops are small, McGill U., Burnside Hall, Room 1B39
Alcides Buss (UFSC - Universidade Federal de Santa Catarina), Topological Dynamics and Operator Algebras, Groupoid actions - The symmetries of noncommutative spaces, McGill U., Bronfman Building, Room 151
Paul Skoufranis (York University, Canada), Von Neumann Algebras and their Applications, Majorization in Von Neumann Algebras, McGill U., Bronfman Building, Room 179
16:15 - 16:40 Jonathan Jedwab (Simon Fraser University), Finite Algebraic Combinatorics and Applications, Costas cubes, Centre Mont Royal, Salon International I & II
Rafael Tiedra (PUC, Chile), Mathematical Physics, Spectral analysis of quantum walks with an anisotropic coin, McGill, McConnell Engineering Building, Rm 12
16:15 - 16:45 Maria Grazia Viola (Lakehead University), Classification of Amenable C*-algebras, Structure of ideals in a spatial $L^p$ AF algebra, McGill U., Bronfman Building, Room 1
16:45 - 17:15 Leonel Robert (University of Louisiana at Lafayette), Classification of Amenable C*-algebras, Dixmier sets of C*-algebras, McGill U., Bronfman Building, Room 1
17:00 - 17:20 Carina Curto (The Pennsylvania State University, USA), Applied and Computational Algebra and Geometry, Emergent dynamics from network connectivity: a minimal model, McGill U., Birks Building, Room 205
Ursula Whitcher (University of Wisconsin - Eau Claire and Mathematical Reviews), Arithmetic Geometry and Related Topics, Zeta functions of alternate mirror Calabi-Yau pencils, Rutherford Physics Building, Room 114
Carlos Valencia-Oleta (Cinvestav-IPN), Combinatorial Commutative Algebra, The combinatory of the arithmetical structures of the path, McGill U., Birks Building, Room 111
Denis Serbin (Stevens Institute of Technology, USA), Computations in Groups and Applications, Detecting conjugacy stability of subgroups, McGill U., Bronfman Building, Room 45
Shuxia Tang (University of Waterloo), Control of Partial Differential Equations, Sensor Design for Distributed Parameter Systems, McGill U., Burnside Hall, Room 1205
Olivier Binette (Université du Québec à Montréal), CMS-Studc Student Research Session, Bayesian learning: semiparametric modelling and asymptotic theory, Rutherford Physics Building, Room 118, McGill
Zhizhang Xie (Texas A&M), Discrete Groups and Operator Algebras, Additivity of higher rho invariants and nonrigidity of topological manifolds, McGill U., Bronfman Building, Room 178
Geoff McGregor, Equations of Fluid Mechanics: Numerics, A Parametric Interpolation Framework for 1D Scalar Conservation Laws using the Equal Area Principle, McGill U., McConnell Engineering Building, Room 13
Theodore Drivas (Johns Hopkins University), Equations of Fluid Mechanics: Analysis, An Onsager Singularity Theorem for Solutions of the Compressible Euler Equations, McGill U., McConnell Engineering Building, Room 11
David Conlon (University of Oxford), Extremal and Probabilistic Combinatorics, Unavoidable patterns in words, McGill U., Arts Building, W-120
Siyuan Lu (McGill University), Geometric Analysis, Isometric embedding and quasi-local type inequality, McGill U., Burnside Hall, Room 1B23
Luis Montejano (Universidad Nacional Autónoma de México), Geometry and Combinatorics of Cell Complexes, Homological Sperner-type Theorems, McGill U., Burnside Hall, Room 1B45
Laurent Baratchart (INRIA), Harmonic Analysis and Inverse Problems, Inverse potential problems in divergence form on surfaces: non-uniqueness, McGill U., Trottier Bldg., Rm 2110
Mitja Mastnak (Saint Mary's University, Canada), Hopf Algebras and Tensor Categories, Bialgebras and coverings, McGill U., Arts Building, Room W-215
Zhifu Xie (The University of Southern Mississippi, USA), Hamiltonian Systems and Celestial Mechanics, Super Central Configurations and the Number of Central Configurations Under Geometric Equivalence, McGill U., Burnside Hall, Room 1B36
Ji Li (Huazhong University of Science and Technology, China), Models and Methods in Evolutionary Differential Equations on Mixed Scales, McGill U., McConnell Engineering Building, Room 204
Carla Farsi (University of Colorado, Boulder, USA), Noncommutative Geometry and Quantization, Semibranching function systems, representations, wavelets, and spectral triples for k-graphs, McGill U., Bronfman Building, Room 46
Anthony Varilly-Alvarado (Rice University), Number Theory & Analysis, On a uniform boundedness conjecture for Brauer groups of K3 surfaces, McGill U., Rutherford Physics Building, Room 115
Apoorva Khare (Stanford University), Representations of Lie Algebras, The Weyl-Kac weight formula, McGill U., Arts Building, Room 260
Hugh Thomas (Université du Québec à Montréal, Québec, Canada), Representation Theory of Algebras, Nilpotent endomorphisms of quiver representations and reverse plane partitions, McGill U., Arts Building, Room W-20
Eugene Poletsky (Syracuse University, USA), Several Complex Variables, Hausdorffization of the space of equivalence classes, McGill U., Trottier Building, Rm 2120
Edgardo Ugalde (Universidad Autónoma de San Luis Potosí, Mexico), Symbolic Dynamics, Projective convergence of random substitutions towards a Gibbs measures, McGill U., Trottier Building (Engineering), Room 1100
Boris Goldfarb (State University of New York at Albany), Shape, Homotopy, and Attractors, Extension and non-extension theorems for coarse properties of metric spaces, McGill U., Burnside Hall, Room 1B39
Carla Farsi (University of Colorado Boulder), Topological Dynamics and Operator Algebras, Semibranching function systems, representations, wavelets, and spectral triples for k-graphs, McGill U., Bronfman Building, Room 151
Cesar Galindo (Universidad de los Andes, Bogotá, Colombia), Von Neumann Algebras and their Applications, Coideal subalgebras of some Kac algebras, McGill U., Bronfman Building, Room 179
17:00 - 17:25 Steve Szabo (Eastern Kentucky University), Finite Algebraic Combinatorics and Applications, Some Minimal Rings, Centre Mont Royal, Salon International I & II
17:00 - 17:40 Andrew Harder (University of Miami), Motives and Periods, Hodge numbers of Landau-Ginzburg models, McGill U., Birks Building, Room 203
17:00 - 17:45 Sergio Augusto Romaña Ibarra (Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil), Fractal Geometry and Dynamical Systems, On the Lagrange and Markov Dynamical Spectrum for Surfaces of Negative Curvature, McGill U., Trottier Building (Engineering), Room 100
17:00 - 17:50 Elliott Lieb (Princeton University), Mathematical Physics, Strichartz Inequality for Orthonormal Functions, McGill, McConnell Engineering Building, Rm 12
17:00 - 18:00 Panel Discussion, Morse, Conley, and Forman Approaches to Smooth and Discrete Dynamics, Panel Discussion, McGill U., Trottier Building (Engineering), Room 70
Jean Lagacé (Universite de Montreal), Spectrum and Dynamics, The Steklov spectrum of cuboids, McGill U., Trottier Building (Engineering), Room 60
17:15 - 17:45 Luis Santiago (Lakehead University), Classification of Amenable C*-algebras, McGill U., Bronfman Building, Room 1
17:30 - 17:50 Adriana Salerno (Bates College), Arithmetic Geometry and Related Topics, Alternate Mirror Families and Hypergeometric Functions, Rutherford Physics Building, Room 114
Sandra Spiroff (The University of Mississippi), Combinatorial Commutative Algebra, Combinatorial aspects of intersection algebras, McGill U., Birks Building, Room 111
Arpita Kar (Queen's University), CMS-Studc Student Research Session, A conjecture of Bateman regarding $r_5(n)$, Rutherford Physics Building, Room 118, McGill
Jean-Christophe Nave (McGill University, Canada), Equations of Fluid Mechanics: Numerics, Solving Incompressible 2D Euler's Equations with Exponential Resolution, McGill U., McConnell Engineering Building, Room 13
Mimi Dai (University of Illinois, Chicago), Equations of Fluid Mechanics: Analysis, Kolmogorov's dissipation number and the number of degrees of freedom for the 3D Navier-Stokes equations, McGill U., McConnell Engineering Building, Room 11
Gabriel Minian (Universidad de Buenos Aires), Geometry and Combinatorics of Cell Complexes, Homotopy colimits of finite posets, McGill U., Burnside Hall, Room 1B45
Costel Gabriel Bontea (University of New Hampshire, USA), Hopf Algebras and Tensor Categories, Classifying pointed braided finite tensor categories, McGill U., Arts Building, Room W-215
Laurent Moonens (Université Paris Sud 11, France), Models and Methods in Evolutionary Differential Equations on Mixed Scales, McGill U., McConnell Engineering Building, Room 204
Elizabeth Gillaspy (University of Montana, Missoula, USA), Noncommutative Geometry and Quantization, Generalized gauge actions, KMS states, and Hausdorff dimension for higher-rank graphs, McGill U., Bronfman Building, Room 46
José Antonio de la Peña (Centro de Investigación en Matematicas, Guanajuato, Mexico), Representation Theory of Algebras, Weakly non-negative quadratic forms (revisited), McGill U., Arts Building, Room W-20
Damir Kinzebulatov (Université Laval, Canada), Several Complex Variables, Kohn decomposition for forms on coverings of complex manifolds constrained along fibres, McGill U., Trottier Building, Rm 2120
Carlos Matheus Silva Santos (CNRS/Institut Galilée, Université Paris 13, France), Symbolic Dynamics, On problem 17 in Bowen's notebook, McGill U., Trottier Building (Engineering), Room 1100
Jerzy Dydak (University of Tennessee), Shape, Homotopy, and Attractors, Extension theorems for large scale spaces via neighborhood operators, McGill U., Burnside Hall, Room 1B39
Elizabeth Gillaspy (University of Münster), Topological Dynamics and Operator Algebras, Generalized gauge actions, KMS states, and Hausdorff dimension for higher-rank graphs, McGill U., Bronfman Building, Room 151
Bradd Hart (McMaster University, Canada), Von Neumann Algebras and their Applications, Theories of II$_1$ factors, McGill U., Bronfman Building, Room 179
17:30 - 17:55 Jay Wood (Western Michigan University), Finite Algebraic Combinatorics and Applications, Groups of isometries of additive codes over $GF(q)$, Centre Mont Royal, Salon International I & II
Not Scheduled
Jennifer Mueller (Colorado State University), Computational Inverse Problems: From Multiscale Modeling to Uncertainty Quantification, The direct D-bar method for dynamic EIT reconstructions of pulmonary function, McGill U., Bronfman Building, Room 45
Nikolay Romanovski (Siberian Branch of the Russian Academy of Sciences), Groups and Algebras, McGill U., Trottier Building (Engineering), Room 60
Sasha Shnirelman (Concordia University), Incompressible Fluid Dynamics, Constructions of weak solutions of the Euler equations, McGill U., Burnside Hall, Room 920
Cornelius Pillen (University of South Alabama), Representations of Lie Algebras, Lifting modules of a finite group of Lie type to its ambient algebraic group, McGill U., Arts Building, Room 260 | CommonCrawl |
Evolution of transcriptional networks in yeast: alternative teams of transcriptional factors for different species
Volume 17 Supplement 10
Proceedings of the 14th Annual Research in Computational Molecular Biology (RECOMB) Comparative Genomics Satellite Workshop: genomics
Adriana Muñoz,
Daniella Santos Muñoz,
Aleksey Zimin &
James A. Yorke
The diversity in eukaryotic life reflects a diversity in regulatory pathways. Nocedal and Johnson argue that the rewiring of gene regulatory networks is a major force for the diversity of life, and that changes in regulation can create new species.
We have created a method (based on our new "ping-pong" algorithm) for detecting more complicated rewirings, where several transcription factors can substitute for one or more transcription factors in the regulation of a family of co-regulated genes. An example is illustrative. Hogues et al. reported a rewiring in which RAP1 in Saccharomyces cerevisiae substitutes for TBF1/CBF1 in Candida albicans for ribosomal RP genes. There, one transcription factor substitutes for another on some collection of genes. Such a substitution is referred to as a "rewiring". We agree with this finding of rewiring as far as it goes, but the situation is more complicated. Many transcription factors can regulate a gene, and our algorithm finds that in this example a "team" (or collection) of three transcription factors including RAP1 substitutes for TBF1 for 19 genes. The switch occurs for a branch of the phylogenetic tree containing 10 species (including Saccharomyces cerevisiae), while in the remaining 13 species (including Candida albicans) the genes are regulated by TBF1.
To gain insight into more general evolutionary mechanisms, we have created a mathematical algorithm that finds such general switching events, and we prove that it converges. Of course any such computational discovery should be validated by biological tests. For each branch of the phylogenetic tree and each gene module, our algorithm finds a sub-group of co-regulated genes and a team of transcription factors that substitutes for another team of transcription factors. In most cases the signal will be small, but in some cases we find a strong signal of switching. We report our findings for 23 Ascomycota fungi species.
One of the several ways that species evolve and diverge from each other is through changes in regulatory networks and more specifically through changes in the regulation of genes by transcription factors. The 23 species with an established phylogeny in Fig. 1 are collectively an excellent environment or model for the study of gene regulation in general. To investigate evolutionary changes, we generally compare regulation in the species in one branch of the phylogenetic tree and compare that with the remaining species. A group of functionally linked and co-regulated genes is called a "regulon". A regulon (and its function) may be preserved across a family of related species despite changes in regulation. In the review [1], Li and Johnson propose three different scenarios for the evolution of transcriptional networks in yeast. Their scenarios are (1) "transcription factor turnover" where the transcription factor is conserved (as well as the transcription factor binding probability), but membership of genes in the regulon can change; (2) "transcription factor rewiring" or "switching" where the regulon members are conserved, but the regulation switches from one transcription factor to another transcription factor; (3) evolution of combinatorial interactions between transcription factors due to direct protein-protein contacts between DNA binding proteins.
Tree phylogeny for 23 species of yeast. We test each of the 12 selected branches (marked as #4, #5, #6, #10, #114, etc.) to partition the species in the tree for rewiring events. Note that partition numbers that are one or two digits indicate that the branch includes all species up to that species number. A whole genome duplication is indicated in branch #10. Each branch partitions the set of species into two sets M (the species on that branch) and M ⋆ (the remaining species). The 23 species are: Saccharomyces (S.) cerevisiae (1), S. paradoxus(2), S. mikatae (3), S. bayanus (4), Candida (C.) glabrata (5), S. castellii (6), Kluyveromyces (K.) waltii (7), S. kluyveri (8), K. lactis (9), Ashbya gossypii (10), Clavispora lusitaniae (11), Debaryomyces hansenii (12), C. guilliermondii (13), C. tropicalis (14), C. albicans (15), C. parapsilosis (16), Lodderomyces elongisporus (17), Yarrowia lipolytica (18), Aspergillus nidulans (19), Neurospora crassa (20), Schizosaccharomyces japonicus (21), Schizosaccharomyces octosporus (22), Schizosaccharomyces pombe (23)
In this paper we are interested in Scenario 2. Hogues et al. [2] report an example of a scenario (2) change in regulation, namely that in Saccharomyces cerevisiae the transcription factor RAP1 regulates ribosomal RP genes, while under the same conditions in Candida albicans the regulation of the same ribosomal RP genes is done by the transcription factor TBF1 (and sometimes also CBF1). Here one transcription factor in certain species is replaced by a different transcription factor in other species, carrying out the regulation of the same collection of genes. In order for a collection of related genes to preserve its function, we should expect a change in transcription factors to be carried out across the whole collection of genes. Additional such cases have been documented for yeast genes involved in mating [3] and in galactose metabolism [4, 5]. See also cases discussed in [6] and references therein.
Scenario (2) can also be discussed in terms of "motifs". A motif is a short segment in the DNA sequence, between 6 and 20 nucleotide pairs (usually fewer than 10), that can be positioned at different locations within the regulatory region of a gene [7]. Tanay et al. [8] focus on identifying motifs that are "enriched", i.e., the motif occurs in multiple species, controlling analogous regulons in those species.
Sarda and Hannenhalli [9] present a method for detecting rewiring, switching one transcription factor to another transcription factor in the same 23 yeast species we investigate.
Nocedal and Johnson [7] analyze more complex cases of transcription factor rewiring in yeast and conclude that future research is needed to understand transcription factor rewiring in regulatory networks that involve multiple transcription factors and larger regulons. They also say that it is important to consider evolution in the study of transcription factor rewiring. For us that means considering how regulation in a branch differs from the regulation in the other species of the tree. Our algorithm automatically finds a collection of genes for which switching occurs.
What our method does While it has been demonstrated that one transcription factor can be replaced by another (e.g., [2]), our algorithm looks for larger scale replacements. We present the first computational method that finds a regulon (denoted G) and two teams of transcription factors (denoted T and T ∗) for which there has been rewiring over evolutionary time for a specified branch M of the phylogenetic tree.
We use 53 evolutionarily conserved co-expression modules detected in [10] based on S. cerevisiae and C. albicans. Additional file 1: Our supplementary material lists the genes in each module (those modules for which there was a full set of orthologs for all the species). Some modules are contained in larger modules. The number of genes in each S. cerevisiae module ranged from 1 to 614 with an average of 54 and a total of 2840 genes for all the S. cerevisiae modules. We study the 23 Ascomycota fungi species with an established phylogenetic tree from [8] shown in Fig. 1. Our yeast species includes Saccharomyces cerevisiae, Candida albicans and Ashbya gossypii. All 23 yeast species names are provided in Additional file 2: Supplementary material.
We used the orthology mapping of corresponding genes across the 23 yeast species from [11]. In some cases there is no gene for a given species, but we chose genes that had representatives (orthologs) in all or almost all of the species. "Orthologs" are genes in related species that have similar nucleotide sequences, suggesting they came from the same ancestral gene by speciation. When a gene has multiple copies in one species, we pick one copy at random, resulting in 2557 genes of S. cerevisiae – plus the orthologous genes across the other 22 Ascomycota species.
This paper is based on our calculation and analysis of transcription factor binding probabilities, the computed probability that a transcription factor binds somewhere in the 600-base region preceding a gene of one of our species (we obtained those regions from [11]). We refer to that region as the "upstream promoter region". The set of 126 yeast transcription-factor DNA-binding motifs (represented as Position Weight Matrices (PWMs)) was obtained from the TRANSFAC database [12, 13]. While there are many factors determining whether a gene is activated or deactivated, it seems likely to be significant if the binding probability of a transcription factor is high for a branch of the phylogenetic tree and lower for the remaining species, or vice versa. We computed a binding probability for each of 126 transcription factors binding to each of 2557 genes in each of 23 Ascomycota species for a total of approximately 126×2557×23 probabilities, i.e., approximately 7 million probabilities (provided in Additional file 3: Supplementary material). Each of the genes that we selected was present in S. cerevisiae. We used the same 23 Ascomycota fungi and phylogeny [8], and our set of 126 transcription factors includes most of the 88 transcription factors that [14] uses, so we can safely use the 126 transcription factor binding motifs associated with S. cerevisiae and apply them to the other yeast species, since [14] has demonstrated that most transcription factors have conserved their DNA motifs over large evolutionary distances.
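The authors' actual estimation procedure is described in Additional file 4, which is not reproduced here. For orientation only, the following is a minimal sketch of one standard PWM-occupancy heuristic; the function name, the (L, 4) PWM layout, the uniform background, and the independent-sites assumption are all ours, not the paper's.

```python
import numpy as np

def promoter_binding_probability(pwm, promoter, background=0.25):
    """Toy estimate of the probability that a motif binds somewhere in a
    promoter (e.g., the 600-base upstream region), treating candidate
    sites as independent. pwm is an (L, 4) array of per-position base
    frequencies in A, C, G, T order."""
    base_index = {"A": 0, "C": 1, "G": 2, "T": 3}
    L = pwm.shape[0]
    p_no_bind = 1.0
    for i in range(len(promoter) - L + 1):
        window = promoter[i:i + L]
        if any(b not in base_index for b in window):
            continue  # skip windows containing ambiguous bases such as N
        # likelihood ratio of the motif model against a uniform background
        lr = np.prod([pwm[j, base_index[b]] / background
                      for j, b in enumerate(window)])
        p_site = lr / (1.0 + lr)  # crude per-site binding probability
        p_no_bind *= 1.0 - p_site
    return 1.0 - p_no_bind
```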
Our skewness method
For each species, gene, and transcription factor, we examine the "binding probability", the probability that the transcription factor binds to the upstream promoter region of the gene.
If a particular branch of the phylogenetic tree has been selected, we say a transcription factor-gene pair is (positively) skewed toward that branch if the binding probabilities are on the average higher for species in that branch than for the species in the complement. Later we will define our function skew, which measures how much it is skewed (see Eq. 3). We say the pair is negatively skewed toward a branch if the reverse is true, that the binding probabilities are lower for the branch than in the complement. We usually average the skewness of a transcription factor over a collection of genes.
Computing skewness We pick a group M of species representing some branch of species in the phylogenetic tree in Fig. 1 (e.g., species 1−10). We use M ⋆ to designate the remaining species, 11−23 in this case. Hence M defines a branch (or partition) of species in the tree.
All calculations use some choice of M but we often omit mention of M and M ⋆ to simplify the notation.
For a collection of genes G we say a transcription factor is skewed towards M if it binds more strongly (averaging over the genes in G) for species in M than for species in M ⋆, and similarly it is skewed towards M ⋆ if the reverse holds. We aim at finding a branch, some related genes G in some module R, and two collections of transcription factors that we denote T and T ⋆ so that, on the average, transcription factors in T are skewed towards M for genes in G, while transcription factors in T ⋆ are skewed towards species in M ⋆.
To make that precise, we define the skewness, a measure of the difference in the average binding probabilities between M and M ⋆. Specifically, for a given branch M (with complement M ⋆) and each transcription factor x and each gene g, we compute the skewness skew(x,g,M) as follows in Eq. 4. We write 〈… 〉 for an average. We note that the average binding probability is computed by averaging over those species that have an ortholog of g; we exclude those species that do not have an orthologous gene from the average. All of the following depend on the choice of M. First we define P_{x,g,s} = the binding (or occupancy) probability for transcription factor x to bind to the promoter of gene g in species s. (See Additional file 4: Supplementary methods, Section: Estimating transcription factor binding probabilities). We will use "⋆" to indicate that M ⋆, the complement of M, is being used in a calculation.
Now we present a formula for the extent to which the binding probability of one transcription factor to one gene is "skewed", that is, stronger on the species in M than in M ⋆,
$$ skew(x,g,M) = \langle P_{x,g,s}\rangle_{s\in M} - \langle P_{x,g,s}\rangle_{s\in M^{\star}}, $$
Here "skew" measures how much x is skewed towards M for g. It is greater than 0 if x is skewed towards M and is less than 0 if x is skewed towards M ⋆.
Figure 2 is a prime example of our findings. It shows what we find when we investigate the module of RP genes focusing on the branch of the phylogeny tree denoted by '10' in Fig. 1 and consisting of the leftmost 10 species in that Figure. The dashed vertical line separates that branch from the rest of the tree. We see that for the 19 genes, transcription factor TBF1 (blue dots) has generally lower binding probabilities in M than in M ⋆ while the three transcription factors (the red team) are higher in M than in M ⋆ for those genes. Hence the dominance relation between the two teams is opposite on the two sides of the tree. Note that the literature discusses this kind of switch for transcription factor TBF1 versus transcription factor RAP1 (a member of the red team), but here we find the switch apparently involves two other transcription factors as well, transcription factor FHL1 and transcription factor SFP1, members of the red team.
Transcription factor rewiring for Module 51, Ribosomal Protein (RP) genes. Here we describe the meaning of this and several following graphs. The species tree is partitioned into two groups: M is the set of species in one branch (labeled "10" in Fig. 1) and M ⋆ consists of the rest of the 23 species. In this and related figures, a dashed vertical line (or two) separates M from M ⋆. For each of the 23 species on the horizontal axis, we plot two dots, each of which is an average of binding probabilities that a transcription factor binds to a gene. Here for example each red dot is the average of 57 (=3 transcription factors in the red team times 19 genes) binding probabilities for the species in question, i.e., averaging over the genes in G and the transcription factors in team T (red dots) or in team T ⋆ (blue dots). The two dots for each species are connected with a solid line using the color of the upper dot. The first row in Table 1 reports on this case. Note that the box in the lower right specifies first the blue team T ⋆ (which here consists of a single transcription factor, TBF1), then the red team T (which here consists of three transcription factors, namely RAP1, SFP1, and FHL1), and finally the number of genes in the block. When there are too many transcription factors to fit in the box, only a few are given, but full data is given in the Additional file 5: Supplementary material for this graph (and all related graphs) including the names of the 19 genes that are discussed here.
Table 1 Finding max blocks
We also define the skewness for a collection T of transcription factors, a collection of genes G, and a branch M by averaging skew(x,g) over all the transcription factors x in T and all the genes g in G, as follows.
$$ skew(T,G) = \langle skew(x,g)\rangle_{x\in T,g\in G}. $$
For each branch M and Module R our goal is to identify a group G of genes in R and two teams or groups of transcription factors T and T ⋆ so that
$$ skew(T,T^{\star},G)= skew(T,G)- skew(T^{\star},G) $$
is large. In the cases we care about, skew(T,G)>0 and skew(T ⋆,G)<0.
Blocks and substitution-maximizing blocks
We define a block denoted (T,T ∗,G) to be two groups or teams T and T ∗ of transcription factors and a group G of genes. We say there is a rewiring for a branch M of the tree when transcription factors in T are positively skewed for species in M for the genes in G while the transcription factors in T ∗ are negatively skewed.
We define a "substitution-maximizing block" or more simply a max block to be a block which has the property that if we substitute any gene for one of the genes in G, or any transcription factor for one of the transcription factors in the teams, then the skewness cannot not increase. But discarding a low scoring gene or transcription factor would raise the score of the block. Indeed the blocks with the highest scores are those that that have exactly one gene and one transcription factor in each of T and T ∗.
Finding max blocks by enumerating subsets is clearly out of the question, since we are dealing with candidate sets that may have dozens of genes and dozens of transcription factors.
We can refer to a block (T,T ∗,G) as an (m,m ∗,m G )-block when m,m ∗, and m G are the numbers of elements in T,T ∗, and G respectively.
For any starting collection G 0 of m G genes, the ping-pong algorithm finds some sets T and T ∗ and eventually a max block (T,T ∗,G) by repeatedly making substitutions in the elements of T,T ∗, and G that increase the score skew(T,T ∗,G); and since only substitutions are made, the numbers of elements in T,T ∗, and G remain m,m ∗, and m G respectively. A gene or transcription factor that is eliminated from one of the sets at one stage may later return after the mix of genes and transcription factors has changed.
A sequence of ever-shrinking max blocks
Next one of the numbers m,m ∗, and m G is decreased by 1: the discussion of "importance" below describes which of these is decreased. This decrementing process continues, yielding a sequence of max blocks whose total m+m ∗+m G decreases in steps of 1. When the process is stopped depends on the needs of the user. As discussed below, here we chose to stop when the importance (a ratio) reaches 0.5.
Our Ping-Pong Algorithm that yields a max block
In the game of ping-pong, the ball goes back and forth between the two sides. Here the block goes back and forth between two steps. The ping-pong algorithm consists of alternating between steps TF and G below repeatedly with skew(T,T ∗,G) increasing at each step until the process stops in the sense that skew reaches an equilibrium, a max block.
A key point is that T and T∗ are generated from G without knowledge of previous versions of T and T∗. Similarly G is generated purely from T and T∗ without reference to any previous versions of G.
The ping-pong algorithm requires three positive integers, m,m ∗,m G and a set G of m G genes in a regulon R. The first time the ping-pong algorithm is applied, m G is the number of genes in the Module R and m+m ∗ is the total number of transcription factors. At least one of these three numbers will decrease during the attrition step described below.
Step TF: choosing transcription factors T and T ∗
Given a set G of genes, we compute the skew(x,G) scores of every transcription factor x and let the new T be the m highest-scoring transcription factors and T ∗ be the m ∗ lowest-scoring transcription factors. Since skew(T,G) is the average of skew(x,G) for x in T, it follows that skew(T,G) is increased (or left unchanged) by this new T. Similarly −skew(T ∗,G) is increased by the new choice of T ∗, and so is skew(T,T ∗,G).
Step G: choosing G
Note that skew(T,T ∗,G) is the average over the m G genes in G of the terms
$$skew(T,g)-skew(T^{*},g). $$
Next we compute that term for each gene g in R and set the new G to be the m G highest-scoring genes in R. This increases (or leaves unchanged) skew(T,T ∗,G).
Lemma: Steps G and TF never decrease the skew score.
To see this, let m be the number of transcription factors in T and m G be the number of genes in G. Notice that skew(T,G) can be written three ways, namely as the average of the m terms skew(x,G), averaging over all x in T, or as the average of the m G terms skew(T,g), averaging over all g in G. Both are equal to the average of the m×m G items skew(x,g).
$$ skew(T,G) = \langle skew(T,g)\rangle_{g\in G} = \langle skew(x,G)\rangle_{x\in T} $$
Hence if any gene g is introduced by Step G, it must have a higher skew score
$$skew(T,T^{\star},g) = skew(T,g) - skew(T^{\star},g) $$
than each gene that it replaces. Similarly each transcription factor changed by Step TF must increase the skew score.
In the above transcription factor step the algorithm is supposed to select the m highest-scoring transcription factors for T, but for some choices of G there are fewer than m that have positive scores, or similarly for T ∗ there can be too few with negative scores. In such cases we terminate the ping-pong run. There are ways around this as long as there are some transcription factors with positive scores and others with negative scores: just decrease m or m ∗ as needed, but our goal was to present the algorithm in its simplest form. It is also possible to encounter sets of genes G for which there are no transcription factors with positive scores or none with negative scores.
Ping-pong stops at a max block
After applying this algorithm repeatedly, there will be no substitution of a single transcription factor or a single gene that would increase skew(T,T ∗,G) so that T,T ∗,G is a max block.
The algorithm alternates back and forth between the two steps repeatedly, letting T and T ∗ determine the set of genes G, and then letting G determine the transcription factor teams T and T ∗. Each step increases the overall score skew(T,T ∗,G) until it stops at a max block: the only changes in the sets are those that increase the overall score. Since there are only a finite number of choices, the procedure must eventually stop at a max block, where the G used in Step TF is the same G produced by Step G.
Ping-Pong pseudocode
Input: (all_Gs,G 0,all_TFs,m,m ⋆,mG): all_Gs is the set of all genes in some module R; G 0 is an initial gene set of mG genes; all_TFs is the set of all transcription factors; m,m ⋆,mG remain constant; and m,m ⋆ are the numbers of transcription factors in team T and T ⋆ and mG is the number of genes
Output: The output is the max block (G, T, T ⋆) and its skewness score
score = −∞
G = G 0
loop:
    Compute Step TF: choose transcription factor teams T and T ⋆ from G
    Compute Step G: choose genes G from T and T ⋆
    new_score = skew(T,T ⋆,G)
    if new_score ≤ score: stop (neither step increased the score, so (T,T ⋆,G) is a max block)
    score = new_score
return G, T, T ⋆, score
Stopping condition: Neither Step G nor Step TF ever decreases the skew score, and there are only finitely many blocks, so the loop must reach an equilibrium
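For readers who prefer running code, here is a hedged Python sketch of one ping-pong run with fixed sizes (m, m ⋆, mG), built on the skew and block_score helpers sketched earlier. It omits the early-termination case discussed above, where fewer than m transcription factors have positive scores (or fewer than m ⋆ have negative ones).

```python
def ping_pong(P, module_genes, all_tfs, G0, m, m_star, M, M_star):
    """Alternate Steps TF and G until the block score stops increasing;
    returns a max block for the fixed sizes (m, m*, len(G0))."""
    mG = len(G0)
    G = list(G0)
    score = float("-inf")
    while True:
        # Step TF: rank every TF by skew(x, G); top m form T,
        # bottom m* form T*.
        ranked = sorted(all_tfs, key=lambda x: np.mean(
            [skew(P, x, g, M, M_star) for g in G]))
        T, T_star = ranked[-m:], ranked[:m_star]
        # Step G: rank every gene in the module by skew(T, g) - skew(T*, g).
        def gene_score(g):
            return (np.mean([skew(P, x, g, M, M_star) for x in T])
                    - np.mean([skew(P, x, g, M, M_star) for x in T_star]))
        G = sorted(module_genes, key=gene_score)[-mG:]
        new_score = block_score(P, T, T_star, G, M, M_star)
        if new_score <= score:  # neither step raised the score: max block
            return T, T_star, G, new_score
        score = new_score
```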
The attrition step
For each x in T we define the "importance" of x to be the ratio of skew(x,G) divided by the highest score of the transcription factors in T; similarly for each y in T ∗, the "importance" of y is the ratio of skew(y,G) divided by the lowest score of the transcription factors; and for each g in G, the "importance" of g is the ratio of skew(T,T ∗,g) divided by the highest score in G. We now compare all of the importance scores and delete the one with the lowest score. In other words, we decrease by 1 one of the m,m ∗,m G . That increases the overall skew score. Now again we play ping-pong with the new reduced numbers, starting the game with our current G, possibly reduced by one gene.
As we proceed decreasing the numbers, we may lose some transcription factor or gene that later becomes more important to a reduced set of genes and transcription factors and so it enters back in. That is why we choose new teams from all transcription factors, not just the ones that were included on the last step, and the same holds for genes, using any genes in the specified regulon. We compute binding probabilities with 8 digit precision to avoid having tie scores, but if there is a tie score and one transcription factor or gene must be chosen, we retain the one(s) that comes first alphabetically.
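A sketch of the attrition bookkeeping, under the same assumed data layout; least_important is a hypothetical helper returning the element with the smallest importance ratio, which the caller removes (decrementing m, m ⋆, or mG by 1) before replaying ping-pong.

```python
def least_important(P, T, T_star, G, M, M_star):
    """Return ('T' | 'T*' | 'G', element) with the lowest importance ratio."""
    t = {x: np.mean([skew(P, x, g, M, M_star) for g in G]) for x in T}
    ts = {y: np.mean([skew(P, y, g, M, M_star) for g in G]) for y in T_star}
    gs = {g: np.mean([skew(P, x, g, M, M_star) for x in T])
             - np.mean([skew(P, x, g, M, M_star) for x in T_star])
          for g in G}
    importance = {}
    for x, s in t.items():    # ratio to the highest score in T
        importance[("T", x)] = s / max(t.values())
    for y, s in ts.items():   # ratio to the lowest (most negative) score in T*
        importance[("T*", y)] = s / min(ts.values())
    for g, s in gs.items():   # ratio to the highest gene score
        importance[("G", g)] = s / max(gs.values())
    return min(importance, key=importance.get)
```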
When should attrition stop?
When we start, it is likely that some skew scores will be near 0, much smaller than other skew scores, so their importance will be near 0. The scientist who wishes to find many involved interacting genes and transcription factors might stop when the importance has risen to 0.25 (meaning that all the importance scores lie between 0.25 and 1.0). The experimentalist might wish to deal with fewer transcription factors and genes and so might stop at 0.75. In this paper and in the Additional file 5: Supplementary material we stopped when the importance reached 0.5.
We have examined the 12 largest branches of the species tree for each of the above mentioned modules using this approach. We indicated the branches with a slash and labeled them with a number as shown on Fig. 1. We determined a "max block" for each module and branch. For some, we found strong indications of rewiring.
Table 1 shows the cases with the largest skewness for the block of the module and branch, sorted by descending skewness. Column 4 shows the difference Dif(M ⋆) between the two teams, T and T ⋆, on M ⋆,
$$ \begin{aligned} \text{Dif}(M^{\star})&=\text{Dif}^{\star}(T,T^{\star},G) \\ &=\langle P_{x,g,s}\rangle_{x\in T,g\in G,s\in M^{\star}} - \langle P_{x,g,s}\rangle_{x\in T^{\star},g\in G,s\in M^{\star}} \end{aligned} $$
while column 5 shows the difference Dif(M) on M,
$$ \begin{aligned} {}\text{Dif}(M)&=\text{Dif}(T,T^{\star},G) \\&= \langle P_{x,g,s}\rangle_{x\in T,g\in G,s\in M} - \langle P_{x,g,s}\rangle_{x\in T^{\star},g\in G,s\in M} \end{aligned} $$
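Both columns are simple averages of binding probabilities over a team, the genes in G, and a species set; a small sketch under the same assumed array layout (dif is our name, not the paper's):

```python
def dif(P, T, T_star, G, S):
    """Mean binding-probability gap between teams T and T* over the
    species in S; with S = M this is Dif(M), with S = M* it is Dif(M*)."""
    team_mean = lambda team: float(np.nanmean(
        [P[x, g, s] for x in team for g in G for s in S]))
    return team_mean(T) - team_mean(T_star)
```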
Transcription factor rewiring for Module-51 genes
Module 51 (see Fig. 2) consists of Ribosomal Protein (RP) genes exclusively. In the Introduction we noted that [2] reported that one transcription factor substitutes for another on some collection of genes in two species, namely Rap1 in Saccharomyces cerevisiae substitutes for TBF1 in Candida albicans for ribosomal RP genes. We find that for a branch of 10 species, RAP1, FHL1, and SFP1 substitute for TBF1. Indeed we find that their skewness scores are similar: skew(Tbf1,Rap1,G)=0.777; skew(Tbf1,Fhl1,G)=0.713; skew(Tbf1,Sfp1,G)=0.711, where our algorithm finds that the regulon G consists of 19 of the 31 RP genes in Module 51. See the Additional file 5: Supplementary material for a list of the 19 genes and other detailed information about the most significant block that was found for each module. Note that FHL1 is mentioned in [6] as "a key player" in the regulation of RP genes in S. cerevisiae; we find it is involved in rewiring, according to our calculations.
Transcription factor rewiring for Module-59 genes
Module 59 consists of conserved, co-expressed genes related to the biological function RNA methylation. Here there are two genes in the module and both are in the rewiring block. In Fig. 3 we see a much more complicated apparent rewiring than in Fig. 2.
Transcription factor rewiring for Module-59 (RNA methylation) genes. Here M is branch 112 from the phylogeny tree, so M={14,⋯,17} and M ⋆={1,⋯,13;18,⋯,23}. In M, red dominates blue, while elsewhere blue mostly dominates red
What is striking is that in 13 of the 19 species in M ⋆, transcription factor MATA1 (red dots) has binding probabilities near 0 (though in two M ⋆ species it is high). In contrast, in the branch M it is higher than the T ⋆ team (blue dots, consisting of 9 transcription factors).
Transcription factor rewiring for the Module-55 gene YMR290C
Module 55 consists of a conserved, co-expressed gene related to the biological function ribosomal subunit assembly.
Here in Fig. 4, if the tree is cut at the bottom, separating the right branch of 19 species from the left-most branch of 4, it is arbitrary which of the two branches is called M; we have called the left branch M. If, however, we had chosen the right branch, the graph and results would be the same. What we see is that in the four species of M, the 12 transcription factors of the T team (red dots) very clearly dominate the 14 transcription factors of the T ⋆ team. In contrast, on the right side the binding probabilities of the two teams are much closer, apparently all active. So the apparent switching behavior here is that T clearly dominates T ⋆ on the left, while on the right all the transcription factors interact at similar levels (remembering that each dot is only an average).
Transcription factor rewiring for Module-55 genes. Here M is branch 4 from the phylogeny tree, so M={1,2,3,4}, and M ⋆={5,⋯,23}. Notice the large difference between red and blue dots in all species in M, while blue mostly dominates in M ⋆
Transcription factor rewiring for Module-40 genes
Module 40 consists of conserved, co-expressed genes related to the biological function actin cortical patch assembly. The phenomenon seen in Fig. 5 is somewhat similar to what is seen in the previous figure for Module 55. The branch of three species has one team turned on and one turned off, or at least at much lower binding probabilities, while for each of the other species the two teams have similar binding probabilities.
Transcription factor rewiring for Module-40 genes. Here M is branch 20 from the phylogeny tree, so M={1,⋯,20}, and M ⋆={21,⋯,23}. Branch 20 is special in that it cuts the tree at the root separating the tree into two branches: M ⋆ is also a branch. Hence the roles of M and M ⋆ can be switched and the red and blue colors could be reversed. Notice then that branch M ⋆ has a wide separation between red and blue, while for the rest of the species of the tree, red and blue are closer together
Here in Fig. 5, if the tree is cut at the top, separating the left branch of 20 species from the right-most branch of 3, it is arbitrary which of the two branches is called M; we have called the left branch M. What we see is that in the three species of M ⋆, the two transcription factors of the T ⋆ team (blue dots) very clearly dominate the 33 transcription factors of the T team for the 3 genes in the block. In contrast, on the left side the binding probabilities of the two teams are much closer, apparently all active. So the apparent switching behavior here is that T ⋆ clearly dominates T on the right, while on the left all the transcription factors interact at similar levels (remembering that each dot is only an average).
Transcription factor rewiring for Module-56 genes
Module 56 (Fig. 6) consists of conserved, co-expressed genes related to the biological function purine ribonucleotide biosynthetic process. This example is most similar to Module 55 above in that there is an extreme difference between red and blue in M but not in M ∗.
Transcription factor rewiring for Module-56 genes. Here M is branch 113 from the phylogeny tree, so M={11,⋯,13}, and M ⋆={1,⋯,10;14,⋯,23}. Notice the large difference between red and blue dots in all species in M, while blue mostly dominates in M ⋆
Our method can address questions such as the following: Can different groups of genes in related species be regulated by the same group or "team" of transcription factors (as in Scenario 1)? Another question: Can a team of transcription factors become dominant for a collection of related genes in a tree branch while a second team is dominant on the other species (as in Scenario 2)? In this paper we focus on Scenario 2.
Our approach differs from that of Sarda and Hannenhalli [9] in that we define our skewness for each transcription factor, while they define a function that compares the skewness of two transcription factors. They require more computation than our approach since they must compute rewiring scores for each pair of transcription factors. We use extensive computation instead to look for more complicated situations in which several transcription factors switch with one or more transcription factors. That is, we find collections or teams of transcription factors that are positively skewed, averaging over the genes in a regulon, and teams of transcription factors that are negatively skewed. We vary the selection of genes in the regulon, the team of positively skewed transcription factors, and the team of negatively skewed transcription factors.
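To make the team construction concrete, here is a minimal sketch (not the authors' code) of how per-transcription-factor skew scores could be averaged over a candidate regulon and used to pick the two teams. The skew definition used below (mean binding probability inside branch M minus the mean outside it), the fixed team size k, and the random toy data are all our assumptions for illustration.

```python
import numpy as np

# bp[t, g, s]: binding probability of transcription factor t to gene g in species s
rng = np.random.default_rng(0)
bp = rng.random((126, 40, 23))            # toy data: 126 TFs, a 40-gene module, 23 species

M = np.arange(0, 4)                       # species in the branch (e.g. branch 4)
M_star = np.arange(4, 23)                 # the remaining species

def skew(t, genes):
    """Assumed skew score: mean binding probability of TF t over the gene set
    inside branch M, minus the corresponding mean over M_star."""
    return bp[t][np.ix_(genes, M)].mean() - bp[t][np.ix_(genes, M_star)].mean()

genes = np.arange(40)                     # candidate regulon (here: the whole module)
scores = np.array([skew(t, genes) for t in range(bp.shape[0])])

k = 12                                    # team size: chosen by the search in practice
T = np.argsort(scores)[-k:]               # team of most positively skewed TFs
T_star = np.argsort(scores)[:k]           # team of most negatively skewed TFs
print("T:", T.tolist(), "\nT*:", T_star.tolist())
```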
One of our colleagues, Chris Dock, tested a module (#2) of 40 genes. He picked G_0 to have 36 randomly selected genes, and repeated this process 100 times. The process always arrived at the same max block (using importance = 0.5). That suggests the process is robust, but does not guarantee a unique result.
A module consists of related genes, and one can imagine simplistically that the module represents a process with just two stages: first one set of genes is activated, and later another set. If there is rewiring, each might have its own max block, the union of which might be the max block that the above process finds. These can be found by using a modified approach: instead of starting with a large set of genes and contracting it as we have described above, one can start with one gene and expand the collection of genes until importance 0.5 is reached. This "expanding" approach would often yield a subset of the "contracting" approach, and the subset would depend on the initial gene. Here we chose to keep our report simple by restricting attention to the contracting max block approach, which gives an overview; the two search directions are sketched below.
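The two search directions can be written as simple greedy loops. This is only a structural illustration: the `importance` score is a stand-in callable (the paper's actual definition is not reproduced here), and the stopping rule at 0.5 follows the description above.

```python
def contract(genes, importance, threshold=0.5):
    """Assumed contracting search: start from a large gene set and greedily
    remove the gene whose removal most improves the score, stopping once
    the importance threshold is reached."""
    block = set(genes)
    while len(block) > 1 and importance(block) < threshold:
        worst = max(block, key=lambda g: importance(block - {g}))
        block.discard(worst)
    return block

def expand(seed, pool, importance, threshold=0.5):
    """Assumed expanding search: start from a single gene and greedily add
    the gene that most improves the score, stopping at the threshold."""
    block, remaining = {seed}, set(pool) - {seed}
    while remaining and importance(block) < threshold:
        best = max(remaining, key=lambda g: importance(block | {g}))
        block.add(best)
        remaining.discard(best)
    return block

# toy usage with a placeholder score that simply grows with block size
toy_importance = lambda b: len(b) / 8
print(sorted(expand(0, range(10), toy_importance)))
```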
Note that while T⋆ consists of the transcription factors for which skew(x,G) is smallest (most negative), these scores are not necessarily all negative, and we have excluded some cases where not all x in T⋆ had a negative skew score. This choice was optional, but it seemed appropriate in view of the concept of rewiring.
Nocedal and Johnson [7] write, "We do not yet understand how a large network, composed of many transcription regulators and hundreds or thousands of target genes, forms in the first place." We believe that considering only cases in which one transcription factor is switched with another will be inadequate to describe the evolution of networks. They also write, "A change even in the regulation of a single gene can have important consequences in modern species. ... However, most biological processes require the coordinated expression of many genes rather than a single gene" to produce a useful phenotype.
Our investigation aims at providing a new approach to thinking about the very complex idea of rewiring, freeing us from the constraint of considering only one transcription factor substituting for one transcription factor (or one gene for one gene).
All the examples in this paper discuss rewiring (Scenario 2) via one team of transcription factors substituting for another team on a collection of genes. However, it is mathematically an equivalent problem, using the same set of binding probabilities, to have one team of genes substitute for another team of genes for a collection of transcription factors (the turnover problem, Scenario 1). In an example of Scenario 1, Habib et al. [14] present a method for tracing the evolutionary history of regulatory interactions of 88 regulatory DNA motifs associated with transcription factors across 23 Ascomycota fungi (the same 23 that we study). They use their method to explain the evolution of transcription factor turnover for a collection of genes. Here the transcription factor changes which genes it regulates while preserving the function of the genes. Gasch et al. [15] also study changes in which regulon members are regulated by certain transcription factors.
We further expect to be able to investigate more complicated problems with very similar ideas in which there is simultaneously a rewiring of transcription factors and a turnover of genes.
No numerical investigation such as ours can produce definitive biological results, but the fact that our first case in Table 1, top row, is similar to a well-known case is promising, since our results find a team of three transcription factors instead of the one in the published results. Table 1 shows the 12 cases with the highest skew scores, and for the top 5 we have included figures. These seem to suggest rewiring of teams of transcription factors. Of course it is desirable to have some of these cases checked experimentally.
It may be significant that eight of the twelve cases in Table 1 involve only two branches, namely branches 4 and 112. While branch 4 includes Saccharomyces cerevisiae and relatives, branch 112 includes Candida albicans and relatives. Those species have been reported in cases of rewiring in the literature.
For each module and each branch of the tree we have computed a max block (except for a small number of cases). The existence of a max block is not by itself evidence of significant rewiring, and in fact there is no apparent test of statistical significance. Our solution to this complication is to examine the blocks that have the greatest skewness. For each module and each branch of the tree we compute the skewness of the resulting max block. Figure 7 is the histogram. The distribution has a long tail consisting of high skew scores. We believe that several cases in this long tail represent actual cases of rewiring.
Histogram of Skewness Scores. The histogram of all skewness scores for the max blocks of all modules and all branches is shown in blue. The horizontal axis reports the two-digit truncation of skew and the height is the number of max blocks that had that score. The moving average in a sliding window of size 7 is shown in red. Note the long tail on the right, which corresponds to the high skew max blocks that we are most interested in
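The histogram and its smoothing are straightforward to reproduce generically. In this sketch the skew scores are random placeholders; only the two-digit truncation and the size-7 sliding window follow the caption.

```python
import numpy as np

scores = np.random.default_rng(1).normal(0.1, 0.2, 2000)  # placeholder skew scores
truncated = np.floor(scores * 100) / 100                  # two-digit truncation of skew
values, counts = np.unique(truncated, return_counts=True)

window = 7                                                # sliding window from the caption
smoothed = np.convolve(counts, np.ones(window) / window, mode="same")

for v, c, s in list(zip(values, counts, smoothed))[:5]:
    print(f"skew {v:+.2f}: {c:3d} max blocks (moving avg {s:.1f})")
```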
We believe that the understanding of the evolution of transcription networks will have to invoke teams of transcription factors and teams of genes in some essential form.
Li H, Johnson AD. Evolution of transcription networks - Lessons from yeast. Curr Biol. 2010; 20(17):R746–R753.
Hogues H, Lavoie H, Sellam A, Mangos M, Roemer T, Purisima E, Nantel A, Whiteway M. Transcription factor substitution during the evolution of fungal ribosome regulation. Mol Cell. 2008; 29(5):552–62.
Tsong AE, Tuch BB, Li H, Johnson AD. Evolution of alternative transcriptional circuits with identical logic. Nature. 2006; 443(7110):415–420.
Lavoie H, Hogues H, Whiteway M. Rearrangements of the transcriptional regulatory networks of metabolic pathways in fungi. Curr Opin Microbiol. 2009; 12(6):655–63.
Martchenko M, Levitin A, Hogues H, Nantel A, Whiteway M. Transcriptional rewiring of fungal galactose-metabolism circuitry. Curr Biol. 2007; 17(12):1007–13.
Weirauch MT, Hughes TR. Conserved expression without conserved regulatory sequence: the more things change, the more they stay the same. Trends Genet. 2010; 26(2):66–74.
Nocedal I, Johnson AD. How Transcription Networks Evolve and Produce Biological Novelty. Cold Spring Harb Symp Quant Biol. 2015; 80:1–10. doi:10.1101/sqb.2015.80.027557.
Tanay A, Regev A, Shamir R. Conservation and evolvability in regulatory networks: The evolution of ribosomal regulation in yeast. PNAS. 2005; 102(20):7203–8.
Sarda S, Hannenhalli S. High-throughput identification of Cis-regulatory rewiring events in yeast. Mol Biol Evol. 2015; 32(12):3047–63.
Field Y, Fondufe-Mittendorf Y, Moore IK, Mieczkowski P, Kaplan N, Lubling Y, Lieb JD, Widom J, Segal E. Gene expression divergence in yeast is coupled to DNA-encoded nucleosome organization. Nat Genet. 2009; 41(4):438–45.
Wapinski I, Pfeffer A, Friedman N, Regev A. Natural history and evolutionary principles of gene duplication in fungi. Nature. 2007; 449(7158):54–61.
Matys V, Kel-Margoulis OV, Fricke E, Liebich I, Land S, et al. TRANSFAC and its module TRANSCompel: transcriptional gene regulation in eukaryotes. Nucleic Acids Res. 2006; 34:D108–110.
MacIsaac KD, Wang T, Gordon DB, Gifford DK, Stormo GD, Fraenkel E. An improved map of conserved regulatory sites for Saccharomyces cerevisiae. BMC Bioinforma. 2006; 7:113.
Habib N, Wapinski I, Margalit H, Regev A, Friedman N. A functional selection model explains evolutionary robustness despite plasticity in regulatory networks. Mol Syst Biol. 2012; 8:619.
Gasch AP, Moses AM, Chiang DY, Fraser HB, Berardini M, Eisen MB. Conservation and evolution of cis-regulatory systems in ascomycete fungi. PLoS Biol. 2004; 2:e398.
The authors would like to thank Chris Dock, S. Hannenhalli, and M. Roberts for suggestions, and we thank the referees for their many useful comments.
The publication charges for this article were supported by National Science Foundation Plant Genome Research Program Grant 1444893.
This article has been published as part of BMC Genomics Vol 17 Suppl 10, 2016: Proceedings of the 14th Annual Research in Computational Molecular Biology (RECOMB) Comparative Genomics Satellite Workshop: genomics. The full contents of the supplement are available online at https://bmcgenomics.biomedcentral.com/articles/supplements/volume-17-supplement-10.
This project was supported in part by National Research Initiative Competitive grants 2009-35205-05209 from the United States Department of Agriculture National Institute of Food and Agriculture. This research was also supported by National Institutes of Health grant R01-HG002945 and National Science Foundation Plant Genome Research Program Grant 1444893.
The overview of the supplementary material and methods is provided in Additional file 6: Supplementary material. The database of the computed binding probabilities of 126 transcription factors to 2557 genes shared by 23 species is included in Additional file 3: Supplementary Material. Additional supporting data is included in additional files.
AM and JY formulated the problem. AM and JY developed this novel mathematical approach. AM designed this computational tool and the MySQL relational database. AM and DSM implemented this computational tool in Matlab. AM selected and extracted the data sets from the public databases, pre-processed them, and loaded them into our MySQL database. AM and AZ loaded the datasets into Matlab. DS prepared figures and supplementary information. AM and JY conducted the interpretation of the results. AM, JY, and AZ contributed equally to the writing of this manuscript. All authors read, revised and approved the final version of the manuscript.
Institute for Physical Science and Technology, University of Maryland, College Park, Maryland, 20742, USA
Adriana Muñoz, Daniella Santos Muñoz, Aleksey Zimin & James A. Yorke
Department of Mathematics, University of Maryland, College Park, Maryland, 20742, USA
Adriana Muñoz, Daniella Santos Muñoz & James A. Yorke
Department of Physics, University of Maryland, College Park, Maryland, 20742, USA
James A. Yorke
Cold Spring Harbor Laboratory, 1 Bungtown Rd., Cold Spring Harbor, 11724, NY, USA
Adriana Muñoz
Faculty of Sciences, University of Ottawa, Ottawa, K1N 6N5, ON, Canada
Daniella Santos Muñoz
Faculty of Engineering, University of Ottawa, Ottawa, K1N 6N5, ON, Canada
Aleksey Zimin
Correspondence to Adriana Muñoz.
Additional file 1
Supplementary material: all genes present in each module. We report all modules and, for each module, we list all genes in the module. Each gene entry includes identifier and name. (PDF 91 kb)
Additional file 2
Supplementary material: yeast species. We report the yeast species names and identifiers. (PDF 16 kb)
Additional file 3
Supplementary material: database of the computed transcription factor binding probabilities. We report the database of the computed binding probabilities of 126 transcription factors to 2557 genes shared by 23 species. (TXT 415 kb)
Additional file 4
Supplementary methods. We describe the method for estimating transcription factor binding probabilities for each PWM (equivalently, transcription factor) on the gene promoter, for each gene in each species. (PDF 135 kb)
Additional file 5
Supplementary material: the transcription factors and genes, and their skewness scores, of the rewiring blocks. For each module, results are reported for its top-scoring branch. Each module lists the highest-scoring block. (PDF 155 kb)
Additional file 6
Supplementary material overview. We give an overview of the supplementary material and methods provided in this paper. (PDF 74 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Muñoz, A., Santos Muñoz, D., Zimin, A. et al. Evolution of transcriptional networks in yeast: alternative teams of transcriptional factors for different species. BMC Genomics 17 (Suppl 10), 826 (2016). https://doi.org/10.1186/s12864-016-3102-7
Transcription factor
Transcriptional networks
Tool to apply the Gaussian elimination method and get the row reduced echelon form, with steps, details, inverse matrix and vector solution.
Gaussian Elimination - dCode
Tag(s) : Matrix, Symbolic Computation
What is the Gaussian Elimination method?
The Gaussian elimination algorithm (also called Gauss-Jordan, or pivot method) makes it possible to find the solutions of a system of linear equations, and to determine the inverse of a matrix.
The algorithm works on the rows of the matrix: rows may be exchanged, multiplied by a nonzero factor, or combined by adding a multiple of one row to another.
At each step, the algorithm aims to introduce zero values into the elements of the matrix outside the diagonal.
How to calculate the solutions of a linear equation system with Gauss?
From a system of linear equations, the first step is to convert the equations into a matrix.
Example: $$ \left\{ \begin{array}{rcrcrcr} x&-&y&+&2z&=&5\\3x&+&2y&+&z&=&10\\2x&-&3y&-&2z&=&-10\\\end{array} \right. $$ can be written in matrix multiplication form: $$ \left( \begin{array}{ccc} 1 & -1 & 2 \\ 3 & 2 & 1 \\ 2 & -3 & -2 \end{array} \right) . \left( \begin{array}{c} x \\ y \\ z \end{array} \right) = \left( \begin{array}{c} 5 \\ 10 \\ -10 \end{array} \right) $$ that corresponds to the (augmented) matrix $$ \left( \begin{array}{ccc|c} 1 & -1 & 2 & 5 \\ 3 & 2 & 1 & 10 \\ 2 & -3 & -2 & -10 \end{array} \right) $$
Then, for each element outside the diagonal, perform an adequate row operation, adding or subtracting multiples of the other rows, so that the element becomes 0.
Example: Subtract 3 times (Row 1) from (Row 2) so that the element in row 2, column 1 becomes 0: $$ \left( \begin{array}{ccc|c} 1 & -1 & 2 & 5 \\ 0 & 5 & -5 & -5 \\ 2 & -3 & -2 & -10 \end{array} \right) $$
Subtract 2 times (Row 1) from (Row 3) so that the element in row 3, column 1 becomes 0: $$ \left( \begin{array}{ccc|c} 1 & -1 & 2 & 5 \\ 0 & 5 & -5 & -5 \\ 0 & -1 & -6 & -20 \end{array} \right) $$
Add 1/5 times (Row 2) to (Row 3) so that the element in row 3, column 2 becomes 0: $$ \left( \begin{array}{ccc|c} 1 & -1 & 2 & 5 \\ 0 & 5 & -5 & -5 \\ 0 & 0 & -7 & -21 \end{array} \right) $$
Add 1/5 times (Row 2) to (Row 1) so that the element in row 1, column 2 becomes 0: $$ \left( \begin{array}{ccc|c} 1 & 0 & 1 & 4 \\ 0 & 5 & -5 & -5 \\ 0 & 0 & -7 & -21 \end{array} \right) $$
Subtract 5/7 times (Row 3) from (Row 2) so that the element in row 2, column 3 becomes 0: $$ \left( \begin{array}{ccc|c} 1 & 0 & 0 & 1 \\ 0 & 5 & 0 & 10 \\ 0 & 0 & -7 & -21 \end{array} \right) $$
Finally, simplify each row by dividing it by the value on its diagonal.
Example: $$ \left( \begin{array}{ccc|c} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 3 \end{array} \right) $$
The result vector is the last column.
Example: $ {1,2,3} $ that corresponds to $ {x,y,z} $ so $ x=1, y=2, z=3 $
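As an illustration, a compact Gauss-Jordan routine written in Python (ours, not dCode's source code) reproduces the steps above on the example system. It assumes a unique solution; a production version would add partial pivoting for numerical stability.

```python
def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row echelon form and
    return the solution vector (assumes a unique solution exists)."""
    n = len(aug)
    for col in range(n):
        # bring a row with a nonzero pivot into position
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # cancel this column in every other row
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col] / aug[col][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # divide each row by its diagonal value
    for r in range(n):
        d = aug[r][r]
        aug[r] = [x / d for x in aug[r]]
    return [row[-1] for row in aug]

system = [[1, -1, 2, 5],
          [3, 2, 1, 10],
          [2, -3, -2, -10]]
print(gauss_jordan(system))  # [1.0, 2.0, 3.0]
```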
Source: https://www.dcode.fr/gaussian-elimination
Capillary Assembly of Silicon Nanowires Using the Removable Topographical Patterns
Hong, Juree;Lee, Seulah;Lee, Sanggeun;Seo, Jungmok;Lee, Taeyoon 509
https://doi.org/10.3740/MRSK.2014.24.10.509
We demonstrate a simple and effective method to accurately position silicon nanowires (Si NWs) at desirable locations using drop-casting of Si NW inks; this process is suitable for applications in nanoelectronics or nanophotonics. Si NWs were assembled into a lithographically patterned sacrificial photoresist (PR) template by means of capillary interactions at the solution interface. In this process, we varied the solvent of the Si NW-containing solution to investigate the different assembly behaviors of Si NWs in different solvents. The assembly of Si NWs was found to depend strongly on the surface energy of the solvent, which leads to different evaporation modes of the Si NW solution. After Si NW assembly, the PR template was cleanly removed by thermal decomposition or chemical dissolution, and the Si NWs were transferred onto the underlying substrate, preserving their positions without any damage. This method enables the precise control necessary to produce highly integrated NW assemblies on all length scales, since the assembly template is easily fabricated with top-down lithography and removed in a simple process after bottom-up drop-casting of NWs.
Effect of Nano Grain Growth on Coefficient of Thermal Expansion in Electroplated Fe-Ni Invar Alloy
Yim, Tai Hong;Choe, Byung Hak;Jeong, Hyo Tae 515
The aim of this paper is to consider the effect of annealing on the coefficient of thermal expansion (CTE) of electroplated Invar Fe-Ni alloy. The CTE of the as-electroplated alloy is lower than those of alloys annealed at $400^{\circ}C$ and $800^{\circ}C$. XRD peaks become sharper as the as-electroplated alloy is annealed, which indicates grain growth. The average grain sizes of the as-electroplated alloy and the alloys annealed at $400^{\circ}C$ and $800^{\circ}C$ are 10 nm, 70 nm, and $2{\mu}m$, respectively, as determined by TEM and EBSD analyses. The CTE variation for the various grain sizes after annealing may come from the magnetostriction effect, which generates strain due to changes in the magnetization state of the alloys. The thermal expansion coefficient is thus considered to be affected by nano grain size in electroplated Fe-Ni Invar alloys. As grain size decreases, ferromagnetic behavior might change to paramagnetic behavior. The lattice vibration damping effect of nano grain boundaries could also lead to the decrease of CTE.
Effects of Cu and B on Effective Grain Size and Low-Temperature Toughness of Thermo-Mechanically Processed High-Strength Bainitic Steels
Lee, Seung-Yong;Hwang, Byoungchul 520
Effects of Cu and B on effective grain size and low-temperature toughness of thermo-mechanically processed high-strength bainitic steels were investigated in this study. The microstructure of the steel specimens was analyzed using optical, scanning, and transmission electron microscopy; their effective grain size was also characterized by electron back-scattered diffraction. To evaluate the strength and low-temperature toughness, tensile and Charpy impact tests were carried out. The specimens were composed of various low-temperature transformation products such as granular bainite (GB), degenerated upper bainite (DUB), lower bainite (LB), and lath marteniste (LM), dependent on the addition of Cu and B. The addition of Cu slightly increased the yield and tensile strength, but substantially deteriorated the low-temperature toughness because of the higher volume fraction of DUB with a large effective grain size. The specimen containing both Cu and B had the highest strength, but showed worse low-temperature toughness of higher ductile-brittle transition temperature (DBTT) and lower absorbed energy because it mostly consisted of LB and LM. In the B-added specimen, on the other hand, it was possible to obtain the best combination of high strength and good low-temperature toughness by decreasing the overall effective grain size via the appropriate formation of different low-temperature transformation products containing GB, DUB, and LB/LM.
Thermal Shock Resistance Property of TaC Added Ti(C,N)-Ni Cermets
Shin, Soon-Gi 526
Thermal shock resistance property has recently been considered to be one of the most important basic properties, in the same way that the transverse-rupture property is important for sintered hard materials such as ceramics, cemented carbides, and cermets. Attempts were made to evaluate the thermal shock resistance property of 10 vol% TaC added Ti(C,N)-Ni cermets using the infrared radiation heating method. The method uses a thin circular disk that is heated by infrared rays in the central area with a constant heat flux. The technique makes it possible to evaluate the thermal shock strength (Tss) and thermal shock fracture toughness (Tsf) directly from the electric powder charge and the time of fracture, despite the fact that Tss and Tsf consist of the thermal properties of the material tested. Tsf can be measured for a specimen with an edge notch, while Tss cannot be measured for specimens without such a notch. It was thought, however, that Tsf might depend on the radius of curvature of the edge notch. Using the Tsf data, Tss was calculated using a consideration of the stress concentration. The thermal shock resistance property of 10 vol% TaC added Ti(C,N)-Ni cermet increased with increases in the content of nitrogen and Ni. As a result, it was considered that Tss could be applied to an evaluation of the thermal shock resistance of cermets.
Highly Sensitive MEMS-Type Micro Sensor for Hydrogen Gas Detection by Modifying the Surface Morphology of Pd Catalytic Metal
Kim, Jung-Sik;Kim, Bum-Joon 532
In this study, highly sensitive hydrogen micro gas sensors of the multi-layer and micro-heater type were designed and fabricated using the micro electro mechanical system (MEMS) process and palladium catalytic metal. The dimensions of the fabricated hydrogen gas sensor were about $5mm{\times}4mm$, and the sensing layer of palladium metal was deposited in the middle of the device. The sensing palladium films were modified into nano-honeycomb and nano-hemisphere structures using an anodic aluminum oxide (AAO) template and nano-sized polystyrene beads, respectively. The sensitivities (Rs), defined as the ratio of the relative resistance, were significantly improved, reaching 0.783% and 1.045% with 2,000 ppm $H_2$ at $70^{\circ}C$ for the nano-honeycomb and nano-hemisphere structured Pd films, respectively; by contrast, the sensitivity of the plain Pd thin film was 0.638%. The improvement in sensitivity of the nano-honeycomb and nano-hemisphere structured Pd films with respect to the plain Pd thin film was thought to be due to the nanoporous surface topographies produced by the AAO template and nano-sized polystyrene beads.
Microstructure and Mechanical Properties on Solid Solution Heat Treatment of Al-6Si-2Cu Alloy for Lightweight Automotive
Hong, Seung-Pyo;Kim, Chung-Seok 538
Microstructural and mechanical characteristics of Al-6Si-2Cu alloy for lightweight automotive parts were investigated. The test specimens were prepared by a gravity casting process. Solution heat treatments were applied to the as-cast alloys to improve mechanical properties. The microstructure of the gravity casting specimen presents a typical dendrite structure, with a secondary dendrite arm spacing (SDAS) of $37{\mu}m$. In addition to the Al matrix, a large amount of coarsened eutectic Si, $Al_2Cu$ intermetallic phase, and Fe-rich phases were identified. Single-step solution heat treatments were found to considerably improve the spheroidization of the eutectic Si phase, and two-step solution treatments gave rise to a much improved spheroidization. The two-step solution heat-treated alloy showed higher tensile strength and microhardness. Consequently, the microstructural and mechanical characteristics of the Al alloy have been successfully characterized and are available, together with other basic data, for the development of lightweight automotive parts.
Optical and Electrical Properties of ZnO Hybrid Structure Grown on Glass Substrate by Metal Organic Chemical Vapor Deposition
Kim, Dae-Sik;Kang, Byung Hoon;Lee, Chang-Min;Byun, Dongjin 543
A zinc oxide (ZnO) hybrid structure was successfully fabricated on a glass substrate by metal organic chemical vapor deposition (MOCVD). In-situ growth of a multi-dimensional ZnO hybrid structure was achieved by adjusting the growth temperature to determine the morphologies of either film or nanorods, without any catalysts such as Au, Cu, Co, or Sn. The ZnO hybrid structure was composed of one-dimensional (1D) nanorods grown continuously on a two-dimensional (2D) ZnO film. The 2D ZnO film was grown at a relatively low temperature, whereas the 1D ZnO nanorods were grown at a higher temperature. The change in the morphologies of these materials led to improvements of the electrical and optical properties. The ZnO hybrid structure was characterized using various analytical tools. Scanning electron microscopy (SEM) was used to confirm the surface morphology of the nanorods, which had grown well on the thin film. The structural characteristics of the polycrystalline ZnO hybrid grown on the amorphous glass substrate were investigated by X-ray diffraction (XRD). Hall-effect measurement and a four-point probe were used to characterize the electrical properties. The hybrid structure was shown to be very effective at improving the electrical and optical properties, decreasing the sheet resistance and the reflectance, and increasing the transmittance via refractive index (RI) engineering. The ZnO hybrid structure grown by MOCVD is very promising for opto-electronic devices such as photoconductive UV detectors, anti-reflection coatings (ARC), and transparent conductive oxides (TCO).
Effects of Tempering Treatment on Microstructure and Mechanical Properties of Cu-Bearing High-Strength Steels
Lee, Sang-In;Hwang, Byoungchul 550
The present study deals with the effects of tempering treatment on the microstructure and mechanical properties of Cu-bearing high-strength steels. Three kinds of steel specimens with different levels of Cu content were fabricated by controlled rolling and accelerated cooling; some of these steel specimens were tempered at temperatures ranging from $350^{\circ}C$ to $650^{\circ}C$ for 30 min. Hardness, tensile, and Charpy impact tests were conducted in order to investigate the relationship between microstructure and mechanical properties. The hardness of the Cu-added specimens is much higher than that of the Cu-free specimen, presumably due to enhanced solid solution hardening and precipitation hardening resulting from the formation of very fine Cu precipitates. Tensile test results indicated that the yield strength increased and then slightly decreased, while the tensile strength gradually decreased, with increasing tempering temperature. On the other hand, the energy absorbed at room and lower temperatures remarkably increased after tempering at $350^{\circ}C$, and beyond this it did not change much. Suitable tempering treatment remarkably improved both the strength and the impact toughness. In the 1.5 Cu steel specimen tempered at $550^{\circ}C$, the yield strength reached 1.2 GPa and the absorbed energy at $-20^{\circ}C$ was above 200 J, which was the best combination of high strength and good toughness.
Effects of Surface Characteristics of TiO2 Nanotublar Composite on Photocatalytic Activity
Lee, Jong-Ho;Youn, Jeong-Il;Kim, Young-Jig;Oh, Han-Jun 556
To obtain a high-performance photocatalyst, N-doped $TiO_2$ nanotubes decorated with Ag nanoparticles were synthesized, and their surface characteristics, electrochemical behaviors, and photocatalytic activity were investigated. The $TiO_2$ nanotubular photocatalyst was fabricated by anodization; the Ag nanoparticles on the $TiO_2$ nanotubes were synthesized by a reduction reaction in $AgNO_3$ solution under UV irradiation. The XPS results of the N-doped $TiO_2$ nanotubes showed that the incorporated nitrogen ions were located in interstitial sites of the $TiO_2$ crystal structure. The N-doped titania nanotubes exhibited a high dye degradation rate, attributable to the increase of visible light absorption due to interstitial nitrogen ions in the crystalline $TiO_2$ structure. Moreover, the precipitated Ag particles on the titania nanotubes led to a decrease in the rate of electron-hole recombination; the photocurrent of this electrode was higher than that of the pure titania electrode. From the electrochemical and dye degradation results, the photocurrent and photocatalytic efficiency were found to be significantly affected by N doping and the deposition of Ag particles.
N-Doped ZnO Nanoparticle-Carbon Nanofiber Composites for Use as Low-Cost Counter Electrode in Dye-Sensitized Solar Cells
An, Ha-Rim;Ahn, Hyo-Jin 565
Nitrogen-doped ZnO nanoparticle-carbon nanofiber composites were prepared using electrospinning. As the relative amounts of N-doped ZnO nanoparticles in the composites were controlled to levels of 3.4, 9.6, and 13.8 wt%, the morphological, structural, and chemical properties of the composites were characterized by means of field-emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), and X-ray photoelectron spectroscopy (XPS). In particular, the carbon nanofiber composites containing 13.8 wt% N-doped ZnO nanoparticles exhibited superior catalytic properties, making them suitable for use as counter electrodes in dye-sensitized solar cells (DSSCs). This result can be attributed to the enhanced surface roughness of the composites, which offers sites for $I_3{^-}$ ion reduction, and to the formation of $Zn_3N_2$ phases that facilitate electron transfer. Therefore, DSSCs fabricated with the 13.8 wt% N-doped ZnO nanoparticle-carbon nanofiber composites showed high current density ($16.3mA/cm^2$), high fill factor (57.8%), and excellent power-conversion efficiency (6.69%), almost identical to that of DSSCs fabricated with a pure Pt counter electrode (6.57%).
A monopolist faces a demand curve given by P = 10 - Q and has a constant marginal (and average) cost of $2. What is the economic profit made by this profit-maximizing monopolist?
a. $0
b. $12
c. $14
d. $16
e. none of the above
Profit is the difference between the revenue earned and the costs incurred by a firm. Positive profit indicates that the firm should continue its operation in the market. Profit can be categorized as normal profit, economic profit, or accounting profit.
The correct answer is d. $16.
The total revenue and marginal revenue functions are:
{eq}\begin{align*} TR &= PQ\\ &= 10Q - {Q^2}\\ MR &= \dfrac{{\partial TR}}{{\partial Q}}\\ &= 10 - 2Q \end{align*} {/eq}
The profit maximizing price and quantity are determined below:
{eq}\begin{align*} MR &= MC\\ 10 - 2Q &= 2\\ Q &= 4\,{\rm{units}}\\ P &= \$ 6 \end{align*} {/eq}
The economic profit generated by the monopolist is:
{eq}\begin{align*} \pi &= TR - TC\\ &= \left( {6 \times 4} \right) - \left( {2 \times 4} \right)\\ &= \$ 16 \end{align*} {/eq}
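The result can also be checked numerically; this short script (our illustration, not part of the original answer) searches a grid of quantities for the profit-maximizing output:

```python
import numpy as np

Q = np.linspace(0, 10, 1001)   # candidate output levels
P = 10 - Q                     # inverse demand
profit = P * Q - 2 * Q         # TR - TC with constant AC = MC = 2

i = np.argmax(profit)
print(f"Q* = {Q[i]:.1f}, P* = {P[i]:.1f}, profit = {profit[i]:.1f}")
# Q* = 4.0, P* = 6.0, profit = 16.0
```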
A monopolist faces a demand curve given by P = 10 - Q and has constant marginal and average cost of 2. What is the economic profit made by this profit-maximizing monopolist? A) 0 B) 12 C) 14 D) 16
A monopolist faces a demand curve given by P = 10 - Q and has constant marginal (and average cost) of 2. What is the output and the price that maximizes profit for this monopolist? (a) Q = 0, P = 10. (b) Q = 2, P = 8. (c) Q = 4, P = 6. (d) Q = 8, P = 2. (
A monopolist faces a demand curve given by P = 10 - Q and has constant marginal (and average cost) of 2. What is the output and the price that maximizes profit for the monopolist? A) Q = 0, P = 10 B) Q = 2, P = 8 C) Q = 4, P = 6 D) Q = 8, P = 2 E) None of
A monopolist faces a demand curve given by P=10-Q and has constant marginal (and average cost) of 2. What is the economic profit made by this profit-maximizing monopolist? a. 0 b. 12 c. 14 d. 16 e. None of the above
A monopolist faces a demand curve given by P=10-Q and has constant marginal (and average cost) of 2. What is the output and the price that maximizes profit for this monopolist? a. Q = 0, P = 10 b. Q = 2, P = 8 c. Q = 4, P = 6 d. Q = 8, P = 2 e. None of th
A monopolist faces a demand curve given by P = 10 - Q and has constant marginal and average cost of 2. What is the economic profit made by this profit-maximizing monopolist if they engage in perfect price discrimination? A) 32 B) 64 C) 100 D) 121 E) None
A monopolist faces a demand curve given by P = 10 - Q and has constant marginal (and average) cost of 2. What is the economic profit made by this profit-maximizing monopolist if they engage in perfect price discrimination? a. 432 b. 64 c. 100 d. 121 e. No
A single price monopolist faces a demand curve given by Q = 200 - 2p and has constant marginal (and average total cost) of 20. What is the economic profit made by this profit-maximizing monopolist? A. 0 B. 800 C. 3200 D. 6400 E. None of the above
A monopolist faces a demand curve given by P=10-Q and has constant marginal (and average cost) of 2. What is the economic profit made by this profit-maximizing monopolist if they engage in perfect price discrimination? a. 32 b. 64 c. 100 d. 121 e. None of
A monopolist faces a demand curve given by Q = 200 - 2p and has constant marginal (and average total cost) of 20. What is the economic profit made by this profit-maximizing monopolist if they engage in perfect price discrimination? A. 0 B. 800 C. 3200 D.
A monopolist faces a demand curve given by Q = 200 - 2p and has constant marginal (and average total cost) of 20. What is the economic profit made by this profit-maximising monopolist if they engage in perfect price discrimination? A) 0 B) 800 C) 3200 D)
A monopolist faces a demand curve: P = 100 - Q for its product. The monopolist has fixed costs of 1000 and a constant marginal cost of 4 on all units. Find the profit maximizing price, quantity, and p
A monopolist faces a market demand curve given by: Q = 70 - P. This monopolist charges a single price for its output. If the monopolist can produce at constant average and marginal costs of AC = MC =
A monopolist faces demand P = 10 - Q. It has costs C(Q) = 2Q. It can perfectly price discriminate. a. What is its marginal revenue curve? Graph the demand curve. b. Derive the profit maximizing outpu
A monopolist faces a demand curve of Q = 400 - 2P and MR = 200 - Q. The monopolist has a constant MC and ATC of $30. a. Find the monopolist's profit-maximizing output and price. b. Calculate the monopolist's profit. c. What is the Lerner Index for this in
A monopolist faces the demand curve P = 11 - Q. The firm's cost function is C = 6Q. a. Draw the demand and marginal revenue curves, and the average and marginal cost curves. What are the monopolist's
Suppose a monopolist faces the demand curve P = 250 - 2Q. The marginal cost of production is constant and equal to $10, and there are no fixed costs. A. What is the monopolist's profit-maximizing level of output? B. What price will the profit-maximizing m
A monopolist faces a demand curve given by P = 10 - Q and has constant marginal (and average cost) of 2. What is the value of the deadweight loss generated by this monopolist? A) 2 B) 4 C) 6 D) 8 E) None of the
Suppose a monopolist has a demand curve that can be expressed as P = 90 - Q. The monopolist's marginal revenue curve can be expressed as MR = 90 - 2Q. The monopolist has constant marginal costs and average total costs of $10. The profit-maximizing monopol
A monopolist can produce its output at a constant average and constant marginal cost of ATC = MC = 5. The monopoly faces a demand curve given by the function Q = 53 - P and a marginal revenue curve that is given by MR = 53 - 2Q. a. Draw the firm's demand
A monopolist faces the demand curve P = 100 - 2Q, where P is the price and Q is the quantity demanded. If the monopolist has a total cost of C = 50 + 20Q, determine its profit-maximizing price and output.
A monopolist faces a market demand curve given by Q = 53 - P. Its cost function is given by C = 5Q + 50, i.e. its MC = $5. a. Calculate the profit-maximizing price and quantity for this monopolist. Also, calculate its optimal profit. b. Suppose a second
A monopolist can produce at a constant average (and marginal) cost of AC= MC= $5. It faces a market demand curve given by Q=53-P. a. Calculate the profit-maximizing price and quantity for this monopolist. Also, calculate its profits. b. Suppose a second
A monopolist with the constant average and marginal cost equals to 8 (AC = MC = 8) and faces a demand of Q = 100 - P. Use the information to determine the following values. a) The profit-maximizing qu
A monopolist faces a demand curve given by P=10-Q and has constant marginal (and average cost) of 2. What is the value of the deadweight loss generated by this monopolist? a. 2 b. 4 c. 6 d. 8 e. None of the above
A monopolist faces a demand curve P = 50 - 5Q where P is the product price and Q is the output. The monopolists cost function is C(Q) = 10Q. What are the monopolist's profit maximizing price, output, and profit? What are the consumer surplus and dead-weig
Suppose a monopolist has constant average marginal cost of (AC = MC = 8) and faces demand such that QD = 100 - P. The firm's profit maximizing revenue will be: a) 736 b) 368 c) 2484 d) 3680
A monopolist faces a demand curve given by P = 40 - Q where P is the price of the good and Q is the quantity demanded. The marginal cost of production is constant and is equal to $2. There are no f
A monopolist faces the demand curve P=11-Q and the cost function is C=6Q a) Draw the demand and marginal revenue curves, and average and marginal cost curves. What are the monoplist's profit maximizi
A monopolist faces market demand given by Q_D = 65 - P and cost of production given by C = 0.5Q^2 + 5Q + 300. A. Calculate the monopolist's profit-maximizing output and price. B. Graph the monopolist's demand, marginal revenue, and marginal cost curves. S
Suppose a monopolist's marginal costs are constant at $20 per unit, and it faces a demand curve of Q = 300 - p. a. If it cannot price discriminate, what are the profit-maximizing price and quantity?
A monopolist faces a demand curve given by: P = 200 - 10Q, where P is the price of the good and Q is the quantity demanded. The marginal cost of production is constant and is equal to $60. There ar
Suppose a monopolist faces the demand curve P = 162 - 2Q. The monopolist's marginal costs are a constant $27 and they have fixed costs equal to $55. Given this information, what will the profit-maximizing price be for this monopolist?
Tinysoft is a monopolist that faces demand curve D(p) = 7p^-3, has constant marginal costs MC = 30 and no fixed cost. What price should the monopolist charge? (a) p = 45 (b) p = 22.5 (c) p = 0.77
A monopolist faces a demand curve given by P = 210 - 5Q where P is the price of the good and Q is quantity demanded. The marginal cost of production is constant and is equal to $60. There are no fixed
A monopolist is selling in a market with the following demand curve: P = 300 - 0.5 Q The marginal revenue is MR = 300 - Q. The firm has constant marginal costs of $50 per unit. What is the monopolist's profit-maximizing quantity and profit-maximizing pric
A monopoly firm faces an (inverse) demand curve of P = 196 - 28Q^0.5 + Q and has a constant marginal (and average) cost curve of 49. If the firm can perfectly price discriminate, what are its profits;
A monopolist faces a demand curve given by P=70-2Q, where P is the price of the good and Q is the quantity demanded. The marginal cost of production is constant and is equal to $6. There are no fixed
Suppose a monopolist faces the demand curve P = 164 - 1Q. The monopolist's marginal costs are a constant $22 and they have fixed costs equal to $132. Given this information, what will the profit-maximizing price be for this monopolist? Round answer to two
Below is the demand curve faced by a monopolist in the short run, along with marginal cost marginal revenue average total cost and average variable cost Calculate the monopolist's economic profit or l
Suppose a monopolist faces the following demand curve: P = 200 - 6Q The marginal cost of production is constant and equal to $20, and there are no fixed costs. (a) How much profit will the monopolist make if she maximizes her profit? (b) What would be t
Suppose a monopolist faces the following demand curve: P = 314 - 7Q. The long-run marginal cost of production is constant and equal to $20. a. What is the monopolist's profit-maximizing level of output? b. What price will the profit-maximizing monopolist
If a profit maximizing monopolist faces a linear demand curve and has zero marginal cost, it will produce at : A elasticity of demand equals 1. B the lowest point of marginal profit curve. C All of
A monopolist faces the following demand curve and the total cost curve are given as follows: __P = 100 - 0.5Q TC = 5Q__ a) What is the profit-maximizing level of output? b) What is the profit-maximizing price? c) How much profit does the monopolist earn?
Suppose a monopolist faces the following demand curve: P = 100 - 3Q. Marginal cost of production is constant and equal to $10, and there are no fixed costs. What price will the profit maximizing monopolist charge? a. $100 b. $55 c. $45 d. $15 e. $10 f. No
A monopolist has costs C(Q)=5Q. It has one consumer whose inverse demand is P=35-Q. a. Derive the monopolist's marginal cost and average cost. Graph the demand and marginal cost curve. b. Derive the p
Assume a monopolist faces a market demand curve of 50 = Q - frac{1}{2}P and has a short run total cost function C = 640 + 20Q. A. What is the profit-maximizing level of output? What are the profits? Graph the marginal revenue, marginal cost, and demand cu
A monopolist faces a demand curve given by P = 40 - Q, where P is the price of the good and Q is the quantity demanded. The marginal cost of production is constant and is equal to $2. There are no fixed costs of production. To answer the following questio
Consider a monopolist that produces a good at a constant marginal and average cost of 5$. The market demand is given by p = 53 - Q A) calculate the monopolists' profit at the profit-maximizing equilibrium
Suppose a monopolist faces the following demand curve: P = 100 - 3Q. Marginal cost of production is constant and equal to $10, and there are no fixed costs. What is the monopolist's profit maximizing level of output? a. 10 b. 15 c. 16 d. 30 e. 33 f. None
Suppose a monopolist has a demand curve that can be expressed as P = 60 - Q. The monopolist's marginal revenue curve can be expressed as MR = 60 - 2Q. The monopolist has constant marginal costs of $20. The profit-maximizing monopolist will have a deadweig
Suppose a monopolist faces the following demand curve: P = 440 - 7Q. The long-run marginal cost of production is constant and equal to $20, and there are no fixed costs. a) What is the monopolist's profit-maximizing level of output? b) What price will
A monopolist faces a demand curve MB = 300 - 10Q and has declining marginal costs equal to MC = 150 = 5Q. A. Compute the optimal quantity Q* and the optimal price P*. B. Write down the equation for the monopolists' marginal revenue curve MR. C. How much w
Suppose the demand curve for a monopolist is P = 200 - QD. The monopolist has a constant marginal and average total cost of $50 per unit. a. Find the monopolist's profit-maximizing output and price.
A monopolist faces a demand curve given by P = 40 - Q, where P is the price of the good and Q is the quantity demanded. The marginal cost of production is constant and is equal to $2. There are no fixed costs of production. How much output should the mono
MICROECONOMICS A monopolist faces a demand curve P = -20Q + 10 and MR = -4Q + 10. Total Cost = 2d (no fixed cost) and MC = 2. a) What is the monopolist's profit-maximizing production quantity (Q*)?
A monopolist faces the inverse demand curve P = 60 - Q and its marginal costs are 2Q. What is the monopolist's Lerner index at its profit-maximizing quantity? a. 1 b. 3/7 c. 1/2 d. 1/3
The monopolist faces a demand curve given by D(p) = 100 - 2p. Its cost function is c(y) = 2y. a. What is its optimal level of output and price? b. If the demand curve facing the monopolist has a constant elasticity of 2, then what will be the monopolist's
Suppose a monopolist faces the following demand curve: P = 100 - 3Q. Marginal cost of production is constant and equal to $10, and there are no fixed costs. How much profit will the monopolist make if she maximizes her profit? a. $300 b. $327.50 c. $825 d
A monopolist can produce at constant average and marginal costs of AC=MC=9. The firm faces a demand curve given by : q=75-p a. Calculate the profit maximizing price-quantity combination for the mono
The demand for a monopolist's output is q = 6,000/((p+7)^2) , where p is its price. It has constant marginal costs equal to $5 per unit. What price will it charge to maximize its profits? The answer i
A monopolist has cost c(q) = q^2 and the inverse demand for its product is P = 15 - Q. What is its marginal revenue curve? Derive its marginal cost. Derive the firm's profit maximizing output, total
Suppose a monopolist faces the following demand curve: P=200-6Q. Marginal cost of production is constant and equal to $20, and there are no fixed costs. A) What is the monopolist's profit-maximizing l
Suppose a monopolist faces the following demand curve: P = 596-6Q. Marginal cost of production is constant and equal to $20, and there are no fixed costs. a) What is the monopolists profit maximizing level of output? b) What price will the profit maximi
A monopolist has a cost function C(Q) = 100 + 10Q + 2Q^2 and the inverse demand curve it faces is P = 90 - 2Q. This monopoly will maximize profit when it produces _____ units of output, by charging price _____ per unit. The maximum profit earned by this m
A monopolist faces demand given by: P = 100 - 0.4Qd, and has marginal costs given by: MC = 10 + 0.2Q a. Draw demand, marginal revenue and marginal cost curves. Calculate and show how much this firm
The demand curve that a monopolist faces is given by P = 75 - 0.5 Q, so their marginal revenue is MR = 75 - Q, and the marginal cost function is given by MC = 2 Q. Assume also that ATC at the profit-maximizing level of production is equal to $12.50. The d
Suppose a monopolist faces the following demand curve: P = 200 - 6Q The marginal cost of production is constant and equal to $20, and there are no fixed costs. (a) What is the monopolist's profit-maximizing level of output? (b) What price will the profit-
A monopolist has demand and cost curves given by: Q_D = 1000 - 2P \\ TC = 5,000 + 50Q a. The monopolist's profit-maximizing quantity is [{Blank}] units and the price is $[{Blank}], b. The monopolist's profit is $ [{Blank}].
A monopolist faces inverse demand P = 300 - 2Q. It has total cost TC = 60Q + 2Q2 and marginal cost MC = 60 + 4Q. What is the maximum profit the monopolist can earn in this market?
A monopolist faces demand given by: P = 100 - 0.4QD, and has marginal costs given by: MC = 10 + 0.2Q. (A) Draw the demand, marginal revenue and marginal cost curves. Calculate and show how much this firm will sell and what they will charge. (B) Calculate
Suppose a monopolist faces the demand curve P = 157 - 2Q. The monopolist's marginal costs are a constant $28 and they have fixed costs equal to $145. Given this information, what are the maximum profits this firm can earn?
A monopolist faces inverse market demand of P = 140 -Q/2, and has a total cost given by TC(Q)= 2Q^2 + 10Q + 200. a. Find this monopolist's profit maximizing output level, b. Find this monopolist's profit maximizing price, c. How much profit is this mon
A monopolist faces a demand curve given by P = 220 - 3Q, where P is the price of the good and Q is the quantity demanded. The marginal cost of production is constant and is equal to $40. There are no fixed costs of production. a. What quantity should the
A monopolist faces a demand curve given by P = 70 - 2Q where P is the price of the good and Q is the quantity demanded. The marginal cost of production is constant and is equal to $6. There are no fixed costs of production. A. What quantity should the mo
A monopolist can product at marginal cost of MC = 2y. (Constant ATC = MC implies that there are no fixed costs.) The firm faces a market demand curve given by: y^D = 36 - P. (a) Find the monopolist's
A monopolist faces a demand curve of the form x=10/p, and has a constant marginal cost of 1. What is the profit maximizing level of output?
Suppose that a monopolist faces the demand curve P) 2 Q, and has total cost curve TC(Q) = Q^2. (a) If the firm is unable to price discriminate, find the firm's profit maximizing price and quantity.
Suppose a profit-maximizing monopolist can engage in perfect price discrimination and faces a demand curve for its products given Q = 20 - 5P. This monopolist has a cost function of TC = 24 + 4Q. How much will monopolists profits be?
Suppose the demand curve for a monopolist is QD= 47,000 - 50 P, and the marginal revenue function is MR=940 - 0.04Q. The monopolist's Marginal Cost = 40+ 0.02Q and its Total Cost = 250,000+ 40Q+ 0.01Q^2. a. Find the monopolist's profit-maximizing output
Suppose a monopolist faces the following demand curve: P = 180 - 4Q. The marginal cost of production is constant and equal to $20, and there are no fixed costs. What price will the profit-maximizing monopolist charge? A. P = $100 B. P = $20 C. P = $60 D.
Suppose a monopolist faces the demand function 12\times 0.1 - Q. The corresponding marginal revenue function is 12 \times 0.2 - Q. Further, suppose that marginal cost is constant at $5. a) What will be the profit maximizing quantity and the profit maximiz
The inverse demand curve that a monopoly faces is p = 78 - 4Q. The firm's cost curve is C(Q) = 228 + 2Q(squared) + 6Q, so that MC = 4Q + 6. a. Derive the marginal revenue for the monopolist. b. What i
The monopolist faces a demand curve given by D(p)=100-2p. Its cost function is c(y)=2y. What is its optimal level of output and price? If the demand curve facing the monopolist has a constant elastici
Suppose a monopolist faces the following demand curve: P = 180 - 4Q. The marginal cost of production is constant and equal to $20, and there are no fixed costs. What is the monopolist's profit-maximizing level of output? A. Q = 45 B. Q = 40 C. Q = 30 D. Q
Suppose a monopolist faces consumer demand given by P = 400 - 2Q with a constant marginal cost of $80 per unit (where marginal cost equals average total cost. Assume the firm has no fixed costs). A. If the monopoly can only charge a single price, what wil
A monopolist faces the following demand curve P = 222 - 2Q. The monopolist's cost is given by C = 2Q. Calculate the profit-maximizing quantity and the corresponding price. What is the resulting profit/loss? Calculate the monopolist's markup.
A monopolist faces an inverse demand P = 300 - 2Q and has total cost TC = 60Q + 2Q2 and marginal cost MC = 60 + 4Q. What is the maximum profit the monopolist can earn in this market? A) 60 B) 240
Profit Maximization Assume a monopolist's marginal cost and marginal revenue curves intersect and the demand curve passes above its average total cost curve. The firm will: a. make an economic profit. b. stay in operation in the short run but shut down i
Suppose that a monopolist faces a demand curve given by P = 100 - 2Q and cost function given by C = 500 + 10Q + 0.5Q^2. 9) What is the monopoly's profit-maximizing output level? A) 15 B) 18 C) 20 D) 3
Suppose a monopolist faces the following demand curve: P = 180 - 4Q. The marginal cost of production is constant and equal to $20, and there are no fixed costs. How much profit will the monopolist make if she maximizes her profit? A. Profit = $1,600 B. Pr
A monopolist faces a demand curve given by P = 220 - 3Q, where P is the price of the good, and Q is the quantity demanded. The marginal cost of production is constant and is equal to $40. There are no fixed costs of production.
A monopolist faces the following demand curve: Price Quantity demanded $10 5 $9 10 $8 16 $7 23 $6 31 $5 49 $4 52 $3 60 The monopolist has total fixed costs of $40 and a constant marginal cost of $5. At the profit-maximizing level of output, the monopolist
A single price monopolist faces a demand curve given by Q = 200 - 2p and has constant marginal (and average total cost) of 20. What is the value of the deadweight loss generated by this monopolist? A. 0 B. 800 C. 3200 D. 6400 E. None of the above
A monopolist faces a demand curve given by P = 40 - Q where P is the price of the good and Q is the quantity demanded. The marginal cost of production is constant and is equal to $2. There are no fixed costs of production. Hint: Drawing a graph could b
A monopolist faces a demand curve given by P = 40 - Q where P is the price of the good and Q is the quantity demanded. The marginal cost of production is constant and is equal to $2. There are no fixed costs of production. Hint: Drawing a graph could be
A monopolist faces a demand curve given by P = 40 - Q where P is the price of the good and Q is the quantity demanded. The marginal cost of production is constant and is equal to $2. There are no fixed costs of production. Hint: Drawing a graph could
Tornike's Portfolio
Structural Scaffolds for Citation Intent Classification in Scientific Publications
This post is a paper summary highlighting the main ideas of the paper "Structural Scaffolds for Citation Intent Classification in Scientific Publications" by Cohan et al. (2019). arXiv Github
Machine reading and automated analysis of scientific literature have increasingly become important due to information overload. Citations are typically used to measure the impact of scientific publications (Li and Ho, 2008)1. Citation Intent Classification is the task of identifying why an author cited another paper. The automatic identification of citation intent could also help users in doing research. FIGURE 1 shows an example of two citation intents. Some citations indicate direct use of a method, while others may acknowledge prior work or compare methods or results. Existing models are based on hand-engineered features, which may not sufficiently model signals in the text (e.g. linguistic patterns or cue phrases). Recent advances in Natural Language Processing (NLP) have introduced large, contextual representations that are obtained from textual data without the need for manual feature engineering. Cohan et al. (2019)2 introduce a novel framework to include structural knowledge into citations as well as a new dataset of citation intents: SciCite.
Citation intent example. Source: Cohan et al. (2019)
SciCite is five times larger, contains fewer but more general categories, and covers scientific literature from more general domains than existing datasets such as ACL-ARC (Jurgens et al., 2018)3. FIGURE 2 compares the datasets. The papers for SciCite were sampled from the Semantic Scholar corpus, and citations were extracted using science-parse. The authors chose more general categories because some fine-grained categories are too rare to provide enough training instances. The ACL-ARC dataset, which consists of computational linguistics papers, was annotated by domain experts in NLP. The training set for SciCite was crowdsourced using the Figure Eight platform, while the test set was annotated by an expert annotator.
SciCite vs ACL-ARC. Source: Cohan et al. (2019)
The proposed neural multitask framework consists of a main task (citation intent) and two structural scaffolds, or auxiliary tasks: section title and citation worthiness. The input $\mathbf{x}$ is the sequence of tokens in the citation context, which are encoded by concatenating non-contextual GloVe word representations (Pennington et al., 2014)4 with contextualized ELMo embeddings (Peters et al., 2018)5 (Eq. 1).
$$\mathbf{x}_i = [\mathbf{x}_i^{\text{GloVe}}; \mathbf{x}_i^{\text{ELMo}}] \tag{1}$$
The encoded tokens then get fed into a bidirectional long short-term memory (Hochreiter and Schmidhuber, 1997)6 network with hidden size $d_2$, which results in the contextual representation of each token w.r.t. the entire sequence (Eq. 2).
$$\mathbf{h}_{i}=[\overrightarrow{\operatorname{LSTM}}(\mathbf{x}, i) ; \overleftarrow{\operatorname{LSTM}}(\mathbf{x}, i)] \tag{2}$$
Finally, an attention mechanism is added, which produces a vector representation of the input sequence (Eq. 3). $\mathbf{w}$ is a parameter serving as the query vector for dot-product attention.
$$\mathbf{z}=\sum_{i=1}^{n} \alpha_{i} \mathbf{h}_{i}, \quad \alpha_{i}=\operatorname{softmax}\left(\mathbf{w}^{\top} \mathbf{h}_{i}\right) \tag{3}$$
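To make Eqs. 1–3 concrete, here is a minimal PyTorch sketch of the encoder. This is my own illustrative re-implementation, not the authors' code; the class name and all dimensions are assumptions.

import torch
import torch.nn as nn

class AttentiveBiLSTMEncoder(nn.Module):
    # Hypothetical re-implementation of Eqs. 1-3; all sizes are assumptions.
    def __init__(self, glove_dim=300, elmo_dim=1024, hidden_dim=50):
        super().__init__()
        self.lstm = nn.LSTM(glove_dim + elmo_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.w = nn.Parameter(torch.randn(2 * hidden_dim))  # query vector of Eq. 3

    def forward(self, glove, elmo):
        x = torch.cat([glove, elmo], dim=-1)        # Eq. 1: concatenated embeddings
        h, _ = self.lstm(x)                         # Eq. 2: (batch, seq, 2*hidden_dim)
        alpha = torch.softmax(h @ self.w, dim=1)    # Eq. 3: attention weights
        return (alpha.unsqueeze(-1) * h).sum(dim=1) # sentence vector z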
Structural Scaffolds
The citation worthiness task is to predict whether a sentence needs a citation. The hypothesis is that language in sentences with citations is different from regular sentences in scientific work. Sentences with citations are positive samples and sentences without citation markers are negative samples. This task could also be used in a different setting (e.g. paper draft aid).
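To illustrate how such samples might be constructed, here is a small Python sketch; the citation-marker regex is my own rough assumption, not the authors' actual extraction pipeline.

import re

# Rough pattern for parenthetical and numeric citation markers,
# e.g. "(Cohan et al., 2019)" or "[12]"; real pipelines are more careful.
CITATION = re.compile(r'\([^()]*\b(?:19|20)\d{2}[^()]*\)|\[\d+(?:,\s*\d+)*\]')

def worthiness_example(sentence):
    # Positive if the sentence carries a citation marker; the marker is then
    # stripped so that a classifier cannot simply spot it.
    label = int(bool(CITATION.search(sentence)))
    return CITATION.sub('', sentence).strip(), label

print(worthiness_example("We build on BiLSTMs (Hochreiter and Schmidhuber, 1997)."))
# -> ('We build on BiLSTMs .', 1)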
The section title task is to predict the title of the section in which the citation appears. The hypothesis here is that citation intent is relevant to its section. Contrary to the other tasks, the authors use a large number of scientific papers to generate the training data for this task.
In the multitask framework, a Multi-Layer Perceptron (MLP) followed by a softmax layer is used for each task (Eq. 4). The class with the highest probability is then chosen.
$$\mathbf{y}^{(i)}=\operatorname{softmax}\left(\mathrm{MLP}^{(i)}(\mathbf{z})\right) \tag{4}$$
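A compact sketch of the per-task heads of Eq. 4 might look as follows; the class counts and the idea of down-weighting the scaffold losses are my assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultitaskHeads(nn.Module):
    # One MLP per task over the shared vector z (Eq. 4); sizes are illustrative.
    def __init__(self, d=100, n_intent=6, n_worthiness=2, n_section=5):
        super().__init__()
        self.heads = nn.ModuleDict({
            'intent': nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, n_intent)),
            'worthiness': nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, n_worthiness)),
            'section': nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, n_section)),
        })
        # Hypothetical loss weights: the main task dominates the scaffolds.
        self.weights = {'intent': 1.0, 'worthiness': 0.1, 'section': 0.1}

    def loss(self, z, task, gold):
        # Cross-entropy over the softmax output of the task's head.
        return self.weights[task] * F.cross_entropy(self.heads[task](z), gold)

During training, batches from the main task and the scaffold tasks would then alternate, each updating the shared encoder through its own head.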
FIGURE 3 shows an overview of the proposed model.
Model overview. Source: Cohan et al. (2019)
For the citation worthiness task, citation markers were removed so that the model cannot "cheat" by simply recognizing citations in sentences. For the section title task, citations and their contexts were sampled from the corpus. Section titles were normalized with regular expressions to the following general categories: introduction, related work, method, and experiments. Titles that did not map to these were removed. The table below shows the total number of instances for each of the datasets and tasks.
Task                  ACL-ARC   SciCite
Citation worthiness   50k       73k
Section title         47k       90k
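The regex normalization of section titles described above might look roughly like the following sketch; the exact patterns are not given in the post, so these are guesses.

import re

# Hypothetical title patterns; the authors' actual regular expressions differ.
SECTION_PATTERNS = [
    (re.compile(r'introduction', re.I), 'introduction'),
    (re.compile(r'related work|background|previous work', re.I), 'related work'),
    (re.compile(r'method|model|approach', re.I), 'method'),
    (re.compile(r'experiment|evaluation|result', re.I), 'experiments'),
]

def normalize_section_title(title):
    for pattern, category in SECTION_PATTERNS:
        if pattern.search(title):
            return category
    return None  # unmapped titles are removed from the training data

assert normalize_section_title('5. Experimental Results') == 'experiments'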
The proposed model, trained with standard hyperparameters, is compared to a strong baseline and a state-of-the-art model. The first baseline is a BiLSTM with an attention mechanism (with and without ELMo embeddings) that only optimizes the citation intent classification task; it is meant to show whether the structural scaffolds and contextual embeddings in fact improve performance. The second baseline is the model of Jurgens et al. (2018)3, which had the best reported results on ACL-ARC; it incorporates a diverse set of hand-engineered features (e.g. pattern-based, topic-based, prototypical argument) and trains a Random Forest classifier.
The results on both the ACL-ARC (FIGURE 4) and SciCite (FIGURE 5) datasets indicate that the inclusion of structural scaffolds improves performance over all of the baselines. The performance differences between the datasets are partly due to their different sizes. Each auxiliary task contributes slightly over the baseline, while the combination of both tasks shows a large improvement on ACL-ARC and a marginal improvement on SciCite. The addition of contextual embeddings further increases performance by about 5% macro F1 on both datasets (including on the baselines).
Results on ACL-ARC. Source: Cohan et al. (2019)
Results on SciCite. Source: Cohan et al. (2019)
FIGURE 6 shows an example sentence from ACL-ARC for which the correct label is Future Work. The best proposed model predicts this correctly, attending over more of the context, while the baseline predicts Compare. For the baseline, the attention is concentrated on "compare", ignoring the context of its use.
Example from ACL-ARC. Source: Cohan et al. (2019)
When looking at each of the categories independently, categories with more instances show higher F1 scores on both datasets (FIGURES 7 and 8). Recall seems to suffer from a limited number of training instances.
Per category classification results on ACL-ARC. Source: Cohan et al. (2019)
Because the categories in SciCite are more general, there are more training instances for each. The recall on this dataset is accordingly higher.
Per category classification results on SciCite. Source: Cohan et al. (2019)
Cohan et al. (2019)2 show that structural properties of scientific literature can be useful for citation intent classification. The authors argue that relevant auxiliary tasks can help improve performance in multitask learning. The main contributions of this work are the following:
A new scaffold framework for citation intent classification.
A new state-of-the-art of 67.9% F1 on ACL-ARC (an absolute increase of 13.3 points).
A new dataset, SciCite, of citation intents, which is 5x the size of current datasets.
While the current work uses ELMo, Beltagy et al. (2019)7 show that incorporating SciBERT, a large language model pretrained on scientific text, increases performance further. A possible extension could be to adapt the model to other domains (e.g. Wikipedia).
SciBERT: A Pretrained Language Model for Scientific Text - Beltagy et al., 2019
ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing - Neumann et al., 2019
Thanks to Elvis for the review.
Zhi Li and Yuh-Shan Ho. 2008. Use of citation per publication as an indicator to evaluate contingent valuation research. Scientometrics.
Arman Cohan, Waleed Ammar, Madeleine van Zuylen, and Field Cady. 2019. Structural scaffolds for citation intent classification in scientific publications. In NAACL-HLT, pages 3586–3596, Minneapolis, Minnesota. Association for Computational Linguistics.
David Jurgens, Srijan Kumar, Raine Hoover, Dan McFarland, and Dan Jurafsky. 2018. Measuring the evolution of a scientific field through citation frames. TACL, 6:391–406.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, pages 1532–1543.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation.
Iz Beltagy, Arman Cohan, and Kyle Lo. 2019. SciBERT: A Pretrained Language Model for Scientific Text. CoRR, abs/1903.10676.
NLP Machine Learning Text Classification
Tornike Tsereteli
M.Sc. student in Computational Linguistics
I am an M.Sc. Computational Linguistics student at the University of Stuttgart. I work on Natural Language Processing and Machine Learning. My research interests are Transfer Learning, Ethics, Explainability and Privacy.
Syndemic effects of HIV risk behaviours: results from the NHANES study
L. Smith, C. Cao, X. Zong, D. T. McDermott, S. Stefanac, S. Haider, S. E. Jackson, N. Veronese, G. F. López-Sánchez, A. Koyanagi, L. Yang, I. Grabovac
Journal: Epidemiology & Infection / Volume 147 / 2019
Published online by Cambridge University Press: 12 July 2019, e241
The aim of the present study is to use the syndemic framework to investigate the risk of contracting HIV in the US population. Cross-sectional analyses are from The National Health and Nutrition Examination Survey. We extracted and aggregated data on HIV antibody test, socio-demographic characteristics, alcohol use, drug use, depression, sexual behaviours and sexually transmitted diseases from cycle 2009–2010 to 2015–2016. We carried out weighted regression among young adults (20–39 years) and adults (40–59 years) separately. In total, 5230 men and 5794 women aged 20–59 years were included in the present analyses. In total, 0.8% men and 0.2% women were tested HIV-positive. Each increasing HIV risk behaviour was associated with elevated odds of being tested HIV-positive (1.15, 95% CI 1.15–1.15) among young adults and adults (1.61, 95% CI 1.61–1.61). Multi-faceted, community-based interventions are urgently required to reduce the incidence of HIV in the USA.
Grain size effect on deformation twin thickness in a nanocrystalline metal with low stacking-fault energy
Yusheng Li, Liangjuan Dai, Yang Cao, Yonghao Zhao, Yuntian Zhu
Journal: Journal of Materials Research / Volume 34 / Issue 13 / 15 July 2019
Print publication: 15 July 2019
Grain size effect on twin thickness has been rarely investigated, especially when the grain size is less than 1000 nm. In our previous work (Mater. Sci. Eng. A 527, 3942, 2010), different severe plastic deformation techniques were used to achieve a wide range of grain sizes from about 3 μm to 70 nm in a Cu–30% Zn alloy. Transmission electron microscopy (TEM) revealed a gradual decrease in the deformation twin thickness with decreasing grain size. In the present work, high-resolution TEM was used to further identify deformation twins and measure their thickness, especially for grain sizes below 70 nm. The twin thickness was found to gradually reduce with decreasing grain size, until a critical size (20 nm), below which only stacking faults were observed. Interestingly, the relationship between twin thickness and grain size in the ultrafine/nanocrystalline regime is found similar to that in the coarse-grained regime, despite the differences in their twinning mechanisms. This work provides a large set of data for setting up a model to predict the twin thickness in ultrafine-grained and nanocrystalline face-centered cubic materials.
Lithium for suicide and readmission prevention after electroconvulsive therapy for unipolar depression: population-based register study
Ole Brus, Yang Cao, Åsa Hammar, Mikael Landén, Johan Lundberg, Pia Nordanskog, Axel Nordenskjöld
Journal: BJPsych Open / Volume 5 / Issue 3 / May 2019
Electroconvulsive therapy (ECT) is effective for unipolar depression but relapse and suicide are significant challenges. Lithium could potentially lower these risks, but is used only in a minority of patients.
This study quantifies the effect of lithium on risk of suicide and readmission and identifies factors that are associate with readmission and suicide.
This population-based register study used data from the Swedish National Quality Register for ECT and other Swedish national registers. Patients who have received ECT for unipolar depression as in-patients between 2011 and 2016 were followed until death, readmission to hospital or the termination of the study at the end of 2016. Cox regression was used to estimate hazard ratios (HR) of readmission and suicide in adjusted models.
Out of 7350 patients, 56 died by suicide and 4203 were readmitted. Lithium was prescribed to 638 (9%) patients. Mean follow-up was 1.4 years. Lithium was significantly associated with lower risk of suicide (P = 0.014) and readmission (HR 0.84 95% CI 0.75–0.93). The number needed to be treated with lithium to prevent one readmission was 16. In addition, the following factors were statistically associated with suicide: male gender, being a widow, substance use disorder and a history of suicide attempts. Readmission was associated with young age, being divorced or unemployed, comorbid anxiety disorder, nonpsychotic depression, more severe symptoms before ECT, no improvement with ECT, not receiving continuation ECT or antidepressants, usage of antipsychotics, anxiolytics or benzodiazepines, severity of medication resistance and number of previous admissions.
More patients could benefit from lithium treatment.
Synthesis of multi-branched gold nanostructures and their surface-enhanced Raman scattering properties of 4-aminothiophenol
Min He, Beibei Cao, Xiangxiang Gao, Bin Liu, Jianhui Yang
Journal: Journal of Materials Research, First View
A facile one-pot and environmentally friendly method was developed to synthesize multi-branched flowerlike gold (Au) nanostructures by reducing chlorate gold (HAuCl4) with hydrogen peroxide (H2O2) in the presence of sodium citrate. The multibranched Au nanostructures were characterized by transmission electron microscopy and Ultraviolet-visible (UV-vis) absorption spectroscopy. The molar ratio of sodium citrate to HAuCl4 and the concentrations of the reacted reagents play important roles in the formation of multibranched Au nanostructures. The multibranched Au nanostructures with sharp tips exhibit excellent surface-enhanced Raman scattering (SERS) ability of 4-aminothiophenol (PATP). The experimental and simulated results both confirm that the photoinduced catalytic coupling reaction of PATP transformation to 4,4′-dimercaptoazobenzene occurs on the surface of multibranched Au nanostructures at a high power during the SERS measurement. It is believed that these multibranched Au nanostructures may find potential applications in SERS, biosensors, and the photoinduced surface catalytic application fields.
WINNER PLAYS STRUCTURE IN RANDOM KNOCKOUT TOURNAMENTS
Yang Cao, Sheldon M. Ross
Journal: Probability in the Engineering and Informational Sciences, First View
Published online by Cambridge University Press: 05 December 2018, pp. 1-11
Suppose there are $n$ players, with player $i$ having value $v_i > 0$, and suppose that a game between $i$ and $j$ is won by $i$ with probability $v_i/(v_i + v_j)$. In the winner plays random knockout tournament, we suppose that the players are lined up in a random order; the first two play, and in each subsequent game the winner of the last game plays the next in line. Whoever wins the game involving the last player in line is the tournament winner. We give bounds on players' tournament win probabilities and make some conjectures. We also discuss how simulation can be efficiently employed to estimate the win probabilities.
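The winner-plays mechanism described in this abstract is straightforward to simulate; the following Python sketch (my own, with illustrative player values) estimates the tournament win probabilities by Monte Carlo.

import random

def winner_plays(values):
    # One tournament: random line-up, winner of each game stays on.
    order = random.sample(range(len(values)), len(values))
    champ = order[0]
    for challenger in order[1:]:
        p = values[champ] / (values[champ] + values[challenger])
        if random.random() >= p:   # challenger wins with probability 1 - p
            champ = challenger
    return champ

values = [1.0, 2.0, 4.0]           # illustrative player values
trials = 100_000
wins = [0] * len(values)
for _ in range(trials):
    wins[winner_plays(values)] += 1
print([w / trials for w in wins])  # estimated win probabilities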
Evaluation of salinomycin isolated from Streptomyces albus JSY-2 against the ciliate, Ichthyophthirius multifiliis
Jia-Yun Yao, Ming-Yue Gao, Yong-Yi Jia, Yan-Xia Wu, Wen-Lin Yin, Zheng Cao, Gui-Lian Yang, Hai-Bin Huang, Chun-Feng Wang, Jin-Yu Shen, Zhi-Min Gu
Journal: Parasitology / Volume 146 / Issue 4 / April 2019
The present study was undertaken to investigate the antiparasitic activity of extracellular products of Streptomyces albus. Bioactivity-guided isolation of chloroform extracts affording a compound showing potent activity. The structure of the compound was elucidated as salinomycin (SAL) by EI-MS, 1H NMR and 13C NMR. In vitro test showed that SAL has potent anti-parasitic efficacy against theronts of Ichthyophthirius multifiliis with 10 min, 1, 2, 3 and 4 h (effective concentration) EC50 (95% confidence intervals) of 2.12 (2.22–2.02), 1.93 (1.98–1.88), 1.42 (1.47–1.37), 1.35 (1.41–1.31) and 1.11 (1.21–1.01) mg L−1. In vitro antiparasitic assays revealed that SAL could be 100% effective against I. multifiliis encysted tomonts at a concentration of 8.0 mg L−1. In vivo test demonstrated that the number of I. multifiliis trophonts on Erythroculter ilishaeformis treated with SAL was markedly lower than that of control group at 10 days after exposed to theronts (P < 0.05). In the control group, 80% mortality was observed owing to heavy I. multifiliis infection at 10 days. On the other hand, only 30.0% mortality was recorded in the group treated with 8.0 mg L−1 SAL. The median lethal dose (LD50) of SAL for E. ilishaeformis was 32.9 mg L−1.
A combined computational and experimental study of the adsorption of sulfur containing molecules on molybdenum disulfide nanoparticles
Tao Yang, Junpeng Feng, Xingchen Liu, Yandan Wang, Hui Ge, Dongbo Cao, Hao Li, Qing Peng, Manuel Ramos, Xiao-Dong Wen, Baojian Shen
Journal: Journal of Materials Research / Volume 33 / Issue 21 / 14 November 2018
Published online by Cambridge University Press: 20 September 2018, pp. 3589-3603
Print publication: 14 November 2018
Combining density functional theory calculations and temperature programmed desorption (TPD) experiments, the adsorption behavior of various sulfur containing compounds, including C2H5SH, CH3SCH3, tetrahydrothiophene, thiophene, benzothiophene, dibenzothiophene, and their derivatives on the coordinately unsaturated sites of Mo27Sx model nanoparticles, are studied systematically. Sulfur molecules with aromaticity prefer flat adsorption than perpendicular adsorption. The adsorption of nonaromatic molecules is stronger than the perpendicular adsorption of aromatic molecules, but weaker than the flat adsorption of them. With gradual hydrogenation (HYD), the binding affinity in the perpendicular adsorption modes increases, while in flat adsorption modes it increases first, then decreases. Significant steric effects on the adsorption of dimethyldibenzothiophene were revealed in perpendicular adsorption modes. The steric effect, besides weakening adsorption, could also activate the S–C bonds through a compensation effect. Finally, by comparing the theoretical adsorption energies with the TPD results, we suggest that HYD and direct-desulfurization path may happen simultaneously, but on different active sites.
The prevalence and clinical characteristics of tick-borne diseases at One Sentinel Hospital in Northeastern China
Hong-Bo Liu, Ran Wei, Xue-Bing Ni, Yuan-Chun Zheng, Qiu-Bo Huo, Bao-Gui Jiang, Lan Ma, Rui-Ruo Jiang, Jin Lv, Yun-Xi Liu, Fang Yang, Yun-Huan Zhang, Jia-Fu Jiang, Na Jia, Wu-Chun Cao
Journal: Parasitology / Volume 146 / Issue 2 / February 2019
Northeastern China is a region of high tick abundance, multiple tick-borne pathogens and likely human infections. The spectrum of diseases caused by tick-borne pathogens has not been objectively evaluated in this region for clinical management and for comparison with other regions globally where tick-transmitted diseases are common. Based on clinical symptoms, PCR, indirect immunofluorescent assay and (or) blood smear, we identified and described tick-borne diseases from patients with recent tick bite seen at Mudanjiang Forestry Central Hospital. From May 2010 to September 2011, 42% (75/180) of patients were diagnosed with a specific tick-borne disease, including Lyme borreliosis, tick-borne encephalitis, human granulocytic anaplasmosis, human babesiosis and spotted fever group rickettsiosis. When we compared clinical and laboratory features to identify factors that might discriminate tick-transmitted infections from those lacking that evidence, we revealed that erythema migrans and neurological manifestations were statistically significantly differently presented between those with and without documented aetiologies (P < 0.001, P = 0.003). Twelve patients (6.7%, 12/180) were co-infected with two tick-borne pathogens. We demonstrated the poor ability of clinicians to identify the specific tick-borne disease. In addition, it is necessary to develop specific laboratory assays for optimal diagnosis of tick-borne diseases.
Mapping Strain and Relaxation in 2D Heterojunctions with Sub-picometer Precision
Yimo Han, Kayla Nguyen, Michael Cao, Paul Cueva, Mark W. Tate, Prafull Purohit, Saien Xie, Ming-Yang Li, Lain-Jong Li, Jiwoong Park, Sol M. Gruner, David A. Muller
Serum magnesium and cardiovascular mortality in peritoneal dialysis patients: a 5-year prospective cohort study
Hongjian Ye, Peiyi Cao, Xiaodan Zhang, Jianxiong Lin, Qunying Guo, Haiping Mao, Xueqing Yu, Xiao Yang
The aim of this study was to explore the association between serum Mg and cardiovascular mortality in the peritoneal dialysis (PD) population. This prospective cohort study included prevalent PD patients from a single centre. The primary outcome of this study was cardiovascular mortality. Serum Mg was assessed at baseline. A total of 402 patients (57 % male; mean age 49·3±14·9 years) were included. After a median of 49·9 months (interquartile range: 25·9–68·3) of follow-up, sixty-two patients (25·4 %) died of CVD. After adjustment for conventional confounders in multivariate Cox regression models, being in the lower quartile for serum Mg level was independently associated with a higher risk of cardiovascular mortality, with hazards ratios of 2·28 (95 % CI 1·04, 5·01), 1·41 (95 % CI 0·63, 3·16) and 1·62 (95 % CI 0·75, 3·51) for the lowest, second and third quartiles, respectively. A similar trend was observed when all-cause mortality was used as the study endpoint. Further analysis showed that the relationships between lower serum Mg and higher risk of cardiovascular and all-cause mortality were present only in the female subgroup, and not among male patients. The test for interaction indicated that the associations between lower serum Mg and cardiovascular and all-cause mortality differed by sex (P=0·008 and P=0·011, respectively). In conclusion, lower serum Mg was associated with a higher risk of cardiovascular and all-cause mortality in the PD population, especially among female patients.
Approximation forte pour les variétés avec une action d'un groupe linéaire
Yang Cao
Journal: Compositio Mathematica / Volume 154 / Issue 4 / April 2018
Let $G$ be a connected linear algebraic group over a number field $k$ . Let $U{\hookrightarrow}X$ be a $G$ -equivariant open embedding of a $G$ -homogeneous space $U$ with connected stabilizers into a smooth $G$ -variety $X$ . We prove that $X$ satisfies strong approximation with Brauer–Manin condition off a set $S$ of places of $k$ under either of the following hypotheses:
(i) $S$ is the set of archimedean places;
(ii) $S$ is a non-empty finite set and $\bar{k}^{\times }=\bar{k}[X]^{\times }$ .
The proof builds upon the case $X=U$ , which has been the object of several works.
Direct CFD prediction of dynamic derivatives for a complete transport aircraft in the dry and heavy rain environment
Z. Wu, Y. Cao, Y. Yang
Journal: The Aeronautical Journal / Volume 122 / Issue 1247 / January 2018
Published online by Cambridge University Press: 20 November 2017, pp. 1-20
Among various aviation meteorological conditions, heavy rain is an important one that may seriously affect aircraft flight safety. Over the past decades, appreciable efforts have been made to study the impacts of heavy rain on aircraft flight performance. Although there has been a consistent conclusion that heavy rain can cause great static aerodynamic performance degradation, such as lift decrease and drag increase, little is known about the effects of heavy rain on aircraft dynamic flight performance. This article explores the static and dynamic aerodynamic performance of an approximated model of the DLR-F12 transport aircraft in a simulated heavy rain environment. A novel synthesised approach is proposed to study the stability dynamic derivatives in a heavy rain condition. The results suggest that heavy rain not only causes more fuel consumption to compensate for the lost lift performance but also induces great dynamic flight performance degradation, especially in the short-period mode, and thus seriously threatens aircraft flight safety.
Fixpoint semantics and optimization of recursive Datalog programs with aggregates
CARLO ZANIOLO, MOHAN YANG, ARIYAM DAS, ALEXANDER SHKAPSKY, TYSON CONDIE, MATTEO INTERLANDI
Journal: Theory and Practice of Logic Programming / Volume 17 / Issue 5-6 / September 2017
A very desirable Datalog extension investigated by many researchers in the last 30 years consists in allowing the use of the basic SQL aggregates min, max, count and sum in recursive rules. In this paper, we propose a simple comprehensive solution that extends the declarative least-fixpoint semantics of Horn Clauses, along with the optimization techniques used in the bottom-up implementation approach adopted by many Datalog systems. We start by identifying a large class of programs of great practical interest in which the use of min or max in recursive rules does not compromise the declarative fixpoint semantics of the programs using those rules. Then, we revisit the monotonic versions of count and sum aggregates proposed by Mazuran et al. (2013b, The VLDB Journal 22, 4, 471–493) and named, respectively, mcount and msum. Since mcount, and also msum on positive numbers, are monotonic in the lattice of set-containment, they preserve the fixpoint semantics of Horn Clauses. However, in many applications of practical interest, their use can lead to inefficiencies, that can be eliminated by combining them with max, whereby mcount and msum become the standard count and sum. Therefore, the semantics and optimization techniques of Datalog are extended to recursive programs with min, max, count and sum, making possible the advanced applications of superior performance and scalability demonstrated by BigDatalog (Shkapsky et al. 2016. In SIGMOD. ACM, 1135–1149) and Datalog-MC (Yang et al. 2017. The VLDB Journal 26, 2, 229–248).
Microstructural evolution and mechanical properties of a 5052 Al alloy with gradient structures
Yusheng Li, Lingzhen Li, Jinfeng Nie, Yang Cao, Yonghao Zhao, Yuntian Zhu
Journal: Journal of Materials Research / Volume 32 / Issue 23 / 14 December 2017
In this paper, we report on the microstructural evolution and mechanical properties of a 5052 Al alloy processed by rotationally accelerated shot peening (RASP). A thick deformation layer of ∼2 mm was formed after the RASP process. Nano-sized grains, equiaxed subgrains, and elongated subgrains were observed along the depth of the deformation layer. Dislocation accumulation and dynamic recrystallization were found primarily responsible for the grain refinement process. An obvious microhardness gradient was observed for all of the samples with different RASP processing parameters, and the microhardness in the top surface of 50 m/s-5 min RASP-processed sample is twice that of its coarse-grained (CG) counterpart. The yield strengths of the RASP-processed 5052 Al alloy samples were 1.4–2.6 times that of CG counterparts, while retaining a decent ductility (25–84% that of CG). The superior properties imparted by the gradient structure are expected to expand the application of the 5052 Al alloy as a structural material.
Anti-torque systems of electromechanical cable-suspended drills and test results
Pavel Talalay, Xiaopeng Fan, Zhichuan Zheng, Jun Xue, Pinlu Cao, Nan Zhang, Rusheng Wang, Dahui Yu, Chengfeng Yu, Yunlong Zhang, Qi Zhang, Kai Su, Dongdong Yang, Jiewei Zhan
Journal: Annals of Glaciology / Volume 55 / Issue 68 / 2014
To prevent spinning of the upper non-rotated part of the electromechanical drill, an 'anti-torque system' has to be included in the downhole unit. At the same time, the anti-torque must allow the drill to move up and down the borehole during drilling and tripping operations. Usually the anti-torque system has a blade form of various designs that engages with the borehole wall and counteracts the torque from the stator of the driving motor. This paper presents a review of the different anti-torque systems and test results with selected designs (leaf spring, skate and U-shaped anti-torque systems). Experiments showed that the skate anti-torque system can provide a maximal holding torque between 67 and 267 N m depending on the skates' outer diameter and ice temperature, while the leaf spring anti-torque system can provide only 2.5–40 N m (in the case of straight contact between the ice and the leaf springs). The total resistance force to axial movement of the skate anti-torque system lies in the range 209–454 N if the system is vibrating. For the leaf spring anti-torque system, the total axial resistance force is far less (19–243 N).
Low-load diamond drill bits for subglacial bedrock sampling
Pinlu Cao, Cheng Yang, Zhichuan Zheng, Rusheng Wang, Nan Zhang, Chunpeng Liu, Zhengyi Hu, Pavel Talalay
Electromechanical cable-suspended drilling technology is considered one of the most feasible methods for subglacial bedrock drilling. The outstanding feature of this technology is that the bit load produced by the drill weight is usually within the range 1.5–4 kN while the recommended load for diamond drilling is 10–30 kN or even more. Therefore, searching for the diamond bits that can drill in extremely hard formations with minimal load and acceptable rates of penetration and torque is the necessary step to prove the feasibility of electromechanical subglacial drilling technology. A special test stand has been designed and constructed to examine the impregnated, surface-set, toothed and specially manufactured bionic drill bits. The results of experiments with ten types of drill bits show that the toothed diamond drill bit has the highest penetration rate of 3.18 m h−1 in very hard and abrasive granite under a 3 kN load. The torque (28.7 Nm) and power consumption (1.5 kW) of toothed drill bits are acceptable for cable-suspended drilling. The penetration rates of bionic drill bits may also be considered suitable and fall within the range 1.0–1.69 m h−1 under the lowest tested load.
Understanding 2D Crystal Vertical Heterostructures at the Atomic Scale Using Advanced Scanning Transmission Electron Microscopy
Sarah J. Haigh, Aidan P. Rooney, Tom J.A. Slater, Eric Prestat, Ekaterina Khestanova, Rob Dryfe, Matej Velický, Roman V. Gorbachev, Rada Boya, Yang Cao, Irina Grigorieva, Kostya Novoselov, Fred Withers, Andre K. Geim
Journal: Microscopy and Microanalysis / Volume 23 / Issue S1 / July 2017
Dopant-controlled photoluminescence of Ag-doped Zn–In–S nanocrystals
Xinjun Shi, Jinju Zheng, Minhui Shang, Tingting Xie, Jiangbo Xie, Sheng Cao, Weiyou Yang
In this work, we reported the growth of cadmium-free Ag-doped Zn–In–S nanocrystals (NCs) with effective photoluminescence (PL) via a hot-injection strategy. The effects of the nucleation temperatures, reaction times, and Ag-doping concentrations on the PL properties of Ag-doped Zn–In–S NCs were investigated systematically. The as-synthesized NCs exhibit color-tunable PL emissions covering a broad visible range of 472–585 nm. After being passivated by a protective ZnS shell, the PL quantum yield (QY) of the resultant NCs was greatly improved up to 33%. With the increase of the Ag-doping level, the PL is significantly intensified due to the improved concentration of Ag ions which provides more holes to recombine with electrons from the bottom of the conduction band. This also makes the emission via the dopant energy level become a powerful, competitive advantage for the NCs with higher Ag-doping levels, resulting in a longer lifetime and higher PL QY. These results suggest that tailoring the Ag-doping level can be a powerful strategy to control the optical properties of Ag-doped Zn–In–S NCs.
The Wuhan Twin Birth Cohort (WTBC)
Jinzhu Zhao, Shaoping Yang, Anna Peng, Zhengmin Qian, Hong Xian, Tianjiao Chen, Guanghui Dong, Yiming Zhang, Xijiang Hu, Zhong Chen, Jiangxia Cao, Xiaojie Song, Shunqing Xu, Tongzhang Zheng, Bin Zhang
The Wuhan Pre/Post-Natal Twin Birth Registry (WPTBR) is one of the largest twin birth registries with comprehensive medical information in China. It recruits women from the first trimester of pregnancy and their twins from birth. From January 2006 to May 2016, the total number of twins enrolled in WPTBR is 13,869 twin pairs (27,553 individuals). The WPTBR initiated the Wuhan Twin Birth Cohort (WTBC). The WTBC is a prospective cohort study carried out through incorporation of three samples. The first one comprises 6,920 twin pairs, and the second one, 6,949 twin pairs. Both are population-based samples linked to the WPTBR and include pre- and post-natal information from WPTBR. The second sample includes neonatal blood spots as well. Using a hospital-based approach, we recently developed a third sample with a target enrolment of 1,000 twin pairs and their mothers. These twins are invited, via their parents, to participate in a periodic health examination from the first trimester of pregnancy to 18 years. Biological samples are collected initially from the mother, including blood, urine, cord blood, cord, amniotic fluid, placenta, breast milk and meconium, and vaginal secretions, and later from the twins, including meconium, stool, urine, and blood. This article describes the design, recruitment, follow-up, data collection, and measures, as well as ongoing and planned analyses at the WTBC. The WTBC offers a unique opportunity to follow women from prenatal to postnatal, as well as follow-up of their twins. This cohort study will expand the understanding of genetic and environmental influences on pregnancy and twins' development in China.
Novel Field Effect Transistor Fabrication Based on Non-Graphene 2D Materials
Yu-Tao Li, Hai-Ming Zhao, He Tian, Peng-Zhi Shao, Xin Xin, Hui-Wen Cao, Ning-Qin Deng, Yi Yang, Tian-Ling Ren
Journal: MRS Advances / Volume 2 / Issue 60 / 2017
In this paper, field effect transistors (FETs) based on different kinds of non-graphene materials are introduced: MoS2, WSe2 and black phosphorus (BP). These devices have unique features in their fabrication processes compared with conventional FETs. Among them, the MoS2 FET shows better electrical characteristics when a SiO2 protective layer is applied; the WSe2 FET is fabricated using a new low pressure chemical vapor deposition (LPCVD) method; and the BP FET achieves a high on/off ratio and high hole mobility via a simple dry transfer method. These novel non-graphene materials inspire new designs and fabrication processes for basic logic devices.
Listing all monotone increasing binary digits
For $n=5$, I have $32$ binary digits, and so there are $2^{32}$ combinations. Of these, the interesting ones (i.e., the ones that have monotone increasing binary digits) number only about $2111$. I know how these can be found very easily, but I am missing the Mathematica knowledge, so I cannot complete my program.
First I need to separate these $32$ bits according to Pascal's triangle numbers. Since $n=5$, these numbers are $1,5,10,10,5,1$. Their sum is $32$, and I separate the digits accordingly. My list is as follows:
List={
{0|00000|0000000000|0000000000|00000|0}
... those who satisfy the rule below
{1|11111|1111111111|1111111111|11111|1}}
From right to left, I start with the 5-bit block and take all possible combinations:
|00000|--> 00001,00010,00011...11111
This says I have the following ones in the list
{0|00000|0000000000|0000000000|00001|1,
0|00000|0000000000|0000000000|00010|1,
0|00000|0000000000|0000000000|00011|1,...,
0|00000|0000000000|0000000000|11111|1}
Then I go left, keeping |11111|1 fixed. Now I have $10$ digits, and the following are the elements of the list
Then I do the same thing again, but fixing |1111111111|11111|1. The following are then the elements of the set:
Again the same thing and we are done:
So in total there are $2^5$ numbers from the first stage. From the other two stages we have $2^{10}-1$ each, and from the last stage $2^5-1$ numbers. We also have the all-zeros and all-ones words, and if I calculated correctly there are $2111$ of them in total. I used the character "|" for separation. The final list should look like
List={{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},....,{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}}
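As a quick cross-check of that count (in Python, since the arithmetic is independent of Mathematica), summing the stages for $n=5$ indeed gives $2111$:

# Stage counts for n = 5, block sizes 1,5,10,10,5,1 from left to right.
stage1 = 2 ** 5                          # 5-bit block free, final bit fixed to 1
middle = (2 ** 10 - 1) + (2 ** 10 - 1)   # each 10-bit block nonzero, suffix all ones
last = 2 ** 5 - 1                        # leftmost 5-bit block nonzero
print(stage1 + middle + last + 2)        # + all-zeros and all-ones -> 2111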
I only have the following code part:
IntegerDigits[2, 2, 5]
{0, 0, 0, 1, 0}
IntegerDigits[2, 2, 10]
{0, 0, 0, 0, 0, 0, 0, 0, 1, 0}
Using this code in a 'for loop' I can obtain such combinations, but I must concatenate them with the preceding bits, which are all zeros, and the subsequent bits, which are all 1s. This is what I don't know how to do.
list-manipulation binary digits
Seyhmus Güngören
One idea is to make use of Tuples:
monotoneTuples[b_, m_, e_] := Rest @ Tuples@Join[
ConstantArray[{0}, b],
ConstantArray[{0,1}, m],
ConstantArray[{1}, e]
]
monotoneTuples will create your partitioned set. For example:
Column @ monotoneTuples[5, 3, 1] //TeXForm
$\begin{array}{l} \{0,0,0,0,0,0,0,1,1\} \\ \{0,0,0,0,0,0,1,0,1\} \\ \{0,0,0,0,0,0,1,1,1\} \\ \{0,0,0,0,0,1,0,0,1\} \\ \{0,0,0,0,0,1,0,1,1\} \\ \{0,0,0,0,0,1,1,0,1\} \\ \{0,0,0,0,0,1,1,1,1\} \\ \end{array}$
Then, you can join each of these sets:
res = Join[
monotoneTuples[31,1,0],
monotoneTuples[26,5,1],
monotoneTuples[16,10,6],
monotoneTuples[6,10,16],
monotoneTuples[1,5,26],
monotoneTuples[0,1,31]
]
res //Length
For memory reasons, it makes sense to create a list of integers instead of a list of bit vectors, especially if you will be creating a lot of them. So, an alternative is:
monotoneIntegers[list_] := With[{a = Accumulate[Prepend[0] @ Most @ Reverse @ list]},
Prepend[0][Join @@ (Range[2, 2^Reverse@list] 2^a - 1)]
]
For example, compare:
integers = monotoneIntegers[{1,3,2}]
bitvectors = IntegerDigits[%, 2, 6]
{0, 1, 2, 3, 7, 11, 15, 19, 23, 27, 31, 63}
{{0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 1}, {0, 0, 0, 0, 1, 0}, {0, 0, 0, 0, 1, 1}, {0, 0, 0, 1, 1, 1}, {0, 0, 1, 0, 1, 1}, {0, 0, 1, 1, 1, 1}, {0, 1, 0, 0, 1, 1}, {0, 1, 0, 1, 1, 1}, {0, 1, 1, 0, 1, 1}, {0, 1, 1, 1, 1, 1}, {1, 1, 1, 1, 1, 1}}
The memory usage of the integers is much less:
ByteCount @ integers
ByteCount @ bitvectors
and the disparity increases with more bits.
Carl WollCarl Woll
$\begingroup$ thats really an excellent way. Just the all zero sequence is missing. then the total number will be also $2111$. $\endgroup$ – Seyhmus Güngören Sep 22 '18 at 23:11
$\begingroup$ monotoneTuples[0, 31, 1] this is killing my mathematica) $\endgroup$ – Seyhmus Güngören Sep 23 '18 at 22:04
$\begingroup$ @SeyhmusGüngören It takes 550GB to store a 2^31 by 32 array of integers, so that's why your computer is unhappy. $\endgroup$ – Carl Woll Sep 24 '18 at 0:39
$\begingroup$ maybe it could be nicer if your comment would be told to me by Mathematica itself? $\endgroup$ – Seyhmus Güngören Sep 24 '18 at 13:44
ClearAll[monotoneDigits]
monotoneDigits = Module[{a = Accumulate[Binomial[#, Range[-1, #]]]},
Join[{ConstantArray[0, 2^#]},
IntegerDigits[Join@@Range[2^Most[1 + a] - 1, 2^Rest[a] - 1, 2^Most[a]], 2, 2^#]]]&;
Length @ monotoneDigits[5]
Rest @ monotoneDigits[5] == res (* res from Carl's answer *)
Length /@ (monotoneDigits /@ Range[2, 6])
{6, 17, 96, 2111, 1114238}
monotoneDigits[3] // Column // TeXForm
$\small\begin{array}{l} \{0,0,0,0,0,0,0,0\} \\ \{0,0,0,0,0,0,0,1\} \\ \{0,0,0,0,0,0,1,1\} \\ \{0,0,0,0,0,1,0,1\} \\ \{0,0,0,0,0,1,1,1\} \\ \{0,0,0,0,1,0,0,1\} \\ \{0,0,0,0,1,0,1,1\} \\ \{0,0,0,0,1,1,0,1\} \\ \{0,0,0,0,1,1,1,1\} \\ \{0,0,0,1,1,1,1,1\} \\ \{0,0,1,0,1,1,1,1\} \\ \{0,0,1,1,1,1,1,1\} \\ \{0,1,0,0,1,1,1,1\} \\ \{0,1,0,1,1,1,1,1\} \\ \{0,1,1,0,1,1,1,1\} \\ \{0,1,1,1,1,1,1,1\} \\ \{1,1,1,1,1,1,1,1\} \\ \end{array}$
Update: An alternative using Tuples in combination with PadLeft and PadRight:
ClearAll[paddedTuples , monotoneDigits2]
paddedTuples[t_][{m_, n_}] := PadLeft[PadRight[Rest@Tuples[{0, 1}, m],
{Automatic, n}, 1], {Automatic, t}]
monotoneDigits2 = Module[{a = Transpose[{#, Accumulate@#}& @ Binomial[#, Range[0, #]]]},
Join @@ (paddedTuples[2^#] /@ a)]&;
monotoneDigits2[5] == res
$\begingroup$ Great job as always.. $\endgroup$ – Seyhmus Güngören Sep 23 '18 at 19:24
Recruitment, Readiness, and Retention of Providers at a Field Hospital during the Pandemic
Ishaan Gupta, Zishan K. Siddiqui, Mark D. Phillips, Amteshwar Singh, Shaker M. Eid, Flora Kisuule, James R. Ficke, CONQUER COVID Consortium, Melinda E. Kantsiper
Journal: Disaster Medicine and Public Health Preparedness / Accepted manuscript
Published online by Cambridge University Press: 10 January 2022, pp. 1-21
In response to the coronavirus disease 2019 (COVID-19) pandemic, the State of Maryland established a 250-bed emergency response field hospital at the Baltimore Convention Center to support the existing healthcare infrastructure. To operationalize this hospital with 65 full-time equivalent (FTE) clinicians in less than four weeks, more than 300 applications were reviewed, 186 candidates were interviewed, and 159 clinicians were credentialed and onboarded. The key steps to achieve this undertaking involved employing multidisciplinary teams with experienced personnel, mass outreach, streamlined candidate tracking, pre-interview screening, utilizing all available expertise, expedited credentialing, and focused onboarding. To ensure staff preparedness, the leadership developed innovative team models, applied principles of effective team building, and provided 'just in time' training on COVID-19 and non-COVID-19 related topics to the staff. The leadership focused on staff safety and well-being, offered appropriate financial remuneration and provided leadership opportunities that allowed retention of staff.
The ASKAP Variables and Slow Transients (VAST) Pilot Survey
Tara Murphy, David L. Kaplan, Adam J. Stewart, Andrew O'Brien, Emil Lenc, Sergio Pintaldi, Joshua Pritchard, Dougal Dobie, Archibald Fox, James K. Leung, Tao An, Martin E. Bell, Jess W. Broderick, Shami Chatterjee, Shi Dai, Daniele d'Antonio, Gerry Doyle, B. M. Gaensler, George Heald, Assaf Horesh, Megan L. Jones, David McConnell, Vanessa A. Moss, Wasim Raja, Gavin Ramsay, Stuart Ryder, Elaine M. Sadler, Gregory R. Sivakoff, Yuanming Wang, Ziteng Wang, Michael S. Wheatland, Matthew Whiting, James R. Allison, C. S. Anderson, Lewis Ball, K. Bannister, D. C.-J. Bock, R. Bolton, J. D. Bunton, R. Chekkala, A. P Chippendale, F. R. Cooray, N. Gupta, D. B. Hayman, K. Jeganathan, B. Koribalski, K. Lee-Waddell, Elizabeth K. Mahony, J. Marvil, N. M. McClure-Griffiths, P. Mirtschin, A. Ng, S. Pearce, C. Phillips, M. A. Voronkov
Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021
Published online by Cambridge University Press: 12 October 2021, e054
The Variables and Slow Transients Survey (VAST) on the Australian Square Kilometre Array Pathfinder (ASKAP) is designed to detect highly variable and transient radio sources on timescales from 5 s to $\sim\!5$ yr. In this paper, we present the survey description, observation strategy and initial results from the VAST Phase I Pilot Survey. This pilot survey consists of $\sim\!162$ h of observations conducted at a central frequency of 888 MHz between 2019 August and 2020 August, with a typical rms sensitivity of $0.24\ \mathrm{mJy\ beam}^{-1}$ and angular resolution of $12-20$ arcseconds. There are 113 fields, each of which was observed for 12 min integration time, with between 5 and 13 repeats, with cadences between 1 day and 8 months. The total area of the pilot survey footprint is 5 131 square degrees, covering six distinct regions of the sky. An initial search of two of these regions, totalling 1 646 square degrees, revealed 28 highly variable and/or transient sources. Seven of these are known pulsars, including the millisecond pulsar J2039–5617. Another seven are stars, four of which have no previously reported radio detection (SCR J0533–4257, LEHPM 2-783, UCAC3 89–412162 and 2MASS J22414436–6119311). Of the remaining 14 sources, two are active galactic nuclei, six are associated with galaxies and the other six have no multi-wavelength counterparts and are yet to be identified.
The Evolutionary Map of the Universe pilot survey
Ray P. Norris, Joshua Marvil, J. D. Collier, Anna D. Kapińska, Andrew N. O'Brien, L. Rudnick, Heinz Andernach, Jacobo Asorey, Michael J. I. Brown, Marcus Brüggen, Evan Crawford, Jayanne English, Syed Faisal ur Rahman, Miroslav D. Filipović, Yjan Gordon, Gülay Gürkan, Catherine Hale, Andrew M. Hopkins, Minh T. Huynh, Kim HyeongHan, M. James Jee, Bärbel S. Koribalski, Emil Lenc, Kieran Luken, David Parkinson, Isabella Prandoni, Wasim Raja, Thomas H. Reiprich, Christopher J. Riseley, Stanislav S. Shabala, Jaimie R. Sheil, Tessa Vernstrom, Matthew T. Whiting, James R. Allison, C. S. Anderson, Lewis Ball, Martin Bell, John Bunton, T. J. Galvin, Neeraj Gupta, Aidan Hotan, Colin Jacka, Peter J. Macgregor, Elizabeth K. Mahony, Umberto Maio, Vanessa Moss, M. Pandey-Pommier, Maxim A. Voronkov
Published online by Cambridge University Press: 07 September 2021, e046
We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers $270 \,\mathrm{deg}^2$ of an area covered by the Dark Energy Survey, reaching a depth of 25–30 $\mu\mathrm{Jy\ beam}^{-1}$ rms at a spatial resolution of $\sim$ 11–18 arcsec, resulting in a catalogue of $\sim$ 220 000 sources, of which $\sim$ 180 000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface brightness emission. These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here.
The MAGPI survey: Science goals, design, observing strategy, early results and theoretical framework
C. Foster, J. T. Mendel, C. D. P. Lagos, E. Wisnioski, T. Yuan, F. D'Eugenio, T. M. Barone, K. E. Harborne, S. P. Vaughan, F. Schulze, R.-S. Remus, A. Gupta, F. Collacchioni, D. J. Khim, P. Taylor, R. Bassett, S. M. Croom, R. M. McDermid, A. Poci, A. J. Battisti, J. Bland-Hawthorn, S. Bellstedt, M. Colless, L. J. M. Davies, C. Derkenne, S. Driver, A. Ferré-Mateu, D. B. Fisher, E. Gjergo, E. J. Johnston, A. Khalid, C. Kobayashi, S. Oh, Y. Peng, A. S. G. Robotham, P. Sharda, S. M. Sweet, E. N. Taylor, K.-V. H. Tran, J. W. Trayford, J. van de Sande, S. K. Yi, L. Zanisi
Published online by Cambridge University Press: 26 July 2021, e031
We present an overview of the Middle Ages Galaxy Properties with Integral Field Spectroscopy (MAGPI) survey, a Large Program on the European Southern Observatory Very Large Telescope. MAGPI is designed to study the physical drivers of galaxy transformation at a lookback time of 3–4 Gyr, during which the dynamical, morphological, and chemical properties of galaxies are predicted to evolve significantly. The survey uses new medium-deep adaptive optics aided Multi-Unit Spectroscopic Explorer (MUSE) observations of fields selected from the Galaxy and Mass Assembly (GAMA) survey, providing a wealth of publicly available ancillary multi-wavelength data. With these data, MAGPI will map the kinematic and chemical properties of stars and ionised gas for a sample of 60 massive ( ${>}7 \times 10^{10} {\mathrm{M}}_\odot$ ) central galaxies at $0.25 < z <0.35$ in a representative range of environments (isolated, groups and clusters). The spatial resolution delivered by MUSE with Ground Layer Adaptive Optics ( $0.6-0.8$ arcsec FWHM) will facilitate a direct comparison with Integral Field Spectroscopy surveys of the nearby Universe, such as SAMI and MaNGA, and at higher redshifts using adaptive optics, for example, SINS. In addition to the primary (central) galaxy sample, MAGPI will deliver resolved and unresolved spectra for as many as 150 satellite galaxies at $0.25 < z <0.35$ , as well as hundreds of emission-line sources at $z < 6$ . This paper outlines the science goals, survey design, and observing strategy of MAGPI. We also present a first look at the MAGPI data, and the theoretical framework to which MAGPI data will be compared using the current generation of cosmological hydrodynamical simulations including EAGLE, Magneticum, HORIZON-AGN, and Illustris-TNG. Our results show that cosmological hydrodynamical simulations make discrepant predictions in the spatially resolved properties of galaxies at $z\approx 0.3$ . MAGPI observations will place new constraints and allow for tangible improvements in galaxy formation theory.
Mastoid cavity obliteration using bone pâté versus bioactive glass granules in the management of chronic otitis media (squamous disease): a prospective comparative study
A K Mishra, A Mallick, J R Galagali, A Gupta, A Sethi, A Ghotra
Journal: The Journal of Laryngology & Otology / Volume 135 / Issue 6 / June 2021
To compare the efficacy of bone pâté versus bioactive glass in mastoid obliteration.
This randomised parallel groups study was conducted at a tertiary care centre between September 2017 and August 2019. Sixty-eight patients, 33 males and 35 females, aged 12–56 years, randomly underwent single-stage canal wall down mastoidectomy with mastoid obliteration using either bone pâté (n = 35) or bioactive glass (n = 33), and were evaluated 12 months after the operation.
A dry epithelised cavity (Merchant's grade 0 or 1) was achieved in 65 patients (95.59 per cent). Three patients (4.41 per cent) showed recidivism. The mean air–bone gap decreased to 16.80 ± 4.23 dB from 35.10 ± 5.21 dB pre-operatively. The mean Glasgow Benefit Inventory score was 30.02 ± 8.23. There was no significant difference between the two groups in these outcomes. However, the duration of surgery was shorter in the bioactive glass group (156.87 ± 7.83 vs 162.28 ± 8.74 minutes; p = 0.01).
The efficacy of both materials was comparable.
Clinical Study of 668 Indian Subjects with Juvenile, Young, and Early Onset Parkinson's Disease
Prashanth L. Kukkle, Vinay Goyal, Thenral S. Geetha, Kandadai R. Mridula, Hrishikesh Kumar, Rupam Borgohain, Adreesh Mukherjee, Pettarusp M. Wadia, Ravi Yadav, Soaham Desai, Niraj Kumar, Ravi Gupta, Atanu Biswas, Pramod K. Pal, Uday Muthane, Shymal K. Das, Niall Quinn, Vedam L. Ramprasad, the Parkinson Research Alliance of India (PRAI)
Journal: Canadian Journal of Neurological Sciences / Volume 49 / Issue 1 / January 2022
Published online by Cambridge University Press: 09 March 2021, pp. 93-101
Print publication: January 2022
To determine the demographic pattern of juvenile-onset parkinsonism (JP, <20 years), young-onset (YOPD, 20–40 years), and early onset (EOPD, 40–50 years) Parkinson's disease (PD) in India.
Materials and Methods:
We conducted a 2-year, pan-India, multicenter collaborative study to analyze clinical patterns of JP, YOPD, and EOPD. All patients under follow-up of movement disorders specialists and meeting United Kingdom (UK) Brain Bank criteria for PD were included.
A total of 668 subjects (M:F 455:213) were recruited with a mean age at onset of 38.7 ± 8.1 years. The mean duration of symptoms at the time of study was 8 ± 6 years. Fifteen percent had a family history of PD and 13% had consanguinity. JP had the highest consanguinity rate (53%). YOPD and JP cases had a higher prevalence of consanguinity, dystonia, and gait and balance issues compared to those with EOPD. In relation to nonmotor symptoms, panic attacks and depression were more common in YOPD and sleep-related issues more common in EOPD subjects. Overall, dyskinesias were documented in 32.8%. YOPD subjects had a higher frequency of dyskinesia than EOPD subjects (39.9% vs. 25.5%), but they were first noted later in the disease course (5.7 vs. 4.4 years).
This large cohort shows differing clinical patterns in JP, YOPD, and EOPD cases. We propose that cutoffs of <20, <40, and <50 years should preferably be used to define JP, YOPD, and EOPD.
Australian square kilometre array pathfinder: I. system description
A. W. Hotan, J. D. Bunton, A. P. Chippendale, M. Whiting, J. Tuthill, V. A. Moss, D. McConnell, S. W. Amy, M. T. Huynh, J. R. Allison, C. S. Anderson, K. W. Bannister, E. Bastholm, R. Beresford, D. C.-J. Bock, R. Bolton, J. M. Chapman, K. Chow, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, I. J. Feain, T. M. O. Franzen, D. George, N. Gupta, G. A. Hampson, L. Harvey-Smith, D. B. Hayman, I. Heywood, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, S. Johnston, M. Kesteven, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, E. Lenc, E. S. Lensson, S. Mackay, E. K. Mahony, N. M. McClure-Griffiths, R. McConigley, P. Mirtschin, A. K. Ng, R. P. Norris, S. E. Pearce, C. Phillips, M. A. Pilawa, W. Raja, J. E. Reynolds, P. Roberts, D. N. Roxby, E. M. Sadler, M. Shields, A. E. T. Schinckel, P. Serra, R. D. Shaw, T. Sweetnam, E. R. Troup, A. Tzioumis, M. A. Voronkov, T. Westmeier
Published online by Cambridge University Press: 05 March 2021, e009
In this paper, we describe the system design and capabilities of the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope at the conclusion of its construction project and commencement of science operations. ASKAP is one of the first radio telescopes to deploy phased array feed (PAF) technology on a large scale, giving it an instantaneous field of view that covers $31\,\textrm{deg}^{2}$ at $800\,\textrm{MHz}$. As a two-dimensional array of 36 $\times$12 m antennas, with baselines ranging from 22 m to 6 km, ASKAP also has excellent snapshot imaging capability and 10 arcsec resolution. This, combined with 288 MHz of instantaneous bandwidth and a unique third axis of rotation on each antenna, gives ASKAP the capability to create high dynamic range images of large sky areas very quickly. It is an excellent telescope for surveys between 700 and $1800\,\textrm{MHz}$ and is expected to facilitate great advances in our understanding of galaxy formation, cosmology, and radio transients while opening new parameter space for discovery of the unknown.
Cross cultural and global uses of a digital mental health app: results of focus groups with clinicians, patients and family members in India and the United States
Elena Rodriguez-Villa, Abhijit R. Rozatkar, Mohit Kumar, Vikram Patel, Ameya Bondre, Shalini S. Naik, Siddharth Dutt, Urvakhsh M. Mehta, Srilakshmi Nagendra, Deepak Tugnawat, Ritu Shrivastava, Harikeerthan Raghuram, Azaz Khan, John A. Naslund, Snehil Gupta, Anant Bhan, Jagadisha Thirthall, Prabhat K. Chand, Tanvi Lakhtakia, Matcheri Keshavan, John Torous
Journal: Global Mental Health / Volume 8 / 2021
Published online by Cambridge University Press: 24 August 2021, e30
Despite significant advancements in healthcare technology, digital health solutions – especially those for serious mental illnesses – continue to fall short of their potential across both clinical practice and efficacy. The utility and impact of medicine, including digital medicine, hinges on relationships, trust, and engagement, particularly in the field of mental health. This paper details results from Phase 1 of a two-part study that seeks to engage people with schizophrenia, their family members, and clinicians in co-designing a digital mental health platform for use across different cultures and contexts in the United States and India.
Each site interviewed a mix of clinicians, patients, and their family members in focus groups (n = 20) of two to six participants. Open-ended questions and discussions inquired about their own smartphone use and, after a demonstration of the mindLAMP platform, specific feedback on the app's utility, design, and functionality.
Our results based on thematic analysis indicate three common themes: increased use and interest in technology during coronavirus disease 2019 (COVID-19), concerns over how data are used and shared, and a desire for concurrent human interaction to support app engagement.
People with schizophrenia, their family members, and clinicians are open to integrating technology into treatment to better understand their condition and help inform treatment. However, app engagement is dependent on technology that is complementary – not substitutive – of therapeutic care from a clinician.
The Rapid ASKAP Continuum Survey I: Design and first results
Australian SKA Pathfinder
D. McConnell, C. L. Hale, E. Lenc, J. K. Banfield, George Heald, A. W. Hotan, James K. Leung, Vanessa A. Moss, Tara Murphy, Andrew O'Brien, Joshua Pritchard, Wasim Raja, Elaine M. Sadler, Adam Stewart, Alec J. M. Thomson, M. Whiting, James R. Allison, S. W. Amy, C. Anderson, Lewis Ball, Keith W. Bannister, Martin Bell, Douglas C.-J. Bock, Russ Bolton, J. D. Bunton, A. P. Chippendale, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, N. Gupta, Douglas B. Hayman, Ian Heywood, C. A. Jackson, Bärbel S. Koribalski, Karen Lee-Waddell, N. M. McClure-Griffiths, Alan Ng, Ray P. Norris, Chris Phillips, John E. Reynolds, Daniel N. Roxby, Antony E. T. Schinckel, Matt Shields, Chenoa Tremblay, A. Tzioumis, M. A. Voronkov, Tobias Westmeier
Published online by Cambridge University Press: 30 November 2020, e048
The Rapid ASKAP Continuum Survey (RACS) is the first large-area survey to be conducted with the full 36-antenna Australian Square Kilometre Array Pathfinder (ASKAP) telescope. RACS will provide a shallow model of the ASKAP sky that will aid the calibration of future deep ASKAP surveys. RACS will cover the whole sky visible from the ASKAP site in Western Australia and will cover the full ASKAP band of 700–1800 MHz. The RACS images are generally deeper than the existing NRAO VLA Sky Survey and Sydney University Molonglo Sky Survey radio surveys and have better spatial resolution. All RACS survey products will be public, including radio images (with $\sim$ 15 arcsec resolution) and catalogues of about three million source components with spectral index and polarisation information. In this paper, we present a description of the RACS survey and the first data release of 903 images covering the sky south of declination $+41^\circ$ made over a 288-MHz band centred at 887.5 MHz.
Site-specific fertilizer nitrogen management in Bt cotton using chlorophyll meter
Arun Shankar, R. K. Gupta, Bijay-Singh
Journal: Experimental Agriculture / Volume 56 / Issue 3 / June 2020
Field experiments were conducted to standardize protocols for site-specific fertilizer nitrogen (N) management in Bt cotton using a Soil Plant Analysis Development (SPAD) chlorophyll meter. The performance of different SPAD-based site-specific N management scenarios was evaluated vis-à-vis the blanket fertilizer N recommendation. The N treatments comprised a no-N control, four fixed-time and fixed-dose treatments (60, 90, 120, and 150 kg N ha−1) including the recommended dose (150 kg N ha−1), and eight fixed-time, adjustable-dose treatments based on critical SPAD readings of 45 and 41 at the first flowering and boll formation stages, respectively. The results revealed that applying 45 or 60 kg N ha−1 at the thinning stage and a critical SPAD value-guided dose of 45 or 30 kg N ha−1 at first flowering resulted in yields similar to those recorded with the recommended dose of 150 kg N ha−1. However, significantly higher N use efficiency as well as 30–40% less total fertilizer N use was recorded with site-specific N management. Applying 30 kg N ha−1 at thinning and a SPAD meter-guided 45 kg N ha−1 at first flowering was not enough, and an additional SPAD meter-guided 45 kg N ha−1 at boll formation was required to sustain yields equivalent to the blanket recommendation, while still applying 20% less fertilizer N. Our data revealed that SPAD meter-based site-specific N management in Bt cotton results in optimum yield with dynamic adjustment of fertilizer N doses at the first flowering and boll formation stages. The total amount of fertilizer N under site-specific management strategies was substantially less than the blanket recommendation of 150 kg N ha−1, although the extent may vary between fields.
Experiential learnings from the Nipah virus outbreaks in Kerala towards containment of infectious public health emergencies in India
Rima R. Sahay, Pragya D. Yadav, Nivedita Gupta, Anita M. Shete, Chandni Radhakrishnan, Ganesh Mohan, Nikhilesh Menon, Tarun Bhatnagar, Krishnasastry Suma, Abhijeet V. Kadam, P. T. Ullas, B. Anu Kumar, A. P. Sugunan, V. K. Sreekala, Rajan Khobragade, Raman R. Gangakhedkar, Devendra T. Mourya
Journal: Epidemiology & Infection / Volume 148 / 2020
Published online by Cambridge University Press: 23 April 2020, e90
A Nipah virus (NiV) outbreak occurred in Kozhikode district, Kerala, India in 2018 with a case fatality rate of 91% (21/23). In 2019, a single case with full recovery occurred in Ernakulam district. We describe the response and control measures of the Indian Council of Medical Research and the Kerala State Government during the 2019 NiV outbreak. The establishment of point-of-care assays and a monoclonal antibody administration facility for early diagnosis, response and treatment, together with intensified contact tracing, bio-risk management, and hospital infection control training of healthcare workers, contributed to the effective control and containment of the NiV outbreak in Ernakulam.
Effect of dopant on the morphology and electrochemical performance of Ni1-xCaxCo2O4 (0 ≤ x ≤ 0.8) oxide hierarchical structures
D. Guragain, C. Zequine, R. Bhattarai, J. Choi, R. K. Gupta, X. Shen, S. R. Mishra
Journal: MRS Advances / Volume 5 / Issue 48-49 / 2020
Published online by Cambridge University Press: 23 March 2020, pp. 2487-2494
Binary metal oxides are increasingly used as supercapacitor electrode materials in energy-storage devices. In particular, NiCo2O4 has shown promising electrocapacitive performance with high specific capacitance and energy density. The electrocapacitive performance of these oxides largely depends on their morphology and on electrical properties governed by their energy band-gaps and defects. The morphological structure of NiCo2O4 can be altered via the synthesis route, while the energy band-gap can be altered by doping. Doping can also enhance crystal stability and bring about grain refinement, which can further improve the surface area needed for high specific capacitance. In view of the above, this study evaluates the electrochemical performance of Ca-doped Ni1-xCaxCo2O4 (0 ≤ x ≤ 0.8) compounds. The results suggest promising applications for electrodes in future supercapacitors.
Tranexamic acid has no advantage in head and neck surgical procedures: a randomised, double-blind, controlled clinical trial
A Thakur, S Gupta, J S Thakur, R S Minhas, R K Azad, M S Vasanthalakshmi, D R Sharma, N K Mohindroo
Journal: The Journal of Laryngology & Otology / Volume 133 / Issue 12 / December 2019
Published online by Cambridge University Press: 18 November 2019, pp. 1024-1032
To assess the effect of tranexamic acid in head and neck surgical procedures.
A prospective, double-blind and randomised, parallel group, placebo-controlled clinical trial was conducted. Ninety-two patients undergoing various head and neck surgical procedures were randomised. Subjects received seven infusions of coded drugs (tranexamic acid or normal saline) starting at the time of skin closure. Haematological, biochemical, blood loss and other parameters were observed by the staff, who were blinded to patients' group allocation (case or control).
Patients were analysed on the basis of type of surgery. Fifty patients who had undergone surgical procedures, including total thyroidectomy, total parotidectomy, and various neck dissections with or without primary tumour excision, were included in the first group. The second group comprised 41 patients who had undergone hemithyroidectomy, lobectomy or superficial parotidectomy. There was no statistical difference in blood parameters between the two groups. There was a reduction in post-operative drain volume, but it was not significant.
Although this prospective, randomised, placebo-controlled clinical trial found a reduction in post-operative drain volume in tranexamic acid groups, the difference was not statistically significant between the various head and neck surgical procedure groups.
Rice straw biochar improves soil fertility, growth, and yield of rice–wheat system on a sandy loam soil
R. K. Gupta, Ashaq Hussain, Yadvinder-Singh, S. S. Sooch, J. S. Kang, Sandeep Sharma, G. S. Dheri
Journal: Experimental Agriculture / Volume 56 / Issue 1 / February 2020
Print publication: February 2020
Biochar has received attention due to its potential for mitigating climate change through carbon sequestration in soil and for improving soil quality and crop productivity. This study evaluated the effects of rice straw biochar (RSB) and rice husk ash (RHA), each applied at 5 Mg ha−1, and four N levels (0, 40, 80, and 120 kg ha−1) on soil fertility, growth, and yield of rice and wheat over three consecutive rice–wheat rotations. RSB significantly increased electrical conductivity, dehydrogenase activity, and P and K contents compared to the control (no amendment) up to 7.5 cm soil depth. Neither RSB nor RHA influenced shoot N concentration in wheat, but both significantly increased P and K concentrations at 60 days after sowing. Grain yields of both rice and wheat were significantly higher with RSB than with the control (no amendment) and RHA treatments. While the highest grain yields of rice and wheat were observed at 120 kg N ha−1 in the RHA and no-biochar plots, a significant increase in grain yields was observed at 80 kg N ha−1 in the RSB treatment, thereby saving 40 kg N ha−1 in each crop. Both agronomic and recovery N efficiencies in rice and wheat were significantly higher in RSB-amended soil than in the control. Significant positive correlations were observed between soil N, P, and K concentrations and total N, P, and K concentrations in the aboveground biomass of wheat at 60 days after sowing. This study showed the potential benefits of applying RSB for improving soil fertility and yields of rice and wheat in a rice–wheat system.
The influence of maternal and infant nutrition on cardiometabolic traits: novel findings and future research directions from four Canadian birth cohort studies
Nutrition Society Summer Meeting 2018
R. J. de Souza, M. A. Zulyniak, J. C. Stearns, G. Wahi, K. Teo, M. Gupta, M. R. Sears, P. Subbarao, S. S. Anand
Journal: Proceedings of the Nutrition Society / Volume 78 / Issue 3 / August 2019
Print publication: August 2019
A mother's nutritional choices while pregnant may have a great influence on her baby's development in the womb and during infancy. There is evidence that what a mother eats during pregnancy interacts with her genes to affect her child's susceptibility to poor health outcomes including childhood obesity, pre-diabetes, allergy and asthma. Furthermore, what an infant eats after birth can change his or her intestinal bacteria, which can further influence the development of these poor outcomes. In the present paper, we review the importance of birth cohorts, the formation and early findings of a multi-ethnic birth cohort alliance in Canada, and the future research directions for this alliance. We summarise a method for harmonising the collection and analysis of self-reported dietary data across multiple cohorts and provide examples of how this birth cohort alliance has contributed to our understanding of gestational diabetes risk; ethnic and diet-influenced differences in the healthy infant microbiome; and the interplay between diet, ethnicity and birth weight. Ongoing work in this birth cohort alliance will focus on the use of metabolomic profiling to measure dietary intake, discovery of unique diet–gene and diet–epigenome interactions, and qualitative interviews with families of children at risk of metabolic syndrome. Our findings to date and future areas of research will advance the evidence base that informs dietary guidelines in pregnancy, infancy and childhood, and will be relevant to diverse and high-risk populations of Canada and other high-income countries.
LO04: Canadian best practice diagnostic algorithm for acute aortic syndrome
R. Ohle, S. McIsaac, J. Yan, K. Yadav, P. Jetty, R. Atoui, N. Fortino, B. Wilson, N. Coffey, T. Scott, A. Cournoyer, F. Rubens, D. Savage, D. Ansell, J. Middaugh, A. Gupta, B. Bittira, Y. Callaway, S. Bignucolo, B. Mc Ardle, E. Lang
Journal: Canadian Journal of Emergency Medicine / Volume 21 / Issue S1 / May 2019
Published online by Cambridge University Press: 02 May 2019, pp. S7-S8
Print publication: May 2019
Introduction: Acute aortic syndrome (AAS) is a time-sensitive aortic catastrophe that is often misdiagnosed. There are currently no Canadian guidelines to aid in diagnosis. Our goal was to adapt the existing American Heart Association (AHA) and European Society of Cardiology (ESC) diagnostic algorithms for AAS into a Canadian evidence-based best-practices algorithm targeted at emergency medicine physicians. Methods: We chose to adapt existing high-quality clinical practice guidelines (CPG) previously developed by the AHA/ESC using the GRADE ADOLOPMENT approach. We created a National Advisory Committee of 21 members from across Canada, including academic, community and remote/rural emergency physicians/nurses, cardiothoracic and cardiovascular surgeons, cardiac anesthesiologists, critical care physicians, cardiologists, radiologists and patient representatives. The Advisory Committee communicated through multiple teleconference meetings, emails and a one-day in-person meeting. The panel prioritized questions and outcomes, using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to assess evidence and make recommendations. The algorithm was prepared and revised through feedback and discussion in an iterative process until consensus was achieved. Results: The diagnostic algorithm comprises an updated pretest probability assessment tool with further testing recommendations based on risk level. The updated tool incorporates the likelihood of an alternative diagnosis and point-of-care ultrasound. The final best-practice diagnostic algorithm defined risk levels as Low (≤0.5%: no further testing), Moderate (0.6–5%: further testing required) and High (>5%: computed tomography, magnetic resonance imaging or transesophageal echocardiography). During the consensus and feedback processes, we addressed a number of issues and concerns. D-dimer can be used to reduce the probability of AAS in the intermediate-risk group, but should not be used in the low- or high-risk groups. Ultrasound was incorporated as a bedside clinical examination option in pretest probability assessment for aortic insufficiency and abdominal/thoracic aortic aneurysms. Conclusion: We have created the first Canadian best-practice diagnostic algorithm for AAS. We hope this diagnostic algorithm will standardize and improve the diagnosis of AAS in emergency departments across Canada.
Identification of proteins for controlled nucleation of metal-organic crystals for nanoenergetics
Zachary E. Reinert, Chia-Suei Hung, Andrea R. Poole, Joseph M. Slocik, Marquise G. Crosby, Srikanth Singamaneni, Rajesh R. Naik, Patrick B. Dennis, Wendy J. Crookes-Goodson, Maneesh K. Gupta
Journal: MRS Communications / Volume 9 / Issue 2 / June 2019
Published online by Cambridge University Press: 24 April 2019, pp. 456-463
Here, we report that a marine sandworm Nereis virens jaw protein, Nvjp1, nucleates hemozoin with similar activity as the native parasite hemozoin protein, HisRPII. X-ray diffraction and scanning electron microscopy confirm the identity of the hemozoin produced from Nvjp1-containing reactions. Finally, we observed that nAl assembled with hemozoin from Nvjp1 reactions has a substantially higher energetic output when compared to analogous thermite from the synthetic standard or HisRPII-nucleated hemozoin. Our results demonstrate that a marine sandworm protein can nucleate malaria pigment and set the stage for engineering recombinant hemozoin production for nanoenergetic applications.
Repeated Chlamydia trachomatis infections are associated with lower bacterial loads
K. Gupta, R. K. Bakshi, B. Van Der Pol, G. Daniel, L. Brown, C. G. Press, R. Gorwitz, J. Papp, J. Y. Lee, W. M. Geisler
Published online by Cambridge University Press: 04 October 2018, e18
Chlamydia trachomatis (CT) infections remain highly prevalent. CT reinfection occurs frequently within months after treatment, likely contributing to sustaining the high CT infection prevalence. Sparse studies have suggested CT reinfection is associated with a lower organism load, but it is unclear whether CT load at the time of treatment influences CT reinfection risk. In this study, women presenting for treatment of a positive CT screening test were enrolled, treated and returned for 3- and 6-month follow-up visits. CT organism loads were quantified at each visit. We evaluated for an association of CT bacterial load at initial infection with reinfection risk and investigated factors influencing the CT load at baseline and follow-up in those with CT reinfection. We found no association of initial CT load with reinfection risk. We found a significant decrease in the median log10 CT load from baseline to follow-up in those with reinfection (5.6 CT/ml vs. 4.5 CT/ml; P = 0.015). Upon stratification of reinfected subjects based upon presence or absence of a history of CT infections prior to their infection at the baseline visit, we found a significant decline in the CT load from baseline to follow-up (5.7 CT/ml vs. 4.3 CT/ml; P = 0.021) exclusively in patients with a history of CT infections prior to our study. Our findings suggest repeated CT infections may lead to possible development of partial immunity against CT.
Comparison of basal lamella relaxing incision and combined conventional medialisation and controlled synechiae in functional endoscopic sinus surgery: a randomised prospective study
D D Maharaj, R S Virk, S Bansal, A K Gupta
Journal: The Journal of Laryngology & Otology / Volume 132 / Issue 7 / July 2018
To compare combined conventional Freer medialisation and controlled synechiae, performed for middle meatal access (during the initial steps of functional endoscopic sinus surgery) and post-operative middle turbinate medialisation, with basal lamella relaxing incision, the latter of which is a single step for achieving both middle meatal access and post-operative medialisation. The study also compared the effects of controlled synechiae and basal lamella relaxing incision on post-operative olfaction.
A randomised prospective study was performed on 52 nasal cavity sides (32 patients). Only basal lamella relaxing incision was performed in one group, and both conventional medialisation and controlled synechiae were performed in the other. Intra-operative and post-operative photography was used to measure the middle meatal area. A pocket smell test was used to assess olfaction.
There were no significant differences in operative middle meatal access and post-operative medialisation of the middle turbinate. Post-operative olfaction was affected more in the combined conventional medialisation and controlled synechiae group, compared to the basal lamella relaxing incision group, but this finding was not statistically significant.
Basal lamella relaxing incision is an effective single-step technique for achieving adequate middle meatal access and post-operative medialisation, with no significant effect on olfaction.
P058: Paramedic recognition and management of anaphylaxis in the prehospital setting
M. Welsford, R. Gupta, K. Samoraj, S. Sandhanwalia, M. Kerslake, L. Ryan, C. Shortt
Published online by Cambridge University Press: 11 May 2018, p. S77
Introduction: Anaphylaxis is a life-threatening condition that paramedics are equipped to treat effectively in the field. Current literature suggests improvements in paramedic recognition and treatment of anaphylaxis could be made. The aim of this study was to compare the proportion of cases of anaphylaxis appropriately treated with epinephrine by paramedics before and after a targeted educational intervention. Methods: This was a retrospective medical records review of patients with anaphylaxis managed by primary or advanced care paramedics in five Emergency Medical Service areas in Ontario, before and after an educational module was introduced. This module included education on anaphylaxis diagnosis, recognition, treatment priorities, and feedback on the recognition and management from the before period. All paramedic call records (PCRs) coded as local allergic reaction or anaphylaxis during 12-month periods before and after the intervention were reviewed by trained data abstractors to determine if patients met an international definition of anaphylaxis. The details of interventions performed by the paramedics were used to determine primary and secondary outcomes. Results: Of the 600 PCRs reviewed, 99/120 PCRs in the before and 300/480 in the after period were included. Of the charts included, 63/99 (63.6%) in the before and 136/300 (45.3%) in the after period met criteria for anaphylaxis (p=0.002). Of the cases meeting anaphylaxis criteria, 41/63 (65.1%) in the before and 88/136 (64.7%) in the after period were correctly identified as anaphylaxis (p=0.96). Epinephrine was administered in 37/63 (58.7%) of anaphylaxis cases in the before period and 76/136 (55.9%) in the after period (p=0.70). Anaphylactic patients with only two-system involvement received epinephrine in 20/40 (50.0%) cases in the before period and 45/93 (48.4%) in the after period (p=0.86). Conclusion: There are gaps in paramedic recognition and management of anaphylaxis, particularly in cases of two-system involvement. These gaps persisted after the implementation of an educational intervention. Other quality interventions and periodic refreshers may be necessary to improve prehospital treatment of anaphylaxis. Limitations include an increase in overall cases and decrease in rate of true anaphylaxis in the after period, which may relate to better case identification after electronic PCR implementation and changes in paramedic recognition.
QBMG: quasi-biogenic molecule generator with deep recurrent neural network
Shuangjia Zheng, Xin Yan, Qiong Gu, Yuedong Yang, Yunfei Du, Yutong Lu & Jun Xu (ORCID: orcid.org/0000-0002-1075-0337)
Journal of Cheminformatics volume 11, Article number: 5 (2019)
Biogenic compounds are important materials for drug discovery and chemical biology. In this work, we report a quasi-biogenic molecule generator (QBMG) that composes virtual quasi-biogenic compound libraries by means of gated recurrent unit recurrent neural networks. The libraries include stereochemical properties, which are crucial features of natural products. QBMG can reproduce the property distribution of the underlying training set while generating realistic, novel molecules outside of the training set. Furthermore, these compounds are associated with known bioactivities. A focused compound library based on a given chemotype/scaffold can also be generated by this approach combined with transfer learning technology. This approach can be used to generate virtual compound libraries for pharmaceutical lead identification and optimization.
Biogenic compounds are important for medicinal chemistry and chemical biology [1]. It has been reported that more than 50% of marketed drugs were derived from biogenic molecules [2], because both biogenic compounds and pharmaceutical agents are biologically relevant and recognized by organisms [3]. However, identifying and isolating biogenic compounds from natural resources requires tremendous effort [4]. Current virtual screening technologies allow us to efficiently identify biogenic molecules for pharmaceutical uses [5], but it is increasingly rare to find biogenic compounds with new scaffolds [6]. In practice, biogenic compounds can serve as probes for pharmaceutical and biological studies and can inspire chemists to make quasi-biogenic compounds (compounds modified from natural products).
Hence, many experimental approaches have been reported for synthesizing quasi-biogenic compound libraries, such as target-oriented synthesis [7, 8], diversity-oriented synthesis (DOS) [7, 9, 10], biology-oriented synthesis (BIOS) [11, 12], and function-oriented synthesis (FOS) [13]. Virtual quasi-biogenic library generation methods have also been reported; for example, Yu described recursive atom-based compound library enumeration [14]. Based on analyses of the property distributions of drugs, natural products, and combinatorial libraries, Feher and Schmidt reported that such natural product-like libraries resemble synthetic compounds more than natural products [15], so the entirety of the biologically relevant chemical space is largely ignored [1]. Therefore, biologically relevant features should be taken into account when generating natural product-like libraries.
Recent advances in deep learning technology have brought many achievements in small-molecule and peptide design. Aspuru-Guzik's group reported automated chemical design using a data-driven continuous representation of molecules [16]. Segler and colleagues reported a method to generate virtual focused libraries using recurrent neural networks fine-tuned with a small number of known active compounds [17]. Later studies combined deep reinforcement learning [18], Monte Carlo search [19], de novo peptide design [20], and generative adversarial networks [21].
The main deficits of previous approaches are that (1) stereochemistry was not explicitly considered in the generated libraries, and (2) there was no de novo approach to generating focused libraries biased toward a specified scaffold/chemotype (important for lead optimization in medicinal chemistry). Moreover, we are not aware of any models used to construct the kind of virtual biogenic-like compound libraries we envisioned.
In this work, we report a deep recurrent neural network (RNN) [22] with gated recurrent units (GRU) [23] that overcomes these deficits and generates a quasi-biogenic compound library exploring a greater biogenic diversity space for medicinal chemistry and chemical biology studies. By combining transfer learning [24], we can build focused compound libraries biased toward a specific chemotype for lead optimization.
Biogenic compound structure data
163,000 biogenic compound structures were derived from the biogenic library of ZINC15 [25]. These compounds are primary and secondary metabolites. The chemical structure data were converted into canonical SMILES format [26]. The structures were filtered by removing molecules containing metal elements, small molecules (fewer than 10 non-hydrogen atoms), and large molecules (more than 100 non-hydrogen atoms). This process resulted in 153,733 biogenic structures.
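As an illustration of this preprocessing, the sketch below applies the stated filters with RDKit. It is our own minimal sketch: the function name and the metal symbol set are illustrative assumptions, since the paper does not list the excluded elements.

```python
from rdkit import Chem

# Illustrative metal set -- an assumption; the paper does not enumerate the excluded elements.
METALS = {"Li", "Na", "K", "Mg", "Ca", "Al", "Fe", "Cu", "Zn", "Mn", "Co", "Ni"}

def clean_smiles(smiles):
    """Return canonical SMILES if the molecule passes the stated filters, else None."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                                    # unparseable structure
        return None
    if not 10 <= mol.GetNumHeavyAtoms() <= 100:        # keep 10-100 non-hydrogen atoms
        return None
    if any(a.GetSymbol() in METALS for a in mol.GetAtoms()):
        return None
    return Chem.MolToSmiles(mol)                       # canonicalized SMILES [26]
```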
ZINC biogenic-like compound reference
5060 ZINC biogenic-like compounds were collected from the biogenic-like subset of ZINC12 [27]. This library consists of synthetic compounds having a Tanimoto similarity of 80% or better to the biogenic library.
Active compound reference
The compounds in ChEMBL23 [28] are used as active compound references.
Molecular representation and tokenization
Biogenic molecules contain many chiral centers, charges, and ring-closure descriptors in their SMILES notations; in machine learning studies these symbols (e.g., @, =, #) are called tokens. Conventionally, each character of a SMILES notation is sent to the RNN for training, but this does not capture the biogenic features encoded in chiral centers, charges, and ring-closure descriptors. To preserve these important features, we train the RNN with normal tokens plus combined tokens (SMILES notations grouped by a pair of square brackets []). With this rule, the original SMILES data yield the vocabulary shown in Fig. 1. Compared with 35 tokens and an average sequence length of 82.1 ± 34.9 (mean ± SD) under the conventional method, this approach results in 87 tokens and an average sequence length of 62.4 ± 25.27 for the biogenic library.
Molecular representation and tokenization. A SMILES notation grouped by a pair of square brackets is considered one token
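A minimal tokenizer implementing this bracket rule might look as follows (our sketch; names are illustrative). It deliberately keeps [...] groups intact while splitting everything else character by character, as described above.

```python
import re

# One token per bracketed group (e.g. [C@@H], [O-], [nH]); all other symbols are single characters.
TOKEN_RE = re.compile(r"\[[^\]]*\]|.")

def tokenize(smiles):
    return ["<START>"] + TOKEN_RE.findall(smiles) + ["<END>"]

tokenize("C[C@@H](N)C(=O)[O-]")
# ['<START>', 'C', '[C@@H]', '(', 'N', ')', 'C', '(', '=', 'O', ')', '[O-]', '<END>']
```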
Word embedding process
In a conventional one-hot encoding approach, each molecule is represented by a sequence of token vectors. All token vectors have the same length (in our case 87, as shown in Fig. 1), and every component of a vector is zero except the one at the token's index position. This storage scheme occupies a great deal of memory and results in inefficient performance. We therefore adopt word embedding, as commonly used in natural language processing [29]. With this method, each one-hot token vector is compressed into an information-enriched vector: a token is transformed from a space with one dimension per word to a continuous vector through unsupervised learning. This representation captures the "semantic similarity" of each token and expedites training convergence [30]. In summary, each molecular structure in our work is converted into a SMILES string, which is encoded as a one-hot matrix and then transformed into a word embedding matrix at the embedding layer.
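In PyTorch terms, the change amounts to replacing sparse one-hot vectors with a learned lookup table. A minimal sketch (dimensions taken from the text; variable names are ours):

```python
import torch
import torch.nn as nn

vocab_size, emb_dim = 89, 64          # 87 SMILES tokens plus <START>/<END>; 64-d embeddings
embedding = nn.Embedding(vocab_size, emb_dim)

token_ids = torch.tensor([1, 17, 5])  # integer indices replace 87-wide one-hot vectors
vectors = embedding(token_ids)        # shape (3, 64): dense, trainable representations
```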
The modified recurrent neural network structure is depicted in Fig. 2a. The whole model consists of an embedding layer, a GRU structure, and a densely connected layer. The embedding layer consists of 64 units, which translate every token from a one-hot vector into a 64-dimensional vector. This vector is then passed to the GRUs. The GRU stack consists of 3 layers, each with 512 neurons. The GRUs feed a densely connected linear layer of 89 neurons, whose output signals are combined with a softmax function; this layer size equals the vocabulary size once the additional <START> and <END> tokens, which mark the start and end of a SMILES string, are included. For a GRU cell (Fig. 2a), \(h_{t}\) is the hidden state and \(\tilde{h}_{t}\) is the candidate hidden state; \(r_{t}\) and \(z_{t}\) are the reset and update gates. With these gates, the network 'knows' how to combine new input with previously memorized data and update its memory. The details of the GRU operations are described in Additional file 1.
Network architecture and training procedure. a Unfolded representation of the training model, which contains the embedding layer, GRU structure, fully connected linear layer and output layer. The structure of the GRU cell is detailed on the right. b Flow-chart for the training procedure with a molecule. A vectorized token of the molecule is input as \(x_{t}\) at each time step, and the probability of outputting \(x_{t + 1}\) as the next token is maximized. c The new molecular structure is composed by sequentially concatenating the SMILES sub-strings returned by the RNN network
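Putting the pieces of Fig. 2a together, a PyTorch sketch of the described network (a 64-unit embedding, three GRU layers of 512 neurons, and an 89-unit output layer) could read as follows; the class and argument names are our own:

```python
import torch
import torch.nn as nn

class QBMGNet(nn.Module):
    def __init__(self, vocab_size=89, emb_dim=64, hidden_size=512, num_layers=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)        # token id -> 64-d vector
        self.gru = nn.GRU(emb_dim, hidden_size,
                          num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)              # densely connected layer

    def forward(self, x, hidden=None):
        emb = self.embedding(x)               # (batch, seq) -> (batch, seq, 64)
        out, hidden = self.gru(emb, hidden)   # (batch, seq, 512)
        return self.fc(out), hidden           # logits; softmax is applied at sampling time
```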
Training procedure
Training an RNN to generate SMILES strings is done by maximizing the probability of the next token in the target SMILES string given the preceding tokens. At each step, the RNN model produces a probability distribution over what the next character is likely to be, and the aim is to minimize the loss function value and maximize the likelihood assigned to the expected token. The parameters \(\theta\) of the network were trained with the following loss function \(J(\theta)\):
$$J(\theta) = - \sum_{t = 1}^{T} \log P\left(x^{t} \mid x^{t - 1}, \ldots, x^{1}\right)$$
A simplified depiction of the training procedure with one biogenic molecule is shown in Fig. 2b.
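A single optimization step for this objective might be sketched as below (our sketch; the Adam settings are those reported in the implementation paragraph further down). Cross-entropy over the shifted sequence is the mean-per-token form of \(J(\theta)\).

```python
import torch
import torch.nn.functional as F

model = QBMGNet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # batch size 128, lr 0.001 per the text

def train_step(batch):
    # batch: (batch_size, T) integer token ids, starting with <START> and ending with <END>
    inputs, targets = batch[:, :-1], batch[:, 1:]           # predict token t+1 from tokens <= t
    logits, _ = model(inputs)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))             # mean-per-token version of J(theta)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```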
The model predicts biogenic molecules based on the token probability distributions learned from the training set. The prediction consists of the following steps (a minimal sampling loop is sketched after the list):
1. a <START> token is sent to the network;
2. the network replies with another token (a SMILES sub-string);
3. the new token is sent back to the network to obtain the next token;
4. step (3) is repeated until the network replies with the <END> token.
The new molecular structure is composed by sequentially concatenating the SMILES sub-strings returned by the RNN network.
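The four steps above translate into a short autoregressive loop. In this hedged sketch, `stoi`/`itos` are assumed token-index dictionaries built from the Fig. 1 vocabulary, and `max_len` is an illustrative safety cap:

```python
import torch

def sample_smiles(model, stoi, itos, max_len=150):
    model.eval()
    token = torch.tensor([[stoi["<START>"]]])         # step 1: feed <START>
    hidden, pieces = None, []
    with torch.no_grad():
        for _ in range(max_len):
            logits, hidden = model(token, hidden)     # step 2: network replies with a distribution
            probs = torch.softmax(logits[:, -1, :], dim=-1)
            token = torch.multinomial(probs, num_samples=1)  # sample the next token
            symbol = itos[token.item()]
            if symbol == "<END>":                     # step 4: stop at <END>
                break
            pieces.append(symbol)                     # step 3: feed the new token back in
    return "".join(pieces)                            # concatenate sub-strings into one SMILES
```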
The GRU model was implemented using the PyTorch library [31] and trained with the ADAM optimizer [32] using a batch size of 128 and a learning rate of 0.001. The model was trained until convergence. After each training epoch, a sampled set of 512 SMILES strings was generated and its validity evaluated with RDKit [33].
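The per-epoch validity check can be reproduced in a few lines with RDKit; a sketch assuming `sample_smiles` (above) supplies the strings:

```python
from rdkit import Chem

def fraction_valid(model, stoi, itos, n=512):
    smiles = [sample_smiles(model, stoi, itos) for _ in range(n)]
    valid = [s for s in smiles if Chem.MolFromSmiles(s) is not None]
    return len(valid) / n        # ~0.97 after 50 epochs, per the results reported below
```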
Validating the predicted compound library
The following criteria are used to evaluate the compound library generated by the RNN model.
Natural product-likeness. The natural product-likeness score [34], a Bayesian measure of how similar molecules are to the chemical space covered by natural products, based on atom-centered fragments (a kind of fingerprint), was used to score the generated molecules. Note that we used the version packaged into RDKit in 2015.
Physico-chemical properties/descriptors. To visually compare the generated library against the biogenic library (the training set) and the ZINC biogenic-like library, t-SNE (t-distributed stochastic neighbor embedding) maps were calculated from a set of physico-chemical properties/descriptors (cLogP, MW, HDs, HAs, rotatable bonds, number of aromatic ring systems, and TPSA) following the method reported by Chevillard and Kolb [35]. Biogenic compounds are believed to be structurally diverse in terms of molecular weight, polarity, hydrophobicity, and aromaticity [3, 36, 37].
Ability to reproduce biogenic molecules. The generated compound library should be able to reproduce already existing biogenic molecular structures [17]. To validate this ability, a variant of five-fold cross-validation was used, consisting of the following steps:
1. the biogenic library was randomly divided into five sub-libraries (each containing 30,747 compounds);
2. these sub-libraries were used in a five-fold cross-validation protocol (one sub-library as the test set, the other four as the training set) to validate the RNN model;
3. after training in each fold, 153,733 unique compounds (the size of the biogenic library) were sampled, excluding those repeated in the training set;
4. the generated library was compared against the test library to identify overlapping molecules and to calculate the ratio of reproduced compounds;
5. the whole five-fold cross-validation process was repeated three times.
Scaffold validation. To validate the new-scaffold generation capacity of the RNN model, the generated, training, and test libraries were analyzed using the scaffold-based classification (SCA) method [38]. The Tanimoto similarities between scaffolds from the generated library and the training library were calculated with the standard RDKit similarity based on ECFP6 molecular fingerprints [39] (a minimal sketch follows). These similarities were used to compare the generated new scaffolds against the biogenic scaffolds.
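For this last criterion, ECFP6 corresponds to RDKit's Morgan fingerprint with radius 3. A minimal scaffold-similarity sketch (the bit length is our assumption):

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def ecfp6(smiles, n_bits=2048):
    """Radius-3 Morgan bit vector, the RDKit analogue of ECFP6."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius=3, nBits=n_bits)

def scaffold_similarity(scaffold_a, scaffold_b):
    return DataStructs.TanimotoSimilarity(ecfp6(scaffold_a), ecfp6(scaffold_b))
```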
Transfer learning for chemotype-biased library generation
It is important to generate a chemotype-biased library for lead optimization when a privileged scaffold is known. The transfer learning process consists of the following steps (a fine-tuning sketch follows the list):
1. select a focused compound library (FCL) from the biogenic library, in which all compounds share a common scaffold/chemotype;
2. re-train the RNN model with the FCL;
3. predict a chemotype-biased library.
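In code, the transfer step is ordinary fine-tuning of the already-trained network. In this sketch the checkpoint path, the data loader, and the reduced learning rate are our assumptions, not values from the paper:

```python
import torch
import torch.nn.functional as F

model = QBMGNet()
model.load_state_dict(torch.load("qbmg_biogenic.pt"))       # hypothetical pre-trained checkpoint
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # assumption: smaller fine-tuning rate

for epoch in range(20):                                     # the paper samples at 20/50/100 epochs
    for batch in fcl_loader:                                # assumed DataLoader over FCL token ids
        inputs, targets = batch[:, :-1], batch[:, 1:]       # same objective as pre-training
        logits, _ = model(inputs)
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```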
The ZINC biogenic library of 153,733 compounds was used to train the RNN model. As the number of epochs grew, the model converged (see Additional file 2 for learning curves). After training for 50 epochs, the model generated an average of 97% valid SMILES strings. 250,000 valid and unique SMILES strings were generated as the predicted library; after removing compounds found in the training set, 194,489 compounds remained. The average number of tokens per compound was 59.4 ± 23.1 (similar to that of the biogenic library). 153,733 compounds (the size of the training library) were selected from the predicted library to study their natural product-likeness and physico-chemical property/descriptor profiles.
Natural product-likeness of the predicted library
The natural product-likeness of the ZINC biogenic library (ZBL), the ZINC biogenic-like library (ZLL), and our predicted compound library (PCL) were compared as shown in Fig. 3. The average natural product-likeness scores of PCL and ZBL were 1.09 ± 1.46 (mean ± SD) and 1.22 ± 1.56, respectively, indicating that both are natural product-like and similar to each other. The average natural product-likeness of a ZLL compound was −1.25 ± 0.60, indicating that ZLL compounds differ from ZBL compounds and only partially overlap the biogenic chemical diversity space.
PCL is quasi-biogenic and ZBL is biogenic, and they are similar to each other. ZLL is different from ZBL and only partially overlaps the biogenic chemical diversity space
The chemical structures of the top-twelve PCL compounds and their natural product-likeness scores are depicted in Fig. 4. An important feature of our method is that the predicted quasi-biogenic compound library includes chiral molecules, an important characteristic of natural products; previously reported methods were not able to generate chiral isomers [14, 17,18,19]. The top-200 PCL compounds and their natural product-likeness scores are listed in Additional file 3.
Top-twelve PCL compounds and their natural product-likeness scores
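The natural product-likeness scores discussed here can be computed with the scorer distributed in RDKit's Contrib directory. A usage sketch, assuming the standard Contrib layout of a current RDKit installation:

```python
import os
import sys
from rdkit import Chem
from rdkit.Chem import RDConfig

sys.path.append(os.path.join(RDConfig.RDContribDir, "NP_Score"))
import npscorer                                    # Ertl et al. scorer [34], shipped with RDKit

np_model = npscorer.readNPModel()                  # loads the pre-trained fragment model
mol = Chem.MolFromSmiles("O=C1OC2=CC=CC=C2C=C1")   # coumarin, as an example input
print(npscorer.scoreMol(mol, np_model))            # higher score = more natural product-like
```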
The physico-chemical properties/descriptors profile of the predicted library
A t-SNE plot was derived from the physico-chemical properties/descriptors (cLogP, MW, HDs, HAs, rotatable bonds, number of aromatic ring systems, and TPSA) to profile the compound libraries and compare the chemical diversity space they occupy (Fig. 5). Again, PCL and ZBL occupied almost the same chemical diversity space, whereas ZLL only partially occupied the space covered by PCL and ZBL. The plot also indicated that ZLL is not as structurally diverse as PCL and ZBL.
Two-dimensional t-distributed stochastic neighbor embedding (t-SNE) plot for PCL, ZBL, and ZLL. PCL and ZBL occupy almost the same chemical diversity space. ZLL partially occupies the chemical diversity space occupied by PCL and ZBL
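The descriptor set behind Fig. 5 can be assembled directly from RDKit and embedded with scikit-learn's t-SNE. A sketch with our own variable names; the feature scaling step and the approximation of ring systems by the aromatic ring count are our assumptions:

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

def profile(smiles):
    m = Chem.MolFromSmiles(smiles)
    return [Crippen.MolLogP(m), Descriptors.MolWt(m),           # cLogP, MW
            Lipinski.NumHDonors(m), Lipinski.NumHAcceptors(m),  # HDs, HAs
            Descriptors.NumRotatableBonds(m),
            Lipinski.NumAromaticRings(m),                       # ~ aromatic ring systems
            Descriptors.TPSA(m)]

X = np.array([profile(s) for s in all_smiles])   # all_smiles: pooled PCL + ZBL + ZLL strings
X = StandardScaler().fit_transform(X)            # scaling assumption before t-SNE
coords = TSNE(n_components=2).fit_transform(X)   # 2-D map as in Fig. 5
```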
Ability to reproduce biogenic molecules
Five-fold cross-validation experiments indicated that the RNN model was mature after 20 epochs of training; the stopping point was determined from the change in loss values during training. At this stage, quasi-biogenic molecules were sampled to study the ability to reproduce already existing biogenic molecules. The results are presented in Table 1. The five-fold cross-validation experiments were repeated three times, and the model reproduced more than 25% of the compounds in the test library (TL). The RNN was robust, with little fluctuation across the three validation experiments, as indicated in the last column of Table 1. It is worth noting that the ratio of reproduced compounds (RPP) grows slightly with longer training even when the loss values are stable; to prevent overfitting, we chose a moderate stage (20 epochs) for the later experiments. The epochs-loss and epochs-RPP curves are shown in Additional file 2.
Table 1 The reproducibility studies with five-fold cross validation experiments
In the first trial of the first five-fold cross-validation experiment, we also generated a series of libraries of increasing size (1, 5, 10, 25, 50, and 100 times the TL size of 30,747). As shown in Fig. 6, RPP increases sharply as the PCL size grows toward 30 × TL, then trends toward saturation as the size increases further, ending at around 60–70%.
Reproducing known biogenic molecules in TL (30,747 compounds) with generated sets of different scales (1, 5, 10, 25, 50 and 100 times TL)
Several chemical structures reproduced by the RNN model from TL are listed in Fig. 7. More such compounds can be found in Additional file 4.
Chemical structures reproduced by the RNN model from TL
Scaffold diversity and novelty of the predicted library
In the first trial of the first five-fold cross-validation experiment, the scaffolds of the compound libraries TRL (122,896 compounds), TL (30,747 compounds), and PCL (153,733 compounds) were analyzed with the scaffold-based classification approach (SCA). The results are depicted in Fig. 8. 48,444 new scaffolds were derived from PCL, about twice the total number of scaffolds (23,806) derived from TRL and TL. 463 scaffolds were found in both PCL and TL but not in the training library (TRL), indicating that the RNN model can generate scaffolds that lie outside the training library yet reproduce scaffolds present in the test library. In summary, the RNN model is capable of generating diversified and novel compounds.
Scaffold diversity and novelty of the predicted compound library
Another way to measure the structural diversity and novelty of PCL is to examine the distribution of similarities between PCL and TRL. For each scaffold in PCL, we selected the most similar scaffold in TRL by computing their Tanimoto similarity. The PCL–TRL similarity distribution, depicted in Fig. 9a, shows a skewed Gaussian-like distribution biased toward higher similarity scores, with values ranging between 50 and 100%. This implies that PCL scaffolds are similar to TRL scaffolds but with variations in chemical diversity. We also calculated the nearest-neighbor Tanimoto similarity distributions of the scaffolds within PCL and within TRL, depicted in Fig. 9b, c. These distributions indicate that the chemical space of PCL is more diverse than that of TRL. This analysis further shows that the RNN model can generate diversified and novel quasi-biogenic compounds.
The nearest-neighbor scaffold similarity distributions of a PCL-TRL, b PCL–PCL and c TRL–TRL
Some similar compound scaffold pairs between PCL and TRL are listed in Table 2.
Table 2 Six similar compound scaffold pairs between TRL and PCL
Potential bioactivities of the predicted library
Of each PCL of 150 K compounds, about 1% (1510 ± 221, mean ± SD) existed in the ChEMBL library and are thus associated with bioactivities. Among these generated bioactive compounds, about 25% (371 ± 71) were found in the corresponding test libraries. The top-six such compounds and their activities are listed in Table 3.
Table 3 The six most bioactive generated molecules; compounds 1, 2 and 3 existed in the test set
The coumarin scaffold occurs broadly in the Rutaceae and Umbelliferae families, and its derivatives have many bioactivities, such as anticancer and anti-inflammatory activities [40,41,42]. The previously trained RNN model was re-trained with 2237 biogenic coumarin derivatives from ZBL, and the re-trained model predicted 50 K compounds at 20, 50, or 100 epochs, respectively. From each batch of 50 K compounds, those existing in TRL were excluded. 14,192 coumarin derivatives from the ChEMBL23 database were extracted as a bioactive reference library (BRL), with compounds duplicated in ZBL removed. As a comparison, we also trained a model on the biogenic coumarin derivatives without transfer learning and followed the same procedure described above. The scaffolds of each generated library were calculated with SCA to analyze the diversity of the chemical space.
The results of the transfer learning for chemotype-biased library generation are listed in Table 4. Compared with the pre-trained RNN model, the number of coumarin derivatives increased significantly (from 662 to more than 32 K). The model without transfer learning generated compound libraries with limited structural diversity and low correlation with bioactivity, even though it generated more coumarin derivatives. Moreover, as the number of transfer training epochs increased, the RNN model with transfer learning generated more coumarin-biased compounds; Table 4 also indicates that this number saturates with additional transfer epochs, so the number of epochs should be limited to avoid overfitting.
Table 4 Results of transfer learning for chemotype-biased library generation
The top-six predicted coumarin derivatives existing in BRL and their bioactivities are listed in Table 5.
Table 5 The top-six predicted coumarin derivatives existing in BRL and their bioactivities
In this work, for the first time, a gated recurrent unit deep neural network is applied to quasi-biogenic compound generation. We have also shown that a compound library biased toward a specific chemotype/scaffold can be generated by re-training the RNN model through transfer learning with a focused training library.
In summary, our method is able to (1) generate libraries that include stereochemistry, (2) reproduce a significant number of known compounds, including bioactive compounds, outside of the training sets, and (3) provide a de novo approach to generating focused libraries biased toward a specified scaffold.
Our RNN model predicts biogenic compounds after a number of training epochs that depends on the size of the training set. For a training set of about 150 K molecules, fewer than 50 training epochs suffice; the optimal number of epochs can be determined by monitoring the loss values and the capacity to generate new quasi-biogenic scaffolds. For a predicted biogenic compound, the average number of SMILES tokens is about 60 (similar to that of a compound in the training set).
QBMG can be used to generate virtual biogenic compound libraries for pharmaceutical lead identification and to design focused libraries for lead optimization.
RNN: recurrent neural network
GRU: gated recurrent unit
ZBL: ZINC biogenic library
ZLL: ZINC biogenic-like library
PCL: predicted compound library
Hert J, Irwin JJ, Laggner C, Keiser MJ, Shoichet BK (2009) Quantifying biogenic bias in screening libraries. Nat Chem Biol 5(7):479–483. https://doi.org/10.1038/nchembio.180
Newman DJ, Cragg GM (2016) Natural products as sources of new drugs from 1981 to 2014. J Nat Prod 79(3):629–661. https://doi.org/10.1021/acs.jnatprod.5b01055
Pascolutti M, Quinn RJ (2014) Natural products as lead structures: chemical transformations to create lead-like libraries. Drug Discov Today 19(3):215–221. https://doi.org/10.1016/j.drudis.2013.10.013
Rodrigues T, Reker D, Schneider P, Schneider G (2016) Counting on natural products for drug design. Nat Chem 8(6):531–541. https://doi.org/10.1038/nchem.2479
Chen Y, de Bruyn Kops C, Kirchmair J (2017) Data resources for the computer-guided discovery of bioactive natural products. J Chem Inf Model 57(9):2099–2111. https://doi.org/10.1021/acs.jcim.7b00341
Pye CR, Bertin MJ, Lokey RS, Gerwick WH, Linington RG (2017) Retrospective analysis of natural products provides insights for future discovery trends. Proc Natl Acad Sci USA 114(22):5601–5606. https://doi.org/10.1073/pnas.1614680114
Schreiber SL (2000) Target-oriented and diversity-oriented organic synthesis in drug discovery. Science 287(5460):1964–1969. https://doi.org/10.1126/science.287.5460.1964
Burke MD, Lalic G (2002) Teaching target-oriented and diversity-oriented organic synthesis at Harvard University. Chem Biol 9(5):535–541. https://doi.org/10.1016/S1074-5521(02)00143-6
Tan DS (2005) Diversity-oriented synthesis: exploring the intersections between chemistry and biology. Nat Chem Biol 1(2):74–84. https://doi.org/10.1038/nchembio0705-74
Dandapani S, Marcaurelle LA (2010) Current strategies for diversity-oriented synthesis. Curr Opin Chem Biol 14(3):362–370. https://doi.org/10.1016/j.cbpa.2010.03.018
Noren-Muller A, Reis-Correa I Jr, Prinz H, Rosenbaum C, Saxena K, Schwalbe HJ et al (2006) Discovery of protein phosphatase inhibitor classes by biology-oriented synthesis. Proc Natl Acad Sci USA 103(28):10606–10611. https://doi.org/10.1073/pnas.0601490103
Basu S, Ellinger B, Rizzo S, Deraeve C, Schurmann M, Preut H et al (2011) Biology-oriented synthesis of a natural-product inspired oxepane collection yields a small-molecule activator of the Wnt-pathway. Proc Natl Acad Sci USA 108(17):6805–6810. https://doi.org/10.1073/pnas.1015269108
Wender PA, Baryza JL, Brenner SE, Clarke MO, Craske ML, Horan JC et al (2004) Function oriented synthesis: the design, synthesis, PKC binding and translocation activity of a new bryostatin analog. Curr Drug Discov Technol 1(1):1–11. https://doi.org/10.2174/1570163043484888
Yu MJ (2011) Natural product-like virtual libraries: recursive atom-based enumeration. J Chem Inf Model 51(3):541–557. https://doi.org/10.1021/ci1002087
Feher M, Schmidt JM (2003) Property distributions: differences between drugs, natural products, and molecules from combinatorial chemistry. J Chem Inf Comput Sci 43(1):218–227. https://doi.org/10.1021/ci0200467
Gomez-Bombarelli R, Wei JN, Duvenaud D, Hernandez-Lobato JM, Sanchez-Lengeling B, Sheberla D et al (2018) Automatic chemical design using a data-driven continuous representation of molecules. ACS Cent Sci 4(2):268–276. https://doi.org/10.1021/acscentsci.7b00572
Segler MHS, Kogej T, Tyrchan C, Waller MP (2018) Generating focused molecule libraries for drug discovery with recurrent neural networks. ACS Cent Sci 4(1):120–131. https://doi.org/10.1021/acscentsci.7b00512
Olivecrona M, Blaschke T, Engkvist O, Chen H (2017) Molecular de-novo design through deep reinforcement learning. J Cheminform 9(1):48. https://doi.org/10.1186/s13321-017-0235-x
Yang X, Zhang J, Yoshizoe K, Terayama K, Tsuda K (2017) ChemTS: an efficient python library for de novo molecular generation. Sci Technol Adv Mater 18(1):972–976. https://doi.org/10.1080/14686996.2017.1401424
Muller AT, Hiss JA, Schneider G (2018) Recurrent neural network model for constructive peptide design. J Chem Inf Model 58(2):472–479. https://doi.org/10.1021/acs.jcim.7b00414
Putin E, Asadulaev A, Vanhaelen Q, Ivanenkov Y, Aladinskaya AV, Aliper A et al (2018) Adversarial threshold neural computer for molecular de novo design. Mol Pharm. https://doi.org/10.1021/acs.molpharmaceut.7b01137
Cho K, van Merrienboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, et al. (2014) Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv:1406.1078
Chung J, Gulcehre C, Cho K, Bengio Y (2014) Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv:1412.3555v1
Shao L, Zhu F, Li X (2015) Transfer learning for visual categorization: a survey. IEEE Trans Neural Netw Learn Syst 26(5):1019–1034. https://doi.org/10.1109/TNNLS.2014.2330900
Sterling T, Irwin JJ (2015) ZINC 15—ligand discovery for everyone. J Chem Inf Model 55(11):2324–2337. https://doi.org/10.1021/acs.jcim.5b00559
SMILES. http://www.daylight.com/dayhtml/doc/theory/theory.smiles.html. Accessed 15 May 2018
Zni All. http://zinc.docking.org/subsets/zni-all. Accessed 15 May 2018
Gaulton A, Bellis LJ, Bento AP, Chambers J, Davies M, Hersey A et al (2012) ChEMBL: a large-scale bioactivity database for drug discovery. Nucleic Acids Res 40(Database issue):D1100–D1107. https://doi.org/10.1093/nar/gkr777
Mikolov T, Chen K, Corrado G, Dean J (2013) Efficient estimation of word representations in vector space. arXiv:1301.3781
Jaeger S, Fulle S, Turk S (2018) Mol2vec: unsupervised machine learning approach with chemical intuition. J Chem Inf Model 58(1):27–35. https://doi.org/10.1021/acs.jcim.7b00616
Pytorch. Version: 0.4.0. https://pytorch.org/
Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. arXiv:1412.6980
RDKit: open source cheminformatics. Version: 2017-09-3. http://www.rdkit.org/
Ertl P, Roggo S, Schuffenhauer A (2008) Natural product-likeness score and its application for prioritization of compound libraries. J Chem Inf Model 48(1):68–74. https://doi.org/10.1021/ci700286x
Chevillard F, Kolb P (2015) SCUBIDOO: a large yet screenable and easily searchable database of computationally created chemical compounds optimized toward high likelihood of synthetic tractability. J Chem Inf Model 55(9):1824–1835. https://doi.org/10.1021/acs.jcim.5b00203
Rosen J, Gottfries J, Muresan S, Backlund A, Oprea TI (2009) Novel chemical space exploration via natural products. J Med Chem 52(7):1953–1962. https://doi.org/10.1021/jm801514w
Koch MA, Schuffenhauer A, Scheck M, Wetzel S, Casaulta M, Odermatt A et al (2005) Charting biologically relevant chemical space: a structural classification of natural products (SCONP). Proc Natl Acad Sci USA 102(48):17272–17277. https://doi.org/10.1073/pnas.0503647102
Xu J (2002) A new approach to finding natural chemical structure classes. J Med Chem 45(24):5311–5320. https://doi.org/10.1021/jm010520k
Rogers D, Hahn M (2010) Extended-connectivity fingerprints. J Chem Inf Model 50(5):742–754. https://doi.org/10.1021/ci100050t
Wu L, Wang X, Xu W, Farzaneh F, Xu R (2009) The structure and pharmacological functions of coumarins and their derivatives. Curr Med Chem 16(32):4236–4260. https://doi.org/10.2174/092986709789578187
Kontogiorgis C, Detsi A, Hadjipavlou-Litina D (2012) Coumarin-based drugs: a patent review (2008-present). Expert Opin Ther Pat 22(4):437–454. https://doi.org/10.1517/13543776.2012.678835
Borges F, Roleira F, Milhazes N, Santana L, Uriarte E (2005) Simple coumarins and analogues in medicinal chemistry: occurrence, synthesis and biological activity. Curr Med Chem 12(8):887–916. https://doi.org/10.2174/0929867053507315
SZ, XY, and JX contributed the concept and implementation. SZ and XY co-designed the experiments. SZ was responsible for programming. All authors contributed to the interpretation of results. SZ and XY wrote the manuscript. JX reviewed and edited the manuscript. All authors read and approved the final manuscript.
The National Supercomputer Center in Guangzhou provided computing resources for this project.
The authors declare that they have no competing interests.
Project home page: https://github.com/SYSU-RCDD/QBMG. Operating system: Platform independent. Programming language: Python. Other requirements: Python3.6, Pytorch, RDKit. License: MIT.
This work was funded in part by the National Science & Technology Major Project of the Ministry of Science and Technology of China (2018ZX09735010), the GD Frontier & Key Technology Innovation Program (2015B010109004), GD-NSF (2016A030310228), the Natural Science Foundation of China (U1611261, 61772566), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (2016ZT06D211), and the National Science Foundation of China (81473138).
Shuangjia Zheng and Xin Yan are equal contributors
Research Center for Drug Discovery, School of Pharmaceutical Sciences, Sun Yat-Sen University, 132 East Circle at University City, Guangzhou, 510006, China
Shuangjia Zheng
, Xin Yan
, Qiong Gu
& Jun Xu
School of Computer Science and Technology, Wuyi University, 99 Yingbin Road, Jiangmen, 529020, China
Jun Xu
National Supercomputer Center in Guangzhou and School of Data and Computer Science, Sun Yat-Sen University, 132 East Circle at University City, Guangzhou, 510006, China
Yuedong Yang
, Yunfei Du
& Yutong Lu
Correspondence to Xin Yan or Jun Xu.
13321_2019_328_MOESM1_ESM.docx
Additional file 1. GRU operations.
Additional file 2. Learning curves of biogenic library training.
13321_2019_328_MOESM3_ESM.csv
Additional file 3. Top-200 PCL compounds and their natural product-likeness scores.
Additional file 4. Compounds reproduced by the RNN model from test library.
Zheng, S., Yan, X., Gu, Q. et al. QBMG: quasi-biogenic molecule generator with deep recurrent neural network. J Cheminform 11, 5 (2019) doi:10.1186/s13321-019-0328-9 | CommonCrawl |
January 2015, 11(1): 135-144. doi: 10.3934/jimo.2015.11.135
Approximate and exact formulas for the $(Q,r)$ inventory model
Zvi Drezner 1 and Carlton Scott 2
Steven G. Mihaylo College of Business and Economics, California State University-Fullerton, Fullerton, CA 92634, United States
The Paul Merage School of Business, University of California, Irvine, CA 92697, United States
Received January 2012 Revised January 2014 Published May 2014
In this paper, new results are derived for the $(Q,r)$ stochastic inventory model. We derive approximate formulas for the optimal solution for the particular case of an exponential demand distribution. The approximate solution is within 0.29% of the optimal value. We also derive simple formulas for a Poisson demand distribution. The original expression involves double summation. We simplify the formula and are able to calculate the exact value of the objective function in $O(1)$ time with no need for any summations.
Keywords: Inventory models, exponential distribution, Poisson distribution.
Mathematics Subject Classification: Primary: 90B05; Secondary: 33F05, 34K2.
Citation: Zvi Drezner, Carlton Scott. Approximate and exact formulas for the $(Q,r)$ inventory model. Journal of Industrial & Management Optimization, 2015, 11 (1) : 135-144. doi: 10.3934/jimo.2015.11.135
M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, 7th printing, Applied Mathematics Series, National Bureau of Standards, Washington, DC, 1968. doi: 10.1119/1.1972842.
R. B. S. Brooks and J. Y. Lu, On the convexity of the backorder function for an E.O.Q. policy, Management Science, 15 (1969), 453-454.
A. Federgruen and Y.-S. Zheng, An efficient algorithm for computing an optimal $(r,Q)$ policy in continuous review stochastic inventory systems, Operations Research, 40 (1992), 808-813. doi: 10.1287/opre.40.4.808.
G. Gallego, New bounds and heuristics for ($Q,r$) policies, Management Science, 44 (1998), 219-233. doi: 10.1287/mnsc.44.2.219.
R. Loxton and Q. Lin, Optimal fleet composition via dynamic programming and golden section search, Journal of Industrial and Management Optimization, 7 (2011), 875-890. doi: 10.3934/jimo.2011.7.875.
J. O. Parr, Formula approximations to Brown's service function, Production and Inventory Management, 13 (1972), 84-86.
D. E. Platt, L. W. Robinson and R. B. Freund, Tractable ($Q,R$) heuristic models for constrained service levels, Management Science, 43 (1997), 951-965.
P. Zipkin, Foundations of Inventory Management, McGraw-Hill, New York, 2000.
| CommonCrawl |
Technical Advance
Statistical process monitoring to improve quality assurance of inpatient care
Lena Hubig ORCID: orcid.org/0000-0002-9454-1232 1,
Nicholas Lack2 &
Ulrich Mansmann1
Statistical Process Monitoring (SPM) is not typically used in traditional quality assurance of inpatient care. While SPM allows a rapid detection of performance deficits, SPM results strongly depend on characteristics of the evaluated process. When using SPM to monitor inpatient care, in particular the hospital risk profile, hospital volume and properties of each monitored performance indicator (e.g. baseline failure probability) influence the results and must be taken into account to ensure a fair process evaluation. Here we study the use of CUSUM charts constructed for a predefined false alarm probability within a single process, i.e. a given hospital and performance indicator. We furthermore assess different monitoring schemes based on the resulting CUSUM chart and their dependence on the process characteristics.
We conduct simulation studies in order to investigate alarm characteristics of the Bernoulli log-likelihood CUSUM chart for crude and risk-adjusted performance indicators, and illustrate CUSUM charts on performance data from the external quality assurance of hospitals in Bavaria, Germany.
Simulating CUSUM control limits for a given false alarm probability allows the number of false alarms to be controlled across different conditions and monitoring schemes. We gained a better understanding of the effect of different factors on the alarm rates of CUSUM charts. We propose using simulations to assess the performance of implemented CUSUM charts.
The presented results and example demonstrate the application of CUSUM charts for fair performance evaluation of inpatient care. We propose the simulation of CUSUM control limits while taking into account hospital and process characteristics.
Statistical Process Monitoring (SPM) as a means to monitor performance has become a popular method in the health care sector, for example in the monitoring of surgical outcomes or clinical performance [1–5]. Using SPM, and in particular the cumulative sum (CUSUM) method applied here, it is for example possible to decide whether a recent change in personnel or hospital organisation has led to a decline in quality or whether, on the contrary, the new surgical team or hospital reorganisation has improved the quality of care in the hospital.
In this contribution, we assess the use of CUSUM method in health care by considering the example of quality assurance of inpatient care and demonstrate the applicability and explain application details of the CUSUM in this context. The use of SPM in this area may greatly benefit the overall performance of the hospital, as processes are continuously monitored and process deviations are detected as soon as data are available, allowing for rapid interventions.
In our example, the hospitals reporting the monitored performance indicators vary greatly in their characteristics. Thus, to ensure fair comparison of hospitals, different causes of variance have to be considered. Adjusting control charts for different risk populations is in many cases possible and has been discussed before [6]. Additionally, emphasis should be placed on the difference in the number of patients treated by the hospitals in the same time frame. An important parameter for evaluating the performance of control charts is the probability of a false alarm, that is, an incorrect signal of a process change. False alarms should be avoided as far as possible to enhance trust in the method and the reliability of its alarms.
CUSUM charts are considered optimal for indicating small performance shifts, although their statistics are not directly interpretable [7–9]. Originally formulated by Page in 1954 [10], CUSUM charts have since been applied to non-risk-adjusted and risk-adjusted processes; the latter was described by Steiner et al. for the Bernoulli process [11]. As in many control charts, a change in performance of the monitored process is detected by two horizontal control limits, the upper limit indicating process deterioration and the lower limit process improvement. The setting of these control limits is crucial for the sensitivity of the CUSUM chart, as they determine whether a chart detects deteriorations early on, has a considerable detection delay, or overlooks them completely. We must stress that, to our knowledge, there are no closed analytical expressions for control limits in CUSUM charts that are applicable here. This implies that there may be multiple possible ways to choose these control limits, each with its own advantages and drawbacks.
One frequently considered method to construct control limits is based on selecting the average run length (ARL) of the CUSUM chart [11–13]. For this purpose an appropriate ARL is determined for the process at hand, and subsequently control limits are identified that yield this desired ARL. Markov-chain approximations were first described by Brook and Evans for approximating the ARL of control charts [14]. Recently, Knoth et al. proposed a more exact and accurate Markov-chain approach for estimating the ARL of risk-adjusted CUSUM charts [15]. Considering that the distribution of run lengths is skewed to the right, it remains difficult to deduce the resulting alarm rates of control charts in advance when constructing a control scheme based on the ARL, and it is therefore difficult for clinical practitioners to choose an appropriate ARL.
In this work, we suggest to instead use simulation of control limits for Bernoulli log-likelihood CUSUM charts based on the probability of a false alarm within a process. We study control limits determined by a false alarm probability and their dependence on specific features such as hospital volume, baseline failure probability and case risk mix for non-risk-adjusted and risk-adjusted performance indicators. We conduct simulation studies in order to investigate alarm characteristics of CUSUM designs to monitor hospital performance. Subsequently, we give an example based on hospital performance data in Bavaria, Germany.
This work was motivated by the need for real-time detection of quality deficits in the quality assurance of hospitals in Bavaria, Germany. External quality assurance (EQA) in Germany is regulated by the Directive on Measures concerning Quality Assurance in Hospitals [16]. According to this directive, each patient's quality of treatment is reflected in a set of nationally standardized performance indicators. The raw case-based performance data are transmitted to the regulatory agency by the end of February following the reporting year. The annual mean of the performance indicator is then compared to the national target, and, if relevant deviations are detected, appropriate interventions are initiated.
Consequently, there is a considerable time lag between the date of event, evaluation, and intervention; in some cases up to one and a half years. Moreover, the quality assurance process in use masks trends and seasonal effects by evaluating aggregated data.
The data set allows for early sequential analyses, as hospitals transmit their performance data throughout the year, and all observations are recorded with a date of documentation.
In most cases, however, the date of documentation does not equal the date of treatment, and sometimes multiple patients are documented at the same time. Political efforts towards earlier data documentation, and thus earlier analysis, have begun, and hospitals are encouraged to document and transmit their performance data as soon as possible and continuously throughout the year to allow for interim analyses.
The Institute for Quality Assurance and Transparency in Health Care (Institut für Qualitätssicherung und Transparenz im Gesundheitswesen, IQTIG) is the regulatory agency responsible for quality assurance on federal level. At state level (Bundesland), quality assurance is supported by state offices. In Bavaria, this is the Bavarian Agency for Quality Assurance (Bayerische Arbeitsgemeinschaft für Qualitätssicherung).
Three performance indicators were chosen to show the application of CUSUM in EQA (Table 1). The indicators are developed by the IQTIG, and the indicators' specifications are published on the website of the IQTIG [17]. To illustrate the risk-adjusted CUSUM chart, we selected a performance indicator (11724), which reflects in-hospital complications or death after open carotid stenosis surgery. The risk model is published by the IQTIG and updated annually [18]. For 2016, the explanatory variables were: age, indication group, preoperative degree of disability and ASA classification. The hospital result is given as a rate of observed to expected cases. In the years 2016 and 2017, the average results of hospitals in Bavaria were 1.27 and 1.18, respectively.
Table 1 EQA performance indicators selected for simulation
The standard non-risk-adjusted CUSUM chart is illustrated by two crude performance indicators. Indicator (51838) represents a process with very low failure probability. It measures the events of surgically treated necrotizing enterocolitis in small premature infants, a serious intestinal infection often leading to death [19]. The average failure probability for this indicator increased from 1.07% to 1.47% in Bavarian hospitals from 2016 to 2017. Indicator (54030) measures rates of extended preoperative stay of patients with proximal femur fracture. Rapid surgery within 24 hours can prevent severe complications such as thrombosis, pulmonary embolism and pressure ulcers [20]. Bavarian hospitals slightly improved in this performance indicator from 2016 to 2017, as the failure probability decreased from 20.35% to 18.01%.
Cumulative Sum (CUSUM) chart
CUSUM charts for monitoring process performance for a deterioration in quality over time are defined as [10]:
$$ C_{t}=\text{max}(0, C_{t-1} + W_{t}), \quad t=1,2,3,... $$
The dichotomous outcome $y_t$ equals 0 for every success and 1 for every adverse event. Observations are plotted in the sequence of their temporal occurrence. Depending on the outcome, the CUSUM decreases or remains at zero for every success and increases for every adverse event. The magnitudes of increase and decrease are given by the CUSUM weights $W_t$. Following Steiner et al., the weights $W_t$ for the non-risk-adjusted CUSUM (ST-CUSUM) are [11]:
$$ W_{t}=\left\{\begin{array}{cc} \log{\left(\frac{1-c_{A}}{1-c_{0}}\right)} &\text{if}\ y_{t}=0\\ \log{\left(\frac{c_{A}}{c_{0}}\right)} &\text{if}\ y_{t}=1 \end{array}\right., $$
where $c_0$ is the baseline failure probability and $c_A$ the smallest unacceptable failure probability, which is the change in performance to be detected. CUSUM weights may be individualized for patient risk in the risk-adjusted CUSUM (RA-CUSUM). Here, the weights are [11]:
$$ W_{t}=\left\{\begin{array}{cc} \log{\left(\frac{1}{1-p_{t}+R_{A}p_{t}}\right)} &\text{if}\ y_{t}=0\\ \log{\left(\frac{R_{A}}{1-p_{t}+R_{A}p_{t}}\right)} &\text{if}\ y_{t}=1 \end{array}\right., $$
where $p_t$ represents the individual patient risk score. The baseline failure probability is no longer constant but tailored to each patient's risk. The risk-adjusted CUSUM monitors for a change in risk specified by an odds ratio change from $R_0$ to $R_A$, with $R_A$ greater than one indicating process deterioration.
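For concreteness, Eqs. 1–3 can be transcribed into a few lines of R. The sketch below is illustrative; the function names and example parameters are our own, not code from the cusum package referenced later.

# Bernoulli log-likelihood CUSUM, Eqs. 1-3. Outcomes y: 0 = success, 1 = adverse event.
st_weight <- function(y, c0, cA) {
  # Non-risk-adjusted weights (Eq. 2): c0 = baseline failure probability,
  # cA = smallest unacceptable failure probability.
  ifelse(y == 1, log(cA / c0), log((1 - cA) / (1 - c0)))
}

ra_weight <- function(y, p, RA) {
  # Risk-adjusted weights (Eq. 3): p = patient risk scores, RA = odds multiplier.
  ifelse(y == 1, log(RA / (1 - p + RA * p)), log(1 / (1 - p + RA * p)))
}

cusum_statistic <- function(w) {
  # CUSUM recursion (Eq. 1): C_t = max(0, C_{t-1} + W_t), starting at C_0 = 0.
  ct <- numeric(length(w))
  c_prev <- 0
  for (t in seq_along(w)) {
    c_prev <- max(0, c_prev + w[t])
    ct[t] <- c_prev
  }
  ct
}

# Example: indicator 54030 (c0 = 19.21%), detection level delta = 2 (doubled odds).
c0 <- 0.1921
cA <- 2 * (c0 / (1 - c0)) / (1 + 2 * (c0 / (1 - c0)))
set.seed(1)
y <- rbinom(42, 1, c0)  # one simulated year of a medium-sized hospital (n = 42)
ct <- cusum_statistic(st_weight(y, c0, cA))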
Factors influencing CUSUM chart performance
Several factors influence the performance and characteristics of the CUSUM chart and are considered in the simulation study. Some factors may be regarded as control switches of the monitoring scheme, as they are configurable and directly influence the control charts. Other factors are largely fixed by the process that is monitored. Most of these factors are also relevant when applying other types of performance monitoring or SPM. Additionally, other sources of variation exist that may influence the performance of CUSUM charts but are not accounted for; these may be unknown or random factors that are not measured or are difficult to quantify, e.g. the quality of the data.

Performance indicator: Performance indicators quantify a process output, indicating quality of care. For each performance indicator, the subset of patients covered by the indicator is specified. The performance indicator establishes the baseline failure probability $c_0$ or the risk-adjustment model for the patients' risk scores $p_t$. Additionally, the performance indicator should be considered when setting up a monitoring scheme, because the process at hand affects how well performance deteriorations can be detected.

Hospital volume: Hospital volume is defined here as the annual number of patients per performance indicator and hospital. It is a major source of variation between hospitals, and possibly also within hospitals across years, and is accounted for in the control limit simulation as the sequence length $n$. As the hospital volume directly influences the control limit, it has a considerable effect on CUSUM performance.

Case risk mix: Adjusting for individual patient risk is necessary when comparing outcomes, but there is often some uncertainty about the validity of the risk-adjustment model. When possible, previous experience of the process can be used to estimate the case risk distribution (Phase I). This estimate of the case risk mix is used in the simulation of the control limit, where outcome data are simulated on the estimated risk population.

Detection level δ: Detectable changes in performance are determined by an odds ratio multiplier δ. In the ST-CUSUM, a change of δ defines the alternative failure probability $c_A$, which enters the CUSUM weights $W_t$ in Eq. 2. For the RA-CUSUM, δ is equal to $R_A$ in Eq. 3. Values of δ greater than one detect process deteriorations, while values less than one detect process improvements.

False alarm probability: We define the probability of a false alarm as the type 1 error of the CUSUM chart. It is the probability of a CUSUM signal during the monitoring of a process that is truly in control. Here it is used as the defining parameter for constructing CUSUM charts in the simulation of control limits.
Defining the control limit
The CUSUM chart signals a process change when the CUSUM statistic exceeds a control limit. The process should then be investigated for quality deficits, and monitoring can restart by resetting the current CUSUM statistic [21]. Control limits should be set after careful consideration of the probabilities of a false alarm and a true alarm. As the alarm probabilities approach 100% with increasing run length, these parameters have to be estimated for a fixed sample size (n). For very small sample sizes it is possible to calculate the exact false alarm probability of possible control limits (Additional file 2). For larger sample sizes, we propose the following algorithm to select a control limit that yields a specific false alarm probability; an R sketch implementing these steps is given after the four steps below:
Simulate a sufficiently large number of in-control outcome sequences for t=1,2,…,n, using the baseline failure probability or, if applicable, individual risk probabilities drawn from the population.
Calculate unrestricted CUSUM runs for these simulated sequences; that is, the CUSUM charts have no control limit and are never reset.
The maximum CUSUM statistics (Ct) are collected from each CUSUM run.
The desired control limit for a sequence of size n is the (1−P(false alarm))-percentile of the maximum CUSUM statistics.
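A minimal R sketch of these four steps, reusing the st_weight() and cusum_statistic() helpers from the earlier sketch (the defaults mirror the simulation study; this is an illustration, not the exact study code):

simulate_control_limit <- function(n, c0, delta, p_false = 0.05, n_sim = 1e5) {
  cA <- delta * (c0 / (1 - c0)) / (1 + delta * (c0 / (1 - c0)))
  max_ct <- replicate(n_sim, {
    y <- rbinom(n, 1, c0)                       # step 1: in-control outcome sequence
    max(cusum_statistic(st_weight(y, c0, cA)))  # steps 2-3: unrestricted run, keep its maximum
  })
  quantile(max_ct, probs = 1 - p_false, names = FALSE)  # step 4
}

# Example: medium hospital volume (n = 42), indicator 54030, delta = 2,
# false alarm probability 5% within one observation period.
h <- simulate_control_limit(n = 42, c0 = 0.1921, delta = 2)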
Software for computing CUSUM charts is available in an open source R package on the Comprehensive R Archive Network (CRAN). The cusum package provides functions to simulate control limits for a false alarm probability, calculate CUSUM charts, and evaluate the true and false alarm probabilities of CUSUM charts [22]. Additional file 1 illustrates the construction of CUSUM charts using the cusum package for hospital performance data, taking into account the previously described factors.
We simulated hospital performance data to assess the effect of different influencing factors on the probabilities of false and true alarms of ST-CUSUM and RA-CUSUM charts. Figure 1 illustrates how the described factor influence the construction and simulation of CUSUM charts.
Fig. 1 Simulation plan. Factors influencing the simulation of control limits (h) and time to signal (ts) in the CUSUM run simulation
CUSUM runs are simulated for the three previously described IQTIG indicators from EQA. The baseline failure probabilities for the crude performance indicators were set to the national average failure rates of 2016 and 2017 (51838: c0=1.25%; 54030: c0=19.21%). For the risk-adjusted indicator 11724, we resampled risk scores with replacement from the total hospital population of 2016 and 2017. Additionally, we created artificial subpopulations based on case risk mix. For a high risk population, risk scores were sampled from the upper 25th percentile of the risk population (≥ 1.04%). A low risk population was considered with risk scores sampled from the lower 25th percentile (≤ 0.56%).
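The risk-adjusted analogue of the control-limit simulation can be sketched along the same lines, resampling Phase I risk scores with replacement as described above (again reusing the illustrative ra_weight() and cusum_statistic() helpers from the earlier sketch):

simulate_ra_control_limit <- function(n, risk_scores, RA, p_false = 0.05, n_sim = 1e5) {
  max_ct <- replicate(n_sim, {
    p <- sample(risk_scores, n, replace = TRUE)  # resample the Phase I risk population
    y <- rbinom(n, 1, p)                         # in-control outcomes at individual risk
    max(cusum_statistic(ra_weight(y, p, RA)))
  })
  quantile(max_ct, probs = 1 - p_false, names = FALSE)
}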
Three hospital volumes were derived for small, medium and large hospitals. The volume was estimated by taking the mean of the hospital volume percentiles across all performance indicators. The mean volume of hospitals below the 25th percentile (ns=7) was used as an estimate for small hospitals, the mean between the 25th and 75th percentiles (nm=42) for medium hospitals, and the mean above the 75th percentile (nl=105) for large hospitals.
100 000 CUSUM runs were simulated to estimate control limits based on a false alarm probability. We simulated control limits for false alarm probabilities of 0.1%, 0.5%, 1% and 5%, in accordance with typical values of type 1 error rates. The CUSUM was set to detect deteriorations with δ > 1. The detection level of a doubling (2) of odds was considered, as well as one step below (1.5) and one (2.5) and two (3) steps above.
To assess how well a specific CUSUM chart differentiates between good and poor performance, 2000 CUSUM runs were simulated for each control limit under in-control and out-of-control performance. From these runs we collected the run length to signal, i.e. the number of observations until the CUSUM statistic first exceeds the control limit. Finally, the signal rates are calculated as the proportion of CUSUM runs whose run length to signal is shorter than the hospital volume.
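The signal rates can be estimated with the same building blocks; a sketch of this evaluation (illustrative, not the study's exact code):

signal_rate <- function(n, c_true, c0, delta, h, n_runs = 2000) {
  cA <- delta * (c0 / (1 - c0)) / (1 + delta * (c0 / (1 - c0)))
  mean(replicate(n_runs, {
    y <- rbinom(n, 1, c_true)  # c_true = c0 for in-control runs, cA for out-of-control runs
    any(cusum_statistic(st_weight(y, c0, cA)) > h)  # signal within one observation period?
  }))
}

# In-control runs estimate the realized false alarm probability;
# out-of-control runs estimate the power of the chart.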
Simulation results
For every hospital volume, performance indicator, and risk population, sixteen control limits were simulated for varying false alarm probabilities and detection levels.
Control limits are wider when the false alarm probability is small, the detection level is high, the baseline failure probability or case risk mix is high, and the hospital volume is large. This pattern was generally reflected in the simulated control limits.
In Figs. 2a and 3a, the percentages of in-control CUSUM runs that signalled a process change are presented as signal rates. Here, performance was as expected, and thus the signal rates should not exceed the predefined false alarm probability of the control limit.
Fig. 2 Simulation of ST-CUSUM. Percentage of ST-CUSUM charts signalling a process deterioration (alarm rate) from 2000 simulated in-control (top) and out-of-control (bottom) ST-CUSUM runs. The desired false alarm probability is marked by black symbols. a In-control signal rate. b Out-of-control signal rate
Fig. 3 Simulation of RA-CUSUM. Percentage of RA-CUSUM charts signalling a process deterioration (alarm rate) from 2000 simulated in-control (top) and out-of-control (bottom) runs for risk-adjusted indicator 11724. RA-CUSUM runs were simulated for mixed, low and high risk populations. The desired false alarm probability is marked by black symbols. a In-control signal rate. b Out-of-control signal rate
The signal rates of in-control simulations were for the most part close to the desired false alarm probability, demonstrating successful simulation of control limits to attain this false alarm probability. Only two populations with small hospital volume showed deviations between the desired false alarm probability and the observed in-control signal rate. For these scenarios, tight control limits had to be simulated, but due to the discrete nature of the CUSUM there are only finitely many possible control limits. For the small hospital volume of indicator 51838 (Fig. 2a, bottom right), the lowest attainable limit was the CUSUM weight of an adverse event, which results in a higher false signal rate of ≈15%. As the risk-adjusted CUSUM weights adverse events individually based on the patient population, the control limit chosen for the small hospital volume and low risk population of indicator 11724 (Fig. 3a, top left) was zero. This results in a CUSUM signal at every observation, reflected by the 100% in-control and out-of-control signal rates. For these scenarios it may be reasonable to choose a lower false alarm probability, in turn also accepting lower power.
Signal rates for out-of-control CUSUM runs (Figs. 2b, 3b) represent the correctly identified deteriorations and ideally should be close to 100%.
Large hospital volumes and higher failure probabilities resulted in higher power. The control chart of indicator 54030 achieved 99.25% for the highest false alarm probability and detection level.
Yet most settings had low power; in particular, CUSUM charts for small hospital volumes did not trigger an alarm in the majority of runs within one observation period.
Application to EQA hospital performance data
CUSUM charts are applied to real data from the EQA of inpatient care from the years 2016 and 2017, provided by the Bavarian Agency of Quality Assurance. Performance data from 2016 are used to estimate the baseline failure probability and case risk mix (Phase I) and to construct CUSUM charts for performance data of 2017 (Phase II). The monitoring period extends from March 1st 2017 to February 28th 2018, because the documentation and transmission deadline for a reporting year is February 28th of the following year, shifting the monitoring period by two months.
The hospital results and hospital volumes of the year 2016 are displayed in Fig. 4. Average hospital volume decreased from 2016 to 2017 from 50 observations to 48 observations.
Fig. 4 Hospital results. Annual hospital results of performance indicators in Bavaria, 2016
Baseline failure probabilities for the two crude indicators are derived from the overall 2016 average (54030: c0=20.35%; 51838: c0=1.07%). Case risk mix for the risk-adjusted indicator was inferred from the 2016 hospital-specific population. Individual patient risk for complications or death ranged from 0.24% to 40.98%, with a median of 0.83% across both years.
CUSUM charts were constructed by simulating the control limit for a false alarm probability of 5%. We set the detection level to δ=2 and constructed control charts for hospitals with more than one observation in 2016 and 2017. Indicator 54030 covers 163 hospitals, indicator 51838 covers 45 hospitals, and indicator 11724 covers 64 hospitals.
We initiated all CUSUM runs with C0=0 and reset Ct to zero after every alarm, which is applicable if an investigation after an alarm takes place and appropriately identifies any problems [21].
Of the 261 hospitals' CUSUM charts, 34 processes triggered an alarm and were identified as out-of-control. Overall, 86.21% of the hospitals were classified as in-control (Table 2). Out-of-control processes of indicators 51838 and 11724 had at most one alarm, and for indicator 54030 seven hospitals had more than one alarm.
Table 2 Percentage of hospitals with CUSUM alarms per performance indicator in Bavaria 2017
Figure 5 displays the resulting control limits for the CUSUM charts of all hospital processes. Exemplary CUSUM charts showing individual hospital processes are given in Figures 6 and 7 for the ST-CUSUM and Figure 8 for the RA-CUSUM. Simulated control limits of ST-CUSUM charts for indicators 54030 and 51838 increased with increasing hospital volume to ensure a constant false alarm probability during one observation period (Fig. 5). Control limits of the RA-CUSUM chart for indicator 11724 increased as well, but adjustment for the different case risk mixes influenced the variability of the control limits. Similar to the simulation results, some control limits for smaller hospital volumes were estimated as the CUSUM weight for failure Wt(y=1) or as zero. As the CUSUM weights Wt(y=0), which decrease the CUSUM, were smaller in magnitude for indicators 51838 and 11724 than for indicator 54030, adverse events were more difficult to compensate by good performance (Fig. 7b, Fig. 8f). CUSUM charts for indicators 51838 and 11724 mostly categorized only hospitals with fewer than two adverse events as in-control. The charts for indicator 54030 allowed for more adverse events and also signalled multiple deteriorations. Hospitals with multiple alarms possibly had a persistent quality deficit in this indicator and were not able to control the process during the entire monitoring period. Some hospitals showed an accumulation of adverse events at specific points, which may help to locate causal deficits in a subsequent investigation (Fig. 6f).
Fig. 5 EQA application. Control limits for hospital performance data of EQA in Bavaria. Control limits were estimated on performance data of 2016 and simulated for δ = 2 and a false alarm probability of 5%
Fig. 6 EQA application, trauma surgery (54030). Selected CUSUM plots for individual hospital annual performance data of 2017. a Small hospital #69: no CUSUM signal. b Small hospital #136: CUSUM signal. c Medium hospital #45: no CUSUM signal. d Medium hospital #113: CUSUM signal. e Large hospital #102: no CUSUM signal. f Large hospital #175: CUSUM signal. IHV denotes indicator-specific hospital volume
Fig. 7 EQA application, neonatology (51838). Selected CUSUM plots for individual hospital annual performance data of 2017. a Small hospital #190: no CUSUM signal. b Small hospital #62: CUSUM signal. c Medium hospital #76: no CUSUM signal. d Medium hospital #46: CUSUM signal. e Large hospital #214: no CUSUM signal. f Large hospital #197: CUSUM signal. IHV denotes indicator-specific hospital volume
Fig. 8 EQA application, carotid stenosis (11724). Selected CUSUM plots for individual hospital annual performance data of 2017. a Small hospital #25: no CUSUM signal. b Small hospital #185: CUSUM signal. c Medium hospital #102: no CUSUM signal. d Medium hospital #211: CUSUM signal. e Large hospital #181: no CUSUM signal. f Large hospital #184: CUSUM signal. IHV denotes indicator-specific hospital volume
For larger hospital volumes the wider control limits also allowed for more adverse events within a year. The large hospital #102 (Fig. 6e) was categorized as in-control for indicator 54030, although a third of the observations were adverse events. Hospital #113 (Fig. 6d) had 29% adverse events for indicator 54030 and triggered an alarm. This is partly due to the shorter sequence of adverse events and the smaller hospital volume. However, this hospital also had a substantial increase in volume from 2016 to 2017, so the control limit was probably lower than necessary.
In this work we proposed the construction of CUSUM charts by simulating control limits for a predefined false alarm probability, and showed the alarm characteristics of ST- and RA-CUSUM charts regarding true and false alarm probability for different monitoring schemes and processes. The method of constructing control charts is intuitive and flexible regarding hospital volume and case risk mix.
Strengths and limitations of this study
The control of false alarms in our method worked well for sufficiently large hospital volumes and high baseline failure probability. For very small sample sizes we presented an exact calculation of possible control limits and corresponding false alarm probabilities (Additional file 2). In monitoring schemes of small hospital volumes, it often remains impossible to adjust the control limit to fit a specific false alarm probability, as these control charts are not as flexible as control charts for larger volumes. Small hospitals continue to present an issue in SPM, as corresponding CUSUM charts are difficult to construct and evaluate. In our simulation, it is quite possible that no failure was simulated for small hospital volume processes (ns=7), especially for indicators with a small failure probability such as for indicator 51838 (c0=1.25%). Detecting a doubling or tripling of odds with a small failure probability and small hospital volume is difficult, as even with doubled or tripled odds, the probability to observe no adverse event is still large. Taking this example, 92% of ns=7 observations show no adverse events at failure probability c0 compared to 84% at doubled odds – i.e., in 84% of all possible sets of ns=7 patients, no difference between the in-control and out-of-control state is observable. As most control charts required at least two adverse events to signal, alarms became very unlikely. The hospitals' CUSUM charts in the example showed that small hospitals may still benefit from an individual investigation based on the CUSUM chart as differences in performance are fairly well illustrated. Hospital volume may be increased by extending the data to cover multiple years, if the achievable false alarm probability is not acceptable.
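The two probabilities quoted above are straightforward to reproduce; for example, in R:

c0 <- 0.0125                                           # baseline failure probability of indicator 51838
cA <- 2 * (c0 / (1 - c0)) / (1 + 2 * (c0 / (1 - c0)))  # failure probability at doubled odds
round((1 - c0)^7, 2)  # 0.92: no adverse event among ns = 7 patients, in control
round((1 - cA)^7, 2)  # 0.84: no adverse event among ns = 7 patients, at doubled odds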
The simulation study showed different results for processes with high baseline failure probability compared to those with low baseline failure probability. Again, this may be due to the simulation process, as a low failure probability leads to few observable adverse events even when the process is out of control. Although the simulation of out-of-control performance did not result in satisfactory power, the CUSUM did signal in the example and detected quality deficits at a rate similar to that of processes with a high failure probability (Table 2). Current German regulations require that, in cases of extremely adverse clinical outcomes, written explanations be furnished by the medical staff in every such instance. This strategy does not rule out the use of control charts for indicators with low baseline failure probability, and we suggest that individual investigations of adverse events should accompany CUSUM charts for these indicators. The monitoring of rare events is a common issue in SPM, and Woodall and Driscoll gave a comprehensive review on this topic [23]. In this context, our example (c0=1.25%) is not yet regarded as rare, as the methods discussed there consider failure probabilities that are ten or a hundred times smaller.
As CUSUM charts are based on performance data of the previous year, they may be subject to uncertainty of these estimations. Monitoring across different years presents the additional challenge that specifications of performance indicators may change due to clinical recommendations of national advisory panels, and thus indicators may not always be comparable across different monitoring periods. Additionally, hospital volume and case risk mix vary across years, which affect the alarm characteristics of the CUSUM scheme. It has been shown that wrong expectations of risk mix or wrong model specifications can have a significant impact on CUSUM runs [13, 15, 24]. Zhang and Woodall proposed Dynamic Probability Control Limits (DPCL) for the CUSUM chart to address these issues [25]. These limits control the false alarm probability during monitoring by changing and updating the control limit based on new observations, but are more difficult to construct and interpret.
Implications for policy and research
Augmenting established EQA with concurrent SPM may well help to improve the timeliness and the informative value of quality assurance. Quality deficits will be detected sooner than with analyses of aggregated means on an annual or quarterly basis. Thus, performance may be improved before any deteriorations are identified using conventional EQA. Moreover, adverse events are presented in their temporal context, and trends or seasonal effects are more apparent in CUSUM charts. CUSUM charts can also be of assistance in the evaluation of interventions and will facilitate showcasing best-practice examples.
Typically, there has to be a trade-off between a low false alarm probability and a high true alarm probability. Prioritizing a low false alarm probability will protect hospitals with good quality of care from false accusations. As all alarms require investigation at hospital level, false alarms will result in an unnecessary drain on the resources of monitoring investigators as well as of those investigated. Still, detecting deteriorations should not be disregarded, and an adequate balance between false and true alarm probability should be sought. A false alarm probability of 5%, which can result in acceptable power, may be a reasonable choice for most scenarios.
In the example, we reset the CUSUM after every alarm to gain a sense of the frequency of alarms. However, according to the theoretical background of SPM in industrial process control, this is only appropriate if the process is investigated and brought back in control, which is difficult to ensure in hospitals. Additionally, when the CUSUM restarts with the same control limit as before, the false alarm probability and power may be lower than anticipated, as the remaining hospital volume decreases. If resetting the CUSUM to zero is not reasonable, resetting it to a greater value below the control limit is also an option. This was already proposed by Lucas and Crosier in 1982 [26] and results in faster subsequent alarms.
Use of risk-adjusted performance indicators should be further encouraged. Adjustment for case risk mix is necessary for fair and robust quality assurance. If a risk model for the particular indicator exists, risk-adjusted CUSUM charts are easy to implement, and their performance in our study was similar to that of the standard CUSUM charts.
The problem of the unknown temporal order of observations and its implications for CUSUM charts is of interest for further exploration. For accurate CUSUM charts and for immediate intervention, observations should be recorded automatically with a precise time stamp. So far, the date of documentation remains an unsatisfactory surrogate for the date of treatment. Thus, hospitals that do not comply with regular data documentation would have to be excluded from CUSUM analysis. In the future, further advances in processing electronic health records can help to approximate real-time bedside performance evaluation. However, for the time being, performance monitoring is still constrained by unnecessarily complicated and laborious processes of data documentation, transmission, validation and evaluation. Further advances in timely data documentation can be motivated by the prospect of implementing efficient SPM.
Benefits and issues arising from simultaneously monitoring multiple data streams should be dealt with more thoroughly when implementing CUSUM in German EQA. Multiple indicators of one hospital can provide additional information about the hospital's performance. The global false discovery rate can, however, increase with multiple data streams, whether these are multiple indicators per hospital or multiple hospitals in the quality assurance. Previous work introduced controlling the false discovery rate by applying strategies from multiple testing to normally distributed data [27–29], and Mei proposed a scalable global monitoring scheme for concurrent data streams [30]. Methods to control the false discovery rate of multiple data streams need to be evaluated for their suitability in the monitoring scheme of German EQA.
We propose a determination of control limits in CUSUM charts based on the false alarm probability and numerical simulations with appropriate adaptations to hospital volume and case risk mix. Exemplary resulting CUSUM charts are analysed with respect to their effective true and false alarm probabilities and pivotal CUSUM factors are pointed out. We demonstrate the feasibility of hospital volume and case risk mix adapted CUSUM charts in the external quality assurance of inpatient care.
The datasets generated during the simulation study are available from the corresponding author. Hospital performance data are not publicly available due to contracts restricting data access exclusively to explicitly named institutions conducting quality assurance for hospitals.
ARL: Average run length
CUSUM: Cumulative sum
EQA: External quality assurance
IQTIG: Institute for Quality Assurance and Transparency in Health Care
RA-CUSUM: Risk-adjusted CUSUM
SPM: Statistical process monitoring
IHV: Indicator-specific hospital volume
ST-CUSUM: Standard CUSUM (non-risk-adjusted)
de Leval MR, François K, Bull C, Brawn W, Spiegelhalter D. Analysis of a cluster of surgical failures: Application to a series of neonatal arterial switch operations. J Thorac Cardiovasc Surg. 1994; 107:914–24.
Smith IR, Garlick B, Gardner MA, Brighouse RD, Foster KA, Rivers JT. Use of graphical statistical process control tools to monitor and improve outcomes in cardiac surgery. Heart Lung Circ. 2013; 22(2):92–9. https://doi.org/10.1016/j.hlc.2012.08.060.
Bottle A, Aylin P. Intelligent information: A national system for monitoring clinical performance. Health Serv Res. 2008; 43(1 pt 1):10–31. https://doi.org/10.1111/j.1475-6773.2007.00742.x.
Sketcher-Baker KM, Kamp MC, Connors JA, Martin DJ, Collins JE. Using the quality improvement cycle on clinical indicators - improve or remove? Med J Aust. 2010; 193(8):104–6.
Marshall C, Best N, Bottle A, Aylin P. Statistical issues in the prospective monitoring of health outcomes across multiple units. J R Stat Soc Ser A Stat Soc. 2004; 167(3):541–59.
Woodall WH, Fogel SL, Steiner SH. The monitoring and improvement of surgical-outcome quality. J Qual Technol. 2015; 47(4):383–99.
Cook DA, Duke G, Hart GK, Pilcher D, Mullany D. Review of the application of risk-adjusted charts to analyze mortality outcomes in critical care. Crit Care Resusc. 2008; 10(3):239–51.
Hawkins DM, Wu Q. The CUSUM and the EWMA head-to-head. Qual Eng. 2014; 26(2):215–22. https://doi.org/10.1080/08982112.2013.817014.
Steiner SH, Woodall WH. Debate: what is the best method to monitor surgical performance? BMC Surg. 2016; 16:15. https://doi.org/10.1186/s12893-016-0131-8.
Page ES. Continuous inspection schemes. Biometrika. 1954; 41(1):100–15.
Steiner SH, Cook RJ, Farewell VT, Treasure T. Monitoring surgical performance using risk-adjusted cumulative sum charts. Biostatistics. 2000; 1(4):441–52.
Reynolds MR, Stoumbos ZG. A CUSUM chart for monitoring a proportion when inspecting continuously. J Qual Technol. 1999; 31(1):87–108.
Jones MA, Steiner SH. Assessing the effect of estimation error on risk-adjusted CUSUM chart performance. Int J Qual Health Care. 2012; 24(2):176–81. https://doi.org/10.1093/intqhc/mzr082.
Brook D, Evans DA. An approach to the probability distribution of cusum run length. Biometrika. 1972; 59(3):539–49.
Knoth S, Wittenberg P, Gan FF. Risk-adjusted CUSUM charts under model error. Stat Med. 2019. https://doi.org/10.1002/sim.8104.
Gemeinsamer Bundesausschuss. Richtlinie über Maßnahmen der Qualitätssicherung in Krankenhäusern / QSKH-RL. 2016. https://www.g-ba.de/downloads/62-492-1280/QSKH-RL_2016-07-21_iK-2017-01-01.pdf. Accessed 17 Apr 2019.
IQTIG. Methodische Grundlagen. Berlin: Institut für Qualitätssicherung und Transparenz im Gesundheitswesen; 2017. Accessed 17 Apr 2019.
IQTIG. Karotis-Revaskularisation. Beschreibung der Qualitätsindikatoren für das Erfassungsjahr 2016. 2017. https://iqtig.org/downloads/auswertung/2016/10n2karot/QSKH_10n2-KAROT_2016_QIDB_V02_2017-04-26.pdf. Accessed 17 Apr 2019.
IQTIG. Neonatologie. Beschreibung der Qualitätsindikatoren für das Erfassungsjahr 2016. 2017. https://iqtig.org/downloads/auswertung/2016/neo/QSKH_NEO_2016_QIDB_V02_2017-04-26.pdf. Accessed 17 Apr 2019.
IQTIG. Hüftgelenknahe Femurfraktur mit osteosynthetischer Versorgung. Beschreibung der Qualitätsindikatoren für das Erfassungsjahr 2016. 2017. https://iqtig.org/downloads/auswertung/2016/17n1hftfrak/QSKH_17n1-HUEFTFRAK_2016_QIDB_V02_2017-04-26.pdf. Accessed 17 Apr 2019.
Montgomery DC. Introduction to Statistical Quality Control, 6th ed. Hoboken: Wiley; 2009.
Hubig L. cusum: CUSUM charts for monitoring of hospital performance. R package version 0.2.1. 2019. https://CRAN.R-project.org/package=cusum.
Woodall WH, Driscoll AR. Some recent results on monitoring the rate of a rare event In: Knoth S, Schmid W, editors. Frontiers in Statistical Quality Control 11. Cham: Springer: 2015. p. 15–27.
Tian W, Sun H, Zhang X, Woodall WH. The impact of varying patient populations on the in-control performance of the risk-adjusted CUSUM chart. Int J Qual Health Care. 2015; 27(1):31–6. https://doi.org/10.1093/intqhc/mzu092.
Zhang X, Woodall WH. Dynamic probability control limits for risk-adjusted Bernoulli CUSUM charts. Stat Med. 2015; 34(25):3336–48. https://doi.org/10.1002/sim.6547.
Lucas JM, Crosier RB. Fast initial response for CUSUM quality-control schemes: Give your CUSUM a head start. Technometrics. 1982; 24(3):199–205. https://doi.org/10.2307/1268679.
Benjamini Y, Kling Y. A Look at Statistical Process Control Through P-values. Technical Report RP-SOR-99-08. Tel Aviv: Tel Aviv University; 1999.
Grigg OA, Spiegelhalter DJ. An empirical approximation to the null unbounded steady-state distribution of the cumulative sum statistic. Technometrics. 2008; 50(4):501–11.
Spiegelhalter D, Sherlaw-Johnson C, Bardsley M, Blunt I, Wood C, Grigg O. Statistical methods for healthcare regulation: rating, screening and surveillance: Statistical methods for healthcare regulation. J R Stat Soc Ser A Stat Soc. 2012; 175(1):1–47.
Mei Y. Efficient scalable schemes for monitoring a large number of data streams. Biometrika. 2010; 97(2):419–33.
Institute for Medical Information Processing, Biometry, and Epidemiology, Ludwig-Maximilians-Universität, Marchioninistr. 15, Munich, 81377, Germany
Lena Hubig
& Ulrich Mansmann
Bavarian Institute for Quality Assurance, Munich, 80331, Germany
Nicholas Lack
All authors contributed to the conception and design of this work, read and approved the final manuscript. LH performed the simulation and analyses and drafted the manuscript. UM and NL have contributed with intellectual content and reviewed and edited the manuscript. All authors read and approved the final manuscript.
Correspondence to Lena Hubig.
The ethics committee of the Medical Faculty of Ludwig Maximilian University Munich waived the need for ethical assessment. The Bavarian Agency of Quality Assurance granted approval for the use of anonymised performance data from quality indicator programs in Bavarian hospitals for this research project.
Additional file 1 Construct CUSUM charts for hospital performance. A vignette from the cusum R-package showcasing the construction of CUSUM charts for hospital performance data.
Additional file 2 Exact Control Limits for small sample sizes. Calculation of exact control limits for small volumes and simulation result comparing the exact method to control limits simulated for a false alarm probability.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Hubig, L., Lack, N. & Mansmann, U. Statistical process monitoring to improve quality assurance of inpatient care. BMC Health Serv Res 20, 21 (2020) doi:10.1186/s12913-019-4866-7 | CommonCrawl |
University of Victoria Mathematics & Statistics
All upcoming and recent events from the past six months:
Title: Exact date and time to be confirmed
Speaker: Paul Dourish, Donald Bren School of Information and Computer Science, University of California, Irvine
Date and time: 01 Mar to 31 Mar 2023, 4:30pm - 5:30pm
Location: Bob Wright Centre A104
Event type: PIMS lectures
Title: Odd covers of graphs
Speaker: Jiaxi Nie, Shanghai Centre for Mathematical Sciences
Date and time: 24 Nov 2022, 3:30pm - 4:20pm
Location: MAC D116 to watch talk on Zoom
Event type: Discrete math seminar
Abstract: Given a finite simple graph $G$, an {\em odd cover of $G$} is a collection of complete bipartite graphs, or bicliques, in which each edge of $G$ appears in an odd number of bicliques and each non-edge of $G$ appears in an even number of bicliques. We denote the minimum cardinality of an odd cover of $G$ by $b_2(G)$ and prove that $b_2(G)$ is bounded below by half of the rank over $\mathbb{F}_2$ of the adjacency matrix of $G$. We show that this lower bound is tight in the case when $G$ is a bipartite graph and almost tight when $G$ is an odd cycle. However, we also present an infinite family of graphs which shows that this lower bound can be arbitrarily far away from $b_2(G)$. Babai and Frankl (1992) proposed the ``odd cover problem,'' which in our language is equivalent to determining $b_2(K_n)$. Radhakrishnan, Sen, and Vishwanathan (2000) determined $b_2(K_n)$ for an infinite but density zero subset of positive integers $n$. In this paper, we determine $b_2(K_n)$ for a density $3/8$ subset of the positive integers. This is joint work with Calum Buchanan, Alexander Clifton, Jason O'Neil, Puck Rombach, and Mei Yin.
Title: Nonparametric high-dimensional multi-sample tests based on graph theory
Speaker: Xiaoping Shi, UBC Okanagan
Location: via Zoom
Event type: Statistics seminar
Zoom link: https://uvic.zoom.us/j/83114822200?pwd=bDY1RnFmb05wZXJRZk52THBGbDFYZz09
High-dimensional data pose unique challenges for data processing in an era of ever-increasing data availability. Graph theory can provide a structure for high-dimensional data. We introduce two key properties desirable for graphs used in testing homogeneity. Roughly speaking, these properties may be described as: unboundedness of edge counts under the same distribution and boundedness of edge counts under different distributions. It turns out that the minimum spanning tree violates these properties but the shortest Hamiltonian path possesses them. Based on the shortest Hamiltonian path, we propose two combinations of edge counts in multiple samples to test homogeneity. We give the permutation null distributions of the proposed statistics when sample sizes go to infinity. The power is analyzed by assuming both sample sizes and dimensionality tend to infinity. Simulations show that our new tests behave very well overall in comparison with various competitors. Real data analyses of tumors and images further demonstrate the value of our proposed tests. Software implementing the tests is available in the R package Relevance.
Title: A Constrained Minimum Criterion for Regression Model Selection
Speaker: Min Tsao, University of Victoria
Date and time: 25 Oct 2022, 4:00pm - 5:00pm
ABSTRACT: Although log-likelihood is widely used in model selection, the log-likelihood ratio has had few applications in this area. In this talk, I present a log-likelihood ratio based method for selecting regression models which focuses on the set of models deemed plausible by the likelihood ratio test. I show that when the sample size is large and the significance level of the test is small, there is a high probability that the smallest model in the set is the true model; thus, the method selects this smallest model. The significance level of the test serves as a tuning parameter that controls the balance between the false active rate and false inactive rate of the selected model. I consider three levels of this parameter in a simulation study and compare this method with the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) to demonstrate its excellent accuracy and adaptability to different sample sizes.
Model selection is an active area of research with a long history, a wide range of perspectives, and a rich collection of methods. For students unfamiliar with this area, this talk includes a review of key methods including the AIC, BIC and modern Lp penalty methods. The new method presented in this talk offers a frequentist perspective on the model selection problem. It is an alternative and a strong competitor to the AIC and BIC for selecting regression models.
Title: Circular flows in mono-directed Eulerian signed graphs
Speaker: Zhouningxin Wang, IRIF, Université Paris Cité
Location: MAC D116
Abstract: Given positive integers $p,q$ where $p$ is even and $p\geq 2q$, a circular $\frac{p}{q}$-flow in a mono-directed signed graph $(G, \sigma)$ is a pair $(D, f)$ where $D$ is an orientation on $G$ and $f: E(G)\to \mathbb{Z}$ satisfies that for each positive edge $e$, $q\leq |f(e)|\leq p-q$ and for each negative edge $e$, either $0\leq |f(e)|\leq \frac{p}{2}-q$ or $\frac{p}{2}+q\leq |f(e)|\leq p-1$, and the total in-flow equals the total out-flow at each vertex. This is the dual notion of circular $\frac{p}{q}$-coloring of signed graphs recently introduced in ``Circular chromatic number of signed graphs. R. Naserasr, Z. Wang, and X. Zhu. Electronic Journal of Combinatorics, 28(2)(2021), \#P2.44''. In this talk, we consider bipartite analogs of Jaeger's circular flow conjecture and its dual, Jaeger-Zhang conjecture. We show that every $(6k-2)$-edge-connected Eulerian signed graph admits a circular $\frac{4k}{2k-1}$-flow and every signed bipartite planar graph of negative-girth at least $6k-2$ admits a circular $\frac{4k}{2k-1}$-coloring. We also provide some recent results about the circular flow index of signed graphs with high edge-connectivities. This is joint work with Jiaao Li, Reza Naserasr, and Xuding Zhu.
Title: An AI + HI Hybrid Content Moderation Solution for Microsoft News and Feeds
Speaker: Lizhen Peng, Microsoft WebXT Content Services
Abstract: Content moderation is a key, foundational component of any content service or platform, needed to offer friendly, meaningful and non-toxic content for consumers to enjoy and engage with, and for users to build an online community in which to interact and communicate. The moderation service is the safety gatekeeper on which other features are built, such as content recommendation and personalization, targeted advertising and so on. However, there are many significant practical challenges that we face on a daily basis. In this talk, we will present trending solutions for content moderation in the tech industry, which leverage both Artificial Intelligence (AI) and Human Intelligence (HI) to overcome multi-dimensional obstacles and to achieve goals from multiple perspectives in real practice.
Title: Distinguished Women Scholars Lecture: The card game SET and some results in extremal combinatorics
Speaker: Dr. Lisa Sauermann, MIT
Location: via Zoom - Registration required
Event type: Colloquia
FREE AND OPEN TO EVERYONE
This talk is aimed at a general audience and does not require a mathematics background. The talk will start by discussing the popular card game SET and in particular the question of how many cards one can have in this game without creating a so-called "SET". Considering this question for extended versions of the game, we will find a connection to a recent breakthrough result of Ellenberg and Gijswijt on a famous problem in the area of extremal combinatorics. The talk will also discuss related results due to the speaker, some of which are of a geometric flavor.
Our Distinguished Women Scholars Lecture series was established by the Vice-President Academic and Provost to bring distinguished women scholars to the University of Victoria.
Download the poster (PDF file)
Title: Leveraging spatial transcriptomics data to recover cell locations in single-cell RNA-seq with CeLEry
Speaker: Qihuang Zhang, Department of Epidemiology, Biostatistics and Occupational Health, McGill University
Abstract: Single-cell RNA sequencing (scRNA-seq) has transformed our understanding of cellular heterogeneity in health and disease, but the lack of physical relationships among dissociated cells has limited its applications. In this talk, we present CeLEry, a supervised deep learning algorithm to recover the spatial origins of cells in scRNA-seq by leveraging gene expression and spatial location relationships learned from spatial transcriptomics. CeLEry has a data augmentation procedure via a variational autoencoder to enlarge the training sample size, which improves the robustness of the method and overcomes noise in scRNA-seq. CeLEry can infer the spatial origins of cells in scRNA-seq at multiple levels, including 2D location as well as the spatial domain or tissue layer of a cell. CeLEry also provides uncertainty estimates for the recovered locations. This framework can be applied to study changes in cell distribution across cerebral cortex layers during the progression of Alzheimer's disease.
Title: Stable Matchings
Speaker: Ndiamé Ndiaye, McGill University
The set of stable matchings induces a distributive lattice. The supremum of the stable matching lattice is the boy-optimal (girl-pessimal) stable matching and the infimum is the girl-optimal (boy-pessimal) stable matching. The classical boy-proposal deferred-acceptance algorithm returns the supremum of the lattice, that is, the boy-optimal stable matching. In this paper, we study the smallest group of girls, called the minimum winning coalition of girls, that can act strategically, but independently, to force the boy-proposal deferred-acceptance algorithm to output the girl-optimal stable matching. We characterize the minimum winning coalition in terms of stable matching rotations and show that its cardinality can take on any value between $0$ and $\lfloor \frac{n}{2} \rfloor$, for instances with $n$ boys and $n$ girls. Our two main results concern the random matching model. First, the expected cardinality of the minimum winning coalition is small, specifically $(\frac{1}{2}+o(1))\log{n}$. This resolves a conjecture of Kupfer. Second, in contrast, a randomly selected coalition must contain nearly every girl to ensure it is a winning coalition asymptotically almost surely. Equivalently, for any $\varepsilon>0$, the probability a random group of $(1-\varepsilon)n$ girls is not a winning coalition is at least $\delta(\varepsilon)>0$. This is joint work with Sergey Norin and Adrian Vetta.
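For reference, a compact sketch of the boy-proposal deferred-acceptance algorithm mentioned above (assuming complete preference lists on both sides); it outputs the boy-optimal stable matching:

    def deferred_acceptance(boy_prefs, girl_prefs):
        # boy_prefs[b]: b's list of girls, best first; girl_prefs[g] likewise.
        # Returns the boy-optimal stable matching as a dict girl -> boy.
        rank = {g: {b: i for i, b in enumerate(prefs)}
                for g, prefs in girl_prefs.items()}
        next_choice = {b: 0 for b in boy_prefs}
        engaged = {}                        # girl -> boy
        free = list(boy_prefs)
        while free:
            b = free.pop()
            g = boy_prefs[b][next_choice[b]]
            next_choice[b] += 1
            if g not in engaged:
                engaged[g] = b
            elif rank[g][b] < rank[g][engaged[g]]:
                free.append(engaged[g])     # g upgrades; her old partner is free
                engaged[g] = b
            else:
                free.append(b)              # g rejects b; he proposes again later
        return engaged

    boys = {'a': ['x', 'y'], 'b': ['x', 'y']}
    girls = {'x': ['b', 'a'], 'y': ['a', 'b']}
    print(deferred_acceptance(boys, girls))   # {'x': 'b', 'y': 'a'}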
Title: Networks of Classifiers and Classifiers with Feedback - Fairness and Equilibria
Speaker: Sampath Kannan, University of Pennsylvania
Location: ECS 660
Join us for this talk in the Seminar Series: Mathematics of Ethical Decision-making Systems
Talk at 3:30 PM
Reception at 4:30 PM
Fairness in machine learning classification has been a topic of great interest given the increasing use of such classifiers in critical settings.
There are many possible definitions of fairness and many potential sources of unfairness. Given this complex landscape, most research has focused on studying single classifiers in isolation.
In reality an individual is subjected to a network of classifiers: for example, one is classified at each stage of life (school, college, employment to name a few), and one may also be classified in parallel by many classifiers (such as when seeking college admissions). In addition, individuals may modify their behavior based on their knowledge of the classifier, leading to equilibrium phenomena. Another feedback effect is that the result of the classifier may affect the features of an individual (or of the next generation) for future classifications.
In this talk we present work that takes the first steps in exploring questions of fairness in networks of classifiers and in systems with feedback. Given the inherent complexity of the analysis, our models are very stylized, but it is our belief that some of the qualitative conclusions apply to real-world situations.
***For those unable to attend this talk in person, we have a Zoom alternative. For the Zoom meeting ID/Passcode, please send an email to [email protected]. Thank you ***
Download Poster (PDF)
Title: Meta-clustering of Genomic Data
Speaker: Yingying Wei, Department of Statistics, The Chinese University of Hong Kong
Abstract: Like traditional meta-analysis that pools effect sizes across studies to improve statistical power, it is of increasing interest to conduct clustering jointly across datasets to identify disease subtypes for bulk genomic data and discover cell types for single-cell RNA-sequencing (scRNA-seq) data. Unfortunately, due to the prevalence of technical batch effects among high-throughput experiments, directly clustering samples from multiple datasets can lead to wrong results. The recent emerging meta-clustering approaches require all datasets to contain all subtypes, which is not feasible for many experimental designs.
In this talk, I will present our Batch-effects-correction-with-Unknown-Subtypes (BUS) framework. BUS is capable of correcting batch effects explicitly, grouping samples that share similar characteristics into subtypes, identifying features that distinguish subtypes, and enjoying a linear-order computational complexity. We prove the identifiability of BUS for not only bulk data but also scRNA-seq data whose dropout events suffer from missing not at random. We mathematically show that under two very flexible and realistic experimental designs—the "reference panel" and the "chain-type" designs—true biological variability can also be separated from batch effects. Moreover, despite the active research on analysis methods for scRNA-seq data, rigorous statistical methods to estimate treatment effects for scRNA-seq data—how an intervention or exposure alters the cellular composition and gene expression levels—are still lacking. Building upon our BUS framework, we further develop statistical methods to quantify treatment effects for scRNA-seq data.
Title: Hartree equation in the Schatten class
Speaker: Kenji Nakanishi , RIMS, Kyoto-Japan
Event type: Applied math seminar
Abstract: This is joint work with Sonae Hadama (Kyoto). We consider a system of Schrödinger equations with the Hartree interaction, which is a simplified mean-field model for fermions. The equation may be rewritten for operators, where the trace class corresponds to the case of finite total mass (L^2). Lewin and Sabin proved stability of some translation-invariant stationary solutions from physics, using the Strichartz estimate for the free equation in the Schatten class, where the mass is merely p-th power summable with respect to the number of particles for some p>1. In this case, the perturbation argument for the Duhamel integral is not as easy as in the scalar case, because the Schatten class is not simply embedded into or interpolated with the space-time Lebesgue norms for the Strichartz estimate. We propose a framework to solve the equation in the Schatten class. The main novelties are norms for propagators corresponding to the best constants of the Strichartz estimate in the Schatten class, and a Schatten version of the Christ-Kiselev lemma for the Duhamel integral on operators.
Title: Asymptotic Distribution of Quadratic Forms
Speaker: Sumit Mukherjee, Columbia University
Event type: Probability and Dynamics seminar
Please email the organizer for the Zoom link.
Abstract: In this talk we will give an exact characterization for the asymptotic distribution of quadratic forms in IID random variables with finite second moment, where the underlying matrix is the adjacency matrix of a graph. In particular we will show that the limit distribution of such a quadratic form can always be expressed as the sum of three independent components: a Gaussian, a (possibly) infinite sum of centered chi-squares, and a Gaussian with a random variance. As a consequence, we derive necessary and sufficient conditions for asymptotic normality, and universality of the limiting distribution.
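As a toy companion to this result (my own sketch, not from the talk), one can sample such quadratic forms for the adjacency matrix of an Erdős-Rényi graph and inspect the standardized distribution:

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(1)
    A = nx.to_numpy_array(nx.erdos_renyi_graph(200, 0.05, seed=1))

    n = A.shape[0]
    X = rng.standard_normal((10_000, n))      # IID entries, finite second moment
    Q = np.einsum('si,ij,sj->s', X, A, X)     # quadratic forms x' A x
    Z = (Q - Q.mean()) / Q.std()
    print(Z.mean(), Z.std(), (Z**3).mean())   # skewness near 0 suggests normality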
Title: Product structure of graph classes with bounded treewidth
Speaker: Robert Hickingbotham, Monash University
Date and time: 29 Sep 2022, 3:30pm - 4:30pm
Abstract: This talk will introduce the topic of graph product structure theory. I will show that many graphs with bounded treewidth can be described as subgraphs of the strong product of a graph with smaller treewidth and a bounded-size complete graph. To this end, define the underlying treewidth of a graph class $\mathcal{G}$ to be the minimum non-negative integer $c$ such that, for some function $f$, for every graph $G \in \mathcal{G}$ there is a graph $H$ with $\mathrm{tw}(H) \leq c$ such that $G$ is isomorphic to a subgraph of $H \boxtimes K_{f(\mathrm{tw}(G))}$. I'll introduce disjointed partitions of graphs and show they determine the underlying treewidth of any graph class. Using this result, I will show that the class of planar graphs has underlying treewidth 3; the class of $K_{s,t}$-minor-free graphs has underlying treewidth $s$ (for $t \geq \max\{s,3\}$); and the class of $K_t$-minor-free graphs has underlying treewidth $t-2$. This is joint work with Rutger Campbell, Katie Clinch, Marc Distel, Pascal Gollin, Kevin Hendrey, Tony Huynh, Freddie Illingworth, Youri Tamitegama, Jane Tan and David Wood [https://arxiv.org/abs/2206.02395].
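As a small concrete illustration of the host object $H \boxtimes K_c$ in the theorem (my own toy construction using networkx, not the paper's):

    import networkx as nx

    H = nx.path_graph(6)            # a host of treewidth 1
    Kc = nx.complete_graph(3)       # a bounded-size complete graph
    P = nx.strong_product(H, Kc)    # the strong product H boxtimes K_3
    print(P.number_of_nodes(), P.number_of_edges())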
Title: Type III Noncommutative Geometry and hyperbolic groups
Speaker: Heath Emerson, University of Victoria
Location: DSB C114
Event type: Operator theory seminar
Abstract: A Gromov hyperbolic group is a group with a certain large-scale negative-curvature property. Almost all groups are hyperbolic (e.g. the fundamental group of a compact Riemann surface of genus g is hyperbolic unless g = 0 or 1). The theory of hyperbolicity is important in topology, in the classification of manifolds. Hyperbolic groups are also interesting from the point of view of dynamical systems. Any hyperbolic group can be compactified by adding a boundary to it. The boundary is a compact metrizable space on which the group acts by homeomorphisms. These boundary actions of hyperbolic groups encode, in special cases, the asymptotic behaviour of geodesics on negatively curved surfaces, and determine simple purely infinite C*-algebras with Type III von Neumann closures. This means that the traditional tools of Noncommutative Geometry cannot be used to endow the corresponding `noncommutative spaces' with geometric structure. In these talks we report on progress in developing a `twisted' NCG for them, building on previous work of the author and Bogdan Nica.
Title: Entropy upper bounds for Glass networks
Speaker: Benjamen Wild, University of Victoria
Abstract: A Glass network is a system of first-order ODEs with discontinuous right-hand side coming from step-function terms. The "ON/OFF" switching dynamics from the step functions makes Glass networks effective at modelling the switching behaviour typical of gene and neural networks. They also have potential application as models of true random number generators (TRNGs) in electronic circuits. As random number generators, it is desirable for networks to behave as irregularly as possible to thwart potential hacking attempts. Thus, a measure of irregularity is necessary for the analysis of proposed circuit designs; the cybersecurity industry wants bit sequences generated by the circuit to have positive entropy. The nature of the discontinuities allows Glass networks to be transformed into discrete-time dynamical systems, where discrete maps represent transitions between boxes in phase space, and all possible box transitions are encoded in a directed graph called the transition graph (TG). Dynamics on the TG naturally allows the network dynamics to be represented by shift spaces whose alphabet of symbols represents boxes. For shift spaces, entropy is used to gauge dynamical irregularity, which makes it a natural measure for the application to TRNGs. Previously it was shown that the entropy of the TG acts as an upper bound for the entropy of the actual dynamics realized by the network. By incorporating more dynamical information from the continuous system, we have shown that the TG can be reduced to achieve more accurate entropy upper bounds. We demonstrate this on examples and use numerical simulations to gauge the accuracy of our improved upper bounds.
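A minimal sketch of the coarser bound mentioned above, assuming the TG is given as a 0-1 adjacency matrix: for the associated shift of finite type, the topological entropy is the logarithm of the spectral radius.

    import numpy as np

    def tg_entropy_upper_bound(A):
        # A[i][j] = 1 if box j is reachable from box i in one step.
        eigenvalues = np.linalg.eigvals(np.asarray(A, dtype=float))
        return float(np.log(max(abs(eigenvalues).max(), 1.0)))  # entropy >= 0

    A = [[0, 1, 1],
         [1, 0, 1],
         [1, 1, 0]]                      # toy transition graph on three boxes
    print(tg_entropy_upper_bound(A))     # log(2) for this example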
Title: Ensembling Classification Models Based on Phalanxes of Variables with Applications in Drug Discovery
Speaker: Dr. Jabed Tomal, Department of Mathematics and Statistics, Thompson Rivers University
Abstract: Statistical detection of a rare class of objects in a two-class classification problem can pose several challenges. Because the class of interest is rare in the training data, there is relatively little information in the known class response labels for model building. At the same time the available explanatory variables are often moderately high dimensional. In the four assays of our drug-discovery application, compounds are active or not against a specific biological target, such as lung cancer tumor cells, and active compounds are rare. Several sets of chemical descriptor variables from computational chemistry are available to classify the active versus inactive class; each can have up to thousands of variables characterizing molecular structure of the compounds. The statistical challenge is to make use of the richness of the explanatory variables in the presence of scant response information. Our algorithm divides the explanatory variables into subsets adaptively and passes each subset to a base classifier. The various base classifiers are then ensembled to produce one model to rank new objects by their estimated probabilities of belonging to the rare class of interest. The essence of the algorithm is to choose the subsets such that variables in the same group work well together; we call such groups phalanxes.
Title: UVic Math Competition
Location: CLE C108
Event type: Education and outreach
The UVic Mathematics Competition is held annually in the fall. This year it will be held on Monday, September 26, 2022 between 3:00-5:00 pm in CLE C108. There are monetary prizes. To participate, just show up. The competition is open to all undergraduate students at UVic, including first year students. In the past some prizes were won by first year students.
Here are some recent question papers: 2019 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006.
Title: Spanning trees and loop soups on surfaces
Speaker: Gourab Ray, University of Victoria
Abstract: A spanning tree is a connected subgraph of a graph with no cycles. I will explain a well-known connection between such collections of trees and certain Poissonian collections of loops, and a magical algorithm (known as Wilson's algorithm) to sample such trees very fast. Then I will try to explain a recent extension to the multiply connected setting, and pose an open question at the end. Joint work with N. Berestycki and B. Laslier.
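For the curious, a compact sketch of Wilson's algorithm on a finite graph: repeatedly run a random walk until it hits the current tree, and keep the loop-erasure of the walk (implemented here by the standard trick of remembering only the last exit from each vertex).

    import random

    def wilson_ust(graph, root, seed=0):
        # graph: dict mapping each vertex to a list of its neighbours.
        # Returns a uniform spanning tree as a dict child -> parent.
        rng = random.Random(seed)
        in_tree, parent = {root}, {}
        for start in graph:
            if start in in_tree:
                continue
            u, succ = start, {}
            while u not in in_tree:              # walk until the tree is hit;
                succ[u] = rng.choice(graph[u])   # keeping last exits erases loops
                u = succ[u]
            u = start                            # retrace the loop-erased path
            while u not in in_tree:
                parent[u] = succ[u]
                in_tree.add(u)
                u = succ[u]
        return parent

    # Example usage: for a grid graph given as adjacency lists, one would call
    # wilson_ust(grid_adjacency, root=(0, 0)).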
Title: On the ergodicity of a class of 1-dimensional probabilistic cellular automata with size-3 neighbourhoods
Speaker: Moumanti Podder, Indian Institute of Science Education and Research (IISER) Pune
See the abstract (PDF file).
Please contact the organizer for the Zoom link.
Title: Deciphering tissue microenvironment from Next Generation Sequencing data
Speaker: Dr. Jian Hu, Department of Human Genetics, Emory School of Medicine
ABSTRACT: The advent of high-throughput next-generation sequencing (NGS) technologies has transformed our understanding of cell biology and human disease. Since its early adoption by the scientific community, the use of NGS has become widespread, and the technology has improved rapidly. It is now common for laboratories to assay genome-wide transcriptomes of thousands of cells in a single scRNA-seq experiment. In addition, technologies that enable the measurement of new information, for example chromatin accessibility, protein quantification, and spatial location, have been developed. To take full advantage of this multi-modality information when analyzing NGS data, new methods are needed. This seminar will introduce several machine learning algorithms for NGS data analysis with different aims, including cell type classification, spatial domain detection, and tumor microenvironment annotation.
KEYWORDS: single cell RNA sequencing (scRNA-seq), Spatial transcriptomics (ST), tumor microenvironment, machine learning
Title: Disjoint isomorphic balanced clique subdivisions
Speaker: Joseph Hyde, University of Victoria
Abstract: A classical result, due to Bollobás and Thomason, and independently Komlós and Szemerédi, states that a graph with average degree $O(k^2)$ contains a $K_k$-subdivision. We study two directions extending this result.
Firstly, Verstraëte conjectured in 2002 that the quadratic bound $O(k^2)$ already guarantees two vertex-disjoint isomorphic copies of a $K_k$-subdivision. Secondly, Thomassen conjectured in 1984 that for each $k \in \mathbb{N}$ there is some $d = d(k)$ such that every graph with average degree at least $d$ contains a balanced subdivision of $K_k$. Recently, Liu and Montgomery confirmed Thomassen's conjecture, but the optimal bound on $d(k)$ remains open.
In this talk, we show that the quadratic bound $O(k^2)$ suffices to force a balanced $K_k$-subdivision. This gives the optimal bound on $d(k)$ needed in Thomassen's conjecture and implies the existence of $O(1)$ many vertex-disjoint isomorphic $K_k$-subdivisions, confirming Verstraëte's conjecture in a strong sense.
Title: Groupoid C*-algebras and the Elliott classification program
Speaker: Ian Putnam, University of Victoria
Abstract: The construction of C*-algebras from groupoids is a very general method for constructing C*-algebras, including many of great importance. I will give a short review of the construction for étale groupoids. The Elliott classification program for C*-algebras has been a huge undertaking over the past three decades and has given many new insights which were unimagined thirty years ago. I will give a short overview of the subject (from a non-expert's perspective). The obvious question linking these topics is: which C*-algebras classified by Elliott arise from groupoids? I will discuss various results answering this. I will try to keep the two talks at a fairly elementary level, although this will involve avoiding a lot of technical issues.
Title: Independence Testing with Permutations
Speaker: Gabriel Crudele, University of Victoria
Location: CLE A216
See abstract (PDF)
Title: Tight Bounds on 3-Neighbor Bootstrap Percolation
Speaker: Abel Romer, University of Victoria
Date and time: 29 Aug 2022, 11:00am - 12:00pm
Event type: Graduate dissertations
See announcement and abstract (PDF).
Title: Statistical Estimation with Differential Privacy
Speaker: Gautam Kamath, Cheriton School of Computer Science, University of Waterloo
Date and time: 25 Aug 2022, 3:30pm - 4:30pm
Download poster PDF
Naively implemented, statistical procedures are prone to leaking information about their training data, which can be problematic if the data is sensitive. Differential privacy, a rigorous notion of data privacy, offers a principled framework for dealing with these issues. I will survey recent results in differentially private statistical estimation, presenting a few vignettes which highlight novel challenges for even the most fundamental problems, and suggesting solutions to address them. Along the way, I'll mention connections to tools and techniques in a number of fields, including information theory and robust statistics.
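As a concrete textbook example of differentially private estimation (not taken from the talk), the Laplace mechanism releases a mean with noise scaled to the query's sensitivity:

    import numpy as np

    def dp_mean(x, lo, hi, epsilon, seed=0):
        # Releases an epsilon-differentially private mean of data in [lo, hi].
        rng = np.random.default_rng(seed)
        x = np.clip(x, lo, hi)
        sensitivity = (hi - lo) / len(x)   # changing one record moves the mean
        noise = rng.laplace(scale=sensitivity / epsilon)
        return x.mean() + noise

    data = np.random.default_rng(1).uniform(0, 100, size=1_000)
    print(dp_mean(data, lo=0, hi=100, epsilon=0.5))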
Title: The Speed and Threshold of the Biased Hamilton Cycle Game.
Speaker: Bruce Reed, McGill University
In the biased Hamilton Cycle Maker-Breaker game, two players alternate choosing edges from a complete graph. In the game with bias b, Maker chooses one previously unchosen edge in each turn and Breaker chooses b. The game is Maker-win for the given bias if Maker can ensure she chooses the edges of a Hamilton Cycle and Breaker-win otherwise.
Letting $n$ be the number of vertices, if the bias is 0 then Maker wins the game in $n$ moves. On the other hand, if the bias $b=b(n)$ is $\binom{n}{2}-1$ then Breaker wins. Furthermore, if Breaker wins for some bias $b$, then she also wins for bias $b+1$. We discuss the threshold at which the game becomes Breaker-win, and the number of moves Maker needs to ensure she wins for $b$ below this threshold, which is called the speed of the game.
Warmup: What bounds can you obtain on the threshold and speed without a literature search? Any ideas how to proceed?
Title: Deep Learning Methods May Not Outperform Other Machine Learning Methods on Analyzing Genomic Studies
Speaker: Shaoze Zhou, University of Victoria
Location: Virtual Defence
Notice of the Final Oral Examination for the Degree of Master of Science of
SHAOZE ZHOU
BSc (University of Victoria, 2015)
"Deep Learning Methods May Not Outperform Other Machine Learning Methods on Analyzing Genomic Studies"
Department of Mathematics and Statistics
Friday, August 19, 2022 4:00 P.M.
Virtual Defence
Supervisory Committee:
Dr. Xuekui Zhang, Department of Mathematics and Statistics, University of Victoria (Co-Supervisor)
Dr. Min Tsao, Department of Mathematics and Statistics, UVic (Co-Supervisor)
External Examiner:
Dr. Xiaojian Shao, Digital Technologies Research Centre, National Research Council Canada
Chair of Oral Examination:
Dr. Dennis Hore, Department of Chemistry, UVic
Deep Learning (DL) has been broadly applied to solve big data problems in biomedical fields, and has been most successful in image processing. Recently, many DL methods have been applied to analyze genomic studies. However, genomic data usually have too small a sample size to fit a complex network, and they lack the common structural patterns of images that allow pre-trained networks or convolution layers to be exploited. The concern of overusing DL methods motivates us to evaluate the performance of DL methods versus popular non-deep Machine Learning (ML) methods for analyzing genomic data over a wide range of sample sizes. In this thesis, we conduct a benchmark study using the UK Biobank data and many random subsets of it with different sample sizes. The original UK Biobank data has about 500k patients. Each patient has comprehensive patient characteristics, disease histories, and genomic information, i.e., the genotypes of millions of Single-Nucleotide Polymorphisms (SNPs). We are interested in predicting the risk of three lung diseases: asthma, COPD, and lung cancer. There are 205,238 patients who have recorded disease outcomes for these three diseases. Five prediction models are investigated in this benchmark study: three non-deep machine learning methods (Elastic Net, XGBoost, and SVM) and two deep learning methods (DNN and LSTM). Besides the most popular performance metrics, such as the F1-score, we promote the hit curve, a visual tool for describing the performance of predicting rare events. We discovered that DL methods frequently fail to outperform non-deep ML in analyzing genomic data, even in large datasets with over 200k samples. The experimental results suggest not overusing DL methods in genomic studies, even with biobank-level sample sizes. The performance differences between DL and non-deep ML decrease as the sample size increases, which suggests that when the sample size is large, further increases in sample size bring more performance gain to DL methods. Hence, DL methods could become preferable for genomic data larger than those in this study.
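For concreteness, a minimal sketch (my own illustration) of the hit curve promoted above: rank subjects by predicted risk and count cumulative true cases among the top k, for each k.

    import numpy as np

    def hit_curve(y_true, y_score):
        order = np.argsort(-np.asarray(y_score))     # highest risk first
        hits = np.cumsum(np.asarray(y_true)[order])  # hits among the top k
        return np.arange(1, len(hits) + 1), hits

    y_true = [0, 1, 0, 0, 1, 1, 0, 0, 0, 1]
    y_score = [0.1, 0.9, 0.2, 0.3, 0.8, 0.4, 0.05, 0.6, 0.15, 0.7]
    k, hits = hit_curve(y_true, y_score)
    print(list(zip(k, hits)))   # a good rare-event model rises steeply at small k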
Title: C*-algebras constructed from factor groupoids and their analysis through relative K-theory and excision
Speaker: Mitchell Haslehurst, University of Victoria
Date and time: 17 Aug 2022, 12:00pm - 1:00pm
Location: David Strong Building Room C128
Notice of the Final Oral Examination for the Degree of Doctor of Philosophy of
MITCHELL HASLEHURST
MMath (University of Waterloo, 2016)
BA Hons. (Nipissing University, 2015)
"C*-algebras constructed from factor groupoids and their analysis through relative K-theory and excision"
David Strong Building Room C128
Dr. Ian Putnam, Department of Mathematics and Statistics, University of Victoria (Supervisor)
Dr. Marcelo Laca, Department of Mathematics and Statistics, UVic (Member)
Dr. Heath Emerson, Department of Mathematics and Statistics, UVic (Member)
Dr. Michel Lefebvre, Department of Physics and Astronomy, UVic (Outside Member)
Dr. Aaron Tikuisis, Department of Mathematics and Statistics, University of Ottawa
Dr. Terri Lacourse, Department of Biology, UVic
We address the problem of finding groupoid models for C*-algebras given some prescribed K-theory data. This is a reasonable question because a groupoid model for a C*-algebra reveals much about the structure of the algebra. A great deal of progress towards solving this problem has been made using constructions with inductive limits, subgroupoids, and dynamical systems. This dissertation approaches the question with a more specific methodology in mind, with factor groupoids.
In the first part, we develop a portrait of relative K-theory for C*-algebras using the general framework of Banach categories and Banach functors due to Max Karoubi. The purpose of developing such a portrait is to provide a means of analyzing the K-theory of an inclusion of C*-algebras, or more generally of a *-homomorphism between two C*-algebras. Another portrait may be obtained using a mapping cone construction and standard techniques (it is shown that the two presentations are naturally and functorially isomorphic), but for many examples, including the ones considered in the second part, the portrait obtained by Karoubi's construction is more convenient.
In the second part, we construct examples of factor groupoids and analyze the relationship between their C*-algebras. A factor groupoid setup (two groupoids with a surjective groupoid homomorphism between them) induces an inclusion of two C*-algebras, and therefore the portrait of relative K-theory developed in the first part, together with an excision theorem, can be used to elucidate the structure. The factor groupoids are obtained as quotients of AF-groupoids and certain extensions of Cantor minimal systems using iterated function systems. We describe the K-theory in both cases, and in the first case we show that the K-theory of the resulting C*-algebras can be prescribed through the factor groupoids.
Title: Statistics in Genomics and Pharmaceutical Science Conference
Date: 15 Aug to 17 Aug 2022
Location: University of Victoria
Event type: Conferences and workshops
Please see the conference website.
Title: Sub-phenotypes of Macrophages and Monocytes in COPD and Molecular Pathways for Novel Drug Discovery
Speaker: Yichen Yan, University of Victoria
Location: Virtual Defense
YICHEN YAN
BSc (Xi'an University of Technology, 2020)
"Sub-phenotypes of Macrophages and Monocytes in COPD and Molecular Pathways for Novel Drug Discovery"
Dr. Xuekui Zhang, Department of Mathematics and Statistics, University of Victoria (Supervisor)
Dr. Min Tsao, Department of Mathematics and Statistics, UVic (Member)
Dr. Shijia Wang, School of Statistics and Data Science, Nankai University
Dr. Adam Con, School of Music, UVic
Chronic obstructive pulmonary disease (COPD) is a common respiratory disorder and the third leading cause of mortality. In this thesis we performed a clustering analysis of four specific immune cells in the GSE136831 dataset, using the default recommended parameters of the Seurat package in R, and obtained 16 subclasses with various COPD and cell-type proportions. Clusters 3, 7 and 9 had more pronounced independence and were all composed of macrophage-dominated control samples. The results of the pseudo-time analysis based on Monocle 3 package in R showed three different patterns of cell evolution. All started with a high percentage of COPD states, one ended with a high rate of Control states, and the other two still finished with a high percentage of COPD states. The results of differentially expressed gene analysis corroborated the existence of finer clusters and provided support for their rational categorization based on the similar marker genes. The gene ontology (GO) enrichment analysis for cluster 0 and cluster 6 provided feedback on enriched biological process terms with significant and unique characteristics, which could help explore latent novel COPD treatment directions. Finally, some top-ranked potential pharmaceutical molecules were searched via the connectivity map (cMAP) database.
Title: Random Forests on Trees
Speaker: Ben Xiao, University of Victoria
Date and time: 05 Aug 2022, 9:00am - 10:00am
BEN XIAO
"Random Forests on Trees"
Dr. Gourab Ray, Department of Mathematics and Statistics, University of Victoria (Supervisor)
Dr. Anthony Quas, Department of Mathematics and Statistics, UVic (Member)
Dr. Tyler Helmuth, Department of Mathematical Sciences, Durham University
Dr. David Harrington, Department of Chemistry, UVic
This thesis focuses on a mathematical model from statistical mechanics called the Arboreal gas. The Arboreal gas on a graph G is Bernoulli bond percolation on G conditioned to contain no "loops". This model is related to other models such as the random cluster measure. We mainly study the Arboreal gas and a related model on the d-ary wired tree, which is simply the d-ary tree with the leaves identified as a single vertex. Our first result is finding a distribution on the infinite d-ary tree that is the weak limit, as the height n grows, of the Arboreal gas on the d-ary wired tree of height n. We then study a similar model on the infinite d-ary wired tree, namely Bernoulli bond percolation conditioned to have at most one loop. For this model we have only a partial result, which proves that the ratio of the partition function of the one-loop model on the wired tree of height n to that of the Arboreal gas on the wired tree of height n goes to 0 as n → ∞. This allows us to prove that certain key quantities of this model are actually the same as their analogues in the Arboreal gas on the d-ary wired tree, under an additional assumption.
Title: scAnnotate: An Automated Cell Type Annotation Tool for Single-cell RNA-Sequencing Data
Speaker: Xiangling Ji, University of Victoria
Date and time: 27 Jul 2022, 1:00pm - 2:00pm
XIANGLING JI
BSc (Simon Fraser University, 2019)
"scAnnotate: An Automated Cell Type Annotation Tool for Single-cell RNA-Sequencing Data"
Wednesday, July 27, 2022 1:00 P.M.
Dr. Longhai Li, Department of Mathematics and Statistics, University of Saskatchewan
Dr. Alison Murray, Department of Anthropology, UVic
Single-cell RNA-sequencing (scRNA-seq) technology enables researchers to investigate a genome at the cellular level with unprecedented resolution. An organism consists of a heterogeneous collection of cell types, each of which plays a distinct role in various biological processes. Hence, the first step of scRNA-seq data analysis often is to distinguish cell types so that they can be investigated separately. Researchers have recently developed several automated cell type annotation tools based on supervised machine learning algorithms, requiring neither biological knowledge nor subjective human decisions. Dropout is a crucial characteristic of scRNA-seq data which is widely utilized in differential expression analysis but not by existing cell annotation methods. We present scAnnotate, a cell annotation tool that fully utilizes dropout information. We model every gene's marginal distribution using a mixture model, which describes both the dropout proportion and the distribution of the non-dropout expression levels. Then, using an ensemble machine learning approach, we combine the mixture models of all genes into a single model for cell-type annotation. This combining approach can avoid estimating numerous parameters in the high-dimensional joint distribution of all genes. Using fourteen real scRNA-seq datasets, we demonstrate that scAnnotate is competitive against nine existing annotation methods, and that it accurately annotates cells when training and test data are (1) similar, (2) cross-platform, and (3) cross-species. Of the cells that are incorrectly annotated by scAnnotate, we find that a majority are different from those of other methods, which suggests that further ensembling scAnnotate with other methods may largely improve annotation precision.
Title: Optimality Conditions for Cardinality Constrained Optimization Problems
Speaker: Zhuoyu Xiao, University of Victoria
Date and time: 19 Jul 2022, 10:00am - 11:00am
ZHUOYU XIAO BSc (Jinan University, 2020)
"Optimality Conditions for Cardinality Constrained Optimization Problems"
Dr. Jane Ye, Department of Mathematics and Statistics, University of Victoria (Supervisor)
Dr. Yu-Ting Chen, Department of Mathematics and Statistics, UVic (Member)
Dr. Tim Hoheisel, Department of Mathematics and Statistics, McGill University
Dr. Rogério de Sousa, Department of Physics and Astronomy, UVic
Cardinality constrained optimization problems (CCOP) are a new class of optimization problems with many applications. In this thesis, we propose a new framework called mathematical programs with disjunctive subspaces constraints (MPDSC), a special case of mathematical programs with disjunctive constraints (MPDC), to investigate CCOP. Our method is different from the relaxed complementarity-type reformulation in the literature.
The first contribution of this thesis is that we study various stationarity conditions for MPDSC and apply them to CCOP. In particular, we obtain new strong (S-) stationarity and new Mordukhovich (M-) stationarity for CCOP, which are sharper than those obtained from the relaxed complementarity-type reformulation.
The second contribution of this thesis is that we obtain some new results for MPDSC, which do not hold for MPDC in general. We show that many constraint qualifications like relaxed constant positive linear dependence (RCPLD) coincide with their piecewise versions for MPDSC. Based on these results, we prove that RCPLD implies error bounds for MPDSC. These two results also hold for CCOP. All of these new constraint qualifications for CCOP derived from MPDSC are weaker than those from the relaxed complementarity-type reformulation.
Title: IMAGINING UVic (Inspiring Mathematical Growth and Intuition in Girls)
Date and time: 04 Jul to 08 Jul 2022, 10:00am - 3:30pm
The UVic Department of Mathematics and Statistics, in conjunction with the Association for Women in Mathematics and the Pacific Institute for the Mathematical Sciences, is pleased to announce an exciting new program for high school students. IMAGINING UVic (Inspiring Mathematical Growth and Intuition in Girls) is a summer camp and seminar series aimed at encouraging young women to pursue STEM fields. More information, including how to apply, can be found on our website: https://onlineacademiccommunity.uvic.ca/imagininguvic/
Title: Dynamical Classification of the Two-body and Hill's Lunar Problems with Quasi-homogeneous Potentials
Speaker: Lingjun Qian, University of Victoria
Date and time: 30 Jun 2022, 3:00pm - 4:00pm
Title: Erdos-Deep Families of Arithmetic Progressions
Speaker: Tao Gaede, University of Victoria
Date and time: 21 Jun 2022, 10:00am - 11:00am
Location: David Strong Building C128
Title: New Developments in Four Dimensions
Date and time: 13 Jun to 17 Jun 2022, 9:00am - 5:00pm
Samantha Allen (University of Georgia)
Gregory Arone* (Stockholm University)
Irving Dai (Stanford University)
David Gabai (Princeton University)
Daniel Hartman (University of Georgia)
Kyle Hayden (Columbia University/Rutgers University-Newark)
Alexandra Kjuchukova (University of Notre Dame)
Robin Koytcheff (University of Louisiana at Lafayette)
Alexander Kupers (University of Toronto Scarborough)
Miriam Kuzbary (Georgia Institute of Technology)
Peter Lambert-Cole (University of Georgia)
Arunima Ray (Max Planck Institute for Mathematics)
Keiichi Sakai* (Shinshu University)
Victor Turchin (Kansas State University)
Tadayuki Watanabe* (Kyoto University)
Ian Zemke (Princeton University)
*virtual talk
For more information about this event, please see the conference website.
This conference will bring together experts in various aspects of four-dimensional topology. Themes include diffeomorphism groups of four-manifolds, construction and detection of exotic four-manifolds, and trisections of four-manifolds. In addition to standard plenary talks, we will have a lightning talk session open to submissions from all participants.
This conference is anticipated to occur in-person at the University of Victoria. A limited number of talks will be over Zoom, with most speakers presenting in-person.
Registration space for this conference is limited. To apply to participate in this conference, please visit the conference website and fill out the application form by April 10th.
Ryan Budney (University of Victoria) ([email protected])
Jeffrey Meier (Western Washington University) [email protected]
Maggie Miller (Stanford University) [email protected]
Title: Statistical Research on COVID-19 Response
Speaker: Xiaolin Huang, University of Victoria
Date and time: 27 May 2022, 1:00pm - 2:00pm
Notice of the Final Oral Examination for the Degree of Master of Science
Xiaolin Huang
BSc (Washington University, 2019)
Supervisory Committee
Dr. Li Xing, Department of Mathematics and Statistics, UVic (Member)
External Examiner
Dr. You Liang, Department of Mathematics, Toronto Metropolitan University
Chair of Oral Examination
Dr. Kirstin Lane, School of Exercise Science, Physical and Health Education, UVic
COVID-19 has affected the lives of people worldwide. This thesis includes two studies on the response to COVID-19 using statistical methods. The first study explores the impact of lockdown timing on COVID-19 transmission across US counties. We used Functional Principal Component Analysis to extract COVID-19 transmission patterns from county-wise case counts, and used machine learning methods to identify risk factors, with the timing of lockdowns being the most significant. In particular, we found a critical time point for lockdowns, as lockdowns implemented after this time point were associated with significantly more cases and faster spread. The second study proposes an adaptive sample pooling strategy for efficient COVID-19 diagnostic testing. When testing a cohort, our strategy dynamically updates the prevalence estimate after each test, and uses the updated information to choose the optimal pool size for the subsequent test. Simulation studies showed that our strategy reduces the number of tests required to test a cohort compared to traditional pooling strategies.
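To illustrate the flavour of the second study, here is a simplified Dorfman-style calculation (my own sketch, not the thesis's exact adaptive strategy): given a current prevalence estimate p, choose the pool size that minimizes the expected number of tests per person.

    import numpy as np

    def expected_tests_per_person(k, p):
        # One pooled test per k people, plus k retests when the pool is positive.
        return 1.0 / k + 1.0 - (1.0 - p) ** k

    def optimal_pool_size(p, k_max=64):
        ks = np.arange(2, k_max + 1)
        return int(ks[np.argmin([expected_tests_per_person(k, p) for k in ks])])

    for p in (0.01, 0.05, 0.15):
        print(p, optimal_pool_size(p))   # pools shrink as prevalence rises

An adaptive scheme in this spirit would re-estimate p after each test result and recompute the pool size before the next pooled test.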
Title: PIMS Postdoctoral Seminar: Subgraphs in Semi-random Graphs
Speaker: Natalie Clare Behague, University of Victoria
Date and time: 25 May 2022, 9:30am - 10:30am
Location: via Zoom requires registration
The semi-random graph process can be thought of as a one player game. Starting with an empty graph on n vertices, in each round a random vertex u is presented to the player, who chooses a vertex v and adds the edge uv to the graph (hence 'semi-random'). The goal of the player is to construct a small fixed graph G as a subgraph of the semi-random graph in as few steps as possible. I will discuss this process, and in particular the asymptotically tight bounds we have found on how many steps the player needs to win. This is joint work with Trent Marbach, Pawel Pralat and Andrzej Rucinski.
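To make the process concrete, here is a toy simulator (my own illustration with a deliberately naive player strategy, not the optimal play analyzed in the talk) in which the player tries to build a triangle:

    import random

    def semi_random_until_triangle(n, seed=0):
        # Each round: a random vertex u is offered; the player picks v.
        # Naive strategy: alternate connecting the offered vertex to 0 and 1.
        rng = random.Random(seed)
        edges, steps = set(), 0
        while True:
            steps += 1
            u = rng.randrange(n)             # the random vertex offered
            v = steps % 2                    # the player's choice
            if u == v:
                v = 1 - v
            edges.add(frozenset((u, v)))
            # A triangle {0, 1, x} is complete once all three edges appear.
            if frozenset((0, 1)) in edges and any(
                    frozenset((0, x)) in edges and frozenset((1, x)) in edges
                    for x in range(2, n)):
                return steps

    print(semi_random_until_triangle(100))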
Speaker Biography: Natalie completed her PhD in 2020 at Queen Mary University of London under the supervision of Robert Johnson. Prior to this, she completed both her Bachelors and Masters degrees at the University of Cambridge. After finishing her PhD she spent a year at Ryerson University in Toronto with the Graphs at Ryerson research group. She has worked on various problems under the broad umbrella of probabilistic and extremal combinatorics, including automata, graph saturation, graph factorization and probabilistic zero-forcing (a model for infection or rumour spreading across networks). Since the start of 2022 she has been a postdoctoral fellow at the University of Victoria, working with Natasha Morrison and Jonathan Noel.
Read more about our PIMS PDFs on our Medium feature here.
For more information and registration: https://www.pims.math.ca/seminars/PIMSPDF
Download poster (PDF).
Title: Free Screening of Secrets of the Surface: The Mathematical Vision of Maryam Mirzakhani
Speaker: hosted by UVic Math & Stats EDI and Women in Math UVic Student Chapter
Join us for a free screening of Secrets of the Surface: The Mathematical Vision of Maryam Mirzakhani in celebration of Women in Math Day!
Examine the life and mathematical work of Maryam Mirzakhani, an Iranian immigrant to the United States who became a superstar in her field. In 2014, prior to her untimely death at the age of 40, she became both the first woman and the first Iranian to be awarded the Fields Medal, the most prestigious award in mathematics, often equated in stature with the Nobel Prize.
Title: Victoria Probability Day
University of Victoria will host a one-day mini-conference focussing on recent developments in probability theory. The goal of this endeavour is to bring together probabilists in the Pacific Northwest and provide a platform for possible future collaborations.
For more information, list of speakers and abstracts see the conference website.
Title: Well-posedness and Blowup Results for the Swirl-free and Axisymmetric Primitive Equations in a Cylinder
Speaker: Narges Sadat Hosseini Khajouei, University of Victoria
Date and time: 22 Apr 2022, 10:00am - 11:00am
Location: CLE B007
Dr. Slim Ibrahim, Department of Mathematics and Statistics, University of Victoria (Co-Supervisor)
Dr. David Goluskin, Department of Mathematics and Statistics, UVic (Co-Supervisor)
Dr. Quyuan Lin, Department of Mathematics, University of California Santa Barbara
Dr. Raad Nashmi, Department of Biology, UVic
This thesis is devoted to the motion of the incompressible and inviscid flow which is axisymmetric and swirl-free in a cylinder, where the hydrostatic approximation is made in the axial direction. It addresses the problem of local existence and uniqueness in spaces of analytic functions for the Cauchy problem for the inviscid primitive equations, also called the hydrostatic incompressible Euler equations, on a cylinder, under some extra conditions. Following the method introduced by Kukavica-Temam-Vicol-Ziane in Int. J. Differ. Equ. 250 (2011), we use a suitable extension of the Cauchy-Kowalewski theorem to construct a locally-in-time, unique and real-analytic solution, and find the explicit rate of decay of the radius of real-analyticity. Furthermore, this thesis discusses the problem of finite-time blowup of the solution of the system of equations. Following part of the method introduced by Wong in Proc. Am. Math. Soc. 143 (2015), we prove that the first derivative of the radial velocity blows up in time, using primary functional analysis tools for a certain class of initial data. Taking the solution frozen at r = 0, we can apply an a priori estimate on the second derivative of the pressure term to derive a Riccati-type inequality.
Title: The Dynamics of Pythagorean Triples
Speaker: Nazim Acar, University of Victoria
Date and time: 14 Apr 2022, 11:00am - 12:00pm
Nazim Acar
BSc (Uludağ Universitesi, 1998)
Dr. Christopher Bose, Mathematics and Statistics, University of Victoria (Supervisor)
Dr. Ahmed Sourour, Mathematics and Statistics, University of Victoria (Member)
Dr. Shafiqul Islam, School of Mathematical and Computational Sciences, University of Prince Edward Island
Dr. Richard Keeler, Department of Physics, UVic
A Pythagorean Triple (PT) is a triple of positive integers (a, b, c) that satisfies $a^2 + b^2 = c^2$. By requiring two of the entries to be relatively prime, (a, b, c) becomes a Primitive Pythagorean Triple (PPT); this removes trivially equivalent PTs. Following up on the unpublished paper by D. Romik [1], we develop a sequence of mappings and show how each PPT has a unique path starting from one of the two initial nodes (3, 4, 5), (4, 3, 5). We explain a way of generating the PPTs through paper folding. Using various techniques from dynamics, we show how these mappings can be carried over to their conjugates on the first unit arc $x^2 + y^2 = 1$, $x, y \geq 0$, and the unit interval [0, 1]. Under these mappings and through the conjugacies we show that the PPTs, the pairs of rational points on the first unit arc, and the rational numbers on the unit interval correspond to each other, with the forward orbits exhibiting similar behavior. We identify infinite, σ-finite invariant measures for the one-dimensional systems. With the help of the developed conjugacies we extend the dynamics of the PPTs to the continued fraction expansion of real numbers in the unit interval and show a connection to the Euclidean algorithm. We show that the dynamical system is conservative and ergodic.
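For background, the classical ternary tree of PPTs (due to Berggren) realizes every PPT as a descendant of (3, 4, 5) under three linear maps; the thesis studies a related but different system with two initial nodes. A quick sketch:

    def children(t):
        # Berggren's three linear maps; each PPT has exactly three children.
        a, b, c = t
        return [( a - 2*b + 2*c,  2*a - b + 2*c,  2*a - 2*b + 3*c),
                ( a + 2*b + 2*c,  2*a + b + 2*c,  2*a + 2*b + 3*c),
                (-a + 2*b + 2*c, -2*a + b + 2*c, -2*a + 2*b + 3*c)]

    level = [(3, 4, 5)]
    for _ in range(2):                    # two generations below the root
        level = [child for t in level for child in children(t)]
    print(level)                          # nine distinct PPTs
    assert all(a*a + b*b == c*c for a, b, c in level)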
Title: Dependent Random Choice: A Pretty Powerful Probabilistic Proof Technique
Speaker: Shannon Ogden, University of Victoria
The talk will be based on the survey paper "Dependent Random Choice" by Jacob Fox and Benny Sudakov.
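For reference, the basic lemma from that survey reads, up to notation, as follows: if a graph $G$ on $n$ vertices has average degree $d = 2e(G)/n$ and, for positive integers $a, m, r, t$, \[ \frac{d^t}{n^{t-1}} - \binom{n}{r}\left(\frac{m}{n}\right)^t \geq a, \] then $G$ contains a set $U$ of at least $a$ vertices such that every $r$ vertices in $U$ have at least $m$ common neighbours. The proof chooses $t$ random vertices, takes the vertices adjacent to all of them, and deletes a vertex from each "bad" $r$-subset with fewer than $m$ common neighbours.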
Title: Orthogonal Common-source and Distinctive-source Decomposition between High-dimensional Data Views
Speaker: Hai Shu, Biostatistics, NYU
Date and time: 08 Apr 2022, 1:00pm - 2:00pm
Zoom link.
Abstract: Modern biomedical studies often collect multi-view data, that is, multiple types of data measured on the same set of objects. A typical approach to the joint analysis of two high-dimensional data views/sets is to decompose each data matrix into three parts: a low-rank common-source matrix that captures the shared information across data views, a low-rank distinctive-source matrix that characterizes the individual information within each single data view, and an additive noise matrix. Existing decomposition methods often focus on the orthogonality between the common-source and distinctive-source matrices, but inadequately consider the more necessary orthogonal relationship between the two distinctive-source matrices. The latter guarantees that no more shared information is extractable from the distinctive-source matrices. We propose a novel decomposition method that defines the common-source and distinctive-source matrices from the L2 space of random variables rather than the conventionally used Euclidean space, with a careful construction of the orthogonal relationship between distinctive-source matrices. The proposed estimators of common-source and distinctive-source matrices are shown to be asymptotically consistent and to perform noticeably better than some state-of-the-art methods in both simulated and real data analyses.
Title: PIMS Distinguished Lecture - Projections and circles
Speaker: Malabika Pramanik, University of British Columbia
Please contact [email protected] for the Zoom link.
Large sets in Euclidean space should have large projections in most directions. Projection theorems in geometric measure theory make this intuition precise, by quantifying the words "large" and "most".
How large can a planar set be if it contains a circle of every radius? This is the quintessential example of a curvilinear Kakeya problem, central to many areas of harmonic analysis and incidence geometry.
What do projections have to do with circles?
The talk will survey a few landmark results in these areas and point to a newly discovered connection between the two.
Title: How to Lose at Tic-Tac-Toe, and Other (More Transitive) Games
Abstract: Achievement games are a class of combinatorial games in which two players take turns selecting points from a set (the board), with the goal of being the first to occupy one of the previously designated "winning" subsets. In this talk, we will consider the avoidance variant, in which the first player to occupy such a set loses the game. As a strategy-stealing argument can be used to show that an achievement game cannot be a second-player win, one might expect that the avoidance variant cannot be a first-player win. However, it turns out that we can find transitive avoidance games that are first-player wins for all board sizes which are not primes or powers of two. This talk is based on the paper "Transitive Avoidance Games" by J. Robert Johnson, Imre Leader, and Mark Walters.
Title: Dynamical classification of the two-body and Hill's lunar problems with quasi-homogeneous potentials.
Location: COR B111
As studied in many examples, higher-order corrections added to the Newtonian potential often provide more realistic and accurate quasi-homogeneous models in astrophysics. Important examples include the Schwarzschild and the Manev potentials. The quasi-homogeneous N-body problem aims to study the interaction between N point particles under a prescribed potential. The classical (Newtonian) Hill's lunar problem aims to improve the solution accuracy of the lunar motion obtained by solving the two-body (Earth-Moon) system. Hill's lunar equation under the Newtonian or homogeneous potentials has been derived from the Hamiltonian of the three-body problem in a uniformly rotating coordinate system with angular speed $\omega$, by using symplectic scaling and heuristic arguments on various physical quantities. In this talk, we first introduce a new variational method characterizing relative equilibria with minimal energy. This enables us to classify the dynamics in terms of global existence and singularity for all possible ranges of the parameters. Then we derive Hill's lunar problem with quasi-homogeneous potential, and finally, we implement the same ideas to demonstrate the existence of a ``black hole effect'' for a certain range of the parameters: below and at some energy threshold, invariant sets (in the phase space) with non-zero Lebesgue measure that either contain global solutions or solutions with singularity are constructed.
Title: First steps towards a quantitative Furstenberg criterion and applications
Speaker: Alex Blumenthal, Georgia Tech
Abstract: I will present our recent results on estimating the Lyapunov exponents of weakly-damped, weakly-dissipated stochastic differential equations. Our primary tool is a new, mildly-quantitative version of Furstenberg's criterion.
Spectral Theory
Org: Richard Froese (UBC), Dmitry Jakobson (McGill) and Mahta Khosravi (UBC)
BEN ADCOCK, Simon Fraser University
Stable sampling in Hilbert spaces [PDF]
In this talk we consider the problem of reconstructing an element of a Hilbert space in a particular basis, given its samples with respect to another basis. Such a problem lies at the heart of modern sampling theory. The last several decades have witnessed the development of a general framework for this problem, which, as we describe, admits a simple operator-theoretic interpretation in terms of finite sections of infinite matrices. Unfortunately, operators occurring in sampling problems are often non-self-adjoint. Hence, the spectral properties of the infinite-dimensional operator are not typically inherited at the finite-dimensional level, leading to issues with both convergence and stability.
Recently, much progress has been made towards the approximation of spectra and pseudospectra of non-self-adjoint linear operators. By using ideas developed therein, we present a new generalised sampling framework that overcomes these issues and possesses both guaranteed convergence and stability.
Joint work with Anders Hansen (Cambridge).
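A schematic numpy illustration of the finite-section idea (my own toy with a random stand-in matrix, not the speakers' framework): truncate the infinite change-of-basis matrix and solve a least-squares problem, with oversampling providing the stability that square truncations may lack.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 10, 40                      # reconstruction and sampling dimensions
    U = rng.standard_normal((m, n))    # stand-in for the truncated cross-Gramian
    coeffs = rng.standard_normal(n)    # unknown coefficients to recover
    samples = U @ coeffs               # samples in the other basis
    recovered, *_ = np.linalg.lstsq(U, samples, rcond=None)
    print(np.allclose(recovered, coeffs))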
TAYEB AISSIOU, McGill University
Semiclassical limits of eigenfunctions of the Laplacian on $\mathbb{T}^{n}$ [PDF]
We will present a proof of the conjecture formulated by D. Jakobson in 1995, which states that on an $n$-dimensional flat torus $\mathbb{T}^{n}$, the Fourier series of squares of the eigenfunctions $|\phi_\lambda|^2$ of the Laplacian have uniform $l^n$ bounds that do not depend on the eigenvalue $\lambda$. The proof is a generalization of the argument presented in papers [1,2] and requires a geometric lemma that bounds the number of codimension-one simplices which satisfy a certain restriction on an $n$-dimensional sphere $S^n(\lambda)$ of radius $\sqrt{\lambda}$. We will present a sketch of the proof of the lemma.
SHIMON BROOKS, Stony Brook University
Spectral Multiplicities and Arithmetic QUE [PDF]
We present joint work with E. Lindenstrauss, showing that joint eigenfunctions of the Laplacian and one Hecke operator on congruence surfaces become equidistributed as the Laplace eigenvalue grows to infinity. We will then discuss what can be inferred from this---and conjectured---about the relationship between equidistribution of Laplace eigenfunctions and degeneracies in the spectrum.
YAIZA CANZANI, McGill University
Scalar curvature of random metrics [PDF]
We study Gauss curvature for random Riemannian metrics on a compact surface, lying in a fixed conformal class; our questions are motivated by comparison geometry. We explain how to estimate the probability that Gauss curvature will change sign after a random conformal perturbation of a metric, and discuss some extremal problems for that probability, and their relation to other extremal problems in spectral geometry.
If time permits, analogous questions will be considered for the scalar curvature in dimension $n > 2$, as well as other related problems (e.g. $Q$-curvature in even dimensions).
This is joint work with D. Jakobson and I. Wigman.
SERGUEI DENISSOV, UW-Madison
Generic behavior of evolution equations: asymptotics and Sobolev norms [PDF]
Consider the transport equation on the unit circle $\mathbb{T}$: \[ u_t = k u_x - q(t,x)u, \quad u(0,x,k)=1. \] An application of the Carleson theorem yields
{\bf Theorem.} {\it If $q$ is real-valued, $q(t,x)\in L^2([0,\infty),\mathbb{T})$, and \[ \int_\mathbb{T} q(t,x)\,dx=0, \] then there is $\nu(x,k)$ such that for a.e. $k$ we have \[ \|u(t,x-kt,k)-\nu(x,k)\|_{H^{1/2}(\mathbb{T})}\to 0, \quad t\to\infty. \]} We obtain analogous results for other evolution equations (e.g., the Schrödinger flow) that do not admit an exact formula for their solutions.
AILANA FRASER, UBC
Geometric bounds on low Steklov eigenvalues [PDF]
I will talk about joint work with R. Schoen on the relationship of the geometry of compact Riemannian manifolds with boundary to the first nonzero eigenvalue of the Dirichlet-to-Neumann map (Steklov eigenvalue). For surfaces with boundary we obtain an upper bound on the first Steklov eigenvalue in terms of the genus and the number of boundary components of the surface. This generalizes a result of Weinstock from 1954 for surfaces homeomorphic to the disk. We attempt to find the best constant in this inequality for annular surfaces. Motivated by the annulus case, we explore a connection between the Dirichlet-to-Neumann map and minimal submanifolds of the ball that are solutions to the free boundary problem. We prove general upper bounds for the first Steklov eigenvalue for conformal metrics on manifolds of any dimension which can be properly conformally immersed into the unit ball in terms of certain conformal volume quantities.
JOEL FRIEDMAN, University of British Columbia
Sheaves in Algebraic Graph Theory and the Hanna Neumann Conjecture [PDF]
We present some aspects of spectral theory related to sheaves on graphs. We explain that a sheaf on a graph can be viewed as giving a "block matrix" as an incidence matrix, i.e., giving an incidence matrix where a vertex or edge may have not just one row or column associated to it. These sheaves therefore have adjacency matrices, Laplacians, and other features found in ordinary spectral graph theory. We will indicate how they are involved in a proof of the Strengthened Hanna Neumann Conjecture; this proof requires a lot of work to set up the foundations of sheaf homology, but is quite simple once the foundations are established.
CAROLYN GORDON, Dartmouth College
Quantum equivalent magnetic fields that are not classically equivalent [PDF]
We construct pairs of topologically distinct Hermitian line bundles over a flat torus for which the associated Laplacians on the line bundles and on all their tensor powers are isospectral. In the context of geometric quantization, we interpret these examples as magnetic fields that are quantum equivalent but not classically equivalent. We also illustrate additional spectral phenomena on line bundles over tori, Riemann surfaces, and other Hermitian locally symmetric spaces. This talk is based on work with various collaborators: Pierre Guerini, Thomas Kappeler, William Kirwin, Dorothee Schueth, and David Webb.
ALEX IOSEVICH, Rochester
On a fractal variant of the regular value theorem [PDF]
We shall discuss a fractal analog of the regular value theorem from differential geometry. Connections with problems in geometric measure theory and the theory of Fourier integral operators will also be addressed.
DMITRY JAKOBSON, McGill
Lower bounds for Resonances [PDF]
This is joint work with Frederic Naud. For infinite area, geometrically finite hyperbolic surfaces we prove new lower bounds on the local density of resonances for points lying in a logarithmic neighborhood of the real axis. These lower bounds involve the dimension of the limit set of the fundamental group of the surface. The first bound is general and shows logarithmic growth of the number of resonances at high energy. The second bound holds if the fundamental group is an infinite index subgroup of certain arithmetic groups. In this case we obtain a polynomial lower bound. As time permits, we shall discuss generalizations to hyperbolic three-manifolds, as well as new results on the existence of infinitely many resonances in an effective strip depending on the Hausdorff dimension of the limit set.
VOJKAN JAKSIC, McGill University
Quantum Chernoff Bound [PDF]
In this talk I will present an elementary proof of the Quantum Chernoff Bound (recently discovered by N. Ozawa) and discuss its implications for quantum hypothesis testing and non-equilibrium quantum statistical mechanics.
EUGENE KRITCHEVSKI, University of Toronto
The scaling limit of the critical one-dimensional random Schrodinger operator [PDF]
We study the one dimensional discrete random Schrodinger operator $$(H_n\psi)_\ell =\psi_{\ell -1}+\psi_{\ell +1}+v_\ell \psi_\ell ,$$ $\psi_0=\psi_{n+1}=0$, in the scaling limit ${\rm Var}(v_\ell )=\sigma^2/n$. We show that, in the bulk of spectrum, the eigenfunctions are delocalized and that there is a very strong repulsion of eigenvalues. The analysis is based on a stochastic differential equation for the evolution of products of transfer matrices. This talk is based on a joint work with Benedek Valko and Balint Virag.
PETER PERRY, University of Kentucky
The Davey-Stewartson Equation Revisited [PDF]
This is joint work with Peter Topalov (Northeastern University). The Davey-Stewartson II (DS II) equation is a completely integrable dispersive equation in two dimensions which describes surface waves on shallow water and is a completely integrable generalization of the one-dimensional cubic nonlinear Schr\"{o}dinger equation. Its solution by inverse scattering was developed by Beals-Coifman, Fokas-Ablowitz, Fokas-Sung, and Sung in the 1980's and 1990's but the rigorous theory does not fully describe the behavior and stability of soliton solutions. In this talk we will review the Beals-Coifman $\overline{\partial}$-method and our recent result on global well-posedness for DS II in the Sobolev space $$H^{1,1}(\mathbb{R}^2) = \{ u \in L^2: \nabla u, |x|u \in L^2 \}$$ by the method of inverse scattering. We will also discuss asymptotics of solutions and stability of solitons.
EMMANUEL SCHENCK, Northwestern
Stabilization for the wave equation without geometric control [PDF]
We consider the damped wave equation $(\Delta_{g}-\partial^2_t - 2a\partial_t)u=0$ on a compact, negatively curved manifold $(M,g)$ of dimension $d>1$ with a damping term $a\in C^\infty(M)$ positive and non-identically zero. In this situation, the energy decays to zero as time goes to infinity: the goal of the stabilization problem is to determine the speed of this decay. Under a hypothesis involving the negativity of a topological pressure, we obtain a spectral gap near the real axis, and an exponential decay of the energy for all initial data sufficiently regular. In particular, this result can hold in cases where the geometric control condition fails.
Toward high-rank Quantum Unique Ergodicity [PDF]
I will discuss results with N. Anantharaman on the equidistribution problem for eigenfunctions on higher rank locally symmetric spaces. The high-eigenvalue limit is a semiclassical limit where the classical dynamical system is a multiparameter flow with many symmetries (arising from number theory), which give a priori restrictions on the possible quantum limits. We show that in some cases the limiting measure cannot be entirely singular with respect to the uniform (Lebesgue) measure, without assuming the eigenfunctions are also eigenfunctions of the Hecke operators (under the stronger hypothesis stronger results are known).
MELISSA TACY, Institute for Advanced Study
Classical flow and semiclassical eigenfunction estimates [PDF]
To understand the concentration phenomena of eigenfunctions, we study the $L^{p}$ norms of eigenfunctions (or approximate eigenfunctions) restricted to hypersurfaces. Of particular interest are possible concentrations for values of $p$ near $p=2$. From our intuitive expectation that, in the high energy limit, quantum mechanics converges to classical mechanics, we expect that properties of the classical flow should be evident in these estimates. In this talk I will introduce the semiclassical framework in which we study approximate eigenfunctions and discuss some results relating classical flow to eigenfunction concentration.
BALINT VIRAG, Toronto
VITALI VOUGALTER, University of Toronto
On the solvability conditions for the diffusion equation with convection terms [PDF]
A linear second order elliptic equation describing heat or mass diffusion and convection in a given velocity field is considered in three dimensions. The corresponding operator L may not satisfy the Fredholm property. In this case, solvability conditions for the equation Lu=f are not known. In this work, we derive solvability conditions in $H^2$ for the non-self-adjoint problem by relating it to a self-adjoint Schrödinger-type operator, for which solvability conditions were obtained in our previous work.
STEVE ZELDITCH, Northwestern
Intertwining classical and quantum mechanics on hyperbolic surfaces [PDF]
The well known Egorov theorem in microlocal analysis says that the conjugate $U(-t) A U(t)$ of a pseudo-differential operator A by the wave group or Schrodinger group $U(t)$ is another pseudo-differential operator whose principal symbol is $\sigma_A \circ g^t$ where $\sigma_A$ is the principal symbol of $A$ and $g^t$ is the geodesic flow. Very rarely does it happen that the conjugation is exact, i.e. without remainder terms, and it depends on how one quantizes symbols to operators. My talk, joint work with Nalini Anantharaman, is about an exact Egorov theorem on hyperbolic surfaces. We construct an explicit intertwining operator $L$ which conjugates the wave group and geodesic flow. Equivalently, $L$ maps Wigner distributions to certain explicit eigendistributions of the geodesic flow, which we call Patterson-Sullivan distributions. | CommonCrawl |
Exploring the causal and effect nature of EQ-5D dimensions: an application of confirmatory tetrad analysis and confirmatory factor analysis
Thor Gamst-Klaussen (ORCID: orcid.org/0000-0002-8497-8502), Claire Gudex & Jan Abel Olsen
The relationship between the various items in an HRQoL instrument is a key aspect of interpreting and understanding preference weights. The aims of this paper were i) to use theoretical models of HRQoL to develop a conceptual framework for causal and effect relationships among the five dimensions of the EQ-5D instrument, and ii) to empirically test this framework.
A conceptual framework depicts the symptom dimensions [Pain/discomfort (PD) and Anxiety/depression (AD)] as causal indicators that drive a change in the effect indicators of activity/participation [Mobility (MO), Self-care (SC) and Usual activities (UA)], where MO has an intermediate position between PD and the other two effect dimensions (SC and UA). Confirmatory tetrad analysis (CTA) and confirmatory factor analysis (CFA) were used to test this framework using EQ-5D-5L data from 7933 respondents in six countries, classified as healthy (n = 1760) or in one of seven disease groups (n = 6173).
CTA revealed the best fit for a model specifying SC and UA as effect indicators and PD, AD and MO as causal indicators. This was supported by CFA, revealing a satisfactory fit to the data: CFI = 0.992, TLI = 0.972, RMSEA = 0.075 (90% CI 0.062–0.088), and SRMR = 0.012.
The EQ-5D appears to include both causal indicators (PD and AD) and effect indicators (SC and UA). Mobility played an intermediate role in our conceptual framework, being a cause of problems with Self-care and Usual activities, but also an effect of Pain/discomfort. However, the empirical analyses of our data suggest that Mobility is mostly a causal indicator.
Health-related quality of life (HRQoL) instruments comprise items that relate to various aspects of health and functioning. Previous research has attempted to classify the items included in these instruments as being causal or effect indicators of HRQoL [1]. Effect indicators (also called reflective indicators) can be seen as manifestations of an underlying construct. Thus, a change in the construct will lead to, or drive, a change in the effect indicators. In contrast, causal indicators (also called formative indicators) drive a change in the construct. There is evidence to suggest that symptoms have a strong causal component that drives a change in other items [2, 3]. The research into the causal nature of various HRQoL items has been limited to disease-specific instruments. No studies have investigated causal relationships in generic preference-based measures of HRQoL, commonly referred to as health state utility (HSU) instruments [4], which have an important role in cost-effectiveness analyses that are increasingly being used to aid policy decisions. Based on theoretical models, and methodological lessons from previous research, this paper seeks to fill a knowledge gap by identifying a causal pattern in the most widely applied HSU instrument, the EQ-5D [5,6,7].
The causal pattern of items in the cancer-specific EORTC QLQ-C30 instrument has been investigated in three studies. Using applied graphical methods and cross-tabulation of response frequencies, Fayers et al. found strong evidence that physiological symptom items (e.g. nausea, memory problems, shortness of breath) were causal, while items such as poor concentration, irritability, and feeling tense were likely to be effect indicators [2]. Boehmer and Luszczynska applied confirmatory factor analysis and found satisfactory fit for a model with both causal indicators (symptoms e.g. fatigue, pain) and effect indicators (e.g. physical, role, cognitive, social, and emotional functioning) [3]. It was noted that physical functioning and pain might be intermediate types of indicators. Using eight EORTC QLQ-C30 items, Bollen et al. provided an example of confirmatory tetrad analysis (CTA) and concluded that symptom items (e.g. shortness of breath, problems sleeping, lack of appetite) should be treated as causal indicators, while global health status and quality of life should be treated as effect indicators [8].
Factor analysis is a common psychometric approach to investigate the relationship between items and unobserved constructs, and it is one technique in structural equation modelling (SEM) used for scale design and validation. However, factor analysis usually depends on a set of homogeneous items and is often not appropriate if both causal and effect items are present [2]. Other SEM techniques, in contrast, incorporate causal paths to model the relationship among different types of items [9, 10]. Confirmatory tetrad analysis may be the best empirical approach for determining if items should be treated as causal or effect indicators [8]. This paper is the first to apply CTA to HSU instruments.
The aims of the current paper were: first, to develop a conceptual framework for causal and effect relationships among the five dimensions of the EQ-5D instrument based on theoretical models of HRQoL, and second, to test this framework using EQ-5D-5L data from six countries (N = 7933). More knowledge on the causal pattern is useful for at least two reasons: i) it provides a better understanding of the relative importance of the five health dimensions as reflected in the preference-based value sets; and ii) it provides insights into the discussion on whether and how the QALY might be extended, e.g. by expanding the descriptive system to include additional symptom items (causal) or functioning items (effect).
A conceptual framework for EQ-5D dimensions
The International Classification of Functioning, Disability and Health (ICF) and the Wilson and Cleary model [11] are two recommended models for conceptualizing the relationships between dimensions in HRQoL instruments. The ICF provides a standard language and framework for describing health and health-related states and comprises two parts, each with two components [12]. Part 1 refers to functioning and disability and consists of (a) body functions and structures, and (b) activities and participation. Part 2 refers to contextual factors incorporating (a) environmental factors, and (b) personal factors. Body functions refer to physiological and psychological functions of body systems (e.g. symptoms such as pain or anxiety), while activity refers to the execution of a task or action (e.g. self-care), and participation refers to involvement in a life situation (e.g. work). The EQ-5D-3L was classified in an ICF framework [13] using linking rules [14]. Its five dimensions were classified into two ICF components, such that pain/discomfort (PD) and anxiety/depression (AD) were linked to the ICF component of body functions, while mobility (MO), self-care (SC), and usual activities (UA) were linked to the ICF component of activity and participation.
The ICF has considerable overlap with the Wilson and Cleary model [15, 16] that depicts dominant causal pathways between five levels of health outcomes: biological and physiological factors, symptoms (corresponding to the ICF component of body functions and defined as the patient's perception of an abnormal physical, emotional or cognitive state), functioning (corresponding to the ICF component of activity and participation), general health perceptions, and overall quality of life. The Wilson and Cleary conceptual model has been empirically validated in populations with different health conditions [17,18,19,20,21,22,23,24].
Based on these models, we propose the following causal pattern among the five EQ-5D dimensions. Firstly, the "symptom" dimensions of pain/discomfort (PD) and anxiety/depression (AD) were assumed to be primarily causal indicators, and the "activity/participation" dimensions of mobility (MO), self-care (SC), and usual activities (UA) to be effect variables, i.e. PD and AD cause changes in the HRQoL construct that are manifested as changes in MO, SC, and UA. Physiological symptoms such as pain and discomfort are clear drivers of activity/participation items and influence walking and self-care [25, 26] and daily activities [27]. Such causal links are likely to be unidirectional, as it is unlikely that a change in mobility or self-care would alter the level of pain experienced. We assume a predominantly causal link between AD and activity/participation (MO, SC and UA), though with AD having less influence on MO (i.e. walking) than on SC and UA, as depressive symptoms explain only a small portion of the variability in mobility scores [28]. Anxiety and depression can cause disability by worsening other symptoms or by leading to limitations in activity, e.g. lack of interest in self-care [29] and activities of daily living [30]. It was noted, however, that emotional well-being may be bidirectional [2, 15], because physical symptoms, impairments, activity limitations, or participation restrictions can cause anxiety and/or depression [29].
Secondly, we assume mobility (MO) to be both cause and effect in nature, e.g. pain/discomfort (PD) can cause limitations in MO, which in turn can cause changes in SC and UA. This places MO in an intermediate position between PD and the other two activity/participation dimensions [3, 31]. Temporal priority has further been indicated by a hierarchical onset of disability among elderly people, where problems with walking preceded problems with self-care (e.g. bathing and dressing) [32].
Thirdly, we consider self-care (SC) and usual activities (UA) as similar dimensions that tap into activities of daily living. However, SC is more specific in that it refers to washing and dressing, while UA has a wider scope and encompasses participation in educational, employment, and social activities. Based on this conceptual framework, a number of testable models were specified (see Figs. 1 and 2) to be explained further below.
Fig. 1 An all-effect indicator model (Model 1) and two multiple indicator multiple cause (MIMIC) models (Model 2 & Model 3). Mobility [MO], self-care [SC], usual activities [UA], pain/discomfort [PD], anxiety/depression [AD]
Fig. 2 Multiple indicator multiple cause (MIMIC) model. Mobility [MO], self-care [SC], usual activities [UA], pain/discomfort [PD], anxiety/depression [AD]
An online survey was administered in 2012 in six countries (Australia, Canada, Germany, Norway, UK, US) by a global panel company [33]. Respondents were initially asked if they had any of seven listed chronic diseases and to rate their overall health on a [0–100] visual analogue scale (VAS), where 0 represented the least desirable health and 100 represented the best possible physical, mental, and social health. Respondents qualified for the "healthy group" if they reported no chronic diseases and a VAS rating of overall health of at least 70. Respondents then completed several HRQoL instruments, including the EQ-5D-5L. Of the 7933 respondents, 6173 reported a chronic disease (arthritis, asthma, cancer, depression, diabetes, hearing loss, heart disease). For further details on respondent recruitment, see Richardson et al., 2012 [33].
Distribution of EQ-5D health states
Spearman's rank correlations were computed across the responses to the 5 EQ-5D dimensions. Frequency distributions of EQ-5D health states were used to examine the pattern of responses across the main distinction between symptoms (causes) vs activity/participation (effects). Two subscales were created with EQ-5D items: a Symptom subscale formed by summing the PD and AD level numbers (each from level 1 to 5), and an Activity/participation subscale formed by summing the MO, SC and UA level numbers. The relationship between the two subscales is illustrated with a graph, and descriptive statistics are provided in the Appendix.
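As a minimal sketch of this subscale construction (assuming item responses coded 1 to 5 in columns named MO, SC, UA, PD and AD; the column names and demo data are illustrative, not the study's dataset):

```python
import pandas as pd

def add_subscales(df: pd.DataFrame) -> pd.DataFrame:
    """Add the Symptom (PD + AD) and Activity/participation (MO + SC + UA)
    subscales described in the text; assumes levels coded 1-5."""
    out = df.copy()
    out["symptom"] = out["PD"] + out["AD"]               # ranges 2-10
    out["activity"] = out["MO"] + out["SC"] + out["UA"]  # ranges 3-15
    return out

# Three hypothetical respondents, the first in full health (11111)
demo = pd.DataFrame({"MO": [1, 1, 2], "SC": [1, 1, 1],
                     "UA": [1, 2, 2], "PD": [1, 2, 3], "AD": [1, 2, 1]})
print(add_subscales(demo)[["symptom", "activity"]])
```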
Two model-testing procedures in SEM were used: confirmatory tetrad analysis (CTA) and confirmatory factor analysis (CFA). While CTA is assumed to be the best empirical approach for determining whether items should be treated as causal or effect indicators [8], agreement between the two approaches would provide more confidence in our conceptual model than either one alone [34, 35]. Both procedures investigate the path directionality between items and an underlying construct, but each has unique features applicable to the current investigation. First, CFA enables testing of the hypothesised intermediate position of mobility between PD and the underlying construct, while CTA allows comparison of models that are not nested in the standard log-likelihood ratio (LR) test, but nested according to the implied vanishing tetrads (explained below).
Confirmatory tetrad analysis
CTA seeks to determine whether items of a latent variable should be treated as causal or effect indicators [34, 36]. While a parameter estimator such as the maximum likelihood (ML) method is usually applied when testing general SEM, the CTA test does not estimate parameters, but only tests model fit using Chi-square (χ2). The CTA test statistic depends on the tetrads produced by a model. Following Bollen and Ting [36], consider a latent variable indicated by four observed items (x1–x4). The effect of the latent variable on the items can be written as Eq. 1:
$$ {x}_i={\lambda}_{i1}{\xi}_1+{\delta}_i $$
where δi is the random measurement error (disturbance) term with E(δi) = 0 for all i, COV(δi, δj) = 0 for i ≠ j, and COV(ξ1, δi) = 0 for all i. The population covariances (σij) of the observed items are given as Eq. 2 below:
$$ {\upsigma}_{\mathrm{ij}}={\lambda}_{i1}{\lambda}_{j1}\phi $$
where σij is the population covariance of items i and j, and ϕ is the variance of ξ1.
A tetrad is 'the difference between the product of a pair of covariances and the product of another pair among four random variables' (Bollen & Ting, 2000, p.5) [34]. Thus, the four observed items produce six covariances, which can be arranged into three tetrads using Kelley's notation [37], i.e.
$$ {\displaystyle \begin{array}{c}{\uptau}_{1234}={\upsigma}_{12}{\upsigma}_{34}-{\upsigma}_{13}{\upsigma}_{24}\\ {}{\uptau}_{1342}={\upsigma}_{13}{\upsigma}_{42}-{\upsigma}_{14}{\upsigma}_{32}\\ {}{\uptau}_{1423}={\upsigma}_{14}{\upsigma}_{23}-{\upsigma}_{12}{\upsigma}_{43}\end{array}} $$
where τijkl is the population tetrad that refers to σijσkl – σikσjl. If the tetrad equals zero, that is τijkl = 0, it is referred to as a vanishing tetrad. Hence, if the four observed items were effect indicators, the model would imply three vanishing tetrads (i.e. all tetrads in Eq. 3 should equal 0). Furthermore, the vanishing tetrads implied by a model include redundant vanishing tetrads (i.e. any two of the vanishing tetrads in Eq. 3 would imply the third) [34]. Therefore, only two vanishing tetrads are non-redundant. Redundant vanishing tetrads should be excluded from the test. This exclusion makes the covariance matrix of the tetrads, which is part of the test statistic, non-singular, and hence its inverse will exist. For a theoretical background on the tetrad, see [36].
Regardless of the number of observed items, only four variables are considered at a time, and this process is repeated for all combinations of the observed items. For every foursome of items, there are three possible vanishing tetrads. Considering an all-effect model with five observed variables (e.g. one item for each of the 5 EQ-5D dimensions), there will be five different combinations of four items, and each set will have three tetrads. Thus, the model would imply 15 vanishing tetrads. We could then test the hypothesis that H0: τ = 0 and H1: τ ≠ 0 based on sample data. If the vanishing tetrads implied by the model do vanish, it would produce a good fit of the model (a non-significant χ2 test), which would not reject the null hypothesis. If the test were highly significant, it would favour a causal indicator structure. However, if the χ2 test was 0 with 0 degrees of freedom, it would indicate an all-causal indicator model (as there are no model-implied non-redundant vanishing tetrads with this structure) [8].
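To make these combinatorics concrete, the following Python sketch computes the three tetrads of Eq. 3 for every foursome of five items, confirming the 5 × 3 = 15 tetrads implied by an all-effect model. The covariance matrix here comes from random placeholder data, not from the study, and the function name is ours.

```python
import numpy as np
from itertools import combinations

def tetrads(S, four):
    """Return the three tetrads of one foursome (Kelley's notation, Eq. 3):
    tau_ijkl = s_ij*s_kl - s_ik*s_jl, for the orderings 1234, 1342, 1423."""
    i, j, k, l = four
    return (S[i, j] * S[k, l] - S[i, k] * S[j, l],
            S[i, k] * S[l, j] - S[i, l] * S[k, j],
            S[i, l] * S[j, k] - S[i, j] * S[l, k])

# Placeholder data standing in for the n x 5 matrix of EQ-5D item responses
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(1000, 5)).astype(float)
S = np.cov(X, rowvar=False)

# Five foursomes of five items, three tetrads each: 15 in total
all_tetrads = [t for four in combinations(range(5), 4)
               for t in tetrads(S, four)]
print(len(all_tetrads))  # 15; an all-effect model implies all of them vanish
```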
SEM models are traditionally referred to as nested when we constrain or free a set of parameters and conduct the LR test to statistically compare models. However, some models that are not nested in parameters can be nested in terms of vanishing tetrads. That is, models are nested 'if the model-implied non-redundant vanishing tetrads from one model are contained within the set of implied non-redundant vanishing tetrads from the other model' ([8], p.1532). When models are compared (i.e. nested), a χ2-difference test is formed, and a highly significant p-value would provide support for the model with fewest implied vanishing tetrads.
Three alternative models were developed for the CTA of EQ-5D dimensions (Fig. 1). Model 1 tested for any causal pattern, where all 5 EQ-5D items were treated as effect indicators, indicated by the arrows pointing away from the HRQoL construct. Models 2 and 3 are multiple cause multiple indicator (MIMIC) models: Model 2 tested whether symptom items (PD and AD) should be treated as causal indicators (indicated by the arrows pointing from the items to the HRQoL construct) and activity/participation items (MO, SC and UA) as effect indicators. Model 3 treated symptom items (PD and AD) and mobility (MO) as causal indicators, and SC and UA as effect indicators. A bootstrap tetrad test was used to minimize the problem of non-normality [38].
As explained above, an all-effect indicator model with the 5 EQ-5D items (Model 1) would imply 15 vanishing tetrads. However, a model specifying only the three activity/participation items as effect indicators (Model 2) would imply only nine vanishing tetrads (as a subset of the 15 vanishing tetrads). As illustrated in Bollen and Ting [34], this model implies nine tetrads as we always consider four items at a time, and any foursome of the items in Model 2 with 3 effect indicators would imply either three or one vanishing tetrads. Removing one causal indicator thus always leaves three items specified as effect indicators, whereas removing one effect indicator would always leave two items specified as effect indicators. A foursome that includes three or four effect indicators implies three vanishing tetrads (i.e. they are tetrad equivalent, which means they cannot be distinguished in terms of vanishing tetrads), while a foursome with two effect indicators implies only one vanishing tetrad. Considering Model 2 with three effect indicators and two causal indicators, the five subsets of four items would produce nine model-implied vanishing tetrads. That is, removing a causal indicator would imply three vanishing tetrads each (3 + 3). Removing an effect indicator would imply one vanishing tetrad each (1 + 1 + 1).
Following a similar procedure, Model 3 implies three vanishing tetrads. Note that a model with only one effect indicator has zero vanishing tetrads [34]. Both Model 2 and Model 3 could be compared with the all-effect indicator model in a nested CTA using a χ2-difference test. If this test is highly significant, the model with the fewest vanishing tetrads would be favoured. In this scenario, the test is against the appropriateness of the additional vanishing tetrads implied by the all-effect indicator model. Note that models that are not nested in the standard LR test can be nested in CTA. For instance, Model 3 in CTA has fewer vanishing tetrads than Model 2 and is therefore nested in Model 2. CTA is estimated using the user-written Stata command "tetrad" [39].
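The counting rule described above (a foursome with three or four effect indicators implies three vanishing tetrads, exactly two implies one, fewer implies none) can be checked with a few lines of Python. This is a sketch of the combinatorics only, not of the CTA test itself, and the function name is ours.

```python
from itertools import combinations

ITEMS = ["MO", "SC", "UA", "PD", "AD"]

def implied_vanishing_tetrads(effect_items):
    """Count model-implied non-redundant vanishing tetrads using the
    counting rule in the text: >=3 effect indicators in a foursome -> 3,
    exactly 2 -> 1, otherwise 0."""
    total = 0
    for four in combinations(ITEMS, 4):
        n_eff = sum(item in effect_items for item in four)
        total += 3 if n_eff >= 3 else (1 if n_eff == 2 else 0)
    return total

print(implied_vanishing_tetrads(ITEMS))               # Model 1 -> 15
print(implied_vanishing_tetrads(["MO", "SC", "UA"]))  # Model 2 -> 9
print(implied_vanishing_tetrads(["SC", "UA"]))        # Model 3 -> 3
```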
Confirmatory factor analysis
The models in Fig. 1 can be tested using CFA. Furthermore, a MIMIC model illustrated in Fig. 2 specified the hypothesized relationships among EQ-5D dimensions where MO has an intermediate position. (Due to the uncertain nature of AD, and to investigate reversed causality, alternative models were also specified; these are not illustrated.)
Maximum likelihood (ML) estimation is considered robust when using non-continuous data [40,41,42] or data that violate multivariate normality assumptions [43,44,45]. However, since ML can be affected by deviation from normality [46], bootstrap standard errors (with 1000 bootstrap draws) were used [47]. Model fit to data was examined using fit indices, i.e. the comparative fit index (CFI), the Tucker-Lewis index (TLI), root-mean square error of approximation (RMSEA), standardized root-mean square residual (SRMR), Akaike information criterion (AIC) and sample-size adjusted Bayesian information criterion (SABIC). CFI and TLI values greater than 0.95, and SRMR less than 0.08 represent a well-fitting model [48]. While RMSEA less than 0.05 is considered to reflect a good fit [49], values as high as 0.08 reflect adequate fit [50]. AIC and SABIC are only meaningful when different models are compared, and models with the lowest values are those with the best fit.
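For readers unfamiliar with these indices, the sketch below implements commonly used textbook formulas for RMSEA, CFI and TLI from model and baseline chi-square values; the input numbers are illustrative, not the study's estimates, and the function names are ours.

```python
import math

def rmsea(chi2, df, n):
    # Root-mean square error of approximation (population-error estimate)
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    # Comparative fit index: model non-centrality relative to the baseline
    d_m, d_b = max(chi2_m - df_m, 0.0), max(chi2_b - df_b, 0.0)
    return 1.0 - d_m / max(d_b, d_m, 1e-12)

def tli(chi2_m, df_m, chi2_b, df_b):
    # Tucker-Lewis index (non-normed fit index)
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)

# Illustrative values only (not the study's estimates):
print(round(rmsea(180.0, 4, 7933), 3))       # ~0.074, "adequate" by the 0.08 rule
print(round(cfi(180.0, 4, 21000.0, 10), 3))  # ~0.992
print(round(tli(180.0, 4, 21000.0, 10), 3))  # ~0.979
```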
Statistical analyses were performed in Stata version 14.0 (StataCorp LP), except the path analyses which were performed with Mplus version 6.11.
Respondent characteristics on age, sex, education, and disease groups are provided in Tables 4 and 5 in Appendix. The healthy respondents and those reporting chronic disease were similar on gender and education, but those with chronic disease were older, as could be expected. As shown in Table 1, the highest Spearman's rank correlation was between MO and UA (0.73), while the lowest was between AD and MO (0.26), indicating support for our conceptual model. The correlation between PD and SC was lower than that between PD and MO or UA.
Table 1 Spearman's rank correlations between the EQ-5D dimensions (N = 7933)
Table 2 shows the frequency distribution of EQ-5D-5L health states in terms of decrements in symptom items or activities/participation items. Excluding those who reported full health (health state 11111), the most prevalent combinations were three health states that only had slight decrements in PD and/or AD, i.e. 11121 (slight pain/discomfort), 11122 (slight pain/discomfort and slight anxiety/depression), 11112 (slight anxiety/depression). These three accounted for more than one-third (34.9%) of all reported non-perfect health states. When all health states with decrements in symptoms without any decrements in activity/participation (i.e. MO + SC + UA = 3, PD + AD > 2) were included, 47% (3031 respondents) of the sample was covered. In contrast, only 1.5% (94) of all respondents reported decrements in activity/participation without any decrements in symptoms (i.e. MO + SC + UA > 3, PD + AD = 2), suggesting that symptoms precede problems with activity/participation. Figure 3 shows the relationship between increases in the summary score of the symptom items (from 2 to 10 on the horizontal axis) and the corresponding summary score of the activity/participation items (from 3 to 15 on the vertical axis). The corresponding data are shown in Table 6 in Appendix. The results indicate that increasing pain/discomfort and anxiety/depression is associated with increasing problems with mobility, self-care and usual activities, but the problems on these activity/participation items appear to lag behind the symptoms. This supports the suggestion from Table 2 that symptoms precede problems with activity/participation.
Table 2 Distribution of EQ-5D-5L health states showing frequency of symptoms (pain/discomfort and anxiety/depression) vs activity/participation (mobility, self-care, and usual activities)
Fig. 3 Mean summary score of effect items (MO + SC + UA) vs summary score of symptoms (PD + AD). Mobility [MO], self-care [SC], usual activities [UA], pain/discomfort [PD], anxiety/depression [AD]
The results of the CTA for Model 1 (χ2 = 1500.00, df = 15), Model 2 (χ2 = 893.79, df = 9) and Model 3 (χ2 = 105.84, df = 3) revealed highly significant χ2 estimates (P < 0.0001). Model 3 clearly produced the lowest χ2 estimates, suggesting it to be the best model. Although the significant χ2 estimate indicates poor fit to the data, it is usual that χ2 estimates are significant in large samples [51]. A nested CTA test that compared Model 2 and Model 3 revealed a highly significant χ2-difference (χ2 diff = 787.62, df = 6, p < 0.0001), indicating that the model with the fewest vanishing tetrads (Model 3) is favoured.
The results of the CFA are presented in Table 3. Model 1 and Model 2 produced poor fit to the data, while Model 3 produced satisfactory model fit based on CFI, TLI, RMSEA, and SRMR. These results are in line with the finding from CTA that Model 3 produced a better fit than the first two models. Model 4 (only tested with CFA) produced a satisfactory fit similar to Model 3. However, the information criteria AIC and SABIC indicate that Model 3 is the preferred one.
Table 3 Confirmatory factor analysis (CFA) estimates (N = 7933)
An alternative model specifying AD as an effect indicator with SC and UA did not produce a good fit, either with CTA (χ2 = 927.93, df = 6, p < 0.0001) or CFA (CFI = 0.965; TLI = 0.922; RMSEA = 0.122; SRMR = 0.026). Further models investigated other specifications of the interrelationships between the three causal indicators (MO, PD and AD) in Model 4, including PD causing AD (or reversed causality), PD causing AD and MO, and PD causing AD and MO including MO as a cause of AD. All these models had a poor fit compared to the chosen model (results not reported here). The main CTA and CFA analyses were performed using the full sample (N = 7933), and removing the 1530 respondents reporting full health (11111) produced similar results.
We developed a conceptual framework for an empirical investigation of the causal and effect nature of EQ-5D dimensions. Based on theoretical models of HRQoL, the dimensions were classified either as symptoms, and thus causal variables (PD and AD), or as activities/participation, and thus effect indicators (MO, SC and UA) [2, 12, 15]. While SC and UA acted as effect indicators, MO, PD and AD appeared to be causal in nature, driving changes in SC and UA. Although MO could play an intermediate role as indicated in Fig. 2, the results suggest that MO is predominantly causal.
There are reasons to believe that the role of AD might vary depending on the severity of anxiety or depression. If moderate or severe (levels 3–5), AD could reflect more of a clinical symptom that may cause dysfunctions (MO, SC, UA) and typically requires treatment. If mild (level 2), it could reflect more subjective well-being, which may vary according to personality traits (e.g. optimist vs pessimist, or level of neuroticism) and thus acts more as an effect variable (in line with the finding that emotional well-being in EORTC was an effect variable) [3]. Further investigation into the various disease groups might have indicated that the causal nature of AD is disease-specific.
Our observation of a causal pattern across EQ-5D dimensions supports the need for preference weighting [2]. The EQ-5D-5L value sets based on population preferences in four western countries (Canada, England, Spain, the Netherlands) [52,53,54,55] reveal striking similarities in the relative importance of the five dimensions. The dimensions that our conceptual model classified as causal indicators (PD and AD) have similar preference weightings, and they are on average 50% stronger than each of the three effect indicators (MO, SC, UA), i.e. the sum of the weights of the two symptom dimensions equals the sum of the three functioning items. The basis for the two causal dimensions being more important to people than the three effect dimensions might be that people find it easier to adapt to functional impairments than to pain/discomfort and anxiety/depression.
The current findings may be useful when exploring additional dimensions that could act as 'bolt-ons' to the five core EQ-5D dimensions. While these five dimensions have proved relevant to patients across the spectrum of diagnoses and to the general population, the EuroQol Group has been experimenting to investigate whether additional dimensions such as vision, tiredness, or sleep could enhance the instrument's performance in some settings [56]. An interesting question is whether an HSU instrument like the EQ-5D should broaden its operationalization of the HRQoL concept in the direction of effect dimensions (e.g. social connections/network or general well-being) or in the direction of causal dimensions (e.g. vision or tiredness). Most quality of life instruments include both causal and effect indicators [57]. Causal indicators are important to measure because they affect HRQoL [2] and are often treated to avoid disruption of HRQoL. This is the rationale behind many healthcare interventions (e.g. treating arthritic pain to enable a person to continue working).
Some limitations should be acknowledged with respect to the data analyses presented here. First, the MIC study is based on respondents who volunteered to participate, which might lead to self-selection bias. Second, it is difficult to claim causality from cross-sectional data. Third, CTA is primarily intended to test for model misspecification, which does not necessarily mean that indicators are causal rather than effect indicators [35]. Future research should ideally apply panel data, which would provide a better illustration of the expected temporal relationship between causal and effect dimensions.
Based on theoretical models of HRQoL, we developed a conceptual framework for causal and effect relationships among the five dimensions of the EQ-5D instrument. Empirical testing on EQ-5D-5L data from a large multinational survey provided supporting evidence that the EQ-5D comprises both causal variables (Mobility, Pain/discomfort, Anxiety/depression) and effect variables (Self-care and Usual activities).
Abbreviations
AD: Anxiety/depression
AIC: Akaike information criterion
CFA: Confirmatory factor analysis
CFI: Comparative fit index
CTA: Confirmatory tetrad analysis
HRQoL: Health-related quality of life
HSU: Health state utility
ICF: International Classification of Functioning, Disability and Health
MIMIC: Multiple cause multiple indicator model
ML: Maximum likelihood
MO: Mobility
PD: Pain/discomfort
RMSEA: Root-mean square error of approximation
SABIC: Sample-size adjusted Bayesian information criterion
SC: Self-care
SEM: Structural equation modelling
SRMR: Standardized root-mean square residual
TLI: Tucker-Lewis index
UA: Usual activities
Costa DS. Reflective, causal, and composite indicators of quality of life: a conceptual or an empirical distinction? Qual Life Res. 2015;24:2057–65.
Fayers PM, Hand DJ, Bjordal K, Groenvold M. Causal indicators in quality of life research. Qual Life Res. 1997;6:393–406.
Boehmer S, Luszczynska A. Two kinds of items in quality of life instruments: 'indicator and causal variables' in the EORTC qlq-c30. Qual Life Res. 2006;15:131–41.
Brazier J, Ratcliffe J, Salamon J, Tsuchiya A. Measuring and valuing health benefits for economic evaluation. 2nd ed. New York: Oxford University Press; 2016.
Wisloff T, Hagen G, Hamidi V, Movik E, Klemp M, Olsen JA. Estimating QALY gains in applied studies: a review of cost-utility analyses published in 2010. Pharmacoeconomics. 2014;32:367–75.
EuroQol. EuroQol - a new facility for the measurement of health-related quality of life. Health Policy. 1990;16:199–208.
Herdman M, Gudex C, Lloyd A, Janssen M, Kind P, Parkin D, Bonsel G, Badia X. Development and preliminary testing of the new five-level version of EQ-5D (EQ-5D-5L). Qual Life Res. 2011;20:1727–36.
Bollen KA, Lennox RD, Dahly DL. Practical application of the vanishing tetrad test for causal indicator measurement models: an example from health-related quality of life. Stat Med. 2009;28:1524–36.
Bollen K, Lennox R. Conventional wisdom on measurement: a structural equation perspective. Psychol Bull. 1991;110:305–14.
Fayers PM, Hand DJ. Causal variables, indicator variables and measurement scales: an example from quality of life. J R Stat Soc A Stat Soc. 2002;165:233–53.
Bakas T, McLennon SM, Carpenter JS, Buelow JM, Otte JL, Hanna KM, Ellett ML, Hadler KA, Welch JL. Systematic review of health-related quality of life models. Health Qual Life Outcomes. 2012;10:134.
WHO. International classification of functioning, disability and health (ICF), vol. 2016. Geneva: World Health Organization; 2001.
Cieza A, Stucki G. Content comparison of health-related quality of life (HRQOL) instruments based on the international classification of functioning, disability and health (ICF). Qual Life Res. 2005;14:1225–37.
Cieza A, Brockow T, Ewert T, Amman E, Kollerits B, Chatterji S, Ustun TB, Stucki G. Linking health-status measurements to the international classification of functioning, disability and health. J Rehabil Med. 2002;34:205–10.
Wilson IB, Cleary PD. Linking clinical variables with health-related quality of life. A conceptual model of patient outcomes. Jama. 1995;273:59–65.
Valderas JM, Alonso J. Patient reported outcome measures: a model-based classification system for research and clinical practice. Qual Life Res. 2008;17:1125–35.
Chrischilles EA, Rubenstein LM, Voelker MD, Wallace RB, Rodnitzky RL. Linking clinical variables to health-related quality of life in Parkinson's disease. Parkinsonism Relat Disord. 2002;8:199–209.
Krethong P, Jirapaet V, Jitpanya C, Sloan R. A causal model of health-related quality of life in Thai patients with heart-failure. J Nurs Scholarsh. 2008;40:254–60.
Lee DTF, Yu DSF, Woo J, Thompson DR. Health-related quality of life in patients with congestive heart failure. Eur J Heart Fail. 2005;7:419–22.
Mayo NE, Scott SC, Bayley M, Cheung A, Garland J, Jutai J, Wood-Dauphinee S. Modeling health-related quality of life in people recovering from stroke. Qual Life Res. 2015;24:41–53.
Penckofer S, Ferrans CE, Fink N, Barrett ML, Holm K. Quality of life in women following coronary artery bypass graft surgery. Nurs Sci Q. 2005;18:176–83.
Wettergren L, Björkholm M, Axdorph U, Langius-Eklöf A. Determinants of health-related quality of life in long-term survivors of Hodgkin's lymphoma. Qual Life Res. 2004;13:1369–79.
Williams KB, Gadbury-Amyot CC, Bray KK, Manne D, Collins P. Oral health-related quality of life: a model for dental hygiene. J Dent Hyg. 1998;72:19–26.
Wilson IB, Cleary PD. Clinical predictors of functioning in persons with acquired immunodeficiency syndrome. Med Care. 1996;34:610–23.
Fearon A, Neeman T, Smith P, Scarvell J, Cook J. Pain, not structural impairments may explain activity limitations in people with gluteal tendinopathy or hip osteoarthritis: a cross sectional study. Gait Posture. 2016;52:237–43.
Pollard B, Johnston M, Dieppe P. Exploring the relationships between international classification of functioning, disability and health (ICF) constructs of impairment, activity limitation and participation restriction in people with osteoarthritis prior to joint replacement. BMC Musculoskelet Disord. 2011;12:97.
Kose G, Hatipoglu S. The effect of low back pain on the daily activities of patients with lumbar disc herniation: a Turkish military hospital experience. J Neurosci Nurs. 2012;44:98–104.
Peel C, Sawyer Baker P, Roth DL, Brown CJ, Brodner EV, Allman RM. Assessing mobility in older adults: the UAB study of aging life-space assessment. Phys Ther. 2005;85:1008–19.
Chao SF. Functional disability and depressive symptoms: longitudinal effects of activity restriction, perceived stress, and social support. Aging Ment Health. 2014;18:767–76.
Parikh RM, Robinson RG, Lipsey JR, Starkstein SE, Fedoroff J, Price TR. The impact of poststroke depression on recovery in activities of daily living over a 2-year follow-up. Arch Neurol. 1990;47:785–9.
Sullivan KJ, Cen SY. Model of disablement and recovery: knowledge translation in rehabilitation research and practice. Phys Ther. 2011;91:1892–904.
Dunlop DD, Hughes SL, Manheim LM. Disability in activities of daily living: patterns of change and a hierarchy of disability. Am J Public Health. 1997;87:378–83.
Richardson J, Khan M, Iezzi A, Maxwell A. Cross-national comparison of twelve quality of life instruments: MIC paper 1: background, questions, instruments, research paper 76. Melbourne, Australia: Monash University; 2012.
Bollen KA, Ting KF. A tetrad test for causal indicators. Psychol Methods. 2000;5:3–22.
Roos JM. The vanishing tetrad test: another test of model misspecification. Meas: Interdisciplinary Res Perspect. 2014;12:109–14.
Bollen KA, Ting KF. Confirmatory tetrad analysis. In: Marsden P, editor. Sociological methodology. Washington, DC: American Sociological Association; 1993. p. 147–75.
Kelley TL. Crossroads in the mind of man. Stanford, CA: Stanford University Press; 1928.
Johnson TR, Bodner TE. A note on the use of bootstrap tetrad tests for covariance structures. Struct Equ Model Multidiscip J. 2007;14:113–24.
Bauldry S, Bollen KA. Tetrad: a set of Stata commands for confirmatory tetrad analysis. Struct Equ Model Multidiscip J. 2016;23:921–30.
Lee S-Y, Poon W-Y, Bentler PM. Structural equation models with continuous and polytomous variables. Psychometrika. 1992;57:89–105.
Lee S-Y, Poon W-Y, Bentler PM. Full maximum likelihood analysis of structural equation models with polytomous variables. Stat Probab Lett. 1990;9:91–7.
Lee S-Y, Shi J-Q. Maximum likelihood estimation of two-level latent variable models with mixed continuous and Polytomous data. Biometrics. 2001;57:787–94.
Muthén B, Kaplan D. A comparison of some methodologies for the factor analysis of non-normal Likert variables. Br J Math Stat Psychol. 1985;38:171–89.
Hu L-t, Bentler PM, Kano Y. Can test statistics in covariance structure analysis be trusted? Psychol Bull. 1992;112:351–62.
Chou C-P, Bentler PM. Estimates and tests in structural equation modeling. In: Structural equation modeling: Concepts, issues, and applications. Thousand Oaks, CA, US: Sage Publications, Inc; 1995. p. 37–55.
Nevitt J, Hancock GR. Performance of bootstrapping approaches to model test statistics and parameter standard error estimation in structural equation modeling. Struct Equ Model Multidiscip J. 2001;8:353–77.
Bollen KA, Stine RA. Bootstrapping goodness-of-fit measures in structural equation models. Sociol Methods Res. 1992;21:205–29.
Hu L-t, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model. 1999;6:1–55.
MacCallum RC, Browne MW, Sugawara HM. Power analysis and determination of sample size for covariance structure modeling. Psychol Methods. 1996;1:130–49.
Browne MW, Cudeck R. Alternative ways of assessing model fit. Sociol Methods Res. 1992;21:230–58.
Vehkalahti K. Structural equation modeling with Mplus: basic concepts, applications, and programming by Barbara M. Byrne. Int Stat Rev. 2014;82:141–2.
Xie F, Pullenayegum E, Gaebel K, Bansback N, Bryan S, Ohinmaa A, Poissant L, Johnson JA. A time trade-off-derived value set of the EQ-5D-5L for Canada. Med Care. 2016;54:98–105.
Devlin N, Shah K, Feng Y, Mulhern B, Van Hout B. Valuing health-related quality of life: an EQ-5D-5L value set for England. Health Econ. 2018;27(1):7–22.
Ramos-Goni JM, Pinto-Prades JL, Oppe M, Cabases JM, Serrano-Aguilar P, Rivero-Arias O. Valuation and modeling of EQ-5D-5L health states using a hybrid approach. Med Care. 2017;55:51–8.
Versteegh MM, Vermeulen KM, Evers SMAA, de Wit GA, Prenger R, Stolk EA. Dutch tariff for the five-level version of EQ-5D. Value Health. 2016;19:343–52.
Devlin NJ, Brooks R. EQ-5D and the EuroQol group: past, present and future. Appl Health Econ Health Policy. 2017;15:127–37.
Fayers PM, Machin D. Quality of Life : The Assessment, Analysis and Reporting of Patient-reported Outcomes. Hoboken: Hoboken: Wiley; 2015.
The Research Council of Norway, grant number 221452, funded the preparation of this manuscript. The Australian National Health and Medical Research Council, grant number 1006334, funded data collection in five countries (Australia, Canada, Germany, UK, and the US), while the University of Tromsø funded the Norwegian part. No parties involved in this study have any commercial interest.
Department of Community Medicine, UIT the Arctic University of Norway, 9016, Tromsø, Norway
Thor Gamst-Klaussen & Jan Abel Olsen
Department of Clinical Research, University of Southern Denmark, Odense, Denmark
Claire Gudex
Division of Health Services, Norwegian Institute of Public Health, 0213, Oslo, Norway
Jan Abel Olsen
Thor Gamst-Klaussen
TGK analyzed and interpreted the data. CG and JAO were major contributors in writing the manuscript. All authors read and approved the final manuscript.
Correspondence to Thor Gamst-Klaussen.
Data for this study were obtained from the multi-instrument comparison study which was approved by the Monash University Human Research Ethics Committee (Project numbers: CF11/1758–2,011,000,974 and CF11/3192–2,011,001,748).
CG is a member of the EuroQol Group. The other authors declare that they have no competing interests.
Table 4 Respondent characteristics
Table 5 Respondents by country and disease group
Table 6 Mean and standard deviation (SD) of Activity/participation scale for each value on symptom scale
Gamst-Klaussen, T., Gudex, C. & Olsen, J.A. Exploring the causal and effect nature of EQ-5D dimensions: an application of confirmatory tetrad analysis and confirmatory factor analysis. Health Qual Life Outcomes 16, 153 (2018). https://doi.org/10.1186/s12955-018-0975-y
EQ-5D-5L
Preference weights
Causal indicators
Effect indicators | CommonCrawl |
Were recently reported MHz events planet mass primordial black hole mergers?
Guillem Domènech
The European Physical Journal C, volume 81, Article number: 1042 (2021)
A bulk acoustic wave cavity as a high frequency gravitational wave antenna has recently detected two rare events at 5.5MHz. Assuming that the detected events are due to gravitational waves, their characteristic strain amplitude lies at about \(h_c\approx 2.5 \times 10^{-16}\). While a cosmological signal is out of the picture due to the large energy carried by the high frequency waves, the signal could be due to the merging of two planet mass primordial black holes (\(\approx 4\times 10^{-4} M_\odot \)) inside the Oort cloud at roughly 0.025 pc (5300 AU) away. In this short note, we show that the probability of one such event to occur within this volume per year is around \(1:10^{24}\), if such Saturn-like mass primordial black holes are \(1\%\) of the dark matter. Thus, the detected signal is very unlikely to be due to the merger of planet mass primordial black holes. Nevertheless, the stochastic background from Saturn-mass primordial black hole binaries might be seen by next generation gravitational wave detectors, such as DECIGO and BBO.
Gravitational waves offer new means to probe the unexplored universe and may lead to new discoveries in cosmology and astrophysics. Gravitational wave interferometers present the opportunity to test compact objects, such as black holes, with masses ranging from tens of solar masses at frequencies 10–\(1000\,\mathrm{Hz}\), the LIGO range, to millions of solar masses at frequencies \(10^{-4}{-}10^{-2}\,\mathrm{Hz}\), the LISA range. Pulsar timing arrays may probe supermassive black holes with hundreds of millions of solar masses at frequencies \(10^{-9}{-}10^{-7}\,\mathrm{Hz}\). These are also very interesting windows for cosmology, as they give access to extraordinary physics in the early universe when the temperature was around \(0.1{-}10^{11}\,\mathrm{GeV}\). Such powerful events could be first order phase transitions, cosmic strings, etcetera (see [1] for a review). They also provide the means to test the abundance of primordial black holes [2,3,4,5,6,7,8,9,10,11,12,13,14], that is, black holes that formed in the early universe, with the gravitational waves due to the mergers of binaries and the secondary gravitational waves produced around the time of primordial black hole formation. See [11] for a review on the former and [14, 15] for reviews on the latter.
In principle, cosmology as well as exotic astrophysical objects, such as boson stars, might also produce gravitational waves at much higher frequencies, e.g. from MHz to GHz. Interestingly, as no known astrophysical source emits such high frequency waves, this is a unique window to test early universe physics [16]. For example, high frequency cosmological gravitational waves could have been produced during preheating and phase transitions [17,18,19]. Resonant mass detectors are a promising way to detect such high frequency gravitational waves [16, 20]. These high frequency waves can excite the vibrational eigenmodes of, e.g., a spherical mass, which are then converted and amplified into electric signals. However, the main obstacle is that, due to the large energy carried by such high frequency waves, the characteristic strain of a cosmological stochastic gravitational wave background must be minuscule. Nevertheless, we might hope to detect nearby astrophysical sources.
Recently, there has been a detection of two rare events at around \(5\,\mathrm{MHz}\) from a bulk acoustic wave antenna [23], which might be due to gravitational waves, though further confirmation is needed. Note that the MHz interferometer at Fermi Lab, called Holometer [24], did not detect any signal in the range of 1–13 MHz in their first run. However, it was not sensitive to fast transient signals [23]. In any case, assuming that the events of [23] are due to gravitational waves, the corresponding characteristic strain is around \(h_c\sim 2.5\times 10^{-16}\). As already noted in [23] the signal could be due to the merger of small compact objects such as primordial black holes with masses of about \(10^{-4}M_\odot \), where \(M_\odot \approx 2\times 10^{33}\,\mathrm{g}\) is a solar mass, and at a distance of roughly \(0.01\,\mathrm{pc}\). In this note, we investigate this claim in more detail to show that the probability of detecting the gravitational waves from the merger of a primordial black hole binary with such small masses is extremely small. Nevertheless, we may hope to detect the stochastic signal due to the merger of sub-solar and planet-mass primordial black hole binaries in the far future [12, 25, 26]. Throughout the paper we work in units where \(c=\hbar =1\). Also, whenever needed we use the cosmological parameters provided by the Planck collaboration [22].
Gravitational waves from primordial black hole mergers
Binaries emit gravitational waves as they inspiral with a final burst when the two black holes merge [21]. The largest gravitational wave amplitude is generated in the latest stages of the merger at the so-called chirp. We can estimate the chirp frequency by using the frequency associated with the radius of the Innermost Stable Circular Orbit (ISCO). In this way we have that [21]
$$\begin{aligned} f_{\mathrm{GW,max}}&\approx 2f_{\mathrm{ISCO}}\nonumber \\&\approx 4.4\,\mathrm{MHz}\left( \frac{10^{-3}M_\odot }{m_1+m_2}\right) \,, \end{aligned}$$
where \(m_1\) and \(m_2\) are the individual masses of the black holes. From now on, and unless otherwise stated, we assume for simplicity that the primordial black hole mass spectrum is monochromatic and so \(m_1=m_2=M_{\mathrm{PBH}}\), where PBH stands for primordial black hole. The conclusion does not change significantly if \(m_1\) and \(m_2\) differ slightly. From (1) we see that in order to explain the detected \(5.5\,\mathrm{MHz}\) events we need \(M_{\mathrm{PBH}}\sim 4\times 10^{-4} M_\odot \). These primordial black holes have a Schwarzschild radius of \(120\,\mathrm{cm}\). So, they are the size of a giant Pilates ball.
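As a quick numerical check of the chirp frequency estimate (a sketch in SI units; the constants are standard values, \(f_{\mathrm{ISCO}}\) is taken as the orbital frequency at the innermost stable circular orbit, and the function name is ours):

```python
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30  # SI units

def f_gw_max(m_total):
    """Peak GW frequency ~ twice the orbital frequency at the ISCO:
    f_GW,max = c^3 / (6^{3/2} * pi * G * (m1 + m2))."""
    return c**3 / (6**1.5 * math.pi * G * m_total)

print(f_gw_max(1e-3 * M_sun) / 1e6)  # ~4.4 MHz, matching the estimate above

# Total mass giving a 5.5 MHz chirp, and the Schwarzschild radius per hole:
m_total = c**3 / (6**1.5 * math.pi * G * 5.5e6)
print(m_total / M_sun)               # ~8e-4 M_sun, i.e. ~4e-4 M_sun each
print(G * m_total / c**2)            # 2G(m_total/2)/c^2 ~ 1.2 m per hole
```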
However, the amplitude of the gravitational waves generated by such very small objects is rather tiny and, as we shall see, they must have merged close to the edge of the solar system, inside the Oort cloud. Since they merged in the local neighbourhood, we shall neglect the cosmological expansion and use the fact that the characteristic strain of an inspiralling binary is given by [21]
$$\begin{aligned} h_c&\approx \frac{4}{r}\left( GM_c\right) ^{5/3}\left( \pi f_{\mathrm{GW}}\right) ^{2/3}\,. \end{aligned}$$
Using the chirp frequency estimate in (1) we find that
$$\begin{aligned} h_{c,\mathrm{max}}&\approx \frac{4}{r}\left( GM_c\right) ^{5/3}\left( \pi f_{\mathrm{GW,max}}\right) ^{2/3}\nonumber \\&\approx 2\times 10^{-17}\left( \frac{M_c}{10^{-3}M_\odot }\right) \left( \frac{r}{\mathrm{pc}}\right) ^{-1}\,. \end{aligned}$$
Thus, in order to match the amplitude of the possibly detected signal, about \(h_{c,\mathrm{max}}\approx 2.5\times 10^{-16}\) [23], the primordial black holes must have merged at \(r\sim 2.6\times 10^{-2}\,\mathrm{pc}\). Note that, taking the peak strain sensitivity of \(5\times 10^{-19}\) at \(5\,\mathrm{MHz}\), the observable volume is around \((0.1\,\mathrm{pc})^3\). Now, to claim that the merger of primordial black holes is a plausible explanation we must compute the merger rate of such binaries.
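A minimal sketch of the distance estimate, using the rounded prefactor of the strain formula above and the equal-mass chirp mass \(M_c=2^{-1/5}M_{\mathrm{PBH}}\) (so only order-of-magnitude agreement with the quoted \(2.6\times 10^{-2}\,\mathrm{pc}\) is expected):

```python
# Order-of-magnitude check of Eq. (3): at what distance does a binary of
# the inferred mass produce the reported strain?
m_pbh = 4e-4                                  # M_sun, from the chirp frequency
m_chirp = 2 ** (-1 / 5) * m_pbh               # equal-mass chirp mass, ~0.87 M
h_target = 2.5e-16                            # reported characteristic strain

# Eq. (3): h_c,max ~ 2e-17 * (M_c / 1e-3 M_sun) * (r / pc)^(-1)
r_pc = 2e-17 * (m_chirp / 1e-3) / h_target
print(f"merger distance r ~ {r_pc:.1e} pc")   # ~3e-2 pc, inside the Oort cloud
```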
Merger rate
To compute the rate at which primordial black holes merge within the observable volume of \((0.1\,\mathrm{pc})^3\), we follow references [2, 9,10,11]. In particular, using the results of [10] we have that the current merger rate is given by
$$\begin{aligned} {{\mathcal {R}}}&\equiv \frac{dN_{\mathrm{merge}}}{dtdV}\approx {1.5\times 10^{-18}}\text {pc}^{-3} \text {yr}^{-1}\nonumber \\&\quad \times \frac{f_{\mathrm{PBH}}^2}{\left( f_{\mathrm{PBH}}^2+\sigma _{\mathrm{eq}}^2\right) ^{21/74}} \left( \frac{M_{\mathrm{PBH}}}{10^{-3}M_\odot }\right) ^{-32/37}\,, \end{aligned}$$
where \(f_{\mathrm{PBH}}\) is the energy density fraction of primordial black holes with respect to dark matter and \(\sigma _{\mathrm{eq}}^2\approx 2.5\times 10^{-5}\) is the variance of density perturbations of the dark matter not in the form of primordial black holes at matter-radiation equality. This estimate assumes that primordial black holes form randomly from the collapse of large primordial fluctuations and are uniformly distributed in the early universe (see [11] for a review). Note that in recent years there have been several improvements of the PBH binary merger rate with respect to Eq. (5), which take into account the torque from all PBHs, later interactions with other PBHs, and the accretion of surrounding matter [27,28,29,30,31]. These effects tend to change the estimate (5) by an \(O(1)\sim O(10)\) factor, generally suppressing the merger rate. However, since we find that the merger rate is too low, by several orders of magnitude, to explain the MHz events, we prefer to use (5) for simplicity. Any additional factors do not change the main result of this work.
From now on we will only be interested in the case where \(f_{\mathrm{PBH}}>\sigma _{\mathrm{eq}}\), so we can safely drop the \(\sigma _{\mathrm{eq}}\) dependence in (5). Assuming that \(f_{\mathrm{PBH}}\sim 0.01\), so that observational constraints (Footnote 3) from microlensing are satisfied [33,34,35,36], we find that \({{\mathcal {R}}}\lesssim 5\times 10^{-24}\,\mathrm{yr}^{-1}\) for \(M_{\mathrm{PBH}}=4\times 10^{-4} M_\odot \) within \((0.1\,\mathrm{pc})^3\). Thus, we see that the probability that such rare events are due to the merger of Saturn-mass primordial black holes is extremely low. Nevertheless, even if we had been very lucky to detect two such mergers, we must also take into account the stochastic background of gravitational waves generated by the large number of mergers scattered across the universe.
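For concreteness, here is a sketch evaluating the merger-rate formula (5) for the quoted parameters; it reproduces the \({\mathcal {R}}\lesssim 5\times 10^{-24}\,\mathrm{yr}^{-1}\) estimate:

```python
# Evaluate the merger-rate formula, Eq. (5), for f_PBH = 0.01 and
# M_PBH = 4e-4 M_sun (a sketch of the estimate in the text).
f_pbh = 0.01
sigma_eq_sq = 2.5e-5
m_pbh = 4e-4                                   # in solar masses

rate_density = (1.5e-18                        # pc^-3 yr^-1 prefactor
                * f_pbh ** 2 / (f_pbh ** 2 + sigma_eq_sq) ** (21 / 74)
                * (m_pbh / 1e-3) ** (-32 / 37))

volume_pc3 = 0.1 ** 3                          # observable volume, (0.1 pc)^3
events_per_year = rate_density * volume_pc3
print(f"R ~ {rate_density:.1e} pc^-3 yr^-1")
print(f"expected events ~ {events_per_year:.0e} per year")   # ~4e-24
```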
Stochastic gravitational wave background
In the scenario under study there would be many primordial black hole binaries merging throughout the universe's history since their formation. The spectral density of the accumulated inspirals and mergers is calculated according to [6, 7, 12, 13, 37]
$$\begin{aligned} \varOmega _{\mathrm{GW}}=\frac{1}{3H_0^2M_{\mathrm{pl}}^2}\int _0^{\frac{f_{\mathrm{cut}}}{f}-1}dz\frac{{{\mathcal {R}}}(z)}{(1+z)H(z)}f\frac{dE_{\mathrm{GW}}}{df_s}\,, \end{aligned}$$
where f is the measured gravitational wave frequency, z is the redshift, H(z) is the Hubble parameter or expansion rate, \(H_0=H(0)\) is the Hubble parameter today, \(M_{\mathrm{pl}}^2=(8\pi G)^{-1}\), \({{\mathcal {R}}}(z)\) is the merger rate of primordial black holes in terms of z, which can be found in [10], and \({dE_{\mathrm{GW}}}/{df_s}\) is the energy emitted per binary per unit frequency, where \(f_s=(1+z)f\) is the frequency at the source. The spectrum is cut off at \(f_{\mathrm{cut}}\), where the binary has completely merged and no further gravitational waves are emitted. Using the formulas in the appendix of [12] we find that
$$\begin{aligned} f_{\mathrm{cut}}\approx 11\, \mathrm{MHz} \left( \frac{M_{\mathrm{PBH}}}{10^{-3}M_\odot }\right) ^{-1}\,. \end{aligned}$$
We also have from [12] that
$$\begin{aligned} \frac{dE_{\mathrm{GW}}}{df_s}&\approx \frac{\pi ^{2/3}}{3G}\left( GM_c\right) ^{5/3}\nonumber \\&\quad \times \left\{ \begin{aligned}&f_s^{-1/3}&f_s<f_1\\&\omega _1 f_s^{2/3}&f_1<f_s<f_2\\&\omega _2 \frac{\sigma ^4f_s^{2}}{\left( \sigma ^2+(f_s-f_2)^2\right) ^2}&f_2<f_s<f_{\mathrm{cut}}\\&0&f_{\mathrm{cut}}<f_s \end{aligned} \right. \,, \end{aligned}$$
where \(\omega _1\) and \(\omega _2\) are found by continuity and [12, 38]
$$\begin{aligned} f_1&\approx 3.6\, \mathrm{MHz} \left( \frac{M_{\mathrm{PBH}}}{10^{-3}M_\odot }\right) ^{-1}\,, \end{aligned}$$
$$\begin{aligned} \sigma&\approx 1.9\, \mathrm{MHz} \left( \frac{M_{\mathrm{PBH}}}{10^{-3}M_\odot }\right) ^{-1}\,. \end{aligned}$$
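A sketch of this piecewise spectrum in code, with \(\omega_1,\omega_2\) fixed by continuity as stated. Note that the value of \(f_2\) is not quoted in this excerpt, so the number below is a placeholder chosen between \(f_1\) and \(f_{\mathrm{cut}}\) purely for illustration:

```python
import numpy as np

# Sketch of the piecewise energy spectrum dE/df_s of Eq. (8). The overall
# units are absorbed into the prefactor; only the shape matters here.
# F2 is NOT quoted in this excerpt: the value below is a placeholder.
G = 1.0
M_C = 1.0
F1, F2, SIGMA, F_CUT = 3.6, 7.9, 1.9, 11.0     # MHz (F2 hypothetical)

W1 = F1 ** (-1.0)                 # continuity at f1: f1^(-1/3) = w1 * f1^(2/3)
W2 = W1 * F2 ** (-4.0 / 3.0)      # continuity at f2 (Lorentzian factor -> 1)

def dE_dfs(fs):
    """Shape of the emitted energy per unit source-frame frequency."""
    pref = np.pi ** (2 / 3) / (3 * G) * (G * M_C) ** (5 / 3)
    if fs < F1:
        return pref * fs ** (-1 / 3)                       # inspiral
    if fs < F2:
        return pref * W1 * fs ** (2 / 3)                   # merger
    if fs < F_CUT:
        lorentz = SIGMA ** 4 / (SIGMA ** 2 + (fs - F2) ** 2) ** 2
        return pref * W2 * lorentz * fs ** 2               # ringdown
    return 0.0                                             # beyond cut-off
```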
We can roughly estimate the magnitude of the stochastic background due to the binaries as follows. We note that most of the energy is carried away by the high frequency waves emitted in the final gravitational wave bursts of the primordial black hole binary mergers. Most of the high frequency gravitational waves must have been emitted in the nearby universe, so that they are not significantly affected by the cosmological expansion. This is confirmed by a numerical calculation [12], where it is found that the peak of the gravitational wave spectrum is close to the binary cut-off frequency, which is around 5 times higher than the frequency corresponding to the ISCO (1). As a rough approximation we can thus evaluate (6) at low redshift and at \(f_s\sim f=\alpha f_{\mathrm{cut}}\) with \(z=\alpha ^{-1}-1<1\), which yields
$$\begin{aligned} \varOmega ^{\mathrm{peak}}_{\mathrm{GW}}\sim \frac{\left( 1-\alpha \right) }{3H_0^2M_{\mathrm{pl}}^2}\frac{{{\mathcal {R}}}}{H_0}\frac{dE_{\mathrm{GW}}}{df}\bigg |_{\alpha f_{\mathrm{cut}}}\,. \end{aligned}$$
The maximum of this function in terms of \(\alpha \) is found at \(\alpha \sim 0.73\), which is close to \(f_2\) but still satisfies \(\alpha f_{\mathrm{cut}}>f_2\). This implies that \(z\sim 0.37\), so our approximation should give a fair estimate. With these values we find that
$$\begin{aligned} \varOmega ^{\mathrm{peak}}_{\mathrm{GW}}\approx 6.1\times 10^{-9}\left( \frac{M_{\mathrm{PBH}}}{10^{-3}M_\odot }\right) ^{5/37}\left( \frac{f_{\mathrm{PBH}}}{0.01}\right) ^{53/37}\,, \end{aligned}$$
which is close enough to the numerical results shown in [12] (Footnote 4). The spectrum then roughly falls off as \(f^{2/3}\) towards lower frequencies \(f<\alpha f_{\mathrm{cut}}\) and is practically zero for \(f>\alpha f_{\mathrm{cut}}\) [12]. Interestingly, while this type of gravitational wave background (13) may be far below future BBN bounds, e.g. from CMB-S4 [39], and below the reach of resonant mass detectors, the low frequency tail of the spectrum (6) for \(M_{\mathrm{PBH}}\sim 10^{-4}-10^{-3}M_\odot \) and \(f_{\mathrm{PBH}}\sim 0.01\) enters the observable range of future gravitational wave interferometers such as DECIGO [40,41,42], LISA [43], BBO [44], the Einstein Telescope [45], AEDGE [46] and Cosmic Explorer [47], as was shown in [12, 48] (Footnote 5). For example, let us extrapolate (14) to a frequency of \(f\sim 0.1\,\mathrm{Hz}\). This gives
$$\begin{aligned}&\varOmega _{\mathrm{GW}}(0.1\,\mathrm{Hz})\sim \varOmega _{\mathrm{GW}}^{\mathrm{peak}}\left( \frac{f}{\alpha f_{\mathrm{cut}}}\right) ^{2/3}\bigg |_{f=0.1\,\mathrm{Hz}}\nonumber \\&\quad \approx 3\times 10^{-14}\left( \frac{M_{\mathrm{PBH}}}{10^{-3}M_\odot }\right) ^{89/111}\left( \frac{f_{\mathrm{PBH}}}{0.01}\right) ^{53/37}\,. \end{aligned}$$
Let us stress that these are order of magnitude estimates and that they differ from the numerical calculation by a factor of O(1). For instance, (14) is found to be roughly a factor of 2 smaller than the results of [12]. The DECIGO and BBO peak sensitivities at \(0.1\,\mathrm{Hz}\) are expected to be around \(\varOmega ^{\mathrm{DECIGO}}_{\mathrm{GW}}\sim 10^{-14}\) and \(\varOmega ^{\mathrm{BBO}}_{\mathrm{GW}}\sim 10^{-15}\). Thus, any MHz gravitational wave signal due to Saturn-mass primordial black holes which make up \(O(1\%)\) of dark matter must have a stochastic background signal in principle detectable by future gravitational wave detectors.
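The extrapolation can be packaged in a few lines (a sketch; the \(6.1\times10^{-9}\) prefactor and \(\alpha\approx0.73\) are the rounded values derived above):

```python
# Extrapolate the peak amplitude, Eqs. (13)-(14), down to f = 0.1 Hz and
# compare with the quoted DECIGO/BBO sensitivities (order of magnitude only).
ALPHA = 0.73

def omega_peak(m_pbh, f_pbh):
    return 6.1e-9 * (m_pbh / 1e-3) ** (5 / 37) * (f_pbh / 0.01) ** (53 / 37)

def omega_at(f_hz, m_pbh, f_pbh):
    f_cut_hz = 11e6 * (m_pbh / 1e-3) ** (-1)          # Eq. (7)
    return omega_peak(m_pbh, f_pbh) * (f_hz / (ALPHA * f_cut_hz)) ** (2 / 3)

ogw = omega_at(0.1, m_pbh=1e-3, f_pbh=0.01)
print(f"Omega_GW(0.1 Hz) ~ {ogw:.0e}")                # ~3e-14
print("DECIGO ~ 1e-14, BBO ~ 1e-15 -> in principle detectable")
```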
It is an exciting time for cosmology and astrophysics as new gravitational wave data become available. Recently, two rare events at frequencies of \(5.5\,\mathrm{MHz}\) were detected by a bulk acoustic detector [23]. If these events are shown to be gravitational waves, they would be an important hint of exotic physics, of either cosmological or astrophysical origin. In this note we have shown that, if this is indeed the case, the \(5.5\,\mathrm{MHz}\) gravitational waves with characteristic strain \(h_c\sim 2.5\times 10^{-16}\) are very unlikely to be due to the merger of Saturn-mass primordial black hole binaries with \(M_{\mathrm{PBH}}\sim 4\times 10^{-4} M_\odot \). The probability of detecting a single such event is less than \(1:10^{24}\). This renders the scenario practically implausible unless these primordial black holes are extremely clustered around us. Even so, microlensing observations would probably rule out this option. Thus, Saturn-mass primordial black holes cannot account for the rare events. Nevertheless, the stochastic background of Saturn-mass primordial black hole binaries might in principle be detectable by DECIGO and BBO if primordial black holes make up \(O(1\%)\) of dark matter.
Note: After submission, another work, by Lasky and Thrane [49], appeared, casting doubt on the detection of the MHz gravitational waves [23]. They argue that a large gravitational wave memory signal should have been seen by LIGO. The non-detection of such memory signals apparently rules out the gravitational wave explanation for the MHz events of [23].
This manuscript has no associated data or the data will not be deposited. [Authors' comment: Data sharing not applicable to this article as no datasets were generated or analysed during the current study.]
Footnote 1: An estimate tells us that [21]
$$\begin{aligned} \varOmega _{\mathrm{GW,0}}h^2&=\frac{4\pi ^2}{3H_{100}^2}f_{\mathrm{GW}}^2h_c^2\\&\approx 1.3\times 10^{-6}\left( \frac{f_{\mathrm{GW}}}{\mathrm{MHz}}\right) ^2\left( \frac{h_c}{10^{-27}}\right) ^2\,. \end{aligned}$$
Compare this value with the current bound from Big Bang Nucleosynthesis (BBN), \(\varOmega _{\mathrm{GW,BBN,0}}h^2<1.8\times 10^{-6}\) [1, 22]. This means that in order to be competitive, resonant mass detectors need a very high sensitivity, reaching \(h_c\lesssim 10^{-27}\).
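A sketch of this estimate in code; the second evaluation, added here for contrast, shows why a cosmological background with the reported strain would wildly violate the BBN bound, which is the reason only nearby astrophysical sources remain viable:

```python
import math

# Footnote 1 check: spectral density implied by a given characteristic
# strain at MHz frequencies, compared with the BBN bound.
H100 = 100 * 1e3 / 3.086e22          # 100 km/s/Mpc in s^-1

def omega_h2(f_hz, h_c):
    return 4 * math.pi ** 2 / (3 * H100 ** 2) * f_hz ** 2 * h_c ** 2

print(f"{omega_h2(1e6, 1e-27):.1e}")   # ~1.3e-6, at the BBN bound 1.8e-6
print(f"{omega_h2(5e6, 2.5e-16):.1e}") # reported strain: enormously above it
```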
Footnote 2: In more detail, the Schwarzschild radius \(r_s\) is given by
$$\begin{aligned} r_s=2GM_{\mathrm{PBH}}\approx 300 \,\mathrm{cm} \left( \frac{M_{\mathrm{PBH}}}{10^{-3}M_\odot }\right) \,. \end{aligned}$$
Footnote 3: Note that OGLE reported the detection of a few microlensing events of Earth-mass objects [32], which could potentially be primordial black holes. These objects are too light to explain the MHz events.
Footnote 4: Note that the magnitude of the gravitational wave peak power (13) is similar to the total power emitted by the inspiral phase today, i.e. \(\rho _{\mathrm{GW}}^{\mathrm{ins}}\sim {{\mathcal {R}}}\Delta E_{\mathrm{max}}/H_0\) where \(\Delta E_{\mathrm{max}}={\pi ^{2/3}}\left( GM_c\right) ^{5/3}f_{\mathrm{GW, max}}^{2/3}/(3G)\) [21].
Footnote 5: For future prospects on the detectability of the stochastic background due to the mergers of solar- and sub-solar-mass PBH binaries, \(M_{\mathrm{PBH}}\sim O(0.1)-O(10)M_\odot \), see [25, 26].
C. Caprini, D.G. Figueroa, Cosmological backgrounds of gravitational waves. Class. Quantum Gravity 35, 163001 (2018). https://doi.org/10.1088/1361-6382/aac608. arXiv:1801.04268
T. Nakamura, M. Sasaki, T. Tanaka, K.S. Thorne, Gravitational waves from coalescing black hole MACHO binaries. Astrophys. J. Lett. 487, L139 (1997). https://doi.org/10.1086/310886. arXiv:astro-ph/9708060
M.Y. Khlopov, Primordial black holes. Res. Astron. Astrophys. 10, 495 (2010). https://doi.org/10.1088/1674-4527/10/6/001. arXiv:0801.0116
R. Saito, J. Yokoyama, Gravitational wave background as a probe of the primordial black hole abundance. Phys. Rev. Lett. 102, 161101 (2009). https://doi.org/10.1103/PhysRevLett.102.161101. arXiv:0812.4339
R. Saito, J. Yokoyama, Gravitational-wave constraints on the abundance of primordial black holes. Prog. Theor. Phys. 123, 867 (2010). https://doi.org/10.1143/PTP.126.351. arXiv:0912.5317
V. Mandic, S. Bird, I. Cholis, Stochastic gravitational-wave background due to primordial binary black hole mergers. Phys. Rev. Lett. 117, 201102 (2016). https://doi.org/10.1103/PhysRevLett.117.201102. arXiv:1608.06699
S. Wang, Y.-F. Wang, Q.-G. Huang, T.G.F. Li, Constraints on the primordial black hole abundance from the first advanced LIGO observation run using the stochastic gravitational-wave background. Phys. Rev. Lett. 120, 191102 (2018). https://doi.org/10.1103/PhysRevLett.120.191102. arXiv:1610.08725
S. Bird, I. Cholis, J.B. Muñoz, Y. Ali-Haïmoud, M. Kamionkowski, E.D. Kovetz et al., Did LIGO detect dark matter? Phys. Rev. Lett. 116, 201301 (2016). https://doi.org/10.1103/PhysRevLett.116.201301. arXiv:1603.00464
M. Sasaki, T. Suyama, T. Tanaka, S. Yokoyama, Primordial black hole scenario for the gravitational-wave event GW150914. Phys. Rev. Lett. 117, 061101 (2016). https://doi.org/10.1103/PhysRevLett.117.061101. arXiv:1603.08338
Y. Ali-Haïmoud, E.D. Kovetz, M. Kamionkowski, Merger rate of primordial black-hole binaries. Phys. Rev. D 96, 123523 (2017). https://doi.org/10.1103/PhysRevD.96.123523. arXiv:1709.06576
M. Sasaki, T. Suyama, T. Tanaka, S. Yokoyama, Primordial black holes—perspectives in gravitational wave astronomy. Class. Quantum Gravity 35, 063001 (2018). https://doi.org/10.1088/1361-6382/aaa7b4. arXiv:1801.05235
S. Wang, T. Terada, K. Kohri, Prospective constraints on the primordial black hole abundance from the stochastic gravitational-wave backgrounds produced by coalescing events and curvature perturbations. Phys. Rev. D 99, 103531 (2019). https://doi.org/10.1103/PhysRevD.99.103531. arXiv:1903.05924
K. Kohri, T. Terada, Solar-mass primordial black holes explain NANOGrav hint of gravitational waves. Phys. Lett. B 813, 136040 (2021). https://doi.org/10.1016/j.physletb.2020.136040. arXiv:2009.11853
C. Yuan, Q.-G. Huang, A topic review on probing primordial black hole dark matter with scalar induced gravitational waves. arXiv:2103.04739
G. Domènech, Scalar induced gravitational waves review. arXiv:2109.01398
N. Aggarwal et al., Challenges and opportunities of gravitational wave searches at MHz to GHz frequencies. arXiv:2011.12414
J. Liu, Z.-K. Guo, R.-G. Cai, G. Shiu, Gravitational waves from oscillons with cuspy potentials. Phys. Rev. Lett. 120, 031301 (2018). https://doi.org/10.1103/PhysRevLett.120.031301. arXiv:1707.09841
J. Liu, Z.-K. Guo, R.-G. Cai, G. Shiu, Gravitational wave production after inflation with cuspy potentials. Phys. Rev. D 99, 103506 (2019). https://doi.org/10.1103/PhysRevD.99.103506. arXiv:1812.09235
R.-G. Cai, Z.-K. Guo, P.-Z. Ding, C.-J. Fu, J. Liu, Dependence of the amplitude of gravitational waves from preheating on the inflationary energy scale. arXiv:2105.00427
M. Goryachev, M.E. Tobar, Gravitational wave detection with high frequency phonon trapping acoustic cavities. Phys. Rev. D 90, 102005 (2014). https://doi.org/10.1103/PhysRevD.90.102005. arXiv:1410.2334
M. Maggiore, Gravitational Waves. Volume 1: Theory and Experiments, Oxford Master Series in Physics (Oxford University Press, Oxford, 2007)
Planck collaboration, Planck 2018 results. VI. Cosmological parameters. Astron. Astrophys. 641, A6 (2020). https://doi.org/10.1051/0004-6361/201833910. arXiv:1807.06209
M. Goryachev, W.M. Campbell, I.S. Heng, S. Galliou, E.N. Ivanov, M.E. Tobar, Rare events detected with a bulk acoustic wave high frequency gravitational wave antenna. Phys. Rev. Lett. 127, 071102 (2021). https://doi.org/10.1103/PhysRevLett.127.071102. arXiv:2102.05859
Holometer collaboration, MHz gravitational wave constraints with Decameter Michelson interferometers. Phys. Rev. D 95, 063002 (2017). https://doi.org/10.1103/PhysRevD.95.063002. arXiv:1611.05560
S. Mukherjee, J. Silk, Can we distinguish astrophysical from primordial black holes via the stochastic gravitational wave background? Mon. Not. R. Astron. Soc. 506, 3977 (2021). https://doi.org/10.1093/mnras/stab1932. arXiv:2105.11139
S. Mukherjee, M.S.P. Meinema, J. Silk, Prospects of discovering sub-solar primordial black holes using the stochastic gravitational wave background from third-generation detectors. arXiv:2107.02181
M. Raidal, C. Spethmann, V. Vaskonen, H. Veermäe, Formation and evolution of primordial black hole binaries in the early universe. JCAP 02, 018 (2019). https://doi.org/10.1088/1475-7516/2019/02/018. arXiv:1812.01930
L. Liu, Z.-K. Guo, R.-G. Cai, Effects of the surrounding primordial black holes on the merger rate of primordial black hole binaries. Phys. Rev. D 99, 063523 (2019). https://doi.org/10.1103/PhysRevD.99.063523. arXiv:1812.05376
L. Liu, Z.-K. Guo, R.-G. Cai, Effects of the merger history on the merger rate density of primordial black hole binaries. Eur. Phys. J. C 79, 717 (2019). https://doi.org/10.1140/epjc/s10052-019-7227-0. arXiv:1901.07672
V. Vaskonen, H. Veermäe, Lower bound on the primordial black hole merger rate. Phys. Rev. D 101, 043015 (2020). https://doi.org/10.1103/PhysRevD.101.043015. arXiv:1908.09752
G. Hütsi, M. Raidal, V. Vaskonen, H. Veermäe, Two populations of LIGO-Virgo black holes. JCAP 03, 068 (2021). https://doi.org/10.1088/1475-7516/2021/03/068. arXiv:2012.02786
P. Mróz, A. Udalski, J. Skowron, R. Poleski, S. Kozłowski, M.K. Szymański et al., No large population of unbound or wide-orbit Jupiter-mass planets. Nature 548, 183 (2017). https://doi.org/10.1038/nature23276. arXiv:1707.07634
H. Niikura, M. Takada, S. Yokoyama, T. Sumi, S. Masaki, Constraints on earth-mass primordial black holes from OGLE 5-year microlensing events. Phys. Rev. D 99, 083503 (2019). https://doi.org/10.1103/PhysRevD.99.083503. arXiv:1901.07120
B. Carr, K. Kohri, Y. Sendouda, J. Yokoyama, Constraints on primordial black holes. arXiv:2002.12778
B. Carr, F. Kuhnel, Primordial black holes as dark matter: recent developments. Ann. Rev. Nucl. Part. Sci. 70, 355 (2020). https://doi.org/10.1146/annurev-nucl-050520-125911. arXiv:2006.02838
A.M. Green, B.J. Kavanagh, Primordial black holes as a dark matter candidate. J. Phys. G 48, 4 (2021). https://doi.org/10.1088/1361-6471/abc534. arXiv:2007.10722
LIGO Scientific, Virgo collaboration, GW150914: implications for the stochastic gravitational wave background from binary black holes. Phys. Rev. Lett. 116, 131102 (2016). https://doi.org/10.1103/PhysRevLett.116.131102. arXiv:1602.03847
P. Ajith et al., Inspiral-merger-ringdown waveforms for black-hole binaries with non-precessing spins. Phys. Rev. Lett. 106, 241101 (2011). https://doi.org/10.1103/PhysRevLett.106.241101. arXiv:0909.2867
CMB-S4 collaboration, CMB-S4 Science Book, First Edition. arXiv:1610.02743
N. Seto, S. Kawamura, T. Nakamura, Possibility of direct measurement of the acceleration of the universe using 0.1-Hz band laser interferometer gravitational wave antenna in space. Phys. Rev. Lett. 87, 221103 (2001). https://doi.org/10.1103/PhysRevLett.87.221103. arXiv:astro-ph/0108011
K. Yagi, N. Seto, Detector configuration of DECIGO/BBO and identification of cosmological neutron-star binaries. Phys. Rev. D 83, 044011 (2011) [Erratum: Phys. Rev. D 95(10), 109901 (2017)]. https://doi.org/10.1103/PhysRevD.95.109901. arXiv:1101.3940
S. Kawamura et al., Current status of space gravitational wave antenna DECIGO and B-DECIGO. arXiv:2006.13545
LISA collaboration, Laser interferometer space antenna. arXiv:1702.00786
C.J. Moore, R.H. Cole, C.P.L. Berry, Gravitational-wave sensitivity curves. Class. Quantum Gravity 32, 015014 (2015). https://doi.org/10.1088/0264-9381/32/1/015014. arXiv:1408.0740
M. Maggiore et al., Science case for the Einstein telescope. JCAP 03, 050 (2020). https://doi.org/10.1088/1475-7516/2020/03/050. arXiv:1912.02622
AEDGE collaboration, AEDGE: atomic experiment for dark matter and gravity exploration in space. EPJ Quantum Technol. 7, 6 (2020). https://doi.org/10.1140/epjqt/s40507-020-0080-0. arXiv:1908.00802
LIGO Scientific collaboration, Exploring the sensitivity of next generation gravitational wave detectors. Class. Quantum Gravity 34, 044001 (2017). https://doi.org/10.1088/1361-6382/aa51f4. arXiv:1607.08697
O. Pujolas, V. Vaskonen, H. Veermäe, Prospects for probing gravitational waves from primordial black hole binaries. arXiv:2107.03379
P.D. Lasky, E. Thrane, Did Goryachev et al. detect megahertz gravitational waves? arXiv:2110.13319
We would like to thank Misao Sasaki for useful discussions and feedback. We would also like to thank Paul Lasky and Eric Thrane for sharing an earlier version of their paper. G.D. as a Fellini fellow was supported by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no 754496.
INFN Sezione di Padova, 35131, Padua, Italy
Guillem Domènech
Correspondence to Guillem Domènech.
Domènech, G. Were recently reported MHz events planet mass primordial black hole mergers?. Eur. Phys. J. C 81, 1042 (2021). https://doi.org/10.1140/epjc/s10052-021-09853-8
Holonomy and Differential Characters
This question is going to be rather vague, but I'm just trying to see if there are obvious connections between these two concepts.
So the holonomy of a vector bundle with Lie group $G$ is $$h(A)=\mathcal{P}\exp\left(\int_\gamma A\right)$$ where $\mathcal{P}$ is the path-ordering symbol and the integral over the connection $A$ is taken over a curve $\gamma$. These elements form the holonomy group, which relates to the curvature of the connection via Ambrose-Singer.
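For a concrete handle on the path-ordered exponential, one can approximate it numerically by an ordered product of small exponentials; here is a minimal sketch in Python for a made-up su(2)-valued pullback $A(\gamma(t))\dot{\gamma}(t)$ (the connection, step count, and sign/ordering conventions are all illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm

# Sketch: approximate the holonomy P exp(∮ A) along a loop by an ordered
# product of small matrix exponentials, for a toy su(2)-valued connection.
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def A_pulled_back(t):
    """A(gamma(t)) . gamma'(t): toy antihermitian 2x2 matrix along the loop."""
    return 1j * (np.cos(t) * sigma[0] + np.sin(2 * t) * sigma[2])

def holonomy(n_steps=2000):
    h = np.eye(2, dtype=complex)
    dt = 2 * np.pi / n_steps
    for k in range(n_steps):
        # path ordering: later points multiply on the left
        h = expm(A_pulled_back(k * dt) * dt) @ h
    return h

print(np.round(holonomy(), 4))   # an element of SU(2) (unitary, det = 1)
```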
A differential character is an element $$h\in \mathrm{Hom}(C_{k-1}(M;\mathbb{Z}),U(1)),\quad h\circ \partial \in\Omega^k(M),$$ meaning that on the boundary of a chain $c\in C_{k}(M;\mathbb{Z})$ it is given by $$h(\partial c)=\exp \left(\int_c \omega(h)\right),$$ where $\omega(h)$ is an element of $\Omega^k(M)$ (called the curvature of $h$). Differential characters form a group $\hat{H}^*(M,\mathbb{Q}/\mathbb{Z})$, which is related to homology groups; they are key objects in topological quantum field theory.
So my question is essentially how these two things are related to each other. For instance, one might think that the differential characters evaluated on points would be equal to the holonomies of a $U(1)$ bundle. Thus, can we think of differential characters as something like "higher-order holonomies"? At least of $U(1)$ bundles?
What if we generalize in the other direction, change the image of the exponentials to be a general Lie group $G$? Would this be a generalization of holonomies to a higher $k$-skeleton?
Does anyone know if what I propose is natural, totally wrong, or very complicated?
differential-geometry characters gauge-theory holonomy
levitopher
A differential character in degree 1 with coefficients in U(1) is precisely (an isomorphism class of) a principal U(1)-bundle with connection. (The group in your description should be R/Z=U(1), not Q/Z.)
With a little bit more effort one can recover the corresponding categories and not just isomorphism classes.
If you evaluate a differential character on a circle C, then you recover the holonomy of the corresponding principal U(1)-bundle around C.
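To make the degree-1 dictionary concrete (a sketch; normalization conventions, e.g. factors of $2\pi i$, vary by author): for a principal U(1)-bundle with connection 1-form $A$, curvature $F=dA$, and a circle $C=\partial S$ bounding a surface $S$, $$\mathrm{hol}(C)=\exp\left(i\oint_C A\right)=\exp\left(i\int_S F\right),\qquad h(C)=\exp\left(2\pi i\int_S \omega\right),\quad \omega=\frac{F}{2\pi}\in\Omega^2(M),$$ so evaluating the differential character on $C$ indeed recovers the holonomy, and the curvature $\omega$ of $h$ has integral periods.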
Similarly, in degree n one recovers U(1)-bundle (n-1)-gerbes with connection. For an arbitrary abelian Lie group A we get A-bundle (n-1)-gerbes with connection.
All of this can be found on nLab, see, for example, http://ncatlab.org/nlab/show/Cheeger-Simons+differential+character.
For a general noncommutative Lie group G the construction can only work in degree 1, and once the definitions are set up properly (holonomy around a circle no longer makes sense as an element of G) one recovers principal G-bundles with connections, see, for example, Theorem 5.4 in http://arxiv.org/abs/0705.0452.
Dmitri Pavlov
Mathematics Geometry & Topology
Fields Institute Communications
Lectures and Surveys on G2-Manifolds and Related Topics
Editors: Karigiannis, Spiro, Leung, Naichung Conan, Lotay, Jason D. (Eds.)
Is among the first new books on G2 manifolds in decades
Features introductory lectures on several aspects of G2 geometry by recognized experts in the field
Serves as an accessible entry point for early career researchers looking to work in the field
Contains survey articles on numerous recent developments in G2 geometry and related topics
This book, one of the first on G2 manifolds in decades, collects introductory lectures and survey articles largely based on talks given at a workshop held at the Fields Institute in August 2017, as part of the major thematic program on geometric analysis. It provides an accessible introduction to various aspects of the geometry of G2 manifolds, including the construction of examples, as well as the intimate relations with calibrated geometry, Yang-Mills gauge theory, and geometric flows. It also features the inclusion of a survey on the new topological and analytic invariants of G2 manifolds that have been recently discovered.
The first half of the book, consisting of several introductory lectures, is aimed at experienced graduate students or early career researchers in geometry and topology who wish to familiarize themselves with this burgeoning field. The second half, consisting of numerous survey articles, is intended to be useful to both beginners and experts in the field.
Introduction to $G_2$ Geometry
Karigiannis, Spiro
Constructions of Compact $G_2$-Holonomy Manifolds
Kovalev, Alexei
Calibrated Submanifolds
Lotay, Jason D.
Calibrated Submanifolds in $G_2$ Geometry
Chan, Ki Fung (et al.)
Geometric Flows of $G_2$ Structures
Distinguishing $G_2$-Manifolds
Crowley, Diarmuid (et al.)
Gravitational Instantons and Degenerations of Ricci-flat Metrics on the K3 Surface
Foscolo, Lorenzo
Frölicher–Nijenhuis Bracket on Manifolds with Special Holonomy
Kawai, Kotaro (et al.)
Distinguished $G_2$-Structures on Solvmanifolds
Lauret, Jorge
On $G_2$-Structures, Special Metrics and Related Flows
Fernández, Marisa (et al.)
Laplacian Flow for Closed $G_2$ Structures
Wei, Yong
Flows of Co-closed $G_2$-Structures
Grigorian, Sergey
$G_2$-Instantons on Noncompact $G_2$-Manifolds: Results and Open Problems
Lotay, Jason D. (et al.)
Current Progress on $G_2$-Instantons over Twisted Connected Sums
Sá Earp, Henrique
Complex and Calibrated Geometry
Moore, Kim
Deformations of Calibrated Submanifolds with Boundary
Spiro Karigiannis
Naichung Conan Leung
Jason D. Lotay
Springer Science+Business Media, LLC, part of Springer Nature
XXII, 382
79 b/w illustrations, 6 illustrations in colour | CommonCrawl |
Why are sine/cosine always used to describe oscillations?
What I am really asking is: are there other functions that, like $\sin()$ and $\cos()$, are bounded from above and below, and periodic?
If there are, why are they never used to describe oscillations in Physics?
Actually I have just thought of a cycloid, which indeed is both bounded and periodic. Any particular reason as to why it doesn't pop up in science as much as sines / cosines?
waves fourier-transform mathematics oscillators
SuperCiocia
Related: physics.stackexchange.com/q/108423 – DumpsterDoofus Apr 18 '14 at 23:42
Basically, the answer is: yes, there are many other periodic functions, and the reason you typically see harmonics (like $\sin,e^{i\omega t}$) used is because in most simple applications of interest which are easily understandable, either the behavior itself is harmonic, or the behavior is most easily understood in terms of harmonics. There is also a bit of a confirmation bias: the systems which are not easily understood in terms of harmonics are often very difficult, and thus less people know about them, and so less textbooks are written about them. – DumpsterDoofus Apr 19 '14 at 0:17
Actually, the simple method is purely calculus: sine (and cosine, which is the same with a lag) is the solution of $\ddot{x}=-x$, which is the canonical, linear equation with an oscillatory solution. This means that this solution will be present in any linear oscillator. Then, the general approach with nonlinear problems is to break them down into linear problems, so you get back to sine. – Joce Apr 23 '14 at 7:47
Part of it is that Newtonian mechanics is described in terms of calculus.
When we consider vibrational motions, we're talking about some particle that tends to not be displaced from some equilibrium position. That is, the force on the particle, at displacement $x$, $F(x)$, is equal to some function of displacement $x$, $g(x)$.
There are two ways calculus gets involved here. Firstly, $F=ma$, and $a$, acceleration, is a "rate of change" and therefore a calculus concept. So we have $ma(x)=g(x)$.
Now, dealing with a general function $g$ is too difficult - we won't get anywhere with it. So how can we proceed in the most general way? One fruitful method is to do a Taylor expansion: $g(x)=g(0)+g'(0) x+\frac{1}{2} g''(0) x^2+\frac{1}{3!} g^{(3)}(0)x^3+\cdots$, where $g^{(n)}(x)$ denotes the $n$th derivative of $g$ at the point $x$.
If we want $x=0$ to be an equilibrium position, we must have $g(0)=0$ - there isn't any force on the particle at equilibrium. If we want it to be a stable equilibrium that will tend to turn back to its original position, we must have $g'(0)<0$. All other derivatives are fair game. Writing $-k=g'(0)$:
$$m a(x)=-k x+\frac{1}{2} g''(0) x^2+\frac{1}{3!} g^{(3)}(0)x^3+\cdots$$ as is so useful in physics, we now suppose that $x$ is small, so that $x^2$ is very small, and $x^3$ is even smaller. That is, we ignore all powers of $x$ greater than one. We wind up with: $$m a(x)=-k x$$ Hooke's law. The solution to this equation is sinusoidal, always. (that is, it can be written in the form $x=a \cos(\omega t-\varphi)$)
So it is inevitable that, with these definitions of "stable equilibrium", the resulting vibrational pattern at small amplitudes will be sinusoidal. Always. That's what makes $\cos$ and $\sin$ special from a physical point of view.
(of course, we've also tacitly assumed that $g$ is a nice function that is nice and smooth and differentiable, but one generally does that when working on Newtonian style problems)
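As a quick numerical illustration of this conclusion (a sketch; the pendulum force $g(x)=-\sin x$ stands in for a generic smooth restoring force with $g(0)=0$, $g'(0)<0$): small-amplitude motion tracks a pure cosine, while large-amplitude motion visibly does not.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Non-quadratic restoring force: a pendulum with m = 1, g(x) = -sin(x).
def pendulum(t, y):
    x, v = y
    return [v, -np.sin(x)]

for x0 in (0.05, 2.5):              # small vs large initial displacement
    sol = solve_ivp(pendulum, (0, 20), [x0, 0.0],
                    dense_output=True, rtol=1e-8, atol=1e-10)
    t = np.linspace(0, 20, 400)
    x = sol.sol(t)[0]
    residual = np.max(np.abs(x - x0 * np.cos(t)))   # compare with a cosine
    print(f"x0 = {x0}: max deviation from x0*cos(t) = {residual:.2e}")
    # ~2e-4 for x0 = 0.05, but O(1) for x0 = 2.5
```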
IMO, of the four answers presented so far, this is the only correct one. – garyp Apr 19 '14 at 2:57
Plus, of course, the fact that any periodic function can be expressed as a decomposition into sine functions (of increasing frequency). It's just, as NeuroFuzzy says, a question of how many terms you want to carry thru the calculation. – Carl Witthoft Apr 19 '14 at 16:35
"So it is inevitable that, with these definitions of 'stable equilibrium', the resulting vibrational pattern at small amplitudes will be sinusoidal. Always." Well, occasionally you get cases where $k=0$ and the first non-trivial term is the $g^{(3)}$ term (not $g^{(2)}$, because that makes your equilibrium unstable again), but these situations are few and far between. – dmckee♦ Apr 19 '14 at 20:18
@dmckee Another situation that occasionally comes up is when the energy function has a minimum, but is not differentiable, say $U(x)=k|x|$, $ma(x)=-k\operatorname{sgn}x$ (like the oscillations due to gravity when you jump in a portal on the ground in the game Portal). – Mario Carneiro Apr 19 '14 at 21:15
It is also common in vectors, Fourier series and are the fundamental aspects of trigonometry. – Renae Lider Apr 20 '14 at 16:02
One of the big reasons not discussed above is Fourier theory -- any function $f(x)$ can be expressed in the form $f(x) = \int dk\, A(k)e^{ikx}$, which basically means that any function can be decomposed into an infinite sum of sines and cosines. Since this is the case and dealing with sine and cosine is mathematically simpler than the general case of periodic functions, why worry about the latter, when you can always reexpress any function as a sum of sines and consines, and a solution in this form is completely isomorphic with the general case, assuming your base equation is linear?
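A minimal sketch of that decomposition in practice: take a periodic signal, keep its dominant Fourier coefficients, and resynthesize it from cosines (the choice of a square wave and of 10 harmonics is an arbitrary illustration):

```python
import numpy as np

N = 1024
t = np.linspace(0, 1, N, endpoint=False)
signal = np.sign(np.sin(2 * np.pi * 3 * t))      # period-1/3 square wave

coeffs = np.fft.rfft(signal) / N
keep = np.argsort(np.abs(coeffs))[-10:]          # 10 largest harmonics
approx = np.zeros(N)
for k in keep:
    approx += 2 * np.abs(coeffs[k]) * np.cos(2 * np.pi * k * t
                                             + np.angle(coeffs[k]))

err = signal - approx
print(f"rms error: {np.sqrt(np.mean(err ** 2)):.3f}")
# small; what remains is Gibbs ringing concentrated at the jumps
```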
Jerry Schirmer
Good point. But the question then becomes: can you decompose any periodic function using something other than sine/cosine functions? – spinkus Apr 20 '14 at 10:59
The point is that for a large class of very, very common functions and relations encountered by simply looking around the real world you don't need to express anything as a sum (much less an infinite sum) of sin/cos, but a single sin/cos is sufficient - while another representation might need an [infinite] sum of those terms instead. – Peteris Apr 20 '14 at 14:42
@SamPinkus: Absolutely. See for example the wavelet transform. – Nate Eldredge Apr 20 '14 at 16:16
@SamPinkus: See the 'generalized Fourier series,' which allows one to compute the Fourier series of a function in terms of any set of functions, providing they satisfy the orthogonality relations. – JamalS Apr 26 '14 at 12:58
Because cycles, oscillations, and things with periodicity are all intimately related to the circle. And $\sin$ and $\cos$ are defined based on the circle.
Travis Bemrose
This is better than dancing hamsters :-) – Peter Mortensen Apr 21 '14 at 15:30
A literal response to your title question would simply be "because in the physical world, oscillations behave in ways consistent with $\sin$ and $\cos$." Of course, one then wonders why these functions are so ubiquitous.
Depending on your level of physics background, you may be familiar with the harmonic oscillator - that is, a system for which there exists a restoring force proportional to displacement. For example, motion of a spring is simple harmonic (since by Hooke's Law, the restoring force is proportional to the amount the spring is stretched) and motion of a pendulum for small angle amplitudes is simple harmonic. As a matter of fact, any object in stable equilibrium will move harmonically for small perturbations.
Quantitatively speaking, we mean to say that for simple harmonic motion, $F = - k x$ for some displacement $x$. Moreover, $F = ma = m \frac{d^2x}{dt^2}$, so combining these two equations, we find that
$$\frac{d^2x}{dt^2} = -\frac{k}{m} x$$
This is a differential equation that must be solved to find $x(t)$. It turns out that the solution to this equation is an expression of the form $A \sin( \omega t - \phi)$ for constants $A$, $\omega$, and $\phi$ - to verify this yourself, plug in a function like $x(t) = 2 \sin\left(\sqrt{\frac{k}{m}}t - \frac{\pi}{2}\right)$.
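If you prefer to check numerically rather than by hand, here is a sketch that substitutes that trial solution into $m\ddot{x}=-kx$ using finite differences (the values of $k$ and $m$ are arbitrary):

```python
import numpy as np

# Verify the claimed solution of m x'' = -k x by direct substitution.
k, m = 2.0, 1.0
w = np.sqrt(k / m)
t = np.linspace(0, 10, 100001)
x = 2 * np.sin(w * t - np.pi / 2)

x_dd = np.gradient(np.gradient(x, t), t)            # numerical x''
residual = np.max(np.abs(m * x_dd + k * x)[2:-2])   # skip edge artifacts
print(f"max |m x'' + k x| ~ {residual:.1e}")        # ~1e-8: it solves the ODE
```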
Since simple harmonic motion is the most common form of oscillation, and simple harmonic motion is described using $\sin$ and $\cos$, most oscillations in physics follow these trigonometric functions.
The cycloid doesn't appear as much as $\sin$ and $\cos$ simply because there's no reason for it to. There aren't many physical phenomena which follow cycloid paths, since the cycloid is such a complex shape compared to the fairly simple $\sin(\theta) = \operatorname{Im}({e^{i \theta}})$ and $\cos(\theta) = \operatorname{Re}({e^{i \theta}})$.
Shivam Sarodia
This question reminds me of some remarks in pages 14-16 of Peter Woit's quantum mechanics and representation theory notes
Basically, imagine you are looking at all periodic functions from the real to the complex numbers. This is equivalent to looking at all functions from the unit circle to the complex numbers. Let's add the property that we want our function $f$ to be such that $f(\theta_1 + \theta_2) = f(\theta_1)f(\theta_2)$ (it's admittedly arbitrary to do this at this point, but at least it's an elegant property : ) ).
Then, it is a fact that our function has to be of the form $\theta \rightarrow e^{ik\theta} = \cos(k\theta) + i\sin(k\theta)$, for an integer number $k$. So this introduces why one would care about trigonometric options in particular and not at others ways of describing oscillations.
If one looks at what is going on in the proof, it basically reduces to the fact that the trigonometric functions have nice properties in terms of differentiation, and relation to each other, e.g. $$\sin(\theta_1 + \theta_2) = \sin(\theta_1)\cos(\theta_2) + \sin(\theta_2)\cos(\theta_1)$$
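A trivial numeric sanity check of this multiplicative property (a sketch; note $k$ must be an integer for $f$ to be well defined on the circle):

```python
import numpy as np

# f(theta) = e^{ik theta} is multiplicative and periodic for integer k.
k = 3
f = lambda theta: np.exp(1j * k * theta)
t1, t2 = 0.7, 2.1
print(np.isclose(f(t1 + t2), f(t1) * f(t2)))   # True: multiplicativity
print(np.isclose(f(2 * np.pi), f(0)))          # True: periodicity needs integer k
```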
Abel Molina
+1 I don't believe your $f(\theta_1+\theta_2)=f(\theta_1) f(\theta_2)$ is too far out of left field: you can think of seeking a flow as a way to make your physics time shift invariant, so this may be a better answer than you seem to give yourself credit for! – WetSavannaAnimal Apr 26 '14 at 9:08
Yes, there are alternatives. But a big part of the reliance on sines and cosines is historical. The analysis of oscillating mechanical systems naturally focused on sines, since that's the way things vibrate. With that framework in place, it turned out that electrical and magnetic systems respond in the same way. Plus (or perhaps you might say, alternatively) circular motion decomposes easily into sines and cosines. And the list goes on.
As for alternatives, well-behaved, bounded, periodic waveforms can be decomposed into sines and cosines via the Fourier Transform, and the FT can be powerfully extended into the Laplace Transform. Cycloids are interesting, but they are badly behaved - they require infinite dy/dx. And that, in turn, makes them ill-suited to apply to problems which DON'T have infinite dy/dx - which is just about everything.
That's not to say there are NO alternatives for some applications - see wavelet theory.
WhatRoughBeast
Another reason is that we perceive time as always advancing, and we see many examples of rotation with consistent revs, so it is natural for us to express oscillations in terms of time to angle plus radius and from that x,y position.
Sine and cosine by definition give us the $x,y$ coordinates on a circle of radius 1 for a given angle, so it is convenient to use them.
iheggie
Simply put, our universe tends to operate in a manner where the presence of something alters the rate of change of some related variable. You could say the universe consistently charges interest on every action: every instantaneous change affects the future of the system. Mathematically, this is the real-world signature of exponential growth, whose canonical example is e^x.
Now, you may wonder why I bring up exponentials in a question about sines and cosines. The reason is that sines and cosines are exponential functions in disguise: they appear when the exponent involves imaginary numbers. Exactly as the rate of growth in classical exponential growth increases with the independent variable, with sines and cosines the current amount of the quantity drives a response that increases it over certain intervals and decreases it over others.
In essence, you can think of it as a special form of exponential growth in which the history beyond the current wavelength does not matter: only the amount of your variable in excess of some whole number of cycles affects the bulk properties.
To picture it, draw a graph with a real and an imaginary axis. Mark a point, then repeatedly multiply it by a real number and then by an imaginary number. You will notice that the real factor scales the point while the imaginary factor rotates it. Carried over to exponentials, this produces growth that 'scales' (the classical e^x we think of) or growth that 'rotates' (sines and cosines).
Skyler
FreeMind
Learning everyday!
Reverse Engineering 389 · 1 gold badge · 6 silver badges · 14 bronze badges
Philosophy 189 · 5 bronze badges
Ask Different 157 · 1 silver badge · 4 bronze badges
11 What does mov qword ptr ds:[rax+18], r8 mean?
5 How to reverse a dll and call its functions?
definite-integrals
sequences-and-series
complex-analysis
linear-algebra
11 Calculating $\int x dx$ using trigonometric functions Jun 4 '14
8 Convergence of $\int_{0}^{+\infty} \frac{x}{1+e^x}\,dx$ Feb 20 '16
6 Evaluate $\int_{0}^{\frac {\pi}{3}}x\log(2\sin\frac {x}{2})\,dx$ Sep 29 '14
5 How to show $\sum_{n=1}^{\infty} \frac{1}{n^{3} \binom{2n}{n}} = 4 \int_{0}^{\frac{1}{2}} \frac{\arcsin^{2}(x)}{x} \ dx$? Sep 28 '14
4 How to plot complex functions on the paper by your hand? Dec 28 '13
4 How to show that $\int_0^{\pi/2}\int_0^{\pi/2}\left(\frac{\sin\phi}{\sin\theta}\right)^{1/2}\,d\theta\,d\phi=\pi$? Nov 28 '16
4 Suppose $A$ is a countable subset of $\mathbb{R}$. Prove that there exists a continuous function $\phi$ from $A$ to $A^c$ Jun 22 '19
3 Approximating definite integral $\int_0^T e^{-t^2} \sin(wt)dt$ Jun 1 '15
3 How to show $\left(1+\frac{1}{3}x+\frac{1}{24}x^2+\frac{1}{300}x^3+…\right)^2 = 1 +\frac{2}{3}x+\frac{7}{36}x^2+\frac{1}{30}x^3+…$? Dec 5 '16
3 Suppose that $f$ is entire and that for each $z$, either $|f(z)| \le 1$ or $|f'(z)| \le 1$. How can I prove that $f$ is a linear polynomial? [duplicate] Apr 9 '19
Three plots about the Cremona groups
Izvestiya: Mathematics, England. 2019. Vol. 83. No. 4. P. 830-859.
V. L. Popov.
The first group of results of this paper concerns the compressibility of finite subgroups of the Cremona groups. The second concerns the embeddability of other groups in the Cremona groups and, conversely, of the Cremona groups in other groups. The third concerns the connectedness of the Cremona groups.
Research target: Mathematics
Priority areas: mathematics
Keywords: Cremona group, algebraic variety, group actions
Rationality and the FML invariant
Popov V. Linear Algebraic Groups and Related Structures. LAGRS. Bielefeld University, 2012. No. 485.
We construct counterexamples to the rationality conjecture regarding the new version of the Makar-Limanov invariant introduced in A. Liendo, Ga-actions of fiber type on affine T-varieties, J. Algebra 324 (2010), 3653–3665.
Added: Jan 9, 2013
Cesàro Convergence of Spherical Averages for Measure-Preserving Actions of Markov Semigroups and Groups
Khristoforov M., Klimenko A. V., Bufetov A. I. International Mathematics Research Notices. 2012. No. 21. P. 4797-4829.
On the Local Structure Theorem and equivariant geometry of cotangent bundles
Zhgoon V. Journal of Lie Theory. 2013. Vol. 23. P. 607-638.
Let $G$ be a connected reductive group acting on an irreducible normal algebraic variety $X$. We give a slightly improved version of Local Structure Theorems obtained by Knop and Timashev, which describe the action of some parabolic subgroup of $G$ on an open subset of $X$. We also extend various results of Vinberg and Timashev on the set of horospheres in $X$. We construct a family of nongeneric horospheres in $X$ and a variety $\mathrm{Hor}$ parameterizing this family, such that there is a rational $G$-equivariant symplectic covering of cotangent vector bundles $T^*\mathrm{Hor} \dashrightarrow T^*X$. As an application we recover the description of the image of the moment map of $T^*X$ obtained by Knop. In our proofs we use only geometric methods which do not involve differential operators.
Added: Feb 6, 2013
A paradigm for codimension one foliations
Dmitry Filimonov, Kleptsyn V., Navas A. et al. In bk.: Advanced Studies in Pure Mathematics. Vol. 72: Geometry, Dynamics, and Foliations 2013: In Honor of Steven Hurder and Takashi Tsuboi on the Occasion of Their 60th Birthdays. Mathematical Society of Japan, 2017. P. 59-69.
We summarize some of the recent works devoted to the study of one-dimensional (pseudo)group actions and codimension one foliations. We state a conjectural alternative for such actions (generalizing results already obtained) and describe the properties in both cases of the alternative. We also discuss generalizations to holomorphic one-dimensional actions. Finally, we state some open questions that seem to be already within reach.
Tori in the Cremona groups
Popov V. Izvestiya. Mathematics. 2013. Vol. 77. No. 4. P. 742-771.
We classify up to conjugacy the subgroups of certain types in the full, affine, and special affine Cremona groups. We prove that the normalizers of these subgroups are algebraic. As an application, we obtain new results in the linearization problem by generalizing Białynicki-Birula's results of 1966–67 to disconnected groups. We prove fusion theorems for n-dimensional tori in the affine and in the special affine Cremona groups of rank n, and introduce and discuss the notions of Jordan decomposition and torsion prime numbers for the Cremona groups.
Sabaean Studies
Korotayev A. V. Moscow: Vostochnaya Literatura, 1997.
Advanced Studies in Pure Mathematics
Edited by: T. Asuke, S. Matsumoto, Y. Mitsumatsu. Vol. 72: Geometry, Dynamics, and Foliations 2013: In Honor of Steven Hurder and Takashi Tsuboi on the Occasion of Their 60th Birthdays. Mathematical Society of Japan, 2017.
Dynamics of Information Systems: Mathematical Foundations
Iss. 20. NY: Springer, 2012.
This proceedings publication is a compilation of selected contributions from the "Third International Conference on the Dynamics of Information Systems" which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own area of study.
Popov V. arxiv.org. math. Cornell University, 2012. No. arXiv:1207.5205v3.
We classify up to conjugacy the subgroups of certain types in the full, in the affine, and in the special affine Cremona groups. We prove that the normalizers of these subgroups are algebraic. As an application, we obtain new results in the Linearization Problem generalizing to disconnected groups Bialynicki-Birula's results of 1966-67. We prove "fusion theorems" for n-dimensional tori in the affine and in the special affine Cremona groups of rank n. In the final section we introduce and discuss the notions of Jordan decomposition and torsion prime numbers for the Cremona groups.
Classification of Algebraic Varieties
Zürich: European Mathematical Society Publishing house, 2010.
Fascinating and surprising developments are taking place in the classification of algebraic varieties. Work of Hacon and McKernan and many others is causing a wave of breakthroughs in the Minimal Model Program: we now know that for a smooth projective variety the canonical ring is finitely generated. These new results and methods are reshaping the field. Inspired by this exciting progress, the editors organized a meeting at Schiermonnikoog and invited leading experts to write papers about the recent developments. The result is the present volume, a lively testimony of the sudden advances that originate from these new ideas. This volume will be of interest to a wide range of pure mathematicians, but will appeal especially to algebraic and analytic geometers.
On families of lagrangian tori on hyperkaehler manifolds
Amerik E., Campana F. algebraic geometry. arxiv.org. Cornell University library, 2013
This is a note on Beauville's problem (solved by Greb, Lehn and Rollenske in the non-algebraic case and by Hwang and Weiss in general) whether a lagrangian torus on an irreducible holomorphic symplectic manifold is a fiber of a lagrangian fibration. We provide a different, very short solution in the non-algebraic case and make some observations suggesting a different approach in the algebraic case.
Added: Apr 9, 2013
Cremona Groups and the Icosahedron
Cheltsov I., Shramov K. CRC Press, 2015.
Cremona Groups and the Icosahedron focuses on the Cremona groups of ranks 2 and 3 and describes the beautiful appearances of the icosahedral group A5 in them. The book surveys known facts about surfaces with an action of A5, explores A5-equivariant geometry of the quintic del Pezzo threefold V5, and gives a proof of its A5-birational rigidity.
The authors explicitly describe many interesting A5-invariant subvarieties of V5, including A5-orbits, low-degree curves, invariant anticanonical K3 surfaces, and a mildly singular surface of general type that is a degree five cover of the diagonal Clebsch cubic surface. They also present two birational selfmaps of V5 that commute with A5-action and use them to determine the whole group of A5-birational automorphisms. As a result of this study, they produce three non-conjugate icosahedral subgroups in the Cremona group of rank 3, one of them arising from the threefold V5.
This book presents up-to-date tools for studying birational geometry of higher-dimensional varieties. In particular, it provides readers with a deep understanding of the biregular and birational geometry of V5.
p-elementary subgroups of the Cremona group of rank 3
Prokhorov Y. In bk.: Classification of Algebraic Varieties. Zürich: European Mathematical Society Publishing house, 2010. P. 327-338.
For the subgroups of the Cremona group $\mathrm{Cr}_3(\mathbb C)$ having the form $(\boldsymbol{\mu}_p)^s$, where $p$ is prime, we obtain an upper bound for $s$. Our bound is sharp if $p\ge 17$.
Bass' triangulability problem
Vladimir L. Popov. In bk.: Advanced Studies in Pure Mathematics. Vol. 75: Algebraic Varieties and Automorphism Groups. Tokyo: The Mathematical Society of Japan, 2017. P. 425-441.
Exploring Bass' Triangulability Problem on unipotent algebraic subgroups of the affine Cremona groups, we prove a triangulability criterion, the existence of nontriangulable connected solvable affine algebraic subgroups of the Cremona groups, and stable triangulability of such subgroups; in particular, in the stable range we answer Bass' Triangulability Problem in the affirmative. To this end we prove a theorem on invariant subfields of 1-extensions. We also obtain a general construction of all rationally triangulable subgroups of the Cremona groups and, as an application, classify rationally triangulable connected one-dimensional unipotent affine algebraic subgroups of the Cremona groups up to conjugacy.
Model for organizing cargo transportation with an initial station of departure and a final station of cargo distribution
Khachatryan N., Akopov A. S. Business Informatics. 2017. No. 1(39). P. 25-35.
A model for organizing cargo transportation between two node stations connected by a railway line containing a certain number of intermediate stations is considered. Cargo moves in one direction. Such a situation may occur, for example, if one of the node stations is located in a region producing raw materials for a manufacturing industry located in the region of the other node station. The organization of freight traffic is performed by means of a number of technologies, which determine the rules for accepting cargo at the initial node station, the rules of interaction between neighboring stations, and the rule for distributing cargo at the final node station. The process of cargo transportation is governed by a prescribed control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow, which leads to the need for a "correct" extension of solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, we were able to construct these quasi-solutions numerically and determine their growth rate. We note that the main technical difficulty was obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) of the solutions, on a number of model parameters characterizing the control rule, the cargo transportation technologies, and the intensity of cargo supply at the node station.
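For readers unfamiliar with the method mentioned, a generic fourth-order Runge–Kutta step looks as follows (a sketch only; the model's nonlocal constraints and the jump conditions defining the quasi-solutions are not reproduced here):

```python
# One classical RK4 step for y' = f(t, y), with y a list of floats.
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
```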
Nullstellensatz over quasi-fields
Trushin D. Russian Mathematical Surveys. 2010. Vol. 65. No. 1. P. 186-187.
Business Climate in Wholesale Trade in Q2 2014 and Expectations for Q3
Lola I. S., Ostapkovich G. V. Sovremennaya Torgovlya (Modern Trade). 2014. No. 10.
Applied Aspects of Statistics and Econometrics: Proceedings of the 8th All-Russian Scientific Conference of Young Scientists, Postgraduate and Undergraduate Students
Iss. 8. MESI, 2011.
Laminations from the Main Cubioid
Timorin V., Blokh A., Oversteegen L. et al. arxiv.org. math. Cornell University, 2013. No. 1305.5788.
According to a recent paper [bopt13], polynomials from the closure $\overline{\mathrm{PHD}}_3$ of the Principal Hyperbolic Domain ${\rm PHD}_3$ of the cubic connectedness locus have a few specific properties. The family $\mathcal{CU}$ of all polynomials with these properties is called the Main Cubioid. In this paper we describe the set $\mathcal{CU}^c$ of laminations which can be associated to polynomials from $\mathcal{CU}$.
Entropy and the Shannon-McMillan-Breiman theorem for beta random matrix ensembles
Bufetov A. I., Mkrtchyan S., Scherbina M. et al. arxiv.org. math. Cornell University, 2013. No. 1301.0342.
Bounded limit cycles of polynomial foliations of ℂP²
Goncharuk N. B., Kudryashov Y. arxiv.org. math. Cornell University, 2015. No. 1504.03313.
In this article we give a new proof that a generic polynomial vector field in ℂ² possesses countably many homologically independent limit cycles. The new proof requires no estimates on integrals, provides a thinner exceptional set for quadratic vector fields, and yields limit cycles that stay in a bounded domain.
The parametrix method for diffusions and Markov chains
Konakov V. D. STI. WP BRP. Publishing house of the Board of Trustees of the Faculty of Mechanics and Mathematics, MSU, 2012. No. 2012.
Is the function field of a reductive Lie algebra purely transcendental over the field of invariants for the adjoint action?
Colliot-Thélène J., Kunyavskiĭ B., Vladimir L. Popov et al. Compositio Mathematica. 2011. Vol. 147. No. 2. P. 428-466.
Let k be a field of characteristic zero, let G be a connected reductive algebraic group over k and let g be its Lie algebra. Let k(G), respectively k(g), be the field of k-rational functions on G, respectively g. The conjugation action of G on itself induces the adjoint action of G on g. We investigate the question whether or not the field extensions k(G)/k(G)^G and k(g)/k(g)^G are purely transcendental. We show that the answer is the same for k(G)/k(G)^G and k(g)/k(g)^G, and reduce the problem to the case where G is simple. For simple groups we show that the answer is positive if G is split of type A_n or C_n, and negative for groups of other types, except possibly G_2. A key ingredient in the proof of the negative result is a recent formula for the unramified Brauer group of a homogeneous space with connected stabilizers. As a byproduct of our investigation we give an affirmative answer to a question of Grothendieck about the existence of a rational section of the categorical quotient morphism for the conjugating action of G on itself.
Absolutely convergent Fourier series. An improvement of the Beurling-Helson theorem
Vladimir Lebedev. arxiv.org. math. Cornell University, 2011. No. 1112.4892v1.
We obtain a partial solution of the problem on the growth of the norms of exponential functions with a continuous phase in the Wiener algebra. The problem was posed by J.-P. Kahane at the International Congress of Mathematicians in Stockholm in 1962. He conjectured that (for a nonlinear phase) one cannot achieve growth slower than the logarithm of the frequency. Though the conjecture has still not been confirmed, the author obtained the first nontrivial results.
Justification of the adiabatic limit for hyperbolic Ginzburg–Landau equations
Palvelev R., Sergeev A. G. Proceedings of the Steklov Institute of Mathematics. 2012. Vol. 277. P. 199-214.
Hypercommutative operad as a homotopy quotient of BV
Khoroshkin A., Markaryan N. S., Shadrin S. arxiv.org. math. Cornell University, 2012. No. 1206.3749.
We give an explicit formula for a quasi-isomorphism between the operads Hycomm (the homology of the moduli space of stable genus 0 curves) and BV/Δ (the homotopy quotient of Batalin-Vilkovisky operad by the BV-operator). In other words we derive an equivalence of Hycomm-algebras and BV-algebras enhanced with a homotopy that trivializes the BV-operator. These formulas are given in terms of the Givental graphs, and are proved in two different ways. One proof uses the Givental group action, and the other proof goes through a chain of explicit formulas on resolutions of Hycomm and BV. The second approach gives, in particular, a homological explanation of the Givental group action on Hycomm-algebras.
Cross-sections, quotients, and representation rings of semisimple algebraic groups
V. L. Popov. Transformation Groups. 2011. Vol. 16. No. 3. P. 827-856.
Let G be a connected semisimple algebraic group over an algebraically closed field k. In 1965 Steinberg proved that if G is simply connected, then in G there exists a closed irreducible cross-section of the set of closures of regular conjugacy classes. We prove that in arbitrary G such a cross-section exists if and only if the universal covering isogeny Ĝ → G is bijective; this answers Grothendieck's question cited in the epigraph. In particular, for char k = 0, the converse to Steinberg's theorem holds. The existence of a cross-section in G implies, at least for char k = 0, that the algebra k[G]^G of class functions on G is generated by rk G elements. We describe, for arbitrary G, a minimal generating set of k[G]^G and that of the representation ring of G, and answer two of Grothendieck's questions on constructing generating sets of k[G]^G. We prove the existence of a rational (i.e., local) section of the quotient morphism for arbitrary G and the existence of a rational cross-section in G (for char k = 0, this has been proved earlier); this answers the other question cited in the epigraph. We also prove that the existence of a rational section is equivalent to the existence of a rational W-equivariant map $T \dashrightarrow G/T$, where T is a maximal torus of G and W the Weyl group.
Mathematical modeling of social processes
Edited by: A. Mikhailov. Issue 14. Moscow: Faculty of Sociology, MSU, 2012. | CommonCrawl |
Cooperative spectrum sensing and data transmission optimization for multichannel cognitive sonar communication network
Xin Liu and Min Jia
The natural acoustic systems used by marine mammals and the artificial sonar systems used by humans coexist in underwater cognitive sonar communication networks (CSCN). They share the spectrum when they operate in the same waters. The CSCN detects the natural acoustic signal by means of cooperative spectrum sensing among sonar nodes. In order to improve the spectrum sensing performance of the CSCN, the joint optimization of cooperative spectrum sensing and data transmission is investigated. We seek to achieve spectrum efficiency maximization (SEM) and energy efficiency maximization (EEM) of the CSCN by jointly optimizing sensing time, subchannel allocation, and transmission power. We formulate a class of optimization problems and obtain the optimal solutions by alternating direction optimization and Dinkelbach's optimization. The simulation results indicate that SEM achieves higher spectrum efficiency while EEM attains higher energy efficiency.
Cognitive radio (CR) can greatly improve spectrum utilization by letting a secondary user (SU) access idle spectrum licensed to the primary user (PU) [1]. However, the SU has to detect the absence of the PU by performing spectrum sensing, in order to guarantee the normal communications of the PU [2]. Energy detection is widely used as an effective spectrum sensing method because it requires no prior information about the detected signal [3]. However, the performance of energy detection degrades if the PU signal is subject to fading or shadowing, which is known as the "hidden terminal problem." Cooperative spectrum sensing has been proposed to cope with this problem by letting multiple SUs detect the PU and exchange their sensing information collaboratively [4].
Recently, underwater communication networks have attracted attention, as they can be used in underwater environment monitoring and submarine resource exploration. An underwater communication network transmits data over the underwater acoustic channel [5]. However, multiple different underwater acoustic systems often operate in the same water area and may interfere with one another. In order to improve spectrum utilization, CR has been introduced into the underwater communication network, yielding what is called the cognitive sonar communication network (CSCN) [6]. The CSCN can use sonar nodes to detect the presence of other underwater acoustic systems in the surrounding water environment and transmit data only after detecting an idle underwater acoustic channel. The underwater acoustic environment in the ocean is very complex, and usually multiple acoustic systems coexist in the same water area, such as artificial echolocation systems, monitoring systems, and the natural acoustic systems of marine mammals [7]. Spectrum sharing between artificial acoustic network systems and natural acoustic systems affects both transmission efficiency and the survival of marine mammals. The CSCN can realize the coexistence of a variety of artificial underwater acoustic systems while protecting marine animals [8]. Thus, in the CSCN, a sonar node and the natural acoustic system can be seen as an SU and a PU, respectively.
Spectrum efficiency is an important index for evaluating the transmission performance of the CSCN. Most previous works focused on optimizing either cooperative sensing or data transmission to maximize the spectrum efficiency of the CSCN. For instance, a listening-before-transmitting spectrum access scheme was proposed that maximizes spectrum efficiency by optimizing the sensing time [9]; both [10] and [11] assumed that each SU had a fixed transmission power, which ignores the potential gain of dynamic resource optimization. The water-filling algorithm can maximize the spectrum efficiency of a multichannel CSCN by optimizing subchannel power [12]. Recently, green communications have attracted the attention of scholars, and energy efficiency has been proposed to measure the effectiveness of energy utilization [13].
In this paper, we investigate the joint parameter optimization of cooperative spectrum sensing and data transmission for multichannel CSCN. The contributions of the paper are listed as follows:
We seek to maximize the spectrum efficiency and the energy efficiency of the CSCN, respectively, while considering both the spectrum sensing performance for detecting the natural acoustic system and the power constraint of each sonar node.
We formulate the cooperative spectrum sensing and data transmission optimization as a class of optimization problems over sensing time, subchannel allocation, and transmission power. In these optimization problems, the natural acoustic system is fully protected, while the transmission performance of the CSCN is improved as much as possible.
A joint optimization algorithm based on alternating direction optimization and Dinkelbach optimization is proposed to obtain the solutions to the optimization problems.
We consider L natural acoustic systems occupying L subchannels, which are seen as PUs, and a CSCN consisting of N sonar nodes, which are seen as SUs. We also suppose that the artificial acoustic system contains an access point that controls the channel usage of the SUs in the local waters and can be seen as a fusion center. In order to avoid causing harmful interference to the PU, each SU has to detect the utilization state of the licensed spectrum and decide whether the PU is active. Hence, periodic cooperative spectrum sensing based on "listening before transmitting" is proposed in this paper, which divides the communication time into frames, each consisting of three time slots: a local sensing slot, a cooperative interaction slot, and a data transmission slot, as shown in Fig. 1. In the local sensing slot, all the SUs sense the activity of the PU in the L subchannels and obtain local sensing information.
Periodic cooperative spectrum sensing model
Then, in the cooperative interaction slot, each SU uses one common channel to report its local sensing information to the fusion center, which makes a final decision on the activity of the PU by soft-decision combination of the local sensing information. To avoid mutual interference, the SUs cannot report over the common channel simultaneously. Hence, in order to save the bandwidth of the common channel, the SUs adopt time division multiple access (TDMA) to report their local sensing information. Finally, in the data transmission slot, the SUs access the licensed spectrum to communicate according to the decision result of the fusion center.
Underwater acoustic channel model
The fusion center maintains a collection of free channels and the corresponding channel gain matrix in the underwater acoustic region by scanning and spectrum sensing. Supposing that \(h_{n,l}\) is the channel gain of SU n using subchannel l, the channel gain matrix \(\mathbf{H}\) can be given by
$$ \mathbf{H}=\left(\begin{array}{cccc} h_{11} & h_{12} & \cdots & h_{1N}\\ h_{21} & h_{22} & \cdots & h_{2N}\\ \vdots & \vdots & \ddots & \vdots\\ h_{L1} & h_{L2} & \cdots & h_{LN} \end{array}\right) $$
Due to reflection from the sea surface, the seabed, and the underwater medium, shallow-sea acoustic channels exhibit complicated multipath propagation, which can be modeled under the generalized uncorrelated scattering assumption used for general wireless channels. When the number of multipath components is relatively large, the underwater acoustic channel obeys the Rayleigh distribution. Thus, the channel gain distribution is given by
$$ f_{h_{n,l}}\left(h_{n,l}\right)=\frac{1}{\bar{h}_{n,l}} \exp\left(-\frac{h_{n,l}}{\bar{h}_{n,l}}\right) $$
where \(\bar{h}_{n,l}\) is the statistical average of \(h_{n,l}\).
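As an aside, the density in (2) is an exponential law with mean \(\bar{h}_{n,l}\), so channel gains consistent with this model can be sampled directly. In the sketch below, the dimensions match the simulation settings given later in the paper, while the mean gain is an arbitrary placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 10, 32              # sonar nodes (SUs) and subchannels, as in the simulations
h_bar = 0.1                # assumed statistical mean of the gains (placeholder)

# Each h_{n,l} follows the density (1/h_bar) * exp(-h/h_bar), i.e. Exponential(mean=h_bar).
H = rng.exponential(scale=h_bar, size=(L, N))
print(H.shape, H.mean())   # empirical mean should be close to h_bar
```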
Cooperative spectrum sensing
The sensing signal \(y_{n,l}(m)\) of SU n, for n=1,2,…,N, in subchannel l, for l=1,2,…,L, is denoted as follows:
$$ y_{n,l}(m)=\theta_{l} h_{n,l} p^{s}_{l} (m)+ \sigma_{l}^{2} (m), m=1,2, \ldots,M $$
where \(\theta_{l}=0\) and \(\theta_{l}=1\) denote the absence and presence of the PU in subchannel l, respectively; \(h_{n,l}\) denotes the subchannel gain from SU n to the PU; \(p^{s}_{l}\) is the transmission power of the PU in subchannel l; \(\sigma_{l}^{2}\) is the power of the noise in subchannel l; and M is the number of samples, given by \(M=\tau f_{s}\), where τ is the local sensing time and \(f_{s}\) is the sampling frequency. We can get the energy statistic of the sensing signal \(y_{n,l}\) by accumulating the energy of the M samples as follows:
$$ \psi_{n,l}=\frac{1}{M}\sum_{m=1}^{M} \|y_{n,l}(m)\|^{2} $$
Each SU reports \(\psi_{n,l}\) to a fusion center, which combines the \(\psi_{n,l}\) of the N SUs to get an overall energy statistic of the PU signal in subchannel l as follows:
$$ \Psi_{l}=\frac{1}{N}\sum_{n=1}^{N} \psi_{n,l}=\frac{1}{MN}\sum_{n=1}^{N} \sum_{m=1}^{M} \|y_{n,l}(m)\|^{2} $$
The overall energy statistic \(\Psi_{l}\) is compared with a threshold λ. If \(\Psi_{l}\geq\lambda\), the presence of the PU is declared; otherwise, the absence of the PU is declared. When M is large enough, \(\Psi_{l}\) approximately obeys a Gaussian distribution according to the central limit theorem. Hence, the cooperative probabilities of false alarm and detection for subchannel l are given as follows:
$$ \begin{array}{l} Q^{f}_{l}=Q \left(\left(\frac{\lambda}{\sigma_{l}^{2}}-1\right)\sqrt{N \tau f_{s}}\right) \\ Q^{d}_{l}=Q \left(\left(\frac{\lambda}{\sigma_{l}^{2}}-\bar{\gamma}_{l}-1\right)\sqrt{\frac{N \tau f_{s}}{\left(\bar{\gamma}_{l}+1\right)^{2}}}\right) \end{array} $$
where \(\bar {\gamma }_{l}=\frac {1}{N} {\sum _{n=1}^{N} {\frac {p^{s}_{l} h_{n,l}^{2}}{\sigma _{l}^{2}}}}\) is the average sensing SNR of N SUs in subchannel l, and the function \(Q(x)=\frac {1}{\sqrt {2\pi }}\int _{x}^{+\infty }\exp \left (-\frac {z^{2}}{2}\right) \ \mathbf{d} z\).
In order to avoid causing harmful interference to other underwater acoustic systems, the detection probability must be guaranteed. The sensing threshold λ can be obtained by fixing \(Q^{d}_{l}\) at a target value, which gives:
$$ \lambda = \left[Q^{-1}\left(Q^{d}_{l}\right)\sqrt {\frac{\left(\bar{\gamma}_{l}+1\right)^{2}}{N \tau f_{s}}} + \bar{\gamma}_{l}+1\right]\sigma_{l}^{2} $$
The selection of the sensing threshold is important to the sensing performance: lowering the threshold improves the detection probability, but the false alarm probability also increases, thus decreasing the spectrum access probability. By eliminating the threshold λ, \(Q^{f}_{l}\) can be expressed in terms of \(Q^{d}_{l}\) as follows:
$$ Q^{f}_{l}=Q\left(Q^{-1}\left(Q^{d}_{l}\right)\left(\bar{\gamma}_{l}+1\right)+\bar{\gamma}_{l}\sqrt{N \tau f_{s}}\right) $$
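To make the relation between the target detection probability and the resulting false alarm probability concrete, the following sketch evaluates Eqs. (7) and (8) with `scipy.stats.norm`, using the identities \(Q(x)=\) `norm.sf(x)` and \(Q^{-1}(p)=\) `norm.isf(p)`; the numeric inputs are illustrative placeholders.

```python
import numpy as np
from scipy.stats import norm

def sensing_threshold(Qd_target, gamma_bar, N, tau, fs, sigma2):
    """Threshold lambda from Eq. (7) that fixes the detection probability at Qd_target."""
    M_total = N * tau * fs
    return (norm.isf(Qd_target) * np.sqrt((gamma_bar + 1) ** 2 / M_total)
            + gamma_bar + 1) * sigma2

def false_alarm_prob(Qd_target, gamma_bar, N, tau, fs):
    """Cooperative false alarm probability from Eq. (8)."""
    return norm.sf(norm.isf(Qd_target) * (gamma_bar + 1)
                   + gamma_bar * np.sqrt(N * tau * fs))

# Example with placeholder values in the spirit of the simulation section.
print(false_alarm_prob(Qd_target=0.9, gamma_bar=0.1, N=10, tau=2e-3, fs=1e5))
```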
Underwater, since the channel between the signal source and a sonar is often in severe fading, the signal received by a single sonar may be too weak to be detected accurately. With cooperative spectrum sensing, however, the same signal is received by multiple sonars over different paths, so even if the signal received by one sonar is too weak, the detection performance can be maintained by sharing sensing information with the other sonars. Thus, a sensing diversity gain can be achieved through cooperative spectrum sensing to improve the final detection performance.
Spectrum efficiency maximization (SEM)
Spectrum efficiency is an important indicator for evaluating spectrum sensing performance. In spectrum sensing, degraded detection performance may decrease the spectrum access opportunity of the SU because of the increased false alarm probability; however, improving detection performance requires increasing the sensing time, which decreases the data transmission time. Hence, we need to optimize the sensing parameters to maximize the spectrum efficiency. The SU can transmit data effectively only when the absence of the PU is detected accurately. The probability of accurately detecting subchannel l as idle is \(P_{r}(\theta_{l}=0)\left(1-Q^{f}_{l}\right)\), and the transmission rate of SU n in subchannel l is given by \(R_{n,l}=a_{n,l}\log \left(1+\frac{p_{n,l} g_{n,l}^{2}}{\sigma_{l}^{2}}\right)\), where \(a_{n,l}\in\{0,1\}\) denotes whether subchannel l is allocated to SU n, \(p_{n,l}\) indicates the transmission power of SU n in subchannel l, and \(g_{n,l}\) denotes the subchannel gain between the receiver and transmitter of SU n. Supposing the frame duration is T and the length of each cooperative interaction slot is ε, the overall spectrum efficiency of the N SUs over the L subchannels is given as follows:
$$ \begin{aligned} \eta_{\text{SE}}&=\frac{T-\tau-N \varepsilon}{T}\sum_{n=1}^{N}\sum_{l=1}^{L}\\ &\quad\times\left[{P_{r}\left(\theta_{l}=0\right)\left(1-Q^{f}_{l}\right)}a_{n,l}\log \left(1+\frac{p_{n,l} g_{n,l}^{2}}{\sigma_{l}^{2}}\right)\right] \end{aligned} $$
Our goal is to maximize the spectrum efficiency of the SUs by jointly optimizing sensing time, subchannel allocation, and transmission power, subject to the constraints that the detection probability is above the lower limit \(Q_{d}^{\text{min}}\), the total power of SU n is below the maximal power \(p^{\text{max}}_{n}\), and each subchannel is allocated to only one SU. Supposing the spectrum sensing power is \(p_{c}\), the optimization problem of SEM is given as follows:
$$\begin{array}{*{20}l} \max_{\tau,\{a_{n,l}\},\{p_{n,l}\}} & \eta_{\text{SE}} \end{array} $$
$$\begin{array}{*{20}l} \text{s.t.} \ \ & Q^{d}_{l} \geq Q_{d}^{\text{min}}, l=1,2, \ldots,L \end{array} $$
$$\begin{array}{*{20}l} \ \ & \sum_{l=1}^{L} a_{n,l} p_{n,l} + p_{c} \leq p^{\text{max}}_{n}, n=1,2, \ldots,N \end{array} $$
(10c)
$$\begin{array}{*{20}l} \ \ & \sum_{n=1}^{N} a_{n,l}=1, a_{n,l}=\{0,1\}, l=1,2, \ldots,L \end{array} $$
(10d)
$$\begin{array}{*{20}l} \ \ & 0 \leq \tau \leq T-N \varepsilon; \end{array} $$
(10e)
$$\begin{array}{*{20}l} \ \ & p_{n,l} \geq 0, \ \ n=1,2, \ldots,N, l=1,2, \ldots,L \end{array} $$
(10f)
To simplify optimization problem (10), we relax the integer \(a_{n,l}\) to take any value in [0,1] and let \(\omega_{n,l}=a_{n,l} p_{n,l}\). Then, we have \(R_{n,l}=a_{n,l}\log \left(1+\frac{\omega_{n,l} g_{n,l}^{2}}{a_{n,l} \sigma_{l}^{2}}\right)\). From (8) and (9), \(\eta_{\text{SE}}\) improves as \(Q^{f}_{l}\) decreases, while \(Q^{f}_{l}\) decreases as \(Q^{d}_{l}\) decreases. Hence, \(\eta_{\text{SE}}\) can reach its maximum only when \(Q^{d}_{l} = Q_{d}^{\text{min}}\). The optimization problem is rewritten as follows:
$$\begin{array}{*{20}l} \max_{\tau,\{a_{n,l}\},\{\omega_{n,l}\}} \eta_{\text{SE}}&=\frac{T-\tau-N \varepsilon}{T} \sum_{n=1}^{N}\sum_{l=1}^{L}\\ &\quad\times\!\left[{P_{r}(\theta_{l}=0)\!\left(1\,-\,Q\left(\kappa\,+\,\bar{\gamma}_{l}\sqrt{N \tau f_{s}}\right)\!\right)}R_{n,l}\right] \end{array} $$
$$\begin{array}{*{20}l} \text{s.t.} \sum_{l=1}^{L} \omega_{n,l}& \leq \hat{p}^{\text{max}}_{n}, n=1,2, \ldots,N \end{array} $$
$$\begin{array}{*{20}l} \sum_{n=1}^{N} a_{n,l}&=1, a_{n,l}\in [0,1], l=1,2, \ldots,L \end{array} $$
$$\begin{array}{*{20}l} 0 &\leq \tau \leq T-N \varepsilon \end{array} $$
$$\begin{array}{*{20}l} \omega_{n,l} &\geq 0, \ \ n=1,2, \ldots,N, l=1,2, \ldots,L \end{array} $$
where \(\kappa =Q^{-1}(Q_{d}^{\text{min}})(\bar{\gamma}_{l}+1)\) and \(\hat{p}^{\text{max}}_{n}=p^{\text{max}}_{n}-p_{c}\). We then state the following theorem.
Theorem 1
Problem (11) is a convex optimization problem.
We typically require \(Q^{f}_{l}\leq 0.5\), which implies \(\kappa +\bar{\gamma}_{l}\sqrt{N \tau f_{s}}\geq 0\). Fixing \(R_{n,l}\), the second derivative of \(\eta_{\text{SE}}\) with respect to τ is
$$ \begin{aligned} \frac{\partial^{2} \eta_{\text{SE}}}{\partial^{2} \tau}&=-\frac{P_{r}(\theta_{l}=0)\sqrt{N f_{s}}}{T\sqrt{2 \pi \tau}}\sum_{n=1}^{N} \sum_{l=1}^{L} \\ &\quad\times{\left[\bar{\gamma}_{l} R_{n,l} \exp \left(-\frac{\left(\kappa+\bar{\gamma}_{l}\sqrt{N \tau f_{s}}\right)^{2}}{2}\right)\right]}\\ &\quad- \frac{T-\tau-N \varepsilon}{4T \tau \sqrt{2 \pi}} \times \sum_{n=1}^{N} \sum_{l=1}^{L}\\ &\quad\times\left[P_{r}(\theta_{l}=0) R_{n,l} \exp \left(-\frac{\left(\kappa+\bar{\gamma}_{l}\sqrt{N \tau f_{s}}\right)^{2}}{2}\right)\right.\\ &\quad\times\left.\vphantom{\frac{\left(\kappa+\bar{\gamma}_{l}\sqrt{N \tau f_{s}}\right)^{2}}{2}} \left(\left(\kappa+\bar{\gamma}_{l}\sqrt{N \tau f_{s}}\right) \bar{\gamma}_{l}^{2} N f_{s} + \bar{\gamma}_{l}\sqrt{N f_{s}} \tau^{\frac{1}{2}} \right)\right]<0 \end{aligned} $$
which indicates that \(\eta_{\text{SE}}\) is concave in τ. Moreover, \(R_{n,l}\), being the perspective of a concave logarithm, is concave in \((a_{n,l},\omega_{n,l})\). Since \(\eta_{\text{SE}}\) is a nonnegative linear combination of the \(R_{n,l}\), it is also concave in \((\{a_{n,l}\},\{\omega_{n,l}\})\). Hence, maximizing \(\eta_{\text{SE}}\) subject to the linear constraints of (11) is a convex optimization problem in \((\tau,\{a_{n,l}\},\{\omega_{n,l}\})\). □
We use alternating direction optimization (ADO) to obtain the solution to the optimization problem. First, fixing τ, we optimize \((a_{n,l},\omega_{n,l})\). Using Lagrange multipliers, the optimization function is given by
$$ \begin{aligned} \Gamma (a_{n,l},\omega_{n,l})&= \sum_{n=1}^{N}\sum_{l=1}^{L}\left[\rho_{l} a_{n,l}\log \left(1+\frac{\omega_{n,l} g_{n,l}^{2}}{a_{n,l} \sigma_{l}^{2}}\right)\right]\\ &\quad-\sum_{n=1}^{N} \left[\mu_{n} \left(\sum_{l=1}^{L} \omega_{n,l} - \hat{p}^{\text{max}}_{n}\right)\right]\\ &\quad-\sum_{l=1}^{L} \left[\nu_{l} \left(\sum_{n=1}^{N} a_{n,l}-1\right)\right] \end{aligned} $$
where \(\rho_{l}=P_{r}(\theta_{l}=0)\left(1-Q^{f}_{l}\right)\) is constant, and \(\mu_{n}\) for n=1,…,N and \(\nu_{l}\) for l=1,…,L are the Lagrange multipliers. By setting \(\frac{\partial \Gamma }{\partial a_{n,l}}=0\) and \(\frac{\partial \Gamma }{\partial \omega_{n,l}}=0\), we have
$$\begin{array}{*{20}l} \omega_{n,l}&=\rho_{l} a_{n,l} \left[\frac{1}{\mu_{n}}-\frac{\sigma_{l}^{2}}{g_{n,l}^{2}}\right]^{+} \end{array} $$
$$\begin{array}{*{20}l} a_{n,l} &= \left\{ \begin{array}{l} 0,\ \ \nu_{l} \leq \Phi_{n,l}(\mu_{n})\\ 1, \ \ \nu_{l} > \Phi_{n,l}(\mu_{n}) \end{array} \right. \end{array} $$
where \(\Phi_{n,l}(\mu_{n})=\rho_{l} \left(\log \left(\frac{g_{n,l}^{2}}{\mu_{n} \sigma_{l}^{2}}\right)+\frac{\mu_{n} \sigma_{l}^{2}}{g_{n,l}^{2}}-1\right)\). The Lagrange multipliers \(\mu_{n}\) and \(\nu_{l}\) can then be obtained using the gradient method, which leads to the following update equations:
$$ \begin{aligned} \mu_{n} (t+1)&=\left[\mu_{n} (t)+\zeta_{1}(t)\times \left(\sum_{l=1}^{L} \omega_{n,l} - \hat{p}^{\text{max}}_{n}\right)\right] \\ \nu_{l} (t+1)&=\left[\nu_{l} (t)+ \zeta_{2}(t)\times\left(\sum_{n=1}^{N} a_{n,l}-1\right)\right] \end{aligned} $$
where t≥0 is the iteration index and \(\zeta_{1}(t)\) and \(\zeta_{2}(t)\) are positive step sizes. The updated Lagrange multipliers in (16) are then used to update the power allocation in (14). Second, we fix \(R_{n,l}\) at the optimized \((a_{n,l},\omega_{n,l})\). Using Newton's method, we obtain the optimal τ by iteratively updating \(\tau^{(k)}\) until convergence, as follows:
$$ \tau^{(k+1)}=\tau^{(k)}-\frac{\frac{\partial \eta_{\text{SE}}}{\partial \tau^{(k)}}}{\frac{\partial^{2} \eta_{\text{SE}}}{\partial^{2} \tau^{(k)}}} $$
where k≥0 is the iteration index. As described above, we use ADO to optimize \((a_{n,l},\omega_{n,l})\) and τ alternately until \(a_{n,l}\), \(\omega_{n,l}\), and τ have all converged, as shown in Algorithm 1. Then, if \(a_{n,l}=0\), the transmission power is \(p_{n,l}=0\); otherwise, \(p_{n,l}=\frac{\omega_{n,l}}{a_{n,l}}\). Since \(\eta_{\text{SE}}\) is concave in τ, a, and ω, \(\eta_{\text{SE}}\) is non-decreasing over the iterations, as described by:
$$ \begin{aligned} \eta_{\text{SE}}\!\left(\!\tau^{(k)}\!,\!\{a_{n,l}\}^{(k)}\!,\!\{\omega_{n,l}\}^{(k)}\!\right)\!\! &\leq \eta_{\text{SE}}\left(\tau^{(k+1)},\{a_{n,l}\}^{(k)},\{\omega_{n,l}\}^{(k)}\right) \\ &\leq\!\eta_{\text{SE}}\!\left(\tau^{(k+1)},\{a_{n,l}\}^{(k+1)}\!,\!\{\omega_{n,l}\}^{(k)}\!\right)\\ &\leq\!\eta_{\text{SE}}\!\left(\!\tau^{(k+1)}\!,\!\{a_{n,l}\}^{(k+1)}\!,\!\{\omega_{n,l}\}^{(k+1)}\!\right) \end{aligned} $$
Hence, the convergence of \(\eta_{\text{SE}}\) is obtained after a finite number of iterations.
Supposing the estimation error is δ, \(\eta_{\text{SE}}\) is convergent when \(\tau^{(k)}\), \(\{a_{n,l}\}^{(k)}\), and \(\{\omega_{n,l}\}^{(k)}\) have all converged. Thus, the iterative complexity of the joint optimization algorithm is given by \(O\left(\frac{1}{\delta^{3}}\right)\).
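As a compact illustration, the sketch below organizes the ADO loop described above in Python. Here `rho`, `g2`, `sigma2`, and `p_max_hat` are assumed inputs, `grad_tau` and `hess_tau` stand in for the first and second derivatives of \(\eta_{\text{SE}}\) in τ (Eqs. (12) and (17)), and the channel assignment is simplified to an argmax over the scores \(\Phi_{n,l}\), which enforces the one-SU-per-channel constraint directly rather than through the multipliers \(\nu_{l}\).

```python
import numpy as np

def ado(rho, g2, sigma2, p_max_hat, grad_tau, hess_tau, tau0,
        steps=200, zeta=1e-2):
    """Schematic sketch of Algorithm 1 (alternating direction optimization).

    rho[l] = P_r(theta_l = 0) * (1 - Q_f_l);  g2[n, l] = g_{n,l}^2.
    grad_tau / hess_tau are assumed callables for the tau-derivatives of eta_SE.
    """
    N, L = g2.shape
    mu = np.ones(N)                       # multipliers for the power constraints
    tau = tau0
    for _ in range(steps):
        # Scores Phi_{n,l}(mu_n); each channel goes to the SU with the best score
        # (a simplification of rule (15) that enforces sum_n a_{n,l} = 1).
        phi = rho * (np.log(g2 / (mu[:, None] * sigma2))
                     + mu[:, None] * sigma2 / g2 - 1)
        a = (phi == phi.max(axis=0)).astype(float)
        # Eq. (14): water-filling-style power levels on the allocated channels.
        omega = rho * a * np.maximum(1.0 / mu[:, None] - sigma2 / g2, 0.0)
        # Eq. (16): subgradient ascent on the power multipliers.
        mu = np.maximum(mu + zeta * (omega.sum(axis=1) - p_max_hat), 1e-8)
        # Eq. (17): one Newton step on the sensing time.
        tau = tau - grad_tau(tau, a, omega) / hess_tau(tau, a, omega)
    return tau, a, omega
```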
Energy efficiency maximization (EEM)
Compared to a traditional communication system, the SU may consume more energy due to the spectrum sensing power. Thus, we use energy efficiency to evaluate the energy consumption of cooperative spectrum sensing, defining it as the ratio of spectrum efficiency to energy consumption. The total energy consumed for spectrum sensing and data transmission per unit time is given by
$$ E_{T}=\frac{1}{T}\left(N p_{c} (\tau+\varepsilon)+(T-\tau-N\varepsilon)\sum_{n=1}^{N} \sum_{l=1}^{L} \left[a_{n,l} p_{n,l} \right]\right) $$
Hence, the energy efficiency of cooperative spectrum sensing is given by
$$ \begin{aligned} \eta_{\text{EE}}&=\frac{\eta_{\text{SE}}}{E_{T}}\\ &=\frac{\left(T\,-\,\tau\,-\,N \varepsilon\right)\! \sum_{n=1}^{N}\sum_{l=1}^{L}\!\left[{P_{r}\left(\theta_{l}\,=\,0\right)\left(1\,-\,Q^{f}_{l}\right)}a_{n,l}\!\log\! \left(\!1\,+\,\frac{p_{n,l} g_{n,l}^{2}}{\sigma_{l}^{2}}\!\right)\!\right]}{N p_{c} \left(\tau\,+\,\varepsilon\right)\,+\,\left(T\,-\,\tau\,-\,N\varepsilon\right)\!\sum_{n=1}^{N} \!\sum_{l=1}^{L}\!\left[ a_{n,l} p_{n,l} \right]} \end{aligned} $$
We seek to maximize \(\eta_{\text{EE}}\) by jointly optimizing \((\tau,\{a_{n,l}\},\{p_{n,l}\})\). The optimization problem of EEM is given by
$$\begin{array}{*{20}l} \max_{\tau,\{a_{n,l}\},\{p_{n,l}\}} & \eta_{\text{EE}} \end{array} $$
We have proven \(\eta_{\text{SE}}(\tau,\{a_{n,l}\},\{p_{n,l}\})\) to be a continuous positive concave function. As \(E_{T}(\tau,\{a_{n,l}\},\{p_{n,l}\})\) is a positive linear function, we can solve optimization problem (21) by the Dinkelbach optimization [14].
Setting \(q=\frac {\eta _{\text {SE}}(\tau,\{a_{n,l}\},\{p_{n,l}\})}{E_{T}(\tau,\{a_{n,l}\},\{p_{n,l}\})}\), the optimization problem (21) can be rewritten as follows:
$$\begin{array}{*{20}l} &\max_{\tau,\left\{a_{n,l}\right\},\left\{p_{n,l}\right\},q} \eta_{\text{SE}}\left(\tau,\{a_{n,l}\},\{p_{n,l}\}\right)-q E_{T}\left(\tau,\left\{a_{n,l}\right\},\left\{p_{n,l}\right\}\right) \end{array} $$
$$\begin{array}{*{20}l} &\text{s.t.} (21b)-(21f) \end{array} $$
Supposing the feasible region of the solutions to (21) is denoted by S, the joint optimization algorithm based on the Dinkelbach optimization is described in Algorithm 2.
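A minimal sketch of the Dinkelbach outer loop is given below; `inner_solver(q)` is an assumed callable that solves the concave subproblem (22) for a fixed q (for instance with the ADO routine sketched earlier) and returns the maximizer together with the corresponding \(\eta_{\text{SE}}\) and \(E_{T}\) values.

```python
def dinkelbach(inner_solver, tol=1e-6, max_iter=50):
    """Dinkelbach's method for max eta_SE / E_T: iterate q until F(q) ~ 0."""
    q = 0.0
    x = None
    for _ in range(max_iter):
        x, eta_se, e_t = inner_solver(q)   # solve subproblem (22) at this q
        if eta_se - q * e_t < tol:         # F(q) = max eta_SE - q * E_T ~ 0
            break
        q = eta_se / e_t                   # Dinkelbach ratio update
    return x, q
```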
Simulations and discussions
In the simulations, the number of SUs is N=10, the number of subchannels is L=32, the channels obey the Rayleigh distribution with mean −10 dB, the absence probability of the PU is \(P_{r}(\theta_{l}=0)=0.5\), the frame duration is T=100 ms, the length of the cooperative interaction slot is ε=1 ms, the sampling frequency is \(f_{s}=100\) kHz, the maximal power of each SU is \(p_{n}^{\text{max}}=10\) mW, the sensing power is \(p_{c}=1\) mW, and the noise power is \(\sigma_{l}^{2}=0.01\) mW.
Figures 2 and 3 show the spectrum efficiency \(\eta_{\text{SE}}\) and the energy efficiency \(\eta_{\text{EE}}\) as functions of the sensing time τ, respectively. We can see that there exists an optimal τ that maximizes \(\eta_{\text{SE}}\) or \(\eta_{\text{EE}}\). When τ is too small, the detection performance degrades and spectrum access opportunities decrease; when τ is too large, both the transmission time and the transmission energy decrease. Thus, there is a tradeoff between spectrum sensing and data transmission. We also see that \(\eta_{\text{SE}}\) and \(\eta_{\text{EE}}\) improve as the detection probability decreases and the sensing SNR increases. Figures 4 and 5 compare the spectrum efficiency and energy efficiency of SEM and EEM, respectively. We can see that SEM obtains higher spectrum efficiency while EEM achieves higher energy efficiency, owing to their different optimization objectives.
Spectrum efficiency with sensing time
Energy efficiency with sensing time
Spectrum efficiency comparison
Energy efficiency comparison
Then, we use the power utilization ratio ρ to describe the usage of power in the CSCN; it represents the power consumption required to accomplish the desired communication task when the maximum power is limited. ρ can be formulated as the ratio of the total consumed power to the total maximum available power, as follows:
$$ \rho=\frac{\sum_{n=1}^{N}\sum_{l=1}^{L}\left(a_{n,l}p_{n,l}\right)+Np_{c}}{\sum_{n=1}^{N} p_{n}^{\text{max}}} $$
Figure 6 shows the power utilization ratio as a function of the detection probability. It can be seen that, as \(Q_{d}\) grows, the power utilization ratio of SEM remains 100%. SEM focuses only on maximizing spectrum efficiency and ignores power consumption; thus, it tends to use the entire power budget during the optimization, leading to a 100% power utilization ratio. In EEM, by contrast, power consumption receives substantial weight and the power utilization ratio always stays at a low level, which indicates that detection performance is improved by limiting power consumption. Figure 7 shows the power utilization ratio as a function of the maximal power. It is seen that EEM also achieves more efficient power utilization than SEM.
Power utilization ratio comparison with detection probability
Power utilization ratio comparison with maximal power
In this paper, we have maximized the spectrum efficiency and the energy efficiency of periodic cooperative spectrum sensing for the multichannel CSCN, respectively, by formulating optimization problems and jointly optimizing sensing time, subchannel allocation, and transmission power. We draw the following conclusions: (1) there is a tradeoff between spectrum sensing and data transmission; (2) SEM obtains higher spectrum efficiency while EEM achieves higher energy efficiency.
J Mitola, Cognitive radio for flexible mobile multimedia communications. Mob. Netw. Appl. 6(5), 435–41 (2001).
M Jia, X Gu, Q Guo, W Xiang, N Zhang, Broadband hybrid satellite-terrestrial communication systems based on cognitive radio toward 5G. IEEE Wirel. Commun. 23(6), 96–106 (2016).
J Shen, S Liu, Y Wang, Robust energy detection in cognitive radio. IET Commun. 3(6), 1016–23 (2009).
X Liu, J Min, X Tan, Threshold optimization of cooperative spectrum sensing in cognitive radio network. Radio Sci. 48(1), 23–32 (2013).
P Casari, M Zorzi, Protocol design issues in underwater acoustic networks. Comput. Commun. 34(17), 2013–2015 (2011).
AA Syed, W Ye, J Heidemann, in Proc. IEEE 27th Conf. Comput. Commun. (INFOCOM). T-Lohi A new class of MAC protocols for underwater acoustic sensor networks (IEEE, Phoenix, 2008), pp. 231–235.
M Stojanovic, On the relationship between capacity and distance in an underwater acoustic communication channel. ACM SIGMOBILE Mobile Comput. Commun. Rev. 11(4), 34–43 (2007).
JR Nedwell, J Lovell, AW Turnpenny, Experimental validation of a species-specific behavioral impact metric for underwater noise. J. Acoust. Soc. Amer. 118(3), 2019 (2005).
X Liu, X Tan, Optimization algorithm of periodical cooperative spectrum sensing in cognitive radio. Int. J. Commun. Syst. 27(5), 705–20 (2014).
YC Liang, Y Zeng, ECY Peh, Sensing-throughput tradeoff for cognitive radio networks. IEEE Trans. Wirel. Commun. 7(4), 1326–36 (2008).
R Fan, H Jiang, Optimal multi-channel cooperative sensing in cognitive radio networks. IEEE Trans. Wirel. Commun. 9(3), 1128–38 (2010).
P He, L Zhao, S Zhou, Z Niu, Water-filling: a geometric approach and its application to solve generalized radio resource allocation problems. IEEE Trans. Wirel. Commun. 12(7), 3637–47 (2013).
X Liu, M Jia, X Gu, J Yan, J Zhou, Optimal spectrum sensing and transmission power allocation in energy-efficiency multichannel cognitive radio with energy harvesting. Int. J. Commun. Syst. 30(5), 1–15 (2017).
W Zhong, K Chen, X Liu, Joint optimal energy-efficient cooperative spectrum sensing and transmission in cognitive radio. China Commun. 14(1), 98–110 (2017).
This work was supported by the National Natural Science Foundations of China under Grant Nos. 61601221, 91438205, and 61671183; the Natural Science Foundations of Jiangsu Province under Grant No. BK20140828; the China Postdoctoral Science Foundations under Grant No. 2015M580425; the Fundamental Research Funds for the Central Universities under Grant No. DUT16RC(3)045; and the Open Research Fund of State Key Laboratory of Space-Ground Integrated Information Technology under Grant No. 2015_SGIIT_KFJJ_TX_02.
Xin Liu: School of Information and Communication Engineering, Dalian University of Technology, Dalian, 116024, China
Min Jia: Communication Research Center, Harbin Institute of Technology, Harbin, 150080, China
XL conceived and designed the study. MJ performed the simulation experiments. XL wrote the paper. MJ reviewed and edited the manuscript. Both authors read and approved the final manuscript.
Correspondence to Xin Liu.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Liu, X., Jia, M. Cooperative spectrum sensing and data transmission optimization for multichannel cognitive sonar communication network. J Wireless Com Network 2017, 171 (2017) doi:10.1186/s13638-017-0962-8
Spectrum efficiency | CommonCrawl |
Recent questions tagged exercise5-2
Number of solutions of the equation $\;z^{2}+|z|^{2} =0\;$ is
The amplitude of $\sin\frac{\pi}{5} + i\,\cos\left(1-\frac{\pi}{5}\right)$ is
The equation $\;|z+1-i|=|z-1+i|\;$ represents a
The area of the triangle on the complex plane formed by the complex number $\;z \;,iz \;,$ and $\;z+iz\;$ is :
If the complex number $\;z=x+iy\;$ satisfies the condition $\;|z+1|=1\;$ , then z lies on
$1+i+i^{2}+i^{3} + i^{4}+i^{5}+i^{6}---+i^{2n}\;$ is
If $\;1-i\;$ is a root of the equation $\;x^{2} +ax + b=0\;$ where $\;a,b \in R\;$ , then find the values of a and b .
What is the locus of z , if amplitude of $\;z-2-3i\;$ is $\;\large\frac{\pi}{4}\;$ ?
What is the polar form of the complex number $\;(i^{25})^{3} \;$ ?
What is the principle value of amplitude of $\;1-i\;$ ?
What is the conjugate of $\; \large\frac{\sqrt{5+12 i } +\sqrt{5-12 i } } {\sqrt{5+12 i } -\sqrt{5-12 i } }$
If $\;z_{1} = \sqrt{3} + i \sqrt{3}\;$ and $\;z_{2} =\sqrt{3} + i\;$ , then find the quadrant in which $\;(\large\frac{z_{1}}{z_{2}})\;$ lies
What is the reciprocal of $\;3+ \sqrt{7} i$
What is the smallest positive integer n , for which $\; (1+i)^{2n} = (1-i)^{2n}$
What is the value of $\; \large\frac{i^{4n+1} - i^{4n-1}}{2}\;$ ?
State true or false for the following : If n is a positive integer , then the value of $\;i^{n} + i^{n+1} + i^{n+2} + i^{n+3} \;$ is 0
State true or false for the following : If three complex numbers $\;z_{1} , z_{2}\;$ and $\;z_{3}\;$ are in A.P. then they lie on a circle in the complex plane .
State true or false for the following : The points representing the complex number z for which $\;|z+1| < |z-1|\;$ lies in the interior of a circle
State true or false for the following : The argument of the complex number $\;z=(1+ i\sqrt{3})(1+i)(cos \theta + i sin \theta)\;$ is $\; \large\frac{7 \pi}{12} + \theta$
State true or false for the following : If a complex number coincides with its conjugate , then the number must lie on imaginary axis .
State true or false for the following : The complex number $\;cos \theta + i sin \theta\;$ can be zero for some $\;\theta$
State true or false for the following : Multiplication of non-zero complex number by i rotates it through a right angle in the anti clock - wise direction .
If $\;(2+i)(2+2i)(2+3i)....(2+ni)=x+iy\;$ , then$\;5\;.8\;.13....(4+n^{2})=$----
If a complex number lies in the third quadrant , then its conjugate lies in the ----
The conjugate of the complex number $\;\large\frac{1-i}{1+i}\;$ is
The value of $\;(-\sqrt{-1})^{4n-3}\;$ , where $\;n \in N \;$ is
The locus of $z$ satisfying $\;arg(z) = \large\frac{\pi}{3}\;$ is
If $\;|z|=2\;$ and $\;arg(z) = \large\frac{\pi}{4}\;$ , then $\;z=$
The real value of $' a '$ for which $\;3i^{3}-2ai^{2}+(1-a)i+5\;$ is real is
If $z_{1}$ and $z_{2}$ both satisfy $z+\overline{z} = 2|z-1|$ and $\arg(z_{1}-z_{2}) = \frac{\pi}{4}$, then find $\text{Im}(z_{1}+z_{2})$.
Find the value of $k$ if for the complex numbers $\;z_{1}\;$ and $\;z_{2}\;,|1-\overline{z_{1}}z_{2}|^{2} - |z_{1} - z_{2}|^{2}= k (1-|z_{1}|^{2}) (1-|z_{2}|^{2})$
Find the value of a such that the sum of the squares of the roots of the equation $\;x^{2}-(a-2)x-(a+1)=0\;$ is least .
Find the value of P such that the difference of the roots of the equation $\;x^{2}-Px+8=0\;$ is $2$
Find the value of $\;2x^{4} + 5 x^{3} +7x^{2} - x+41\;,$ when $\;x=-2-\sqrt{3} i$
Locate the points for which $\; 3 < |z| < 4$
If a complex number z lies on the interior or on the boundary of a circle of radius 3 units and centre (-4,0) find the greatest and least values of $\;|z+1| \;.$
If $\;z_{1} , z_{2} ,z_{3}\;$ are complex numbers such that $\;|z_{1}|=|z_{2}|=|z_{3}|=|\large\frac{1}{z_{1}} +\large\frac{1}{z_{2}} + \large\frac{1}{z_{3}}|=1\;$ , then find the value of $\;|z_{1}+ z_{2}+z_{3}|\; .$
Let $\;z_{1}\;$ and $\;z_{2}\;$ be two complex numbers such that $\;|z_{1}+z_{2}| = |z_{1}|+|z_{2}|\;$ Then show that $\;arg(z_{1}) - arg(z_{2}) =0$
Let $\;z_{1}\;$ and $\;z_{2}\;$ be two complex numbers such that $\;\overline{z_{1} }+ i\overline{z_{2}}=0\;$ and $\;arg(z_{1} z_{2}) = \pi\;$ . Then find $\;arg(z_{1})$
If $\;|z^{2}-1| = |z|^{2}+1\;$, then show that $z$ lies on imaginary axis
If the imaginary part of $\;\large\frac{2z+1}{iz+1}\;$ is $-2$ , then show that the locus of the point representing $z$ in the argand plane is a straight line .
Solve the equation $\;z^{2} = \overline{z}\;$ where $\;z=x+iy$
If $\;(x+iy)^{\large\frac{1}{3}}= a+ib\;$ where $\;x\;,y\;,a\;,b \in R\;$ show that $\; \large\frac{x}{a} - \large\frac{y}{b} = -2 (a^{2} + b^{2})$
Evaluate : $\; (1+i)^{6} +(1-i)^{3}$
Convert the given complex number in polar form :$\;i\;$
Convert the given complex number in polar form :$\;\sqrt{3}+i\;$
Convert the given complex number in polar form :$\;-3$
Convert the given complex number in polar form :$\;-1-i\;$
Convert the given complex number in polar form :$\;-1+i\;$
Convert the given complex number in polar form :$\;1-i\;$
A machine learning toolkit for genetic engineering attribution to facilitate biosecurity
Ethan C. Alley (ORCID: orcid.org/0000-0002-8219-7382)1,2,3,
Miles Turpin4,
Andrew Bo Liu5,
Taylor Kulp-McDowall6,
Jacob Swett1,
Rey Edison2,
Stephen E. Von Stetina2,
George M. Church1,3 &
Kevin M. Esvelt (ORCID: orcid.org/0000-0001-8797-3945)1,2
Nature Communications volume 11, Article number: 6293 (2020)
The promise of biotechnology is tempered by its potential for accidental or deliberate misuse. Reliably identifying telltale signatures characteristic to different genetic designers, termed 'genetic engineering attribution', would deter misuse, yet is still considered unsolved. Here, we show that recurrent neural networks trained on DNA motifs and basic phenotype data can reach 70% attribution accuracy in distinguishing between over 1,300 labs. To make these models usable in practice, we introduce a framework for weighing predictions against other investigative evidence using calibration, and bring our model to within 1.6% of perfect calibration. Additionally, we demonstrate that simple models can accurately predict both the nation-state-of-origin and ancestor labs, forming the foundation of an integrated attribution toolkit which should promote responsible innovation and international security alike.
After a nearly decade-long, $100 million investigation into the 2001 Amerithrax anthrax attacks1, the National Academy of Sciences reported that "it is not possible to reach a definitive conclusion about the origins…based solely on the available scientific evidence"2. In the aftermath of Amerithrax, the scientific community mobilized to develop genetic3 and phenotypic4 methods for forensic attribution of biological attacks. However, these efforts were severely constrained by the limits of biotechnology; satisfactory sequencing of the anthrax agent's genome would have cost $500,000 at the time5. Today, exponential improvement in tools for biotechnology has made this and other life-saving tasks increasingly inexpensive and accessible, yet these advancements have not been matched with corresponding improvements in tools to support responsible innovation in genetic engineering. In particular, attribution is still considered technically challenging and unsolved6,7,8,9.
Just as programmers leave clues in code, biologists "programming" an engineered organism differ on unique design decisions (e.g., promoter choice), style (e.g., preferred codon optimization), intent (e.g., functional gene selection), and tools (e.g., cloning method). These design elements, together with evolutionary markers, form designer "signatures". Advances in high-throughput, spatial, and distributed sequencing10,11,12, and omic-scale phenotyping13 make these signatures easier to collect but require complex data analysis. Recent work suggests that deep learning, a flexible paradigm for statistical models built from differentiable primitives, can facilitate complex biological data analysis14,15,16,17,18,19. We propose deploying these methods to develop a toolkit of machine-learning algorithms which can infer the attributes of genetically engineered organisms—like lab-of-origin—to support biotechnology stakeholders and enable ongoing efforts to scale and automate biosecurity20.
A prior attempt to use deep learning for genetic engineering attribution provided evidence that it might be possible but was <50% accurate21. Here, we reach over 70% lab-of-origin attribution accuracy using a biologically motivated approach based on learned DNA motifs, simple phenotype information, and Recurrent Neural Networks (RNNs) on a model attribution scenario with data from the world's largest plasmid repository, Addgene22 (Fig. 1a). We show that this algorithm, which we call deteRNNt, can provide more calibrated probabilities—a prerequisite for practical use—and with simple models demonstrate that a wider range of attribution tools are possible. By simultaneously addressing the need for accuracy, more calibrated uncertainty, and broad capabilities, this computational forensic framework can help enable the characterization of engineered biological materials, promote technology development which is accountable to community stakeholders, and deter misuse.
Fig. 1: Deep learning on DNA motifs and minimal phenotype information enables state-of-the-art genetic engineering attribution.
a The Addgene plasmid repository (bottom) provides a model through which to study the deployment scenario (top) for genetic engineering attribution. In the model scenario, research laboratories engineer organisms and share their genetic designs with the research community by depositing the DNA sequence and phenotypic metadata information to Addgene. In the corresponding deployment scenario, a genetically engineered organism of unknown origin is obtained, for example, from an environmental sample, lab accident, misuse incident, or case of disputed authorship. By characterizing this sample in the laboratory with sequencing and phenotype experiments, the investigator identifies the engineered sequence and phenotype information. In either case, the sequence and phenotype information are input to an attribution model which predicts the probability the organism originated from individuals connected to a set of known labs, enabling further conventional investigation; * indicates a hypothetical "best match" predicted by the imagined model. Above, the same information may be input to a wider toolkit of methods which provide actionable leads and characterization of the sample to support the investigator. b DNA motifs are inferred through the Byte Pair Encoding (BPE) algorithm25, which successively merges the most frequently occurring pairs of tokens to compress input sequences into a vocabulary larger than the traditional four DNA bases. Progressively, sequences become shorter and new motif tokens become longer. c BPE on the training set of Addgene plasmid sequences. The x axis shows the tokens rank-ordered by frequency in the sequence set (decreasing). The y axis shows token length, in base pairs. Example tokens (bold, numbered) are linked to biologically meaningful sequence motifs. d The deteRNNt method takes 100 random subsequences from the plasmid encoded with BPE and embeds them into a continuous space via a learned word embedding59 matrix layer. These (potentially variable-length) sequences are processed by an LSTM network. We average the predictions from each subsequence to obtain a softmax probability that the plasmid originated in a given lab. e Top-k prediction accuracy on the test set. Compared: deteRNNt, deteRNNt trained without phenotype, BLASTn, CNN deep-learning state-of-the-art method21, a baseline guessing the most abundant labs from the training set, and guessing uniformly randomly (so low, cannot be seen).* indicates P < 10−10, by Welch's two-tailed t test on n = 30 × 50% bootstrap replicates compared to BLAST. f Top-k prediction accuracy on test set with and without phenotype information. * indicates P < 10−10, by Welch's two-tailed t test on n = 30 × 50% bootstrap replicates compared to no phenotype.
Training and model evaluation
DNA sequences are patterned with frequent recurring motifs of various lengths, like codons, regulatory regions, and conserved functional regions in proteins. Workhorse bioinformatic approaches like profile Hidden Markov Models23 and BLAST24 often rely on implicit or explicit recognition of local sequence motifs. Using an algorithm called Byte-Pair Encoding (BPE)25, which was originally designed to compress text by replacing the most frequent pairs of tokens with new symbols26, we inferred 1000 salient motifs directly from the Addgene primary DNA sequences (Fig. 1b and "Methods") in a format suitable for deep learning. We found that the learned motifs appear to be biologically relevant, with the longer high-ranked motifs including fragments from promoters and plasmid origins of replication (ORIs) (Fig. 1c). Each of these inferred motifs is represented as a new token so that a primary DNA sequence is translated into a motif token sequence that models can use to infer higher-level features.
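As a rough illustration of how such a vocabulary can be learned, the sketch below implements greedy byte-pair merging over toy DNA strings; it is a minimal re-implementation of the BPE idea, not the authors' code, and the sequences and merge count are placeholders.

```python
from collections import Counter

def learn_bpe(seqs, num_merges):
    """Greedy Byte Pair Encoding on DNA strings: repeatedly merge the most
    frequent adjacent token pair into a new motif token."""
    corpus = [list(s) for s in seqs]          # start from single bases
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for toks in corpus:
            pairs.update(zip(toks, toks[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append(a + b)
        for i, toks in enumerate(corpus):     # replace every occurrence of (a, b)
            out, j = [], 0
            while j < len(toks):
                if j + 1 < len(toks) and toks[j] == a and toks[j + 1] == b:
                    out.append(a + b); j += 2
                else:
                    out.append(toks[j]); j += 1
            corpus[i] = out
    return merges, corpus

motifs, encoded = learn_bpe(["ATGGCGATCGATCG", "ATGGCGGGATCG"], num_merges=5)
print(motifs)   # learned motif tokens, most frequent merges first
```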
Next, we prepared the Addgene dataset for training and model evaluation. The data were minimally cleaned to create a supervised multiclass classification task ("Methods"), in which a plasmid DNA sequence and six simple phenotype characteristics (Supplementary Table 1) are used to predict which of the 1314 labs in the dataset deposited the sequence. This setup mirrors a scenario in which a biological sample is obtained, partially sequenced, and assayed to measure simple characteristics like growth temperature and antibiotic resistance (Supplementary Table 1). The method can be trivially extended to incorporate any measurable phenotypic characteristic as long as a sufficiently large and representative dataset of measurements and lab-of-origin can be assembled, enabling integration with laboratory assays.
Before any analysis began, the data were split to withhold a ~10% test set for evaluation ("Methods"). We further decreased the likelihood of overfitting by ensuring plasmids known to be derived from one another were not used in both training and evaluation ("Methods").
The deteRNNt model predicts lab-of-origin from DNA motif sequence and phenotypic metadata
For our model, we considered a family of RNNs called long-short term memory (LSTM) networks27, which process variable-length sequences one symbol at a time, recurrently, and are designed to facilitate the modeling of long-distance dependencies (Supplementary Fig. 1). We hypothesized that the intrinsically variable-length nature of RNNs would be well suited for biological sequence data and robust to data-processing artifacts, and that attribution may rely on identifying patterns between distant elements, such as regulatory sequences and functional genes. We tokenized the sequences with variations of BPE, as described above, and searched over 250 configurations of architectures and hyperparameters (Supplementary Figs. 2 and 3, "Methods").
The best-performing sequence-only model from this search (Supplementary Fig. 1) was then augmented with the phenotypic data (Supplementary Table 1) and fine-tuned ("Methods"). We elected to average the predictions of subsequences within each plasmid, as ensembling is widely recognized to reduce variance in machine-learning models28. On an Nvidia K80 GPU, the resulting deteRNNt prediction pipeline (Fig. 1d) takes <10 s to produce a vector (summing to 1) of model confidences that the plasmid belongs to each lab.
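The prediction-averaging step can be sketched as follows; here `model` is an assumed callable mapping a token-id sequence to a softmax probability vector over labs, and the subsequence count and length are illustrative defaults rather than the tuned configuration.

```python
import numpy as np

def predict_lab(model, token_ids, n_sub=100, sub_len=1000, rng=None):
    """Average a model's softmax predictions over random subsequences of one
    plasmid (a sketch of the ensembling step in the deteRNNt pipeline)."""
    rng = rng or np.random.default_rng()
    probs = []
    for _ in range(n_sub):
        if len(token_ids) <= sub_len:
            sub = token_ids
        else:
            start = rng.integers(0, len(token_ids) - sub_len + 1)
            sub = token_ids[start:start + sub_len]
        probs.append(model(sub))
    return np.mean(probs, axis=0)   # a probability vector summing to 1 over labs
```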
We compared Top-k lab-of-origin prediction accuracy on held-out test-set sequences (Fig. 1e and Supplementary Table 2). Our model, with the basic phenotype information, reaches 70.1% top 1 accuracy. It additionally reaches 84.7% top 10 accuracy, a more relevant metric for narrowing down leads. This is a ~1.7× reduction in the top 10 error rate of a deep-learning Convolutional Neural Network (CNN) method previously considered the deep-learning state of the art21, and a similar reduction in the BLAST29 top 10 error. We note that BLAST cannot produce uncertainty estimates and has other drawbacks for deployment, described elsewhere21. As expected, training without phenotype information reduced the Top-k accuracy of our model (Fig. 1f).
DeteRNNt is reasonably well-calibrated and can be improved with temperature scaling
Having achieved lab-of-origin attribution accuracy of over 70%, we next considered the challenge posed to practical use by the black-box nature of deep learning. While powerful prediction tools, deep-learning algorithms are widely recognized to be difficult to interpret30,31. Investigators of biological accidents or deliberate misuse will need to weigh the evidence from computational tools and other indicators, such as location, the incident's consequences, and geopolitical context. Without this capability, computational forensic tools may not be practically useful. Consequently, as a starting point, we should demand that our models precisely represent prediction uncertainty: they must be well-calibrated. An attribution model is perfectly calibrated if, when it predicts that a plasmid belongs to Lab X with Y% confidence, it is correct Y% of the time. Performant deep-learning models are not calibrated by default; indeed, many high-scoring models exhibit persistent overconfidence and miscalibration32.
Calibration can be measured empirically by taking the average model confidence within some range (e.g., 90–95%) and comparing it to the ground-truth accuracy of those predictions. Taking the average difference across all binned ranges from 0 to 100% is called the Expected Calibration Error (ECE). For situations where conservatism is warranted, the maximum calibration error (MCE), which instead measures the maximum deviation between confidence and ground truth, can be used ("Methods").
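Concretely, ECE and MCE can be computed from held-out predictions as in the sketch below; the 15-bin convention follows the standard calibration analysis, and `confidences` and `correct` are assumed NumPy arrays of top-1 confidences and 0/1 correctness outcomes.

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=15):
    """Expected (ECE) and maximum (MCE) calibration error from per-prediction
    top-1 confidences and 0/1 correctness indicators."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce, n = 0.0, 0.0, len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.sum() / n * gap    # weight each bin by its occupancy
            mce = max(mce, gap)
    return ece, mce
```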
Following standard calibration analysis32,33, we find that our model is calibrated within 5% of the ground truth on average (ECE = 4.7%, MCE = 8.9%, accuracy = 70.1%, Fig. 2a). For comparison, the landmark image recognition model ResNet 11034 is within 16.53% of calibration on average on the CIFAR-100 benchmark dataset32. We next deployed a simple technique to improve the calibration of deteRNNt called temperature scaling32. After training, we learn a parameter called "temperature" which divides the unnormalized log probabilities (logits) in the softmax function by performing gradient descent on this scalar parameter to maximize the log probability of a held-out subset of data, our validation set. While more sophisticated techniques exist, temperature scaling provides the first-order correction to over or under-confidence by increasing or decreasing the Shannon entropy of predictions (Fig. 2a, b). After scaling, the re-calibrated model achieved a lower ECE of just 1.6% with a marginal decrease in accuracy (MCE = 3.7%, accuracy = 69.3%, Fig. 2b, "Methods"). Using such calibrated predictions, investigators will be able to put more weight on a 95% prediction than a 5% prediction, choose not to act if the model is too uncertain, and have a basis for considering the relative importance of other evidence.
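A minimal NumPy sketch of the temperature-fitting step is shown below; the full pipeline presumably uses a framework optimizer, whereas here the scalar T is fitted by gradient descent on the validation negative log-likelihood with an analytic gradient. `logits` (n_examples × n_labs) and integer `labels` are assumed inputs.

```python
import numpy as np

def fit_temperature(logits, labels, lr=0.01, steps=500):
    """Fit a scalar temperature T by gradient descent on validation NLL."""
    T, n = 1.0, len(labels)
    for _ in range(steps):
        z = logits / T
        z = z - z.max(axis=1, keepdims=True)   # numerical stability; gradient unchanged
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        # dNLL/dT = mean_i (z_{i, y_i} - E_{p_i}[z_i]) / T
        grad = (z[np.arange(n), labels] - (p * z).sum(axis=1)).mean() / T
        T -= lr * grad
    return T

# Calibrated probabilities are then softmax(logits / T).
```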
Fig. 2: Model calibration and improvement with temperature scaling.
a The effect of temperature scaling on the top ten predictions of a randomly selected plasmid from the validation set's predicted logits. The temperature parameter divides the class logits in the softmax, increasing (higher temperature) or decreasing (lower temperature) the entropy of predictions. T = 1 corresponds to the default logits without temperature scaling. b Effective number of labs, exp(entropy), for the same random example as (a) as a function of the temperature scaling parameter T, with the unscaled default in black and the value of T learned by the temperature scaling calibration procedure shown in dotted outline. c Reliability diagram, pre-calibration. Every prediction in the test set is binned by predicted probability into 15 bins (x axis). Plotted for each bin is the model's predicted probability, or the confidence (dot), and the percentage of the time the model was correct within that bin, or the accuracy (dash). The difference between accuracy and confidence is shown with a shaded bar, red indicating confidence > accuracy, blue indicating confidence < accuracy. For a perfectly calibrated model, the accuracy and confidence would be equal within each bin. Overall prediction accuracy across all the bins and ECE are shown. d Same as (c) but after temperature scaling re-calibration.
Predicting plasmid nation-of-origin with Random Forests
With a framework for calibration of attribution models that can in principle be applied to any deep-learning classification algorithm, we next sought to expand the toolkit of genetic engineering attribution. While important, lab-of-origin prediction is only one component of the attribution problem. A fully developed framework would also include tools focused on laboratory attributes, which could narrow the number of labs to investigate, and on the research dynamics by which designs collaboratively evolve, which could generate investigative leads. Furthermore, a suite of such tools may improve our understanding of the research process itself. Here, we begin lab-attribute prediction with nation-of-origin prediction, which is particularly relevant for the Biological Weapons Convention (BWC). For research dynamics, we begin by predicting the ancestor-descendant relationships of genetically engineered material.
For these analyses, we used a simple machine-learning model called a Random Forest (RF)35,36, which ensembles decision trees, and encoded the DNA sequence of each plasmid with n-grams, which count the occurrence of n-mers in the primary sequence ("Methods"), instead of the more complex BPE-DNA motif encoding and highly parameterized deteRNNt model. We see this as a simple but reasonably performant method for checking the feasibility of each task.
We inferred the nation-of-origin of a majority of the labs in the Addgene dataset through provided metadata and, in a few hundred cases, manual human annotation, without changing the boundaries of the train-validate-test split ("Methods"). There were 34 countries present after cleaning ("Methods"). We proposed a two-step nation-of-origin prediction process: (1) binary classification of a plasmid into Domestic vs. International, to identify domestic incidents like local lab accidents or domestic bioterrorism; (2) multiclass classification to directly predict a plasmid's nation-of-origin, excluding the domestic origin, to identify the source of an international accident or foreign bio-attack. We found the United States to make up over 50% of plasmid deposits in all the subsets (Fig. 4b), so we selected it as the domestic category for this analysis; in principle, any nation could play this role, and other training datasets would have a different geographic emphasis. We trained an RF model, as described above, to perform the binary U.S. vs. International classification. As before, a ~10% test set was withheld for evaluation ("Methods"). We found that RF performed comparably to BLAST (84.2% and 85.1%, respectively), but that both performed substantially better than guessing the most abundant class and uniformly choosing a class (Fig. 3a, RF ROC in Fig. 3b).
Fig. 3: Simple machine-learning methods can accurately attribute nation-of-origin of engineered DNA.
a Domestic vs. International binary classification accuracy, with the United States defined as domestic, for BLAST, Random Forests (RF), a baseline of predicting the most abundant class from the training set, and uniformly random guessing. Random Forests reach 84% accuracy. b Receiver-operating characteristic (ROC) curve for the RF model, with the area under the curve (AUC) shown. c Top-k test-set accuracy of multiclass classification of nation-of-origin (excluding the United States, which is classified in (a, b)) for RF, BLAST, a baseline of predicting the most abundant class from the training set, and uniformly random guessing. With only 33 countries to choose between, guessing based on top ten abundance reaches 83.8%. * indicates P < 10⁻¹⁰, by Welch's two-sided t test on 30 × 50% bootstrap replicates compared to BLAST. d Test-set accuracy of RF on the multiclass classification of nation-of-origin, as in (c), colored by prediction accuracy within each nation class. Enlarged Europe and East Asia are shown above. e Nation-specific test-set prediction accuracy correlates with the log10 of the number of training examples. Shown in green is the mean of 30× bootstrap replicates subsampling 50% of the test-set examples for each class and computing the prediction accuracy within that subset. Error bars represent standard deviations of the same.
We proceeded to nation-of-origin prediction with the remaining international country assignments. A multiclass RF model was trained on the training set. Prediction accuracies within each country varied substantially (Fig. 3d), but overall RF accuracy was 75.8%, with top 10 accuracy reaching 96.7% (Fig. 3c and Supplementary Table 3). If asked to predict the top three most likely countries, which could be more useful for investigators, we find 87.7% top 3 accuracy for RF compared to 47.0% for guessing the most abundant classes. As shown in Fig. 3e, classification performance is degraded by low sample size for many countries in the dataset, explaining some of the variability in Fig. 3d. We therefore expect performance to improve and variability to be reduced as more researchers in these countries publish genetic designs. Together, these results suggest that it may be possible to model genetic engineering provenance at a coarse-grained level, which should be helpful in cases where a lab has never been seen before. However, the dataset used here has substantial geographic bias due to Addgene's location in North America and differences in data- and materials-sharing practices as a function of geography. Additionally, the accuracy of our simple model could be substantially improved. Even with those considerations addressed, improved nation-of-origin models, as with the other methods in this paper, should be one part of an integrated toolkit that assists human decision-makers rather than assigning origin autonomously.
Inferring collaboration networks by predicting ancestor lab-of-origin
We continued to expand the toolkit of attribution tools by evaluating the feasibility of predicting plasmid research dynamics in the form of ancestor-descendant relationships. These might help elucidate connections between research projects, improve credit assignment, and provide evidence in cases when the designer lab has not been observed in the data but may have collaborated with a known lab. Ancestry relationships may arise from deliberate collaboration, in which one lab shares an "ancestor" plasmid with another lab that creates a derivative, or from incidental reuse of genetic components from published work. In our usage, these ancestor-descendant relationships form "lineages" of related plasmids.
We inferred plasmid ancestor-descendant linkages from the relevant fields in Addgene which acknowledge sequence contributions from other plasmids ("Methods"). By following these linkages, we identified 1223 internally connected lineage networks of varying sizes (example networks in Fig. 4a). Analysis of the networks revealed that the number of plasmids deposited by lab and country (Fig. 4b), the number of lab-to-lab connections, and the number of connections per plasmid (Fig. 4c) exhibited extremely skewed, power-law-like distributions ("Methods"). These patterns reflect some combination of the trends in who deposits to Addgene and the underlying collaboration process.
Fig. 4: Lineage networks reveal skewed distributions in Addgene deposits and facilitate ancestor lab attribution with simple machine-learning models.
a Example ancestor-descendant lineages inferred from Addgene plasmid data. The largest lineage is shown at top, followed by the lineage with the largest diameter, followed by a collection of examples from the more complex lineages. b Number of deposits. Above, the number of plasmids deposited per lab, with ranked labs (descending) on the x axis and log10 of the number of plasmids deposited on the y axis. Top contributing labs are listed in the table. Below, the number of deposits per country. c Number of ancestor-descendant connections. Above, the number of ancestor-descendant linkages per lab, with the most linked shown in the table as in (b). Below, the number of linkages per plasmid. For each, a subpanel shows the left side of the distribution by cutting out single linkages, which constitute the right tail. d Lineage network between nations. Links indicate that at least one lab in a country has a plasmid derived from the other country or vice versa. Width of the link is proportional to the number of connected labs. e Top four PageRank scores from the lineage network from (d), represented as a directed weighted graph. The score is shown here as a percentage. f Top-k accuracy predicting the ancestor lab: for a simple Random Forest (RF) model, the baseline of predicting the most abundant class(es) from the training set, and uniformly random guessing.
Next, we constructed an international lineage network: a weighted directed graph of ancestor-descendant connections between labs in different countries, with weights proportional to the number of unique lab-to-lab connections (Fig. 4d and "Methods"). We used the Google PageRank algorithm, which scores the importance of each country in the network by measuring the percentage of time one would spend in each country if one followed the links in the network randomly (Fig. 4e and "Methods"). As expected from the geographic centrality of the United States in this dataset, it scores by far the highest at 36.2%. Unexpectedly, Denmark, Austria, and Japan followed as the next most important nodes, despite not being among the top four countries in terms of raw Addgene contributions.
Finally, we labeled each plasmid with the lab of its most recent ancestor, i.e., the lab which contributed part of the source sequence of the plasmid. We call this ancestor lab attribution, to emphasize that it occurs one step above lab-of-origin attribution in the lineage network. We re-split the data to ensure the representation of each ancestor lab in the training and test sets ("Methods"). On the held-out test set, we find top 1 accuracy of 87.0% and top 10 accuracy of 96.5%, compared to guessing the most frequent class(es) at 13.6% (k = 1) and 45.0% (k = 10), respectively (Fig. 4f and Supplementary Table 4). We note that this high accuracy compared to standard lab-of-origin prediction is likely explained by having only 188 ancestor labs to guess between (compared to 1314 previously), and is limited by the lineage structure we were able to infer from Addgene, which may be particular to this unique resource rather than representative of broader trends in collaboration.
Exploring the deteRNNt model
A better understanding of the predictions of and features learned by neural networks for attribution is desirable. However, while model calibration can be a universal criterion for any black-box model, it is notoriously challenging (and often model specific) to visualize and understand the high-dimensional non-linear function of the input data which a deep neural network represents30,31.
We began by visualizing the learned feature representation of the model. We took the model's 1000-dimensional hidden activations and projected them into two dimensions using a t-distributed stochastic neighbor embedding37 (tSNE) ("Methods"). We saw many large, well-separated clusters of plasmids that are assigned high probability by the model (Fig. 5a). This is in line with the expectation that a model can classify a group of plasmids more accurately if they are more separable in the hidden space.
Fig. 5: Interpreting the deteRNNt model.
a Two-dimensional tSNE of the pre-logit hidden layer of deteRNNt for the validation set, colored by the probability assigned to the true lab. An interactive 3D visualization is available at http://papers.altlabs.tech/visualize-hiddens.html. b Scanning mask of 10 N's across the linear sequence (x axis) of the Chris Voigt lab plasmid pCI-YFP (Genbank JQ394803.1), for which the model is nearly evenly split between predicting the Baojun Wang Lab vs. the Voigt Lab. At each x position is shown the probabilities given by softmax on the mean of the logits from all the sequences which include that position masked with N. The plasmid schematic is shown below. c The scanning 10-N mask analysis from (b) applied to the Edward Boyden Lab plasmid pAAV-Syn-SomArchon (Addgene #126941). The second-most likely lab predicted by deteRNNt on this plasmid is shown in red, with the plasmid schematic above. d Predicting lab-of-origin from subsequences of varying lengths K scanned across the linear sequence. At each x, the probabilities are given by softmax of the mean of logits from subsequences which include that position in the window. Color indicates subsequence K-mer length. Linear sequence position is given by the x axis, shared with (b), with the plasmid schematic above. e Custom-designed gene-drive plasmid derived from an Omar Akbari Aedes aegypti germline-Cas9 gene-drive backbone (AAEL010097-Cas9, Addgene #100707) carrying a payload of Cas9-dsRed, a guide cassette, and SomArchon from the Boyden plasmid pAAV-Syn-SomArchon from (c), with scanning subsequence window as in (d) and K = 1024. Predicted probabilities for Omar Akbari (red) and Boyden (blue) are shown, along with the correspondingly colored plasmid schematic. All unlabeled regions are from the Akbari vector. The right and left homology arms shown in gray were introduced by our design and come from neither lab's material, and the U6a and U6c promoters were from Akbari lab material (Addgene #117221 and #117223, respectively) but introduced into the backbone by our design (gray outlined in red).
We next examined three case studies to inspect the contribution of DNA sequence features to deteRNNt predictions. These should be taken as suggestive, given the anecdotal nature of analyzing an example plasmid. We first looked at a plasmid from Chris Voigt's lab, pCI-YFP (Genbank JQ394803.1), which is not in Addgene and has been used for this purpose in prior work21. pCI-YFP is notable because it is composed of very widespread and frequently used components (Fig. 5b, below). We found that sequence-based deteRNNt is uncertain whether this plasmid was designed by the Voigt Lab or by Baojun Wang's Lab (~25% probability to each). Notably, the Wang lab does substantial work in genetic circuit design and other research areas which overlap with the Voigt lab. We adapted a method described previously21 to perform an ablation analysis, in which the true DNA sequence within a window is replaced by the unknown symbol N. We scan a window of 10 N's across the length of the sequence and predict the lab-of-origin probabilities for each resulting sequence.
We visualized which sequences were critical for predicting the Voigt lab (drop in Voigt probability when ablated) and which were "distracting" the model into predicting the Wang label (rise in Voigt probability when ablated) (Fig. 5b). Among the sequence features we identified, we found a sequence in a Tn903 inverted repeat (upstream of the p15a origin of replication) which, when deleted, causes a spike in Voigt Lab probability, and also found two unlabeled primer sites, VF2 and VR, which seem to be important for predicting Voigt and Wang, respectively.
Given the age of pCI-YFP (published in 2012), we were concerned that it might not be the most representative case study because (1) genetic engineering is a rapidly evolving field, and more recent techniques and research objectives are likely to have shifted the data distribution since this plasmid's design and publication, and (2) older plasmids are more likely to have their subcomponents reused in other plasmids, especially within the same lab, where a material transfer is instantaneous.
We identified the plasmid pAAV-Syn-SomArchon (Addgene #126941) from Edward Boyden's lab for an additional case study. This plasmid was also outside our dataset because it was published38 after our data were downloaded. Beyond representing a case of generalizing forward in time and to an unpublished research project, this plasmid also contains the coding sequence for SomArchon, which is the product of a screening-based protein engineering effort. We note that despite being a design newer than our data, this plasmid builds on prior work, and we expect some microbial opsins and elements of the backbone to be present elsewhere in Addgene.
Given the entire sequence, sequence-based deteRNNt predicts the plasmid belongs to the Boyden Lab as its top choice at 44%, followed by the "Unknown Engineered" category at 10%, followed by the Adam Cohen lab at 4%, which has also published on microbial-opsin-based voltage indicators39,40. Performing the same analysis as above, we examined the predictions while scanning 10 Ns along the sequence (Fig. 5c). Interestingly, there were many restriction sites that, when deleted, dropped the Boyden probability. Restriction site choices can often be a hallmark of a research group. In addition, the choice of what to do with the ATG start codon in a fusion protein (in this case the eGFP fused to ArchonI) can differ between labs. One could leave it be, delete it, or mutate it, as the Boyden group chose to do in this case (ATG to GTG, M to V), which our model appears to identify as a critical feature (drop in Boyden probability of ~10% when deleted) (Fig. 5c).
Unlike architectures which assume a fixed-length sequence, deteRNNt can accept a sequence of any length as input. We used this capability to examine the marginal contribution of K-mer subsequences of various lengths to the overall prediction. Unlike the analysis above, this tests what the model believes about a fragment of the plasmid in isolation. We slid a window of length K across pCI-YFP and visualized the positional average of the predicted Voigt lab probability (Fig. 5d). We see that, in general, subsequence predictions produce probabilities substantially lower than the ~25% for the full sequence, with higher-frequency features showing much lower predicted probability. K-mers containing BBa_B0015 on the right-hand terminus of the sequence have a surprisingly high marginal probability of predicting Voigt, around 10% for K = 256, while the ablation analysis does not identify this as a critical region.
We were curious to what degree the predictions of deteRNNt on these two case studies depended on having the highly discriminative motifs, like the restriction sites and codon selection noted above, co-occur with backbone elements which are incidental to the functioning of the system. In practice, the backbone plasmid may change substantially as part of a new project within a lab, or an actor who is not represented in Addgene may combine elements from existing plasmids for a new purpose. While we cannot expect a multiclass classification model to identify a never-before-seen lab, we might hope that it could identify which components of a plasmid were likely derived from, or designed by, a given lab. If this subcomponent attribution could be achieved, it might provide another useful tool for understanding the origin of an unknown genetic design.
To test this, we designed a gene-drive plasmid for use in Aedes aegypti by starting from a germline-Cas9 backbone designed in the Omar Akbari lab (AAEL010097-Cas9, Addgene #100707). In our design, the original Cas9 is fused with dsRed from elsewhere in the plasmid, and the backbone was modified to introduce homology arms and include a payload of a guide RNA cassette and the SomArchon CDS from the Boyden Lab's pAAV-Syn-SomArchon examined above ("Methods"). As we might expect, the model is very uncertain about this hybrid plasmid and assigns ~35% probability to it being from the "Unknown Engineered" class. Among the true lab classes, it assigns the Akbari lab the highest probability at ~3%, but gives the Boyden lab only 0.3%. When we apply the scanning K-mer analysis from Fig. 5d, with K = 1024, we find that the model peaks its Akbari lab predictions on backbone elements around two piggyBac sites; furthermore, the Boyden lab-derived SomArchon payload has a peak of predicted Boyden probability, indicating that the model could identify this functional payload even absent clues from the pAAV-Syn-SomArchon backbone. While these analyses should be taken as anecdotes, as with any case study, we note that future work could explicitly model the attribution problem as predicting the lab-of-origin of every position in a sequence.
Together, these results suggest a practical and accurate toolkit for genetic engineering forensics is within reach. We achieve 70% accuracy on lab-of-origin prediction, a problem previously thought challenging if not intractable. Our work has the advantages of using biologically motivated, motif-based sequence models, and of leveraging phenotype information to both improve accuracy and interface with laboratory infrastructure. Furthermore, with model calibration, we provide the first framework for weighing the predictions of attribution forensics models against other evidence. Finally, we establish new attribution tasks, nation-of-origin and ancestor lab prediction, which promise to aid in bioweapons deterrence and open the possibility of more creative attribution technologies. While we focus here on security, attribution has wide implications. For example, attribution could promote better lab safety by tracing accidental release41. Additionally, we discovered an interesting, albeit anecdotal, power-law-like skew in Addgene plasmid deposits, which may reflect the particular dynamics of this unique resource, but is also consonant with prior work on scale-free patterns of scientific influence42,43,44. We believe that a deeper understanding of the biotechnology enterprise will continue to be a corollary of attribution research, and perhaps more importantly, that computational characterization tools, like ancestor lab prediction, will directly promote openness in science, for example by increasing transparency in the acknowledgement of sequence contributions from other labs, and by establishing a mechanism by which community stakeholders and policymakers can probe the research process.
Our analysis has limitations. Machine learning depends on high-quality datasets, and the data required to train attribution models are both diverse and scenario-dependent. That said, we believe it is the responsibility of the biotechnology community to develop forensic attribution techniques proactively, and while the Addgene attribution data provide a reasonable model scenario, future work should look to build larger and more balanced datasets, validate algorithms on other categories of engineered sequences like whole viral and bacterial genomes, and analyze their robustness to both obfuscation efforts and dataset shifts, especially considering new methods for robust calibration45. We further note that 70% lab-of-origin attribution accuracy is not conclusive on its own, and any real investigation with tools at this level of accuracy would do well to rely predominantly on human expertise. However, combined with traditional investigation, microbial forensics3,4, isotopic ratio analysis46,47,48, evolutionary tracing, sequence watermarking49,50, and a collection of machine-learning tools targeting nation-states, ancestry lineages, and other angles, our results suggest that a powerful integrated approach can, with more development, amplify human expertise with practically grounded forensic algorithms. In the meantime, developing automated attribution methods will help scale efforts to understand, characterize, and study the rapidly expanding footprint of biotechnology on society, and may in doing so promote increased transparency and accountability to the communities affected by this work.
We see our results as the first step toward this integrated approach, yet more work is needed. The bioengineering, deep learning, and policy communities will need to creatively address multidisciplinary problems within genetic engineering attribution. We are hopeful that the gap between these fields can be closed so that tools from deep learning and synthetic biology are proactively aimed at essential problems of responsible innovation.
Graphing and visualizations
All graphs and plots were created in python using a combination of the seaborn (https://seaborn.pydata.org/), matplotlib (https://matplotlib.org/3.1.0/index.html), plotly (for geographic visualization and interactive 3D hidden-state visualization: https://plot.ly/), and networkx (https://networkx.github.io/) packages. The graphics in Fig. 1a were created in part with BioRender.com.
Processing the Addgene dataset
The dataset of deposited plasmid sequences and phenotype information was used with permission from Addgene, Inc.
Scientists using Addgene may upload full or partial sequences along with metadata such as growth temperature, antibiotic resistance, vector backbone, vector manufacturer, host organism, and more. For quality control, Addgene sequences portions of deposited plasmids, and in some cases sequences entire constructs. As such, plasmid entries featured sequences categorized as addgene-full (22,937), depositor-full (32,669), addgene-partial (56,978), and depositor-partial (25,654). When more than one category was listed, we prioritized plasmids in the order listed above. When there was more than one entry for a category, the longest sequence was chosen. If more than one partial sequence was present, we concatenated them into a single sequence. The final numbers by sequence category were addgene-full (22,937), depositor-full (28,465), addgene-partial (27,185), and depositor-partial (3,247). Plasmids were dropped if they did not have any registered sequences. Any U characters were changed to T, and letters other than A, T, G, C, or N were changed to N. The resulting dataset contained 81,834 plasmids from 3,751 labs.
The raw dataset contains as many as 18 metadata fields from Addgene. In the final dataset, we kept only host organism species, growth temperature, bacterial resistance, selectable markers, growth strain, and copy number. These fields were selected because they are phenotypic characteristics we expect to be easy to measure in the scenario considered here, where a sample of the organism is available for sequencing and wet-lab experimentation. For more information on what these phenotyping assays might look like, see Supplementary Table 1.
Metadata fields on Addgene are not standardized and, as a result, have many irregularities. To deal with the lack of standardization, our high-level approach was to default to conservatism in assigning a given plasmid some phenotype label. We used an "other" category for each field to avoid noisy or infrequent labels; in particular, we assigned any label that made up <1% of all labels in some field to "other". Additionally, some of the fields had multiple labels, e.g., the species field may list multiple host organisms: "H. sapiens (human), M. musculus (mouse), R. norvegicus (rat)". These fields were one-hot encoded, allowing multiple columns to take on positive values if multiple labels were present. If the field contained no high-frequency labels (>1%) and was not missing, the "other" column was set to 1. A small number of plasmids (117) had sequences but no metadata. While far from perfect, our choice to use the default category of "other" to avoid introducing noisy information should prevent spurious features from being introduced by including phenotypic metadata. Given the performance boost achieved by including even this very minimal phenotype information, we are enthusiastic about future efforts to collate more standardized, expressive, and descriptive phenotype information.
Inferring plasmid lineage networks
Many plasmids in the Addgene database reference other plasmids used in their construction. Within the metadata of each plasmid, we searched for references to other sequences in the Addgene repository, either by name or by their unique Addgene identifier. Plasmid names were unique except for 1519 plasmids that had names associated with more than one Addgene ID (331 of these also had duplicate sequences). However, none of these plasmids with duplicate names were referenced by name by some other plasmid. We considered a plasmid reference in one of the following metadata fields to be a valid reference: "A portion of this plasmid was derived from a plasmid made by", "Vector backbone", "Backbone manufacturer", and "Modifications to backbone". Self-references were not counted, and in the rare case where two plasmids referred to each other, the descendant/ancestor relationship was picked at random.
We were interested in discovering networks of plasmids with shared ancestors—collectively we may call this subset of plasmids a lineage network. The problem of assigning plasmids to their associated network reduces to the problem of finding a node's connected component from an adjacency list of a directed, potentially cyclic, graph. The algorithm proceeded iteratively: in each round we picked some unvisited node. We then performed breadth-first search (plasmids were allowed to have multiple ancestors) and assigned all nodes visited in that round to a lineage network. In the case where a visited node pointed to a node that was already a member of some network, the two networks were merged. Keeping track of nodes visited in each round prevented the formation of cycles. We verified this result by reversing our adjacency list and running the same algorithm, equivalent to traversing by descendants instead of ancestors.
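As an illustrative sketch, the same grouping can be computed as the connected components of an undirected view of the ancestry graph, which makes the network-merging step implicit. The input format (a dict mapping each plasmid ID to the IDs of its ancestors) and all names below are our assumptions, not the paper's released code:

```python
from collections import deque

def lineage_networks(ancestors):
    """Group plasmids into lineage networks (connected components).

    ancestors: dict mapping plasmid ID -> list of ancestor plasmid IDs.
    Returns a list of sets, one per lineage network.
    """
    # Build an undirected adjacency list so that merging networks is implicit.
    adj = {node: set() for node in ancestors}
    for child, parents in ancestors.items():
        for parent in parents:
            adj.setdefault(parent, set()).add(child)
            adj[child].add(parent)

    networks, visited = [], set()
    for start in adj:
        if start in visited:
            continue
        # Breadth-first search from an unvisited node; tracking visited
        # nodes prevents revisiting and handles cycles safely.
        component, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in visited:
                continue
            visited.add(node)
            component.add(node)
            queue.extend(adj[node] - visited)
        networks.append(component)
    return networks
```

Equivalently, networkx.weakly_connected_components on the directed ancestry graph yields the same grouping.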
Train-test-validation split
To rigorously evaluate the performance of a predictive algorithm, strong boundaries between the datasets used for training and evaluation are needed to prevent overfitting. We follow best practices by pre-splitting the lab-of-origin Addgene data into an 80% training set, ~10% validation set for model selection, and ~10% test set held out for final model evaluation. In our vocabulary, the training set can be used for any optimization, including fitting an arbitrarily complex model. The validation set may only be used to measure the performance of an already trained model, e.g., to select architecture or hyperparameters; no direct training. Finally, the test set may only be used after the analysis is completed, the architecture and hyperparameters are finalized, as a measure of generalization performance. We addressed two additional considerations with the split:
A large number of labs only deposited one or a few sequences. This is insufficient data to either train a model to predict that lab class or reasonably measure generalization error.
Like many biological sequence datasets, the Addgene data are not independent and identically distributed because many plasmids are derived from others in the dataset, potentially creating biased accuracy measures due to overperformance on related plasmids used for both training and evaluation.
To handle the first, we chose to pool plasmids from labs with fewer than ten examples into an auxiliary category called "Unknown Engineered", and additionally stratified the split to ensure that every lab has at least three plasmids in the test set.
For the second, we inferred lineage networks (see above). We stratified the split such that multi-lab networks were not split into multiple sets. In other words, each lineage was assigned either to the training, validation, or test set as a group, not divided between them.
We used the GroupShuffleSplit function from scikit-learn (http://scikit-learn.org/stable/) in python to randomly split the data given these constraints. The final split had 63,017 training points, 7466 validation points, and 11,351 test points. The larger test set is a direct result of enforcing that one-third of each rare lab's plasmids are placed there for generalization measurement. We note that, because model selection occurs on the validation set, which has no representation of a number of rare labs, there is a built-in distributional shift that makes generalizing to our test set particularly challenging. However, we believe that this is appropriate to the problem setting: attribution algorithms should be penalized if they cannot detect rare labs, because in a deployment scenario the responsible lab may be unexpected. A visualization of this phenomenon, and the lab distribution after splitting, can be found in Supplementary Fig. 4.
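For concreteness, a minimal sketch of a group-aware 80/10/10 split with scikit-learn's GroupShuffleSplit is shown below; the arrays are illustrative placeholders, and the actual pipeline additionally enforced the per-lab test-set minimum described above:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Illustrative inputs: one row per plasmid, with a lineage-network ID per
# plasmid so that related plasmids always land in the same split.
X = np.arange(20).reshape(-1, 1)        # placeholder features
y = np.arange(20) % 3                   # placeholder lab labels
groups = np.repeat(np.arange(10), 2)    # lineage-network IDs, 10 networks

# Carve off ~20% of the networks, then split that pool in half into
# validation and test, giving roughly 80/10/10 overall (by groups).
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, holdout_idx = next(gss.split(X, y, groups=groups))

gss2 = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
val_rel, test_rel = next(gss2.split(X[holdout_idx], y[holdout_idx],
                                    groups=groups[holdout_idx]))
val_idx, test_idx = holdout_idx[val_rel], holdout_idx[test_rel]
```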
We confirmed that this cleaning and splitting procedure did not dramatically change the difficulty of the task from prior work by reproducing the model architecture, hyperparameters, and training procedure of a model with known performance on a published dataset (see Baselines in "Methods")21.
Byte-pair encoding
The sequences from the training set were formatted as a newline-separated file for Byte Pair Encoding inference. The inference was performed in python on Amazon Web Services (AWS) with the sentencepiece package (https://github.com/google/sentencepiece) using both the BPE and Unigram51 algorithms, with vocabulary sizes in [100, 1000, 5000, 10000], no start or end token, and full representation of all characters. The resulting model and vocabulary files were saved for model training, which used sentencepiece to tokenize batches of sequence on the fly during training. For the Unigram model, which is probabilistic, we sampled from the top ten most likely sequence configurations.
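For concreteness, training and applying a BPE vocabulary of this kind with sentencepiece might look like the following sketch; the file names are placeholders, and any option beyond those stated above is an assumption:

```python
import sentencepiece as spm

# Train a BPE model on newline-separated plasmid sequences.
# character_coverage=1.0 requests full representation of all characters;
# BOS/EOS tokens are not added during encoding by default, matching the
# no-start/end-token setting described above.
spm.SentencePieceTrainer.train(
    input="train_sequences.txt",   # placeholder file name
    model_prefix="bpe_dna",
    vocab_size=1000,
    model_type="bpe",
    character_coverage=1.0,
)

# Tokenize a sequence on the fly into motif IDs for model training.
sp = spm.SentencePieceProcessor(model_file="bpe_dna.model")
ids = sp.encode("ATGGTGAGCAAGGGCGAGGAG", out_type=int)
```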
For the visualization and interpretation of the 1000-token BPE vocabulary ultimately selected by our search algorithm, we took the vocabulary produced by sentencepiece, which lists tokens in order of merging (which is based on their frequency), and plotted this ordering vs. the length of the detected motif. We selected three example points visually for length at a given ranking. These sequence motifs were interrogated with BLAST24 using the NCBI web tool, and additionally with the BLAST tool on the iGEM Registry of Standard Biological Parts (http://parts.igem.org/sequencing/index.cgi). For each motif, a collection of results was compared with their plasmid maps to place the motif sequences within a plasmid component. We found, as numbered in the figure, motif 1 repeated twice in the SV40 promoter, motif 2 repeated twice in the CMV promoter, and motif 3 occurring slightly downstream of the pMB1 origin of replication.
Training deep recurrent neural networks
We consider a family of models based on the LSTM recurrent neural network and a DNA motif-based encoding. We formulated an architecture and hyperparameter search space based on prior experience with these models. In particular, we searched over categorical options for learning rate, batch size, bidirectionality, LSTM hidden size, LSTM number of layers, number of fully connected layers, extent of dropout, class of activations, maximum length of the input sequence, and word embedding dimension. We further searched over parameters of the motif-based encoding, including whether it was Unigram- or BPE-based and the vocabulary size. Configurations from this categorical search space were sampled and evaluated by the Asynchronous Hyperband52 algorithm, which evaluates a population of configurations in tandem and periodically halts poorly performing models. Thus, computational resources are more efficiently allocated to the better-performing models at each step in training. Our LSTM architecture was optimized with Adam using categorical cross-entropy loss in PyTorch (https://pytorch.org/), and hyperparameter tuning with Asynchronous Hyperband used ray tune (https://ray.readthedocs.io/en/latest/tune.html).
In early exploratory experiments, we found that including metadata in the initial training process caused a rapid increase but quick plateau in the performance of the Hyperband population. We noticed that these models usually had small LSTM components, suggesting that they were ignoring sequence information. This led us to hypothesize that adding metadata early in training created an attractive local minimum for the tuning process which neglected sequence, exploiting the fact that Hyperband penalizes slow-improving algorithms.
We therefore adopted a progressive training policy as follows. First, 250 configurations of the search space described above were evaluated with ray tune (Supplementary Figs. 2 and 3) over the course of ~1 week on an AWS p2.8xlarge machine with K80 GPUs. The best-performing model was selected for a stable and steadily decreasing loss curve (Supplementary Fig. 3) after 300 Hyperband steps, each of 300 weight updates (~90,000 updates total). This model configuration was saved and trained from scratch to ~200,000 weight updates, selected based on an early-stopping heuristic on the validation loss. Next, this model was truncated to the pre-logit layer, and metadata was concatenated with the output of this sequence-only model (Supplementary Fig. 1), followed by one hidden layer and the logit layer. This was trained further, with the LSTM sequence model frozen, until the validation loss plateaued. Finally, the full model was jointly trained until the validation loss plateaued. The effect of this approach was to prevent the model from converging on a metadata-focused local optimum, while avoiding overfitting on the training set (facilitated by training only components of the model at a time).
After fully training and finalizing results using our original random split (see above), three additional random splits were performed and training was repeated as before using different random seeds but the same hyperparameters as were found in the first hyperband search. We found that even without hyperparameter tuning on each newly split dataset, the results for the full and sequence-only deteRNNt models were consistent with our earlier results (Supplementary Fig. 5).
Calibration analysis
We followed the methodology of Guo et al.32. Predictions on the test set were binned into 15 bins, $B_m$. Calibration, the agreement between a model's confidence and the accuracy of its predictions, can be measured by two metrics: the Expected Calibration Error (ECE), measuring the average difference between prediction confidence and ground-truth accuracy, and the Maximum Calibration Error (MCE), measuring the maximum such difference. They are defined below (n is the number of samples).
$$\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n}\,\bigl|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)\bigr|,$$
$$\mathrm{MCE} = \max_{m \in \{1,\ldots,M\}} \bigl|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)\bigr|.$$
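A direct translation of these definitions into code might look like the following minimal sketch, taking per-example top-1 confidences and 0/1 correctness indicators with M = 15 bins:

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=15):
    """Compute ECE and MCE from per-example confidences and 0/1 correctness."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece, mce = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        acc = correct[in_bin].mean()         # accuracy within the bin
        conf = confidences[in_bin].mean()    # average confidence in the bin
        gap = abs(acc - conf)
        ece += (in_bin.sum() / n) * gap      # |B_m|/n weighted average
        mce = max(mce, gap)                  # worst-case bin deviation
    return ece, mce
```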
Temperature scaling
Temperature scaling adjusts the logits (pre-softmax output) of a multiclass classifier of c classes by dividing them by a single scalar value called the temperature. For a categorical prediction q for a single example, logits $\mathbf{z} \in \mathbb{R}^{c}$ predicted by the network, scalar temperature T, and the softmax function $\sigma_{\mathrm{SM}}$, we have the temperature-scaled prediction:
$$\hat{q} = \max_{k}\, \sigma_{\mathrm{SM}}\!\left(\frac{\mathbf{z}}{T}\right)^{(k)}.$$
The temperature is learned on the validation set after the model is fully trained. We used PyTorch and gradient descent with the Adam optimizer to fit the temperature value. Subsequently, all of the logits predicted for the test set by the original model were divided by the temperature and softmaxed to get confidences, as shown in Fig. 2d. Because dividing the logits by a positive temperature does not change which class receives the maximum probability, accuracy should be unaffected; we therefore concluded that the slight difference in the accuracy of the calibrated model was due to floating-point errors.
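A minimal PyTorch sketch of the fitting step is shown below. It follows the procedure described above (gradient descent with Adam on the validation negative log-likelihood); parameterizing the temperature through its logarithm to keep it positive is our own choice, and all names are illustrative:

```python
import torch

def fit_temperature(val_logits, val_labels, steps=500, lr=0.01):
    """Learn a scalar temperature on held-out validation logits.

    val_logits: (n, c) tensor of pre-softmax outputs from the trained model.
    val_labels: (n,) tensor of integer class labels.
    """
    # Parameterize log(T) so the temperature stays positive during training;
    # log_t = 0 corresponds to the unscaled default T = 1.
    log_t = torch.zeros(1, requires_grad=True)
    optimizer = torch.optim.Adam([log_t], lr=lr)
    nll = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        # Minimizing NLL = maximizing log probability of the held-out data.
        loss = nll(val_logits / log_t.exp(), val_labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

# At test time, divide logits by the learned temperature before softmax:
# probs = torch.softmax(test_logits / T, dim=1)
```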
Random Forest models
We used the scikit-learn package (https://scikit-learn.org/stable/index.html) implementation of the Random Forest classifier. Unless otherwise specified, we used 1000 estimators, 0.5 as the maximum proportion of features to consider while searching for the best split, and class weights inversely proportional to class frequency.
For the Random Forest analysis, we represented the sequences as frequencies of 1-, 2-, 3-, and 4-grams. We constructed the n-gram vocabulary using the training set, and then only used the frequencies of n-grams included in the vocabulary to construct features (n-gram frequencies) for the validation and test sets. Where specified, we concatenated one-hot-encoded metadata (phenotypic information) to these n-gram frequencies. We transformed the n-gram features using TF-IDF weighting53 before using them in the Random Forest models.
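A compact sketch of this pipeline with scikit-learn is shown below; TfidfVectorizer combines the n-gram counting and TF-IDF steps described above into one transformer, and the example sequences, labels, and n_jobs setting are illustrative:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Character n-grams of length 1-4 with TF-IDF weighting; the vocabulary is
# learned on the training set only and reused for validation/test.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 4), lowercase=False),
    RandomForestClassifier(
        n_estimators=1000,
        max_features=0.5,          # fraction of features tried per split
        class_weight="balanced",   # inverse class-frequency weights
        n_jobs=-1,
    ),
)

train_seqs = ["ATGGCGTAA", "GGGCCCATG"]   # placeholder sequences
train_labs = ["lab_A", "lab_B"]           # placeholder labels
model.fit(train_seqs, train_labs)
preds = model.predict(["ATGGCGTAT"])
```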
Nation-of-origin data
Addgene has lab country information for many depositing labs (https://www.addgene.org/browse/pi/). For labs where this was missing, publication links were followed to affiliation addresses, and the country of the lab was manually read off the address and cross-checked with a web search. Labs with missing or conflicting country information, and labs from very rare countries, were dropped as classes, and all the corresponding plasmids were dropped from all three training split sets. No reshuffling of the train, validation, and test data occurred.
Lineage network analysis
Lab, country, and ancestor-descendant linkage counts were obtained from the training set and plotted as described above, rank-ordered where specified. Networks were analyzed for size and graph diameter with NetworkX. Lab lineages were obtained from plasmid lineage data by considering the presence of a link from plasmid X from Lab A to plasmid Y from Lab B to be a directed edge from Lab A to Lab B. Parallel edges were not allowed and weight was not considered. Self-edges were disregarded. The country lineage network was constructed from the lab lineage network by considering a connection between lab X in country A and lab Y in country B to be a directed edge from country A to country B. This time, weights were given as the number of lab-to-lab connections. Parallel edges were not allowed, but self-edges were. For simplicity, the arrows of the directed graph are not shown in the NetworkX visualization. A version of Google PageRank54 was computed on the directed, weighted graph with NetworkX.
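A sketch of the country-level computation with NetworkX, using an illustrative edge list rather than the paper's data:

```python
import networkx as nx

# Directed country-level graph; edge weights count unique lab-to-lab links.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("USA", "Japan", 12),     # illustrative counts, not the paper's data
    ("Japan", "USA", 9),
    ("USA", "Denmark", 7),
    ("Denmark", "Austria", 3),
    ("Austria", "USA", 4),
    ("USA", "USA", 40),       # self-edges were allowed at the country level
])

# Weighted PageRank: a node's score approximates the fraction of time a
# random walker following the weighted links would spend in that country.
scores = nx.pagerank(G, weight="weight")
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```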
Ancestor lab prediction
Due to the earlier biased train-validation-test split which deliberately segregated lineage networks into one of the three sets to minimize ancestry relationships that could lead to overfitting, we reconsidered the dataset for ancestor lab prediction. By definition, an ancestor plasmid and all its descendants will always be in the same set. So, if the ancestor is in the validation set, none of its descendants is available for training.
Therefore, we first parsed the most recent ancestor for each plasmid from the lineage data and assigned each plasmid that ancestor's lab. We then randomly 80-10-10 re-split the data.
We recognize that there is some potential for meta-overfitting in performing this reshuffle, even though ancestor lab prediction is a distinct task from the others considered so far. However, as this analysis used a simple model intended to show tractability rather than peak performance, we decided that obtaining the right train-test split was worth the risk.
Interpreting the deteRNNt model
To visualize the hidden states of the model, we first performed inference of the deteRNNt model on the validation set and extracted the activations of the last hidden layer (just prior to the layer which outputs logits). These hidden states were 1000-dimensional. To visualize them, we projected them into two and three dimensions using tSNE37 in scikit-learn with default hyperparameters. We colored each point by P(True Lab) as assigned by the model for that example. Note that throughout this section when we refer to model probabilities, we mean the probabilities given by the model after temperature scaling calibration has been applied. The three-dimensional interactive plot was made using plotly and labeled with each plasmid's true lab-of-origin.
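A sketch of the projection step, assuming the hidden activations have already been extracted (the placeholder array below stands in for the real activations):

```python
import numpy as np
from sklearn.manifold import TSNE

# hiddens: (n_plasmids, 1000) array of pre-logit activations extracted from
# deteRNNt on the validation set (here a random placeholder stand-in).
hiddens = np.random.rand(500, 1000)

# Project to two dimensions with default t-SNE hyperparameters.
embedding = TSNE(n_components=2).fit_transform(hiddens)

# Each 2D point would then be colored by P(True Lab) for plotting.
```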
For the analyses of sequence motifs (Fig. 5b–e), we use the deteRNNt sequence-only model, as phenotype information was not available for all plasmids. For the scanning-N ablation analysis (Fig. 5b, c), we generated all possible sequences with a window of 10 Ns substituted and performed standard deteRNNt inference to predict logits. We then generated per-position predicted logits by selecting all the sequences in which a given position was mutated to N, and averaging their logits together. We then applied softmax over each of these position logits to generate per-position predicted probabilities, which were indexed by the relevant labs to visualize.
For the scanning subsequence analysis (Fig. 5d, e), we made all possible subsequences of given length K and predicted logits for each, as above. Note that these are subsequences in isolation, as if they were sequences from a new plasmid, rather than padded with Ns or similar. For each position in the full sequence, we selected all the subsequences which include that position and averaged together their predicted logits, softmaxing to visualize probabilities as above.
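The per-position logit averaging shared by both analyses can be sketched as follows for the scanning-N case; predict_logits is an assumed wrapper around deteRNNt inference (our naming, not a published API), and the real pipeline batches these calls:

```python
import numpy as np

def scanning_n_profile(seq, predict_logits, window=10):
    """Per-position class probabilities under a scanning 10-N ablation.

    predict_logits(sequence) -> 1D numpy array of class logits (assumed
    wrapper around deteRNNt inference).
    """
    L = len(seq)
    n_variants = L - window + 1
    # Variant i masks positions i .. i+window-1 with N.
    logits = np.stack([
        predict_logits(seq[:i] + "N" * window + seq[i + window:])
        for i in range(n_variants)
    ])
    profile = []
    for pos in range(L):
        # All variants whose masked window covers this position.
        lo = max(0, pos - window + 1)
        hi = min(n_variants, pos + 1)
        mean_logits = logits[lo:hi].mean(axis=0)
        # Softmax over averaged logits gives per-position probabilities.
        e = np.exp(mean_logits - mean_logits.max())
        profile.append(e / e.sum())
    return np.array(profile)
```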
We custom-designed a gene-drive vector using the Akbari germline-Cas9 plasmid AAEL010097-Cas9 (Addgene #100707) as a baseline. We modified it by removing the eGFP sequence attached to Cas9 and replacing it with the dsRed1 sequence from within the same plasmid (removing the Opie2 promoter in the process). We then identified two Cas9 guide RNAs against the Aedes aegypti AeAct-4 gene55 (Genbank Accession Number: AY531223), designed to remove most of the coding region, that are predicted to have high activity using CHOPCHOP (https://chopchop.cbu.uib.no)56. These guides were placed downstream of the Akbari-identified AeU6a and AeU6c promoters (Addgene #117221 and #117223, respectively)57. We also included somArchon-GFP from pAAV-Syn-SomArchon (Addgene #126941, deposited by Edward Boyden's group) as a non-Akbari-derived sequence. The Cas9-dsRed1_guide cassette_somArchon-GFP payload was flanked by 500 bp homology arms (upstream of the 5′ guide and downstream of the 3′ guide).
To analyze the results of our ablation and subsequence analyses, we indexed out the positions in the sequence with the most extreme changes in predicted probability and manually examined these regions in Benchling. We performed automated annotation, used BLAST58, and searched various repositories for the highest-ranked fragments in order to identify restriction sites, primer sites, and other features.
The comparison with BLAST was performed using the blastn command-line tool from NCBI58. At a high level, the BLAST baseline can be considered a nearest-neighbor algorithm, where the BLAST e-value is used to define neighbors in the training set. For each of the lab-of-origin and nation-of-origin prediction tasks, a fasta file of plasmid sequences from the training set was formatted as a BLAST database. Then, each test set was blasted against this database with an e-value threshold of 10. The resulting training-set hits were sorted by e-value, from lowest to highest, and used to look up the training-set labels for each sequence. For top 1 accuracy, the class of the lowest e-value hit was compared to the true class. For top 10 accuracy, an example was marked correct if one of the labels of the ten lowest e-value hits, after dropping duplicate hits to the same lab, corresponded to the correct test-set label. This dropping of duplicates ensured that the BLAST baseline was permitted up to 10 unique lab "guesses", which is necessary because occasionally the top-k ranked sequence results all have the same lab label. To perform the nation-of-origin and U.S. vs. foreign comparisons, the same BLAST results were filtered to drop the U.S. or binarized so the U.S. was the positive class, respectively.
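For illustration, the database construction, search, and top-k label lookup might be scripted as below; the flags follow the NCBI BLAST+ manual, while the file names and the seq_to_lab mapping are placeholders, and the BLAST+ binaries are assumed to be installed:

```python
import subprocess
from collections import OrderedDict

# Build the training-set database and run blastn with tabular output.
subprocess.run(["makeblastdb", "-in", "train.fasta", "-dbtype", "nucl",
                "-out", "train_db"], check=True)
subprocess.run(["blastn", "-query", "test.fasta", "-db", "train_db",
                "-evalue", "10", "-outfmt", "6", "-out", "hits.tsv"],
               check=True)

def top_k_labs(hits_path, seq_to_lab, query_id, k=10):
    """Return up to k unique lab labels, ordered by ascending e-value.

    seq_to_lab: dict mapping training-set sequence IDs to lab labels.
    """
    hits = []
    with open(hits_path) as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if fields[0] != query_id:
                continue
            # In outfmt 6, column 11 (index 10) is the e-value.
            hits.append((float(fields[10]), fields[1]))
    labs = OrderedDict()
    for _, subject in sorted(hits):
        labs.setdefault(seq_to_lab[subject], None)  # drop duplicate labs
        if len(labs) == k:
            break
    return list(labs)
```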
The comparison with the Convolutional Neural Network (CNN) method copied the architecture and hyperparameters reported in Nielsen & Voigt (2018)21. The model was implemented in PyTorch. We trained for 100 epochs, as reported in previous work21, on an Nvidia K80 GPU using the Amazon Web Services cloud p2.xlarge instance. Training converged on a validation score of 57.1% and appeared stable (Supplementary Fig. 6). After training for this duration, we saved the model and evaluated performance on the held-out validation and test sets. Test-set performance was 50.2%, which was very near the previously reported accuracy of 48%21, leading us to conclude that this model's performance was reproducible and robust to increases in both the number of labs and the number of plasmids in our dataset compared to Nielsen & Voigt (2018) (827 vs. 1314 labs, 36,764 vs. 63,017 plasmids)21. In other words, this replication of the architecture, hyperparameters, and training procedure of previous work, with everything held constant to the best of our knowledge except the dataset, suggests that the effect of having more examples (typically associated with an easier task) and more labs to distinguish between (typically associated with a more difficult task) approximately cancels out, or perhaps nets to a very weak (2%) difference in the difficulty of our dataset.
In lab-of-origin, U.S. vs. foreign, nation-of-origin, and ancestor lab prediction, we show comparisons with a baseline based on guessing the most abundant class, or classes (in the case of top 10 accuracy) from the training set. We also show the frequency of success based on uniformly guessing between the available labels (1/number of categories).
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
The sequence of pCI-YFP can be obtained from Genbank (JQ394803.1). pAAV-Syn-SomArchon is available on Addgene (Addgene #126941). AAEL010097-Cas9 is available on Addgene (Addgene #100707). U6a and U6c can be obtained from Addgene (Addgene #117221 and #117223, respectively). The sequence of the custom Akbari/Boyden gene drive (Fig. 5e) is available in the github repository associated with this paper at https://github.com/altLabs/attrib/blob/master/sequences/custom_drive.fasta. All other data are available on request.
Code is available in the following github repository: https://github.com/altLabs/attrib.
Engelberg, S. New evidence adds doubt to FBI's case against anthrax suspect—ProPublica. ProPublica https://www.propublica.org/article/new-evidence-disputes-case-against-bruce-e-ivins (2011).
Skane, W. Science alone does not establish source of anthrax used in 2001 mailings. http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=13098 (2011).
Cummings, C. A. & Relman, D. A. Microbial forensics-'cross-examining pathogens'. Science 296, 1976–1979 (2002).
Budowle, B. et al. Building microbial forensics as a response to bioterrorism. Science 301, 1852–1853 (2003).
Shane, S. & Wade, N. Pressure grows for F.B.I.'s anthrax evidence. NY Times (2008).
Cameron, E., Katz, R., Konyndyk, J. & Nalabandian, M. A spreading plague: lessons and recommendations for responding to a deliberate biological event. https://media.nti.org/documents/NTI_Paper_A_Spreading_Plague_FINAL_061119.pdf (2019).
Budowle, B. Genetics and attribution issues that confront the microbial forensics field. Forensic Sci. Int. 146(Suppl), S185–S188 (2004).
Markon, J. Justice Dept. takes on itself in probe of 2001 anthrax attacks. Washington Post https://www.washingtonpost.com/politics/justice-dept-takes-on-itself-in-probe-of-2001-anthrax-attacks/2012/01/05/gIQAhGLlVQ_story.html (2012).
National Academies of Sciences, Engineering, and Medicine, Division on Earth and Life Studies, Board on Life Sciences, Board on Chemical Sciences and Technology & Committee on Strategies for Identifying and Addressing Potential Biodefense Vulnerabilities Posed by Synthetic Biology. Biodefense in the Age of Synthetic Biology. (National Academies Press (US), 2019).
Lee, J. H. et al. Fluorescent in situ sequencing (FISSEQ) of RNA for gene expression profiling in intact cells and tissues. Nat. Protoc. 10, 442–458 (2015).
Fuller, C. W. et al. Real-time single-molecule electronic DNA sequencing by synthesis using polymer-tagged nucleotides on a nanopore array. Proc. Natl Acad. Sci. USA 113, 5233–5238 (2016).
Shendure, J. et al. DNA sequencing at 40: past, present and future. Nature 550, 345–353 (2017).
Ritchie, M. D., Holzinger, E. R., Li, R., Pendergrass, S. A. & Kim, D. Methods of integrating data to uncover genotype–phenotype interactions. Nat. Rev. Genet. 16, 85 (2015).
Biswas, S. et al. Toward machine-guided design of proteins. Preprint at https://www.biorxiv.org/content/10.1101/337154v1, https://doi.org/10.1101/337154 (2018).
Alley, E. C., Khimulya, G., Biswas, S., AlQuraishi, M. & Church, G. M. Unified rational protein engineering with sequence-based deep representation learning. Nat. Methods 16, 1315–1322 (2019).
AlQuraishi, M. End-to-end differentiable learning of protein structure. Cell Syst. 8, 292–301.e3 (2019).
Avsec, Ž. et al. The Kipoi repository accelerates community exchange and reuse of predictive models for genomics. Nat. Biotechnol. 37, 592–600 (2019).
Riesselman, A. J., Ingraham, J. B. & Marks, D. S. Deep generative models of genetic variation capture the effects of mutations. Nat. Methods 15, 816–822 (2018).
Quang, D. & Xie, X. DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences. Nucleic Acids Res. 44, e107–e107 (2016).
Diggans, J. & Leproust, E. Next steps for access to safe, secure DNA synthesis. Front. Bioeng. Biotechnol. 7, 86 (2019).
Nielsen, A. A. K. & Voigt, C. A. Deep learning to predict the lab-of-origin of engineered DNA. Nat. Commun. 9, 3135 (2018).
Kamens, J. The Addgene repository: an international nonprofit plasmid and data resource. Nucleic Acids Res. 43, D1152–D1157 (2014).
Eddy, S. R. Profile hidden Markov models. Bioinformatics 14, 755–763 (1998).
Altschul, S. F. et al. Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res. 25, 3389–3402 (1997).
Sennrich, R., Haddow, B. & Birch, A. Neural machine translation of rare words with subword units. Preprint at https://arxiv.org/abs/1508.07909 (2015).
Shibata, Y. et al. Speeding up pattern matching by text compression. In Lecture Notes in Computer Science, Vol. 1767 (eds Bongiovanni, G., Petreschi, R. & Gambosi, G.) 306–315 (Springer, Berlin, Heidelberg, 2000). https://doi.org/10.1007/3-540-46521-9_25.
Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Computation 9, 1735–1780 (1997).
Hansen, L. K. & Salamon, P. Neural network ensembles. IEEE Trans. Pattern Anal. Mach. Intell. 12, 993–1001 (1990).
Altschul, S. F., Gish, W., Miller, W., Myers, E. W. & Lipman, D. J. Basic local alignment search tool. J. Mol. Biol. 215, 403–410 (1990).
Amodei, D. et al. Concrete problems in AI safety. Preprint at: https://arxiv.org/abs/1606.06565 (2016).
Doshi-Velez, F. & Kim, B. Towards a rigorous science of interpretable machine learning. Preprint at https://arxiv.org/abs/1702.08608 (2017).
Guo, C., Pleiss, G., Sun, Y. & Weinberger, K. Q. On calibration of modern neural networks. Preprint at https://arxiv.org/abs/1706.04599 (2017).
Shrikumar, A. & Kundaje, A. Calibration with bias-corrected temperature scaling improves domain adaptation under label shift in modern neural networks. Preprint at https://arxiv.org/abs/1901.06852v1 (2019).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. Preprint at https://arxiv.org/abs/1512.03385 (2015).
Liaw, A. & Wiener, M. Classification and regression by randomForest. R News 2, 18–22 (2002).
Breiman, L. Random forests. Mach. Learn 45, 5–32 (2001).
van der Maaten, L. & Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008).
Piatkevich, K. D. et al. Population imaging of neural activity in awake behaving mice. Nature 574, 413–417 (2019).
Chow, B. Y. et al. High-performance genetically targetable optical neural silencing by light-driven proton pumps. Nature 463, 98–102 (2010).
Hochbaum, D. R. et al. All-optical electrophysiology in mammalian neurons using engineered microbial rhodopsins. Nat. Methods 11, 825–833 (2014).
Lipsitch, M. & Bloom, B. R. Rethinking biosafety in research on potential pandemic pathogens. MBio 3, e00360–12 (2012).
Spearman, C. M., Quigley, M. J., Quigley, M. R. & Wilberger, J. E. Survey of the h index for all of academic neurosurgery: another power-law phenomenon?: clinical article. J. Neurosurg. 113, 929–933 (2010).
Brzezinski, M. Power laws in citation distributions: evidence from Scopus. Scientometrics 103, 213 (2015).
Quigley, M. R., Holliday, E. B., Fuller, C. D., Choi, M. & Thomas, C. R. Distribution of the h-Index in radiation oncology conforms to a variation of power law: implications for assessing academic productivity. J. Cancer Educ. 27, 463–466 (2012).
Ovadia, Y. et al. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. Advances in Neural Information Processing Systems. 13991–14002 (2019).
Benson, S., Lennard, C., Maynard, P. & Roux, C. Forensic applications of isotope ratio mass spectrometry-a review. Forensic Sci. Int. 157, 1–22 (2006).
Kreuzer-Martin, H. W. & Jarman, K. H. Stable isotope ratios and forensic analysis of microorganisms. Appl. Environ. Microbiol. 73, 3896–3908 (2007).
West, J. B., Bowen, G. J., Cerling, T. E. & Ehleringer, J. R. Stable isotopes as one of nature's ecological recorders. Trends Ecol. Evol. 21, 408–414 (2006).
Lee, S.-H. DNA sequence watermarking based on random circular angle. Digit. Signal Process. 25, 173–189 (2014).
Heider, D. & Barnekow, A. DNA-based watermarks using the DNA-Crypt algorithm. BMC Bioinforma. 8, 176 (2007).
Kudo, T. Subword regularization: improving neural network translation models with multiple subword candidates. Preprint at https://arxiv.org/abs/1804.10959 (2018).
Li, L. et al. Massively parallel hyperparameter tuning. Preprint at https://arxiv.org/abs/1810.05934v1 (2018).
Ramos, J. E. Using TF-IDF to determine word relevance in document queries. Proceedings of the first instructional conference on machine learning. 242, 133–142 (2003).
ADS Google Scholar
Page, L., Brin, S., Motwani, R. & Winograd, T. The pagerank citation ranking: bringing order to the web.Stanford InfoLab (1999).
Muñoz, D., Jimenez, A., Marinotti, O. & James, A. A. The AeAct-4 gene is expressed in the developing flight muscles of female Aedes aegypti. Insect Mol. Biol. 13, 563–568 (2004).
Labun, K. et al. CHOPCHOP v3: expanding the CRISPR web toolbox beyond genome editing. Nucleic Acids Res. 47, W171–W174 (2019).
Li, M. et al. Development of a confinable gene drive system in the human disease vector Aedes aegypti. https://doi.org/10.7554/eLife.51701 (2020).
Quick start. in BLAST® Command Line Applications User Manual [Internet] (National Center for Biotechnology Information (US), 2008).
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S. & Dean, J. Distributed representations of words and phrases and their compositionality. in Advances in Neural Information Processing Systems 3111–3119 (2013).
We thank Grigory Khimulya, Gregory Lewis, Michael Montague, Andrew Snyder-Beattie, Surojit Biswas, Mohammed AlQuraishi, Gregory Koblentz, and Gabriella Deich for valuable feedback and discussion. We also thank Christopher Voigt and Alec Nielsen for their critical commentary. We thank Addgene, particularly Jason Niehaus and Joanne Kamens, for data access and thoughtful feedback. E.C.A. was supported by the Center for Effective Altruism and the Open Philanthropy Project. A.B.L. was supported by NIH training grant 2T32HG002295-16. Compute for this project was partially provided by the generosity of Lambda Labs, Inc.
Alt. Technology Labs, Inc., Berkeley, CA, USA
Ethan C. Alley, Jacob Swett, George M. Church & Kevin M. Esvelt
Media Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
Ethan C. Alley, Rey Edison, Stephen E. Von Stetina & Kevin M. Esvelt
Department of Genetics, Harvard Medical School, Boston, MA, USA
Ethan C. Alley & George M. Church
Duke University, Durham, NC, USA
Miles Turpin
Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
Andrew Bo Liu
Unaffiliated, Washington, DC, 20009, USA
Taylor Kulp-McDowall
E.C.A. conceived the study and designed the analyses. M.T. processed and cleaned the data. T.K. split the data. M.T. managed data and software for part of Random Forest and network analysis. A.B.L. managed data and software for part of nation-state and ancestor analysis. J.S. managed part of the CNN baseline. E.C.A. managed the data and software for all other analyses. R.E. and S.E.V.S. designed the gene-drive plasmid and performed sequence analysis to interpret model predictions. G.M.C. and K.M.E. supervised the project. E.C.A. wrote the paper with help from all authors.
Correspondence to Ethan C. Alley.
E.C.A. is President and J.S. is Co-Founder of Alt. Technology Labs (altLabs), a not-for-profit organization hosting an open data science attribution prize. E.C.A. and K.M.E. are board members of altLabs, and G.M.C. is a member of the altLabs Scientific Advisory Board. A full list of G.M.C.'s tech transfer, advisory roles, and funding sources can be found on the lab's website http://arep.med.harvard.edu/gmc/tech.html. The remaining authors declare no competing interests.
Peer review information Nature Communications thanks Megan Palmer and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Alley, E.C., Turpin, M., Liu, A.B. et al. A machine learning toolkit for genetic engineering attribution to facilitate biosecurity. Nat Commun 11, 6293 (2020). https://doi.org/10.1038/s41467-020-19612-0
Power for Linear Regression
Effect Size
Number of predictors
Significance Level (alpha)
Power curve
Power calculation / sample size planning for linear regression based on F test.
Among sample size, number of predictors, effect size, significance level, and power, one and only one can be left blank.
Provide the number of observations per group. Multiple sample sizes can be provided in two ways. First, multiple sample sizes can be supplied separated by white spaces, e.g., 100 150 200 will calculate power for the three sample sizes. A sequence of sample sizes can be generated using the method s:e:i with s denoting the starting sample size, e as ending sample size, and i as the interval. For example, 100:150:10 will generate a sequence 100 110 120 130 140 150.
By default, the sample size is 100.
The effect size to be used. Multiple effect sizes or a sequence of effect sizes can be supplied using the same method for sample size. By default, the value is 0.15.
f2 is the effect size measure as in Cohen (1988) that evaluates the impact of a set of predictors on an outcome. \[ f^{2}=\frac{R^{2}}{1-R^{2}} \] where $R^{2}$ is population $R^{2}$ of regression analysis.
When evaluating the impact of one set of predictors above and beyond a second set of predictors (or covariates), the effect size is \[ f^{2}=\frac{R_{AB}^{2}-R_{A}^{2}}{1-R_{AB}^{2}} \] where
$R_{A}^{2}$ is variance accounted for by variable set A
$R_{AB}^{2}$ is variance accounted for by variable set A and variable set B together.
Cohen suggests $f^{2}$ values of 0.02, 0.15, and 0.35 represent small, medium, and large effect sizes.
For a regression model with $p$ predictors, the numerator df is $u=p$ and the denominator df is $v=n-p-1$. If we compare two regression models, the first with $p_1$ predictors (smaller model) and the second with $p_2$ predictors (larger model), then $u=p_2-p_1$ and $v=n-p_2-1$. Note that $f^{2}$ is often calculated from R-squared. For example, if $R^{2}$ is .4, then $f^{2}=.67$.
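To make the calculation concrete, the power of the F test can be computed from the noncentral F distribution. The following is a minimal Python sketch, assuming SciPy is available and using Cohen's noncentrality convention $\lambda = f^{2}(u+v+1)$; the function name and the printed example are illustrative and not part of this tool.

```python
from scipy.stats import f, ncf

def regression_power(n, p, f2=0.15, alpha=0.05):
    """Power of the overall F test for a regression with p predictors."""
    u = p                            # numerator df
    v = n - p - 1                    # denominator df
    ncp = f2 * (u + v + 1)           # noncentrality parameter (Cohen, 1988)
    f_crit = f.ppf(1 - alpha, u, v)  # critical value under the null
    return ncf.sf(f_crit, u, v, ncp)

print(regression_power(n=100, p=3))  # roughly 0.9 for the defaults above
```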
By clicking the show button, a table is shown for calculating the effect size and the number of predictors based on $R^2$.
The number of predictors for the reduced model (0 if for testing all predictors in a model) and full model.
The significance level (Type I error rate) for power calculation, with the default 0.05.
The power of the test.
Whether to generate the power curve.
A note (fewer than 200 characters) can be added to give basic information on the analysis.
Can the twin prime problem be solved with a single use of a halting oracle?
It occurred to me that if it were possible to determine whether a given program halts, that could be used to answer the twin prime conjecture:

A) Write a program which takes input n and then counts upward until it has found n pairs of twin primes.
B) Write a program which, for any input n, returns true if A halts on n and false otherwise.
C) Write a program which counts upward, running B on every n, until B returns false.
D) If C halts, there are finitely many twin primes; otherwise there are infinitely many.
I was wondering if there is a way to do this without nesting halting problems... i.e., if you only get one chance to ask whether a program halts, is that sufficient to answer the twin prime conjecture?
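For concreteness, here is a minimal Python sketch of program A (programs B–D need the halting oracle and are of course not implementable); the naive trial-division primality test is chosen only to keep the sketch self-contained:

```python
def is_prime(m):
    """Naive trial-division primality test."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def program_A(n):
    """Halts exactly when n twin prime pairs (k, k+2) have been found."""
    count, k = 0, 3
    while count < n:
        if is_prime(k) and is_prime(k + 2):
            count += 1
        k += 1
```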
prime-numbers computability-theory
François G. Dorais♦
dspyz
$\begingroup$ The title doesn't match the question. Perhaps you could change the title to something like, "Can the twin prime problem be solved with a single use of a halting oracle?". $\endgroup$ – S. Carnahan♦ Jul 23 '11 at 6:16
$\begingroup$ .. and the answer would be yes, to S. Carnahan's rewrite, if you get the code number for the program correct. Gerhard "Ask Me About System Design" Paseman, 2011.07.22 $\endgroup$ – Gerhard Paseman Jul 23 '11 at 6:24
$\begingroup$ I feel like it is worth remarking that there are reasonable strengthinings of the twin prime conjecture which are $\Pi_1$. For example, in the notation of bit.ly/n8AyP4, the statement that $10^{-100}<\frac{2C_2n}{\pi_2(n)\ln(n)^2}<10^{100}$ for all $n$. $\endgroup$ – Kevin Ventullo Jul 24 '11 at 0:38
Note that your program is actually using a lot more than the halting oracle $0'$. It is using $0''$ — the halting oracle for machines using the $0'$ oracle. The oracle $0''$ is capable of deciding any $\Pi_2$ statement (like the twin prime conjecture) with a single definite query. Let's look at the twin prime conjecture in further detail.
For any fixed $N$, the $\Pi_1$ statement "there are no twin prime pairs after $N$" can be resolved by a single query to $0'$. Thus, if there are only finitely many twin primes, then there is a single query to $0'$ that will let us know that — the catch is that we don't know which query will give us the answer. Note that we can still get by with finitely many queries to $0'$ by trying all natural numbers $N$ in order until we get a positive answer to the query "there are no twin prime pairs after $N$" (assuming the twin prime conjecture is actually false).
To say "there are infinitely many twin primes" is a $\Pi_2$ statement. In general, one cannot positively decide a $\Pi_2$ statement by a single query to $0'$. However, the twin prime conjecture is a very specific $\Pi_2$ statement, so these general case arguments do not necessarily apply.
For example, it is conceivable that the existence of infinitely many twin primes is in fact equivalent to the existence of a magic twinmaker, which is a certain $\Pi_1$ property of a natural number. In this case, we could resolve the twin prime conjecture by making a single query to $0'$: we could ask whether "there are no twin prime pairs after $N$" for some suitably chosen $N$, or we could ask whether "$N$ is a magic twinmaker" for some suitably chosen $N$. Again, the catch is that we don't know $N$ and, moreover, we don't even know which of the two questions to ask!
However, the situation is not so bad, we could still get by with only finitely many queries to $0'$ without making lucky guesses. We go through all the natural numbers $N$ in order, in each case asking whether "there are no twin primes after $N$" or whether "$N$ is a magic twinmaker" until we get a positive answer. Since one of the two cases must occur for some $N$, we will eventually get a positive answer.
Unfortunately, this magic twinmaker concept is completely made up for the purpose of illustration. It could be that the twin prime conjecture is a generic $\Pi_2$ statement, in which case we cannot expect to decide it positively with a single query to $0'$.
François G. Dorais♦
I have no disagreement with the answer of François Dorais, but I have a different take on the problem.
Let $S$ be any statement of number theory, such as the Twin prime conjecture, Goldbach's conjecture, etc. of any quantifier complexity.
Let us say that $S$ is $ZF$-decidable if $ZF$ either proves $S$, or $ZF$ proves the negation of $S$ (here $ZF$ is Zermelo-Fraenkel set theory).
Proposition. Under the assumption that $ZF$ is arithmetically sound (i.e., it proves no false arithmetical sentence), there is a recursive function $f$ such that the truth of any $ZF$-decidable statement $S$ of number theory can be determined by one query "Is $f(S) \in K$?" (where $S$ is identified with its Gödel number, and $K$ is the halting oracle).
The above proposition follows immediately from the well-known fact that $K$ is a complete r.e. set; i.e., every recursively enumerable set $X$ is Turing-reducible to $K$; indeed, given such an $X$ there is even a 1-1 recursive function $f$ such that $n\in X$ iff $f(n) \in K$. The $X$ at work here is the set of (Gödel numbers of) theorems of $ZF$.
Therefore, if the twin prime conjecture is decidable within $ZF$, and $ZF$ is arithmetically sound, then its truth-value can be determined by a single query to the halting oracle.
Two closing comments are in order:
(1) It is well-known that if a statement $S$ of number theory is $ZFC$-decidable (where $ZFC$ is $ZF$ plus the axiom of choice), then $S$ is $ZF$-decidable. The proof is nontrivial and makes a detour through Gödel's constructible universe, and absoluteness considerations (this is due to Kreisel; according to McIntyre, it was surprisingly missed by Gödel himself).
(2) There is nothing special about $ZF$ here, the above proposition holds for any axiomatic system $T$ with a recursive set of axioms, including those weaker than $ZF$, such as $PA$ (Peano arithmetic) or stronger than $ZF$, e.g., $ZF$ with "large cardinals".
Ali Enayat
$\begingroup$ If the twin prime conjecture is decidable in ZF, and ZF is arithmetically (or $\Sigma^0_2$) sound, then its truth value can be determined by an algorithm using no oracle whatsoever: enumerate all ZF proofs until you find a proof of the conjecture or of its negation. $\endgroup$ – Emil Jeřábek supports Monica Nov 29 '13 at 14:33
I just saw, according to this thread, that it's an open problem: http://boards.straightdope.com/sdmb/archive/index.php/t-569801.html
$\begingroup$ @GH: I am afraid you are confusing the fact that there is no automatic way to solve the halting problem (for all computer programs), with the fact that perhaps we can prove that a particular program halts. The question talks about one particular computer program, which is unknown whether it halts or not; indeed this particular program halts iff the twin conjecture holds. $\endgroup$ – boumol Jul 23 '11 at 10:31
$\begingroup$ @GH: One of the ways to formulate this kind of question is as follows: Is an explicit algorithm A using the oracle for the halting problem known such that A answers "Yes" if and only if the twin prime conjecture holds? $\endgroup$ – Tsuyoshi Ito Jul 23 '11 at 15:03
$\begingroup$ @boumol: I disagree. The question asks how can we decide the twin prime conjecture with a halting oracle. Actually within a formal system like ZFC one can reformulate the twin prime conjecture to a single halting problem: the program generates all conclusions of ZFC and stops if the twin prime conjecture is above them. This program halts iff the twin prime conjecture is valid within ZFC. $\endgroup$ – GH from MO Jul 23 '11 at 19:22
$\begingroup$ @GH "There is no halting oracle" No, this is incorrect. There is a halting oracle. There's just no computable halting oracle. $\endgroup$ – Sam Alexander Jul 23 '11 at 21:12
$\begingroup$ I dropped the condition "use the oracle only once" by mistake. What I meant to say was that one way to formulate this question rigorously is: Is an explicit algorithm A using the oracle for the halting problem only once known such that A answers "Yes" if and only if the twin prime conjecture holds? Without some kind of formulation, "The twin prime conjecture is (or is not) decidable" does not make much sense. $\endgroup$ – Tsuyoshi Ito Jul 23 '11 at 21:21
Let $g(3)=4$, and let $g(n+1)=g(n)!$ for every integer $n \geq 3$. For an integer $n \geq 3$, let $\Psi_n$ denote the statement: if a system $$S \subseteq \{x_i!=x_{i+1}: 1 \leq i \leq n-1\} \cup \{x_i \cdot x_j=x_{j+1}: 1 \leq i \leq j \leq n-1\}$$ has at most finitely many solutions in positive integers $x_1,\dots,x_n$, then each such solution $(x_1,\dots,x_n)$ satisfies $x_1,\dots,x_n \leq g(n)$. We conjecture that the statements $\Psi_3,\dots,\Psi_{16}$ are true. The statement $\Psi_{16}$ proves the implication: if there exists a twin prime greater than $g(14)$, then there are infinitely many twin primes, please see: A. Tyszka, A common approach to Brocard's problem, Landau's problem, and the twin prime problem,
http://arxiv.org/abs/1506.08655v21
That is, assuming the statement $\Psi_{16}$, a simple single query to $0′$ decides the twin prime problem.
Gerry Myerson
Apoloniusz Tyszka
Multi-positional image-based vibration measurement by holographic image replication
Simon Hartlieb 1, *,
Michael Ringkowski 2,
Tobias Haist 1,
Oliver Sawodny 2,
Wolfgang Osten 1
Institute for Applied Optics, University of Stuttgart, Pfaffenwaldring 9, Stuttgart 70569, Germany
Institute for System Dynamics, University of Stuttgart, Waldburgstraße 17/19, Stuttgart 70563, Germany
Simon Hartlieb ([email protected])
In this study we present a novel and flexibly applicable method to measure absolute and relative vibrations accurately in a field of 148 mm × 110 mm at multiple positions simultaneously. The method is based on imaging in combination with holographic image replication of single light sources onto an image sensor, and requires no calibration for small amplitudes. We experimentally show that oscillation amplitudes of 100 nm and oscillation frequencies up to 1000 Hz can be detected clearly using standard image sensors. The presented experiments include oscillations of variable amplitude and a chirp signal generated with an inertial shaker. All experiments were verified using state-of-the-art vibrometers. In contrast to conventional vibration measurement approaches, the proposed method offers the possibility of measuring relative movements between several light sources simultaneously. We show that classical band-pass filtering can be omitted, and the relative oscillations between several object points can be monitored.
[1] Y. Fu et al. Spatially encoded multibeam laser doppler vibrometry using a single photodetector. Optics Letters 35, 1356-1358 (2010). doi: 10.1364/OL.35.001356
[2] P. B. Phua et al. Multi-beam laser doppler vibrometer with fiber sensing head. AIP Conference Proceedings 1457, 219-226 (2012).
[3] T. Haist et al. Characterization and demonstration of a 12-channel laser-doppler vibrometer. volume 8788. Proceedings of SPIE, 2013.
[4] C. Yang et al. A multi-point laser doppler vibrometer with fiber-based configuration. Review of scientific instruments 84, 121702 (2013). doi: 10.1063/1.4845335
[5] R. D. Burgett et al. Mobile mounted laser Doppler vibrometer array for acoustic landmine detection. In Detection and Remediation Technologies for Mines and Minelike Targets VIII, volume 5089, pages 665–672. Proceedings of SPIE, 2003.
[6] W. N. MacPherson et al. Multipoint laser vibrometer for modal analysis. Applied Optics 46, 3126-3132 (2007). doi: 10.1364/AO.46.003126
[7] J. M. Kilpatrick & Markov V. Matrix laser vibrometer for transient modal imaging and rapid nondestructive testing. In Eighth International Conference on Vibration Measurements by Laser Techniques: Advances and Applications, volume 7098. Proceedings of SPIE, 2008.
[8] Polytec GmbH. https://www.polytec.com/de/vibrometrie/produkte/full-field-vibrometer/mpv-800-multipoint-vibrometer. (2021-11-04).
[9] D. Kim et al. 3-d vibration measurement using a single laser scanning vibrometer by moving to three different locations. IEEE Transactions on Instrumentation and Measurement 63, 2028-2033 (2014). doi: 10.1109/TIM.2014.2302244
[10] K. Kokkonen & M. Kaivola. Scanning heterodyne laser interferometer for phase-sensitive absolute-amplitude measurements of surface vibrations. Applied Physics Letters 92 (2008).
[11] S. Kim et al. A vision system for identifying structural vibration in civil engineering constructions. In Proceedings of 2006 SICE-ICASE International Joint Conference. IEEE, 2006.
[12] D. Ribeiro et al. Non-contact measurement of the dynamic displacement of railway bridges using an advanced video-based system. Engineering Structures 75, 164-180 (2014). doi: 10.1016/j.engstruct.2014.04.051
[13] A. M. Wahbeh et al. A vision-based approach for the direct measurement of displacements in vibrating systems. Smart Materials and Structures 12, 785-794 (2003). doi: 10.1088/0964-1726/12/5/016
[14] S. Patsias & W. J. Staszewskiy. Damage detection using optical measurements and wavelets. Structural Health Monitoring 1, 5-22 (2002). doi: 10.1177/147592170200100102
[15] H. Y. Wu et al. Eulerian video magnification for revealing subtle changes in the world. ACM Transactions on Graphics 31, (2012).
[16] J. G. Chen et al. Modal identification of simple structures with high-speed video using motion magnification. Journal of Sound and Vibration 345, 58-71 (2015). doi: 10.1016/j.jsv.2015.01.024
[17] J. G. Chen et al. Video camera-based vibration measurement for civil infrastructure applications. Journal of Infrastructure Systems 23, B4016013 (2017).
[18] N. Wadhwa et al. Phase-based video motion processing. ACM Transactions on Graphics 32, (2013).
[19] Z. Liu et al. Time-varying motion filtering for vision-based nonstationary vibration measurement. IEEE Transactions on Instrumentation and Measurement 69, 3907-3916 (2020). doi: 10.1109/TIM.2019.2937531
[20] Z. Liu et al. Vision-based vibration measurement by sensing motion of spider silk. Procedia Manufacturing 49, 126-131 (2020).
[21] L. P. Yu & B. Pan. Single-camera high-speed stereo-digital image correlation for full-field vibration measurement. Mechanical Systems and Signal Processing 94, 374-383 (2017). doi: 10.1016/j.ymssp.2017.03.008
[22] G. D'Emilia, L. Razzè & E. Zappa. Uncertainty analysis of high frequency image-based vibration measurements. Measurement 46(8), 2630-2637 (2013). doi: 10.1016/j.measurement.2013.04.075
[23] T. Haist et al. Multi-image position detection. Optics express 22, 14450-14463 (2014). doi: 10.1364/OE.22.014450
[24] R. D. Gow et al. A comprehensive tool for modeling cmos image-sensor-noise performance. IEEE Transactions on Electron Devices 54, 1321-1329 (2007). doi: 10.1109/TED.2007.896718
[25] B. F. Alexander & K. C. Ng. Elimination of systematic error in subpixel accuracy centroid estimation [also Letter 34(11)3347-3348(Nov1995)]. Optical Engineering 30, 1320-1331 (1991). doi: 10.1117/12.55947
[26] M. R. Shortis, T. A. Clarke & T. Short. Comparison of some techniques for the subpixel location of discrete target images. In Videometrics III, pages 239–250. Proceedings of SPIE 2350, 1994.
[27] S. Thomas. Optimized centroid computing in a Shack-Hartmann sensor. In Advancements in Adaptive Optics, volume 5490. Proceedings of SPIE, 2004.
[28] F. Schaal et al. Applications of diffractive optical elements for optical measurement techniques. In Holography, Diffractive Optics, and Applications VI, volume 9271, pages 1–7. SPIE, 2014.
[29] S. Hartlieb et al. Hochgenaue kalibrierung eines holografischen multi-punkt positionsmesssystems. tm - Technisches Messen 87, 504-513 (2020). doi: 10.1515/teme-2019-0153
[30] S. Hartlieb et al. Highly accurate imaging based position measurement using holographic point replication. Measurement 172, 108852 (2021). doi: 10.1016/j.measurement.2020.108852
[31] T. Schmidt, J. Tyson & K. Galanulis. Full-field dynamic displacement and strain measurement using advanced 3d image correlation photogrammetry: part 1. Experimental Techniques 27, 47-50 (2003).
[32] F. Guerra et al. Precise building deformation measurement using holographic multipoint replication. Applied Optics 59(9), 2746-2753 (2020). doi: 10.1364/AO.385594
[33] S. Hartlieb et al. Accurate 3D coordinate measurement using holographic multipoint technique. In Optics and Photonics for Advanced Dimensional Metrology, volume 11352, pages 1–12. Proceedings of SPIE, 2020.
High resolution image based vibration measurement
We present an imaging-based vibration measurement method that can detect in-plane vibration amplitudes of 100 nm at multiple object points in a field of 148 mm × 110 mm. The measurement method is based on imaging single point light sources onto a camera. The classical approach to measuring the object position is to calculate the center of the resulting spot in the image plane. With our method, this single spot is replicated into a predefined pattern of spots. Therefore, the imaging lens is upgraded with an optical element, a so-called computer-generated hologram, to perform this replication. By averaging the center positions of all replicated spots, the precision of position measurement can be improved by a factor equal to the square root of the number of replications.
Introduction and state of the art
Precise measurement of deformations and vibrations is required in a wide range of industrial applications. Classical approaches such as laser Doppler vibrometers (LDVs) offer the possibility of measuring object vibrations at high temporal and spatial resolution. However, as soon as multiple simultaneous measurements are required, the use of single LDVs quickly becomes impractical and costly. The market-driven need for simultaneous vibration measurements at multiple positions manifested itself in the development of LDVs with multi-beam1−4 and multi-sensor5−7 applications. In addition, commercial solutions consisting of multiple sensor heads are available8, but these systems are either limited by the flexibility of beam orientation or costly in terms of adjustment of sensor heads, signal synchronization, and especially in terms of price. Scanning laser interferometers9, 10 can be used to measure full-field vibrations, but the obtained signals cannot be acquired simultaneously, making it impossible to measure transient signals.
Image-based vibration measurement techniques inherently offer the possibility of measuring the movement of an object remotely at multiple positions as well as measuring the displacement of several objects simultaneously.
In many cases, planar, passive targets are used, reaching accuracies in the range of 0.6 mm at a distance of 100 m and 0.1 mm for a distance of 15 m and only low-frequency vibrations are detected11, 12. As an alternative to the planar target, active elements (LEDs) and edge detection can be used13, 14. Especially in the field of edge detection, a lot of research regarding motion magnification, a technique that allows magnifying small displacements, has been conducted recently. A promising approach was presented by Wu et al., where small movements were magnified in video sequences by applying a bandwidth filter in the spatial frequency domain15. This method was used in the vibration analysis of structures and achieved impressive accuracies of 0.006 pixel16, 17. Nevertheless, the method is limited to stationary vibrations and can produce misleading artifacts18. The time-varying motion filtering proposed by Liu et al. can be a solution for the problem of limited bandwidth19.
Higher frequency vibrations of up to 1000 Hz were analyzed by several authors20−22, using techniques such as phase-based optical flow and pattern-matching. The reported standard uncertainties are 0.032 mm for vibration frequencies of 100−300 Hz and 0.013 mm for 400−600 Hz using passive targets and a pattern-matching technique22. This corresponds to a dynamic range of 12000:1 (measurement range/uncertainty).
Principle and methods
Our approach for measuring the vibration of an object is shown in Fig. 1. Multiple light sources (shown in red) were attached to a vibrating object and imaged to a camera. By monitoring the position of each object light source with a fixed acquisition frame rate, movements between each frame can be ascribed to a superposition of the object vibration and ambient vibrations of the laboratory room, neglecting other error sources such as air turbulence and sensor noise. One static light source (green) is not attached to the vibrating object; therefore, it is only exposed to ambient vibrations. By calculating the relative displacement between an object and a static light source, these unwanted vibrations can be compensated for. This is schematically visualized in the two spectra shown in Fig. 1. The upper spectrum is given for the absolute movement of an object light source $ x_{abs} $ in the presence of unwanted ambient vibrations. The lower shows the spectrum of the relative displacement $ x_{rel} $ between a vibrating and a static light source without ambient vibrations.
Fig. 1 Scheme of the vibration measurement method, including the spectra of the absolute and relative spot position signals.
Point light sources on a vibrating object (red) and a static object (green) are imaged to a camera. The center of the imaged spot is used as the position signal. Environmental vibrations (marked with a green square) are present in the absolute position spectrum $ x_{abs}$ of the red light sources. By calculating the relative movement $ x_{rel}$ between the red and green light sources, ambient vibrations can be removed.
In the following, this method is improved by using holographical image replication to reach very high resolution, and we present experimental results to support the ability to compensate for ambient vibration in relative position measurement.
In our measurement system, the position of each light source is defined by the location of the intensity distribution (spot) on the camera sensor. Therefore, it is crucial for our measurement system to detect the spot position as accurately as possible. To this end, the holographic multipoint method is applied23. In the following, we provide a brief introduction to the underlying spot detection.
Holographic multipoint method
The accuracy of subpixel spot position detection is limited by several factors such as photon noise, electronic noise24, discretization and quantization of the camera sensor25, and the choice of centroiding algorithm26. The number of collected photons that contribute to the intensity distribution of a spot is directly correlated with the uncertainty of its position determination. Starting from a single object point, if M photons are imaged to the camera and individual position measurements are made for each photon, the mean over all individual measurements leads to a positional standard deviation that is reduced by a factor of $ \sqrt{M} $ 27. Temporal averaging is one way to increase the number of collected photons and therefore improve accuracy. However, temporal averaging reduces neither the fixed pattern noise of the image sensor nor the discretization error. In terms of image-based vibration measurements, temporal averaging would also reduce temporal resolution.
The principle of our vibration measurement method is based on spatial averaging. The spot of a single light source is holographically replicated into a cluster of $ N\; = \; 21 $ spots. The spot cluster is generated using a lithographically fabricated diffractive optical element (DOE)28, which is placed in the Fourier plane of a bi-telecentric lens, as shown in Fig. 2. If the light source is moved, all replicated spots move by the same amount (Fourier shift theorem). As described above, by making the light source brighter, the number of photons and simultaneously the number of pixels that sample the intensity distribution are increased. By calculating the mean of all spot centers per cluster, the position accuracy of the light source can be improved ideally by a factor of $ \sqrt{N} $ . This has been shown by Haist et al.23, where a single light source was replicated into a cluster of $ N\; = \; 16 $ spots. The detection accuracy was improved in experiments from 0.01 pixels to 0.0028 pixels, which corresponds to an improvement factor of 3.6 (theoretical factor of 4).
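The expected $\sqrt{N}$ improvement can be illustrated with a short Monte-Carlo sketch, assuming independent, identically distributed noise on each spot center; the noise level of 0.01 pixels is an arbitrary illustrative value, not a property of the sensor used here:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials, sigma = 21, 100_000, 0.01  # spots per cluster, repetitions, single-spot noise [pixels]
single = rng.normal(0.0, sigma, trials)                     # single-spot position estimates
cluster = rng.normal(0.0, sigma, (trials, N)).mean(axis=1)  # cluster-averaged estimates
print(single.std() / cluster.std())   # close to sqrt(21) ≈ 4.58
```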
Fig. 2 Scheme of a bi-telecentric lens with f1, f2 as the focal lengths of the lenses L1 and L2.
A diffractive optical element (DOE) is placed in the aperture (Fourier) plane. The DOE replicates a single light source into a predefined cluster of N = 21 spots.
Measurement setup
A schematic of the measurement setup is depicted in Fig. 3. We use a camera system consisting of a bi-telecentric lens (6) (Vico DTCM110-150-AL, $ NA $ = 0.0085, $ |\beta'| $ = 0.11, distortion < 0.1%) and a high-speed camera (8) (Mikrotron EoSens 4CXP, 4 Mpix, pixel size $ d_p $ = 7 µm, 563 fps full frame). In the aperture plane of the bi-telecentric lens (6), a DOE (7) is mounted to perform the multipoint replication. Light sources (Roithner LaserTechnik, SMB1N-D630, $ \lambda $ = 630 nm, FWHM = 16 nm, radiant intensity = 120 mW/sr) are attached to a static object (5) and to an object (4) that is mechanically actuated by a piezo stage (2) (Physik Instrumente P-611.2S, $ X/Y $ -travel range = 100 µm, resolution = 0.2 nm). In front of each light source, a pinhole ($ D $ = 200 µm) is mounted to obtain comparable radiation profiles and small spots on the camera. The displacement of the object (4) is measured in the $ X_W $ - and $ Z_W $ -directions using two vibrometers (1) and (3) (Polytec NLV 2500). In this study we use only the $ X_W $ -direction.
The piezo stage is operated in a closed-loop with a commercial controller (PI E-727.3SDA, 20 kHz). The position references for generating the vibrations are supplied from a dSpace DS-1005 1 GHz PowerPC (2 kHz/ 3 kHz sampling rate), which is also used for signal acquisition. The camera is hardware-triggered using a frequency generator (Rigol DG1022Z). The maximum acquisition frame rate of the camera is defined by the size of the captured image region of interest (ROI). For an ROI size of 900 pixels $ \times $ 576 pixels, the maximum frame rate is 2000 frames per second (fps). The size of one cluster in the $ X $ - and $ Y- $ directions is 170 pixels $ \times $ 170 pixels, so a total of 15 object points (clusters) can be monitored simultaneously at 2000 fps. For an image size of 700 pixels $ \times $ 300 pixels, the maximum frame rate can be increased to 3000 fps.
The two-dimensional measurement field is defined by the camera sensor size divided by the magnification of the imaging lens. For the measurement setup presented in this study, the field size is 148 mm $ \times $ 110 mm. This enables the simultaneous monitoring of a total number of 130 equally spaced object points with a frame rate of 563 Hz.
To measure the vibration of each light source, it is necessary to calculate the position of each corresponding cluster in the image. The procedure for determining the subpixel cluster position is summarized in Fig. 4.
Fig. 4 Three-step image processing procedure:
a Initialization with spot map; b Find clusters; c Calculate averaged subpixel cluster positions.
Step 1: In the first step, an image region containing one cluster is selected, saved, and used for cross correlation with the captured images. In this template image, the coarse position of each spot is localized by convolution with a blur kernel and by the application of a local maxima algorithm. The coarse spot positions (marked as blue crosses) are given in the local template coordinate system and are stored in a vector (spot map). As a consequence of the Fourier-Shift-Theorem, all replicated spots move by the same amount and, therefore, the relative position between all spots per cluster is in good approximation constant over the whole image. Therefore, once the spot map is generated, it can be applied to all clusters in all images of the image stack.
Step 2: In the second step each cluster position is localized coarsely. The upper left corners of each cluster are found using cross-correlation with the template from step 1. The correlation result is blurred, and local maxima are obtained by applying a local maxima algorithm. For vibration measurements, it is sufficient to detect the cluster positions only for the first image and to keep those positions for all subsequent images of the stack, because the cluster movement does not exceed a few pixels.
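A possible implementation of this coarse localization is sketched below, assuming NumPy/SciPy; the blur width and the peak-acceptance threshold are illustrative choices and not values taken from the actual processing pipeline:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter, maximum_filter

def coarse_cluster_positions(image, template, blur=3):
    """Coarsely locate all clusters via cross-correlation with the template (step 2)."""
    corr = fftconvolve(image, template[::-1, ::-1], mode='same')  # cross-correlation
    corr = gaussian_filter(corr, blur)                            # blur the correlation result
    local_max = corr == maximum_filter(corr, size=template.shape)
    peaks = local_max & (corr > 0.5 * corr.max())                 # reject weak maxima (assumed threshold)
    return np.argwhere(peaks)                                     # (row, col) of each cluster
```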
Step 3: The coordinates of the spot map (step 1) are combined with the coarse cluster positions from step 2. An ROI can be applied around each single spot per cluster. The ROIs are marked in Fig. 4c using blue squares. The spot diameter is approximately 8 to 10 pixels, so the square ROI around each spot is selected to be 30 pixels wide, which corresponds to 1.9 mm in the object space.
The subpixel position $ (x_ {sp}, y_{sp}) $ of one spot inside the ROI is calculated as
$$ (x_ {sp}, y_{sp}) = \left(\frac{\sum\limits_{x}\sum\limits_{y}xI(x, y)} {\sum\limits_{x}\sum\limits_{y}I(x, y)}, \frac{\sum\limits_{x}\sum\limits_{y}yI(x, y)} {\sum\limits_{x}\sum\limits_{y}I(x, y)}\right) $$
using the gray value weighted center of gravity (CoG), where $ I(x, y) $ is the intensity at position ($ x $ , $ y $ ). All intensity values below a threshold are set to zero to remove background noise. The position of one cluster containing $ N $ spots is given by
$$ (x_ {cl}, y_{cl}) = \left(\frac{1}{N}\sum\limits_{i = 1}^{N}x_{sp,i}, \frac{1}{N}\sum\limits_{i = 1}^{N}y_{sp,i}\right) $$
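A minimal NumPy sketch of Eqs. 1 and 2 might look as follows; the ROI half-width of 15 pixels matches the 30-pixel ROI described above, while the background threshold is an assumed value:

```python
import numpy as np

def spot_cog(roi, threshold=10):
    """Gray-value weighted center of gravity (Eq. 1); intensities below threshold are zeroed."""
    img = np.where(roi > threshold, roi, 0).astype(float)
    ys, xs = np.indices(img.shape)
    s = img.sum()
    return (xs * img).sum() / s, (ys * img).sum() / s

def cluster_position(image, spot_coords, half=15):
    """Mean of the N spot centers of one cluster (Eq. 2)."""
    centers = []
    for x0, y0 in spot_coords:  # coarse spot centers from the spot map, in image coordinates
        roi = image[y0 - half:y0 + half, x0 - half:x0 + half]
        cx, cy = spot_cog(roi)
        centers.append((x0 - half + cx, y0 - half + cy))
    return np.mean(centers, axis=0)
```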
The position of each cluster is given in pixel coordinates. A conversion to a metric length unit can be achieved using the magnification $ \beta' $ of the bi-telecentric lens and the pixel size $ d_p $ of 7 µm. The conversion factor $ k $ is given by
$$ k = \frac{d_p}{\beta'} = \frac{7 \mu \mathrm{m}}{0.11} = 63.64 \frac{\mu \mathrm{m}}{\mathrm{Pixel}} $$
This linear mapping can be applied only for small movements on the sensor, because the lens distortion introduces an error that is position dependent. However, for vibrations, linear mapping is a valid assumption.
Experiments and results
The goal of the presented experiments is to obtain an overview of the capabilities of the proposed vibration measurement method in terms of accuracy improvement and resolution limit. To investigate the sensitivity of the proposed measurement method over a wide frequency range, an inertial shaker is used to create a chirp signal up to high frequencies.
The measurement setup shown in Fig. 3 is used to demonstrate the accuracy improvement using the multipoint method. The piezo stage is actuated in the $ X_W $ -direction with a sinusoidal oscillation of amplitude 1 µm and frequency of 50 Hz for a time period of 0.4 s. The camera image has a size of 900 pixels $ \times $ 576 pixels, and the sampling rate of the camera is set to 1000 fps.
Fig. 3 Experimental setup:
A telecentric lens (6) images the light sources of a moving (4) and a static (5) object using a high-speed camera (8). Each spot of a light source is replicated using a DOE (7). A piezo stage (2) is used to actuate an object (4), and its displacement is measured by two vibrometers (1,3).
In Fig. 5, the positional signals of the vibrometer and the camera are given in micrometer and pixel units. The reference signal of the $ X $ -vibrometer is plotted in the middle (red). For clarity, the camera signals have an additional offset. The signal is evaluated using a conventional single spot (upper, blue signal) as described in Eq. 1, shifted by +3 µm. The averaged cluster position using the multipoint method (lower, green signal), calculated according to Eq. 2, is shifted by -3 µm. Both signals are high-pass filtered (30 Hz cutoff frequency). In the lower plot, the difference between the two camera signals and the reference signal is shown. It is clearly visible that the multipoint position signal ($ x_{cl} $ ) matches the vibrometer signal ($ x_{vib} $ ) better than the conventional single-spot evaluation ($ x_{sp} $ ) by a factor of 4. The standard deviation of the difference signals is $ \sigma_{sp} $ = 0.402 µm for a single spot, and $ \sigma_{cl} $ = 0.104 µm for the averaged cluster position. Applying the conversion factor $ k $ (Eq. (3)), this corresponds to $ \sigma_{sp} $ = 0.0069 pixels and $ \sigma_{cl} $ = 0.0017 pixels.
Fig. 5 Accuracy improvement of multipoint method: $ X- $ oscillation measurement (frequency 50 Hz, 1 µm amplitude) with the proposed camera system and a vibrometer as a reference.
The upper signal $ x_{sp} $ (blue) is evaluated using the position signal of a single spot (Eq. 1) and the lower signal $ x_{cl} $ (green) using the averaged cluster position (Eq. 2) (shown with $ \pm $3 µm offsets). The difference with respect to the reference signal $ x_{vib} $ is plotted in the lower chart.
Resolution limit
To analyze the lower bound on the measurable amplitudes, we use the same experimental setup as in the previous section.
Our experiments show that an amplitude of 100 nm can still be measured using this method. Together with the measurement range of 148 mm this corresponds to a dynamic range of 1480000:1. In Fig. 6a the camera (red) and the vibrometer (blue) signals are depicted for an oscillation amplitude of 100 nm. Both signals are high-pass filtered (30 Hz cutoff frequency) to obtain the pure motion signals detected by the vibrometer and the camera. Despite the noise of the camera signal, the oscillation frequency is clearly visible and matches the vibrometer signal. An amplitude of 100 nm in the object space can be converted to the imaging sensor using the magnification $ \beta' $ or the conversion factor $ k $ . This corresponds to 11 nm (0.0016 pixels) on the sensor. The difference between the vibrometer and the camera is plotted in the lower chart. The standard deviation of the difference signal in the image plane is $ \sigma_{d} $ = 0.095 µm or $ \sigma_{d} $ = 0.0015 pixels.
Fig. 6 Piezo stage oscillation ($ X_W $ -direction) of 50 Hz and 100 nm amplitude measured by the proposed camera setup and a vibrometer as reference.
a Position signals over time of camera (red) and vibrometer (blue); the difference is plotted beneath. Both signals are high-pass filtered (30 Hz cutoff frequency). b Amplitude spectrum of the camera signal (upper) and the vibrometer signal (lower). Both have a peak frequency at 49.62 Hz. The filtered-out ambient vibration spectrum is shown in light blue.
Fig. 6b shows the amplitude spectrum of the camera (upper plot) and the vibrometer (lower plot). The filtered-out ambient vibration spectrum (below 30 Hz) is shown in light blue. It can be observed that both measured frequency spectra show good correspondence. The detected peak frequency of the proposed measurement system as well as that of the reference system is 49.62 Hz. Small stimulations at a frequency of approximately 100 Hz are also detected with both systems.
Ambient vibration
Ambient vibrations are present in almost every vibration measurement scenario. Conventionally, they are suppressed using a high-pass filter, as done in the previous sections. The frequency peaks of the ambient vibrations in our laboratory room are 10.4 Hz, 16.2 Hz, and 22.1 Hz. In the case of the vibrometer, those disturbing vibrations are not easy to avoid, whereas with the camera system, they can be suppressed almost entirely. Therefore, as described in the introduction, we use the relative movement between the moving and static light sources in the image.
A typical image containing four clusters, $ C_0 $ to $ C_3 $ , is depicted in Fig. 7a. The blue clusters $ C_0 $ and $ C_1 $ belong to the light sources that are attached to the piezo stage, and the yellow clusters $ C_2 $ and $ C_3 $ belong to the static light sources (see Fig. 3a).
Fig. 7 a The camera image of the experimental setup of Fig. 3 containing four clusters. The blue clusters $ C_0 $ and $ C_1 $ are actively moved by the piezo stage, and the yellow clusters $ C_2 $ and $ C_3 $ are static. b The spectrum of an $ X $-oscillation (50 Hz, 100 nm amplitude) for the mean relative position $ (x_{c_0}+x_{c_1})/2-(x_{c_2}+x_{c_3})/2 $ (upper), the absolute position $ x_{c_0} $ (middle), and the vibrometer position signal (lower). The absolute position signals of the camera and vibrometer show environmental vibrations (green squares) that necessitate the use of a high-pass filter. In the relative signal (upper chart), these vibrations are intrinsically compensated and require no filtering.
The relative position is calculated as
$$ x_{rel} = \frac{x_{c_0} + x_{c_1}}{2} - \frac{x_{c_2}+ x_{c_3}}{2} $$
with $ x_{\mathrm{c},i} $ being the $ X $ -coordinates of clusters $ C_{i} $ , i = 0,1,2,3.
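In code, the compensation of Eq. 4 is a one-liner; the following sketch assumes the cluster X-positions are stored row-wise in an array of shape (4, T), with a single-sided amplitude spectrum added for inspection:

```python
import numpy as np

def relative_signal(xc):
    """Eq. 4: xc has shape (4, T) with the X-positions of clusters C0..C3 over time."""
    return (xc[0] + xc[1]) / 2.0 - (xc[2] + xc[3]) / 2.0

def amplitude_spectrum(x, fs):
    """Single-sided amplitude spectrum of a position signal sampled at fs."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    amps = np.abs(np.fft.rfft(x)) * 2.0 / len(x)
    return freqs, amps
```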
A spectral analysis was performed for the vibrometer, the absolute position signal of cluster $ C_0 $ , and the relative position signal according to Eq. 4. The results are shown in Fig. 7b. Ambient vibrations introduced by the laboratory environment are marked in the spectrum of the absolute cluster position $ C_0 $ (middle plot) and in the vibrometer spectrum (lower plot) with green squares. The upper plot shows the spectrum of the relative position signal obtained using Eq. 4. It can be seen that all three peaks of the ambient vibrations are compensated almost entirely. We would like to emphasize that with this method, the signal to be measured does not have to be spectrally separated from ambient vibrations. The reason we have chosen a signal frequency higher than the ambient vibration frequencies was to be able to compare our results with the LDV.
High frequency measurement
We also use a chirp signal to verify the sensitivity of the proposed measurement method over a wide frequency range. The chirp signal consisted of frequencies that linearly increased from 200 Hz to 1000 Hz at a constant amplitude. An inertial shaker is used to convert the chirp signal to a movement in the $ X- $ direction, and a single light source is attached to the shaker tip. The camera frame rate is set to 3000 fps, requiring a reduced image resolution of 700 pixels $ \times $ 300 pixels. The vibrometer signal is also acquired at 3000 Hz.
Fig. 8 shows the measurement results of the vibrometer and the camera, including the position signals in the time domain and their difference (a), as well as the short-time Fourier transform (STFT) of each signal in (b) and (c). The vibrometer and the camera signals are high-pass filtered with 30 Hz and both are compensated for positional and time offset. The time-domain plot demonstrates that the change in amplitude and frequency can be accurately measured by the proposed system. The standard deviation of the difference signal is $ \sigma_{d} $ = 0.242 µm or $ \sigma_{d} $ = 0.0038 pixels.
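The STFT analysis of Fig. 8b, c can be reproduced in a few lines; the sketch below uses a synthetic SciPy chirp as a stand-in for the measured signals, and the window length of 256 samples is an illustrative choice:

```python
import numpy as np
from scipy.signal import chirp, stft

fs, T = 3000, 2.0                     # sampling rate [Hz] and duration [s]
t = np.arange(0, T, 1.0 / fs)
x = chirp(t, f0=200, t1=T, f1=1000, method='linear')  # synthetic stand-in for the shaker motion
f, tt, Z = stft(x, fs=fs, nperseg=256)
# |Z| shows a ridge rising linearly from 200 Hz to 1000 Hz, as in Fig. 8b, c
```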
Fig. 8 High frequency measurement.
a Chirp signal of an inertial shaker with linearly increasing frequency from 200 Hz to 1000 Hz measured with a camera and vibrometer. b and c are the short-time Fourier transform (STFT) of each signal.
All presented measurement results were achieved by applying a (linear) conversion factor $ k $ from the image domain to real-world units. However, as the vibrational amplitude increases, the influence of lens distortions on the position signal increases. To evaluate the error introduced by the linear mapping, an accurate calibration of the camera system would be necessary. However, if larger amplitudes are to be measured, a resolution of 100 nm is often not necessary. In [29, 30], it was reported that the standard deviation of the reprojection error for a linear conversion factor is 16 µm for a measurement field of 100 mm × 74 mm. Simulations using 1000 different measurements with amplitudes ranging from 0.1 mm to 50 mm show that the standard deviation is in good approximation linearly dependent on the amplitude. Therefore, depending on the application (vibration amplitude, desired accuracy), it must be decided whether calibration is necessary. For our experiments, this error was negligible because the measured amplitudes were very small.
Similar to any other vibration measurement method, the proposed system has certain advantages and disadvantages. The precision of the proposed method depends on the amount of light available at the object points. To achieve a high signal-to-noise ratio, we applied active light sources with commercial pinholes (Thorlabs, P200K) mounted in front to obtain small spots and a similar intensity distribution for all clusters. A commercial pinhole is rather inconvenient for use in industrial applications because it makes the light sources bulky and difficult to attach. Here, a blackened aluminum foil with small holes can be used as the aperture, placed in front of the LEDs. Together with a small coin cell, the LED modules can become very small, easy to attach, and can operate independently. Another disadvantage compared to an LDV is the need for active light modules that must be attached to the object. One solution for this issue is the use of fluorescent particles or micro reflectors (spheres) that are attached to the object, in combination with external illumination. The resulting effect on measurement resolution, in particular for small amplitudes and high frequencies, will be discussed in a following publication.
On the other hand, the advantages of the system include its high resolution and ease of use regarding application and signal processing. Furthermore, with the proposed camera-based method, it is possible to measure the vibration amplitudes directly in two dimensions without the need for time-based integration of the acceleration signal, as would be necessary for LDVs. This also offers the opportunity to measure the relative movements between multiple target points.
The concept of averaging over a certain lateral domain to increase the measurement uncertainty is used not only in our method, but also in digital image correlation (DIC), which is a common technique for measuring full-field vibrations. In DIC, the resolution as well as the accuracy of lateral (in-plane) and axial (out-of-plane) position measurements depend on the size of the correlation patch. Using large correlation patches, the in-plane and out-of-plane resolution of DIC can be very good. In [31], for example, a sensitivity of 3.3 µm was reported for a field of view of 100 mm $ \times $ 80 mm. Large patches, however, also lead to reduced lateral resolution and sampling of this lateral and axial position measurement. This is a problem for high lateral resolution measurements, which are necessary, especially for objects that deform. In DIC, even slight deformations over the extent of the correlation patch lead to inaccuracies in the correlation. Our method avoids this problem by averaging while maintaining the highest lateral resolution of the object by using localized small object points. Averaging is realized in the image space instead of the object space.
To date, the multipoint method has been used for the measurement of building deformation and 2D/3D coordinate measurements29,30,32,33. In the case of building deformation measurement, accelerometers are often employed because of their high resolution and ease of use. However, accelerometers measure only changes in velocity and, therefore, tend to drift, especially for slow movements. In addition, they often need wiring for power supply and data transfer, which makes them more inconvenient to apply, compared to an independent light module.
The application scope of our method is mainly seen in the industrial sector, for example, in measuring vibrations of car engines, car body parts (e.g., brakes), or other objects where it is helpful to measure at multiple positions simultaneously. Applications that are not practicable for the proposed method are those in which the mass or stiffness of the measured object is changed significantly by the applied light modules, for example in mini- or microsystems or perhaps in ultra-lightweight constructions. For larger applications such as bridges or buildings, the multipoint method can also be used, but error influences resulting from air turbulence and thermally induced refraction index changes must be considered. As long as these restrictions are considered, this measurement system offers a cheap and simple way to measure a large range of amplitudes with very high resolution.
The proposed vibration measurement system is based on the detection of active light markers attached to a vibrating object. A diffractive optical element is used to replicate each light source holographically into a cluster of $ N $ = 21 spots in the image plane. Spatial averaging of all spot centers per cluster improves the detection accuracy of the light marker position ideally by a factor of $ \sqrt{N} $ .
The proposed vibration measurement setup is able to detect vibration amplitudes of 100 nm clearly in both the spatial and frequency domains with a standard deviation of $ \sigma_d $ = 0.095 µm, which corresponds to $ \sigma_d $ = 0.0015 pixels. To the best of our knowledge, this is the highest reported accuracy for imaging-based spot position measurements. The dynamic range of the proposed system is 1480000:1. High frequencies were analyzed by attaching a light source to an inertial shaker. A chirp signal from 200 Hz to 1000 Hz with amplitudes from 28 µm to 0.3 µm was measured. Both spectrograms (STFT) of the camera and the vibrometer show good correspondence.
We also showed that ambient vibrations can be compensated almost entirely by calculating the relative movement between the light markers attached to the environment and to the object, making it possible to avoid the use of band-pass filters. In addition, the concept of band-pass filtering may also be combined with the aforementioned subtraction method. In this way, the signal quality in terms of noise of the specimen, which vibrates with a frequency in the range of the ambient, can be improved even further. The main advantages of the proposed method are its simplicity and cost regarding components and signal processing, and in the high resolution that can be achieved even under external disturbances.
We gratefully thank the German Research Foundation (DFG - Deutsche Forschungsgemeinschaft) for funding the projects "Dynamische Referenzierung von Koordinatenmess- und Bearbeitungsmaschinen" (OS 111/42-2 and SA 847/16-2).
Simon Hartlieb performed the mechanical and optical design as well as the measurements, evaluations, and article writing. Michael Ringkowski implemented the measurement software and supported the measurements and evaluations. Tobias Haist, Oliver Sawodny, and Wolfgang Osten provided advice throughout the work, including reviewing and editing of the article.
[1] Y. Fu et al. Spatially encoded multibeam laser Doppler vibrometry using a single photodetector. Optics Letters 35, 1356-1358 (2010).
[4] C. Yang et al. A multi-point laser Doppler vibrometer with fiber-based configuration. Review of Scientific Instruments 84, 121702 (2013).
[6] W. N. MacPherson et al. Multipoint laser vibrometer for modal analysis. Applied Optics 46, 3126-3132 (2007).
[9] D. Kim et al. 3-D vibration measurement using a single laser scanning vibrometer by moving to three different locations. IEEE Transactions on Instrumentation and Measurement 63, 2028-2033 (2014).
[12] D. Ribeiro et al. Non-contact measurement of the dynamic displacement of railway bridges using an advanced video-based system. Engineering Structures 75, 164-180 (2014).
[13] A. M. Wahbeh et al. A vision-based approach for the direct measurement of displacements in vibrating systems. Smart Materials and Structures 12, 785-794 (2003).
[14] S. Patsias & W. J. Staszewski. Damage detection using optical measurements and wavelets. Structural Health Monitoring 1, 5-22 (2002).
[16] J. G. Chen et al. Modal identification of simple structures with high-speed video using motion magnification. Journal of Sound and Vibration 345, 58-71 (2015).
[19] Z. Liu et al. Time-varying motion filtering for vision-based nonstationary vibration measurement. IEEE Transactions on Instrumentation and Measurement 69, 3907-3916 (2020).
[21] L. P. Yu & B. Pan. Single-camera high-speed stereo-digital image correlation for full-field vibration measurement. Mechanical Systems and Signal Processing 94, 374-383 (2017).
[22] G. D'Emilia, L. Razzè & E. Zappa. Uncertainty analysis of high frequency image-based vibration measurements. Measurement 46(8), 2630-2637 (2013).
[23] T. Haist et al. Multi-image position detection. Optics Express 22, 14450-14463 (2014).
[24] R. D. Gow et al. A comprehensive tool for modeling CMOS image-sensor-noise performance. IEEE Transactions on Electron Devices 54, 1321-1329 (2007).
[25] B. F. Alexander & K. C. Ng. Elimination of systematic error in subpixel accuracy centroid estimation [also Letter 34(11), 3347-3348 (Nov 1995)]. Optical Engineering 30, 1320-1331 (1991).
[29] S. Hartlieb et al. Hochgenaue Kalibrierung eines holografischen Multi-Punkt-Positionsmesssystems. tm - Technisches Messen 87, 504-513 (2020).
[30] S. Hartlieb et al. Highly accurate imaging based position measurement using holographic point replication. Measurement 172, 108852 (2021).
[32] F. Guerra et al. Precise building deformation measurement using holographic multipoint replication. Applied Optics 59(9), 2746-2753 (2020).
Fig. 1 Scheme of the vibration measurement method, including the spectra of the absolute and relative spot position signals. Point light sources on a vibrating object (red) and a static object (green) are imaged to a camera. The center of the imaged spot is used as the position signal. Environmental vibrations (marked with a green square) are present in the absolute position spectrum $ x_{abs}$ of the red light sources. By calculating the relative movement $ x_{rel}$ between the red and green light sources, ambient vibrations can be removed.
Fig. 2 Scheme of a bi-telecentric lens with f1, f2 as the focal lengths of the lenses L1 and L2. A diffractive optical element (DOE) is placed in the aperture (Fourier) plane. The DOE replicates a single light source into a predefined cluster of N = 21 spots.
Fig. 3 Experimental setup: A telecentric lens (6) images the light sources of a moving (4) and a static (5) object using a high-speed camera (8). Each spot of a light source is replicated using a DOE (7). A piezo stage (2) is used to actuate an object (4), and its displacement is measured by two vibrometers (1,3).
Fig. 4 Three-step image processing procedure: a Initialization with spot map; b Find clusters; c Calculate averaged subpixel cluster positions.
Fig. 5 Accuracy improvement of multipoint method: $ X- $oscillation measurement (frequency 50 Hz, 1 µm amplitude) with the proposed camera system and a vibrometer as a reference. The upper signal $ x_{sp} $ (blue) is evaluated using the position signal of a single spot (Eq. 1) and the lower signal $ x_{cl} $ (green) using the averaged cluster position (Eq. 2) (shown with $ \pm $3 µm offsets). The difference with respect to the reference signal $ x_{vib} $ is plotted in the lower chart.
Fig. 6 Piezo stage oscillation ($ X_W $-direction) of 50 Hz and 100 nm amplitude measured by the proposed camera setup and a vibrometer as reference. a Position signals over time of camera (red) and vibrometer (blue); the difference is plotted beneath. Both signals are high-pass filtered (30 Hz cutoff frequency). b Amplitude spectrum of the camera signal (upper) and the vibrometer signal (lower). Both have a peak frequency at 49.62 Hz. The filtered-out ambient vibration spectrum is shown in light blue.
Fig. 7 a The camera image of the experimental setup of Fig. 3 containing four clusters. The blue clusters $ C_0 $ and $ C_1 $ are actively moved by the piezo stage, and the yellow clusters $ C_2 $ and $ C_3 $ are static. b The spectrum of an $ X $-oscillation (50 Hz, 100 nm amplitude) for the mean relative position $ (x_{c_0}+x_{c_1})/2-(x_{c_2}+x_{c_3})/2 $ (upper), the absolute position $ x_{c_0} $ (middle), and the vibrometer position signal (lower). The absolute position signal of the camera and vibrometer show environmental vibrations (green squares) that necessitate the use of a high-pass filter. In the relative signal (upper chart), these vibrations are intrinsically compensated and require no filtering.
Fig. 8 High frequency measurement. a Chirp signal of an inertial shaker with linearly increasing frequency from 200 Hz to 1000 Hz measured with a camera and vibrometer. b and c are the short-time Fourier transform (STFT) of each signal.
Calculus of Trigonometric Functions
Basic derivatives of sine and cosine
Differentiating Various Trig Functions (sin/cos)
Applications of differentiation of Trig Functions (sin/cos)
Finding primitive functions, indefinite integrals, definite integrals (sin/cos)
Integration of sin/cos (non-linear functions)
Basic derivatives of tan
Differentiating Various Trig Functions (tan)
Applications of differentiation of Trig Functions (mixed)
Finding primitive functions, indefinite integrals, definite integrals (tan)
Differentiating Various Trig Functions (mixed functions)
Optimisation with Trig Functions (sin/cos/tan)
Finding primitive functions, indefinite integrals, definite integrals (mixed functions)
Integration of Sine and Cosine
Area bound by curves and x/y axis, area between two curves (sin/cos)
Area between sin/cos curve and linear function
Area between two sin/cos curves
Area bound by curves and x/y axis, area between two curves (tan)
Area between tan curve and linear function
Level 8 - NCEA Level 3
The sine and cosine functions are periodic functions, meaning that when expressed as functions of a variable $x$, there is a quantity $k$ such that if any multiple of $k$ is added to $x$, the same function value occurs. What we mean here is that the function cycles, or repeats.
In symbols, $f$ is a periodic function if $f\left(x\right)=f\left(x+nk\right)$, where $n$ is an integer. Since sine and cosine are periodic functions, the tangent function must also be periodic because of the way it is defined: $\tan\left(x\right)=\frac{\sin\left(x\right)}{\cos\left(x\right)}$.
The period $k$ of both the sine function and the cosine function is $2\pi$. However, the period of the tangent function is just $\pi$.
From the graph, it is clear that the gradient of the sine function also varies periodically - from a maximum of $1$ at $x=0\pm 2n\pi$, through zero at $\frac{\pi}{2}\pm 2n\pi$, to a minimum of $-1$ at $\frac{3\pi}{2}\pm 2n\pi$.
The following graph shows how the gradient of the sine function varies and it also shows that the variations in the gradient correspond to the form of the cosine function.
The graphs strongly suggest that if $f\left(x\right)=\sin x$, then the derivative is $f'\left(x\right)=\cos x$.
Observation of graphs suggests, further, that if $g\left(x\right)=\cos x$, then $g'\left(x\right)=-\sin\left(x\right)$ is its derivative. You could check, for example, that the gradient of $\cos x$ increases between $x=\frac{\pi}{2}$ and $x=\frac{3\pi}{2}$ while on the same interval, $\sin x$ is decreasing and, therefore, $-\sin x$ is increasing, corresponding to the increasing gradient of $\cos x$.
To obtain the derivative of $\sin x$ in a more rigorous way, we need to evaluate the 'first principles' statement:
$f'\left(x\right)=\lim_{h\rightarrow0}\frac{\sin\left(x+h\right)-\sin x}{h}$
To do this we need the expansion $\sin\left(x+h\right)=\sin x\cos h+\cos x\sin h$, and we also need to evaluate the limits $\lim_{h\rightarrow0}\frac{\sin h}{h}$ and $\lim_{h\rightarrow0}\frac{\cos h-1}{h}$. This is done with the help of some circle geometry in more complete treatments of the topic, and the results given above are confirmed.
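As a quick numerical sanity check of this limit (an illustration only, not part of the formal proof), the difference quotient of $\sin x$ can be evaluated for shrinking $h$ and compared with $\cos x$:

```python
import math

# The difference quotient of sin should approach cos(x) as h shrinks.
x = 1.0
for h in (0.1, 0.01, 0.001, 0.0001):
    dq = (math.sin(x + h) - math.sin(x)) / h
    print(h, dq, math.cos(x))   # dq -> cos(1.0) ≈ 0.5403
```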
The usual rules for differentiating sums, products or quotients of functions apply to the trigonometric functions, as do the rules for differentiating a function of a function.
What is the gradient function for $f(x)=\tan x$?
We write $y=\tan x=\frac{\sin x}{\cos x}$. Then $y'=\frac{\cos x\cdot\cos x-\sin x\cdot\left(-\sin x\right)}{\cos^2x}$. That is, $y'=\frac{\cos^2x+\sin^2x}{\cos^2x}=\frac{1}{\cos^2x}=\sec^2x$.
For what values of $x$ does the function defined by $y=2\sin\left(x+\frac{\pi}{6}\right)$ reach its maximum?
In this function, three operations have been applied. First, $\frac{\pi}{6}$π6 is added to the variable $x$x. Then, the sine of the result is obtained, and finally, there is a multiplication by $2$2. The function is a composite: a function of a function of a function. The steps in the following long-winded solution would not need to be fully written out after some practice in this type of problem.
Put $u=\sin\left(x+\frac{\pi}{6}\right)$. Then $y=2u$ and $\frac{dy}{du}=2$.
Next, put $v=x+\frac{\pi}{6}$. Then $u=\sin v$ and $\frac{du}{dv}=\cos v$.
Finally, $\frac{dv}{dx}=1$.
So, $y'=\frac{dy}{du}\frac{du}{dv}\frac{dv}{dx}=2\cdot\cos v\cdot1=2\cos\left(x+\frac{\pi}{6}\right)$.
The function $y=2\sin\left(x+\frac{\pi}{6}\right)$ reaches its maximum at points where its derivative is zero. So, we put $2\cos\left(x+\frac{\pi}{6}\right)=0$ and solve for $x$.
The cosine function has zeros at $\pm\frac{\pi}{2}$, $\pm\frac{3\pi}{2}$, and so on. Of these, one that corresponds to a maximum of the sine function is $\frac{\pi}{2}$. Therefore, we require $x+\frac{\pi}{6}=\frac{\pi}{2}$. That is, $x=\frac{\pi}{3}$ and, considering the periodic nature of the sine function, the full solution must be
$x=\frac{\pi}{3}\pm2n\pi$
More Worked Examples
Consider the graphs of $y=\cos x$ and its derivative $y'=-\sin x$ below. A number of points have been labelled on the graph of $y'=-\sin x$.
Which point on the gradient function corresponds to where the graph of $y=\cos x$ is increasing most rapidly?
$A$
$B$
$C$
$D$
$E$
Which point on the gradient function corresponds to where the graph of $y=\cos x$ is decreasing most rapidly?
Which points on the gradient function correspond to where the graph of $y=\cos x$ is stationary?
There is an expansion system in mathematics that allows a function to be written in terms of powers of $x$. The values of $\sin x$ and $\cos x$, for any value of $x$, can be given by the expansions below:
$\sin x=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\frac{x^9}{9!}-\cdots$
$\cos x=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\frac{x^8}{8!}-\cdots$
Find $\frac{d}{dx}\left(\sin x\right)$ by filling in the gaps below.
$\frac{d}{dx}\left(\sin x\right)=1-\frac{3x^2}{3!}+\frac{5x^4}{5!}-\frac{7x^6}{7!}+\frac{9x^8}{9!}-\cdots$
$=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\frac{x^8}{8!}-\cdots$
$=\cos x$
Find $\frac{d}{dx}\left(\cos x\right)$ by filling in the gaps below.
$\frac{d}{dx}\left(\cos x\right)=-\frac{2x}{2!}+\frac{4x^3}{4!}-\frac{6x^5}{6!}+\frac{8x^7}{8!}-\cdots$
$=-\frac{x}{1!}+\frac{x^3}{3!}-\frac{x^5}{5!}+\frac{x^7}{7!}-\cdots$
$=-\sin x$
Consider the function $y=\sin ax$, where $a$ is a constant.
Let $u=ax$.
Rewrite the function in terms of $u$.
Determine $\frac{du}{dx}$.
Hence, determine $\frac{dy}{dx}$ in terms of $x$.
Consider the function $f\left(x\right)=\sin3x$.
State $f'\left(x\right)$.
Hence, evaluate $f'\left(\frac{\pi}{6}\right)$.
Choose and apply a variety of differentiation, integration, and antidifferentiation techniques to functions and relations, using both analytical and numerical methods
Apply differentiation methods in solving problems
Distance is a numerical description of how far apart objects are. In physics or everyday usage, distance may refer to a physical length, or an estimation based on other criteria (e.g. "two counties over"). In mathematics, a distance function or metric is a generalization of the concept of physical distance. A metric is a function that behaves according to a specific set of rules, and is a concrete way of describing what it means for elements of some space to be "close to" or "far away from" each other. In most cases, "distance from A to B" is interchangeable with "distance between B and A".
1 Mathematics
1.1 Geometry
1.2 Distance in Euclidean space
1.3 Variational formulation of distance
1.4 Generalization to higher-dimensional objects
1.5 Algebraic distance
1.6 General case
1.7 Distances between sets and between a point and a set
1.8 Graph theory
2 Distance versus directed distance and displacement
2.1 Directed distance
2.2 Displacement
3 Other "distances"
In analytic geometry, the distance between two points of the xy-plane can be found using the distance formula. The distance between (x1, y1) and (x2, y2) is given by:
$d=\sqrt{(\Delta x)^{2}+(\Delta y)^{2}}=\sqrt{(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}.$
Similarly, given points (x1, y1, z1) and (x2, y2, z2) in three-space, the distance between them is:
$d=\sqrt{(\Delta x)^{2}+(\Delta y)^{2}+(\Delta z)^{2}}=\sqrt{(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}+(z_{2}-z_{1})^{2}}.$
These formulas are easily derived by constructing a right triangle with a leg on the hypotenuse of another (with the other leg orthogonal to the plane that contains the first triangle) and applying the Pythagorean theorem. In the study of complicated geometries, we call this (most common) type of distance Euclidean distance, as it is derived from the Pythagorean theorem, which does not hold in non-Euclidean geometries. This distance formula can also be expanded into the arc-length formula.
Distance in Euclidean space
In the Euclidean space Rn, the distance between two points is usually given by the Euclidean distance (2-norm distance). Other distances, based on other norms, are sometimes used instead.
For a point (x1, x2, ...,xn) and a point (y1, y2, ...,yn), the Minkowski distance of order p (p-norm distance) is defined as:
1-norm distance: $\sum_{i=1}^{n}\left|x_{i}-y_{i}\right|$
2-norm distance: $\left(\sum_{i=1}^{n}\left|x_{i}-y_{i}\right|^{2}\right)^{1/2}$
p-norm distance: $\left(\sum_{i=1}^{n}\left|x_{i}-y_{i}\right|^{p}\right)^{1/p}$
infinity norm distance: $\lim_{p\to\infty}\left(\sum_{i=1}^{n}\left|x_{i}-y_{i}\right|^{p}\right)^{1/p}=\max\left(|x_{1}-y_{1}|,|x_{2}-y_{2}|,\ldots,|x_{n}-y_{n}|\right).$
p need not be an integer, but it cannot be less than 1, because otherwise the triangle inequality does not hold.
The 2-norm distance is the Euclidean distance, a generalization of the Pythagorean theorem to more than two coordinates. It is what would be obtained if the distance between two points were measured with a ruler: the "intuitive" idea of distance.
The 1-norm distance is more colourfully called the taxicab norm or Manhattan distance, because it is the distance a car would drive in a city laid out in square blocks (if there are no one-way streets).
The infinity norm distance is also called Chebyshev distance. In 2D, it is the minimum number of moves kings require to travel between two squares on a chessboard.
The p-norm is rarely used for values of p other than 1, 2, and infinity, but see superellipse.
In physical space the Euclidean distance is in a way the most natural one, because in this case the length of a rigid body does not change with rotation.
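A small sketch tying the norms above together (illustrative only; the function name is ours, not part of any standard library):

```python
import math

def minkowski_distance(x, y, p):
    """p-norm distance between two points given as equal-length sequences.

    p = 1 gives the taxicab distance, p = 2 the Euclidean distance, and
    p = float('inf') the Chebyshev distance; p must be >= 1 for the
    triangle inequality to hold.
    """
    diffs = [abs(a - b) for a, b in zip(x, y)]
    if math.isinf(p):
        return max(diffs)
    return sum(d ** p for d in diffs) ** (1.0 / p)

print(minkowski_distance((0, 0), (3, 4), 2))             # 5.0 (Euclidean)
print(minkowski_distance((0, 0), (3, 4), 1))             # 7.0 (taxicab)
print(minkowski_distance((0, 0), (3, 4), float('inf')))  # 4.0 (Chebyshev)
```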
Variational formulation of distance
The Euclidean distance between two points in space ($A=\vec{r}(0)$ and $B=\vec{r}(T)$) may be written in a variational form where the distance is the minimum value of an integral:
$D=\int_{0}^{T}\sqrt{\left(\frac{\partial\vec{r}(t)}{\partial t}\right)^{2}}\,dt$
Here $\vec{r}(t)$ is the trajectory (path) between the two points. The value of the integral ($D$) represents the length of this trajectory. The distance is the minimal value of this integral and is obtained when $r=r^{*}$, where $r^{*}$ is the optimal trajectory. In the familiar Euclidean case (the above integral) this optimal trajectory is simply a straight line. It is well known that the shortest path between two points is a straight line. Straight lines can formally be obtained by solving the Euler–Lagrange equations for the above functional. In non-Euclidean manifolds (curved spaces) where the nature of the space is represented by a metric $g_{ab}$, the integrand has to be modified to $\sqrt{g^{ac}\dot{r}_{c}g_{ab}\dot{r}^{b}}$, where the Einstein summation convention has been used.
Generalization to higher-dimensional objects
The Euclidean distance between two objects may also be generalized to the case where the objects are no longer points but are higher-dimensional manifolds, such as space curves, so in addition to talking about distance between two points one can discuss concepts of distance between two strings. Since the new objects that are dealt with are extended objects (not points anymore) additional concepts such as non-extensibility, curvature constraints, and non-local interactions that enforce non-crossing become central to the notion of distance. The distance between the two manifolds is the scalar quantity that results from minimizing the generalized distance functional, which represents a transformation between the two manifolds:
$\mathcal{D}=\int_{0}^{L}\int_{0}^{T}\left\{\sqrt{\left(\frac{\partial\vec{r}(s,t)}{\partial t}\right)^{2}}+\lambda\left[\sqrt{\left(\frac{\partial\vec{r}(s,t)}{\partial s}\right)^{2}}-1\right]\right\}\,ds\,dt$
The above double integral is the generalized distance functional between two polymer conformations. $s$ is a spatial parameter and $t$ is pseudo-time. This means that $\vec{r}(s,t=t_{i})$ is the polymer/string conformation at time $t_{i}$ and is parameterized along the string length by $s$. Similarly $\vec{r}(s=S,t)$ is the trajectory of an infinitesimal segment of the string during transformation of the entire string from conformation $\vec{r}(s,0)$ to conformation $\vec{r}(s,T)$. The term with cofactor $\lambda$ is a Lagrange multiplier and its role is to ensure that the length of the polymer remains the same during the transformation. If two discrete polymers are inextensible, then the minimal-distance transformation between them no longer involves purely straight-line motion, even on a Euclidean metric. There is a potential application of such generalized distance to the problem of protein folding.[1][2] This generalized distance is analogous to the Nambu-Goto action in string theory, however there is no exact correspondence because the Euclidean distance in 3-space is inequivalent to the space-time distance minimized for the classical relativistic string.
Algebraic distance
This is a metric often used in computer vision that can be minimized by least squares estimation.[1][2] For curves or surfaces given by the equation $x^{T}Cx=0$ (such as a conic in homogeneous coordinates), the algebraic distance from the point $x'$ to the curve is simply $x'^{T}Cx'$. It may serve as an "initial guess" for geometric distance to refine estimations of the curve by more accurate methods, such as non-linear least squares.
General case
In mathematics, in particular geometry, a distance function on a given set M is a function d: M×M → R, where R denotes the set of real numbers, that satisfies the following conditions:
d(x,y) ≥ 0, and d(x,y) = 0 if and only if x = y. (Distance is positive between two different points, and is zero precisely from a point to itself.)
It is symmetric: d(x,y) = d(y,x). (The distance between x and y is the same in either direction.)
It satisfies the triangle inequality: d(x,z) ≤ d(x,y) + d(y,z). (The distance between two points is the shortest distance along any path).
Such a distance function is known as a metric. Together with the set, it makes up a metric space.
For example, the usual definition of distance between two real numbers x and y is: d(x,y) = |x − y|. This definition satisfies the three conditions above, and corresponds to the standard topology of the real line. But distance on a given set is a definitional choice. Another possible choice is to define: d(x,y) = 0 if x = y, and 1 otherwise. This also defines a metric, but gives a completely different topology, the "discrete topology"; with this definition numbers cannot be arbitrarily close.
Distances between sets and between a point and a set
Figure: sets for which d(A, B) > d(A, C) + d(C, B), showing that the triangle inequality can fail for distances between sets.
Various distance definitions are possible between objects. For example, between celestial bodies one should not confuse the surface-to-surface distance and the center-to-center distance. If the former is much less than the latter, as for a LEO, the first tends to be quoted (altitude), otherwise, e.g. for the Earth-Moon distance, the latter.
There are two common definitions for the distance between two non-empty subsets of a given set:
One version of distance between two non-empty sets is the infimum of the distances between any two of their respective points, which is the every-day meaning of the word, i.e.
$d(A,B)=\inf_{x\in A,\,y\in B}d(x,y).$
This is a symmetric premetric. On a collection of sets of which some touch or overlap each other, it is not "separating", because the distance between two different but touching or overlapping sets is zero. Also it is not hemimetric, i.e., the triangle inequality does not hold, except in special cases. Therefore, only in special cases does this distance make a collection of sets a metric space.
The Hausdorff distance is the larger of two values, one being the supremum, for a point ranging over one set, of the infimum, for a second point ranging over the other set, of the distance between the points, and the other value being likewise defined but with the roles of the two sets swapped. This distance makes the set of non-empty compact subsets of a metric space itself a metric space.
The distance between a point and a set is the infimum of the distances between the point and those in the set. This corresponds to the distance, according to the first-mentioned definition above of the distance between sets, from the set containing only this point to the other set.
In terms of this, the definition of the Hausdorff distance can be simplified: it is the larger of two values, one being the supremum, for a point ranging over one set, of the distance between the point and the set, and the other value being likewise defined but with the roles of the two sets swapped.
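For finite point sets the Hausdorff distance can be computed directly from this definition; the following sketch (illustrative only, with hypothetical example sets) takes the min over the other set for each point and then the max in both directions:

```python
import math

def hausdorff_distance(A, B):
    """Hausdorff distance between two finite, non-empty point sets.

    For each point, take its distance to the other set (an infimum,
    here a min); then take the sup (max) over points, in both directions.
    """
    def point_to_set(p, S):
        return min(math.dist(p, q) for q in S)  # distance from point to set

    return max(max(point_to_set(a, B) for a in A),
               max(point_to_set(b, A) for b in B))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (4.0, 0.0)]
print(hausdorff_distance(A, B))  # 3.0, dominated by (4,0)'s distance to A
```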
In graph theory the distance between two vertices is the length of the shortest path between those vertices.
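For an unweighted graph this shortest-path distance can be found with a breadth-first search; a minimal sketch (the adjacency structure and names are our own illustration):

```python
from collections import deque

def graph_distance(adj, s, t):
    """Length of a shortest path between vertices s and t in an unweighted
    graph given as an adjacency dict (breadth-first search)."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return float('inf')  # t unreachable from s

adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(graph_distance(adj, 'a', 'd'))  # 3
```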
Distance versus directed distance and displacement
Figure: distance along a path compared with displacement.
Distance cannot be negative and distance travelled never decreases. Distance is a scalar quantity or a magnitude, whereas displacement is a vector quantity with both magnitude and direction. Directed distance is a positive, zero, or negative scalar quantity.
The distance covered by a vehicle (for example as recorded by an odometer), person, animal, or object along a curved path from a point A to a point B should be distinguished from the straight-line distance from A to B. For example, whatever the distance covered during a round trip from A to B and back to A, the displacement is zero, as start and end points coincide. In general, the straight-line distance does not equal distance travelled, except for journeys in a straight line.
Directed distance
Directed distances are distances with a directional sense. They can be determined along straight lines and along curved lines. A directed distance of a point C from point A in the direction of B on a line AB in a Euclidean vector space is the distance from A to C if C falls on the ray AB, but is the negative of that distance if C falls on the ray BA (i.e., if C is not on the same side of A as B is).
A directed distance along a curved line is not a vector and is represented by a segment of that curved line defined by endpoints A and B, with some specific information indicating the sense (or direction) of an ideal or real motion from one endpoint of the segment to the other (see figure). For instance, just labelling the two endpoints as A and B can indicate the sense, if the ordered sequence (A, B) is assumed, which implies that A is the starting point.
A displacement (see above) is a special kind of directed distance defined in mechanics. A directed distance is called displacement when it is the distance along a straight line (minimum distance) from A to B, and when A and B are positions occupied by the same particle at two different instants of time. This implies motion of the particle. The distance traveled by a particle must always be greater than or equal to its displacement, with equality occurring only when the particle moves along a straight path.
Another kind of directed distance is that between two different particles or point masses at a given time. For instance, the distance from the center of gravity of the Earth A and the center of gravity of the Moon B (which does not strictly imply motion from A to B) falls into this category.
Other "distances"
Canberra distance
Chebyshev distance
E-statistics, or energy statistics, which are functions of distances between statistical observations
Hamming distance and Lee distance, which are used in coding theory
Kullback–Leibler distance, which measures the difference between two probability distributions
Levenshtein distance
Mahalanobis distance is used in statistics
Circular distance is the distance traveled by a wheel. The circumference of the wheel is 2π × radius, and assuming the radius to be 1, then each revolution of the wheel is equivalent of the distance 2π radians. In engineering ω = 2πƒ is often used, where ƒ is the frequency.
Astronomical system of units
Comoving distance
Cosmic distance ladder
Distance (graph theory)
Distance geometry
Distance measures (cosmology)
Dijkstra's algorithm
Distance matrix
Distance-based road exit numbers
Engineering tolerance
Great-circle distance
Hamming distance
Lee distance
Meridian arc
Metric (mathematics)
Metric space
Orders of magnitude (length)
Proper length
Proxemics – physical distance between people
Taxicab geometry
[1] S. S. Plotkin, PNAS 2007; 104: 14899–14904.
[2] A. R. Mohazab, S. S. Plotkin, "Minimal Folding Pathways for Coarse-Grained Biopolymer Fragments", Biophysical Journal, Volume 95, Issue 12, Pages 5496–5507.
Book: Environmental Histories of the First World War
Edited by Richard P. Tucker, University of Michigan, Ann Arbor; Tait Keller, Rhodes College, Memphis; J. R. McNeill, Georgetown University, Washington DC; Martin Schmid
Print publication: 23 August 2018
Part I - Europe and North America, pp 17-96
Part II - War's Global Reach, pp 97-172
Part III - The Middle East and Africa
Part IV - The Long Aftermath, pp 255-295
Environmental Histories of the First World War
Edited by Richard P. Tucker, Tait Keller, J. R. McNeill, Martin Schmid
This anthology surveys the ecological impacts of the First World War. Editors Richard P. Tucker, Tait Keller, J. R. McNeill, and Martin Schmid bring together experienced authors who explore the global interactions of states, armies, civilians, and the environment during the war. They show how the First World War ushered in enormous environmental changes, including the devastation of rural and urban environments, the consumption of strategic natural resources such as metals and petroleum, the impact of war on urban industry, and the disruption of agricultural landscapes leading to widespread famine. Taking a global perspective, Environmental Histories of the First World War presents the ecological consequences of the vast destructive power of the new weaponry and the close collaboration between militaries and civilian governments taking place during this time, showing how this war set trends for the rest of the century.
KIT coaxial gyrotron development: from ITER toward DEMO
S. Ruess, K. A. Avramidis, M. Fuchs, G. Gantenbein, Z. Ioannidis, S. Illy, J. Jin, P. C. Kalaria, T. Kobarg, I. Gr. Pagonakis, T. Ruess, T. Rzesnicki, M. Schmid, M. Thumm, J. Weggen, A. Zein, J. Jelonnek
Journal: International Journal of Microwave and Wireless Technologies / Volume 10 / Issue 5-6 / June 2018
Karlsruhe Institute of Technology (KIT) conducts research and development in the field of megawatt-class radio frequency (RF) sources (gyrotrons) for the Electron Cyclotron Resonance Heating (ECRH) systems of the International Thermonuclear Experimental Reactor (ITER) and the DEMOnstration Fusion Power Plant that will follow ITER. The focus is the development and verification of the European coaxial-cavity gyrotron technology, which shall lead to gyrotrons operating at an RF output power significantly larger than 1 MW CW and at an operating frequency above 200 GHz. A major step in that direction is the final verification of the European 170 GHz 2 MW coaxial-cavity pre-prototype at longer pulses up to 1 s. It is based on the upgrade of an already existing, highly modular short-pulse (ms-range) pre-prototype, which has already demonstrated a world-record output power of 2.2 MW. This paper briefly summarizes the experimental results achieved with the short-pulse pre-prototype and discusses in detail the design and manufacturing process of its upgrade toward pulses up to 1 s.
A data-assimilation method for Reynolds-averaged Navier–Stokes-driven mean flow reconstruction
Dimitry P. G. Foures, Nicolas Dovetta, Denis Sipp, Peter J. Schmid
Journal: Journal of Fluid Mechanics / Volume 759 / 25 November 2014
Published online by Cambridge University Press: 04 November 2014, pp. 404-431
Print publication: 25 November 2014
We present a data-assimilation technique based on a variational formulation and a Lagrange multipliers approach to enforce the Navier–Stokes equations. A general operator (referred to as the measure operator) is defined in order to mathematically describe an experimental measure. The presented method is applied to the case of mean flow measurements. Such a flow can be described by the Reynolds-averaged Navier–Stokes (RANS) equations, which can be formulated as the classical Navier–Stokes equations driven by a forcing term involving the Reynolds stresses. The stress term is an unknown of the equations and is thus chosen as the control parameter in our study. The data-assimilation algorithm is derived to minimize the error between a mean flow measurement and the measure performed on a numerical solution of the steady, forced Navier–Stokes equations; the optimal forcing is found when this error is minimal. We demonstrate the developed data-assimilation framework on a test case: the two-dimensional flow around an infinite cylinder at a Reynolds number of $\mathit{Re}=150$ . The mean flow is computed by time-averaging instantaneous flow fields from a direct numerical simulation (DNS). We then perform several 'measures' on this mean flow and apply the data-assimilation method to reconstruct the full mean flow field. Spatial interpolation, extrapolation, state vector reconstruction and noise filtering are considered independently. The efficacy of the developed identification algorithm is quantified for each of these cases and compared with more traditional methods when possible. We also analyse the identified forcing in terms of unsteadiness characterization, present a way to recover the second-order statistical moments of the fluctuating velocities and finally explore the possibility of pressure reconstruction from velocity measurements.
Optimal mixing in two-dimensional plane Poiseuille flow at finite Péclet number
D. P. G. Foures, C. P. Caulfield, P. J. Schmid
Journal: Journal of Fluid Mechanics / Volume 748 / 10 June 2014
We consider the nonlinear optimisation of the mixing of a passive scalar, initially arranged in two layers, in a two-dimensional plane Poiseuille flow at finite Reynolds and Péclet numbers, below the linear instability threshold. We use a nonlinear-adjoint-looping approach to identify optimal perturbations leading to maximum time-averaged energy as well as maximum mixing in a freely evolving flow, measured through the minimisation of either the passive scalar variance or the so-called mix-norm, as defined by Mathew, Mezić & Petzold (Physica D, vol. 211, 2005, pp. 23–46). We show that energy optimisation appears to lead to very weak mixing of the scalar field whereas the optimal mixing initial perturbations, despite being less energetic, are able to homogenise the scalar field very effectively. For sufficiently long time horizons, minimising the mix-norm identifies optimal initial perturbations which are very similar to those which minimise scalar variance, demonstrating that minimisation of the mix-norm is an excellent proxy for effective mixing in this finite-Péclet-number bounded flow. By analysing the time evolution from initial perturbations of several optimal mixing solutions, we demonstrate that our optimisation method can identify the dominant underlying mixing mechanism, which appears to be classical Taylor dispersion, i.e. shear-augmented diffusion. The optimal mixing proceeds in three stages. First, the optimal mixing perturbation, energised through transient amplitude growth, transports the scalar field across the channel width. In a second stage, the mean flow shear acts to disperse the scalar distribution leading to enhanced diffusion. In a final third stage, linear relaxation diffusion is observed. We also demonstrate the usefulness of the developed variational framework in a more realistic control case: mixing optimisation by prescribed streamwise velocity boundary conditions.
Synchrotron-Based Chemical Nano-Tomography of Microbial Cell-Mineral Aggregates in their Natural, Hydrated State
Gregor Schmid, Fabian Zeitvogel, Likai Hao, Pablo Ingino, Wolfgang Kuerner, James J. Dynes, Chithra Karunakaran, Jian Wang, Yingshen Lu, Travis Ayers, Chuck Schietinger, Adam P. Hitchcock, Martin Obst
Journal: Microscopy and Microanalysis / Volume 20 / Issue 2 / April 2014
Print publication: April 2014
Chemical nano-tomography of microbial cells in their natural, hydrated state provides direct evidence of metabolic and chemical processes. Cells of the nitrate-reducing Acidovorax sp. strain BoFeN1 were cultured in the presence of ferrous iron. Bacterial reduction of nitrate causes precipitation of Fe(III)-(oxyhydr)oxides in the periplasm and in direct vicinity of the cells. Nanoliter aliquots of cell-suspension were injected into custom-designed sample holders wherein polyimide membranes collapse around the cells by capillary forces. The immobilized, hydrated cells were analyzed by synchrotron-based scanning transmission X-ray microscopy in combination with angle-scan tomography. This approach provides three-dimensional (3D) maps of the chemical species in the sample by employing their intrinsic near-edge X-ray absorption properties. The cells were scanned through the focus of a monochromatic soft X-ray beam at different, chemically specific X-ray energies to acquire projection images of their corresponding X-ray absorbance. Based on these images, chemical composition maps were then calculated. Acquiring projections at different tilt angles allowed for 3D reconstruction of the chemical composition. Our approach allows for 3D chemical mapping of hydrated samples and thus provides direct evidence for the localization of metabolic and chemical processes in situ.
Varicella zoster virus in American Samoa: seroprevalence and predictive value of varicella disease history in elementary and college students
A. MAHAMUD, J. LEUNG, Y. MASUNU-FALEAFAGA, E. TESHALE, R. WILLIAMS, T. DULSKI, M. THIEME, P. GARCIA, D. S. SCHMID, S. R. BIALEK
Journal: Epidemiology & Infection / Volume 142 / Issue 5 / May 2014
Published online by Cambridge University Press: 26 July 2013, pp. 1002-1007
The epidemiology of varicella is believed to differ between temperate and tropical countries. We conducted a varicella seroprevalence study in elementary and college students in the US territory of American Samoa before introduction of a routine varicella vaccination programme. Sera from 515 elementary and 208 college students were tested for the presence of varicella-zoster virus (VZV) IgG antibodies. VZV seroprevalence increased with age from 76·0% in the 4–6 years group to 97·7% in those aged ⩾23 years. Reported history of varicella disease for elementary students was significantly associated with VZV seropositivity. The positive and negative predictive values of varicella disease history were 93·4% and 36·4%, respectively, in elementary students and 97·6% and 3·0%, respectively, in college students. VZV seroprevalence in this Pacific island appears to be similar to that in temperate countries and suggests endemic VZV circulation.
Localization of flow structures using $\infty$-norm optimization
Journal: Journal of Fluid Mechanics / Volume 729 / 25 August 2013
Stability theory based on a variational principle and finite-time direct-adjoint optimization commonly relies on the kinetic perturbation energy density $E_{1}(t)=(1/V_{\Omega})\int_{\Omega}e(\boldsymbol{x},t)\,\mathrm{d}\Omega$ (where $e(\boldsymbol{x},t)=\vert\boldsymbol{u}\vert^{2}/2$) as a measure of disturbance size. This type of optimization typically yields optimal perturbations that are global in the fluid domain $\Omega$ of volume $V_{\Omega}$. This paper explores the use of $p$-norms in determining optimal perturbations for 'energy' growth over prescribed time intervals of length $T$. For $p=1$ the traditional energy-based stability analysis is recovered, while for large $p\gg1$, localization of the optimal perturbations is observed which identifies confined regions, or 'hotspots', in the domain where significant energy growth can be expected. In addition, the $p$-norm optimization yields insight into the role and significance of various regions of the flow regarding the overall energy dynamics. As a canonical example, we choose to solve the $\infty$-norm optimal perturbation problem for the simple case of two-dimensional channel flow. For such a configuration, several solution branches emerge, each of them identifying a different energy production zone in the flow: either the centre or the walls of the domain. We study several scenarios (involving centre or wall perturbations) leading to localized energy production for different optimization time intervals. Our investigation reveals that even for this simple two-dimensional channel flow, the mechanism for the production of a highly energetic and localized perturbation is not unique in time. We show that wall perturbations are optimal (with respect to the $\infty$-norm) for relatively short and long times, while the centre perturbations are preferred for very short and intermediate times. The developed $p$-norm framework is intended to facilitate worst-case analysis of shear flows and to identify localized regions supporting dominant energy growth.
The preferred mode of incompressible jets: linear frequency response analysis
X. Garnaud, L. Lesshafft, P. J. Schmid, P. Huerre
Journal: Journal of Fluid Mechanics / Volume 716 / 10 February 2013
The linear amplification of axisymmetric external forcing in incompressible jet flows is investigated within a fully non-parallel framework. Experimental and numerical studies have shown that isothermal jets preferably amplify external perturbations for Strouhal numbers in the range $0.25\leq\mathit{St}_{D}\leq0.5$, depending on the operating conditions. In the present study, the optimal forcing of an incompressible jet is computed as a function of the excitation frequency. This analysis characterizes the preferred amplification as a pseudo-resonance with a dominant Strouhal number of around $0.45$. The flow response at this frequency takes the form of a vortical wavepacket that peaks inside the potential core. Its global structure is characterized by the cooperation of local shear-layer and jet-column modes.
Asteroseismology of eclipsing binary stars using Kepler and the hermes spectrograph
V.S. Schmid, J. Debosscher, P. Degroote, C. Aerts
Journal: European Astronomical Society Publications Series / Volume 64 / 2013
We introduce our PhD project in which we focus on pulsating stars in eclipsing binaries. The combination of high-precision Kepler photometry with high-resolution HERMES spectroscopy allows for detailed descriptions of our sample of target stars. We report here the detection of three false positives through radial velocity measurements.
Observing GRBs with the LOFT Wide Field Monitor
S. Brandt, M. Hernanz, M. Feroci, L. Amati, Alvarez, P. Azzarello, D. Barret, E. Bozzo, C. Budtz-Jørgensen, R. Campana, A. Castro-Tirado, A. Cros, E. Del Monte, I. Donnarumma, Y. Evangelista, J.L. Galvez Sanchez, D. Götz, F. Hansen, J.W. den Herder, A. Hornstrup, R. Hudec, D. Karelin, M. van der Klis, S. Korpela, I. Kuvvetli, N. Lund, P. Orleanski, M. Pohl, A. Rachevski, A. Santangelo, S. Schanne, C. Schmid, L. Stella, S. Suchy, C. Tenzer, A. Vacchi, J. Wilms, N. Zampa, J.J.M. in't Zand, A. Zdziarski
LOFT (Large Observatory For X-ray Timing) is one of the four candidate missions currently under assessment study for the M3 mission in ESA's Cosmic Vision program, to be launched in 2024. LOFT will carry two instruments with prime sensitivity in the 2–30 keV range: a 10 m² class large area detector (LAD) with a <1° collimated field of view and a wide field monitor (WFM) instrument. The WFM is based on the coded mask principle, and 5 camera units will provide coverage of more than 1/3 of the sky. The prime goal of the WFM is to detect transient sources to be observed by the LAD. With its wide field of view and good energy resolution of <500 eV, the WFM will be an excellent instrument for detecting and studying GRBs and X-ray flashes. The WFM will be able to detect ~150 gamma-ray bursts per year, and a burst alert system will enable the distribution of ~100 GRB positions per year with a ~1 arcmin location accuracy within 30 s of the burst.
Exponential growth questions and answers
The number of bacteria after t hours in a controlled laboratory experiment is n=f(t) ....
The number of bacteria after $t$ hours in a controlled laboratory experiment is $n=f(t)$. Suppose there is an unlimited amount of space and nutrients for the bacteria. Which do you think is larger, $f'(5)$ or $f'(10)$? If the supply of nutrients is limited, would that affect your conclusion? Explain.
Exponential and Logistic Growth:
Two mathematical models that can express population growth are an exponential and logistic model. The former assumes that a population can grow infinitely large unchecked, and the latter assumes that there is some limit on how large the population can grow. This means that the exponential growth would slow down after a certain point.
Answer and Explanation:
Bacteria reproduce by binary fission, which means that they split in half. Thus, the more bacteria exist in a population, the more that can split in half to form new bacteria. Hence, the rate of change of the population after ten hours will be larger than the rate of change of the population after five hours. Since the rate of change of a function is the derivative, this means that the value of the derivative at ten hours will be larger than the value of the derivative at five. Mathematically, this means that $f'(10)>f'(5)$.
However, in real life, habitats are limited in what they can support. The space that a population can occupy or the nutrients that it consumes may limit the growth of this population. Thus, the rate of change of the population may be affected by the limit on that population. If the limit is low compared to the initial value of the population and the initial rate of change, then the growth of the population would slow down as it approaches this limit. In this case, $f'(10)<f'(5)$. If this limit is high compared to the initial value of the population and the initial rate of change, then the growth may not have needed to slow down even by ten hours. In this case, the conclusion drawn in the previous paragraph would still hold.
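The contrast can be made concrete with a small numerical sketch (the growth rate, initial population, and carrying capacity below are assumptions chosen for illustration): under pure exponential growth the derivative keeps increasing, while the logistic derivative falls once the population nears its limit.

```python
import math

# Exponential growth n(t) = n0 * e^(r t) has derivative r * n(t), which
# keeps rising, so f'(10) > f'(5). Logistic growth with carrying
# capacity K slows near K, which can reverse that ordering.
n0, r, K = 100.0, 0.5, 1500.0   # illustrative parameter choices

def exp_rate(t):
    return r * n0 * math.exp(r * t)

def logistic_rate(t):
    n = K / (1 + ((K - n0) / n0) * math.exp(-r * t))
    return r * n * (1 - n / K)

print(exp_rate(5), exp_rate(10))            # second value is far larger
print(logistic_rate(5), logistic_rate(10))  # growth has slowed by t = 10
```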
Habitat preferences of diurnal raptors in relation to human access to their breeding territories in the Balkan Mountain Range, Bulgaria
Nadejda Djorgova (ORCID: orcid.org/0000-0003-1473-7196)¹,
Dimitar Ragyov¹,
Valko Biserkov¹,
Jordan Biserkov (ORCID: orcid.org/0000-0002-1180-2725)¹ &
Boris P. Nikolov (ORCID: orcid.org/0000-0001-8818-8607)¹
In this study we examined the habitat preferences of three diurnal raptors in relation to human access. We aimed to identify the selection of breeding habitat by the Golden Eagle (Aquila chrysaetos), the Long-legged Buzzard (Buteo rufinus), and the Peregrine Falcon (Falco peregrinus) in response to site accessibility by humans, and in turn, the response of these species to human presence.
Data about the nest locations were collected. Analyses and maps were created using ArcGIS. The "least cost path" was defined using the Cost Path tool.
The lowest values of the Cost Path were established for Long-legged Buzzard and the highest values were estimated for Golden Eagle. Intermediate Cost Path values for Peregrine Falcon were found.
The Long-legged Buzzard could be considered as the most tolerant to human presence in its breeding territories. The Golden Eagle have the lowest degree of tolerance and the Peregrine Falcon is ranked in an intermediate position compared to the other two species, but closer to Golden Eagle.
Raptors are generally used as indicators of anthropogenic effects, as they are higher order predators, long lived, require large territories, and are often sensitive to habitat change (Newton 1979; Russo 2006; Sergio et al. 2006; Carrete et al. 2009). Identifying the factors governing the nesting habitat selection of raptors with high conservation status may be of primary importance in determining the conservation objectives. Environmental variables are often at the forefront of habitat selection studies for raptors however we also need to consider the effect of anthropogenic factors in influencing habitat selection. Human activities impact the raptors by altering the characteristics of their habitats (Watson 1991, 1992; Xirouchakis 2001; Russo 2006; Whitfield et al. 2007; Demerdzhiev et al. 2014). In many instances human presence in the breeding territories causes disturbance of the pairs, which are particularly sensitive in pre-reproductive and reproductive periods (Tjernberg 1983; Andersen et al. 1990; Watson and Dennis 1992; Alivizatos et al. 1998; Pedrini and Sergio 2001; Redinov 2010; Spaul and Heath 2016, 2017; Perona et al. 2019).
Accessibility for people is an important predictor of direct and indirect human impact on ecosystems (Trombulak and Frissell 2000). Accessibility is determined by the available paths and transport infrastructure, by the terrain slope and unevenness, the natural water barriers, and the vegetation permeability. Tourism and recreational activities (e.g. rock climbing, caving, hang gliding and paragliding, and winter sports) are also an important aspect, in terms of how people use the different accessibility areas. Despite the obvious importance of accessibility, its quantification is intricate. Simple measuring of the distance is often used as the easiest quantification of accessibility, which neglects the influence of topography, water features and vegetation. The ArcGIS tools Cost Distance and Cost Path enable a more sophisticated approach to quantifying accessibility by taking into account the slope of the landscape to be traversed. In the context of ecology, "the least cost path" analysis is traditionally employed mainly to assess the functional connectivity of landscapes for species and determine sites that are potentially used as dispersal routes or that should be conserved as biological corridors (Schadt et al. 2002; LaRue and Nielsen 2008; Jobe and White 2009). Accessibility models can be used to analyze the impact of tourism as well as other human impacts on protected areas in order to identify more appropriate management and protection measures (Esteves et al. 2011). They also reduce costs and increase efficiency in conducting environmental studies (Greenwood 1996).
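The "least cost path" idea can be sketched outside ArcGIS as a shortest-path search over a cost raster; the following simplified illustration (a toy grid and our own function, not the Cost Path tool itself) uses Dijkstra's algorithm, with per-cell traversal costs standing in for slope-derived effort:

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a 2D cost raster, a minimal stand-in for the
    'least cost path' computed by GIS tools. cost[r][c] is the per-cell
    traversal cost (e.g. derived from slope); moves are 4-connected,
    and moving into a cell pays that cell's cost."""
    rows, cols = len(cost), len(cost[0])
    best = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        acc, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return acc
        if acc > best[(r, c)]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nacc = acc + cost[nr][nc]
                if nacc < best.get((nr, nc), float('inf')):
                    best[(nr, nc)] = nacc
                    heapq.heappush(heap, (nacc, (nr, nc)))
    return float('inf')

# Toy raster: high-cost cells stand in for steep slopes.
raster = [[1, 1, 9],
          [9, 1, 9],
          [9, 1, 1]]
print(least_cost_path(raster, (0, 0), (2, 2)))  # 4.0, down the cheap corridor
```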
The Golden Eagle (Aquila chrysaetos) (Linnaeus 1758) and the Peregrine Falcon (Falco peregrinus) (Tunstall 1771) were abundant in Bulgaria until the beginning of the twentieth century, with their populations declining thereafter (Simeonov et al. 1990). The Golden Eagle population in Bulgaria is estimated at 120–150 pairs (Petrov et al. 2015), and the Peregrine Falcon population has been estimated at 200 pairs (Ragyov et al. 2008; Stoyanov et al. 2015). The Long-legged Buzzard (Buteo rufinus) (Cretzschmar 1827) is recorded as a breeding species in Bulgaria, and its breeding population was recently estimated at 800–1000 pairs (Vatev et al. 2015). These species of raptors are common in the Balkan Mountain Range but are of high conservation importance on a national level. All three species are protected by the Bulgarian Biological Diversity Act. The Golden Eagle and Long-legged Buzzard are also listed in the Red Data Book of the Republic of Bulgaria under the category VU‒Vulnerable (Petrov et al. 2015; Vatev et al. 2015), while Peregrine Falcon is listed as EN ‒ Endangered (Stoyanov et al. 2015).
The Golden Eagle is known to live in various rocky habitats, for which proximity to open areas is important. In Bulgaria, its nests are positioned predominantly on cliffs, but they have also been found in trees (Simeonov et al. 1990; Milchew and Georgiewa 1992; Milchev 1994; Petrov et al. 2007).
The Long-legged Buzzard resides on cliffs, in stone quarries near deforested mountain slopes or other open spaces, and on hilly foothills (Simeonov et al. 1990; Vatev et al. 2015). It is also found near wetlands of various types (Michev et al. 1984; Vatev 1987). Its nests are placed on cliffs, in trees and in quarries (Boev 1962; Michev et al. 1984; Vatev 1987; Simeonov et al. 1990; Milchev 2009; Demerdzhiev et al. 2014). The Peregrine Falcon inhabits rocky terrain near open spaces, sometimes with scattered trees and patchy woodland. Very rarely, it occurs in settlements (Simeonov et al. 1990; Stoynov et al. 2007, 2015). For nesting, this species prefers steep, high cliffs with a dominant position in the landscape, as found in many areas of its range (Gainzarain et al. 2000; Jenkins 2000; Sergio et al. 2004; Jenkins and van Zyl 2005; Brambilla et al. 2006; Ragyov et al. 2008; Karyakin and Nikolenko 2009).
The European Ecological Network Natura 2000 is a fundamental tool for the conservation of many bird species and their habitats in the European Union; its main purpose is to cover, within protected areas, the sites most suitable for the survival and reproduction of the target bird species (Fernández and Gurrutxaga 2010). Protected areas (parks and reserves) form the core of biodiversity conservation in Bulgaria. Studies on the selection of breeding territories by protected raptors in relation to human access to their territories can be helpful in creating effective conservation plans and in the design of protected areas.
We formulated the working hypothesis that accessibility is an important factor in the choice of breeding habitat for the Golden Eagle, Long-legged Buzzard and Peregrine Falcon, and that it can be quantified and used as a reliable indicator in assessing anthropogenic impact. We aim to characterize breeding habitat selection by the Golden Eagle, the Long-legged Buzzard and the Peregrine Falcon in response to site accessibility by humans, and to compare the responses of these species to human presence.
The Balkan Mountain Range is the largest mountain range in Bulgaria, covering an area of 11,596 km². It has a length of 530 km, and its width ranges between 15 and 45 km. The average altitude is 722 m, with Botev Peak being the highest point at 2376 m (Fig. 1). A mountainous variety of the temperate continental climate predominates in the Balkan Mountain Range, and the seasonal distribution of precipitation corresponds to the moderate continental regime. The forests are mainly deciduous, dominated by European Beech (Fagus sylvatica), Hornbeam (Carpinus betulus) and Durmast Oak (Quercus petraea), with local occurrences of Moesian Beech (Fagus sylvatica moesiaca), Sweet Chestnut (Castanea sativa), Turkey Oak (Q. cerris) and Hungarian Oak (Q. frainetto). The coniferous forests do not form continuous coverage in the mountain; they are represented mostly by Norway Spruce (Picea abies), but Scots Pine (Pinus sylvestris) and Macedonian Pine (Pinus peuce) are also found. Alpine grass vegetation is present in the highest zone (Kopralev 2002). The Balkan Mountain Range is a region rich in faunal biodiversity. Forest habitats maintain the majority of the faunal biodiversity, but alpine regions also contribute to the biodiversity of the area (Gruev and Kuzmanov 1994). Part of the study area is included in the European Ecological Network Natura 2000 and in Protected Areas, as presented in Fig. 1.
Map of the study area with the recorded breeding territories of the three studied species
Data on the locations of Golden Eagle, Long-legged Buzzard and Peregrine Falcon nests within the Balkan Mountain Range were collected between March and July during the period 2006–2010. Prior to the study, potential nesting sites such as cliff formations and solitary cliffs were identified based on reports of previous nesting, together with a survey of topographic maps and Google Earth™ imagery. Potential locations were explored from an optimal distance, depending on field conditions. The terrain was scanned using binoculars with a laser rangefinder and a spotting scope. The centers of the breeding territories were determined either from nests found or indirectly from the birds' behavior. A GPS device, the binoculars' laser rangefinder and a compass were also used. In some cases, it was not possible to fix the exact location of the nest because of topographic features and/or the risk of unnecessary disturbance to the birds. Most of these cases, however, involved small single-standing cliffs, and the position of the nest could be estimated within an acceptable error of 10–20 m. A total of 38 territories (29 localized nests) of Golden Eagle, 54 territories (38 nests) of Long-legged Buzzard and 20 territories (6 nests) of Peregrine Falcon were identified (Fig. 1).
GIS analyses
The analyses and maps were created using ArcGIS 10.6 (ESRI, Redlands, CA, USA).
Input layers
Input layers are presented in Table 1.
Table 1 Input layers
Cost Distance and Cost Path tools
In this study, we regarded accessibility as an integral characteristic that is inversely proportional to the effort people must make to reach a certain point. The accessibility of a point (e.g. the center of a breeding territory) is determined from a given starting point (e.g. a settlement) and is inversely proportional to the sum of the slope along the flattest path between the two. The ArcGIS Cost Path tool was used to calculate the least cost path, with Cost Distance applied as a preliminary step. When using the Cost Distance tool, the slope of each pixel on the map was computed. In mountain areas at temperate latitudes and at altitudes up to 2000 m, slope is the most important parameter for calculating the total energy cost of moving from the starting point to a given point, compared with altitude, exposure and other parameters (Minetti et al. 2002; Jobe and White 2009). First, a slope raster is created from an elevation raster using the ArcGIS Slope tool. For each pixel, the tool calculates the rate of change in elevation from that pixel to its neighbors in the east–west direction (x-axis) and the north–south direction (y-axis) and uses trigonometry to determine the overall slope in degrees. Finally, the least cost path from a source to a destination is created. The algorithm works by iteratively expanding the neighboring pixels of the source and destination like circular waves and recomputing the least cost required to reach them given the new wave front.
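To make this pipeline concrete, the following is a minimal sketch of the slope-raster and least-cost-path steps using open-source Python tools (NumPy and scikit-image) in place of the ArcGIS Slope, Cost Distance and Cost Path tools; the elevation grid, cell size and endpoints are illustrative placeholders, not study data.

```python
# Minimal sketch: slope raster -> least-cost path (open-source stand-in
# for the ArcGIS Slope / Cost Distance / Cost Path workflow).
import numpy as np
from skimage.graph import route_through_array

def slope_degrees(elevation: np.ndarray, cell_size: float = 30.0) -> np.ndarray:
    """Per-pixel slope (degrees) from an elevation raster, analogous to the
    ArcGIS Slope tool: finite differences along the x- and y-axes, then
    trigonometry to get the overall inclination."""
    dz_dy, dz_dx = np.gradient(elevation, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

rng = np.random.default_rng(42)
elevation = rng.random((200, 200)) * 500.0   # placeholder DEM, meters
cost = slope_degrees(elevation) + 1e-6       # strictly positive cost surface

# Least-cost ("flattest") path from an anthropogenic point to a territory center.
path, total_cost = route_through_array(
    cost, start=(5, 5), end=(180, 150), fully_connected=True, geometric=True)
print(f"cost distance along flattest path: {total_cost:.1f}")
```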
To calculate the cost distance, the formula needs to be adapted to the specific task, so that the result can be related to the ecology of the species and to the environmental parameters and factors analyzed. In the present case, the following initial formula for cost distance was used:
$$ \text{Cost distance} = \min_{\text{pa}} \sum_{\text{ant}}^{\text{rp}} \text{Slope}(\text{input\_pixel}) \times \text{Multiplier}(\text{active\_population}) $$
where "pa" denotes a path, "rp" a result pixel, and "ant" an anthropogenic point.
The Multiplier is a function of active_population and determines how cost distance depends on the active population; the two are inversely proportional. In the simplest case, the relation is described by the function 1/x:
$$ \text{Multiplier}(x) = \frac{1}{x} $$
Graphically, it can be depicted as a virtual moving away of anthropogenic objects at a distance inversely proportional to their active population (Fig. 2).
Scheme of "virtual moving away" of anthropogenic objects
The active population in the present sample ranged from 1 to 77,411. For the bigger settlements (population over 10,000), the 1/x function assigned so much weight to the large number of inhabitants that those settlements completely dominated the rest in determining the disturbance of breeding pairs by human presence (via cost distance and cost path). To correct that bias and achieve a more objective impact assessment, it was assumed that disturbance increases only negligibly once the population exceeds 10,000 residents. Hence, a threshold of 10,000 was introduced:
$$ \text{Multiplier} = \frac{1}{\min(\text{active\_population},\ 10{,}000)} \quad (1) $$
In the sorted sample (without repeats), the ratio r between two consecutive values varied between 1 and 2.5. In this model, the influence of the sources with the lowest numbers was negligible and could in practice be disregarded. To achieve a more objective impact assessment, the ratio r needed to be minimised. The effect of a source of impact is omnidirectional and can be considered as a circle (a two-dimensional object), while the multiplier is a one-dimensional scalar expressing the virtual "distance" from the nest. The most direct relationship between the two is a square root. Thus, the ratio r would vary between 1 and $\sqrt{2.5} \approx 1.58$, and formula (1) was changed accordingly:
$$ \text{Multiplier} = \frac{1}{\sqrt{\min(\text{active\_population},\ 10{,}000)}} \quad (2) $$
Formula (2) yielded values in the range of 0.01 to 1. To obtain values in the range of 1 to 100, the following formula was used:
$$ \text{Multiplier} = \frac{100}{\sqrt{\min(\text{active\_population},\ 10{,}000)}} \quad (3) $$
Applying formula (3) was reflected in the cost path geometry as an increased likelihood of connecting nearer, smaller settlements to the center of the breeding territory, compared with the cost path obtained without the square root.
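As a sanity check, here is a minimal Python sketch of the final multiplier from formula (3); the cap follows the 10,000-resident threshold introduced above, and the function name is illustrative.

```python
# Minimal sketch of the population multiplier (formula 3).
def multiplier(active_population: float, cap: float = 10_000) -> float:
    """Inverse-square-root weight of an anthropogenic point by its active
    population, capped at 10,000 residents; returns values in [1, 100]."""
    return 100.0 / min(active_population, cap) ** 0.5

# The per-pixel cost along a path is Slope(pixel) * multiplier(source), so
# more populous sources are effectively "moved closer" to the nest.
assert multiplier(1) == 100.0          # smallest source: weakest weighting
assert multiplier(10_000) == 1.0       # at the threshold
assert multiplier(77_411) == 1.0       # largest source in the sample, capped
```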
In order to perform a general analysis of the joint disturbance that all anthropogenic objects may cause, their multiplier values must be comparable. In our case, the active population indicator, for which there were accurate data on settlement population numbers, was taken as a base. Where data for other anthropogenic objects were incomplete or of indirect origin, values based on expert knowledge were assigned.
The Cost Path analyses are presented in Table 2. A sample section of the study area presenting the results obtained using the Cost Path tool is shown in Fig. 3.
Table 2 Cost Path analyses
Map of a sample section of the study area presenting the results obtained using Cost Path. The centers of the breeding territories shown are virtual points, obtained by deliberately altering the real locations in order to prevent illegal actions against the protected species
STATISTICA v. 12 (StatSoft Inc. 2014) was used for the statistical analysis of the data. We used Spearman's rank correlation coefficient to test for correlation between variables. Descriptive statistics were used to calculate the mean, minimum, maximum and standard deviation of the cost path values of the various impact points (anthropogenic objects) for all three raptor species under study. Non-parametric one-way ANOVA was used to compare the species' requirements for the location of breeding territories in terms of accessibility. Cluster analysis was used to classify the anthropogenic objects into groups.
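For readers without STATISTICA, an equivalent workflow can be sketched with SciPy; the arrays below are synthetic placeholders (sample sizes mirror the territory counts reported above), not the study data.

```python
# Sketch of the statistical workflow with SciPy (synthetic placeholder data).
import numpy as np
from scipy import stats
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
ge  = rng.gamma(4.0, 50.0, size=38)   # Golden Eagle cost path values
llb = rng.gamma(3.0, 40.0, size=54)   # Long-legged Buzzard
pf  = rng.gamma(4.0, 45.0, size=20)   # Peregrine Falcon

# Spearman's rank correlation between two candidate variables
rho, p_rho = stats.spearmanr(ge[:20], pf)

# Non-parametric one-way ANOVA (Kruskal-Wallis H test) across the species
h_stat, p_kw = stats.kruskal(ge, llb, pf)

# Hierarchical clustering of anthropogenic object types by mean cost path
mean_costs = np.array([[310.0], [295.0], [180.0], [175.0], [90.0], [85.0]])
groups = fcluster(linkage(mean_costs, method="ward"), t=3, criterion="maxclust")
print(rho, p_rho, h_stat, p_kw, groups)
```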
For all three species, the maximum mean cost path belonged to quarries and mines, suggesting that these would have the lowest potential impact on breeding behavior. The greatest potential impact would be expected from settlements and tourist trails. The smallest cost path values were found for the Long-legged Buzzard, indicating that the potential impact of anthropogenic objects is strongest for this species. The maximum cost path values were detected for the Golden Eagle; accordingly, the potential impact of anthropogenic objects is least for this species (Table 3).
Table 3 Descriptive statistics of the cost path values of various impact points (anthropogenic objects) calculated for all three species under study
The correlation analysis showed no strong correlation between the variables (Spearman r < 0.7; n = 112; p < 0.05).
Based on the cost path values, a grouping of the anthropogenic points is presented in Fig. 4.
Grouping of anthropogenic objects based on cost path values
The results of the non-parametric one-way ANOVA are presented in Table 4. Comparing the three species, the most pronounced differences were found between the Long-legged Buzzard and the Golden Eagle. The selection of breeding territories in terms of accessibility differs significantly between these two species and reflects their different responses to human presence near the nest. The lack of statistically significant differences between the Peregrine Falcon and the Golden Eagle suggests that the two species have very similar requirements for the location of breeding territories in terms of accessibility (Table 4).
Table 4 Comparison of cost path values between Golden Eagle (GE), Long-legged Buzzard (LLB) and Peregrine Falcon (PF) using non-parametric one-way ANOVA
The analyses of the distribution of the Golden Eagle, Long-legged Buzzard and Peregrine Falcon breeding territories within the Natura 2000 zones and the Protected Areas in the Balkan Mountains (Fig. 5) showed that Natura 2000 zones covered 84% of the Golden Eagle territories, 81% of the Long-legged Buzzard territories and 90% of the Peregrine Falcon territories. The Protected Areas were found to include 50%, 29% and 60%, respectively, of the territories of the three studied raptor species.
Distribution of the breeding territories of the Golden Eagle, the Long-legged Buzzard and the Peregrine Falcon in the Natura 2000 Zones (SPA – Special Protection Areas; SCI – Sites of Community Importance) and the Protected Areas
Using the average cost path, we segregated anthropogenic effects into three groups. Group one encompassed quarries, mines and religious centres. Group two included lodges, roads and recreational activity starting points, while group three included settlements and pedestrian routes. The anthropogenic objects of group one had the highest cost path values for all three bird species. These were "the highest cost anthropogenic objects", meaning that the centers of the breeding territories were the most difficult to access from those points. Hence, quarries, mines and religious tourism centers had the least impact and the lowest significance as sources of anthropogenic intrusion into the breeding territories. The anthropogenic objects of group two were characterized by similar but lower cost path values. The anthropogenic objects of group three had the lowest cost path values, i.e. these were "the least cost anthropogenic objects". The centers of the breeding territories were the most easily accessible from those points; hence, they were considered the most significant and the most likely to affect the three bird species in the surveyed area. This was also supported by the Cost Path All analysis of the united layer including all anthropogenic objects.
The Golden Eagle is characterized as sensitive to human disturbance (Andersen et al. 1990; Watson 1997). The species positions its breeding territories to avoid the proximity of anthropogenic effects. Shafaeipour (2015) suggests that the Golden Eagle selects cliffs remote from roads and settlements for nesting. Evidence indicates that easily accessible nesting sites are more vulnerable, with higher breeding failure than those that are more difficult to access (Watson and Dennis 1992). Tourism expansion has steadily increased indirect disturbance within breeding territories (Perona et al. 2019). Recreational activities such as rock climbing, paragliding, mountain biking and snowboarding may increase breeding failure and reduce the foraging habitat available to Golden Eagles (Spaul and Heath 2016). A reduced foraging area increases the energy budget required by adult eagles and predisposes nestlings to increased predation (Tjernberg 1983; Pedrini and Sergio 2001). The proportion of occupied territories decreases near tourist destinations (Kaisanlahti-Jokimäki et al. 2008). The territories occupied by the Golden Eagle are characterized for the most part by low levels of disturbance (Xirouchakis 2001; Tapia et al. 2008) and a low density of the human population (Vazhov 2012). In the present study, the highest mean cost path values for the Golden Eagle, as well as the highest mean values of Cost Path All, characterized it as the species with the lowest tolerance of human presence, occupying the territories that are most difficult to access. The major part of the breeding territories of the Golden Eagle (84%) in the Balkan Mountains was found to be within Natura 2000 zones, and only half of the territories were covered by protected areas. That is insufficient, given the high conservation status of the species in Bulgaria, the sensitivity of the Golden Eagle to anthropogenic factors and the fact that 46.7% of its national population resides in the Balkan Mountains (Petrov et al. 2015; Vatev et al. 2015).
The lowest average cost path values in the Balkan Mountains were established for the Long-legged Buzzard for all anthropogenic objects except the lodges. The high cost path values for lodges for the Long-legged Buzzard are probably due to the specific locations of the lodges and represent an accidental departure from the general trend observed in this species; we believe that the occupation of breeding territories by the Long-legged Buzzard is not a function of disturbance from lodges. The results of the Cost Path analysis suggested that the Long-legged Buzzard establishes its breeding territories in the zones of the study area most easily accessible to humans, compared with the other two species, so it can be considered the most tolerant of human presence. The major part of the breeding territories of the Long-legged Buzzard (81%) was found to be within Natura 2000 zones, and only 29% of the territories were covered by protected areas. That is insufficient, given the high conservation status of the species in Bulgaria and the fact that the bulk of the European Long-legged Buzzard population lives on the Balkan Peninsula (Mrlik and Landsfeld 2002; Stoychev and Demerdzhiev 2020).
Although the Peregrine Falcon has been reported to breed in settlements and quarries (Moore et al. 1992, 1997; Crick and Ratcliffe 1995; Cade et al. 1996; Emison et al. 1997; Stoyanov 2003; Brambilla et al. 2006), the average cost path values calculated in this study were relatively high for the species. These values ranked the Peregrine Falcon, by its tolerance of human presence, in an intermediate position between the other two species, but closer to the Golden Eagle. This could be a result of the persecution of the species in Bulgaria for decades, until the second half of the twentieth century (Ragyov et al. 2008). Most of the breeding territories of the Peregrine Falcon (90%) were found to be within Natura 2000 zones, and 60% of the breeding territories are in protected areas, placing this species in a better situation than the other two. The fact that more breeding territories of this species lie within protected areas may have contributed to the population recovery observed in recent years (Stoynov et al. 2007; Ragyov et al. 2008).
The Long-legged Buzzard establishes its breeding territories in the most easily accessible zones, so this species can be considered the most tolerant of disturbance. It proved to be the most adaptable to anthropogenic presence in its breeding habitats, which may have supported the range expansion of the species over recent decades. The Golden Eagle is the species with the lowest tolerance, occupying the territories that are most difficult to access; it avoided proximity to humans to the highest degree, identifying it as the most sensitive to anthropogenic intrusion into its breeding habitat. The Peregrine Falcon ranks in an intermediate position between the other two species. The lack of statistically significant differences between the Peregrine Falcon and the Golden Eagle suggests that, in its response to human presence in its territory, the Peregrine Falcon is closer to the Golden Eagle. Natura 2000 coverage is favorable for all three species: most of their breeding territories fall within Natura 2000 zones. The coverage of the breeding territories of the three species by protected parks and reserves proved insufficient, the situation for the Golden Eagle and the Long-legged Buzzard being particularly adverse. It is recommended that all permanently occupied territories of the Golden Eagle, Long-legged Buzzard and Peregrine Falcon be included in the system of protected areas.
The coordinates of the centers of the breeding territories will not be made publicly available, in order to prevent illegal actions against the protected species.
Alivizatos H, Goutner V, Karandinos MG. Reproduction and behaviour of the Long-legged Buzzard (Buteo rufinus) in North-eastern Greece. Die Vogelwarte. 1998;39:176–82.
Andersen DE, Rongstad OJ, Mytton WR. Home range changes in raptors exposed to increased human activity levels in southeastern Colorado. Wildl Soc Bull. 1990;18:134–42.
Boev N. Données sur l'étendue de l'aire d'habitat estivale de certaines espèces d'oiseaux en Bulgarie. Bull de l'Institut de Zoologie et Musée, BAS. 1962;11:31–46 (in Bulgarian, summary in French).
Brambilla M, Rubolini D, Guidali F. Factors affecting breeding habitat selection in a cliff-nesting peregrine Falco peregrinus population. J Ornithol. 2006;147:428–35.
Cade T, Martell M, Redig P, Septon G, Tordoff H. Peregrine Falcons in urban north America. In: Bird D, Varland D, Negro J, editors. Raptor in human landscapes: adaptation to built and cultivated environments. San Diego: Academic Press; 1996. p. 3–13.
Carrete M, Tella JL, Blanco G, Bertellotti M. Effects of habitat degradation on the abundance, richness and diversity of raptors across Neotropical biomes. Biol Conserv. 2009;142:2002–11.
Crick HQP, Ratcliffe DA. The Peregrine Falco peregrinus breeding population of the United Kingdom in 1991. Bird Study. 1995;42:1–19.
Demerdzhiev D, Dobrev V, Popgeorgiev G. Effects of habitat change on territory occupancy, breeding density and breeding success of Long-legged Buzzard (Buteo rufinus Cretzschmar, 1827) in Besaparski Ridove Special Protection Area (Natura 2000), southern Bulgaria. Acta Zool Bulg. 2014;5(Suppl):191–200.
Emison WB, White CM, Hurley VG, Brimm DJ. Factors influencing the breeding distribution of the peregrine falcon in Victoria, Australia. Wildlife Res. 1997;24:433–44.
Esteves CF, de Barros Ferraz SF, de Barros Ferraz KM, Galetti M, Theobald DM. Human accessibility modelling applied to protected areas management. Nat Conservação. 2011;9:232–9.
Fernández JM, Gurrutxaga M. Habitat suitability models for assessing bird conservation goals in spatial protection areas. Ardeola. 2010;57:79–91.
Gainzarain JA, Arambarri R, Rodríguez AF. Breeding density, habitat selection and reproductive rates of the Peregrine Falcon Falco peregrinus in Alava (northern Spain). Bird Study. 2000;47:225–31.
Greenwood JJD. Basic techniques. In: Sutherland WJ, editor. Ecological census techniques. Cambridge: Cambridge University Press; 1996. p. 11–110.
Gruev B, Kuzmanov B. General Biogeography. Sofia: University Press; 1994. (in Bulgarian).
Jenkins AR. Hunting mode and success of African Peregrines Falco peregrinus minor: does nesting habitat quality affect foraging efficiency? Ibis. 2000;142:235–46.
Jenkins AR, van Zyl AJ. Conservation status and community structure of cliff-nesting raptors and ravens on the Cape Peninsula, South Africa. Ostrich. 2005;76:175–84.
Jobe RT, White PS. A new cost-distance model for human accessibility and an evaluation of accessibility bias in permanent vegetation plots in Great Smoky Mountains National Park, USA. J Veg Sci. 2009;20:1099–109.
Kaisanlahti-Jokimäki ML, Jokimäki J, Huhta E, Ukkola M, Helle P, Ollila T. Territory occupancy and breeding success of the Golden Eagle (Aquila chrysaetos) around tourist destinations in northern Finland. Ornis Fennica. 2008;85:2–12.
Karyakin IV, Nikolenko EG. Peregrine Falcon in the Altai-Sayan Region. Russia Raptors Conserv. 2009;16:96–128.
Kopralev I. Geography of Bulgaria. Publisher For Com; 2002 (in Bulgarian).
LaRue MA, Nielsen CK. Modelling potential dispersal corridors for cougars in midwestern North America using least-cost path methods. Ecol Model. 2008;212:372–81.
Michev T, Vatev IT, Simeonov PS, Profirov L. Distribution and nesting of the Long-legged Buzzard (Buteo rufinus Cretzschmar, 1827) in Bulgaria. Ecology. 1984;13:74–82 (in Bulgarian).
Milchev B. Breeding bird atlas of the Strandja mountains, south-east Bulgaria. Sandgrouse. 1994;16:2–27.
Milchev B. Breeding biology of the Long-legged Buzzard Buteo rufinus in SE Bulgaria, nesting also in quarries. Avocetta. 2009;33:25–32.
Milchev B, Georgiewa U. Eine Studie zum Bestand, zur Brutbiologie und Ernährung des Steinadlers, Aquila chrysaetos (L.) im Strandsha-Gebirge. Beitr Vogelkd. 1992;38:327–34.
Minetti AE, Moia C, Roi GS, Susta D, Ferretti G. Energy cost of walking and running at extreme uphill and downhill slopes. J Appl Physiol. 2002;93:1039–46.
Moore N, Kelly P, Lang F. Quarry nesting by Peregrine Falcons in Ireland. Irish Birds. 1992;4:519–24.
Moore NP, Kelly PF, Lang FA, Lynch JM, Langton SD. The Peregrine Falco peregrinus in quarries: current status and factors influencing occupancy in the Republic of Ireland. Bird Study. 1997;44:176–81.
Mrlik V, Landsfeld B. The occurrence of Long-legged Buzzard (Buteo rufinus) in parts of Central Europe during 1980–1998 and possible factors for its recent expansion. Egretta. 2002;45:104–14.
Newton I. Population ecology of raptors. Berkhamsted: T & AD Poyser; 1979.
Nikolov I. Photo guide of Bulgarian waterfalls. Sofia: Biko Bulgaria; 2013.
Pedrini P, Sergio F. Density, productivity, diet and human persecution of Golden eagles (Aquila chrysaetos) in the central-eastern Italian Alps. J Raptor Res. 2001;35:40–8.
Perona AM, Urios V, López-López P. Holidays? Not for all. Eagles have larger home ranges on holidays as a consequence of human disturbance. Biol Conserv. 2019;231:59–66.
Petrov T, Tonchev B, Demerdjiev D, Daskalova G, Stoynov E. Golden eagle (Aquila chrysaetos). In: Atlas of the breeding birds in Bulgaria. Conservation series/book 10. Sofia: BSPB; 2007 (in Bulgarian).
Petrov TS, Spiridonov J, Domuschiev D, Kurtev M. Golden Eagle (Aquila chrysaetos, Linnaeus, 1758). In: Red data book of the Republic of Bulgaria, Vol 2. Animals. Sofia: BAS & MOEW; 2015 (in Bulgarian).
Ragyov D, Demerdzhiev D, Angelov I. Peregrine in Bulgaria – general overview. In: Sielicki J, Mizera T, editors. Peregrine Falcon populations – status and perspectives in the 21st century. Turul, Warsaw: European Peregrine Falcon Working Group, Society for the Protection of Wild Animals "Falcon"; 2008. p. 345–60.
Redinov KO. Ecology of the Long-legged Buzzard in Mykolayiv region (Southern Ukraine). Berkut. 2010;19:116–32 (in Russian).
Russo D. Effects of land abandonment on animal species in Europe: conservation and management implications. Luxembourg: Office for Official Publications of the European Communities; 2006.
Schadt S, Knauer F, Kaczensky P, Revilla E, Wiegand T, Trepl L. Rule-based assessment of suitable habitat and patch connectivity for the Eurasian lynx. Ecol Appl. 2002;12:1469–83.
Sergio F, Newton I, Marchesi L, Pedrini P. Ecologically justified charisma: preservation of top predators delivers biodiversity conservation. J Appl Ecol. 2006;43:1049–55.
Sergio F, Rizzoli F, Marchesi L, Pedrini P. The importance of interspecific interactions for breeding-site selection: peregrine falcons seek proximity to raven nests. Ecography. 2004;27:818–26.
Shafaeipour A. Breeding success and nest characteristics of Golden Eagle Aquila chrysaetos in western Iran. Acta Zool Bulg. 2015;67:299–302.
Simeonov S, Michev T, Nankinov D. Fauna of Bulgaria, Vol. 20 Aves, Part I. Sofia: BAS; 1990 (in Bulgarian).
Spaul RJ, Heath JA. Nonmotorized recreation and motorized recreation in shrub-steppe habitats affects behavior and reproduction of golden eagles (Aquila chrysaetos). Ecol Evol. 2016;6:8037–49.
Spaul RJ, Heath JA. Flushing responses of Golden Eagles (Aquila chrysaetos) in response to recreation. Wilson J Ornithol. 2017;129:834–45.
Stoyanov G. Peregrine falcon (Falco peregrinus). Acrocephalus. 2003;24:41–2.
Stoyanov GP, Borisov B, Antonov A, Domuschiev D, Petrov T. Peregrine Falcon (Falco peregrinus, Tunstall, 1771). In: Golemanski V, editor. Red data book of the Republic of Bulgaria. Animals, vol. 2. Sofia: BAS & MOEW; 2015 (in Bulgarian).
Stoychev S, Demerdzhiev D, et al. Long-legged Buzzard Buteo rufinus. In: Keller V, Herrando S, Voříšek P, Franch M, Kipson M, Milanesi P, et al., editors. European breeding bird atlas 2: distribution, abundance and change. Barcelona: European Bird Census Council & Lynx Editions; 2020. p. 480.
Stoynov E, Petrov T, Tonchev B, Ruskov K, Hristov H, Profirov L. Peregrine Falcon (Falco peregrinus). In: Atlas of the breeding birds in Bulgaria. Conservation series/book 10. Sofia: BSPB; 2007. p. 188–9 (in Bulgarian).
Tapia L, Domínguez J, Rodríguez L. Hunting habitat preferences of raptors in a mountainous area (Northwestern Spain). Pol J Ecol. 2008;56:323–33.
Tjernberg M. Habitat and nest site features of golden eagles Aquila chrysaetos (L.), in Sweden. Swed Wildl Res. 1983;12:131–63.
Trombulak SC, Frissell CA. Review of ecological effects of roads on terrestrial and aquatic communities. Conserv Biol. 2000;14:18–30.
Vatev I. Notes on the breeding biology of the Long-legged Buzzard (Buteo rufinus) in Bulgaria. J Raptor Res. 1987;21:8–13.
Vatev I, Angelov I, Domuschiev D, Profirov L. Long-legged Buzzard (Buteo rufinus, Cretzschmar, 1827). In: Red data book of the Republic of Bulgaria. Animals, vol. 2. Sofia: BAS & MOEW; 2015 (in Bulgarian).
Vazhov SV. Some features of the ecological niches of raptors in the Russian part of the altai foothills. Raptors Conserv. 2012;25:115–25 (in Russian).
Watson J, Dennis RH. Nest-site selection by Golden Eagles Aquila chrysaetos in Scotland. British Birds. 1992;85:469–81.
Watson J. The Golden Eagle and pastoralism across Europe. In: Curtis DJ, Bignal EM, Curtis MA, editors. Birds and pastoral agriculture in Europe. Peterborough: JNCC; 1991. p. 56–7.
Watson J. Golden Eagle Aquila chrysaetos breeding success and afforestation in Argyll. Bird Study. 1992;39:203–6.
Watson J. The golden eagle. London: T & AD Poyser; 1997.
Whitfield DP, Fielding AH, Gregory MJP, Gordon AG, McLeod DRA, Haworth PF. Complex effects of habitat loss on Golden Eagles Aquila chrysaetos. Ibis. 2007;149:26–36.
Xirouchakis S. The Golden Eagle Aquila chrysaetos in Crete. Distribution, population status and conservation problems. Avocetta. 2001;25:275–81.
The authors express their gratitude to Dr. Svetlana Naumova for polishing the English version of the manuscript.
The work was not funded by any specific body.
Institute of Biodiversity and Ecosystem Research, Bulgarian Academy of Sciences, 2 Gagarin Street, 1113, Sofia, Bulgaria
Nadejda Djorgova, Dimitar Ragyov, Valko Biserkov, Jordan Biserkov & Boris P. Nikolov
DR, VB and ND conceived and designed the study. DR and ND performed the field work. ND, JB and VB processed and analyzed the data. ND wrote the manuscript. VB and BPN commented on the manuscript. All authors read and approved the final manuscript.
Correspondence to Nadejda Djorgova.
This work complies with the current laws of Bulgaria. It was based on simple field observation without any experimental manipulation or damage to the studied birds.
Djorgova, N., Ragyov, D., Biserkov, V. et al. Habitat preferences of diurnal raptors in relation to human access to their breeding territories in the Balkan Mountain Range, Bulgaria. Avian Res 12, 29 (2021). https://doi.org/10.1186/s40657-021-00265-6
Anthropogenic impact
Least cost path
Long-legged Buzzard
$(a+b)^2$ formula
$(a+b)^2 = a^2+b^2+2ab$
Let two literals $a$ and $b$ represent two terms in algebraic form. Their sum is written as $a+b$ in mathematics; it is an algebraic expression and also a binomial. The square of this sum (the square of the binomial) is written in mathematics as follows.
$(a+b)^2$
The square of the sum of two terms is equal to $a$ squared plus $b$ squared plus two times the product of $a$ and $b$.
$(a+b)^2$ $\,=\,$ $a^2+b^2+2ab$
In mathematics, the $a$ plus $b$ whole squared algebraic identity is known by three names.
The square of sum of two terms identity.
The square of a binomial rule.
The special binomial product formula.
In mathematics, the square of the sum of two terms is used as a formula in two ways.
The square of the sum of two terms is expanded as the sum of squares of both terms and two times the product of them.
$\implies$ $(a+b)^2 \,=\, a^2+b^2+2ab$
The sum of squares of the two terms and two times the product of them is simplified as the square of the sum of two terms.
$\implies$ $a^2+b^2+2ab \,=\, (a+b)^2$
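As a quick machine check of both directions of the identity, the following SymPy snippet (illustrative, not part of the original lesson) expands and factors the expression:

```python
# Symbolic check of (a+b)^2 = a^2 + b^2 + 2ab in both directions.
from sympy import symbols, expand, factor

a, b = symbols("a b")
assert expand((a + b)**2) == a**2 + b**2 + 2*a*b   # expansion
assert factor(a**2 + b**2 + 2*a*b) == (a + b)**2   # simplification
```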
$(1) \,\,\,$ Find $(3x+4y)^2$
Now, take $a = 3x$ and $b = 4y$ and substitute them into the expansion of the formula to evaluate it.
$\implies$ $(3x+4y)^2$ $\,=\,$ $(3x)^2+(4y)^2+2(3x)(4y)$
$\implies$ $(3x+4y)^2$ $\,=\,$ $9x^2+16y^2+2 \times 3x \times 4y$
$\implies$ $(3x+4y)^2$ $\,=\,$ $9x^2+16y^2+24xy$
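For instance, substituting $x = 1$ and $y = 1$ gives $(3+4)^2 = 49$ on the left and $9+16+24 = 49$ on the right, confirming the expansion.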
$(2) \,\,\,$ Simplify $p^2+25q^2+10pq$
$\implies$ $p^2+25q^2+10pq$ $\,=\,$ $p^2+(5q)^2+10pq$
$\implies$ $p^2+25q^2+10pq$ $\,=\,$ $p^2+(5q)^2+2 \times 5 \times p \times q$
$\implies$ $p^2+25q^2+10pq$ $\,=\,$ $p^2+(5q)^2+2 \times p \times 5 \times q$
$\implies$ $p^2+25q^2+10pq$ $\,=\,$ $p^2+(5q)^2+2 \times p \times 5q$
$\implies$ $p^2+25q^2+10pq$ $\,=\,$ $p^2+(5q)^2+2(p)(5q)$
Now, take $a = p$ and $b = 5q$, and simplify the algebraic expression by the $(a+b)^2$ identity.
$\implies$ $p^2+25q^2+10pq$ $\,=\,$ $(p+5q)^2$
The $a$ plus $b$ whole square identity can be derived in mathematics by two different methods.
Algebraic method
Learn the algebraic approach to deriving the expansion of the $a+b$ whole square formula by multiplication (a brief preview of this derivation is sketched after this list).
Geometric method
Learn the geometric method of deriving the expansion of the $a+b$ whole squared identity from the areas of geometric shapes.
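For reference, the algebraic derivation previewed above is a single application of the distributive property:

$$\begin{aligned} (a+b)^2 &= (a+b)(a+b) \\ &= a(a+b) + b(a+b) \\ &= a^2 + ab + ba + b^2 \\ &= a^2 + b^2 + 2ab \end{aligned}$$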
L. Delage-Laurin, Palani, R. Shankar, Golota, N., Mardini, M., Ouyang, Y., Tan, K. Ooi, Swager, T. M., and Griffin, R. G., "Overhauser Dynamic Nuclear Polarization with Selectively Deuterated BDPA Radicals", Journal of the American Chemical Society, 2021.
R. - Q. Lu, Yuan, W., Croy, R. G., Essigmann, J. M., and Swager, T. M., "Metallocalix[4]arene Polymers for Gravimetric Detection of N-Nitrosodialkylamines", Journal of the American Chemical Society, vol. 143, 2021.
S. Nagelberg, Totz, J. F., Mittasch, M., Sresht, V., Zeininger, L., Swager, T. M., Kreysing, M., and Kolle, M., "Actuation of Janus Emulsion Droplets via Optothermally Induced Marangoni Forces", Physical Review Letters, vol. 127, no. 14, 2021.
M. J. Bezdek, Luo, S. - X. Lennon, Liu, R. Y., He, Q., and Swager, T. M., "Trace Hydrogen Sulfide Sensing Inspired by Polyoxometalate-Mediated Aerobic Oxidation", ACS Central Science, vol. 7, no. 9, pp. 1572 - 1580, 2021.
K. Yoshinaga and Swager, T. M., "Revisiting the Heck Reaction for Fluorous Materials Applications", Synlett, 2021.
B. Yoon, Choi, S. - J., Swager, T. M., and Walsh, G. F., "Flexible Chemiresistive Cyclohexanone Sensors Based on Single-Walled Carbon Nanotube–Polymer Composites", ACS Sensors, 2021.
S. Guo and Swager, T. M., "Versatile Porous Poly(arylene ether)s via Pd-Catalyzed C–O Polycondensation", Journal of the American Chemical Society, 2021.
Z. Nelson, Romero, N. A., Tiepelt, J., Baldo, M., and Swager, T. M., "Polymerization and Depolymerization of Photoluminescent Polyarylene Chalcogenides", Macromolecules, 2021.
S. - X. Lennon Luo, Liu, R. Y., Lee, S., and Swager, T. M., "Electrocatalytic Isoxazoline–Nanocarbon Metal Complexes", Journal of the American Chemical Society, 2021.
Q. P. Ngo, He, M., Concellón, A., Yoshinaga, K., Luo, S. - X. Lennon, Aljabri, N., and Swager, T. M., "Reconfigurable Pickering Emulsions with Functionalized Carbon Nanotubes", Langmuir, 2021.
R. S. Andre, Ngo, Q. P., Fugikawa-Santos, L., Correa, D. S., and Swager, T. M., "Wireless Tags with Hybrid Nanomaterials for Volatile Amine Detection", ACS Sensors, vol. 6, no. 6, pp. 2457 - 2464, 2021.
J. Li, Concellón, A., Yoshinaga, K., Nelson, Z., He, Q., and Swager, T. M., "Janus Emulsion Biosensors for Anti-SARS-CoV-2 Spike Antibody", ACS Central Science, 2021.
L. Delage-Laurin, Nelson, Z., and Swager, T. M., "C-Term Faraday Rotation in Metallocene Containing Thin Films", ACS Applied Materials & Interfaces, vol. 13, no. 21, pp. 25137 - 25142, 2021.
A. Concellón, Fong, D., and Swager, T. M., "Complex Liquid Crystal Emulsions for Biosensing", Journal of the American Chemical Society, 2021.
Z. Nelson, Delage-Laurin, L., Peeks, M. D., and Swager, T. M., "Large Faraday Rotation in Optical-Quality Phthalocyanine and Porphyrin Thin Films", Journal of the American Chemical Society, 2021.
D. Fong and Swager, T. M., "Trace Detection of Hydrogen Peroxide via Dynamic Double Emulsions", Journal of the American Chemical Society, vol. 143, no. 11, pp. 4397 - 4404, 2021.
A. J. Hsieh, Wu, Y. - C. Mason, Hu, W., Mikhail, J. P., Veysset, D., Kooi, S. E., Nelson, K. A., Rutledge, G. C., and Swager, T. M., "Bottom-up design toward dynamically robust polyurethane elastomers", Polymer, vol. 218, p. 123518, 2021.
K. Hee Ku, McDonald, B. R., Vijayamohanan, H., Zentner, C. A., Nagelberg, S., Kolle, M., and Swager, T. M., "Dynamic Coloration of Complex Emulsions by Localization of Gold Rings Near the Triphase Junction", Small, vol. 17, no. 12, p. 2007507, 2021.
F. Niroui, Saravanapavanantham, M., Han, J., Patil, J. J., Swager, T. M., Lang, J. H., and Bulović, V., "Hybrid Approach to Fabricate Uniform and Active Molecular Junctions", Nano Letters, 2021.
J. C. Beard and Swager, T. M., "An Organic Chemist's Guide to N-Nitrosamines: Their Structure, Reactivity, and Role as Contaminants", The Journal of Organic Chemistry, 2021.
M. J. Bezdek, Luo, S. - X. Lennon, Ku, K. Hee, and Swager, T. M., "A chemiresistive methane sensor", Proceedings of the National Academy of Sciences, vol. 118, no. 2, p. e2022515118, 2021.
P. Wang, Lu, R. - Q., France-Lanord, A., Wang, Y., Zhou, J., Grossman, J. C., and Swager, T. M., "Cyclobutene based macrocycles", Materials Chemistry Frontiers, vol. 4, no. 12, pp. 3529 - 3538, 2020.
R. ‐Q. Lu, Luo, S. ‐X. Lennon, He, Q., Concellón, A., and Swager, T. M., "Methane Detection with a Tungsten‐Calix[4]arene‐Based Conducting Polymer Embedded Sensor Array", Advanced Functional Materials, vol. 8, p. 2007281, 2020.
M. He and Swager, T. M., "Aryl Migration on Graphene", Journal of the American Chemical Society, vol. 142, no. 42, pp. 17876 - 17880, 2020.
W. - T. Koo, Kim, Y., Kim, S., Suh, B. Lim, Savagatrup, S., Kim, J., Lee, S. - J., Swager, T. M., and Kim, I. - D., "Hydrogen Sensors from Composites of Ultra-small Bimetallic Nanoparticles and Porous Ion-Exchange Polymers", Chem, 2020.
C. Tang, Ku, K. Hee, Luo, S. - X. Lennon, Concellón, A., Wu, Y. - C. Mason, Lu, R. - Q., and Swager, T. M., "Chelating Phosphine Ligand Stabilized AuNPs in Methane Detection", ACS Nano, 2020.
H. Xin, Li, J., Lu, R. - Q., Gao, X., and Swager, T. M., "Azulene–Pyridine-Fused Heteroaromatics", Journal of the American Chemical Society, 2020.
S. I. Etkind, Griend, D. A. Vander, and Swager, T. M., "Electroactive Anion Receptor with High Affinity for Arsenate", The Journal of Organic Chemistry, 2020.
C. A. Zentner, Concellón, A., and Swager, T. M., "Controlled Movement of Complex Double Emulsions via Interfacially Confined Magnetic Nanoparticles", ACS Central Science, 2020.
S. - J. Choi, Yoon, B., Lin, S., and Swager, T. M., "Functional Single-Walled Carbon Nanotubes for Anion Sensing", ACS Applied Materials & Interfaces, vol. 12, no. 25, pp. 28375 - 28382, 2020.
S. - X. Lennon Luo, Lin, C. - J., Ku, K. Hee, Yoshinaga, K., and Swager, T. M., "Pentiptycene Polymer/Single-Walled Carbon Nanotube Complexes: Applications in Benzene, Toluene, and o-Xylene Detection", ACS Nano, 2020.
Q. He, Ku, K. Hee, Vijayamohanan, H., Kim, B. J., and Swager, T. M., "Switchable Full-Color Reflective Photonic Ellipsoidal Particles", Journal of the American Chemical Society, 2020.
S. Savagatrup, Ma, D., Zhong, H., Harvey, K. S., Kimerling, L. C., Agarwal, A. M., and Swager, T. M., "Dynamic Complex Emulsions as Amplifiers for On-Chip Photonic Cavity-Enhanced Resonators", ACS Sensors, 2020.
K. Yoshinaga, Delage-Laurin, L., and Swager, T. M., "Fluorous phthalocyanines and subphthalocyanines", Journal of Porphyrins and Phthalocyanines, 2020.
J. Li, Savagatrup, S., Nelson, Z., Yoshinaga, K., and Swager, T. M., "Fluorescent Janus emulsions for biosensing of Listeria monocytogenes", Proceedings of the National Academy of Sciences, 2020.
A. Fernandez, Zentner, C. A., Shivrayan, M., Samson, E., Savagatrup, S., Zhuang, J., Swager, T. M., and Thayumanavan, S., "Programmable Emulsions via Nucleophile-Induced Covalent Surfactant Modifications", Chemistry of Materials, 2020.
Y. Li, Concellón, A., Lin, C. - J., Romero, N. A., Lin, S., and Swager, T. M., "Thiophene-fused polyaromatics: synthesis, columnar liquid crystal, fluorescence and electrochemical properties", Chemical Science, 2020.
Y. Li, Chen, C., Meshot, E. R., Buchsbaum, S. F., Herbert, M., Zhu, R., Kulikov, O., McDonald, B., Bui, N. T. N., Jue, M. L., Park, S. Jin, Valdez, C. A., Hok, S., He, Q., Doona, C. J., Wu, K. Jen, Swager, T. M., and Fornasiero, F., "Autonomously Responsive Membranes for Chemical Warfare Protection", Advanced Functional Materials, vol. 30, 2020.
A. Nitti, Osw, P., Calcagno, G., Botta, C., Etkind, S. I., Bianchi, G., Po, R., Swager, T. M., and Pasini, D., "One-Pot Regiodirected Annulations for the Rapid Synthesis of π-Extended Oligomers", Organic Letters, 2020.
Y. He, Alamri, H., Kawelah, M., Gizzatov, A., Alghamdi, M. F., Swager, T. M., and S. Zhu, S., "Brine-Soluble Zwitterionic Copolymers with Tunable Adsorption on Rocks", ACS Applied Materials & Interfaces, vol. 12, no. 11, pp. 13568 - 13574, 2020.
D. Fong, Luo, S. - X., Andre, R. S., and Swager, T. M., "Trace Ethylene Sensing via Wacker Oxidation", ACS Central Science, 2020.
C. - C. A. Voll, Markopoulos, G., Wu, T. C., Welborn, M., Engelhart, J. U., Rochat, ébastien, Han, G. G. D., Sazama, G. T., Lin, T. - A., Van Voorhis, T., Baldo, M. A., and Swager, T. M., "Lock-and-Key Exciplexes for Thermally Activated Delayed Fluorescence", Organic Materials, vol. 02, no. 01, 2020.
S. ‐J. Choi, Yoon, B., Ray, J. D., Netchaev, A., Moores, L. C., and Swager, T. M., "Chemiresistors for the Real‐Time Wireless Detection of Anions", Advanced Functional Materials, p. 1907087, 2019.
T. Ikai, Yoshida, T., Shinohara, K. -ichi, Taniguchi, T., Wada, Y., and Swager, T. M., "Triptycene-Based Ladder Polymers with One-Handed Helical Geometry", Journal of the American Chemical Society, vol. 141, no. 11, pp. 4696 - 4703, 2019.
Y. Kim, Wang, Y., France-Lanord, A., Wang, Y., Wu, Y. - C. Mason, Lin, S., Li, Y., Grossman, J. C., and Swager, T. M., "Ionic Highways from Covalent Assembly in Highly Conducting and Stable Anion Exchange Membrane Fuel Cells", Journal of the American Chemical Society, 2019.
A. Concellón, Zentner, C. A., and Swager, T. M., "Dynamic Complex Liquid Crystal Emulsions", Journal of the American Chemical Society, 2019.
C. A. Zentner, Anson, F., Thayumanavan, S., and Swager, T. M., "Dynamic Imine Chemistry at Complex Double Emulsion Interfaces", Journal of the American Chemical Society, 2019.
N. A. Romero, Parker, W. O., and Swager, T. M., "Functional, Redox-Responsive Poly(phenylene sulfide)-Based Gels", Macromolecules, 2019.
K. H. Ku, Li, J., Yoshinaga, K., and Swager, T. M., "Dynamically Reconfigurable, Multifunctional Emulsions with Controllable Structure and Movement", Advanced Materials, p. 1905569, 2019.
S. - J. Choi, Savagatrup, S., Kim, Y., Lang, J. H., and Swager, T. M., "Precision pH Sensor Based on WO 3 Nanofiber-Polymer Composites and Differential Amplification", ACS Sensors, 2019.
M. He, Croy, R. G., Essigmann, J. M., and Swager, T. M., "Chemiresistive Carbon Nanotube Sensors for N -Nitrosodialkylamines", ACS Sensors, 2019.
I. Jeon, Park, G. Hoon, Wang, P., Li, J., Hunter, I. W., and Swager, T. M., "Dynamic Fluid‐Like Graphene with Ultralow Frictional Molecular Bearing", Advanced Materials, p. 1903195, 2019.
Y. Sun, Wu, Y. - C. Mason, Veysset, D., Kooi, S. E., Hu, W., Swager, T. M., Nelson, K. A., and Hsieh, A. J., "Molecular dependencies of dynamic stiffening and strengthening through high strain rate microparticle impact of polyurethane and polyurea elastomers", Applied Physics Letters, vol. 115, no. 9, p. 093701, 2019.
Y. - C. Mason Wu and Swager, T. M., "Living Polymerization of 2-Ethylthio-2-oxazoline and Postpolymerization Diversification", Journal of the American Chemical Society, 2019.
V. Schroeder, Evans, E. D., Wu, Y. - C. Mason, Voll, C. - C. A., McDonald, B. R., Savagatrup, S., and Swager, T. M., "Chemiresistive Sensor Array and Machine Learning Classification of Food", ACS Sensors, 2019.
W. - T. Koo, Kim, Y., Savagatrup, S., Yoon, B., Jeon, I., Choi, S. - J., Kim, I. - D., and Swager, T. M., "Porous Ion Exchange Polymer Matrix for Ultrasmall Au Nanoparticle-Decorated Carbon Nanotube Chemiresistors", Chemistry of Materials, 2019.
L. Zeininger, Nagelberg, S., Harvey, K. S., Savagatrup, S., Herbert, M. B., Yoshinaga, K., Capobianco, J. A., Kolle, M., and Swager, T. M., "Rapid Detection of Salmonella enterica via Directional Emission from Carbohydrate-Functionalized Dynamic Double Emulsions", ACS Central Science, 2019.
I. Jeon, Peeks, M. D., Savagatrup, S., Zeininger, L., Chang, S., Thomas, G., Wang, W., and Swager, T. M., "Janus Graphene: Scalable Self‐Assembly and Solution‐Phase Orthogonal Functionalization", Advanced Materials, p. 1900438, 2019.
Y. He, Benedetti, F. M., Lin, S., Liu, C., Zhao, Y., Ye, ‐Z., Van Voorhis, T., M. De Angelis, G., Swager, T. M., and Smith, Z. P., "Polymers with Side Chain Porosity for Ultrapermeable and Plasticization Resistant Materials for Gas Separations", Advanced Materials, 2019.
L. Zeininger, Weyandt, E., Savagatrup, S., Harvey, K. S., Zhang, Q., Zhao, Y., and Swager, T. M., "Waveguide-based chemo- and biosensors: complex emulsions for the detection of caffeine and proteins", Lab on a Chip, 2019.
C. - J. Lin, Zeininger, L., Savagatrup, S., and Swager, T. M., "Morphology-Dependent Luminescence in Complex Liquid Colloids", Journal of the American Chemical Society, 2019.
Y. - C. Mason Wu, Hu, W., Sun, Y., Veysset, D., Kooi, S. E., Nelson, K. A., Swager, T. M., and Hsieh, A. J., "Unraveling the high strain-rate dynamic stiffening in select model polyurethanes − the role of intermolecular hydrogen bonding", Polymer, 2019.
Q. Zhang, Zeininger, L., Sung, K. - J., Miller, E. A., Yoshinaga, K., Sikes, H. D., and Swager, T. M., "Emulsion Agglutination Assay for the Detection of Protein–Protein Interactions: An Optical Sensor for Zika Virus", ACS Sensors, 2019.
C. Dengiz, Wu, Y. - C., and Swager, T., "Synthesis and Optoelectronic Properties of Iptycene–Naphthazarin Dyes", Synlett, 2019.
C. - C. A. Voll and Swager, T. M., "Extended π-Conjugated Structures via Dehydrative C–C Coupling", Journal of the American Chemical Society, 2018.
C. Ho Park, Schroeder, V., Kim, B. J., and Swager, T. M., "Ionic Liquid-Carbon Nanotube Sensor Arrays for Human Breath Related Volatile Organic Compounds", ACS Sensors, 2018.
Y. Kim, Lin, Z., Jeon, I., Van Voorhis, T., and Swager, T. M., "Polyaniline Nanofiber Electrodes for Reversible Capture and Release of Mercury(II) from Water", Journal of the American Chemical Society, 2018.
J. Soetbeer, Gast, P., Walish, J. J., Zhao, Y., George, C., Yang, C., Swager, T. M., Griffin, R. G., and Mathies, G., "Conformation of bis-nitroxide polarizing agents by multi-frequency EPR spectroscopy", Physical Chemistry Chemical Physics, 2018.
B. Yoon, Choi, S. - J., Swager, T. M., and Walsh, G. F., "Switchable Single-Walled Carbon Nanotube–Polymer Composites for CO 2 Sensing", ACS Applied Materials & Interfaces, 2018.
V. Schroeder, Savagatrup, S., He, M., Lin, S., and Swager, T. M., "Carbon Nanotube Chemical Sensors", Chemical Reviews, 2018.
W. Jie Ong and Swager, T. M., "Dynamic self-correcting nucleophilic aromatic substitution", Nature Chemistry, 2018.
P. Wang, Lin, S., Lin, Z., Peeks, M. D., Van Voorhis, T., and Swager, T. M., "A Semiconducting Conjugated Radical Polymer: Ambipolar Redox Activity and Faraday Effect", Journal of the American Chemical Society, 2018.
V. Schroeder and Swager, T. M., "Translating Catalysis to Chemiresistive Sensing", Journal of the American Chemical Society, 2018.
K. Yoshinaga and Swager, T. M., "Fluorofluorescent Perylene Bisimides", Synlett, 2018.
S. Nagelberg, Zarzar, L. D., Subramanian, K., Sresht, V., Blankschtein, D., Barbastathis, G., Kreysing, M., Swager, T. M., and Kolle, M., "Reconfigurable and dynamically tunable droplet-based compound micro-lenses", in 3D Image Acquisition and Display: Technology, Perception and Applications, Imaging and Applied Optics 2018 (3D, AO, AIO, COSI, DH, IS, LACSEA, LS&C, MATH, pcAOP), Orlando, Florida, 2018.
P. Wang, Jeon, I., Lin, Z., Peeks, M. D., Savagatrup, S., Kooi, S. E., Van Voorhis, T., and Swager, T. M., "Insights into Magneto-Optics of Helical Conjugated Polymers", Journal of the American Chemical Society, 2018.
Q. Zhang, Scigliano, A., Biver, T., Pucci, A., and Swager, T. M., "Interfacial bioconjugation on emulsion droplet for biosensors", Bioorganic & Medicinal Chemistry, 2018.
C. Mackin, Schroeder, V., Zurutuza, A., Su, C., Kong, J., Swager, T. M., and Palacios, T., "Chemiresistive Graphene Sensors for Ammonia Detection", ACS Applied Materials & Interfaces, 2018.
R. Zhu and Swager, T. M., "Polymer Valence Isomerism: Poly(Dewar- o -xylylene)s", Journal of the American Chemical Society, 2018.
L. Zeininger, He, M., Hobson, S. T., and Swager, T. M., "Resistive and Capacitive γ-Ray Dosimeters Based On Triggered Depolymerization in Carbon Nanotube Composites", ACS Sensors, 2018.
T. N. B. Truong, Savagatrup, S., Jeon, I., and Swager, T. M., "Modular synthesis of polymers containing 2,5-di(thiophenyl)- N -arylpyrrole", Journal of Polymer Science Part A: Polymer Chemistry, 2018.
S. Lin and Swager, T. M., "Carbon Nanotube Formic Acid Sensors Using a Nickel Bis( ortho -diiminosemiquinonate) Selector", ACS Sensors, 2018.
T. Ikai, Yoshida, T., Awata, S., Wada, Y., Maeda, K., Mizuno, M., and Swager, T. M., "Circularly Polarized Luminescent Triptycene-Based Polymers", ACS Macro Letters, pp. 364 - 369, 2018.
T. M. Swager, "Sensor Technologies Empowered by Materials and Molecular Innovations", Angewandte Chemie International Edition, 2018.
W. Huang, Einzinger, M., Zhu, T., Chae, H. Sik, Jeon, S., Ihn, S. - G., Sim, M., Kim, S., Su, M., Teverovskiy, G., Wu, T., Van Voorhis, T., Swager, T. M., Baldo, M. A., and Buchwald, S. L., "Molecular Design of Deep Blue Thermally Activated Delayed Fluorescence Materials Employing a Homoconjugative Triptycene Scaffold and Dihedral Angle Tuning", Chemistry of Materials, 2018.
Y. Zhao, He, Y., and Swager, T. M., "Porous Organic Polymers via Ring Opening Metathesis Polymerization", ACS Macro Letters, vol. 7, pp. 300 - 304, 2018.
C. Paoletti, He, M., Salvo, P., Melai, B., Calisi, N., Mannini, M., Cortigiani, B., Bellagambi, F. G., Swager, T. M., Di Francesco, F., and Pucci, A., "Room temperature amine sensors enabled by sidewall functionalization of single-walled carbon nanotubes", RSC Advances, no. 8, pp. 5578 - 5585, 2018.
L. C. H. Moh, Goods, J. B., Kim, Y., and Swager, T. M., "Free Volume Enhanced Proton Exchange Membranes from Sulfonated Triptycene Poly(Ether Ether Ketone)", Journal of Membrane Science, 2017.
I. Jeon, Yoon, B., He, M., and Swager, T. M., "Hyperstage Graphite: Electrochemical Synthesis and Spontaneous Reactive Exfoliation", Advanced Materials, p. 1704538, 2017.
Y. Kim, Moh, L. C. H., and Swager, T. M., "Anion Exchange Membranes: Enhancement by Addition of Unfunctionalized Triptycene Poly(Ether Sulfone)s", ACS Applied Materials & Interfaces, 2017.
S. Ishihara, O'Kelly, C. J., Tanaka, T., Kataura, H., Labuta, J., Shingaya, Y., Nakayama, T., Ohsawa, T., Nakanishi, T., and Swager, T. M., "Metallic vs. Semiconducting SWCNT Chemiresistors: A Case for Separated SWCNTs Wrapped by Metallo-Supramolecular Polymer", ACS Applied Materials & Interfaces, 2017.
S. Savagatrup, Schröder, V., He, X., Lin, S., He, M., Yassine, O., Salama, K. N., Zhang, X. Xiang, and Swager, T. M., "Bio-Inspired Carbon Monoxide Sensors with Voltage-Activated Sensitivity", Angewandte Chemie International Edition, 2017.
T. Ikai, Wada, Y., Awata, S., Yun, C., Maeda, K., Mizuno, M., and Swager, T. M., "Chiral triptycene-pyrene π-conjugated chromophores with circularly polarized luminescence", Org. Biomol. Chem., 2017.
M. Einzinger, Zhu, T., de Silva, P., Belger, C., Swager, T. M., Van Voorhis, T., and Baldo, M. A., "Shorter Exciton Lifetimes via an External Heavy-Atom Effect: Alleviating the Effects of Bimolecular Processes in Organic Light-Emitting Diodes", Advanced Materials, p. 1701987, 2017.
B. Koo and Swager, T. M., "Distinct Interfacial Fluorescence in Oil-in-Water Emulsions via Exciton Migration of Conjugated Polymers", Macromolecular Rapid Communications, 2017.
A. Schleper, Voll, C. - C., Engelhart, J., and Swager, T., "Iptycene-Containing Azaacenes with Tunable Luminescence", Synlett, 2017.
S. Soylemez, Yoon, B., Toppare, L., and Swager, T. M., "Quaternized Polymer–Single-Walled Carbon Nanotube Scaffolds for a Chemiresistive Glucose Sensor", ACS Sensors, 2017.
C. - C. A. Voll, Engelhart, J., Einzinger, M., Baldo, M. A., and Swager, T. M., "Donor-Acceptor Iptycenes with Thermally Activated Delayed Fluorescence", European Journal of Organic Chemistry, 2017.
T. M. Swager, "50th Anniversary Perspective : Conducting/Semiconducting Conjugated Polymers. A Personal Perspective on the Past and the Future", Macromolecules, vol. 50, no. 13, pp. 4867 - 4886, 2017.
R. Zhu, Desroches, M., Yoon, B., and Swager, T. M., "Wireless Oxygen Sensors Enabled by Fe(II)-Polymer Wrapped Carbon Nanotubes", ACS Sensors, 2017.
C. Dengiz, Luppino, S. P., Gutierrez, G. D., and Swager, T. M., "Naphthazarin-Polycyclic Conjugated Hydrocarbons and Iptycenes", The Journal of Organic Chemistry, 2017.
A. Nitti, Bianchi, G., Po, R., Swager, T. M., and Pasini, D., "Domino Direct Arylation and Cross-Aldol for Rapid Construction of Extended Polycyclic π-Scaffolds", Journal of the American Chemical Society, 2017.
T. V. Can, Weber, R. T., Walish, J. J., Swager, T. M., and Griffin, R. G., "Frequency-Swept Integrated Solid Effect", Angewandte Chemie International Edition, 2017.
J. Fennell, Hamaguchi, H., Yoon, B., and Swager, T., "Chemiresistor Devices for Chemical Warfare Agent Detection Based on Polymer Wrapped Single-Walled Carbon Nanotubes", Sensors, vol. 17, no. 5, p. 982, 2017.
T. V. Can, Weber, R. T., Walish, J. J., Swager, T. M., and Griffin, R. G., "Ramped-amplitude NOVEL", The Journal of Chemical Physics, vol. 146, no. 15, p. 154204, 2017.
C. Dengiz and Swager, T., "Homoconjugated and Spiro Push–Pull Systems: Cycloadditions of Naphtho- and Anthradiquinones with Electron-Rich Alkynes", Synlett, 2017.
L. D. Zarzar, Kalow, J. A., He, X., Walish, J. J., and Swager, T. M., "Optical visualization and quantification of enzyme activity using dynamic droplet lenses", Proceedings of the National Academy of Sciences, no. 15, pp. 3821 - 3825, 2017.
S. - C. Sha, Zhu, R., Herbert, M. B., Kalow, J. A., and Swager, T. M., "Chemical warfare simulant-responsive polymer nanocomposites: Synthesis and evaluation", Journal of Polymer Science Part A: Polymer Chemistry, 2017.
Nanomaterials that undergo a physical change upon chemical warfare agent (CWA) exposure can potentially be used in detectors to warn soldiers of their presence or in fabrics to provide on-demand protection. In this study, hybrid nanoparticles (NPs) were prepared by grafting a CWA-responsive polymer from a silicon dioxide (SiO2) surface using ring opening metathesis polymerization; the covalent functionalization of the polymers on the NP surface was confirmed by gel permeation chromatography, dynamic light scattering, and transmission electron microscopy analysis. The polymer-grafted SiO2 NPs were found to undergo a pronounced decrease (approximately 200 nm) in their hydrodynamic radius upon exposure to CWA simulants trifluoroacetic acid and diethyl chlorophosphate in toluene. This decrease in hydrodynamic radius is attributed to the electrophile-mediated ionization of the triarylmethanol responsive unit and represents a rare example of polycation formation leading to polymer chain collapse. We have ascribed this ionization-induced collapse to the formation of a favorable stacking interaction between the planar triarylcations. These studies have important implications for the development of breathable fabrics that can provide on-demand protection for soldiers in combat situations.
S. Luppino and Swager, T., "Differentially Substituted Phenylene-Containing Oligoacene Derivatives", Synlett, vol. 28, no. 03, pp. 323 - 326, 2017.
We report the synthesis and characterization of seven new linearly conjugated ladder compounds of the phenylene-containing oligoacene molecule class. Each derivative incorporates a fused four-membered-ring linkage in the acene-like backbone. Crystal packing, spectroscopic and electrochemical properties of the molecules are described.
T. M. Swager, "Impedance for Endocrine Disruption Compounds", ACS Central Science, vol. 3, no. 2, pp. 99 - 100, 2017.
Y. He, Savagatrup, S., Zarzar, L. D., and Swager, T. M., "Interfacial Polymerization on Dynamic Complex Colloids: Creating Stabilized Janus Droplets", ACS Applied Materials & Interfaces, vol. 9, no. 8, pp. 7804 - 7811, 2017.
Complex emulsions, including Janus droplets, are becoming increasingly important in pharmaceuticals and medical diagnostics, the fabrication of microcapsules for drug delivery, chemical sensing, E-paper display technologies, and optics. Because fluid Janus droplets are often sensitive to external perturbation, such as unexpected changes in the concentration of the surfactants or surface-active biomolecules in the environment, stabilizing their morphology is critical for many real-world applications. To endow Janus droplets with resistance to external chemical perturbations, we demonstrate a general and robust method of creating polymeric hemispherical shells via interfacial free-radical polymerization on the Janus droplets. The polymeric hemispherical shells were characterized by optical and fluorescence microscopy, scanning electron microscopy, and confocal laser scanning microscopy. By comparing phase diagrams of a regular Janus droplet and a Janus droplet with the hemispherical shell, we show that the formation of the hemispherical shell nearly doubles the range of the Janus morphology and maintains the Janus morphology under a certain degree of external perturbation (e.g., adding hydrocarbon–water or fluorocarbon–water surfactants). We attribute the increased stability of the Janus droplets to (1) the surfactant nature of the polymeric shell formed and (2) an increase in interfacial tension between hydrocarbon and fluorocarbon due to polymer shell formation. This finding opens the door to utilizing these stabilized Janus droplets in demanding environments.
B. Koo and Swager, T. M., "Interfacial Pressure/Area Sensing: Dual-Fluorescence of Amphiphilic Conjugated Polymers at Water Interfaces", ACS Macro Letters, vol. 6, no. 2, pp. 134 - 138, 2017.
Exciton migration to emissive defects in π-conjugated polymers is a robust signal amplification strategy for optoelectronic sensors. Herein we report end-capped conjugated polymers that show two distinct emissions as a function of interpolymer distances at the air–water and hydrocarbon–water interfaces. Amphiphilic poly(phenylene ethynylene)s (PPEs) end-capped with perylene monoimides display two distinct emission colors (cyan from PPE and red from perylene), the relative intensity of which depends on the surface pressure applied on the Langmuir monolayers. This behavior produces a ratiometric interfacial pressure indicator. Relative quantum yields are maintained at the different surface pressures and hence display no sign of self-quenching of the excitons in an aggregated state. These polymers can be organized at the micelle–water interface in lyotropic liquid crystals, thereby paving the way for potential applications of end-capped amphiphilic conjugated polymers in biosensors and bioimaging.
Q. Zhang, Savagatrup, S., Kaplonek, P., Seeberger, P. H., and Swager, T. M., "Janus Emulsions for the Detection of Bacteria", ACS Central Science, 2017.
Janus emulsion assays that rely on carbohydrate–lectin binding for the detection of Escherichia coli bacteria are described. Surfactants containing mannose are self-assembled at the surface of Janus droplets to produce particles with lectin binding sites. Janus droplets orient in a vertical direction as a result of the difference in densities between the hydrocarbon and fluorocarbon solvents. Binding of lectin to mannose(s) causes agglutination and a tilted geometry. The distinct optical difference between naturally aligned and agglutinated Janus droplets produces signals that can be detected quantitatively. The Janus emulsion assay sensitively and selectively binds to E. coli at 10^4 cfu/mL and can be easily prepared with long-term stability. It provides the basis for the development of inexpensive portable devices for fast, on-site pathogen detection.
S. Nagelberg, Zarzar, L. D., Nicolas, N., Subramanian, K., Kalow, J. A., Sresht, V., Blankschtein, D., Barbastathis, G., Kreysing, M., Swager, T. M., and Kolle, M., "Reconfigurable and responsive droplet-based compound micro-lenses", Nature Communications, 2017.
Micro-scale optical components play a crucial role in imaging and display technology, biosensing, beam shaping, optical switching, wavefront-analysis, and device miniaturization. Herein, we demonstrate liquid compound micro-lenses with dynamically tunable focal lengths. We employ bi-phase emulsion droplets fabricated from immiscible hydrocarbon and fluorocarbon liquids to form responsive micro-lenses that can be reconfigured to focus or scatter light, form real or virtual images, and display variable focal lengths. Experimental demonstrations of dynamic refractive control are complemented by theoretical analysis and wave-optical modelling. Additionally, we provide evidence of the micro-lenses' functionality for two potential applications—integral micro-scale imaging devices and light field display technology—thereby demonstrating both the fundamental characteristics and the promising opportunities for fluid-based dynamic refractive micro-scale compound lenses.
H. Tsujimoto, Ha, D. - G., Markopoulos, G., Chae, H. Sik, Baldo, M. A., and Swager, T. M., "Thermally Activated Delayed Fluorescence and Aggregation Induced Emission with Through-Space Charge Transfer", Journal of the American Chemical Society, 2017.
Emissive molecules comprising a donor and an acceptor bridged by 9,9-dimethylxanthene were studied (XPT, XCT, and XtBuCT). The structures position the donor and acceptor with cofacial alignment at distances of 3.3–3.5 Å wherein efficient spatial charge transfer can occur. The quantum yields were enhanced by excluding molecular oxygen, and thermally activated delayed fluorescence with lifetimes on the order of microseconds was observed. Although the molecules displayed low quantum yields in solution, higher quantum yields were observed in the solid state. Crystal structures revealed π–π intramolecular interactions between a donor and an acceptor; however, the dominant intermolecular interactions were C—H···π, which likely restrict the molecular dynamics to create aggregation-induced enhanced emission. Organic light emitting devices using XPT and XtBuCT as dopants displayed electroluminescence external quantum efficiencies as high as 10%.
W. Ong, Bertani, F., Dalcanale, E., and Swager, T., "Redox Switchable Thianthrene Cavitands", Synthesis, vol. 49, no. 02, pp. 358 - 364, 2016.
A redox-activated vase-to-kite conformational change is reported for a new resorcinarene-based cavitand appended with four quinoxaline-fused thianthrene units. In its neutral state, the thianthrene-containing cavitand was shown by 1H NMR to adopt a closed vase conformation. Upon oxidation, the electrostatic repulsion among the thianthrene radical cations promotes a kite conformation in the thianthrene-containing cavitand. The addition of acid produced a shoulder feature below 300 nm in the cavitand's UV-Vis spectrum that we have assigned to the vase-to-kite conformation change. UV-Vis spectroelectrochemical studies of the cavitand revealed the development of a similar shoulder peak consistent with the oxidation-induced vase-to-kite conformation change. To support that the shoulder peak is diagnostic for a vase-to-kite conformation change, a model molecule constituting a single quinoxaline wall of the cavitand was synthesized and studied. As expected, UV-Vis spectroelectrochemical studies of this model arm did not display a shoulder peak below 300 nm. The oxidation-induced vase-to-kite conformation is further confirmed by the distinctive upfield shift of the methine signal in the 1H NMR spectrum.
S. F. Liu, Lin, S., and Swager, T. M., "An Organocobalt–Carbon Nanotube Chemiresistive Carbon Monoxide Detector", ACS Sensors, vol. 1, no. 4, pp. 354 - 357, 2016.
A chemiresistive detector for carbon monoxide was created from single-walled carbon nanotubes (SWCNTs) by noncovalent modification with diiodo(η5:η1-1-[2-(N,N-dimethylamino)ethyl]-2,3,4,5-tetramethylcyclopentadienyl)-cobalt(III) ([Cp∧CoI2]), an organocobalt complex with an intramolecular amino ligand coordinated to the metal center that is displaced upon CO binding. The unbound amino group can subsequently be transduced chemiresistively by the SWCNT network. The resulting device was shown to have a ppm-level limit of detection and unprecedented selectivity for CO gas among CNT-based chemiresistors. This work, the first molecular-level mechanistic elucidation for a CNT-based chemiresistive detector for CO, demonstrates the efficacy of using an analyte's reactivity to produce another chemical moiety that is readily transduced as a strategy for the rational design of chemiresistive CNT-based detectors.
M. He and Swager, T. M., "Covalent Functionalization of Carbon Nanomaterials with Iodonium Salts", Chemistry of Materials, vol. 28, no. 23, pp. 8542 - 8549, 2016.
Covalent functionalization significantly enhances the utility of carbon nanomaterials for many applications. Herein, we report an efficient method for the covalent functionalization of carbon nanotubes (CNTs) and graphite. This reaction involves the reduction of carbon nanomaterials with sodium naphthalide, followed by the addition of diaryliodonium salts. CNTs, including single-walled, double-walled, and multi-walled variants (SWCNTs, DWCNTs, and MWCNTs, respectively), as well as graphite, can be efficiently functionalized with substituted arene and heteroarene iodonium salts. The preferential transfer of phenyl groups containing electron-withdrawing groups was demonstrated by reactions with unsymmetrical iodonium salts. The lower reactivity of iodonium salts, relative to the more commonly used diazonium ions, presents opportunities for greater diversity in the selective functionalization of carbon nanomaterials.
B. Koo and Swager, T. M., "Highly Emissive Excimers by 2D Compression of Conjugated Polymers", ACS Macro Letters, vol. 5, no. 7, pp. 889 - 893, 2016.
Interactions between π-conjugated polymers are known to create ground-state aggregates, excimers, and exciplexes. With few exceptions, these species exhibit decreased fluorescence quantum yields relative to the isolated polymers in liquid or solid solutions. Herein, we report a method to assemble emissive conjugated polymer excimers and demonstrate their applicability in the detection of selected solvent vapors. Specifically, poly(phenylene ethynylene)s (PPEs) with amphiphilic side chains are organized in a Langmuir monolayer at the air–water interface. Compression of the monolayer results in the reversible conversion from a face-on organization of the π-system relative to the water to what appears to be an incline-stack conformation. The incline-stack organization creates a bright yellow emissive excimeric state with a 28% increase in relative fluorescence quantum yield over the face-on monolayer conformation. Multilayers can be transferred onto glass substrates via the Langmuir–Blodgett method with preservation of the excimer emission. These films are metastable, and after exposure to selected solvent vapors the fluorescence reverts to a cyan color similar to the spectra obtained in solution and in spin-cast films. This behavior has practical utility as a fluorescence-based indicator for selected volatile organic compounds.
Y. Zhao, Rocha, S. V., and Swager, T. M., "Mechanochemical Synthesis of Extended Iptycenes", Journal of the American Chemical Society, vol. 138, no. 42, pp. 13834 - 13837, 2016.
Iptycenes are intriguing compounds receiving considerable attention as a result of their rigid noncompliant three-dimensional architecture. The preparation of larger iptycenes is often problematic, as a result of their limited solubility and synthetic procedures involving multiple Diels–Alder reactions under harsh extended reaction conditions. We report a mechanochemical synthesis of structurally well-defined iptycenes through an iterative reaction sequence, wherein Diels–Alder reactions and a subsequent aromatization afford higher order iptycenes. We further report that double Diels–Alder reactions under solvent-free condition provide facile access to highly functionalized iptycenes with molecular weights over 2000 Da. Quartz crystal microbalance measurements reveal that these materials efficiently absorb the aromatic hydrocarbons benzene and toluene.
M. Krikorian, Voll, C. - C. A., Yoon, M., Venkatesan, K., Kouwer, P. H. J., and Swager, T. M., "Smectic A mesophases from luminescent sandic platinum(II) mesogens", Liquid Crystals, vol. 43, no. 12, pp. 1709 - 1713, 2016.
B. Yoon, Liu, S. F., and Swager, T. M., "Surface-Anchored Poly(4-vinylpyridine)–Single-Walled Carbon Nanotube–Metal Composites for Gas Detection", Chemistry of Materials, vol. 28, no. 16, pp. 5916 - 5924, 2016.
A platform for chemiresistive gas detectors based upon single-walled carbon nanotube (SWCNT) dispersions stabilized by poly(4-vinylpyridine) (P4VP) covalently immobilized onto a glass substrate was developed. To fabricate these devices, a glass substrate with gold electrodes is treated with 3-bromopropyltrichlorosilane. The resulting alkyl bromide coating presents groups that can react with the P4VP to covalently bond (anchor) the polymer–SWCNT composite to the substrate. Residual pyridyl groups in P4VP not consumed in this quaternization reaction are available to coordinate metal nanoparticles or ions chosen to confer selectivity and sensitivity to target gas analytes. Generation of P4VP coordinated to silver nanoparticles produces an enhanced response to ammonia gas. The incorporation of soft Lewis acidic Pd2+ cations by binding PdCl2 to P4VP yields a selective and highly sensitive device that changes resistance upon exposure to vapors of thioethers. The latter materials have utility for odorized fuel leak detection, monitoring of microbial activity, and breath diagnostics. A third demonstration makes use of permanganate incorporation to produce devices with large responses to vapors of volatile organic compounds that are susceptible to oxidation.
S. Ishihara, Azzarelli, J. M., Krikorian, M., and Swager, T. M., "Ultratrace Detection of Toxic Chemicals: Triggered Disassembly of Supramolecular Nanotube Wrappers", Journal of the American Chemical Society, vol. 138, pp. 8221 - 8227, 2016.
Chemical sensors offer opportunities for improving personal security, safety, and health. To enable broad adoption of chemical sensors requires performance and cost advantages that are best realized from innovations in the design of the sensing (transduction) materials. Ideal materials are sensitive and selective to specific chemicals or chemical classes and provide a signal that is readily interfaced with portable electronic devices. Herein we report that wrapping single walled carbon nanotubes with metallo-supramolecular polymers creates sensory devices with a dosimetric (time- and concentration-integrated) increase in electrical conductivity that is triggered by electrophilic chemical substances such as diethylchlorophosphate, a nerve agent simulant. The mechanism of this process involves the disassembly of the supramolecular polymer, and we demonstrate its utility in a wireless inductively powered sensing system based on near-field communication technology. Specifically, the dosimeters can be powered and read wirelessly with conventional smartphones to create sensors with ultratrace detection limits.
R. Zhu, Azzarelli, J. M., and Swager, T. M., "Wireless Hazard Badges to Detect Nerve-Agent Simulants", Angewandte Chemie International Edition, vol. 55, pp. 9662 - 9666, 2016.
Human exposure to hazardous chemicals can have adverse short- and long-term health effects. In this Communication, we have developed a single-use wearable hazard badge that dosimetrically detects diethylchlorophosphate (DCP), a model organophosphorous cholinesterase inhibitor simulant. Improved chemically actuated resonant devices (CARDs) are fabricated in a single step and unambiguously relate changes in chemiresistance to a wireless readout. To provide selective and readily manufacturable sensor elements for this platform, we developed an ionic-liquid-mediated single walled carbon nanotube based chemidosimetric scheme with DCP limits of detection of 28 ppb. As a practical demonstration, an 8 h workday time weighted average equivalent exposure of 10 ppb DCP effects an irreversible change in smartphone readout.
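For context, the dosimetric readout integrates concentration over time; the standard occupational time-weighted average (a generic industrial-hygiene definition, not an equation taken from this paper) is

$$\mathrm{TWA} = \frac{\sum_i c_i\, t_i}{\sum_i t_i},$$

so a constant 10 ppb over an 8 h shift corresponds to an integrated dose of 80 ppb·h, which is the quantity a single-use badge of this kind accumulates before its irreversible change in readout.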
G. D. Gutierrez, Sazama, G. T., Wu, T., Baldo, M. A., and Swager, T. M., "Red Phosphorescence from Benzo[2,1,3]thiadiazoles at Room Temperature", The Journal of Organic Chemistry, vol. 81, pp. 4789-4796, 2016.
E. M. Sletten and Swager, T. M., "Readily accessible multifunctional fluorous emulsions", Chem. Sci., 2016.
Strategies for the facile fabrication of nanoscale materials and devices represent an increasingly important challenge for chemists. Here, we report a simple, one-pot procedure for the formation of perfluorocarbon emulsions with defined functionalization. The fluorous core allows for small molecules containing a fluorous tail to be stabilized inside the emulsions. The emulsions can be formed using a variety of hydrophilic polymers resulting in an array of sizes (90 nm to >1 micron) and surface charges (-95 mV to 65 mV) of fluid particles. The surface of the emulsions can be further functionalized, covalently or non-covalently, through in situ or post-emulsion modification. The total preparation time is 30 minutes or less from commercially available reagents without specialized equipment. We envision these emulsions to be applicable to both biological and materials systems.
N. Willis-Fox, Belger, C., Fennell, Jr., J. F., Evans, R. C., and Swager, T. M., "Threading the Needle: Fluorescent Poly-pseudo-rotaxanes for Size-Exclusion Sensing", Chemistry of Materials, vol. 28, pp. 2685-2691, 2016.
C. A. Zuniga, Goods, J. B., Cox, J. R., and Swager, T. M., "Long-Term High-Temperature Stability of Functionalized Graphene Oxide Nanoplatelets in Arab-D and API Brine", ACS Applied Materials & Interfaces, vol. 8, pp. 1780-1785, 2016.
F. Bertani, Riboni, N., Bianchi, F., Brancatelli, G., Sterner, E. S., Pinalli, R., Geremia, S., Swager, T. M., and Dalcanale, E., "Triptycene-Roofed Quinoxaline Cavitands for the Supramolecular Detection of BTEX in Air", Chemistry – A European Journal, vol. 22, pp. 3312–3319, 2016.
J. Im, Sterner, E. S., and Swager, T. M., "Integrated Gas Sensing System of SWCNT and Cellulose Polymer Concentrator for Benzene, Toluene, and Xylenes", Sensors, vol. 16, p. 183, 2016.
S. Chang, Han, G. D., Weis, J. G., Park, H., Hentz, O., Zhao, Z., Swager, T. M., and Gradecak, S., "Transition Metal-Oxide Free Perovskite Solar Cells Enabled by a New Organic Charge Transport Layer", ACS Applied Materials & Interfaces, vol. 8, no. 13, pp. 8511–8519, 2016.
G. D. Han, Maurano, A., Weis, J. G., Bulović, V., and Swager, T. M., "VOC enhancement in polymer solar cells with isobenzofulvene–C60 adducts", Organic Electronics, vol. 31, pp. 48 - 55, 2016.
J. F. Fennell, Liu, S. F., Azzarelli, J. M., Weis, J. G., Rochat, S., Mirica, K. A., Ravnsbæk, J. B., and Swager, T. M., "Nanowire Chemical/Biological Sensors: Status and a Roadmap for the Future", Angewandte Chemie International Edition, vol. 55, pp. 1266–1281, 2016.
J. G. Weis, Ravnsbæk, J. B., Mirica, K. A., and Swager, T. M., "Employing Halogen Bonding Interactions in Chemiresistive Gas Sensors", ACS Sensors, vol. 1, no. 2, pp. 115-119, 2016.
G. D. Gutierrez, Coropceanu, I., Bawendi, M. G., and Swager, T. M., "A Low Reabsorbing Luminescent Solar Concentrator Employing π-Conjugated Polymers", Advanced Materials, vol. 28, no. 3, pp. 497-501, 2016.
A highly efficient thin-film luminescent solar concentrator (LSC) utilizing two π-conjugated polymers as antennae for small amounts of the valued perylene bisimide Lumogen F Red 305 is presented. The LSC exhibits high photoluminescence quantum yield, low reabsorption, and relatively low refractive indices for waveguide matching. A Monte Carlo simulation predicts the LSC to possess exceptionally high optical efficiencies on large scales.
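The Monte Carlo prediction mentioned above can be illustrated with a toy photon-transport model. The sketch below is not the authors' simulation; the parameter names and values (plqy, p_reabsorb, n_waveguide) are hypothetical placeholders for the photoluminescence quantum yield, the per-event reabsorption probability, and the waveguide refractive index that govern LSC optical efficiency.

```python
# Minimal Monte Carlo sketch of LSC optical efficiency (illustrative only).
import math
import random

def lsc_optical_efficiency(n_photons=100_000, plqy=0.95, n_waveguide=1.5,
                           p_reabsorb=0.10, seed=0):
    """Estimate the fraction of absorbed photons guided to the waveguide edge.

    Each absorbed photon is re-emitted with probability `plqy`; an emitted
    photon is lost through the faces if it falls in the escape cone for
    refractive index `n_waveguide`, and with probability `p_reabsorb` it is
    reabsorbed by a dye and the emission cycle repeats.
    """
    rng = random.Random(seed)
    # Fraction of isotropic emission lost through the faces: 1 - cos(theta_c),
    # with sin(theta_c) = 1/n for total internal reflection.
    escape_fraction = 1.0 - math.sqrt(1.0 - 1.0 / n_waveguide**2)
    collected = 0
    for _ in range(n_photons):
        while True:
            if rng.random() > plqy:            # non-radiative loss
                break
            if rng.random() < escape_fraction:  # escape-cone loss
                break
            if rng.random() < p_reabsorb:       # reabsorbed; emit again
                continue
            collected += 1                      # guided to the edge
            break
    return collected / n_photons

if __name__ == "__main__":
    print(f"estimated optical efficiency ~ {lsc_optical_efficiency():.3f}")
```

Even this toy model reproduces the qualitative trade-off the abstract highlights: with plqy < 1, every reabsorption event multiplies the losses, so suppressing reabsorption is what allows high efficiency at large scales.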
T. M. Swager, Bulović, V., Han, G. D., Maurano, A., Po, R., and Pellegrino, A., "Functionalized nanostructures and devices including photovoltaic devices.", 2015.
Embodiments described herein provide functionalized carbon nanostructures for use in various devices, including photovoltaic devices (e.g., solar cells). In some cases, the carbon nanostructures are fullerenes substituted with one or more isobenzofulvene species and/or indane species. Devices including such materials may exhibit increased efficiency, increased open circuit potential, high electron/hole mobility, and/or low electrical resistance.
M. G. Campbell, Liu, S. F., Swager, T. M., and Dinca, M., "Chemiresistive Sensor Arrays from Conductive 2D Metal-Organic Frameworks", Journal of the American Chemical Society, vol. 137, pp. 13780–13783, 2015.
Applications of porous metal-organic frameworks (MOFs) in electronic devices are rare, owing in large part to a lack of MOFs that display electrical conductivity. Here, we describe the use of conductive two-dimensional (2D) MOFs as a new class of materials for chemiresistive sensing of volatile organic compounds (VOCs). We demonstrate that a family of structurally analogous 2D MOFs can be used to construct a cross-reactive sensor array that allows for clear discrimination between different categories of VOCs. Experimental data show that multiple sensing mechanisms are operative with high degrees of orthogonality, establishing that the 2D MOFs used here are mechanistically unique and offer advantages relative to other known chemiresistor materials.
Y. Zhao, Chen, L., and Swager, T. M., "Simultaneous Identification of Neutral and Anionic Species in Complex Mixtures without Separation", Angewandte Chemie International Edition, 2015.
A chemosensory system is reported that operates without the need for separation techniques and is capable of identifying anions and structurally similar bioactive molecules. In this strategy, the coordination of analytes to a metal complex with an open binding cleft generates "static structures" on the NMR timescale. Unique signals are created by strategically placing fluorine atoms in close proximity to bound analytes so that small structural differences induce distinct 19F NMR shifts that can be used to identify each analyte. The utility of this method is illustrated by quantifying caffeine levels in coffee, by identifying ingredients in tea and energy drinks, and by discriminating between multiple biogenic amines with remote structural differences six carbon atoms away from the binding site. We further demonstrate the simultaneous identification of multiple neutral and anionic species in a complex mixture.
C. Belger, Weis, J. G., Egap, E., and Swager, T. M., "Colorimetric Stimuli-Responsive Hydrogel Polymers for the Detection of Nerve Agent Surrogates", Macromolecules, vol. 48, pp. 7990–7994, 2015.
The threat of chemical warfare agents (CWAs) necessitates the development of functional materials that not only quickly detect the presence of CWAs but also actively protect against their toxicity. The authors have synthesized responsive units that exhibit colorimetric responses upon exposure to CWAs and incorporated them into a versatile detection platform based on copolymers prepared by ring-opening metathesis polymerization (ROMP). The theoretical detection limits for CWA simulants in solution for these polymers are as low as 1 ppm. By incorporating hydrogel-promoting units as pendant chains, the authors are able to obtain polymers that instantly respond to CWA vapors and are easy to regenerate to the deactivated state by simple treatment with ammonium hydroxide vapor. The authors further demonstrate a collapse of the polymer gels in response to trifluoroacetic acid (TFA), a strong acid that produces a more fully ionized state as a result of its more caustic nature.
J. A. Kalow and Swager, T. M., "Synthesis of Miktoarm Branched Conjugated Copolymers by ROMPing In and Out.", ACS Macro Letters, vol. 4, pp. 1229–1233, 2015.
Architecture represents a promising yet underutilized control element in polymer design due to the challenging synthesis of compositionally varied branched copolymers. We report the one-pot synthesis of miktoarm branched polymers by ring-opening metathesis polymerization. In this work, we graft to and from telechelic poly(3-hexylthiophene), which is end-capped by oxime click chemistry, using various norbornene monomers. The self-assembly of the resulting miktoarm H-shaped conjugated polymers is studied in solution and in the solid state. A dual stimuli-responsive miktoarm polymer is prepared that displays pH-switchable lower critical solution temperature and fluorescence.
K. Kawasumi, Wu, T., Zhu, T., Chae, H. Sik, Van Voorhis, T., Baldo, M. A., and Swager, T. M., "Thermally Activated Delayed Fluorescence Materials Based on Homoconjugation Effect of Donor–Acceptor Triptycenes", Journal of the American Chemical Society, vol. 137, pp. 11908-11911, 2015.
Donor–acceptor triptycenes, TPA-QNX(CN)2 and TPA-PRZ(CN)2, were synthesized and their emissive properties were studied. They exhibited a blue-green fluorescence with emission lifetimes on the order of a microsecond in cyclohexane at room temperature. The long lifetime emission is quenched by O2 and is attributed to thermally activated delayed fluorescence (TADF). Unimolecular TADF is made possible by the separation and weak coupling due to homoconjugation of the HOMO and LUMO on different arms of the three-dimensional donor–acceptor triptycene. Organic light emitting devices (OLEDs) were fabricated using TPA-QNX(CN)2 and TPA-PRZ(CN)2 as emitters which displayed electroluminescence with efficiencies as high as 9.4% EQE.
T. V. Can, Walish, J. J., Swager, T. M., and Griffin, R. G., "Time domain DNP with the NOVEL sequence", The Journal of Chemical Physics, vol. 143, 2015.
F. Niroui, Wang, A. I., Sletten, E. M., Song, Y., Kong, J., Yablonovitch, E., Swager, T. M., Lang, J. H., and Bulović, V., "Tunneling Nanoelectromechanical Switches Based on Compressible Molecular Thin Films", ACS Nano, vol. 9, pp. 7886-7894, 2015.
Abrupt switching behavior and near-zero leakage current of nanoelectromechanical (NEM) switches are advantageous properties through which NEMs can outperform conventional semiconductor electrical switches. To date, however, typical NEMs structures require high actuation voltages and can prematurely fail through permanent adhesion (defined as stiction) of device components. To overcome these challenges, in the present work we propose a NEM switch, termed a "squitch," which is designed to electromechanically modulate the tunneling current through a nanometer-scale gap defined by an organic molecular film sandwiched between two electrodes. When voltage is applied across the electrodes, the generated electrostatic force compresses the sandwiched molecular layer, thereby reducing the tunneling gap and causing an exponential increase in the current through the device. The presence of the molecular layer avoids direct contact of the electrodes during the switching process. Furthermore, as the layer is compressed, the increasing surface adhesion forces are balanced by the elastic restoring force of the deformed molecules, which can promote zero net stiction and recoverable switching. Through numerical analysis, we demonstrate the potential of optimizing squitch design to enable large on–off ratios beyond 6 orders of magnitude with operation in the sub-1 V regime and with nanosecond switching times. Our preliminary experimental results based on metal–molecule–graphene devices suggest the feasibility of the proposed tunneling switching mechanism. With optimization of device design and material engineering, squitches can give rise to a broad range of low-power electronic applications.
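The exponential gap dependence invoked above is the generic low-bias tunneling relation (a textbook approximation, not an expression taken from this paper):

$$ I \;\propto\; V\, e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2m\phi}}{\hbar}, $$

where d is the electrode separation set by the molecular film, φ is the effective barrier height, and m is the electron mass. Compressing the film by Δd multiplies the current by e^{2κΔd}, which is how sub-nanometer deformations can produce the several-orders-of-magnitude on-off ratios described above.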
Y. Zhao and Swager, T. M., "Functionalized Metalated Cavitands via Imidation and Late-Stage Elaboration", European Journal of Organic Chemistry, vol. 2015, pp. 4593–4597, 2015.
Efficient methods for the preparation of functionalized metallated cavitands are described. Functional groups can be either introduced by an imidation of metal-oxo complexes or by a late-stage elaboration of the imido ligands. By using diversified iminophosphorane (PPh3=NR) reagents, π-conjugated pyrene, redox active ferrocene, and polymerizable norbornene moieties were successfully introduced. Furthermore, the iodo and alkynyl groups on the imido ligands are capable of undergoing efficient Sonogashira cross-coupling and copper-catalyzed azide alkyne cycloaddition reactions, thereby providing facile access to complex architectures containing metallated cavitands.
S. F. Liu, Moh, L. C. H., and Swager, T. M., "Single-Walled Carbon Nanotube–Metalloporphyrin Chemiresistive Gas Sensor Arrays for Volatile Organic Compounds", Chemistry of Materials, vol. 27, pp. 3560-3563, 2015.
S. F. Liu, Petty, A. R., Sazama, G. T., and Swager, T. M., "Single-Walled Carbon Nanotube/Metalloporphyrin Composites for the Chemiresistive Detection of Amines and Meat Spoilage", Angewandte Chemie International Edition, vol. 54, pp. 6554–6557, 2015.
Chemiresistive detectors for amine vapors were made from single-walled carbon nanotubes by noncovalent modification with cobalt meso-arylporphyrin complexes. We show that through changes in the oxidation state of the metal, the electron-withdrawing character of the porphyrinato ligand, and the counteranion, the magnitude of the chemiresistive response to ammonia could be improved. The devices exhibited sub-ppm sensitivity and high selectivity toward amines as well as good stability to air, moisture, and time. The application of these chemiresistors in the detection of various biogenic amines (i.e. putrescine, cadaverine) and in the monitoring of spoilage in raw meat and fish samples (chicken, pork, salmon, cod) over several days was also demonstrated.
M. G. Campbell, Sheberla, D., Liu, S. F., Swager, T. M., and Dincă, M., "Cu3(hexaiminotriphenylene)2: An Electrically Conductive 2D Metal-Organic Framework for Chemiresistive Sensing", Angewandte Chemie International Edition, 2015.
The utility of metal-organic frameworks (MOFs) as functional materials in electronic devices has been limited to date by a lack of MOFs that display high electrical conductivity. Here, we report the synthesis of a new electrically conductive 2D MOF, Cu3(HITP)2 (HITP = 2,3,6,7,10,11-hexaiminotriphenylene), which displays a bulk conductivity of 0.2 S cm-1 (pellet, two-point-probe). Devices synthesized by simple drop casting of Cu3(HITP)2 dispersions function as reversible chemiresistive sensors, capable of detecting sub-ppm levels of ammonia vapor. Comparison with the isostructural 2D MOF Ni3(HITP)2 shows that the copper sites are critical for ammonia sensing, indicating that rational design/synthesis can be used to tune the functional properties of conductive MOFs.
T. M. Swager, Im, J., Petty, A. R., Schnorr, J., and Schmaedicke, C., "Devices and methods including a preconcentrator material for detection of analytes", PCT Int. Appl., Massachusetts Institute of Technology, USA, 94 pp., 2015.
Embodiments described herein provide devices and methods for the determination of analytes. The device typically includes an absorbent material that allows for an analyte sample to be concentrated and analyzed simultaneously and within a short period of time (e.g., less than 10 s). Embodiments described herein can provide portable and easily operable devices for on-site, real-time field monitoring with high sensitivity, selectivity, and fast response time.
L. D. Zarzar, Sresht, V., Sletten, E. M., Kalow, J. A., Blankschtein, D., and Swager, T. M., "Dynamically reconfigurable complex emulsions via tunable interfacial tensions", Nature, vol. 518, pp. 520–524, 2015.
Emulsification is a powerful, well-known technique for mixing and dispersing immiscible components within a continuous liquid phase. Consequently, emulsions are central components of medicine, food and performance materials. Complex emulsions, including Janus droplets (i.e., droplets with faces of differing chemistries) and multiple emulsions, are of increasing importance in pharmaceuticals and medical diagnostics, in the fabrication of microparticles and capsules for food, in chemical separations, in cosmetics, and in dynamic optics. Because complex emulsion properties and functions are related to the droplet geometry and composition, the development of rapid, simple fabrication approaches allowing precise control over the droplets' physical and chemical characteristics is critical. Significant advances in the fabrication of complex emulsions have been made using a number of procedures, ranging from large-scale, less precise techniques that give compositional heterogeneity using high-shear mixers and membranes, to small-volume but more precise microfluidic methods. However, such approaches have yet to create droplet morphologies that can be controllably altered after emulsification. Reconfigurable complex liquids potentially have great utility as dynamically tunable materials. Here we describe an approach to the one-step fabrication of three- and four-phase complex emulsions with highly controllable and reconfigurable morphologies. The fabrication makes use of the temperature-sensitive miscibility of hydrocarbon, silicone and fluorocarbon liquids, and is applied to both the microfluidic and the scalable batch production of complex droplets. We demonstrate that droplet geometries can be alternated between encapsulated and Janus configurations by varying the interfacial tensions using hydrocarbon and fluorinated surfactants including stimuli-responsive and cleavable surfactants. This yields a generalizable strategy for the fabrication of multiphase emulsions with controllably reconfigurable morphologies and the potential to create a wide range of responsive materials.
T. M. Swager and Im, J., "Filter materials including functionalized cellulose", PCT Int. Appl., Massachusetts Institute of Technology, USA, 41 pp., 2015.
Embodiments described herein provide materials and methods for the absorption or filtration of various species and analytes. In some cases, the materials may be used to remove or reduce the amount of a substance in a vapor sample (e.g., cigarette smoke).
B. Koo, Sletten, E. M., and Swager, T. M., "Functionalized Poly(3-hexylthiophene)s via Lithium-Bromine Exchange.", Macromolecules, vol. 48, pp. 229–235, 2015.
Poly(3-hexylthiophene) (P3HT) is one of the most extensively investigated conjugated polymers and has been employed as the active material in many devices including field-effect transistors, organic photovoltaics and sensors. As a result, methods to further tune the properties of P3HT are desirable for specific applications. Herein, we report a facile postpolymerization modification strategy to functionalize the 4-position of commercially available P3HT in two simple steps: bromination of the 4-position of P3HT (Br-P3HT) followed by lithium-bromine exchange and quenching with an electrophile. We achieved near-quantitative lithium-bromine exchange with Br-P3HT, which requires over 100 thienyl lithiates to be present on a single polymer chain. The lithiated P3HT is readily combined with functional electrophiles, resulting in P3HT derivatives with ketones, secondary alcohols, a trimethylsilyl (TMS) group, fluorine, or an azide at the 4-position. We demonstrated that the azide-modified P3HT could undergo Cu-catalyzed or Cu-free click chemistry, significantly expanding the complexity of the structures that can be appended to P3HT using this method.
Y. Zhao and Swager, T. M., "Simultaneous Chirality Sensing of Multiple Amines by 19F NMR.", Journal of the American Chemical Society, vol. 137, pp. 3221–3224, 2015.
The rapid detection and differentiation of chiral compounds is important to synthetic, medicinal, and biological chemistry. Palladium complexes with chiral pincer ligands have utility in determining the chirality of various amines. The binding of enantiomeric amines induces distinct 19F NMR shifts of the fluorine atoms appended on the ligand that defines a chiral environment around palladium. Further, this method has the ability to evaluate the enantiomeric composition and discriminate between enantiomers with chiral centers several carbons away from the binding site. The wide detection window provided by optimized chiral chemosensors allows the simultaneous identification of as many as 12 chiral amines. The extraordinary discriminating ability of this method is demonstrated by the resolution of chiral aliphatic amines that are difficult to separate using chiral chromatography.
J. G. Weis and Swager, T. M., "Thiophene-Fused Tropones as Chemical Warfare Agent-Responsive Building Blocks.", ACS Macro Letters, vol. 4, pp. 138–142, 2015.
We report the synthesis of dithienobenzotropone-based conjugated alternating copolymers by direct arylation polycondensation. Postpolymerization modification by hydride reduction yields cross-conjugated, reactive hydroxyl-containing copolymers that undergo phosphorylation and ionization upon exposure to the chemical warfare agent mimic diethylchlorophosphate (DCP). The resulting conjugated, cationic copolymer is highly colored and facilitates the spectroscopic and colorimetric detection of DCP in both solution and thin-film measurements.
J. M. Azzarelli, Mirica, K. A., Ravnsbæk, J. B., and Swager, T. M., "Wireless gas detection with a smartphone via rf communication", Proceedings of the National Academy of Sciences, vol. 111, pp. 18162-18166, 2014.
Chemical sensing is of critical importance to human health, safety, and security, yet it is not broadly implemented because existing sensors often require trained personnel, expensive and bulky equipment, and have large power requirements. This study reports the development of a smartphone-based sensing strategy that employs chemiresponsive nanomaterials integrated into the circuitry of commercial near-field communication tags to achieve non-line-of-sight, portable, and inexpensive detection and discrimination of gas-phase chemicals (e.g., ammonia, hydrogen peroxide, cyclohexanone, and water) at part-per-thousand and part-per-million concentrations.
D. den Boer, Weis, J. G., Zuniga, C. A., Sydlik, S. A., and Swager, T. M., "Apparent roughness as indicator of (local) deoxygenation of graphene oxide.", Chemistry of Materials, vol. 26, pp. 4849–4855, 2014.
Detailed characterization of graphene oxide (GO) and its reduced forms continues to be a challenge. We have employed scanning tunneling microscopy (STM) to examine GO samples with varying degrees of deoxygenation via controlled chemical reduction. Analysis of the roughness of the apparent height in STM topographic measurements, i.e., the "apparent roughness", revealed a correlation between increasing deoxygenation and decreasing apparent roughness. This analysis can therefore be a useful supplement to the techniques currently available for the study of GO and related materials. The presence of a high electric field underneath the STM tip can locally induce a reaction on the GO basal plane that leads to local deoxygenation, and the restoration of the sp2 hybridization of the carbons promotes increased planarity. These findings are in line with the apparent roughness values found for GO at varying levels of chemical reduction and illustrate the value of having a tool to gain structural/chemical insight on a local scale. This is the first example of employing an STM tip to locally reduce GO to reduced GO (rGO) and partially reduced GO (prGO) without locally destroying the graphene sample. Local manipulation on the nanoscale has utility for graphene nanoelectronics, and analysis employing the apparent roughness is an additional tool for the study of graphene oxide and related basal plane chemistry.
J. B. Ravnsbaek and Swager, T. M., "Mechanochemical Synthesis of Poly(phenylene vinylenes).", ACS Macro Letters, vol. 3, pp. 305–309, 2014.
We report a simple, rapid, and solvent-free methodology for solid-state polymerizations yielding poly(phenylene vinylenes) (PPVs) promoted by ball-milling. This solid-state Gilch polymerization method produces PPVs in as little as five minutes of milling. Detailed investigations of the parameter space governing the solid-state polymerization, i.e., milling time, base strength, solid-state dilution, milling frequency, and size of milling balls, revealed that polymerization by ball-milling is a rapid process achieving number-average molecular weights of up to 40 kDa in up to 70% yield. To explore the scope, a solid-state polymerization via the dithiocarbamate precursor route is explored.
W. P. Forrest, Weis, J. G., John, J. M., Axtell, J. C., Simpson, J. H., Swager, T. M., and Schrock, R. R., "Stereospecific Ring-Opening Metathesis Polymerization of Norbornadienes Employing Tungsten Oxo Alkylidene Initiators.", Journal of the American Chemical Society, vol. 136, pp. 10910–10913, 2014.
We report here the polymerization of several 7-isopropylidene-2,3-disubstituted norbornadienes, 7-oxa-2,3-dicarboalkoxynorbornadienes, and 11-oxa-benzonorbornadienes with a single tungsten oxo alkylidene catalyst, W(O)(CH-t-Bu)(OHMT)(Me2Pyr) (OHMT = 2,6-dimesitylphenoxide; Me2Pyr = 2,5-dimethylpyrrolide), to give cis, stereoregular polymers. The tacticities of the menthyl ester derivatives of two polymers were determined. For poly(7-isopropylidene-2,3-dicarbomenthoxynorbornadiene) the structure was shown to be cis, isotactic, while for poly(7-oxa-2,3-dicarbomenthoxynorbornadiene) the structure was shown to be cis, syndiotactic. A bis-trifluoromethyl-7-isopropylidene norbornadiene was not polymerized stereoregularly with W(O)(CHCMe2Ph)(Me2Pyr)(OHMT) alone, but a cis, stereoregular polymer was formed in the presence of 1 equiv of B(C6F5)3.
V. Sathyendran, McAuliffe, G. N., Swager, T., Freeman, J. T., Taylor, S. L., and Roberts, S. A., "Clostridium difficile as a cause of healthcare-associated diarrhoea among children in Auckland, New Zealand: clinical and molecular epidemiology", European Journal of Clinical Microbiology & Infectious Diseases, vol. 33, pp. 1741–1747, 2014.
We aimed to determine the incidence of Clostridium difficile infection (CDI), the molecular epidemiology of circulating C. difficile strains and risk factors for CDI among hospitalised children in the Auckland region. A cross-sectional study was undertaken of hospitalised children <15 years of age in two hospitals investigated for healthcare-associated diarrhoea between November 2011 and June 2012. Stool specimens were analysed for the presence of C. difficile using a two-step testing algorithm including polymerase chain reaction (PCR). C. difficile was cultured and PCR ribotyping performed. Demographic data, illness characteristics and risk factors were compared between children with and without CDI. Non-duplicate stool specimens were collected from 320 children with a median age of 1.2 years (range 3 days to 15 years). Forty-six of the patients tested (14%) met the definition for CDI. The overall incidence of CDI was 2.0 per 10,000 bed days. The percentage of positive tests among neonates was only 2.6%. PCR ribotyping showed a range of strains, with ribotype 014 being the most common. Significant risk factors for CDI were treatment with proton pump inhibitors [risk ratio (RR) 1.74, 95% confidence interval (CI) 1.09-5.59; p = 0.002], presence of underlying malignancy (RR 2.71, 95% CI 1.65-4.62; p = 0.001), receiving chemotherapy (RR 2.70, 95% CI 1.41-4.83; p = 0.003) and exposure to antibiotics (RR 1.17, 95% CI 0.99-1.17; p = 0.03). C. difficile is an important cause of healthcare-associated diarrhoea in this paediatric population. The notion that neonatal populations will always have high rates of colonisation with C. difficile may not be correct. Several risk factors associated with CDI among adults were also found to be significant.
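For reference, risk ratios of this kind follow the standard 2×2-table formulas (generic epidemiology, not a recomputation from the study data): with a exposed cases, b exposed non-cases, c unexposed cases, and d unexposed non-cases,

$$ \mathrm{RR} = \frac{a/(a+b)}{c/(c+d)}, \qquad 95\%\ \mathrm{CI} = \exp\!\left( \ln \mathrm{RR} \pm 1.96 \sqrt{\frac{1}{a} - \frac{1}{a+b} + \frac{1}{c} - \frac{1}{c+d}} \right). $$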
E. M. Sletten and Swager, T. M., "Fluorofluorophores: Fluorescent Fluorous Chemical Tools Spanning the Visible Spectrum.", Journal of the American Chemical Society, vol. 136, pp. 13574–13577, 2014.
\"Fluoro\" refers to both fluorescent and fluorinated compds. Despite the shared prefix, there are very few fluorescent mols. that are sol. in perfluorinated solvents. This paucity is surprising, given that optical microscopy is a ubiquitous technique throughout the phys. sciences and the orthogonality of fluorous materials is a commonly exploited strategy in synthetic chem., materials science, and chem. biol. We have addressed this shortage by synthesizing a panel of \"fluorofluorophores,\" fluorescent mols. contg. high wt. percent fluorine with optical properties spanning the visible spectrum. We demonstrate the utility of these fluorofluorophores by prepg. fluorescent perfluorocarbon nanoemulsions. [on SciFinder(R)]
T. V. Can, Corzilius, B., Walish, J. J., Griffin, R. G., Caporini, M. A., Rosay, M., Maas, W. E., Mentink-Vigier, F., Vega, S., Baldus, M., and Swager, T. M., "Overhauser effects in insulating solids", The Journal of Chemical Physics, vol. 141, p. 064202, 2014.
We report magic angle spinning, dynamic nuclear polarization (DNP) experiments at magnetic fields of 9.4 T, 14.1 T, and 18.8 T using the narrow line polarizing agents 1,3-bisdiphenylene-2-phenylallyl (BDPA) dispersed in polystyrene, and sulfonated-BDPA (SA-BDPA) and trityl OX063 in glassy glycerol/water matrices. The (1)H DNP enhancement field profiles of the BDPA radicals exhibit a significant DNP Overhauser effect (OE) as well as a solid effect (SE) despite the fact that these samples are insulating solids. In contrast, trityl exhibits only a SE enhancement. Data suggest that the appearance of the OE is due to rather strong electron-nuclear hyperfine couplings present in BDPA and SA-BDPA, which are absent in trityl and perdeuterated BDPA (d21-BDPA). In addition, and in contrast to other DNP mechanisms such as the solid effect or cross effect, the experimental data suggest that the OE in non-conducting solids scales favorably with magnetic field, increasing in magnitude in going from 5 T, to 9.4 T, to 14.1 T, and to 18.8 T. Simulations using a model two spin system consisting of an electron hyperfine coupled to a (1)H reproduce the essential features of the field profiles and indicate that the OE in these samples originates from the zero and double quantum cross relaxation induced by fluctuating hyperfine interactions between the intramolecular delocalized unpaired electrons and their neighboring nuclei, and that the size of these hyperfine couplings is crucial to the magnitude of the enhancements. Microwave power dependent studies show that the OE saturates at considerably lower power levels than the solid effect in the same samples. Our results provide new insights into the mechanism of the Overhauser effect, and also provide a new approach to perform DNP experiments in chemical, biophysical, and physical systems at high magnetic fields.
S. R. Yost, Lee, J., Wilson, M. W. B., Wu, T., McMahon, D. P., Parkhurst, R. R., Thompson, N. J., Congreve, D. N., Rao, A., Johnson, K., Sfeir, M. Y., Bawendi, M. G., Swager, T. M., Friend, R. H., Baldo, M. A., and Van Voorhis, T., "A transferable model for singlet-fission kinetics [Erratum]", Nature Chemistry, vol. 6, p. 649, 2014.
On page 492, the 12th author's name was incomplete; the corrected name is given and the online version has been corrected.
S. R. Yost, Lee, J., Wilson, M. W. B., Wu, T., McMahon, D. P., Parkhurst, R. R., Thompson, N. J., Congreve, D. N., Rao, A., Johnson, K., Sfeir, M. Y., Bawendi, M. G., Swager, T. M., Friend, R. H., Baldo, M. A., and Van Voorhis, T., "A transferable model for singlet-fission kinetics.", Nature Chemistry, vol. 6, pp. 492–497, 2014.
Exciton fission is a process that occurs in certain organic materials whereby one singlet exciton splits into two independent triplets. In photovoltaic devices these two triplet excitons can each generate an electron, producing quantum yields per photon of >100% and potentially enabling single-junction power efficiencies >40%. Fission dynamics were measured using ultrafast photoinduced absorption, and a first-principles expression is presented that successfully reproduces the fission rate in materials with vastly different structures. Fission is nonadiabatic and Marcus-like in weakly interacting systems, becoming adiabatic and coupling-independent at larger interaction strengths. In neat films, fission yields near unity were demonstrated even when monomers are separated by >5 Å. For efficient solar cells, fission must outcompete charge generation from the singlet exciton. This work lays the foundation for tailoring molecular properties like solubility and energy level alignment while maintaining the high fission yield required for photovoltaic applications.
T. M. Swager and Zhong, Y. Lin., "Methods involving graphene and functionalized graphene", U.S. Pat. Appl. Publ., Massachusetts Institute of Technology, USA, 19 pp., 2014.
Embodiments relating to the synthesis and processing of graphene molecules are provided. In some cases, methods for the electrochemical expansion and/or functionalization of graphene molecules are provided. In some embodiments, one or more species may be intercalated between adjacent graphene sheets.
D. J. Schipper, Moh, L. C. H., Müller, P., and Swager, T. M., "Dithiolodithiole as a building block for conjugated materials", Angewandte Chemie International Edition, vol. 53, pp. 5847–5851, 2014.
The development of new conjugated organic materials for dyes, sensors, imaging, and flexible light emitting diodes, field-effect transistors, and photovoltaics has largely relied upon assembling π-conjugated molecules and polymers from a limited number of building blocks. The use of the dithiolodithiole heterocycle as a conjugated building block for organic materials is described. The resulting materials exhibit complementary properties to widely used thiophene analogues, such as stronger donor characteristics, high crystallinity, and a decreased HOMO-LUMO gap. The dithiolodithiole (C4S4) motif is readily synthetically accessible using catalytic processes, and both the molecular and bulk properties of materials based on this building block can be tuned by judicious choice of substituents.
T. M. Swager, Azzarelli, J. M., and White, K. R., "Selective detection of alkenes or alkynes", PCT Int. Appl., Massachusetts Institute of Technology, USA, 49 pp., 2014.
A detector can detect an analyte including a carbon-carbon multiple bond moiety that is capable of undergoing a Diels-Alder reaction with a heteroaromatic compound having an extrudable group. The detector can detect, differentiate, and quantify ethylene. The detector can be a color-based detector, a fluorescence-based detector, or a resistivity-based detector.
T. M. Swager, Bulović, V., Han, G. D., and Andrew, T. L., "Functionalized nanostructures and related devices", U.S. Pat. Appl. Publ., Massachusetts Institute of Technology, USA, 28 pp., 2014.
Embodiments described herein provide functionalized carbon nanostructures for use in various devices, including photovoltaic devices (e.g., solar cells). In some embodiments, carbon nanostructures substituted with at least one cyclobutyl and/or cyclobutenyl group are provided. Devices including such materials may exhibit increased efficiency, increased open circuit potential, high electron/hole mobility, and/or low electrical resistance.
Y. Zhao, Markopoulos, G., and Swager, T. M., "19F NMR fingerprints: identification of neutral organic compounds in a molecular container.", Journal of the American Chemical Society, vol. 136, pp. 10683–10690, 2014.
Improved methods for quickly identifying neutral organic compounds and differentiating analytes with similar chemical structures are widely needed. We report a new approach to effectively "fingerprint" neutral organic molecules by using 19F NMR and molecular containers. The encapsulation of analytes induces characteristic up- or downfield shifts of 19F resonances that can be used as multidimensional parameters to fingerprint each analyte. The strategy can be achieved either with an array of fluorinated receptors or by incorporating multiple nonequivalent fluorine atoms in a single receptor. Spatial proximity of the analyte to the 19F is important to induce the most pronounced NMR shifts and is crucial in the differentiation of analytes with similar structures. This new scheme allows for the precise and simultaneous identification of multiple analytes in a complex mixture.
D. den Boer, Han, G. D., and Swager, T. M., "Templating fullerenes by domain boundaries of a nanoporous network", Langmuir, vol. 30, pp. 762–767, 2014.
We present a new templating approach that combines the templating properties of nanoporous networks with the dynamic properties and the lattice mismatch of domain boundaries. This templating approach allows for the inclusion of guests with different sizes without the need for a strict molecular design to tailor the nanoporous network. With this approach, nonperiodic patterns of functional molecules can be formed and studied. We show that domain boundaries in a trimesic acid network are preferred over pores within the network as adsorption sites for fullerenes by a factor of 100-200. Pristine fullerenes of different sizes and functionalized fullerenes were templated in this way.
S. L. Buchwald, Swager, T. M., Teverovskiy, G., and Su, M., "Organic conductive materials including iptycene-based structure and devices", PCT Int. Appl., Massachusetts Institute of Technology, USA, 44 pp., 2014.
Embodiments described herein relate to compositions including iptycene-based structures and extended iptycene structures. The iptycene-based compound comprises an iptycene core and at least one optionally substituted heterocyclyl or optionally substituted heteroaryl moiety rigidly bonded to the iptycene core, wherein the optionally substituted heterocyclyl or optionally substituted heteroaryl moiety defines at least a portion of the iptycene core. In some embodiments, the compositions may be useful in organic light-emitting diodes (OLEDs), organic photovoltaics, and other devices.
T. V. Can, Caporini, M. A., Mentink-Vigier, F., Corzilius, B., Walish, J. J., Rosay, M., Maas, W. E., Baldus, M., Vega, S., Swager, T. M., and Griffin, R. G., "Overhauser effects in insulating solids.", Journal of Chemical Physics, vol. 141, pp. 064202/1–064202/8, 2014.
We report magic angle spinning, dynamic nuclear polarization (DNP) experiments at magnetic fields of 9.4 T, 14.1 T, and 18.8 T using the narrow-line polarizing agents 1,3-bisdiphenylene-2-phenylallyl (BDPA) dispersed in polystyrene, and sulfonated BDPA (SA-BDPA) and trityl OX063 in glassy glycerol/water matrices. The 1H DNP enhancement field profiles of the BDPA radicals exhibit a significant DNP Overhauser effect (OE) as well as a solid effect (SE), despite the fact that these samples are insulating solids. In contrast, trityl exhibits only an SE enhancement. The data suggest that the appearance of the OE is due to rather strong electron-nuclear hyperfine couplings present in BDPA and SA-BDPA, which are absent in trityl and perdeuterated BDPA (d21-BDPA). In addition, and in contrast to other DNP mechanisms such as the solid effect or cross effect, the experimental data suggest that the OE in non-conducting solids scales favorably with magnetic field, increasing in magnitude in going from 5 T, to 9.4 T, to 14.1 T, and to 18.8 T. Simulations using a model two-spin system, consisting of an electron hyperfine-coupled to a 1H, reproduce the essential features of the field profiles and indicate that the OE in these samples originates from the zero- and double-quantum cross relaxation induced by fluctuating hyperfine interactions between the intramolecularly delocalized unpaired electrons and their neighboring nuclei, and that the size of these hyperfine couplings is crucial to the magnitude of the enhancements. Microwave power-dependent studies show that the OE saturates at considerably lower power levels than the solid effect in the same samples. Our results provide new insights into the mechanism of the Overhauser effect and also provide a new approach to performing DNP experiments in chemical, biophysical, and physical systems at high magnetic fields.
K. M. Frazier, Mirica, K. A., Walish, J. J., and Swager, T. M., "Fully-drawn carbon-based chemical sensors on organic and inorganic surfaces.", Lab on a Chip, vol. 14, pp. 4059-4066, 2014.
Mechanical abrasion is an extremely simple, rapid, and low-cost method for the deposition of carbon-based materials onto a substrate. However, the method is limited in throughput, precision, and surface compatibility for drawing conductive pathways. Selective patterning of surfaces using laser etching can provide substantial improvements that address these limitations of the abrasive deposition of carbon-based materials. This study demonstrates the successful on-demand fabrication of fully drawn chemical sensors on a wide variety of substrates (e.g., weighing paper, polymethyl methacrylate, silicon, and adhesive tape) using single-walled carbon nanotubes (SWCNTs) as sensing materials and graphite as electrodes. Mechanical mixing of SWCNTs with solid or liquid selectors yields sensors that can detect and discriminate parts-per-million (ppm) quantities of various nitrogen-containing vapors (pyridine, aniline, triethylamine).
R. R. Parkhurst and Swager, T. M., "Antiaromaticity in Nonbenzenoid Oligoarenes and Ladder Polymers", Topics in Current Chemistry, 2014.
Polycyclic aromatic hydrocarbons (PAHs) and fully conjugated ladder polymers are leading candidates for organic electronics, as their inherent conformational rigidity encourages electron delocalization. Many of these systems consist of fused benzenoid or heterocyclic aromatic rings. Less frequently, however, PAHs are reported with character that alternates between the aromaticity of benzene fragments and the antiaromaticity of a nonbenzenoid moiety. This chapter focuses on recent work on the theory, synthesis, and properties of two such systems: [N]phenylenes, which contain 4π-electron cyclobutadienoid character, and dibenzo[a,e]pentalenes, which contain 8π-electron pentalenoid character.
S. Rochat and Swager, T. M., "Fluorescence Sensing of Amine Vapors Using a Cationic Conjugated Polymer Combined with Various Anions.", Angewandte Chemie, International Edition, vol. 53, pp. 9792–9796, 2014.
A series of conjugated cationic polymers, differentiated only by their accompanying counter-anions, was prepared and characterized. The choice of counter-anion (CA) was found to drastically impact the solubility of the polymers and their optical properties in solution and in the solid state. Fluorescent polymer thin films were instantaneously quenched by volatile amines in the gas phase at low-ppm concentrations, and a mini-array with CAs as the variable elements was able to differentiate amines with good fidelity.
D. den Boer, Han, G. D., and Swager, T. M., "Templating fullerenes by domain boundaries of a nanoporous network.", Langmuir, vol. 30, pp. 762–7, 2014.
We present a new templating approach that combines the templating properties of nanoporous networks with the dynamic properties and the lattice mismatch of domain boundaries. This templating approach allows for the inclusion of guests with different sizes without the need for a strict molecular design to tailor the nanoporous network. With this approach, nonperiodic patterns of functional molecules can be formed and studied. We show that domain boundaries in a trimesic acid network are preferred over pores within the network as adsorption sites for fullerenes by a factor of 100-200. Pristine fullerenes of different sizes and functionalized fullerenes were templated in this way.
J. B. Goods, Sydlik, S. A., Walish, J. J., and Swager, T. M., "Phosphate functionalized graphene with tunable mechanical properties.", Advanced Materials, vol. 26, pp. 718–23, 2014.
The synthesis of a covalently modified graphene oxide derivative with exceptional and tunable compressive strength is reported. Treatment of graphene oxide with triethyl phosphite in the presence of LiBr produces monolithic structures composed of lithium phosphate oligomers tethered to graphene through covalent phosphonate linkages. Variation of both the phosphate content and the associated cation produces materials of varying compressive strength and elasticity.
M. K. Kiesewetter, Michaelis, V. K., Walish, J. J., Griffin, R. G., and Swager, T. M., "High field dynamic nuclear polarization NMR with surfactant sheltered biradicals.", The Journal of Physical Chemistry B, vol. 118, pp. 1825–30, 2014.
We illustrate the ability to place a water-insoluble biradical, bTbk, into a glycerol/water matrix with the assistance of a surfactant, sodium octyl sulfate (SOS). This surfactant approach enables bTbk, which has favorable electron-electron dipolar coupling, to be used for dynamic nuclear polarization (DNP) nuclear magnetic resonance (NMR) experiments in frozen, glassy, aqueous media. Nuclear Overhauser enhancement (NOE) and paramagnetic relaxation enhancement (PRE) experiments were conducted to determine the distribution of urea and several biradicals within the SOS macromolecular assembly. We also demonstrate that SOS assemblies are an effective means of creating mixed biradicals through an assembly process.
M. Krikorian, Liu, S., and Swager, T. M., "Columnar Liquid Crystallinity and Mechanochromism in Cationic Platinum(II) Complexes.", Journal of the American Chemical Society, vol. 136, pp. 2952–2955, 2014.
Cationic square planar Pt(II) complexes are reported with high degrees of intermolecular association. These complexes display thermotropic columnar liquid crystalline behavior in spite of having only a single side chain. Crystals undergo mechanochromic transformations that can be reversed with solvent.
V. K. Michaelis, Ong, T.-C., Kiesewetter, M. K., Frantz, D. K., Walish, J. J., Ravera, E., Luchinat, C., Swager, T. M., and Griffin, R. G., "Topical Developments in High-Field Dynamic Nuclear Polarization.", Israel Journal of Chemistry, Ahead of Print, 2014.
We report our recent efforts directed at improving high-field dynamic nuclear polarization (DNP) experiments. We investigated a series of thiourea nitroxide radicals and the associated DNP enhancements, ranging from ε = 25 to 82, which demonstrate the impact of molecular structure on performance. We directly polarized low-gamma nuclei, including 13C, 2H, and 17O, by the cross-effect mechanism using trityl radicals as the polarization agent. We discuss a variety of sample preparation techniques for DNP, with emphasis on the benefits of methods that do not use a glass-forming cryoprotecting matrix. Lastly, we describe a corrugated waveguide for use in a 700 MHz/460 GHz DNP system that improves microwave delivery and increases enhancements by up to 50%.
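For reference, the DNP enhancement ε quoted in this and other DNP entries in this list is conventionally defined as the ratio of the NMR signal intensity measured with and without microwave irradiation; in LaTeX notation (a standard textbook definition supplied here for clarity, not wording from the abstract):

```latex
\varepsilon = \frac{S_{\mathrm{MW\ on}}}{S_{\mathrm{MW\ off}}}
```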
K. Saetia, Schnorr, J. M., Mannarino, M. M., Kim, S. Y., Rutledge, G. C., Swager, T. M., and Hammond, P. T., "Spray-Layer-by-Layer Carbon Nanotube/Electrospun Fiber Electrodes for Flexible Chemiresistive Sensor Applications.", Advanced Functional Materials, vol. 24, pp. 492–502, 2014.
Development of a versatile method for incorporating conductive materials into textiles could enable advances in wearable electronics and smart textiles. One area of critical importance is the detection of chemicals in the environment for security and industrial process monitoring. Here, the fabrication of a flexible sensor material based on functionalized multi-walled carbon nanotube (MWNT) films on a porous electrospun fiber mat for real-time detection of a nerve agent simulant is reported. The material is constructed by layer-by-layer (LbL) assembly of MWNTs with opposite charges, creating multilayer films of MWNTs without binder. The vacuum-assisted spray-LbL process enables conformal coatings of nanostructured MWNT films on individual electrospun fibers throughout the bulk of the mat with controlled loading and electrical conductivity. A thiourea-based receptor is covalently attached to the primary amine groups on the MWNT films to enhance the sensing response to dimethyl methylphosphonate (DMMP), a simulant for sarin nerve agent. Chemiresistive sensors based on the engineered textiles display reversible responses and detection limits for DMMP as low as 10 ppb in the aqueous phase and 5 ppm in the vapor phase. This fabrication technique provides a versatile and easily scalable strategy for incorporating conformal MWNT films into three-dimensional substrates for numerous applications.
D. den Boer, Krikorian, M., Esser, B., and Swager, T. M., "STM Study of Gold(I) Pyrazolates: Distinct Morphologies, Layer Evolution, and Cooperative Dynamics", The Journal of Physical Chemistry C, vol. 117, pp. 8290–8298, 2013.
The authors describe the first study of trinuclear gold(I) pyrazolates at the molecular level by time-dependent scanning tunneling microscopy (STM). At the graphite/1-octanoic acid interface, dodecyl-functionalized gold pyrazolates formed concentration-controlled morphologies. The authors found two types of monomeric packing and one dimeric type, with two trinuclear gold pyrazolates next to each other on the surface. For an octadecyl-functionalized derivative, all studied concentrations resulted in a dimeric morphology. However, different concentrations led to different transient states during the layer evolution. At low concentrations, a transient monomeric state was present with the alkyl chains in a gauche conformation that subsequently converted to a more optimized anti conformation. At higher concentrations, a less stable "line" polymorph was observed. The confinement of the molecules to the surface led to cooperative dynamics, in which two molecules in a dimer moved as if they were one particle. Furthermore, at a higher level of cooperativity, the rotation of one dimer appears to induce rotations in coupled neighboring dimers.
K. M. Frazier and Swager, T. M., "Robust Cyclohexanone Selective Chemiresistors Based on Single-Walled Carbon Nanotubes.", Analytical Chemistry (Washington, DC, United States), vol. 85, pp. 7154–7158, 2013.
Functionalized single-walled carbon nanotube (SWCNT)-based chemiresistors are reported as highly robust and sensitive gas sensors for the selective detection of cyclohexanone, a target analyte for explosive detection. The trifunctional selector has three important properties: it noncovalently functionalizes SWCNTs through cofacial π-π interactions, it binds cyclohexanone via hydrogen bonding (mechanistic studies were conducted), and it improves the overall robustness of SWCNT-based chemiresistors (e.g., toward humidity and heat). The sensors produced reversible and reproducible responses in less than 30 s to 10 ppm of cyclohexanone and displayed an average theoretical limit of detection (LOD) of 5 ppm.
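For context on how a "theoretical limit of detection" of this sort is typically computed: the standard estimate is LOD = 3σ/m, where σ is the standard deviation of the baseline signal and m is the calibration slope. The sketch below runs that arithmetic on invented numbers; the responses, noise level, and resulting LOD are illustrative assumptions, not values from the paper.

```python
import statistics

# Hypothetical calibration data: concentration (ppm) vs. chemiresistor
# response (-ΔG/G0, %). All numbers are illustrative, not from the paper.
concs = [10.0, 20.0, 30.0, 40.0]
responses = [0.8, 1.6, 2.5, 3.3]

# Least-squares slope of the calibration line (sensitivity, % per ppm)
mean_c = statistics.mean(concs)
mean_r = statistics.mean(responses)
slope = sum((c - mean_c) * (r - mean_r) for c, r in zip(concs, responses)) / \
        sum((c - mean_c) ** 2 for c in concs)

sigma_baseline = 0.14  # assumed std. dev. of the blank (baseline) signal, %
lod = 3 * sigma_baseline / slope  # conventional 3-sigma limit of detection
print(f"sensitivity = {slope:.3f} %/ppm, LOD ≈ {lod:.1f} ppm")  # ≈ 5.0 ppm
```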
C. N. McEwen, Ligler, F. S., and Swager, T. M., "Chemical and biological detection.", Chemical Society Reviews, vol. 42, pp. 8581–3, 2013.
K. L. Neal, Shakerdge, N. B., Hou, S. S., Klunk, W. E., Mathis, C. A., Nesterov, E. E., Swager, T. M., McLean, P. J., and Bacskai, B. J., "Development and screening of contrast agents for in vivo imaging of Parkinson's disease", Molecular Imaging and Biology, vol. 15, pp. 585–595, 2013.
PURPOSE: The goal was to identify molecular imaging probes that would enter the brain, selectively bind to Parkinson's disease (PD) pathology, and be detectable with one or more imaging modalities. PROCEDURE: A library of organic compounds was screened for the ability to bind hallmark pathology in human Parkinson's and Alzheimer's disease tissue, alpha-synuclein oligomers and inclusions in two cell culture models, and alpha-synuclein aggregates in cortical neurons of a transgenic mouse model. Finally, compounds were tested for blood-brain barrier permeability using intravital microscopy. RESULTS: Several lead compounds were identified that bound the human PD pathology, and some showed selectivity over Alzheimer's pathology. The cell culture models and transgenic mouse models that exhibit alpha-synuclein aggregation did not prove predictive for ligand binding. The compounds had favorable physicochemical properties, and several were brain permeable. CONCLUSIONS: Future experiments will focus on more extensive evaluation of the lead compounds as PET ligands for clinical imaging of PD pathology.
Q. Z. Ni, Daviso, E., Can, T. V., Markhasin, E., Jawla, S. K., Swager, T. M., Temkin, R. J., Herzfeld, J., and Griffin, R. G., "High frequency dynamic nuclear polarization.", Accounts of Chemical Research, vol. 46, pp. 1933–41, 2013.
During the three decades 1980-2010, magic angle spinning (MAS) NMR developed into the method of choice to examine many chemical, physical, and biological problems. In particular, a variety of dipolar recoupling methods to measure distances and torsion angles can now constrain molecular structures to high resolution. However, applications are often limited by the low sensitivity of the experiments, due in large part to the necessity of observing spectra of low-γ nuclei such as the I = 1/2 species (13)C or (15)N. The difficulty is still greater when quadrupolar nuclei, such as (17)O or (27)Al, are involved. This problem has stimulated efforts to increase the sensitivity of MAS experiments. A particularly powerful approach is dynamic nuclear polarization (DNP) which takes advantage of the higher equilibrium polarization of electrons (which conventionally manifests in the great sensitivity advantage of EPR over NMR). In DNP, the sample is doped with a stable paramagnetic polarizing agent and irradiated with microwaves to transfer the high polarization in the electron spin reservoir to the nuclei of interest. The idea was first explored by Overhauser and Slichter in 1953. However, these experiments were carried out on static samples, at magnetic fields that are low by current standards. To be implemented in contemporary MAS NMR experiments, DNP requires microwave sources operating in the subterahertz regime, roughly 150-660 GHz, and cryogenic MAS probes. In addition, improvements were required in the polarizing agents, because the high concentrations of conventional radicals that are required to produce significant enhancements compromise spectral resolution. In the last two decades, scientific and technical advances have addressed these problems and brought DNP to the point where it is achieving wide applicability. These advances include the development of high frequency gyrotron microwave sources operating in the subterahertz frequency range. In addition, low temperature MAS probes were developed that permit in situ microwave irradiation of the samples. And, finally, biradical polarizing agents were developed that increased the efficiency of DNP experiments by factors of ∼4 at considerably lower paramagnet concentrations. Collectively, these developments have made it possible to apply DNP on a routine basis to a number of different scientific endeavors, most prominently in the biological and material sciences. This Account reviews these developments, including the primary mechanisms used to transfer polarization in high frequency DNP, and the current choice of microwave sources and biradical polarizing agents. In addition, we illustrate the utility of the technique with a description of applications to membrane and amyloid proteins that emphasizes the unique structural information that is available in these two cases.
S. Rochat and Swager, T. M., "Conjugated Amplifying Polymers for Optical Sensing Applications.", ACS Applied Materials & Interfaces, vol. 5, pp. 4488–4502, 2013.
A review. Thanks to their unique optical and electrochemical properties, conjugated polymers have attracted considerable attention over the last two decades and have resulted in numerous technological innovations. In particular, their implementation in sensing schemes and devices has been widely studied and has produced a multitude of sensory systems and transduction mechanisms. Conjugated polymers possess numerous attractive features that make them particularly suitable for a broad variety of sensing tasks. They display sensory signal amplification (compared to their small-molecule counterparts), and their structures can easily be tailored to adjust solubility, absorption/emission wavelengths, energy offsets for excited-state electron transfer, and/or for use in solution or in the solid state. This versatility has made conjugated polymers a fluorescence sensory platform of choice in recent years. In this review, the authors highlight a variety of conjugated polymer-based sensory mechanisms together with selected examples from the recent literature.
S. Rochat and Swager, T. M., "Water-soluble cationic conjugated polymers: response to electron-rich bioanalytes.", Journal of the American Chemical Society, vol. 135, pp. 17703–6, 2013.
We report the concise synthesis of a symmetrical monomer that provides a head-to-head pyridine building block for the preparation of cationic conjugated polymers. The obtained poly(pyridinium-phenylene) polymers display appealing properties such as high electron affinity, charge transport upon n-doping, and optical response to electron-donating analytes. A simple assay for the optical detection of low-micromolar amounts of a variety of analytes in aqueous solution was developed. In particular, caffeine could be measured with a 25 μM detection limit. The reported polymers are also suitable for layer-by-layer film formation.
J. M. Schnorr, van der Zwaag, D., Walish, J. J., Weizmann, Y., and Swager, T. M., "Sensory Arrays of Covalently Functionalized Single-Walled Carbon Nanotubes for Explosive Detection.", Advanced Functional Materials, vol. 23, pp. 5285–5291, 2013.
Chemiresistive sensor arrays for cyclohexanone and nitromethane are fabricated using single-walled carbon nanotubes (SWCNTs) that are covalently functionalized with urea-, thiourea-, and squaramide-containing selector units. Based on initial sensing results and 1H NMR binding studies, the most promising selectors are chosen and further optimized. These optimized selectors are attached to SWCNTs and tested simultaneously in a sensor array. The sensors show a very high level of reproducibility between measurements with the same sensor and across different sensors of the same type. Furthermore, the sensors show promising long-term stability, which renders them suitable for practical applications.
C. Simocko, Yang, Y., Swager, T. M., and Wagener, K. B., "Metathesis Step-Growth Polymerizations in Ionic Liquid.", ACS Macro Letters, vol. 2, pp. 1061–1064, 2013.
Metathesis step-growth polymerizations in ionic liquids (ILs) were explored to take advantage of the high boiling points of ILs, thereby permitting the use of low pressures at high temperatures. Optimization reactions showed that high polymers form efficiently using small amounts of catalyst and short reaction times. For example, high-molecular-weight main-chain triptycene polymers with high triptycene incorporation were synthesized. This new methodology is applicable to various metathesis reactions that require the removal of volatile byproducts as a driving force, including acyclic diene metathesis (ADMET).
A. A. Smith, Corzilius, B., Haze, O., Swager, T. M., and Griffin, R. G., "Observation of strongly forbidden solid effect dynamic nuclear polarization transitions via electron-electron double resonance detected NMR.", The Journal of Chemical Physics, vol. 139, p. 214201, 2013.
We present electron paramagnetic resonance experiments in which solid effect dynamic nuclear polarization transitions were observed indirectly via polarization loss on the electron. This indirect observation allows characterization of the dynamic nuclear polarization (DNP) process close to the electron. Frequency profiles of the electron-detected solid effect obtained using trityl radical showed intense saturation of the electron at the usual solid effect condition, which involves a single electron and nucleus. However, higher-order solid effect transitions involving two, three, or four nuclei were also observed with surprising intensity, although these transitions did not lead to bulk nuclear polarization, suggesting that higher-order transitions are important primarily in the transfer of polarization to nuclei near the electron. Similar results were obtained for the SA-BDPA radical, for which strong electron-nuclear couplings produced splittings in the spectrum of the indirectly observed solid effect conditions. Observation of higher-order solid effect transitions supports recent studies of the solid effect and suggests that a multi-spin solid effect mechanism may play a major role in polarization transfer via DNP.
S. A. Sydlik and Swager, T. M., "Functional Graphenic Materials Via a Johnson-Claisen Rearrangement.", Advanced Functional Materials, vol. 23, pp. 1873–1882, 2013.
The hydroxyl functionalities in graphene oxide (GO), the vast majority of which must be allylic alcohols, have been subjected to Johnson-Claisen rearrangement conditions. Under these conditions, a [3,3] sigmatropic rearrangement after reaction with triethyl orthoacetate gives rise to an ester functional group attached to the graphitic framework via a robust C-C bond. This variation of the Claisen rearrangement offers unprecedented versatility for further functionalization while maintaining the desirable properties of unfunctionalized graphene. The resultant functional groups were found to withstand reductive treatments for the deoxygenation of graphene sheets, and a recovery of electronic conductivity is observed. The ester groups are easily saponified to carboxylic acids in situ under basic conditions to give water-soluble graphene. The ester functionality can be further reacted as is, or the carboxylic acid can easily be converted to the more reactive acid chloride. Subsequent amide formation yields up to one amide per 15 graphene carbons and increases the intergallery spacing up to 12.8 Å, suggesting the utility of this material in capacitors and in gas storage. Other functionalization schemes, which include the installation of terminal alkynes and dipolar cycloadditions, allow for the synthesis of a highly positively charged, water-soluble graphene. The highly negatively and positively charged graphenes (zeta potentials of -75 mV and +56 mV, respectively) were successfully used to build layer-by-layer (LbL) constructs.
Y. Zhao and Swager, T. M., "Detection and differentiation of neutral organic compounds by 19F NMR with a tungsten calix[4]arene imido complex.", Journal of the American Chemical Society, vol. 135, pp. 18770–3, 2013.
Fluorinated tungsten calix[4]arene imido complexes were synthesized and used as receptors to detect and differentiate neutral organic compounds. The binding of specific neutral organic molecules to the tungsten centers was found to induce an upfield shift of the resonance of the fluorine atom appended to the arylimido group, the extent of which is highly dependent on electronic and steric properties. We demonstrate that the specific bonding and size selectivity of the calix[4]arene tungsten-imido complex, combined with (19)F NMR spectroscopy, constitute a powerful new method for the analysis of complex mixtures.
E. Ahmed, Morton, S. W., Hammond, P. T., and Swager, T. M., "Fluorescent Multiblock π-Conjugated Polymer Nanoparticles for In Vivo Tumor Targeting.", Advanced Materials (Weinheim, Germany), vol. 25, pp. 4504–4510, 2013.
We have developed highly fluorescent multiblock conjugated polymer nanoparticles for bioimaging and in vivo tumor targeting. We have shown that folate-functionalized conjugated polymer nanoparticles exhibit preferential cell association and uptake in vitro compared to non-functionalized nanoparticles.
J. M. Batson and Swager, T. M., "Towards a perylene-containing nanohoop.", Synlett, vol. 24, pp. 2545–2549, 2013.
An octabenzo[12]cycloparaphenylene I (nanohoop) was prepared in four steps as a potential precursor to a perylene-like cycloparaphenylene. Addition of the lithium reagent derived from 1,4-dibromonaphthalene to 1,4-cyclohexanedione, O-methoxymethylation and isolation of the cis-dinaphthylcyclohexane, nickel-catalyzed macrocyclization, and acid- and microwave-mediated aromatization yielded I in 0.34% overall yield. The absorption and emission spectra of I and of its unaromatized precursor were obtained and compared.
N. Calisi, Giuliani, A., Alderighi, M., Schnorr, J. M., Swager, T. M., Di Fabio, F., and Pucci, A., "Factors affecting the dispersion of MWCNTs in electrically conducting SEBS nanocomposites.", European Polymer Journal, vol. 49, pp. 1471–1478, 2013.
The accessible concentration of exfoliated and undamaged multi-walled carbon nanotubes (MWCNTs) in polymer nanocomposites is an issue essential to the future of these materials. In this work, we report two methodologies directed at obtaining electrically conducting poly(styrene-b-(ethylene-co-butylene)-b-styrene) (SEBS) nanocomposites with different MWCNT contents. The first depends on the time modulation of the ultrasonication of toluene mixtures, whereas the second relies on the use of alkyl-functionalized MWCNTs (f-MWCNTs). UV-vis spectroscopy investigations and thermogravimetric analyses allowed the quantification of the exfoliated CNTs incorporated in the SEBS mixture. TEM micrographs showed that a prolonged sonication time (40 min) induced extensive MWCNT degradation (average length decreased by 40%), which affected the electrical conductivity of the nanocomposites. The f-MWCNTs appeared to be more effective in preparing SEBS nanocomposites owing to their higher dispersion efficiency, negligible nanotube degradation, and higher electrical conductivity. The temperature dependence of the resistance of the SEBS/MWCNT system was investigated in the range 20-60 °C to explore its potential for sensor development.
H. P. de Oliveira, Sydlik, S. A., and Swager, T. M., "Supercapacitors from Free-Standing Polypyrrole/Graphene Nanocomposites", The Journal of Physical Chemistry C, vol. 117, pp. 10270–10276, 2013.
Interfacial/in situ oxidative polymerization of polypyrrole in the presence of functionalized graphene sheets produces high-quality composites for supercapacitors, as analyzed by electrochemical impedance spectroscopy and cyclic voltammetry. The synergistic interaction induced by the growth of p-type polypyrrole on the surface of negatively charged, carboxylate-functionalized graphene sheets results in higher storage capacity than graphene-only or polymer-only films. The high conductivity of p-doped polypyrrole and the high surface area of graphene promote high charge accumulation in capacitors. The authors report the optimization of the relative concentration of carboxylate-functionalized graphene in the polypyrrole matrix to maximize the composite's capacitance at 277.8 F/g.
K. A. Mirica, Azzarelli, J. M., Weis, J. G., Schnorr, J. M., and Swager, T. M., "Rapid prototyping of carbon-based chemiresistive gas sensors on paper.", Proceedings of the National Academy of Sciences of the United States of America, vol. 110, pp. E3265–E3270, 2013.
Chemically functionalized carbon nanotubes (CNTs) are promising materials for the sensing of gases and volatile organic compounds. However, the poor solubility of carbon nanotubes hinders their chemical functionalization and the subsequent integration of these materials into devices. This manuscript describes a solvent-free procedure for the rapid prototyping of selective chemiresistors from CNTs and graphite on the surface of paper. This procedure enables the fabrication of functional gas sensors from commercially available starting materials in less than 15 min. The first step of the procedure involves the generation of solid composites of CNTs or graphite with small-molecule selectors, designed to interact with specific classes of gaseous analytes, by solvent-free mechanical mixing in a ball mill and subsequent compression. The second step involves the deposition of chemiresistive sensors by mechanical abrasion of these solid composites onto the surface of paper. Parallel fabrication of multiple chemiresistors from diverse composites rapidly generates cross-reactive arrays capable of sensing and differentiating gases and volatile organic compounds at part-per-million and part-per-thousand concentrations.
D. K. Frantz, Walish, J. J., and Swager, T. M., "Synthesis and Properties of the 5,10,15-Trimesityltruxen-5-yl Radical.", Organic Letters, vol. 15, pp. 4782–4785, 2013.
The synthesis of a long-lived, truxene-based radical that is highly delocalized and exhibits a narrow EPR absorption is reported. The radical is stable for multiple hours in solution exposed to air and persists for months in the solid state under inert gas. Characterization and properties are discussed.
G. D. Han, Collins, W. R., Andrew, T. L., Bulović, V., and Swager, T. M., "Cyclobutadiene-C60 Adducts: N-Type Materials for Organic Photovoltaic Cells with High VOC", Advanced Functional Materials, 2013.
New tetraalkylcyclobutadiene-C60 adducts are developed via Diels-Alder cycloaddition of C60 with in situ generated cyclobutadienes. The cofacial π-orbital interactions between the fullerene orbitals and the cyclobutene are shown to decrease the electron affinity and thereby raise the lowest unoccupied molecular orbital (LUMO) energy level of C60 significantly (ca. 100 and 300 meV for mono- and bisadducts, respectively). These variations in the LUMO levels of the fullerene can be used to generate higher open-circuit voltages (VOC) in bulk heterojunction polymer solar cells. The tetramethylcyclobutadiene-C60 monoadduct displays an open-circuit voltage (0.61 V) and a power conversion efficiency (2.49%) comparable to those of the widely used P3HT/PCBM (poly(3-hexylthiophene)/[6,6]-phenyl-C61-butyric acid methyl ester) composite (0.58 V and 2.57%, respectively). The role of the cofacial π-orbital interactions between C60 and the attached cyclobutene group was probed chemically by epoxidation of the cyclobutene moiety and theoretically through density functional theory calculations. The electrochemical, photophysical, and thermal properties of the newly synthesized fullerene derivatives support the proposed effect of functionalization on electron affinities and photovoltaic performance.
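The open-circuit voltage and power conversion efficiency figures above are linked by the standard photovoltaic relation PCE = Jsc · Voc · FF / Pin. Below is a minimal sketch of that arithmetic, assuming AM1.5G illumination (100 mW/cm²) and illustrative Jsc and fill-factor values; only the 0.61 V Voc comes from the abstract.

```python
def power_conversion_efficiency(jsc_ma_cm2, voc_v, ff, p_in_mw_cm2=100.0):
    """Standard PCE = (Jsc * Voc * FF) / Pin; Pin defaults to AM1.5G, 100 mW/cm^2."""
    return jsc_ma_cm2 * voc_v * ff / p_in_mw_cm2

# Hypothetical Jsc (mA/cm^2) and fill factor; Voc = 0.61 V as reported above.
pce = power_conversion_efficiency(jsc_ma_cm2=10.5, voc_v=0.61, ff=0.65)
print(f"PCE ≈ {pce:.1%}")  # -> PCE ≈ 4.2%
```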
S. A. Sydlik, Lee, J.-H., Walish, J. J., Thomas, E. L., and Swager, T. M., "Epoxy functionalized multi-walled carbon nanotubes for improved adhesives", Carbon, Ahead of Print, 2013.
Three different types of epoxy-functionalized multi-walled carbon nanotubes (EpCNTs) were prepared by multiple covalent functionalization methods. The EpCNTs were characterized by thermogravimetric analysis (TGA), IR spectroscopy (FTIR), and Raman spectroscopy to confirm covalent functionalization. The effect of the different chemistries on the adhesive properties was compared to a neat commercial epoxy (Hexion formulation 4007) using functionalized and unfunctionalized multi-walled carbon nanotubes (MWCNTs) at 0.5, 1, 2, 3, 5, and 10 wt%. It was found that an EpCNT at 1 wt% increased the lap shear strength, tested using American Society for Testing and Materials standard test D1002, by 36% over the unfilled epoxy formulation and by 27% over a 1 wt% unmodified MWCNT control sample. SEM images revealed a change in fracture surface morphology with the incorporation of EpCNT and a deflection of the crack fronts at the sites of embedded CNTs as the mechanism accounting for the increased adhesive strength. Rheological studies showed non-linear viscosity, and DSC cure studies showed altered cure kinetics with increased CNT concentration, effects that were more pronounced for EpCNT.
S. A. Sydlik, Delgado, P. A., Inomata, S., VanVeller, B., Yang, Y., Swager, T. M., and Wagener, K. B., "Triptycene-containing polyetherolefins via acyclic diene metathesis polymerization", Journal of Polymer Science Part A: Polymer Chemistry, vol. 51, pp. 1695–1706, 2013.
Several new triptycene-containing polyetherolefins were synthesized via acyclic diene metathesis (ADMET) polymerization. The well-established mechanism, high selectivity and specificity, mild reaction conditions, and well-defined end-groups make ADMET polymerization a good choice for studying systematic variations in polymer structure. Two types of triptycene-based monomers with varying connectivities were used in the synthesis of homopolymers, block copolymers, and random copolymers. In this way, the influence of the triptycene architecture and its concentration in the polymer backbone on the thermal behavior of the polymers was studied. Inclusion of increasing amounts of triptycene was found to increase the glass transition temperature, from 44 °C in polyoctenamer to 59 °C in one of the hydrogenated triptycene homopolymers (H-PT2). Varying the amounts and orientations of triptycene was found to increase the stiffness (H-PT1), toughness (PT11-b-PO1), and ductility (PT11-ran-PO3) of the polymer at room temperature.
M. L. Gupta, Sydlik, S. A., Schnorr, J. M., Woo, D. J., Osswald, S., Swager, T. M., and Raghavan, D., "The effect of mixing methods on the dispersion of carbon nanotubes during the solvent-free processing of multiwalled carbon nanotube/epoxy composites", Journal of Polymer Science Part B: Polymer Physics, vol. 51, pp. 410–420, 2013.
Several solvent-free processing methods to disperse multiwalled carbon nanotubes (MWCNTs) in a bisphenol F-based epoxy resin were investigated, including the use of a microfluidizer (MF), a planetary shear mixer (PSM), ultrasonication (US), and combinations thereof. The processed mixture was cured with diethyl toluene diamine. Three complementary techniques were used to characterize the dispersion of the MWCNTs in cured composite samples: optical microscopy, micro-Raman spectroscopy, and scanning electron microscopy (SEM). For the MF + PSM sample, optical micrographs and Raman images showed reduced agglomeration and a homogeneous distribution of MWCNTs in the epoxy matrix. SEM analysis of fractured specimens after tensile testing revealed breakage of nanotubes along the fracture surface of the composite. A comparison of the MWCNT dispersion in the epoxy samples processed using the different methods showed that a combination of MF and PSM processing yields a more homogeneous sample than the PSM or US + PSM processed samples. Mechanical testing of the composites showed about a 15% improvement in the tensile strength of samples processed by the MF + PSM method over the other methods. Thermogravimetric analysis (TGA) results showed a small decrease in the onset degradation temperature for poorly dispersed samples produced by PSM compared with the well-mixed samples (MF + PSM). These results strongly suggest that the MF + PSM processing method yields better-dispersed and stronger MWCNT/epoxy composites.
J. R. Cox, Simpson, J. H., and Swager, T. M., "Photoalignment layers for liquid crystals from the di-π-methane rearrangement.", Journal of the American Chemical Society, vol. 135, pp. 640–3, 2013.
Photoalignment of nematic liquid crystals is demonstrated using a di-$π$-methane rearrangement of a designed polymer. The alignment mechanism makes use of the strong coupling of the liquid crystal directors to dibenzobarrelene groups. The large structural changes that accompany photoisomerization effectively passivate segments of the polymer, allowing the remaining dibenzobarrelene groups to dominate the director alignment. Photoisomerization requires triplet sensitization, and the polymer was designed to have a uniaxially fixed rigid structure and rapid triplet energy transfer from the proximate benzophenone units to the dibenzobarrelene groups. The isomerization was observed to be regiospecific, and thin films showed alignment.
V. K. Michaelis, Smith, A. A., Corzilius, B., Haze, O., Swager, T. M., and Griffin, R. G., "High-field 13C dynamic nuclear polarization with a radical mixture.", Journal of the American Chemical Society, vol. 135, pp. 2935–8, 2013.
We report direct (13)C dynamic nuclear polarization at 5 T under magic-angle spinning (MAS) at 82 K using a mixture of monoradicals with narrow EPR linewidths. We show the importance of optimizing both EPR linewidth and electron relaxation times by studying direct DNP of (13)C using SA-BDPA and trityl radical, and achieve (13)C enhancements above 600. This new approach may be best suited for dissolution DNP and for studies of (1)H depleted biological and other nonprotonated solids.
T.-C. Ong, Mak-Jurkauskas, M. L., Walish, J. J., Michaelis, V. K., Corzilius, B., Smith, A. A., Clausen, A. M., Cheetham, J. C., Swager, T. M., and Griffin, R. G., "Solvent-Free Dynamic Nuclear Polarization of Amorphous and Crystalline ortho-Terphenyl.", The Journal of Physical Chemistry B, vol. 117, pp. 3040–6, 2013.
Dynamic nuclear polarization (DNP) of amorphous and crystalline ortho-terphenyl (OTP) in the absence of glass-forming agents is presented in order to gauge the feasibility of applying DNP to pharmaceutical solid-state nuclear magnetic resonance experiments and to study the effect of intermolecular structure, or lack thereof, on the DNP enhancement. By way of (1)H-(13)C cross-polarization, we obtained a DNP enhancement (ε) of 58 for 95% deuterated OTP in the amorphous state using the biradical bis-TEMPO terephthalate (bTtereph), and an ε of 36 in the crystalline state. Measurements of the (1)H T1 and electron paramagnetic resonance experiments showed that the crystallization process led to phase separation of the polarizing agent, creating an inhomogeneous distribution of radicals within the sample. Consequently, the effective radical concentration was decreased in the bulk OTP phase, and long-range (1)H-(1)H spin diffusion was the main polarization propagation mechanism. Preliminary DNP experiments with the glass-forming anti-inflammatory drug indomethacin showed promising results, and further studies are underway to prepare DNP samples using pharmaceutical techniques.
C. Cordovilla and Swager, T. M., "Strain release in organic photonic nanoparticles for protease sensing.", Journal of the American Chemical Society, vol. 134, pp. 6932–5, 2012.
Proteases are overexpressed in most cancers, and proteolytic activity has been shown to be a viable marker for cancer imaging in vivo. Herein, we describe the synthesis of luminescence-quenched shell-cross-linked nanoparticles as photonic nanoprobes for protease sensing. The protease-sensing scheme is based on a "turn-on" mechanism in which the protease cleaves the peptide cross-linkers of the fluorescence-quenched shell-cross-linked NP (OFF state), leading to a highly emissive non-cross-linked NP (ON state). The cross-linked particles can be strained by exposure to a good solvent, and proteolysis allows for particle expansion (swelling) and a recovery of the luminescence.
Y. L. Zhong and Swager, T. M., "Enhanced electrochemical expansion of graphite for in situ electrochemical functionalization.", Journal of the American Chemical Society, vol. 134, pp. 17896–9, 2012.
An all-electrochemical route to functionalized graphene directly from a graphite electrode is described herein, obviating the need for defect-inducing oxidative or prolonged sonication treatments. Enhanced electrochemical expansion of graphite is achieved by sequential treatment, beginning with the established method of expansion by electrolysis in a Li(+)-containing electrolyte and continuing with the much larger tetra-n-butylammonium cation. The result is a hyperexpansion of the graphite basal planes. As a demonstration of the utility of this method, we successfully performed a subsequent in situ electrochemical diazonium functionalization of the hyperexpanded graphite basal planes to give functional graphene sheets. This potential-controlled process is more effective than chemical processes and also provides a means of controlling the degree of functionalization. We have further demonstrated that the functionalized graphene can be converted to a pristine, low-defect form via laser ablation of the functional groups. As a result, this method presents a potentially scalable approach for graphene circuit patterning.
O. Haze, Corzilius, B., Smith, A. A., Griffin, R. G., and Swager, T. M., "Water-soluble narrow-line radicals for dynamic nuclear polarization.", Journal of the American Chemical Society, vol. 134, pp. 14287–90, 2012.
The synthesis of air-stable, highly water-soluble organic radicals containing a 1,3-bis(diphenylene)-2-phenylallyl (BDPA) core is reported. A sulfonated derivative, SA-BDPA, retains the narrow electron paramagnetic resonance linewidth (<30 MHz at 5 T) of the parent BDPA in highly concentrated glycerol/water solutions (40 mM), which enables its use as polarizing agent for solid effect dynamic nuclear polarization (SE DNP). A sensitivity enhancement of 110 was obtained in high-field magic-angle-spinning (MAS) NMR experiments. The ease of synthesis and high maximum enhancements obtained with the BDPA-based radicals constitute a major advance over the trityl-type narrow-line polarization agents.
M. Dionisio, Schnorr, J. M., Michaelis, V. K., Griffin, R. G., Swager, T. M., and Dalcanale, E., "Cavitand-functionalized SWCNTs for N-methylammonium detection.", Journal of the American Chemical Society, vol. 134, pp. 6540–3, 2012.
Single-walled carbon nanotubes (SWCNTs) have been functionalized with highly selective tetraphosphonate cavitand receptors. The binding of charged N-methylammonium species to the functionalized SWCNTs was analyzed by X-ray photoelectron spectroscopy and confirmed by (31)P MAS NMR spectroscopy. The cavitand-functionalized SWCNTs were shown to function as chemiresistive sensory materials for the detection of sarcosine and its ethyl ester hydrochloride in water with high selectivity at concentrations as low as 0.02 mM. Exposure to sarcosine and its derivative resulted in an increased conductance, in contrast to a decreased conductance response observed for potential interferents such as the structurally related glycine ethyl ester hydrochloride.
B. Bonillo and Swager, T. M., "Chain-growth polymerization of 2-chlorothiophenes promoted by Lewis acids.", Journal of the American Chemical Society, vol. 134, pp. 18916–9, 2012.
Lewis acids promote the polymerization of several 2-chloroalkylenedioxythiophenes, providing high-molecular-weight conjugated polymers. The proposed mechanism is a cationic chain-growth polymerization, as confirmed by end-capping reactions and a linear correlation of molecular weight with percent conversion. The "living" character of this process was used to prepare new block copolymers.
R. R. Parkhurst and Swager, T. M., "Synthesis and optical properties of phenylene-containing oligoacenes.", Journal of the American Chemical Society, vol. 134, pp. 15351–6, 2012.
Synthesis of a new class of fully unsaturated ladder structures, phenylene-containing oligoacenes (POAs), using 3,4-bis(methylene)cyclobutene as a building block for sequential Diels-Alder reactions is described. The geometric effects of strain and the energetic cost of antiaromaticity can be observed via the optical and electrochemical properties of the reported compounds. The resulting shape-persistent ladder structures contain neighboring chromophores that are partially electronically isolated from one another while still undergoing a reduction in the band gap of the material.
M. K. Kiesewetter, Corzilius, B., Smith, A. A., Griffin, R. G., and Swager, T. M., "Dynamic nuclear polarization with a water-soluble rigid biradical.", Journal of the American Chemical Society, vol. 134, pp. 4537–40, 2012.
A new biradical polarizing agent, bTbtk-py, for dynamic nuclear polarization (DNP) experiments in aqueous media is reported. The synthesis is discussed in light of the requirements of the optimal theoretical biradical system. To date, the DNP NMR signal enhancement resulting from bTbtk-py is the largest of any biradical in the ideal glycerol/water solvent matrix, ε = 230. EPR and X-ray crystallography are used to characterize the molecule and suggest approaches for further optimizing the biradical distance and relative orientation.
E. L. Dane, Corzilius, B., Rizzato, E., Stocker, P., Maly, T., Smith, A. A., Griffin, R. G., Ouari, O., Tordo, P., and Swager, T. M., "Rigid orthogonal bis-TEMPO biradicals with improved solubility for dynamic nuclear polarization.", The Journal of Organic Chemistry, vol. 77, pp. 1789–97, 2012.
The synthesis and characterization of oxidized bis-thioketal-trispiro dinitroxide biradicals that orient the nitroxides in a rigid, approximately orthogonal geometry are reported. The biradicals show better performance as polarizing agents in dynamic nuclear polarization (DNP) NMR experiments as compared to biradicals lacking the constrained geometry. In addition, the biradicals display improved solubility in aqueous media due to the presence of polar sulfoxides. The results suggest that the orientation of the radicals is not dramatically affected by the oxidation state of the sulfur atoms in the biradical, and we conclude that a biradical polarizing agent containing a mixture of oxidation states can be used for improved solubility without a loss in performance.
J. R. Cox, Kang, H. A., Igarashi, T., and Swager, T. M., "Norbornadiene End-Capping of Cross-Coupling Polymerizations: A Facile Route to Triblock Polymers", ACS Macro Letters, vol. 1, pp. 334–337, 2012.
The potential use of conjugated polymers in device applications is often limited by their less-than-optimal physicochemical properties. This work describes an efficient protocol for end-capping conjugated polymers synthesized via palladium-catalyzed cross-coupling polymerizations with norbornene groups. Specifically, the hydroarylation of norbornadiene is shown to be a high-yielding end-capping method. These strained bicyclic alkenyl end groups can be transformed into macroinitiators via ring-opening metathesis polymerization and can polymerize other strained monomers, such as norbornene, yielding elastomeric triblock copolymers.
J. M. Batson and Swager, T. M., "Poly(para-arylene)s via [2+2+2]", ACS Macro Letters, vol. 1, pp. 1121–1123, 2012.
We report a versatile new synthetic route to poly(para-phenylene)s (PPPs) and related poly(para-arylene)s containing high degrees of substitution not readily available by other methods. Our method transforms highly soluble, alkyne-containing polymers into PPPs via high-yielding, transition-metal-mediated [2+2+2] cyclizations.
T. M. Swager, "Functional Graphene: Top-Down Chemistry of the π-Surface", ACS Macro Letters, vol. 1, pp. 3–5, 2012.
A review. A perspective on selected recent advances in graphene functionalization is offered. The activation of graphite as a means to create exfoliated sheets is highlighted as an important approach for achieving dispersions of individual graphene sheets.
J. M. Lobez, Andrew, T. L., Bulović, V., and Swager, T. M., "Improving the performance of P3HT-fullerene solar cells with side-chain-functionalized poly(thiophene) additives: a new paradigm for polymer design.", ACS Nano, vol. 6, pp. 3044–56, 2012.
The motivation of this study is to determine if small amounts of designer additives placed at the polymer-fullerene interface in bulk heterojunction (BHJ) solar cells can influence their performance. A series of AB-alternating side-chain-functionalized poly(thiophene) analogues, P1-6, are designed to selectively localize at the interface between regioregular poly(3-hexylthiophene) (rr-P3HT) and PC(n)BM (n = 61, 71). The side chains of every other repeat unit in P1-6 contain various terminal aromatic moieties. BHJ solar cells containing ternary mixtures of rr-P3HT, PC(n)BM, and varying weight ratios of additives P1-6 are fabricated and studied. At low loadings, the presence of P1-6 consistently increases the short circuit current and decreases the series resistance of the corresponding devices, leading to an increase in power conversion efficiency (PCE) compared to reference P3HT/PC(61)BM cells. Higher additive loadings (>5 wt %) lead to detrimental nanoscale phase separation within the active layer blend and produce solar cells with high series resistances and low overall PCEs. Small-perturbation transient open circuit voltage decay measurements reveal that, at 0.25 wt % incorporation, additives P1-6 increase charge carrier lifetimes in P3HT/PC(61)BM solar cells. Pentafluorophenoxy-containing polymer P6 is the most effective side-chain-functionalized additive and yields a 28% increase in PCE when incorporated into a 75 nm thick rr-P3HT/PC(61)BM BHJ at a 0.25 wt % loading. Moreover, devices with 220 nm thick BHJs containing 0.25 wt % P6 display PCE values of up to 5.3% (30% PCE increase over a control device lacking P6). We propose that additives P1-6 selectively localize at the interface between rr-P3HT and PC(n)BM phases and that aromatic moieties at side-chain termini introduce a dipole at the polymer-fullerene interface, which decreases the rate of bimolecular recombination and, therefore, improves charge collection across the active layer.
Y. Takeda, Andrew, T. L., Lobez, J. M., Mork, J. A., and Swager, T. M., "An air-stable low-bandgap n-type organic polymer semiconductor exhibiting selective solubility in perfluorinated solvents.", Angewandte Chemie, International Edition, vol. 51, pp. 9042–6, 2012.
A thin-film transistor: An n-type polymer semiconductor, poly(2,3-bis(perfluorohexyl)thieno[3,4-b]pyrazine), was synthesized through a Pd-catalyzed polycondensation employing a perfluorinated multiphase solvent system. This is the first example of an n-type polymer semiconductor with exclusive solubility in fluorinated solvents. The fabrication of organic field effect transistors containing this new n-type polymer semiconductor is shown.
B. VanVeller, Robinson, D., and Swager, T. M., "Triptycene diols: a strategy for synthesizing planar π systems through catalytic conversion of a poly(p-phenylene ethynylene) into a poly(p-phenylene vinylene).", Angewandte Chemie, International Edition, vol. 51, pp. 1182–6, 2012.
A highly efficient method to introduce planarity and rigidity into a conjugated polymer backbone, using a catalytic main-chain alteration to convert the alkynes of a PPE into the annulated alkenes of a PPV, was developed. Post-polymerization cyclization provided a rare O-substitution of the vinyl units of the PPV and endowed the products with greater planarity and rigidity, resulting in red shifts in absorbance and fluorescence and well-defined vibrational features. We hope to extend this cyclization strategy to establish a higher order of planarity for other conjugated polymer classes.
K. A. Mirica, Weis, J. G., Schnorr, J. M., Esser, B., and Swager, T. M., "Mechanical drawing of gas sensors on paper.", Angewandte Chemie, International Edition, vol. 51, pp. 10740–5, 2012.
Mechanical abrasion of compressed single-walled carbon nanotubes (SWCNTs) on the surface of paper produces sensors capable of detecting NH3 gas at sub-ppm concentrations. This method of fabrication is simple, inexpensive, and entirely solvent-free, and it avoids difficulties arising from the inherent instability of many SWCNT dispersions.
B. Esser, Schnorr, J. M., and Swager, T. M., "Selective Detection of Ethylene Gas Using Carbon Nanotube-Based Devices: Utility in Determination of Fruit Ripeness.", Angewandte Chemie, International Edition, vol. 51, p. 5507, 2012.
Ethylene is the smallest ripening hormone in plants. In their Communication (DOI: 10.1002/anie.201201042), T. M. Swager et al. show that from a simple mixture of single-walled carbon nanotubes and a copper(I) complex, chemoresistive devices can be prepared that are highly sensitive and selective to sub-ppm concentrations of ethylene. The utility of the sensor was demonstrated by following ripening stages in different fruits.
B. VanVeller, Schipper, D. J., and Swager, T. M., "Polycyclic aromatic triptycenes: oxygen substitution cyclization strategies.", Journal of the American Chemical Society, vol. 134, pp. 7282–5, 2012.
The cyclization and planarization of polycyclic aromatic hydrocarbons with concomitant oxygen substitution was achieved through acid-catalyzed transetherification and oxygen-radical reactions. The triptycene scaffold enforces proximity of the alcohol and arene reacting partners and confers significant rigidity to the resulting π-system, expanding the tool set of iptycenes for materials applications.
T. L. Andrew, VanVeller, B., and Swager, T. M., "The Synthesis of Azaperylene-9,10-dicarboximides", Synlett, vol. 2010, pp. 3045–3048, 2010.
The syntheses of two azaperylene-9,10-dicarboximides are presented. 1-Aza- and 1,6-diazaperylene-9,10-dicarboximides containing a 2,6-diisopropylphenyl substituent at the imide N-position were synthesized in two steps starting from naphthalene and isoquinoline derivatives.
E. L. Dane and Swager, T. M., "Synthesis of a water-soluble 1,3-bis(diphenylene)-2-phenylallyl radical.", The Journal of Organic Chemistry, vol. 75, pp. 3533–6, 2010.
The design and synthesis of a water-soluble 1,3-bis(diphenylene)-2-phenylallyl (BDPA) radical via the conjugate addition of a derivatized fluorene nucleophile is described. The compound is designed for use in dynamic nuclear polarization NMR. Its 9 GHz EPR spectrum in glycerol/water is reported.
C. Song and Swager, T. M., "Reactive conducting thiepin polymers.", The Journal of Organic Chemistry, vol. 75, pp. 999–1005, 2010.
We report the design and synthesis of annulated thiepins that undergo a bent-to-planar transformation driven by aromatization under electrochemical control. Thiepins are conjugated seven-membered ring systems with a thioether in the macrocycle. We synthesized thermally stable thiepins that are electropolymerizable, giving rise to thiepin-containing electroactive polymers. Extended thiepin systems undergo sulfur extrusion upon oxidation, a feature that has utility in peroxide sensing.
J. Lim and Swager, T. M., "Fluorous biphase synthesis of a poly(p-phenyleneethynylene) and its fluorescent aqueous fluorous-phase emulsion.", Angewandte Chemie, International Edition, vol. 49, pp. 7486–8, 2010.
It's just a phase: a highly fluorinated compound with a rigid three-dimensional architecture was synthesized as a monomer for poly(p-phenyleneethynylene)s (PPEs). Fluorous biphase reactions were applied to the synthesis of a PPE that is soluble only in fluorous solvents. A fluorous solution of the polymer could be processed into aqueous emulsions that display high fluorescence quantum yields.
B. Esser and Swager, T. M., "Detection of ethylene gas by fluorescence turn-on of a conjugated polymer.", Angewandte Chemie, International Edition, vol. 49, pp. 8872–5, 2010.
The sensing scheme the authors have designed for the detection of ethylene is based on a fluorescence turn-on mechanism and mimics nature by using a copper(I) complex to bind ethylene. The fluorescence of the conjugated polymer is partially quenched by the presence of copper(I) moieties that can coordinate to the polymer. Upon exposure to ethylene gas, the copper complexes bind to the ethylene molecules and no longer quench the polymer fluorescence. The advantage of this fluorescence turn-on over a turn-off mechanism is that it requires a specific binding event at the copper to create a new signal, whereas fluorescence quenching can occur in multiple ways. Furthermore, if a completely dark (fully quenched) background state can be achieved, even a weak turn-on signal can be readily measured, thereby allowing trace detection.
J. M. Lobez and Swager, T. M., "Radiation detection: resistivity responses in functional poly(olefin sulfone)/carbon nanotube composites.", Angewandte Chemie, International Edition, vol. 49, pp. 95–8, 2010.
A novel sensing scheme that is not based on scintillation or charge generation in semiconductors can be deployed for the detection of uncharged ionizing radiation using small devices. Functional POSs were accessed by click chemistry methods, and several strategies were successfully deployed to increase sensor sensitivity. Systematic improvements in sensitivity can be accomplished by rational design, and incorporation of the appropriate chemical components achieves sensitivities in the 10³ rad range.
B. VanVeller and Swager, T. M., "Biocompatible post-polymerization functionalization of a water soluble poly(p-phenylene ethynylene).", Chemical Communications, vol. 46, pp. 5761–3, 2010.
A biocompatible post-polymerization functionalization reaction takes advantage of a polymer's structural motif for the controllable attachment of biotin as a model biosensor that responds to streptavidin.
S. Albert-Seifried, Finlayson, C. E., Laquai, F., Friend, R. H., Swager, T. M., Kouwer, P. H. J., Juríček, M., Kitto, H. J., Valster, S., Nolte, R. J. M., and Rowan, A. E., "Multichromophoric phthalocyanine-(perylenediimide)₈ molecules: a photophysical study.", Chemistry – A European Journal, vol. 16, pp. 10021–9, 2010.
We describe the synthesis of a series of phthalocyanine (Pc)-perylenediimide (PDI)₈ "octad" molecules, in which eight PDI moieties are attached to a Pc core through alkyl-chain linkers. There is clear spectroscopic evidence that these octads can exist as non-aggregated "monomers" or form aggregates along the Pc cores, depending on the type of Pc and the solvent medium. In the low dielectric constant solvents in which the octads are soluble, photoexcitation of the PDI units leads to rapid energy transfer to the Pc centre, rather than a charge separation between moieties. In octad monomers, the Pc singlet excited state decays within tens of ps, whereas the excitons are stabilised in the aggregated form of the molecules, typically with lifetimes on the order of 1-10 ns. By contrast, in an octad design in which π-π interactions are suppressed by the steric hindrance of a corona of incompatible glycol tails around the molecule, a more straightforward photophysical interaction of Förster energy transfer between the PDI moieties and Pc core may be inferred. We consider these molecules as prototypical multichromophoric aggregates, giving delocalised states with considerable flexibility of design.
A. Lohr and Swager, T. M., "Stabilization of the nematic mesophase by a homogeneously dissolved conjugated polymer", Journal of Materials Chemistry, vol. 20, p. 8107, 2010.
A semi-conjugated iptycene polymer containing a special "crankshaft" backbone leading to superior solubility in a nematic LC medium was synthesized. The incorporation of this polymer in the LC leads to thermodynamic stabilization of the nematic mesophase. This effect is attributed to organizational coupling of the LC molecules to the polymer chains.
M. Levine, Song, I., Andrew, T. L., Kooi, S. E., and Swager, T. M., "Photoluminescent energy transfer from poly(phenyleneethynylene)s to near-infrared emitting fluorophores", Journal of Polymer Science Part A: Polymer Chemistry, vol. 48, pp. 3382–3391, 2010.
Photoluminescent energy transfer was investigated in conjugated polymer-fluorophore blended thin films. A pentiptycene-containing poly(phenyleneethynylene) was used as the energy donor, and 13 fluorophores were used as energy acceptors. The efficiency of energy transfer was measured by monitoring both the quenching of the polymer emission and the enhancement of the fluorophore emission. Near-IR emitting squaraines and terrylenes were identified as excellent energy acceptors. These results, where a new fluorescent signal occurs in the near-IR region on a completely dark background, offer substantial possibilities for designing highly sensitive turn-on sensors.
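For orientation, the transfer efficiency in donor-acceptor blends of this kind is usually quantified from the donor quenching, and for an isolated donor-acceptor pair under the Förster point-dipole approximation it falls off with the sixth power of the separation. A minimal statement of these textbook relations (E, I_DA, I_D, R_0, and r are standard symbols, not notation taken from the paper):

$$E \;=\; 1 - \frac{I_{\mathrm{DA}}}{I_{\mathrm{D}}} \;=\; \frac{R_0^{6}}{R_0^{6} + r^{6}}$$

In conjugated-polymer films, exciton migration along the backbone funnels excitations toward acceptors, so measured efficiencies can exceed the single-pair prediction; that amplification is what makes such blends attractive for turn-on sensing.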
Y. Weizmann, Chenoweth, D. M., and Swager, T. M., "Addressable terminally linked DNA-CNT nanowires.", Journal of the American Chemical Society, vol. 132, pp. 14009–11, 2010.
Despite many advances in carbon nanotube (CNT) research, several issues continue to plague the field with regard to the construction of well-defined hybrid CNT materials. Regiospecific covalent functionalization, nonspecific surface adsorption, and carbon nanotube aggregation/bundling present major difficulties when working with these materials. In this communication, we circumvent these problems and report a new addressable hybrid material composed of single-walled carbon nanotubes terminally linked by oligonucleotides into a nanowire motif. We show that the oligonucleotide junctions are addressable and can be targeted by gold nanoparticles.
E. L. Dane, King, S. B., and Swager, T. M., "Conjugated polymers that respond to oxidation with increased emission.", Journal of the American Chemical Society, vol. 132, pp. 7758–68, 2010.
Thioether-containing poly(para-phenylene-ethynylene) (PPE) copolymers show a strong fluorescence turn-on response when exposed to oxidants in solution as a result of the selective conversion of thioether substituents into sulfoxides and sulfones. We propose that the increase in fluorescence quantum yield (Φ_F) upon oxidation is the result of both an increase in the rate of fluorescence (k_F), as a result of greater spatial overlap of the frontier molecular orbitals in the oxidized materials, and an increase in the fluorescence lifetime (τ_F), due to a decrease in the rate of nonradiative decay. Contrary to established literature, the reported sulfoxides do not always act as fluorescence quenchers. The oxidation is accompanied by spectral changes in the absorption and emission of the polymers, which are dramatic when oxidation causes the copolymer to acquire a donor-acceptor interaction. The oxidized polymers have high fluorescence quantum yields in the solid state, with some having increased photostability. A turn-on fluorescence response to hydrogen peroxide in organic solvents in the presence of an oxidation catalyst indicates the potential of thioether-containing materials for oxidant sensing. The reported polymers show promise as new materials in applications where photostability is important, where tunability of emission across the visible spectrum is desired, and where efficient emission is an advantage.
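A one-line reminder of why a larger radiative rate and a longer lifetime both raise the quantum yield (standard photophysics, not specific to this paper; k_nr denotes the total nonradiative decay rate):

$$\Phi_F = k_F\,\tau_F, \qquad \tau_F = \frac{1}{k_F + k_{\mathrm{nr}}}$$

Oxidation that simultaneously increases k_F and suppresses k_nr therefore raises Φ_F through both factors, consistent with the mechanism proposed above.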
S. E. Wheeler, McNeil, A. J., Müller, P., Swager, T. M., and Houk, K. N., "Probing substituent effects in aryl-aryl interactions using stereoselective Diels-Alder cycloadditions.", Journal of the American Chemical Society, vol. 132, pp. 3304–11, 2010.
Stereoselective Diels-Alder cycloadditions that probe substituent effects in aryl-aryl sandwich complexes were studied experimentally and theoretically. Computations on model systems demonstrate that the stereoselectivity in these reactions is mediated by differential π-stacking interactions in competing transition states. This allows relative stacking free energies of substituted and unsubstituted sandwich complexes to be derived from measured product distributions. In contrast to gas-phase computations, dispersion effects do not appear to play a significant role in the substituent effects, in accord with previous experiments. The experimental π-stacking free energies are shown to correlate well with Hammett σ_m constants (r = 0.96). These substituent constants primarily provide a measure of the inductive electron-donating and -withdrawing character of the substituents, not donation into or out of the benzene π-system. The present experimental results are most readily explained using a recently proposed model of substituent effects in the benzene sandwich dimer in which the π-system of the substituted benzene is relatively unimportant and substituent effects arise from direct through-space interactions. Specifically, these results are the first experiments to clearly show that OMe enhances these π-stacking interactions, despite being a π-electron donor. This is in conflict with popular models in which substituent effects in aryl-aryl interactions are modulated by polarization of the aryl π-system.
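The route from measured product distributions to relative stacking free energies is transition-state theory for two competing pathways, and the reported correlation is the usual Hammett linear free-energy form (textbook relations; [P_X]/[P_H] is the ratio of competing products and ρ the reaction constant, symbols not taken from the paper):

$$\Delta\Delta G^{\ddagger} = -RT\,\ln\frac{[P_X]}{[P_H]}, \qquad \log\frac{k_X}{k_H} = \rho\,\sigma_m$$

The r = 0.96 correlation quoted above corresponds to a fit of the first quantity against σ_m across the substituent series.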
J. M. Lobez and Swager, T. M., "Disassembly of Elastomers: Poly(olefin sulfone)−Silicones with Switchable Mechanical Properties", Macromolecules, vol. 43, pp. 10422–10426, 2010.
An elastomeric polymer composite that can be disassembled at will into its individual components when exposed to mild bases is presented. The composite is formed of a poly(olefin sulfone) and a silicone bound together using "click" chemistry. The mechanical properties of the composites can be varied depending on their formulation. The base-triggered decomposition is advantageous for composite recycling and the controlled release of chemicals.
C. Song, Walker, B. D., and Swager, T. M., "Conducting Thiophene-Annulated Azepine Polymers", Macromolecules, vol. 43, pp. 5233–5237, 2010.
We report the synthesis of annulated azepines, conjugated seven-membered ring systems with nitrogen, designed to undergo an electrochemically controlled bent-to-planar transformation driven by aromatization. A Pd-catalyzed double amination strategy enabled us to synthesize annulated azepines that are thermally and electrochemically stable even in highly oxidized states. Several methods of chemical and electrochemical polymerization yielded azepine-based materials that were demonstrated to retain their redox properties in the solid state under ambient conditions. Because of their redox stability and conductivity, these polymers could find utility in actuating materials research.
J. M. W. Chan, Kooi, S. E., and Swager, T. M., "Synthesis of Stair-Stepped Polymers Containing Dibenz[a,h]anthracene Subunits", Macromolecules, vol. 43, pp. 2789–2793, 2010.
Polycyclic aromatic monomers based on substituted dibenz[a,h]anthracene frameworks have been prepared in high yields. From these, several fluorescent stair-stepped conjugated polymers containing fused aromatic subunits have been synthesized via Sonogashira polycondensations and Glaser-type oxidative couplings. The new polymers were characterized by NMR spectroscopy, gel-permeation chromatography (GPC), UV-vis, and fluorescence spectroscopy.
Y. Weizmann, Lim, J., Chenoweth, D. M., and Swager, T. M., "Regiospecific synthesis of Au-nanorod/SWCNT/Au-nanorod heterojunctions.", Nano Letters, vol. 10, pp. 2466–9, 2010.
The synthesis of precisely defined nanoscale hybrid materials remains a challenge at the frontier of chemistry and material science. In particular, the assembly of diverse high-aspect ratio one-dimensional materials such as gold nanorods and carbon nanotubes into functional systems is of ever increasing interest due to their electronic and sensing applications. To meet these challenges, methods for interfacing gold nanorods with carbon materials such as single-walled carbon nanotubes (SWCNTs) in a regio-controlled manner are needed. Herein, we report a method for the regiospecific synthesis of terminally linked gold nanorod-SWCNTs based on a nanotube surface protection strategy. The key to our approach is a SWCNT surface protection procedure allowing for selective functionalization of the SWCNT termini.
E. L. Dane and Swager, T. M., "Carbanionic route to electroactive carbon-centered anion and radical oligomers.", Organic Letters, vol. 12, pp. 4324–7, 2010.
The synthesis of poly-1,3-bisdiphenylene-2-phenyl allyl (BDPA) radicals via a new anionic oligomerization strategy is reported. The material displays a reversible reduction from the orange-red radical to the blue carbanion in solution.
B. VanVeller, Miki, K., and Swager, T. M., "Rigid hydrophilic structures for improved properties of conjugated polymers and nitrotyrosine sensing in water.", Organic Letters, vol. 12, pp. 1292–5, 2010.
The efficient synthesis of a hydrophilic monomer bearing a three-dimensional noncompliant array of hydroxyl groups is described; the monomer prevents the water-driven excimer features of hydrophobic poly(p-phenylene ethynylene) backbones. Sensitivity of the polymer to 3-nitrotyrosine is also discussed.
T. L. Andrew, Cox, J. R., and Swager, T. M., "Synthesis, reactivity, and electronic properties of 6,6-dicyanofulvenes.", Organic Letters, vol. 12, pp. 5302–5, 2010.
A series of 6,6-dicyanofulvene derivatives are synthesized starting from masked, dimeric, or monomeric cyclopentadienones. The reactivities of 6,6-dicyanofulvenes relative to their parent cyclopentadienones are discussed. 6,6-Dicyanofulvenes are capable of undergoing two consecutive, reversible, one-electron reductions and are presented as potential n-type small molecules.
C. Rotschild, Tomes, M., Mendoza, H., Andrew, T. L., Swager, T. M., Carmon, T., and Baldo, M. A., "Cascaded energy transfer for efficient broad-band pumping of high-quality, micro-lasers.", Advanced Materials, vol. 23, pp. 3057–60, 2011.
Low-cost, silicon-compatible, on-chip high-quality (Q) micro-ring visible lasers are prepared by coating silica resonators with a combination of organic dyes: AlQ3, DCJTB, and terrylene. The materials are selected to demonstrate cascaded energy transfer, which concentrates the optical pump and overcomes the mismatch between the spectrally and spatially broad pump and the narrow modes of the high-finesse resonant cavity. The broadband pump excites a short wavelength dye in a low Q-factor cavity, thereby avoiding the need to resonantly match the pump. Next, the low Q-factor cavity pumps a longer wavelength dye in what is effectively a higher Q-factor cavity, and the process can be repeated until the desired Q-factor is met. All cavities are on the same resonator and differ only in the wavelength of their operation. Detuning of one cavity with respect to the other cannot occur, eliminating the instability normally associated with resonant pumping of high-Q lasers.
W. R. Collins, Lewandowski, W., Schmois, E., Walish, J., and Swager, T. M., "Claisen rearrangement of graphite oxide: a route to covalently functionalized graphenes.", Angewandte Chemie, International Edition, vol. 50, pp. 8848–52, 2011.
The basal plane allylic alcohol functionality of graphite oxide can be converted into N,N-dimethylamide groups through an Eschenmoser-Claisen sigmatropic rearrangement by using N,N-dimethylacetamide dimethyl acetal. Subsequent saponification of these groups affords the carboxylic acids, which, when deprotonated, electrostatically stabilize the graphene sheets in an aqueous environment.
W. R. Collins, Schmois, E., and Swager, T. M., "Graphene oxide as an electrophile for carbon nucleophiles.", Chemical Communications, vol. 47, pp. 8790–2, 2011.
The covalent surface functionalization of graphene oxide with the malononitrile anion has been demonstrated. Once installed, these surface-bound "molecular lynchpins" can be chemically modified to increase the solubility of the graphene derivative in either organic or aqueous environments.
J. M. Schnorr and Swager, T. M., "Emerging Applications of Carbon Nanotubes.", Chemistry of Materials, vol. 23, pp. 646–657, 2011.
A review. Owing to their unique electrical and mechanical properties, carbon nanotubes (CNTs) have attracted great attention in recent years. A diverse array of methods has been developed to modify CNTs and to assemble them into devices, and from these innovations many applications of CNTs have been demonstrated: transparent electrodes for organic light-emitting diodes (OLEDs), lithium-ion batteries, supercapacitors, and CNT-based electronic components such as field-effect transistors (FETs). CNTs have also been employed in catalysis and sensing, as well as in filters and mechanical and biomedical applications. This review highlights illustrative examples from these areas to give an overview of applications of CNTs.
S. Liu, Müller, P., Takase, M. K., and Swager, T. M., ""Click" synthesis of heteroleptic tris-cyclometalated iridium(III) complexes: Cu(I) triazolide intermediates as transmetalating reagents.", Inorganic Chemistry, vol. 50, pp. 7598–609, 2011.
Efficient synthesis of heteroleptic tris-cyclometalated Ir(III) complexes mer-Ir(C^N)2(trpy) (trpy = 2-(1H-[1,2,3]triazol-4-yl)pyridine) is achieved by using the Cu(I)-triazolide intermediates formed in "click" reactions as transmetalating reagents. Ligand preparation and cyclometalation of Ir(III) are accomplished in one pot. The robust nature of click chemistry provides opportunities to introduce different functional groups to the cyclometalated system, for example, alkyl, perfluoroalkyl, and aryl moieties. All of the meridional isomers show short-lived phosphorescence at room temperature, both in solution and in the solid state. DFT calculations indicate that the phosphorescence of mer-Ir(C^N)2(trpy) is attributed to mixed ³MLCT and ³LC excited states, also supported by the broad spectral shape and hypsochromic shift upon media rigidification. The luminescence efficiency and excited-state lifetimes of the cyclometalated complexes can be tuned by varying the substituents on the triazole ring, while the emission color is mainly determined by the phenylpyridine-based ligands. Moreover, the trpy ligand can adopt the N^N chelating mode under selective reaction conditions. mer-Ir(C^N)2(trpy) complexes isomerize into cationic [Ir(C^N)2(N^N-trpy)]+ species instead of their fac isomers upon heating or UV irradiation. This can be explained by the strong trans influence exerted by the phenyl groups. The weakened Ir-C(trpy) bonds are likely to be activated and protonated, leading to the switch of the trpy ligand to a thermodynamically more stable N^N chelating mode.
J. M. Schnorr and Swager, T. M., "Wiring-up catalytically active metals in solution with sulfonated carbon nanotubes", Journal of Materials Chemistry, vol. 21, p. 4768, 2011.
Highly water-soluble sulfonated MWCNTs were synthesized and could be used to facilitate the electron transfer between Pd and Cu in a Wacker-type oxidation in solution.
D. Izuhara and Swager, T. M., "Bispyridinium-phenylene-based copolymers: low band gap n-type alternating copolymers", Journal of Materials Chemistry, vol. 21, p. 3579, 2011.
Bispyridinium-phenylene-based conjugated donor-acceptor copolymers were synthesized by a Stille cross-coupling and cyclization sequence. These polyelectrolytes are freely soluble in organic solvents and display broad optical absorption bands that extend into the near-IR region. They show ambipolar redox properties with high electron affinities (LUMO levels) of 3.9-4.0 eV as well as high degrees of electroactivity. When reduced (n-doped), these materials display in situ conductivities as high as 180 S/cm. The high conductivity is attributed to the planar structure that is enforced by the cyclic structures of the polymer. The electron affinities are comparable to that of PCBM, a C60-based n-type material, and hence these polymers may find utility in photovoltaic devices.
T. L. Andrew and Swager, T. M., "Structure-Property relationships for exciton transfer in conjugated polymers", Journal of Polymer Science Part B: Polymer Physics, vol. 49, pp. 476–498, 2011.
A review. The ability of conjugated polymers to function as electronic materials is dependent on the efficient transport of excitons along the polymer chain. Generally, the photophysics of the chromophore monomer dictate the excited state behavior of the corresponding conjugated polymers. Different molecular structures are examined to study the role of excited state lifetimes and molecular conformations on energy transfer. The incorporation of rigid, three-dimensional scaffolds, such as iptycenes and cyclophanes, can encourage an oblique packing of the chromophore units of a conjugated polymer, thus allowing the formation of electronically-coupled aggregates that retain high quantum yields of emission. Rigid iptycene scaffolds also act as excellent structural directors that encourage complete solvation of poly(p-phenylene ethynylene)s (PPEs) in a liquid crystal (LC) solvent. LC-PPE mixtures display both an enhanced conformational alignment of polymer chains and extended effective conjugation lengths relative to isotropic solutions, which leads to enhanced energy transfer. Facile exciton migration in PPEs allows energy absorbed over large areas to be funneled into traps created by the binding of analytes, resulting in signal amplification in sensory devices.
F. Wang and Swager, T. M., "Diverse chemiresistors based upon covalently modified multiwalled carbon nanotubes.", Journal of the American Chemical Society, vol. 133, pp. 11181–93, 2011.
A diverse array of multiwalled carbon nanotube (MWCNT) sensory materials has been synthesized and used to create sensors capable of identifying volatile organic compounds (VOCs) on the basis of their functional groups. Functionalized MWCNTs with a series of cross-sensitive recognition groups were successfully synthesized via zwitterionic and post-transformation synthetic procedures. The chemical functional groups incorporated on the MWCNT surfaces greatly increased sensitivity and selectivity toward the targeted analytes. The distinct response pattern of each chemical was subjected to statistical treatments, which led to a clear separation and accurate identification of 100% of the VOCs. These results demonstrate that covalently functionalized MWCNT-based sensor arrays are a promising approach for low-cost, real-time detection and identification of VOCs.
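The identification step in array-based sensing of this kind is typically a dimensionality reduction followed by a classifier; the abstract does not specify which statistical treatment was used, so the following is only an illustrative sketch with a simulated, hypothetical response matrix, built on scikit-learn's PCA and LinearDiscriminantAnalysis:

```python
# Illustrative sketch of VOC identification from chemiresistor array data.
# The response matrix below is simulated; the paper's actual statistical
# treatment is not specified in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_sensors = 12  # one column per differently functionalized MWCNT sensor
vocs = ["acetone", "ethanol", "toluene", "hexane"]  # hypothetical analytes

# Each VOC gets a characteristic fingerprint (e.g., relative resistance
# changes across the cross-sensitive array), plus measurement noise.
fingerprints = rng.normal(size=(len(vocs), n_sensors))
X = np.vstack([f + 0.1 * rng.normal(size=(20, n_sensors)) for f in fingerprints])
y = np.repeat(vocs, 20)  # 20 replicate exposures per analyte

# Reduce dimensionality, then classify; cross-validation estimates the
# identification accuracy (analogous to the 100% identification reported).
clf = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With well-separated fingerprints and modest noise, such a pipeline classifies every replicate correctly, which is the statistical sense in which "100% identification" claims are usually made.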
J. R. Cox, Müller, P., and Swager, T. M., "Interrupted energy transfer: highly selective detection of cyclic ketones in the vapor phase.", Journal of the American Chemical Society, vol. 133, pp. 12910–3, 2011.
We detail our efforts toward the selective detection of cyclic ketones, e.g. cyclohexanone, a component of plasticized explosives. Thin films composed of a conjugated polymer are used to amplify the emission of an emissive receptor via energy transfer. We propose that the energy transfer is dominated by an electron-exchange mechanism to an upper excited state of the fluorophore followed by relaxation and emission to account for the efficient energy transfer in the absence of appreciable spectral overlap. Exposure to cyclic ketones results in a ratiometric fluorescence response. The thin films show orthogonal responses when exposed to cyclic ketones versus acyclic ketones. We demonstrate that the exquisite selectivity is the result of a subtle balance between receptor design and the partition coefficient of molecules into the polymer matrix.
Y. Weizmann, Chenoweth, D. M., and Swager, T. M., "DNA-CNT nanowire networks for DNA detection.", Journal of the American Chemical Society, vol. 133, pp. 3238–41, 2011.
The ability to detect biological analytes in a rapid, sensitive, operationally simple, and cost-effective manner will impact human health and safety. Hybrid biocatalyzed-carbon nanotube (CNT) nanowire-based detection methods offer a highly sensitive and specific platform for the fabrication of simple and effective conductometric devices. Here, we report a conductivity-based DNA detection method utilizing carbon nanotube-DNA nanowire devices and oligonucleotide-functionalized enzyme probes. Key to our sensor design is a DNA-linked-CNT wire motif, which forms a network of interrupted carbon nanotube wires connecting two electrodes. Sensing occurs at the DNA junctions linking CNTs, followed by amplification using enzymatic metallization leading to a conductometric response. The DNA analyte detection limit is 10 fM with the ability to discriminate single, double, and triple base pair mismatches. DNA-CNT nanowires and device sensing gaps were characterized by scanning electron microscopy (SEM) and confocal Raman microscopy, supporting the enhanced conductometric response resulting from nanowire metallization.
D. Izuhara and Swager, T. M., "Poly(3-hexylthiophene)-block-poly(pyridinium phenylene)s: Block Polymers of p- and n-Type Semiconductors", Macromolecules, vol. 44, pp. 2678–2684, 2011.
Conjugated crystalline-crystalline donor-acceptor-donor block copolymer semiconductors, with regioregular poly(3-hexylthiophene) as the donor (p-type) block and poly(pyridinium phenylene) as the acceptor (n-type) block within the backbone, were produced by sequential Grignard metathesis synthesis of poly(3-hexylthiophene) followed by a Yamamoto-type cross-coupling polymerization-cyclization sequence. These conjugated block copolymers are soluble in organic solvents and display broad optical absorption bands extending close to the near-IR region. They show reversible ambipolar redox properties with high electron affinities of 3.8-4.0 eV as well as useful ionization potentials of 5.1 eV that are characteristic of the respective blocks. Block copolymers from p- and n-type organic semiconductors are of interest for the formation of nanostructured bulk heterojunctions in photovoltaic devices.
T. L. Andrew and Swager, T. M., "Thermally-Polymerized Rylene Nanoparticles.", Macromolecules, vol. 44, pp. 2276–2281, 2011.
Rylene dyes functionalized with varying numbers of phenyl trifluorovinylether (TFVE) moieties were subjected to a thermal emulsion polymerization to yield shape-persistent, water-soluble chromophore nanoparticles. Perylene and terrylene diimide derivatives containing either two or four phenyl TFVE functional groups were synthesized and subjected to thermal emulsion polymerization in tetraglyme. Dynamic light scattering measurements indicated that particles with sizes ranging from 70 to 100 nm were obtained in tetraglyme, depending on monomer concentration. The photophysical properties of the individual monomers were preserved in the nanoemulsions, and emission colors could be tuned between yellow, orange, red, and deep red. The nanoparticles were found to retain their shape upon dissolution into water, and the resulting water suspensions displayed moderate to high fluorescence quantum yields.
S. A. Sydlik, Chen, Z., and Swager, T. M., "Triptycene Polyimides: Soluble Polymers with High Thermal Stability and Low Refractive Indices", Macromolecules, vol. 44, pp. 976–980, 2011.
A series of soluble, thermally stable aromatic polyimides were synthesized using commercially available five- and six-membered ring anhydrides and 2,6-diaminotriptycene derivatives. All of these triptycene polyimides (TPIs) were soluble in common organic solvents despite their completely aromatic structure, owing to the three-dimensional triptycene structure that prevents strong interchain interactions. Low solution viscosities (0.07-0.47 dL/g) and versatile solubilities allow for easy solution processing of these polymers. Nanoporosity in the solid state gives rise to high surface areas (up to 430 m²/g) and low refractive indices (1.19-1.79 at 633 nm), which suggest very low dielectric constants at optical frequencies. Polymer films were found to be amorphous. The decomposition temperature (Td) for all of the polymers is above 500 °C, and no glass transition temperatures can be found below 450 °C by differential scanning calorimetry (DSC), indicating excellent prospects for high-temperature applications. This combination of properties makes these polymers candidates for spin-on dielectric materials.
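The link asserted here between low refractive index and low dielectric constant is the Maxwell relation for non-magnetic media at optical frequencies (a standard identity, not a result of the paper):

$$\varepsilon_{\mathrm{opt}} \approx n^{2}$$

so the reported n = 1.19-1.79 corresponds to optical-frequency dielectric constants of roughly 1.4-3.2, which is why nanoporous films of this kind are discussed as low-k candidates.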
A. Ramírez-Monroy and Swager, T. M., "Metal Chelates Based on Isoxazoline[60]fullerenes", Organometallics, vol. 30, pp. 2464–2467, 2011.
3-R-C60-Fullereno[6,6]isoxazolines (R = 2-pyridinyl, 2-MeOC6H4, 6-(2-MeOC6H4)C5H3N, 2,2'-bipyridin-6-yl) were prepared by cycloaddition of C60 with nitrile oxides RC≡N-O, generated in situ by dehydrohalogenation of hydroxyimidoyl chlorides RC(Cl)=NOH. This reaction provides access to an array of fullerene-fused heterocycles bearing covalently linked chelate moieties. The improved chelating property of the newly synthesized isoxazoline[60]fullerene adducts toward transition metals allows the syntheses of octahedral and square-planar organometallic compounds of rhenium, iridium, and platinum. This new approach has great potential as a general route to other novel derivatives containing catalytically active transition metals. The redox properties of 3-(2-pyridinyl)fullereno[d]isoxazoline (2a) and its rhenium chlorotricarbonyl complexes were studied by cyclic voltammetry.
R. Parkhurst and Swager, T., "Synthesis of 3,4-Bis(benzylidene)cyclobutenes", Synlett, vol. 2011, pp. 1519–1522, 2011.
The syntheses of several derivatives of 3,4-bis(benzylidene)cyclobutene, I-V, are reported. The previously unknown 1,2-dibromo-3,4-bis(benzylidene)cyclobutenes III-V were obtained through in situ generation of 1,6-diphenyl-3,4-dibromohexa-1,2,4,5-tetraene followed by electrocyclic ring closure. Ensuing reduction and metal-catalyzed cross-coupling provided additional derivatives. The effects of ring strain on the geometry and electronics of these derivatives were examined by X-ray crystallography and 1H NMR spectroscopy, respectively.
T. L. Andrew and Swager, T. M., "Detection of explosives via photolytic cleavage of nitroesters and nitramines.", The Journal of Organic Chemistry, vol. 76, pp. 2976–93, 2011.
The nitramine-containing explosive RDX and the nitroester-containing explosive PETN are shown to be susceptible to photofragmentation upon exposure to sunlight. Model compounds containing nitroester and nitramine moieties are also shown to fragment upon exposure to UV irradiation. The products of this photofragmentation are reactive, electrophilic NOx species, such as nitrous and nitric acid, nitric oxide, and nitrogen dioxide. N,N-Dimethylaniline is capable of being nitrated by the reactive, electrophilic NOx photofragmentation products of RDX and PETN. A series of 9,9-disubstituted 9,10-dihydroacridines (DHAs) are synthesized from either N-phenylanthranilic acid methyl ester or a diphenylamine derivative and are similarly shown to be rapidly nitrated by the photofragmentation products of RDX and PETN. A new (turn-on) emission signal at 550 nm is observed upon nitration of DHAs due to the generation of fluorescent donor-acceptor chromophores. Using fluorescence spectroscopy, the presence of ca. 1.2 ng of RDX and 320 pg of PETN can be detected by DHA indicators in the solid state upon exposure to sunlight. The nitration of aromatic amines by the photofragmentation products of RDX and PETN is presented as a unique, highly selective detection mechanism for nitroester- and nitramine-containing explosives, and DHAs are presented as inexpensive and impermanent fluorogenic indicators for the selective, standoff/remote identification of RDX and PETN.
J. Bouffard, Kim, Y., Swager, T. M., Weissleder, R., and Hilderbrand, S. A., "A Highly Selective Fluorescent Probe for Thiol Bioimaging.", Organic Letters, vol. 10, pp. 37–40, 2008.
A new fluorescent turn-on probe (I) for the selective sensing and bioimaging of thiols is reported. In aqueous buffer solutions at physiological pH, thiols cleave the 2,4-dinitrobenzenesulfonyl group to release the red-emissive donor-acceptor fluorophore. The probe displays excellent immunity to interference from nitrogen and oxygen nucleophiles, and the imaging of thiols in living cells is demonstrated.
N. T. Tsui, Yang, Y., Mulliken, A. D., Torun, L., Boyce, M. C., Swager, T. M., and Thomas, E. L., "Enhancement to the rate-dependent mechanical behavior of polycarbonate by incorporation of triptycenes", Polymer, vol. 49, pp. 4703–4712, 2008.
The tensile and compressive properties of triptycene-polycarbonates were tested over 6 orders of magnitude in strain rate. Initially we studied a low molecular weight, low triptycene content PC blended with Iupilon PC, and then a series of higher molecular weight, higher triptycene content polymers. The PC blend with only 1.9 wt.% triptycene displayed up to a 20% increase in modulus and up to a 17% increase in yield strength, with elongations over 80%, as compared to the Iupilon PC. The higher molecular weight T-PCs (up to 26 wt.% triptycene) exhibited improvements in modulus of over 20% and improvements in compressive strength of nearly 50% at both low and high strain rates, without any apparent sacrifice of ductility, as compared to Iupilon PC. All samples containing triptycene units retained transparency and exhibited no signs of crystallinity or phase separation. Moreover, both the blends and the triptycene-PC copolymers displayed significantly altered dynamic mechanical spectra, specifically the emergence of a pronounced new β′ relaxation approximately 75 °C above the traditionally observed β relaxation in PC (approximately −100 °C). The enhancement of the mechanical properties observed provides valuable insights into the unique packing and interactions during plastic flow induced by the presence of triptycene units.
J. Ho, Rose, A., Swager, T., and Bulović, V., "Solid-state chemosensitive organic devices for vapor-phase detection.", Springer Series in Materials Science, vol. 107, pp. 141–184, 2008.
A review of state-of-the-art vapor-phase solid-state chemosensing organic devices. The individual chemical sensing approaches and their transduction mechanisms are described.
J. M. W. Chan and Swager, T. M., "Synthesis of arylethynylated cyclohexa-m-phenylenes via sixfold Suzuki coupling", Tetrahedron Letters, vol. 49, pp. 4912–4914, 2008.
A one-step, one-pot assembly of arylethynylated cyclohexa-meta-phenylenes from meta-dibromides and meta-diboronates was accomplished using a 6-fold Suzuki macrocyclization, without the need for high-dilution or slow-addition techniques.
K. - N. Hu, Song, C., Yu, H. - H., Swager, T. M., and Griffin, R. G., "High-frequency dynamic nuclear polarization using biradicals: a multifrequency EPR lineshape analysis.", The Journal of chemical physics, vol. 128, p. 052302, 2008.
To date, the cross effect (CE) and thermal mixing (TM) mechanisms have consistently provided the largest enhancements in dynamic nuclear polarization (DNP) experiments performed at high magnetic fields. Both involve a three-spin electron-electron-nucleus process whose efficiency depends primarily on two electron-electron interactions: the interelectron distance R and the correct electron paramagnetic resonance (EPR) frequency separation that matches the nuclear Larmor frequency, |ω_e2 − ω_e1| = ω_n. Biradicals, for example, two 2,2,6,6-tetramethyl-piperidine-1-oxyls (TEMPOs) tethered with a molecular linker, can in principle constrain both the distance and relative g-tensor orientation between two unpaired electrons, allowing these two spectral parameters to be optimized for the CE and TM. To verify this hypothesis, we synthesized a series of biradicals, bis-TEMPO tethered by n ethylene glycol units (a.k.a. BTnE), that show an increasing DNP enhancement with a decreasing tether length. Specifically, at 90 K and 5 T, the enhancement grew from approximately 40, observed with 10 mM monomeric TEMPO, where the average R ≈ 56 Å corresponds to an electron-electron dipolar coupling constant ω_d/2π = 0.3 MHz, to approximately 175 with 5 mM BT2E (10 mM electrons), which has R ≈ 13 Å with ω_d/2π = 24 MHz. In addition, we compared these DNP enhancements with those from three biradicals having shorter and more rigid tethers: bis-TEMPO tethered by oxalyl amide, bis-TEMPO tethered by the urea structure, and 1-(TEMPO-4-oxyl)-3-(TEMPO-4-amino)-propan-2-ol (TOTAPOL). TOTAPOL is of particular interest since it is soluble in aqueous media and compatible with DNP experiments on biological systems such as membrane and amyloid proteins. The interelectron distances and relative g-tensor orientations of all of these biradicals were characterized with an analysis of their 9 and 140 GHz continuous-wave EPR lineshapes. The results show that the largest DNP enhancements are observed with BT2E and TOTAPOL, which have shorter tethers and the two TEMPO moieties oriented so as to satisfy the matching condition for the CE.
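The two design parameters named here can be written compactly. The cross-effect matching condition and the point-dipole electron-electron coupling are standard magnetic-resonance expressions, not results specific to this paper:

$$\left|\omega_{e2}-\omega_{e1}\right| = \omega_n, \qquad \frac{\omega_d}{2\pi} = \frac{\mu_0}{4\pi}\,\frac{\gamma_e^{2}\hbar}{R^{3}} \approx \frac{52\ \mathrm{MHz}}{\left(R/\mathrm{nm}\right)^{3}}$$

The second relation reproduces the quoted couplings: R ≈ 5.6 nm gives ω_d/2π ≈ 0.3 MHz, and R ≈ 1.3 nm gives ≈ 24 MHz, which is why shortening the tether strengthens the three-spin process.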
T. M. Swager, "Iptycenes in the design of high performance polymers.", Accounts of chemical research, vol. 41, pp. 1181–9, 2008.
This Account details the use of building blocks known as iptycene units, which are particularly useful in the design of advanced materials because of their three-dimensional, noncompliant structures. Iptycenes are built upon [2,2,2]-ring systems in which the bridges are aromatic rings, and the simplest member of this class of compounds is triptycene. Iptycenes can provide steric blocking, which can prevent strong interactions between polymeric chromophores that have a strong tendency to form nonemissive exciplex complexes. Iptycene-containing conjugated polymers are exceptionally stable and display solution-like emissive spectra and quantum yields in the solid state. This application of iptycenes has enabled new vapor detection methods for ultratrace detection of high explosives that are now used by the U.S. military. The three-dimensional shape of iptycenes creates interstitial space (free volume) around the molecules. This space can confer size selectivity in sensory responses and also promotes alignment in oriented polymers and liquid crystals. Specifically, the iptycene-containing polymers and molecules align in the anisotropic host material in a way that minimizes the free volume. This effect can be used to align molecules contrary to what would be predicted by conventional models on the basis of aspect ratios. In one demonstration, we show that an iptycene polymer aligns orthogonally to the host polymer when stretched, and these structures approximate molecular versions of woven cloth. In liquid crystal solutions, the conjugated iptycene-containing polymers exhibit greater electronic delocalization, and the transport of excited states along the polymer backbone is observed. Structures that preserve high degrees of internal free volume can also be designed to create low dielectric constant insulators. These materials have high temperature stability (>500 °C) and hardness that make them potential interlayer dielectric materials for integrated circuits. In cases where the iptycene structures are less densely spaced along the polymer backbones, interlocking structures can be created. These structures allow for small interpolymer motions, but at large deformations, the steric clashes between iptycenes result in the transfer of load from one polymer to another. This mechanism has the ability to impart greater modulus, strength, and ductility. It is difficult to increase modulus without adversely affecting ductility, and classical high-modulus materials have low ductility. As a result, the use of interlocking iptycene structures is a promising approach to new generations of structural materials.
K. Müllen and Swager, T. M., "Advanced Polymer Design and Synthesis", Accounts of Chemical Research, vol. 41, p. 1085, 2008.
A review.
H. Gu and Swager, T. M., "Fabrication of Free-standing, Conductive, and Transparent Carbon Nanotube Films", Advanced Materials, vol. 20, pp. 4433–4437, 2008.
Single-walled carbon nanotube films are fabricated from a homogeneous nanotube dispersion and a slow-evaporation technique. The transmittance and sheet resistance of the films are evaluated at room temperature. Using nanotube films as conductive substrates for electrochemical deposition illustrates that these nanotube films can act as alternatives to indium tin oxide for optoelectronic applications.
F. Wang, Yang, Y., and Swager, T. M., "Molecular recognition for high selectivity in carbon nanotube/polythiophene chemiresistors.", Angewandte Chemie, International Edition, vol. 47, pp. 8394–6, 2008.
A chemiresistive material based on carbon nanotubes wrapped with a calixarene-substituted polythiophene displays a selective and sensitive response to xylene isomers, as demonstrated by conductance changes, quartz crystal microbalance, and fluorescence studies. This demonstrates the promise of low-cost, real-time sensors using host-guest chemistry.
J. Bouffard and Swager, T. M., "Self-assembly of amphiphilic poly(phenylene ethynylene)s in water-potassium dodecanoate-decanol lyotropic liquid crystals.", Chemical Communications, pp. 5387–9, 2008.
Poly(phenylene ethynylene)s bearing a high density of branched amphiphilic side-chains self-assemble at the air-water interface and in water-potassium dodecanoate-decanol lyotropic liquid crystals.
S. B. Raymond, Skoch, J., Hills, I. D., Nesterov, E. E., Swager, T. M., and Bacskai, B. J., "Smart optical probes for near-infrared fluorescence imaging of Alzheimer's disease pathology.", European Journal of Nuclear Medicine and Molecular Imaging, vol. 35 Suppl 1, pp. S93–8, 2008.
Near-infrared fluorescent probes for amyloid-β (Aβ) are an exciting option for molecular imaging in Alzheimer's disease research and may translate to clinical diagnostics. However, Aβ-targeted optical probes often suffer from poor specificity and slow clearance from the brain. We are designing smart optical probes that emit a characteristic fluorescence signal only when bound to Aβ.
K. Venkatesan, Kouwer, P. H. J., Yagi, S., Müller, P., and Swager, T. M., "Columnar mesophases from half-discoid platinum cyclometalated metallomesogens", Journal of Materials Chemistry, vol. 18, p. 400, 2008.
A series of liquid crystals based upon mononuclear ortho-platinated rod-like heteroaromatic and 1,3-diketonate ligands have been synthesized and studied. The liquid crystalline properties of these molecules were investigated using polarized light optical microscopy, differential scanning calorimetry, single-crystal X-ray diffraction, and powder X-ray diffraction. Increasing the number of alkyl chains attached to the 1,3-diketonate units resulted in a transition from lamellar (SmA) to hexagonal columnar phases (Colh). The 2-thienylpyridine units were previously unexplored in metallomesogenic complexes, and these studies extend the utility of ortho-platinated 2-phenylpyridines and 2-thienylpyridines to produce columnar liquid crystalline phases. The platinum complexes all display photoluminescence and are of interest for electrooptical applications.
A. Narayanan, Varnavski, O. P., Swager, T. M., and Goodson, T., "Multiphoton Fluorescence Quenching of Conjugated Polymers for TNT Detection", Journal of Physical Chemistry C, vol. 112, pp. 881–884, 2008.
The use of conjugated organic compounds in fluorescence sensing applications was demonstrated to provide an impressive method to detect energetic nitroaromatic compounds. The amplified fluorescence quenching in conjugated polymers was used for trace detection of nitroaromatics. This method, under one-photon excitation, possesses certain limitations, such as the use of harmful ultraviolet radiation and relatively high background noise from light scattering. A novel approach that utilizes the additional benefits of nonlinear optical methods involves multiphoton excited fluorescence. This technique employs IR excitation, which is essential for eye-safety applications and allows for deeper penetration through the atmosphere, with relatively low background noise. Two conjugated polymers are reported which show good multiphoton absorption properties. This is combined with the excellent sensitivity of the multiphoton excited fluorescence to the presence of trinitrotoluene (TNT). The multiphoton absorption cross-sections are provided and the Stern-Volmer plots are discussed. This technique, in combination with the great analyte sensitivity of various organic materials, promises to be an important sensing technology in the IR spectral regions.
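The Stern-Volmer plots mentioned here are the standard linearization of fluorescence quenching data (a textbook relation; I_0, I, K_SV, and [Q] are conventional symbols, not notation from the paper):

$$\frac{I_0}{I} = 1 + K_{\mathrm{SV}}[\mathrm{Q}]$$

where I_0 and I are the emission intensities without and with quencher at concentration [Q]. The amplified quenching characteristic of conjugated polymers appears as an unusually large slope K_SV relative to that of the isolated chromophore.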
F. Wang, Gu, H., and Swager, T. M., "Carbon nanotube/polythiophene chemiresistive sensors for chemical warfare agents.", Journal of the American Chemical Society, vol. 130, pp. 5392–3, 2008.
We report a chemiresistor fabricated via a simple spin-casting technique from a stable CNT dispersion in a hexafluoroisopropanol-functionalized polythiophene. The sensor shows high sensitivity and selectivity for the nerve agent simulant DMMP. A series of sensing studies, including field-effect investigations, electrode passivation, and fluorescence measurements, indicate a combined mechanism of charge transfer, introduction of scattering sites, and a configurational change of the polymer.
Z. Chen and Swager, T. M., "Synthesis and Characterization of Poly(2,6-triptycene)", Macromolecules, vol. 41, pp. 6880–6885, 2008.
We report the syntheses of two monomers, 2,6-dibromotriptycene and 2,6-diiodotriptycene, and their homopolymerization via a nickel(0)-mediated Yamamoto-type polycondensation, leading to a novel aromatic polymer, poly(2,6-triptycene). This new polymer was characterized by 1H and 13C NMR spectroscopy, which exhibited an excellent match with the NMR spectra of the model compound, 2,2'-bitriptycene. In addition, the polymer was found to be highly soluble in common organic solvents, although it does not contain any flexible side chains. This good solubility was attributed to its high content of triptycene units, whose rigid three-dimensional structure is proposed to prevent dense packing of the polymer chains. A highly transparent film could be obtained by spin-casting a chloroform solution. This polymer also demonstrated good thermal stability.
T. L. Andrew and Swager, T. M., "Reduced Photobleaching of Conjugated Polymer Films through Small Molecule Additives", Macromolecules, vol. 41, pp. 8306–8308, 2008.
The effect of added antioxidants and triplet quenchers on the photostability of thin films of a pentiptycene-containing poly(p-phenylene ethynylene) and of poly(9,9-dioctylfluorene) was studied. The rational design and synthesis of antioxidants and triplet quenchers that are compatible with conjugated polymer films are also presented.
J. Bouffard and Swager, T. M., "Fluorescent Conjugated Polymers That Incorporate Substituted 2,1,3-Benzooxadiazole and 2,1,3-Benzothiadiazole Units", Macromolecules, vol. 41, pp. 5559–5562, 2008.
Heterocyclic monomers based on 2,1,3-benzooxadiazole and 2,1,3-benzothiadiazole bearing solubilizing side chains have been synthesized in high yields over three steps from readily available starting materials. The monomers are efficiently cross-coupled with diynes and bis(boronates) to afford high molecular weight luminescent poly(arylene ethynylene)s and polyfluorenes that exhibit red-shifted absorption and emission maxima, greater solubility, and reduced aggregation.
Z. Chen, Bouffard, J., Kooi, S. E., and Swager, T. M., "Highly Emissive Iptycene−Fluorene Conjugated Copolymers: Synthesis and Photophysical Properties", Macromolecules, vol. 41, pp. 6672–6676, 2008.
Iptycene-type quinoxaline and thienopyrazine monomers were successfully synthesized via condensation between 9,10-dihydro-9,10-ethanoanthracene-11,12-dione and the corresponding diamines. Copolymers based on fluorene and the three iptycene monomers were prepared via the Suzuki coupling reaction, and they exhibited good solubility in appropriate organic solvents. These copolymers are fluorescent in both solution and the solid state, emitting blue, greenish-blue, and red colors owing to the different electronic properties of the iptycene comonomers. The differences in their absorption and emission spectra were attributed to donor-acceptor charge transfer interactions and/or polymer backbone conformational changes induced by steric effects. Moreover, the spectroscopic data clearly demonstrated the insulating effect of the iptycene units, which prevented aggregation of the polymer chains and the formation of excimers in the solid state.
D. Alcazar, Wang, F., Swager, T. M., and Thomas, E. L., "Gel Processing for Highly Oriented Conjugated Polymer Films", Macromolecules, vol. 41, pp. 9863–9868, 2008.
Hexafluoroisopropanol-functionalized polythiophene is able to build up an isotropic self-supporting network structure. The gel network can be melted and then transformed via mechanical shearing to form an anisotropic gel with the chains highly aligned along the shearing direction and the conjugated backbones π-stacked with their neighbors. The mechanism by which a dipole-functionalized polythiophene can form a reversible network able to be deformed into structurally oriented films may be of interest in the development of novel processing routes for conjugated polymers.
H. A. Kang, Bronstein, H. E., and Swager, T. M., "Conductive Block Copolymers Integrated into Polynorbornene-Derived Scaffolds", Macromolecules, vol. 41, pp. 5540–5547, 2008.
The synthesis and electrochemical properties of novel block copolymers are reported. Three different norbornene derivatives having phenylene-thiophene, phenylene-bithiophene, and phenylene-furan structures were copolymerized with either norbornene or 7-oxanorbornene derivatives via ring-opening metathesis polymerization (ROMP). The block copolymers' stabilities and solubilities could be improved by hydrogenation of double bonds in the polymer backbone. The block copolymers were subsequently cross-linked by anodic electropolymerization of the phenylene-heterocycle moieties, affording conducting polymers. All three block copolymers were readily deposited on the electrode substrates, and their cyclic voltammograms (CVs) revealed excellent reversibility in the redox cycles.
R. M. Moslin, Espino, C. G., and Swager, T. M., "Synthesis of Conjugated Polymers Containing cis-Phenylenevinylenes by Titanium Mediated Reductions.", Macromolecules, vol. 42, pp. 452–454, 2009.
The utility of Sato's titanium-mediated reduction of alkynes towards the synthesis of all cis-poly(phenylenevinylene)s (PPVs) is demonstrated by the syn-selective reduction of a variety of model diynes as well as a tetrayne. This technique was then applied to the reduction of a poly(phenyleneethynylene) (PPE) to provide the corresponding all-cis PPV polymer.
S. T. Meek, Nesterov, E. E., and Swager, T. M., "Near-infrared fluorophores containing benzo[c]heterocycle subunits.", Organic Letters, vol. 10, pp. 2991–3, 2008.
The syntheses and spectroscopic properties of eight new push-pull-type near-infrared fluorophores that contain either isobenzofuran or isothianaphthene subunits are presented. The isobenzofuran dyes demonstrate significantly red-shifted absorption compared with their isothianaphthene counterparts, which is attributed to isobenzofuran's more potent pro-quinoidal character.
R. Takita, Song, C., and Swager, T. M., "π-Dimer formation in an oligothiophene tweezer molecule.", Organic Letters, vol. 10, pp. 5003–5, 2008.
An oligothiophene tweezer molecule, which has two quaterthiophene moieties connected to create an electrochemically activated hinge, has been synthesized. Two-electron oxidation of the tweezer molecule produces an intramolecular π-dimer between the two oligothiophene moieties at room temperature, as confirmed by UV-vis absorption, electrochemistry, and EPR experiments.
C. Song and Swager, T. M., "π-Dimer formation as the driving force for calix[4]arene-based molecular actuators.", Organic Letters, vol. 10, pp. 3575–8, 2008.
Stable π-dimers are formed upon oxidation of the model units of proposed calix[4]arene-based molecular actuators in a solvent of low dielectric constant (CH2Cl2) at room temperature. Evidence from UV-vis, EPR, and DPV experiments is in agreement with π-dimer formation. In addition, π-dimer formation is dependent upon the conformational flexibility of the calix[4]arene hinge.
W. Zhang, Shaikh, A. U., Tsui, E. Y., and Swager, T. M., "Cobalt Porphyrin Functionalized Carbon Nanotubes for Oxygen Reduction", Chemistry of Materials, vol. 21, pp. 3234–3241, 2009.
Carbon nanotube (CNT) compositions were prepared by covalently grafting a Co(II) porphyrin to functionalized multiwalled carbon nanotubes (MWCNTs) via zwitterionic functionalization of the CNT sidewalls followed by an SN2 substitution reaction. The MWCNT-Co-porphyrin compositions, mixed with Nafion, displayed excellent catalytic performance for oxygen reduction in acidic media (pH = 0.0-5.0) at room temperature. With low catalyst loading, the oxygen reduction rates achieved are more than one order of magnitude higher than previously reported values for similar Co-porphyrin catalysts. These results demonstrate the advantages of systems of MWCNTs covalently linked to electrocatalytic molecules. The electrodes are easily fabricated by a drop-casting/vacuum-drying procedure. Rotating disk electrode and rotating ring disk electrode measurements revealed the mechanism to be a direct four-proton and four-electron reduction of oxygen to water. These results demonstrate that new MWCNT electrocatalytic systems are potential substitutes for platinum or other metal-based cathode materials in proton conducting membrane fuel cells.
R. M. Moslin, Andrew, T. L., Kooi, S. E., and Swager, T. M., "Anionic oxidative polymerization: the synthesis of poly(phenylenedicyanovinylene) (PPCN2V).", Journal of the American Chemical Society, vol. 131, pp. 20–1, 2009.
A new polymerization technique that allows for the first-ever synthesis of poly(phenylenedicyanovinylene)s (PPCN2Vs) is described. PPCN2Vs, with their high electron affinities and structural versatility, seem ideally suited to address the need for new n-type polymers. Remarkably, the polymers presented herein become more photoluminescent in the thin film under continuous irradiation.
W. Zhang, Sprafke, J. K., Ma, M., Tsui, E. Y., Sydlik, S. A., Rutledge, G. C., and Swager, T. M., "Modular functionalization of carbon nanotubes and fullerenes.", Journal of the American Chemical Society, vol. 131, pp. 8446–54, 2009.
A series of highly efficient, modular zwitterion-mediated transformations have been developed which enable diverse functionalization of carbon nanotubes (CNTs, both single-walled and multi-walled) and fullerenes. Three functionalization strategies are demonstrated. (1) Trapping the charged zwitterion intermediate with added nucleophiles allows a variety of functional groups to be installed on the fullerenes and carbon nanotubes in a one-pot reaction. (2) Varying the electrophile from dimethyl acetylenedicarboxylate to other disubstituted esters provides CNTs functionalized with chloroethyl, allyl, and propargyl groups, which can further undergo SN2 substitution, thiol addition, or 1,3-dipolar cycloaddition reactions. (3) Postfunctionalization transformations on the cyclopentenones (e.g., demethylation and saponification) of the CNTs lead to demethylated or hydrolyzed products, with high solubility in water (1.2 mg/mL for MWCNTs). CNT aqueous dispersions of the latter derivatives are stable for months and have been successfully utilized in preparation of CNT-poly(ethylene oxide) nanocomposite via electrospinning. Large-scale MWCNT (10 g) functionalization has also been demonstrated to show the scalability of the zwitterion reaction. In total we present a detailed account of diverse CNT functionalization under mild conditions (60 °C, no strong acids/bases, or high pressure) and with high efficiency (1 functional group per 10 carbon atoms for SWCNTs), which expand the utility of these materials.
D. Izuhara and Swager, T. M., "Poly(pyridinium phenylene)s: water-soluble N-type polymers.", Journal of the American Chemical Society, vol. 131, pp. 17724–5, 2009.
Poly(pyridinium phenylene) conjugated polymers are synthesized by a cross-coupling and cyclization sequence. These polyelectrolytes are freely soluble in water and display high degrees of electroactivity. When reduced (n-doped) these materials display in situ conductivities as high as 160 S/cm. The high conductivity is attributed to the planar structure that is enforced by the cyclic structures of the polymer. The electron affinities are compared to PCBM, a C60-based n-type material. We find that these polymers undergo excited state electron transfer reactions with other donor conjugated polymers and hence may find utility in photovoltaic devices.
J. M. W. Chan, Tischler, J. R., Kooi, S. E., Bulović, V., and Swager, T. M., "Synthesis of J-aggregating dibenz[a,j]anthracene-based macrocycles.", Journal of the American Chemical Society, vol. 131, pp. 5659–66, 2009.
Several fluorescent macrocycles based on 1,3-butadiyne-bridged dibenz[a,j]anthracene subunits have been synthesized via a multistep route. The synthetic strategy involved the initial construction of a functionalized dibenz[a,j]anthracene building block, subsequent installation of free alkyne groups on one side of the polycyclic aromatic framework, and a final cyclization based on a modified Glaser coupling under high-dilution conditions. Photophysical studies on three conjugated macrocycles revealed the formation of J-aggregates in thin films, as well as in concentrated solid solutions (polyisobutylene matrix), with peak absorption and emission wavelengths in the range of λ = 460-480 nm. The characteristic red-shifting of the J-aggregate features as compared to the monomer spectra, enhancement in absorption intensities, narrowed linewidths, and minimal Stokes shift values were all observed. We demonstrate that improvements in spectral features can be brought about by annealing the films under a solvent-saturated atmosphere; for the best films, a luminescence quantum efficiency as high as 92% was measured. This class of macrocycles represents a new category of J-aggregates that, due to their high peak oscillator strength and high luminescence efficiency, have the potential to be utilized in a variety of optoelectronic devices.
D. Izuhara and Swager, T. M., "Electroactive Block Copolymer Brushes on Multiwalled Carbon Nanotubes", Macromolecules, vol. 42, pp. 5416–5418, 2009.
Electroactive polymer (EAP) brushes with multiwalled carbon nanotube (MWCNT) backbones were prepared by grafting nonconjugated polymers containing pendant electroactive groups onto the sidewalls of the nanotubes via surface-initiated ring-opening metathesis polymerization. Norbornene-functionalized MWCNTs were prepared from 5-norbornene-2-methanol with pendant N,N,N',N'-tetramethyl-p-phenylenediamine. The ion-conducting monomer with pendant triethyleneoxymethyl ether groups was also synthesized by a similar reaction between norbornene-2,3-endo-dimethanol and triethylene glycol monomethyl ether tosylate. Block copolymer brushes with a shell of ion-conducting polymer surrounding the EAP were also prepared. The resultant electroactive materials exhibited high dispersibility in organic solvents and improved electroactivity and conductance in comparison with the homo- and copolymers that were not attached to MWCNTs.
C. Song and Swager, T. M., "Conducting Polymers Containing peri -Xanthenoxanthenes via Oxidative Cyclization of Binaphthols", Macromolecules, vol. 42, pp. 1472–1475, 2009.
We report an electrochemical transformation of binaphthols to give peri-xanthenoxanthene (PXX) groups in small molecules and within polymer backbones. The monomer 7,7'-bis(2,2'-bithiophen-5-yl)-1,1'-bi-2,2'-naphthol (2) was subjected to electropolymerization, resulting in a segmented conducting polymer that is stable at low potentials. High-potential electrochemical oxidation, however, promotes cyclization of the binaphthol units to give PXX, which transforms the moderately conducting segmented polymer into a highly conducting, fully conjugated polymer. This oxidative cyclization is a highly effective means by which to incorporate a planar polycyclic heteroaromatic structure (i.e., PXX) into thiophene-based conducting polymers. A model compound study conclusively proved the proposed oxidative cyclization scheme.
E. L. Dane, Maly, T., Debelouchina, G. T., Griffin, R. G., and Swager, T. M., "Synthesis of a BDPA-TEMPO biradical.", Organic Letters, vol. 11, pp. 1871–4, 2009.
The synthesis and characterization of a biradical containing a 1,3-bisdiphenylene-2-phenylallyl (BDPA) free radical covalently attached to a 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO) free radical are described. The synthesis of the biradical is a step toward improved polarizing agents for dynamic nuclear polarization (DNP).
J. H. Liao and Swager, T. M., "Quantification of amplified quenching for conjugated polymer microsphere systems.", Langmuir, vol. 23, pp. 112–5, 2007.
A series of nonaggregating carboxylate-functionalized poly(phenylene ethynylene)s (PPEs) have been synthesized for immobilization via electrostatic adsorption onto Eu3+-polystyrene microspheres with a mean diameter of 0.2 μm. This system is shown to constitute a ratiometric system that measures fluorescence quenching with high fidelity. The fluorescence quenching properties of the polymer-coated particles in response to methyl viologen and a naphthyl-functionalized viologen have been investigated in aqueous solutions to study the influence of electrostatic and hydrophobic interactions with pentiptycene-incorporated as well as macrocycle-containing polymers.
A. Satrijo, Kooi, S. E., and Swager, T. M., "Enhanced Luminescence from Emissive Defects in Aggregated Conjugated Polymers.", Macromolecules, vol. 40, pp. 8833–8841, 2007.
Degradation experiments and model studies suggested that the longer lived green fluorescence from an aggregated poly(p-phenylene ethynylene) (PPE) was due to the presence of highly emissive, low-energy, anthryl defect sites rather than the emissive conjugated polymer excimers proposed in a previous report. After elucidating the origin of the green fluorescence, additional anthryl units were purposely incorporated into the polymer to enhance the blue-to-green fluorescence color change that accompanied polymer aggregation. The improved color contrast from this anthryl-doped conjugated polymer led to the development of crude solution-state and solid-state sensors, which, upon exposure to water, exhibited a visually noticeable blue-to-green fluorescence color change.
Y. Yang and Swager, T. M., "Main-Chain Calix[4]arene Elastomers by Ring-Opening Metathesis Polymerization", Macromolecules, vol. 40, pp. 7437–7440, 2007.
tert-Butyl- and adamantyl-substituted alkene-bridged calix[4]arene monomers were synthesized by ring-closing metathesis (RCM). All three possible conformers (cone, partial cone, and 1,3-alternate) were used as comonomers with cyclooctene and norbornene in ring-opening metathesis polymerization (ROMP). The resultant polymers were high-molecular-weight, transparent, and stretchable materials with high calixarene incorporation (up to 25 mol% or 70 wt%) and low glass transition temperatures.
A. Ohira and Swager, T. M., "Ordering of Poly( p -phenylene ethynylene)s in Liquid Crystals", Macromolecules, vol. 40, pp. 19–25, 2007.
Poly(phenylene ethynylene)s were designed and synthesized that are completely soluble in nematic liquid crystal solvents at molecular weights (GPC, relative to polystyrene) of more than 100,000. These materials adopt a chain-extended rod conformation under these conditions, with the polymer chains highly aligned with the liquid crystal director and an enhanced conjugation length as determined by optical spectroscopy. Structure-property relationships are investigated that reveal the role of steric congestion about the polymer main chain in determining the order parameter of the polymers. The polymer order parameters were also found to increase with increasing molecular weight.
T. M. Swager, "Journal Club: Carbon Nanotubes Make Sense.", Nature, vol. 446, p. 5, 2007.
M. S. Taylor and Swager, T. M., "Triptycenediols by rhodium-catalyzed [2+2+2] cycloaddition.", Organic Letters, vol. 9, pp. 3695–7, 2007.
An efficient, modular synthesis of triptycene derivatives is presented, in which the triptycene ring system is constructed from readily available anthraquinone and alkyne starting materials. A rhodium-catalyzed alkyne cyclotrimerization reaction serves as the key step in this new method for the preparation of these useful unnatural products.
Z. Chen and Swager, T. M., "Synthesis and characterization of fluorescent acenequinones as dyes for guest-host liquid crystal displays.", Organic Letters, vol. 9, pp. 997–1000, 2007.
Syntheses and spectroscopic properties of alkoxy-substituted para-acenequinones are reported. These compounds showed excellent alignment in nematic liquid crystals as evidenced by polarized UV-vis absorption and fluorescence measurements.
A. Rose, Tovar, J. D., Yamaguchi, S., Nesterov, E. E., Zhu, Z., and Swager, T. M., "Energy migration in conjugated polymers: the role of molecular structure.", Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 365, pp. 1589–606, 2007.
Conjugated polymers undergo facile exciton diffusion. Different molecular structures were examined to study the role of the excited state lifetimes and molecular conformations on energy transfer. There is a clear indication that extended fluorescence lifetimes give enhanced exciton diffusion as determined by fluorescence depolarization measurements. These results are consistent with a strong electronic coupling or Dexter-type energy transfer as the dominating mechanism. The control of polymer conformations in liquid crystal solvents was also examined and it was determined that more planar conformations gave enhanced energy transfer to emissive low band-gap endgroups.
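For context, the two limiting transfer mechanisms contrasted above are conventionally written as follows (textbook rate expressions, quoted for reference; not part of the original abstract):

```latex
k_{\mathrm{F\ddot{o}rster}} = \frac{1}{\tau_D}\left(\frac{R_0}{r}\right)^{6},
\qquad
k_{\mathrm{Dexter}} \propto J\, e^{-2r/L}
```

where τ_D is the donor excited-state lifetime, R_0 the Förster radius, r the donor-acceptor separation, J the spectral overlap integral, and L an effective orbital radius. The exponential distance dependence of the Dexter (exchange) term is consistent with the strong short-range electronic coupling invoked above.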
K. Kagawa, Qian, P., Tanaka, A., and Swager, T. M., "Templated polypyrrole electro-polymerization: Self-assembled bundles of bilayer membranes of amphiphiles and their actuation behavior", Synthetic Metals, vol. 157, pp. 733–738, 2007.
The electrochemical properties of conducting polymers are highly dependent on their microstructure. The authors report a method to produce specific microstructures of polypyrrole through electropolymerization in the presence of the amphiphile N-{11-(2-hydroxyethyldimethylammonium)undecanoyl}-N,N'-dioctyl-L-glutamate bromide, which forms supramolecular hydrogels with pyrrole in aqueous solution. These hydrogels were used as templates during polypyrrole electropolymerization to give microstructures composed of bundles of bilayer membranes. The highly porous nature of these films resulted in electrochemical properties superior to those of polypyrrole deposited under the same conditions without an amphiphilic template. Analysis of the scan-rate dependence of the cyclic voltammograms reveals that the porous templated films facilitate fast diffusion of dopant ions. The actuation properties were also studied in aqueous solutions containing sodium p-toluenesulfonate electrolyte. The strains displayed by the templated polypyrrole films were twice those of films synthesized without a template.
J. Bouffard, Eaton, R. F., Müller, P., and Swager, T. M., "Iptycene-derived pyridazines and phthalazines.", The Journal of Organic Chemistry, vol. 72, pp. 10166–80, 2007.
The synthesis of new heterocyclic oligo(phenylene) analogues based on soluble, nonaggregating 1,2-diazines is reported. Improved palladium-catalyzed reductive coupling methods were developed to allow for the preparation of large quantities of iptycene-derived bipyridazines and biphthalazines, and the controlled synthesis of well-defined oligomers up to sexipyridazine. Crystallographic, spectroscopic, and computational evidence indicates that in these analogues, hindrance at the ortho position is relaxed relative to poly(phenylenes). The resulting building blocks are promising for incorporation into conjugated electronic materials and as new iptycene-derived ligands for transition metals.
N. T. Tsui, Torun, L., Pate, B. D., Paraskos, A. J., Swager, T. M., and Thomas, E. L., "Molecular Barbed Wire: Threading and Interlocking for the Mechanical Reinforcement of Polymers", Advanced Functional Materials, vol. 17, pp. 1595–1602, 2007.
The incorporation of pendant iptycene units into polyesters creates a novel polymer-chain contour resembling "molecular barbed wire." These units possess a unique structural property called internal molecular free volume (IMFV) and have been shown to induce steric interactions between polymer chains through the minimization of the IMFV. This process creates a sterically interconnected polymer-chain network with high ductility arising from two new mechanisms: molecular threading and molecular interlocking. The ability of these mechanisms to enhance the mechanical properties of polyesters is robust across concentration and processing conditions. The size, shape, and concentration of the pendant units affect the mechanical behavior, and the results indicate that larger units do not necessarily produce superior tensile properties. However, the molecular-barbed-wire architecture consistently produces enhanced mechanical properties compared to the reference polyester. The particular stress-strain response can be tailored by minute changes to the periphery of the iptycene unit.
M. S. Taylor and Swager, T. M., "Poly(anthrylenebutadiynylene)s: precursor-based synthesis and band-gap tuning.", Angewandte Chemie, International Edition, vol. 46, pp. 8480–3, 2007.
Skipping the monomer: A highly efficient reductive aromatization reaction transforms an unconjugated precursor polymer into a high-molecular-weight, butadiyne-linked anthracene homopolymer. Photophysical and electrochemical analyses reveal that some of these new materials display remarkable stability in both neutral and doped states and have unusually low intrinsic band gaps.
K. Sugiyasu and Swager, T. M., "Conducting-Polymer-Based Chemical Sensors: Transduction Mechanisms", Bulletin of the Chemical Society of Japan, vol. 80, pp. 2074–2083, 2007.
A review. Conducting organic polymers, especially polythiophene and substituted polythiophenes, are discussed for use as sensor materials. The charge transport mechanisms and the responses toward analytes based on functional groups and charge interactions are outlined. Sensor systems based on triarylmethyl carbocations are described. The design of specific molecular recognition centers to impart selectivity and the nature of the conducting polymer are also discussed.
S. W. Thomas, Joly, G. D., and Swager, T. M., "Chemical sensors based on amplifying fluorescent conjugated polymers.", Chemical Reviews, vol. 107, pp. 1339–86, 2007.
A review restricted to purely fluorescence-based methods using conjugated polymers. Amplification, small-ion sensing using amplifying fluorescent conjugated polymers (AFPs), AFPs for the detection of explosives, conjugated polyelectrolytes as biosensors, and AFPs for the detection of small biomolecules, proteins, and DNA are discussed.
A. Satrijo and Swager, T. M., "Anthryl-doped conjugated polyelectrolytes as aggregation-based sensors for nonquenching multicationic analytes.", Journal of the American Chemical Society, vol. 129, pp. 16020–8, 2007.
The fluorescence-based detection of nonquenching, multicationic small molecules has been demonstrated using a blue-emitting, polyanionic poly(p-phenylene ethynylene) (PPE) doped with green-emitting exciton traps (anthryl units). Multicationic amines (spermine, spermidine, and neomycin) were found to effectively induce the formation of tightly associated aggregates between the polymer chains in solution. This analyte-induced aggregation, which was accompanied by enhanced exciton migration in the PPE, ultimately led to a visually noticeable blue-to-green fluorescence color change in the solution. The aggregation-based sensor exhibited poor sensitivity toward dicationic and monocationic amines, demonstrating that a conjugated polyelectrolyte sensor relying on nonspecific, electrostatic interactions may still attain a certain level of selectivity.
P. H. J. Kouwer and Swager, T. M., "Synthesis and mesomorphic properties of rigid-core ionic liquid crystals.", Journal of the American Chemical Society, vol. 129, pp. 14042–52, 2007.
Ionic liquid crystals combine the unique solvent properties of ionic liquids with self-organization found for liquid crystals. We report a detailed analysis of the structure-property relationship of a series of new imidazolium-based liquid crystals with an extended aromatic core. Investigated parameters include length and nature of the tails, the length of the rigid core, the lateral substitution pattern, and the nature of the counterion. Depending on the molecular structure, two mesophases were observed: a bilayered SmA2 phase and the more common monolayered SmA phase, both strongly interdigitated. Most materials show mesophases stable to high temperatures. For some cases, crystallization could be suppressed, and room-temperature liquid crystalline phases were obtained. The mesomorphic properties of several mixtures of ionic liquid crystals were investigated. Many mixtures showed full miscibility and ideal mixing behavior; however, in some instances we observed, surprisingly, complete demixing of the component SmA phases. The ionic liquid crystals and mixtures presented have potential applications, due to their low melting temperatures, wide temperature ranges, and stability with extra ion-doping.
W. Zhang and Swager, T. M., "Functionalization of single-walled carbon nanotubes and fullerenes via a dimethyl acetylenedicarboxylate-4-dimethylaminopyridine zwitterion approach.", Journal of the American Chemical Society, vol. 129, pp. 7714–5, 2007.
Single-walled carbon nanotubes (SWCNTs) and C60 are functionalized by reactions with dimethyl acetylenedicarboxylate (DMAD) and alcohols mediated by 4-(dimethylamino)pyridine (DMAP) to give adducts such as the oxocyclopentenofullerene I. The reactions are proposed to occur by addition of zwitterions generated from DMAP and DMAD to C60 or SWCNTs with concomitant ring closure, followed by substitution of the (dimethylamino)pyridinium moiety of the intermediate adducts with alcohols. Reaction of SWCNTs with DMAP, DMAD, and 1-dodecanol yields functionalized carbon nanotubes with significantly greater solubilities in organic solvents than unfunctionalized SWCNTs. The structure of I•CS2 was determined by X-ray crystallography.
T. L. Andrew and Swager, T. M., "A fluorescence turn-on mechanism to detect high explosives RDX and PETN.", Journal of the American Chemical Society, vol. 129, pp. 7254–5, 2007.
A fluorescent chemosensor to detect saturated nitramine and nitrate ester explosives was devised based on a photochemical reduction reaction. 10-Methyl-9,10-dihydroacridine (AcrH2) was found to transfer a hydride ion equivalent to the high explosives RDX and PETN upon irradiation at 313 nm in degassed acetonitrile solutions. Mechanistic photophysical studies indicate that the photoreduction of RDX proceeds via a two-step electron-hydrogen atom transfer reaction, whereas PETN photoreduction proceeds via a three-step electron-proton-electron transfer sequence. A zinc analog was synthesized and found to display an 80- or 25-fold increase in 480 nm emission intensity upon reaction with RDX or PETN, respectively. Moreover, the Zn analog was found to be unresponsive to TNT and other common contaminants, in addition to being photostable under ambient conditions. On the basis of these characteristics, a powerful chemosensor that displays a direct fluorescent response to either RDX or PETN is reported.
T. - H. Kim, Kim, I., Yoo, M., and Swager, T. M., "Development of Highly Selective Fluorescent Chemosensors for Fluoride Ion", Journal of the Korean Chemical Society, vol. 51, pp. 258–264, 2007.
Novel fluoride sensory systems were developed. The previously developed method of fluoride-induced lactonization to give fluorescent molecules is detailed, and a newly developed fluoride-induced aromatic cyclization scheme is introduced. Based on strategies exploiting the specific affinity of fluoride for silicon, the authors' systems are highly selective for fluoride ion. Incorporation of the developed sensor into a conjugated polymer successfully enhanced its sensitivity to fluoride ion.
H. - G. Choi, Amara, J. P., Swager, T. M., and Jensen, K. F., "Directed growth of poly(isobenzofuran) films by chemical vapor deposition on patterned self-assembled monolayers as templates.", Langmuir, vol. 23, pp. 2483–91, 2007.
This paper describes a method to direct the formation of microstructures of poly(isobenzofuran) (PIBF) by chemical vapor deposition (CVD) on chemically patterned, reactive, self-assembled monolayers (SAMs) prepared on gold substrates. We examined the growth dependence of PIBF by deposition onto several different SAMs, each presenting different surface functional groups, including a carboxylic acid, a phenol, an alcohol, an amine, and a methyl group. Interferometry, Fourier transform infrared (FT-IR) spectroscopy, X-ray photoelectron spectroscopy (XPS), gel permeation chromatography (GPC), and optical microscopy were used to characterize the PIBF films grown on the various SAMs. Based on the kinetic and spectroscopic analyses, we suggest that the growth of PIBF is surface-dependent and may follow a cationic polymerization mechanism. Using the cationic polymerization mechanism of PIBF growth, we also prepared patterned SAMs of 11-mercapto-1-undecanol (MUO) or 11-mercaptoundecanoic acid (MUA) by microcontact printing (μCP) on gold substrates as templates to direct the growth of the PIBF. The directed growth and the formation of microstructures of PIBF with lateral dimensions of 6 μm were investigated using atomic force microscopy (AFM). The average thicknesses of the PIBF microstructures grown on the MUO and MUA patterns were 400 +/- 40 nm and 490 +/- 40 nm, respectively. SAMs patterned with carboxylic acid salts (Cu2+, Fe2+, or Ag+) derived from MUA led to increases in the average thickness of the PIBF microstructures of 10%, 12%, or 27%, respectively, relative to that of control templates. The growth dependence of PIBF on the various carboxylic acid salts was also investigated using experimental observations of the growth kinetics and XPS analyses of the relative amounts of metal ions present on the template surfaces.
S. W. Thomas, Venkatesan, K., Müller, P., and Swager, T. M., "Dark-field oxidative addition-based chemosensing: new bis-cyclometalated Pt(II) complexes and phosphorescent detection of cyanogen halides.", Journal of the American Chemical Society, vol. 128, pp. 16641–8, 2006.
Heavy metal complexes that are phosphorescent at room temperature are becoming increasingly important in materials chemistry, principally due to their use in phosphorescent organic light-emitting devices (OLEDs). Their use in optical sensory schemes, however, has not been heavily explored. Homoleptic bis-cyclometalated Pt(II) complexes are known to undergo oxidative addition with appropriate electrophiles (principally alkyl halides) under either thermal or photochemical activation. We have applied this general reaction scheme to the development of a phosphorescence-based sensing system for cyanogen halides. To carry out structure-property relationship studies, a series of previously unreported Pt(II) complexes was prepared. Most of the complexes (excluding those that incorporated substituents on the ligands that forced steric crowding in the square plane) were strongly orange-red phosphorescent (Φ = 0.2-0.3) in room-temperature oxygen-free solution. These sterically demanding ligands also accelerated the addition of cyanogen bromide to these complexes but slowed the addition of methyl iodide, indicating that the oxidative addition mechanisms for these two electrophiles are different. The lack of a solvent-polarity effect on the addition of BrCN suggests a radical mechanism. Oxidative addition of BrCN to the metal complexes in solution or dispersed in poly(methyl methacrylate) gave blue-shifted emissive Pt(IV) complexes. The blue-shifted products enable a dark-field sensing scheme that is in sharp contrast to energy transfer-based sensing schemes, which have limited signal-to-noise because lower-energy vibronic bands of the energy donor can overlap with the emission of the acceptor.
J. Hoogboom and Swager, T. M., "Increased alignment of electronic polymers in liquid crystals via hydrogen bonding extension.", Journal of the American Chemical Society, vol. 128, pp. 15058–9, 2006.
Crucial for the development of enhanced electrooptic materials is the construction of highly anisotropic materials. Nematic liquid crystals are able to control the chain conformation and alignment of poly(phenylene ethynylene)s (PPEs), producing electronic polymers with chain-extended planar conformations for improved transport properties. Here, we show that the dichroic ratio, and hence polymer alignment, increases dramatically when interpolymer interactions are introduced by end capping the PPE with hydrogen bonding groups. This increased order can be readily turned off by the introduction of a competing monofunctionalized hydrogen bonding compound. The formation of hydrogen bonds between the polymers results in the formation of gels and elastomers which may be of interest for future applications.
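As a point of reference (standard guest-host liquid crystal relations, not taken from this paper): the dichroic ratio D measured by polarized absorption maps onto the orientational order parameter S of a chromophore whose transition dipole lies along the chain axis via

```latex
D = \frac{A_{\parallel}}{A_{\perp}},
\qquad
S = \frac{D - 1}{D + 2}
```

so an increase in dichroic ratio, as reported here upon hydrogen-bonding end capping, corresponds directly to a higher polymer order parameter.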
A. J. McNeil, Müller, P., Whitten, J. E., and Swager, T. M., "Conjugated polymers in an arene sandwich.", Journal of the American Chemical Society, vol. 128, pp. 12426–7, 2006.
A series of poly(p-arylene butadiynylene)s containing zero, one, and two co-facial pi-pi interactions per repeat unit were synthesized and characterized. A surprisingly selective and high-yielding Diels-Alder cycloaddition of anthracene and nonsymmetric, sterically hindered anhydrides proved essential to generating the cofacial arene-containing monomers. Single-crystal X-ray structures display nearly parallel cofacial arenes that are within the van der Waals contact distances. The precursor molecules with cofacial arenes undergo reversible one- and two-electron oxidations to the radical cation and dication in CH2Cl2. The anhydrides were converted to N-alkyl imides to increase the solubility. High-molecular weight poly(p-arylene butadiynylene)s were prepared via Pd/Cu(I)/benzoquinone oxidative coupling of the diacetylene monomers. The resulting polymers are highly emissive in solution and thin films. The ionization potentials were measured using ultraviolet photoelectron spectroscopy with thin films. Last, fluorescence measurements of polymer thin films during continuous irradiation indicate that the most hindered polymer is more resistant to photobleaching.
C. Song, Hu, K. - N., Joo, C. - G., Swager, T. M., and Griffin, R. G., "TOTAPOL: a biradical polarizing agent for dynamic nuclear polarization experiments in aqueous media.", Journal of the American Chemical Society, vol. 128, pp. 11385–90, 2006.
In a previous publication, we described the use of biradicals, in that case two TEMPO molecules tethered by an ethylene glycol chain of variable length, as polarizing agents for microwave-driven dynamic nuclear polarization (DNP) experiments. The use of biradicals in place of monomeric paramagnetic centers such as TEMPO yields enhancements that are a factor of approximately 4 larger (ε ≈ 175 at 5 T and 90 K), while the concentration of the polarizing agent is concurrently a factor of 4 smaller (10 mM electron spins), reducing the residual electron-nuclear dipole broadening. In this paper we describe the synthesis and characterization by EPR and DNP/NMR of an improved polarizing agent, 1-(TEMPO-4-oxy)-3-(TEMPO-4-amino)propan-2-ol (TOTAPOL). Under the same experimental conditions and using 2.5 mm magic angle rotors, this new biradical yields larger enhancements (ε ≈ 290) at lower concentrations (6 mM electron spins) and has the additional important property that it is compatible with experiments in aqueous media, including the salt solutions commonly used in the study of proteins and nucleic acids.
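For reference, the DNP enhancement ε quoted above is defined in the standard way as the ratio of the NMR signal with and without microwave irradiation (a general definition, not specific to this paper):

```latex
\varepsilon = \frac{S_{\mu\mathrm{w\ on}}}{S_{\mu\mathrm{w\ off}}}
```

so ε ≈ 290 means the microwave-driven signal is roughly 290 times the thermal-equilibrium signal.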
A. Satrijo, Meskers, S. C. J., and Swager, T. M., "Probing a conjugated polymer's transfer of organization-dependent properties from solutions to films.", Journal of the American Chemical Society, vol. 128, pp. 9030–1, 2006.
The dynamic transfer of a conjugated polymer's organization-dependent properties from the solution state to the solid film state was probed by circularly polarized luminescence (CPL) and circular dichroism (CD) spectroscopy. Different supramolecular organizations within films and aggregate solutions of a chiral poly(p-phenylenevinylene) derivative led to opposite CPL and CD spectra. These dramatic property differences were controlled by regulating the polymer's self-assembly through solvent selection and film annealing. Therefore, different processing conditions can greatly affect the functional properties of conjugated polymer films employed in various optoelectronic applications.
J. J. Reczek, Villazor, K. R., Lynch, V., Swager, T. M., and Iverson, B. L., "Tunable columnar mesophases utilizing C2 symmetric aromatic donor-acceptor complexes.", Journal of the American Chemical Society, vol. 128, pp. 7995–8002, 2006.
Derivatives of relatively electron rich 1,5-dialkoxynaphthalene (Dan) donors and relatively electron deficient 1,4,5,8-naphthalenetetracarboxylic diimide (Ndi) acceptors have been exploited in the folding and self-assembly of a variety of complex molecular systems in solution. Here, we report the use of Dan and Ndi derivatives to direct the assembly of extended columns with alternating face-centered stacked structure in the solid state. A variety of 1:1 Dan:Ndi mixtures produced mesophases that were found to be stable over temperature ranges extending up to 110 °C. Analysis of these mesophases indicates mixtures with soft/plastic crystal phases and a few mixtures with the thermodynamic properties of true liquid crystals, all composed of alternating donor-acceptor columns within. Importantly, a correspondence was found between the clearing and crystallization points of the mesophase mixtures and the melting/clearing points of the component Ndi and Dan units, respectively. This correspondence enables the predictable tuning of mesophase transition temperatures. The study of sterically hindered derivatives led to a set of mixtures in which a dramatic and sudden color change (deep red to yellow) was observed upon crystallization of the mesophase, due to phase separation of the component donor and acceptor units.
P. D. Byrne, Müller, P., and Swager, T. M., "Conducting metallopolymers based on azaferrocene.", Langmuir, vol. 22, pp. 10596–604, 2006.
A series of 2,5-thiophene-substituted 1',2',3',4',5'-pentamethylazaferrocene complexes were synthesized and electropolymerized to produce polymers with fully pi-conjugated backbones. The length and hence oxidation potential of the conjugated linker (the thiophene fragments) between the metal centers were varied to understand the influence of the metal-metal interactions on the overall electroactivity of the resulting polymer. These complexes were electrochemically polymerized, and the resulting polymers were characterized by cyclic voltammetry, in situ conductivity, and spectroelectrochemistry measurements. The iron-centered oxidations significantly increased the conductivity of the polymer. The results reveal that shorter conjugated linkers cause the onset of conductivity to occur at lower potentials. This effect implies that a superexchange mechanism is likely operative in the charge migration of these polymers.
E. Ishow, Bouffard, J., Kim, Y., and Swager, T. M., "Anthryl-Based Poly(phenylene ethynylene)s: Tuning Optical Properties with Diels-Alder Reactions", Macromolecules, vol. 39, pp. 7854–7858, 2006.
Soluble anthryl-based conjugated poly(phenylene ethynylene)s (PPEs) have been synthesized using palladium-catalyzed Sonogashira-Hagihara cross-coupling polymerization reactions. Molecular weights up to 3.5 × 10⁴ g·mol⁻¹ were obtained, making the polymers suitable for spectroscopic solution characterization and thin-film processing. The selective reactivity of these polymers as multidienes has been successfully demonstrated with strong dienophiles. Diels-Alder reactions proceed cleanly to completion with unhindered dienophiles such as N-alkylated maleimide derivatives. TGA analysis revealed thermal retro-Diels-Alder reactions at modest temperatures around 210 °C. Compared with their parent polymers, the cycloadduct polymers exhibited dramatic hypsochromic shifts of their emission and absorption maxima of up to 80 nm, along with a considerable quantum yield enhancement. These anthryl-based polymers are attractive as reactive conjugated materials whose optical properties can easily be tuned with quantitative Diels-Alder reactions.
G. D. Joly, Geiger, L., Kooi, S. E., and Swager, T. M., "Highly Effective Water-Soluble Fluorescence Quenchers of Conjugated Polymer Thin Films in Aqueous Environments", Macromolecules, vol. 39, pp. 7175–7177, 2006.
This paper reports water-soluble viologen quenchers that bear hydrophobic substituents and describes their effectiveness as quenchers of emission from thin films of hydrophobic, water-insoluble poly(p-phenylene ethynylene)s.
J. Zheng and Swager, T. M., "Probing Biological Recognition Using Conjugated Polymers at the Air−Water Interface", Macromolecules, vol. 39, pp. 6781–6783, 2006.
The effect of linker length on biological recognition at the air-water interface was studied using energy transfer between a fluorescent conjugated polymer and a dye-labeled protein. A longer linker provided greater binding of the protein, as evidenced by dramatically increased energy transfer. This may be due to the increased hydrophilicity of the side chain, which allowed the biotin to better access the streptavidin located in the aqueous subphase. Subtle changes in polymer structure can thus have important consequences for analyte recognition.
K. Sugiyasu, Song, C., and Swager, T. M., "Aromaticity in Tropone-Containing Polythiophene", Macromolecules, vol. 39, pp. 5598–5600, 2006.
Tropone-containing thiophene monomers were prepared and electrochemically polymerized under ambient conditions. The resulting polymers were studied at various concentrations of TFA by cyclic voltammetry, UV-vis-NIR spectroscopy, and conductivity measurements. The conductivity of the tropone-containing polythiophene can be switched reversibly by a protonation/deprotonation process; this conductivity-switching mechanism is an alternative to that of conventional proton-dopable polymers such as polyaniline.
J. P. Amara and Swager, T. M., "Conjugated Polymers with Geminal Trifluoromethyl Substituents Derived from Hexafluoroacetone", Macromolecules, vol. 39, pp. 5753–5759, 2006.
Convenient syntheses of 9,9-bis(trifluoromethyl)fluorene and 6,6,12,12-tetrakis(trifluoromethyl)indenofluorene are reported. The monomers were readily prepared in two steps from simple haloaromatics and hexafluoroacetone, which serves as the source of the geminal trifluoromethyl groups. Iodination yields polymerizable monomers that were used to prepare several conjugated polymers, whose photophysical properties are reported. The polymers demonstrate slightly blue-shifted UV-vis absorption and fluorescence emission spectra and high solution and solid-state fluorescence quantum efficiencies. Polymer photobleaching experiments reveal that poly(fluorene)s with geminal trifluoromethyl substituents show greater photooxidative stability than poly(9,9-dioctylfluorene).
Y. Kim and Swager, T. M., "Sensory Polymers for Electron-Rich Analytes of Biological Interest", Macromolecules, vol. 39, pp. 5177–5179, 2006.
A water-soluble photooxidizing amplifying fluorescent polymer (AFP) is reported, along with its performance in the photoinduced electron transfer (PET) based detection of amino acids, neurotransmitters, and proteins possessing electron-donating aromatic moieties in aqueous buffer. The results show that a water-soluble photooxidizing AFP can serve as a PET-based sensor for such analytes and that the quenching efficiency depends strongly on the electrostatic interaction between the anionic polymer and the quencher, as well as on hydrophobic and electron-transfer interactions between the polymer chains and the quencher.
H. - G. Choi, Amara, J. P., Swager, T. M., and Jensen, K. F., "Synthesis and Characterization of Poly(isobenzofuran) Films by Chemical Vapor Deposition", Macromolecules, vol. 39, pp. 4400–4410, 2006.
This paper describes the synthesis and properties of poly(isobenzofuran) (PIBF) films prepared by a thermal chemical vapor deposition (CVD) process. The synthesized precursor monomer, 1,2,3,4-tetrahydro-1,4-epoxynaphthalene, is pyrolyzed by flowing it through a tube furnace at temperatures of 600-750 °C. A reactive intermediate, the isobenzofuran (IBF) monomer, is produced by this pyrolysis and deposited onto a silicon substrate, where it polymerizes to form a thin coating of PIBF on the surface. The chemical structure and composition of the PIBF films are supported by NMR spectroscopy, Fourier transform IR spectroscopy (FT-IR), and XPS. The weight-average molecular weights of the PIBF films range from 5500 to 9400, depending on the deposition (stage), pyrolysis (furnace), and vaporization (source) temperatures. With variation of the deposition temperature (5, 10, 15, 20, 25 °C) and pyrolysis temperature (600, 650, 700, 750 °C), significant changes are observed in the deposition (growth) rate, molecular weight, and morphology, while the chemical structure of the PIBF films remains the same as probed by FT-IR and XPS analysis. On the other hand, variation of the vaporization temperature (40, 45, 50, 55, 60 °C) leads to significant changes in the chemical structure as well as in the deposition rate, molecular weight, film uniformity, and morphology. By exploring several operating conditions, we obtained optimal values of the deposition temperature (10 °C), pyrolysis temperature (750 °C), and vaporization temperature (60 °C) that provide good film properties as well as fast film growth. To investigate the possible role of cationic initiation in IBF polymerization, PIBF films were deposited on several surfaces tailored with self-assembled monolayers (SAMs) of thiols bearing functional groups of different acidities, including a carboxylic acid (-COOH), a phenol (-PhOH), an alcohol (-OH), an amine (-NH2), and a methyl group (-CH3). We observed the fastest growth of PIBF (k = 2.5 Å/s) on the carboxylic acid-terminated surfaces and the slowest growth on the methyl-terminated surfaces (k = 0.02 Å/s). On the basis of these experimental observations, we propose a growth mechanism for the PIBF films formed by the CVD process.
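A quick arithmetic aside on the rates quoted above (illustrative only; the 100 nm target thickness is a hypothetical example, not from the paper): assuming linear growth, thickness = k·t, the surface selectivity and deposition time follow directly:

```latex
\frac{k_{\mathrm{COOH}}}{k_{\mathrm{CH_3}}} = \frac{2.5\ \mathrm{\AA/s}}{0.02\ \mathrm{\AA/s}} = 125,
\qquad
t_{100\,\mathrm{nm}} = \frac{1000\ \mathrm{\AA}}{2.5\ \mathrm{\AA/s}} = 400\ \mathrm{s}
```

i.e., carboxylic acid-terminated surfaces grow film 125 times faster than methyl-terminated ones, and a 100 nm film would deposit in under 7 minutes at the fast rate.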
Z. Chen, Amara, J. P., Thomas, S. W., and Swager, T. M., "Synthesis of a Novel Poly(iptycene) Ladder Polymer", Macromolecules, vol. 39, pp. 3202–3209, 2006.
A self-polymerizable AB-type monomer for Diels-Alder (D-A) polymerization was prepared, and its polymerization was carried out in the melt phase and at high pressure in solution. The former method generated only low-molecular-weight polymer, but the latter offered an efficient polymerization with increased molecular weight, owing to the effect of high pressure on reactions with a negative activation volume. A pyridinium p-toluenesulfonate-catalyzed dehydration reaction of the D-A polymer led to a novel aromatic ladder polymer, poly(iptycene), which is soluble in common organic solvents and stable up to 350 °C. The NMR and UV-vis spectra of these polymers match the spectra of their corresponding model compounds, the synthesis of which is also reported.
N. T. Tsui, Paraskos, A. J., Torun, L., Swager, T. M., and Thomas, E. L., "Minimization of Internal Molecular Free Volume: A Mechanism for the Simultaneous Enhancement of Polymer Stiffness, Strength, and Ductility", Macromolecules, vol. 39, pp. 3350–3358, 2006.
The preparation and mechanical/structural characteristics of a polyester containing 21% triptycene monomer were studied and compared with a reference polyester homolog in which benzene replaces the triptycene residue. Solvent-cast films and tension heat-treated (THT) films were studied by tensile deformation and wide-angle X-ray scattering. The addition of triptycene units increases the Tg and, contrary to what is typically observed, also increases the ductility of film samples. In comparison to the solvent-cast non-triptycene polyester films, the triptycene polyester films displayed a nearly 3-fold increase in Young's modulus, an approximately 3-fold increase in strength, and a more than 20-fold increase in strain to failure. THT films of the triptycene polyester exhibited a modulus more than 7 times that of the non-triptycene as-cast polyester and a strength greater than 14 times higher for roughly the same strain to failure. This unusually beneficial mechanical behavior is primarily attributed to the ability of individual triptycene units to express what is termed internal molecular free volume (IMFV). The triptycene polymers adopt favorable conformations that minimize the IMFV, and the resultant assembly introduces two mechanisms for the enhancement of tensile mechanical properties: molecular threading and molecular interlocking.
G. C. Bailey and Swager, T. M., "Masked Michael Acceptors in Poly(phenyleneethynylene)s for Facile Conjugation", Macromolecules, vol. 39, pp. 2815–2818, 2006.
Poly(phenyleneethynylene)s (PPEs) capable of reacting with thiol-containing molecules have been designed and synthesized with number-average molecular weights ranging from 8000 to 11,000. The PPEs contain pendant 3a,4,7,7a-tetrahydro-4,7-epoxy-1H-isoindole-1,3(2H)-dione groups that can participate in conjugate addition and Diels-Alder chemistry after thermal activation. If the maleimide group is left unmasked during the palladium-catalyzed cross-coupling polymerization, side reactions occur and only short-chain oligomers are produced (Mn < 3000, DP ~ 4). Reversible Diels-Alder reactions between the maleimides and furan were determined to be a very effective way to produce a masked Michael acceptor that can be unveiled after polymerization under relatively mild thermal conditions. Cycloreversion to the maleimide has been monitored by thermogravimetric analysis (TGA), TGA-MS, IR, and NMR. A PPE containing the masked maleimide unit has been modified with a thiolated carboxy-X-rhodamine (ROX) dye, and the resulting absorbance and fluorescence spectra as well as gel permeation chromatograms (GPC) are presented.
Y. Yang and Swager, T. M., "Main-Chain Calixarene Polymers: Conformational Effects on Polymerization", Macromolecules, vol. 39, pp. 2013–2015, 2006.
We have synthesized high-molecular-weight main-chain calixarene homopolymers via linkages on the lower rims and found that the calixarene conformation has a profound influence on the polymerization. Our synthetic methodology can be applied to the polymerization of other sterically demanding and conformationally flexible monomers. Main-chain calixarene polymers linked through the lower rims present new opportunities to create efficient separations and materials for sensing and actuation, as a result of their dynamic receptor properties.
Z. Chen, Müller, P., and Swager, T. M., "Syntheses of soluble, pi-stacking tetracene derivatives.", Organic Letters, vol. 8, pp. 273–6, 2006.
The syntheses of a series of fluorine- and alkyl/alkoxy-functionalized tetracenes are reported. These functional groups are found to improve the solubility in common organic solvents and tune molecular arrangement in solids. The crystal packing, electrochemical behavior, and UV-vis absorbance spectroscopy of these materials are discussed.
P. D. Byrne, Lee, D., Müller, P., and Swager, T. M., "Polymerization of thiophene containing cyclobutadiene Co cyclopentadiene complexes", Synthetic Metals, vol. 156, pp. 784–791, 2006.
To understand the charge transport ability of metal-coordinated cyclobutadiene, a series of cyclobutadiene cobalt cyclopentadiene (CbCoCp) complexes containing electrochemically polymerizable 3,4-ethylenedioxythiophene units were synthesized. The complexes were electrochemically polymerized, and the resulting polythiophenes were characterized by cyclic voltammetry, in situ conductivity, and UV-vis spectroelectrochemistry. Studies of several derivatives of the CbCoCp complexes and a model system suggested that detrimental side reactions occur if the oxidation of the organic fragment lies above the Co(I/II) redox couple of the CbCoCp complex; side reactions did not occur if the oxidation potential of the organic fragment was below that of the metal.
S. W. Thomas and Swager, T. M., "Trace Hydrazine Detection with Fluorescent Conjugated Polymers: A Turn-On Sensory Mechanism", Advanced Materials, vol. 18, pp. 1047–1050, 2006.
Exposure of conjugated-polymer films to hydrazine vapor reduces the small number of oxidized trap sites that quench excitons. This scheme is effective for trace hydrazine detection, with a detection limit below 1 ppm. The introduction of an oxidizing dopant quenches the polymer emission and permits detection of hydrazine with larger on/off ratios.
H. - G. Choi, Amara, J. P., Martin, T. P., Gleason, K. K., Swager, T. M., and Jensen, K. F., "Structure and Morphology of Poly(isobenzofuran) Films Grown by Hot-Filament Chemical Vapor Deposition", Chemistry of Materials, vol. 18, pp. 6339–6344, 2006.
We describe the hot-filament chemical vapor deposition of poly(isobenzofuran) (PIBF) films and characterize their chemical structure and surface morphology. The precursor monomer, 1,2,3,4-tetrahydro-1,4-epoxynaphthalene, is pyrolyzed by flowing it over an array of hot filaments held at three different temperatures (680, 738, and 800 °C). The intermediate produced, isobenzofuran, is deposited onto a silicon substrate as thin films of PIBF. Fourier transform IR spectroscopy and XPS revealed that the films prepared at 800 °C possess a chemical structure and composition different from those prepared at lower filament temperatures (680 and 738 °C). Increasing the filament temperature also leads to the formation of defect domains in the polymer films. By atomic force microscopy, we observe a decrease in the surface roughness and in the average size of polymer grains in PIBF domains as the filament temperature is increased. Defect domains exhibit a rougher surface as well as larger polymer grains than the PIBF domains. The spectroscopic and microscopic results suggest that growth of the polymer in the defect domains proceeds by a mechanism different from that in the PIBF domains.
B. J. Holliday, Stanford, T. B., and Swager, T. M., "Chemoresistive Gas-Phase Nitric Oxide Sensing with Cobalt-Containing Conducting Metallopolymers", Chemistry of Materials, vol. 18, pp. 5649–5651, 2006.
Nitric oxide (NO) has been implicated in a wide range of biological processes, including cellular signaling, coronary artery dilation, immune system response, and neurotransmission. With this in mind, the development of direct, sensitive, and selective sensing schemes for NO has attracted great interest in recent years. The development of a selective, ppm-level, gas-phase NO detection system based on chemoresistive changes in a cobalt-containing metallopolymer film device is discussed. These films are easily prepared and electrically interfaced, and they showed NOx detection limits below 1 ppm.
S. Rifai, Breen, C. A., Solis, D. J., and Swager, T. M., "Facile in Situ Silver Nanoparticle Formation in Insulating Porous Polymer Matrices", Chemistry of Materials, vol. 18, pp. 21–25, 2006.
The in situ formation of well-dispersed silver nanoparticles in an insulating polymer matrix is reported. Poly(aryl ether)s (PAEs) incorporating an oxadiazole moiety were synthesized and demonstrated selective interaction with silver ions. Polymer-ion thin films were easily spin-cast and exposed to hydrazine vapors to induce metal nucleation. Introducing a triptycene monomer into the polymer backbone resulted in a highly porous matrix and prevented nanoparticle aggregation. This provides a generalized approach to the facile preparation of thin films with embedded, well-dispersed metal nanoparticles.
T. M. Swager, "One-Dimensional Metals. Conjugated Polymers, Organic Crystals, Carbon Nanotubes. By Sigmar Roth and David Carroll.", Angewandte Chemie, International Edition, vol. 44, p. 2473, 2005.
Y. Kim and Swager, T. M., "Ultra-photostable n-type PPVs.", Chemical Communications, pp. 372–4, 2005.
Poly(p-phenylenevinylene)s containing trifluoromethyl substituted aromatic rings (CF3-PPVs) exhibit high photooxidative stability to give robust materials suitable for demanding applications.
B. J. Holliday and Swager, T. M., "Conducting metallopolymers: the roles of molecular architecture and redox matching.", Chemical Communications, pp. 23–36, 2005.
Recent reports of highly conductive metallopolymers are reviewed. This literature is classified into one of two categories (inner or outer sphere) depending on the mode of interaction between the transition metal centers with each other and the conducting polymer backbone. The critical nature of charge transport is discussed in the context of the relative energies of the organic polymer-based and metal-centered redox processes. Also included are recent advances in the development of functional materials based on metal-containing conducting polymers.
S. W. Thomas, Amara, J. P., Bjork, R. E., and Swager, T. M., "Amplifying fluorescent polymer sensors for the explosives taggant 2,3-dimethyl-2,3-dinitrobutane (DMNB).", Chemical Communications, pp. 4572–4, 2005.
Structural and electronic effects on the efficiency of DMNB detection with fluorescent conjugated polymers are described.
P. H. Kwan and Swager, T. M., "Insulated conducting polymers: manipulating charge transport using supramolecular complexes.", Chemical Communications, pp. 5211–5213, 2005.
Rotaxane structures that provide functional insulation to conducting polymers can produce more than a 4000-fold reduction in conductivity relative to non-rotaxane structures; with electronically active copper ions in the rotaxane units, the conductivity increases by more than 80 times relative to the metal-free analog.
L. Gutman, Cao, J., Swager, T. M., and Thomas, E. L., "Phase and orientational ordering of A–B–A tri-block co-polymers guest in a quenched host of low molecular weight rod molecules", Chemical Physics Letters, vol. 408, pp. 139–144, 2005.
We investigate the effects of thermodynamic variables, intermolecular interactions, and block lengths on the phase and orientational ordering of guest triblock copolymers in a host glassy matrix of short molecular rods. The A and B blocks can align with the short rod molecules. Using a field-theoretic formulation, we demonstrate the occurrence of a nematic-nematic (N/N) first-order transition from a guest-stabilized to a guest-host-stabilized region, a reentrant transition from a guest-stabilized nematic region to a host-only-stabilized regime via an isotropic phase, and the possibility of selectively stabilizing the orientation of the A and/or B blocks.
D. Lee and Swager, T. M., "Toward Isolated Molecular Wires: A pH-Responsive Canopied Polypyrrole", Chemistry of Materials, vol. 17, pp. 4622–4629, 2005.
The design, synthesis, and electropolymerization of sterically hindered pyrrole derivatives are described. Endo and exo adducts of cyclopentadiene and N-phenylmaleimide were converted to bicyclo[2.2.1]heptane-fused pyrrole monomers in which a phenyl group is rigidly placed proximate to the pyrrole fragment. Oxidative polymerization of these monomers affords highly conductive polypyrroles. The rigid molecular scaffold of the pyrrole monomers limits cross-communication between adjacent conducting strands and produces a defined, narrow potential window of high conductivity dominated by intrachain polaronic charge carriers. The resultant loosely packed polymer chains allow for superior sensory properties. Notably, the electrical conductivity of oxidatively doped poly(7) could be reversibly modulated in aqueous electrolytes at pH 3-9.
Z. Xiang, Nesterov, E. E., Skoch, J., Lin, T., Hyman, B. T., Swager, T. M., Bacskai, B. J., and Reeves, S. A., "Detection of myelination using a novel histological probe.", Journal of Histochemistry and Cytochemistry, vol. 53, pp. 1511–6, 2005.
Current methods for myelin staining in tissue sections include both histological and immunohistochemical techniques. Fluorescence immunohistochemistry, which uses antibodies against myelin components such as myelin basic protein, is often used because of the convenience for multiple labeling. To facilitate studies on myelin, this paper describes a quick and easy method for direct myelin staining in rodent and human tissues using novel near-infrared myelin (NIM) dyes that are comparable to other well-characterized histochemical reagents. The near-infrared fluorescence spectra of these probes allow fluorescent staining of tissue sections in multiple channels using visible light fluorophores commonly used in immunocytochemistry. These dyes have been used successfully to detect normal myelin structure and myelin loss in a mouse model of demyelination disease.
S. W. Thomas III, Yagi, S., and Swager, T. M., "Towards chemosensing phosphorescent conjugated polymers: cyclometalated platinum(II) poly(phenylene)s", Journal of Materials Chemistry, vol. 15, p. 2829, 2005.
The synthesis and optical properties of several phosphorescent conjugated poly(phenylene)s containing cyclometalated square-planar platinum(II) complexes are reported. These electronic polymers were synthesized via Suzuki cross-coupling of a dibromophenylpyridine-ligated Pt(II) complex with a fluorene diboronic ester. Their optical properties are characterized by relatively strong orange room-temperature phosphorescence with well-resolved vibronic structure in both frozen 2-methyltetrahydrofuran glass and room-temperature fluid solution. Time-resolved phosphorescence spectroscopy has shown that the polymers have excited-state lifetimes of approximately 14 μs. The optical properties of the oligomers and polymers are contrasted with those of small model complexes, whose optical properties depend strongly on the identity of the β-diketonate ligand used. The potential utility of phosphorescent conjugated polymers is illustrated by examination of the diffusive quenching due to oxygen as a function of molecular structure.
P. H. Kwan and Swager, T. M., "Intramolecular photoinduced charge transfer in rotaxanes.", Journal of the American Chemical Society, vol. 127, pp. 5902–9, 2005.
We report the synthesis and photophysical investigation of a series of rotaxanes in which the physical confinement of the donor and acceptor (DA) pair leads, in some cases, to emissive exciplexes. As a comparison, we examined the photoinduced charge-transfer processes in the same DA mixtures under intermolecular conditions. The interlocked configuration of the rotaxane facilitates pi orbital overlap of the excited state DA pair by keeping their center-to-center distance extremely small. This increased interaction between the DA pair significantly lowers the activation energy for exciplex formation (E(a)) and helps stabilize the highly polar charge-transfer complex. We find that the stabilizing effect of the rotaxane architecture compensates for the modest thermodynamic driving force for some charge-transfer interactions. In addition, we examined the temperature dependence on the rotaxanes' optical properties. Metal coordination to the tetrahedral cavity disrupts the cofacial conformation of the DA pair and quenches the fluorescence. Binding of alkali metal ions to the 3,4-ethylenedioxythiophene (EDOT)-based rotaxane, however, gives rise to the emergence of a new weak emission band at even lower energies, indicative of a new emissive exciplex.
J. H. Wosnick, Mello, C. M., and Swager, T. M., "Synthesis and application of poly(phenylene ethynylene)s for bioconjugation: a conjugated polymer-based fluorogenic probe for proteases.", Journal of the American Chemical Society, vol. 127, pp. 3400–5, 2005.
A set of carboxylate-functionalized poly(phenylene ethynylene)s (PPEs) has been synthesized in which the carboxylic acid groups are separated from the polymer backbone by oligo(ethylene glycol) spacer units. These polymers are soluble in water and organic solvents and have photophysical properties that are sensitive to solvent conditions, with high salt content and the absence of surfactant promoting the formation of aggregates of relatively low quantum yield and long fluorescence lifetime. Quenching of these materials by the dinitrophenyl (DNP) chromophore (K_SV ≈ 10⁴) is also highly solvent-dependent. The presence of carboxylate groups far from the polymer backbone appended to each repeating unit allows for the postpolymerization modification of these PPEs with peptides by methods analogous to those described for carboxylate-functionalized small-molecule dyes. Covalent attachment of the fluorescence-quenching 14-mer Lys(DNP)-GPLGMRGLGGGGK to the PPE results in a nonemissive substrate whose fluorescence is restored upon treatment with trypsin. The rate of fluorescence turn-on in this case is increased 3-fold by the presence of surfactant, though the actual rate of peptide hydrolysis remains the same. A small-molecule mimic of the polymer-peptide system shows a smaller fluorescence enhancement upon treatment with trypsin, illustrating the value of polymer-based amplification in this sensory scheme.
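For background, quenching constants such as the K_SV ≈ 10⁴ quoted above (units of M⁻¹ assumed, as is conventional) come from the Stern-Volmer relation F₀/F = 1 + K_SV[Q]. A minimal sketch of such a fit, using synthetic illustrative data rather than data from the paper:

```python
import numpy as np

# Stern-Volmer analysis: F0/F = 1 + Ksv*[Q]
# Synthetic, illustrative data with K_SV ~ 1e4 M^-1, as in the abstract above.
q = np.array([0.0, 2e-5, 4e-5, 6e-5, 8e-5, 1e-4])  # quencher concentration, M
f = 1000.0 / (1.0 + 1.0e4 * q)                     # simulated fluorescence intensity
f0 = f[0]                                          # intensity without quencher

# Linear least-squares fit of (F0/F - 1) versus [Q]; the slope is K_SV.
ksv = np.polyfit(q, f0 / f - 1.0, 1)[0]
print(f"K_SV = {ksv:.3g} M^-1")  # recovers ~1e4 M^-1
```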
S. W. Thomas, Long, T. M., Pate, B. D., Kline, S. R., Thomas, E. L., and Swager, T. M., "Perpendicular organization of macromolecules: synthesis and alignment studies of a soluble poly(iptycene).", Journal of the American Chemical Society, vol. 127, pp. 17976–7, 2005.
We describe herein a polymeric material that prefers to align perpendicular to a stretch-aligned polymer host in the solid state. Poly(iptycene) poly-1 was synthesized from monomer 1 under hyperbaric conditions via a Diels-Alder polymerization. Polarized excitation spectra of the anthracene end groups in this material in a stretch-aligned, solution-cast poly(vinyl chloride) (PVC) film showed that the poly(iptycene) prefers to align normal (counter to its aspect ratio) to the stretching direction of the PVC. This is explained by a "threading" mechanism, whereby the PVC intercalates through the internal free volume presented by poly-1, similar to effects observed in small-molecule iptycenes under similar conditions.
Y. Kim, Bouffard, J., Kooi, S. E., and Swager, T. M., "Highly emissive conjugated polymer excimers.", Journal of the American Chemical Society, vol. 127, pp. 13726–31, 2005.
Conjugated polymers often display a decrease of fluorescence efficiency upon aggregation due in large part to enhanced interpolymer interactions that produce weakly emissive species generally described as having excimer-like character. We have found that poly(phenylene ethynylene)s with fused pendant [2.2.2] ring structures having alkene bridges substituted with two ester groups function to give highly emissive, broad, and red-shifted emission spectra in the solid state. To best understand the origin of this new solid-state emissive species, we have performed photophysical studies of a series of different materials in solution, spin-coated thin films, solid solutions, and Langmuir films. We conclude that the new, red-shifted, emissive species originate from excimers produced by interchain interactions being mediated by the particular [2.2.2] ring system employed. The ability to design structures that can reliably produce highly emissive conjugated polymer excimers offers new opportunities in the emission tailoring of electroluminescence and sensory devices.
Y. Kim, Whitten, J. E., and Swager, T. M., "High ionization potential conjugated polymers.", Journal of the American Chemical Society, vol. 127, pp. 12122–30, 2005.
We report the synthesis of a series of poly(p-phenylene ethynylene)s (PPEs) with high ionization potentials and associated high excited-state electron affinities. Their photophysical properties were investigated using steady-state and time-resolved fluorescence techniques. The ionization potentials of the polymer thin films were determined using ultraviolet photoelectron spectroscopy (UPS), and those with the highest ionization potentials displayed high sensitivity for the detection of electron-donating aromatic compounds. The effects of sterics, chemical structure, and electronic properties on the polymers' sensory responses were investigated by fluorescence quenching experiments in both solution and solid thin films. In addition, we report that in some cases the excited-state charge-transfer complexes (exciplexes) of the PPEs with analytes were observed. These latter effects provide promising opportunities for the formation of sensitive and selective chemical sensors.
E. E. Nesterov, Zhu, Z., and Swager, T. M., "Conjugation enhancement of intramolecular exciton migration in poly(p-phenylene ethynylene)s.", Journal of the American Chemical Society, vol. 127, pp. 10083–8, 2005.
Efficient energy migration in conjugated polymers is critical to their performance in photovoltaic, display, and sensor devices. The ability to precisely control the polymer conformation is a key issue for the experimental investigations and deeper understanding of the nature of this process. We make use of specially designed iptycene-containing poly(p-phenylene ethynylene)s that display chain-extended conformations when dissolved in nematic liquid crystalline solvents. In these solutions, the polymers show a substantial enhancement in the intrachain exciton migration rate, which is attributed to their increased conjugation length and better alignment. The organizational enhancement of the energy transfer efficiency, as determined by site-selective emission from lower energy traps at the polymer termini, is accompanied by a significant increase of the fluorescence quantum yield. The liquid crystalline phase is a necessary requirement for these phenomena to occur, and when the temperature was increased above the nematic-isotropic transition, we observed a dramatic reduction of the energy transfer efficiency and fluorescence quantum yield. The ability to improve the exciton migration efficiency through precise control of the polymer structure with liquid crystalline solutions demonstrates the importance of a polymer's conformation for energy transfer, and provides a way to improve the energy transporting performance of conjugated polymers.
M. H. Song, Shin, K. - C., Park, B., Takanishi, Y., Ishikawa, K., Watanabe, J., Takezoe, H., Nishimura, S., Toyooka, T., Zhu, Z., and Swager, T. M., "Effect of thick anisotropic NLC layer on lasing in polymeric cholesteric liquid crystals.", Journal of the Korean Physical Society, vol. 46, pp. S265–S268, 2005.
We have studied the lasing characteristics of a dye-doped nematic liquid crystal layer sandwiched between two polymeric cholesteric liquid crystal films acting as photonic band gap materials. The birefringent nematic layer brings about the following remarkable optical characteristics: (1) reflectance in the photonic band gap (PBG) region exceeds 50% owing to the retardation effect, a result unpredictable from a single CLC film; (2) efficient lasing occurs at the notch within the PBG; and (3) the lasing emissions contain both right- and left-circular polarizations. In this study, we demonstrate that a system with a 100-μm-thick nematic liquid crystal layer shows several dips in its reflection spectrum, resulting in multi-mode lasing just at these dips within the PBG.
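Background, assuming the textbook description of cholesteric photonic band gaps (not stated in the abstract itself): a cholesteric of pitch $P$ with ordinary and extraordinary refractive indices $n_o$ and $n_e$ selectively reflects one circular polarization over the band

    $n_o P \leq \lambda \leq n_e P$,

centered at $\bar{n}P$ with $\bar{n} = (n_o + n_e)/2$. A birefringent defect layer inside this band creates transmission dips (the "notches" above) at which low-threshold, multi-mode lasing can occur.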
D. Zhao and Swager, T. M., "Sensory Responses in Solution vs Solid State: A Fluorescence Quenching Study of Poly(iptycenebutadiynylene)s", Macromolecules, vol. 38, pp. 9377–9384, 2005.
A new series of poly(p-phenylenebutadiynylene)s has been synthesized with unique polymer structural features. In these systems, each of the p-phenylene units in the conjugated backbone is the core of a rigid three-dimensional iptycene scaffold. The fluorescence quenching properties of these polymers in response to a series of electron-deficient aromatic compounds have been investigated in both solution and the solid state. It was found that in solution these polymers displayed higher quenching sensitivity toward the studied quenchers than a more open-structured iptycene-containing poly(p-phenyleneethynylene). The quenching behavior of the conjugated polymers was shown to be strongly influenced by the configuration of the incorporated iptycenes. The thin-film investigations revealed differences in both the fluorescence quenching and the recovery processes. These distinct behaviors indicated that fluorescence quenching in the solid state is dictated by different factors than in solution. Our results further suggest that poly(p-phenylenebutadiynylene)s containing large iptycene scaffolds that introduce porosity can efficiently sequester quencher molecules within thin films, as these materials display slow fluorescence recoveries.
J. P. Amara and Swager, T. M., "Synthesis and Properties of Poly(phenylene ethynylene)s with Pendant Hexafluoro-2-propanol Groups", Macromolecules, vol. 38, pp. 9091–9094, 2005.
Several poly(phenylene-ethynylene)s with pendant hexafluoro-2-propanol (HFIP) groups have been synthesized and characterized in terms of their solution and thin-film optical properties. The incorporation of strongly hydrogen-bond-donating HFIP groups into conjugated polymers is shown to greatly enhance their fluorescence response upon exposure to the vapors of several hydrogen-bond-accepting analytes, such as pyridine and 2,4-dichloropyrimidine. The enhanced sensitivity of these conjugated polymer-based chemosensors is the result of stronger analyte/polymer binding interactions and more facile photoinduced charge-transfer reactions with hydrogen-bonded analytes.
J. H. Wosnick, Liao, J. H., and Swager, T. M., "Layer-by-Layer Poly(phenylene ethynylene) Films on Silica Microspheres for Enhanced Sensory Amplification", Macromolecules, vol. 38, pp. 9287–9290, 2005.
Alternating polyelectrolyte "layer-by-layer" deposition was used to create emissive thin films of a weakly anionic, nonaggregating poly(phenylene ethynylene) on silica microspheres. The resulting particles are smooth and emit fluorescence solely from the outer polymer-coated surface. Suspensions of the coated microspheres show fluorescence properties similar to those of the polymers in solution, but with enhanced sensitivity (up to 200-fold) to nitroaromatic quenchers. These enhancements are attributed to a combination of surface and electronic effects.
C. Song and Swager, T. M., "Highly Conductive Poly(phenylene thienylene)s: m-Phenylene Linkages Are Not Always Bad", Macromolecules, vol. 38, pp. 4569–4576, 2005.
Two isomeric polymers, which contain meta- or para-phenylene linkages between conducting segments, were synthesized and compared by electrochemical methods. The non-conjugated poly(1,5-diacetoxy-m-phenylene tetrathienylene) (PMPT-OAc) showed electroactivity similar to that of the para isomer, poly(2,5-diacetoxy-p-phenylene tetrathienylene) (PPPT-OAc), in cyclic voltammetry and in-situ conductivity measurements. Spectroelectrochemistry showed a similar buildup of sub-band-gap electronic transitions. Deacetylation of the polymers was performed successfully by reaction with hydrazine, producing PMPT-OH and PPPT-OH. Both phenol-substituted polymers exhibited greater electroactivity than the acetoxy-substituted polymers. In addition, the phenol substituents were found to lower the "turn-on" potential in the conductivity-potential profile for PMPT-OH, but not for PPPT-OH. An alkoxy-substituted PMPT-OMe shows electroactivity similar to that of PMPT-OAc in cyclic voltammetry, in-situ conductivity, and spectroelectrochemistry.
A. Satrijo and Swager, T. M., "Facile Control of Chiral Packing in Poly(p-phenylenevinylene) Spin-Cast Films", Macromolecules, vol. 38, pp. 4054–4057, 2005.
Circular dichroism (CD) spectroscopy was used to demonstrate that a film spin-cast from a polymer having a poly(phenylenevinylene) backbone and enantiomerically pure chiral side chains can be fabricated from different solvents to give correspondingly different chiral architectures with opposite handedness.
S. W. Thomas and Swager, T. M., "Synthesis and Optical Properties of Simple Amine-Containing Conjugated Polymers", Macromolecules, vol. 38, pp. 2716–2721, 2005.
Conjugated polymers (CPs) containing amino groups have been synthesized, and their optical properties in both solution and thin film have been studied. The new monomers required for the synthesis of these polymers were readily prepared via efficient synthetic routes and were successfully polymerized with a variety of comonomers. The spectral positions of the absorption and emission spectra correlate with the degree of electron density on the polymer chain. Polymers containing N-alkylcarbazole units display solution optical properties similar to those of most CPs. Polymers with dialkylamino groups, however, display very different optical properties, including broadened absorbance and emission spectra, larger Stokes shifts, and longer excited-state lifetimes. These results are consistent with a significant difference between the molecular geometries of the absorbing and emitting states. The solid-state emission of most of the polymers is sufficient to warrant their consideration as fluorescent sensing materials.
C. A. Breen, Rifai, S., Bulović, V., and Swager, T. M., "Blue electroluminescence from oxadiazole grafted poly(phenylene-ethynylene)s.", Nano Letters, vol. 5, pp. 1597–601, 2005.
Blue poly(phenylene-ethynylene) (PPE) electroluminescence is achieved in a single layer organic light emitting device. The polymeric system consists of an oxadiazole grafted PPE, which combines the necessary charge transport properties while maintaining the desirable efficient, narrow light-emitting properties of the PPE. Incorporation of a pentiptycene scaffold within the PPE structure prevents ground-state and excited-state interactions between the pendent oxadiazole units and the conjugated backbone.
A. Rose, Zhu, Z., Madigan, C. F., Swager, T. M., and Bulović, V., "Sensitivity gains in chemosensing by lasing action in organic polymers.", Nature, vol. 434, pp. 876–9, 2005.
Societal needs for greater security require dramatic improvements in the sensitivity of chemical and biological sensors. To meet this challenge, increasing emphasis in analytical science has been directed towards materials and devices having highly nonlinear characteristics; semiconducting organic polymers (SOPs), with their facile excited state (exciton) transport, are prime examples of amplifying materials. SOPs have also been recognized as promising lasing materials, although the susceptibility of these materials to optical damage has thus far limited applications. Here we report that attenuated lasing in optically pumped SOP thin films displays a sensitivity to vapours of explosives more than 30 times higher than is observed from spontaneous emission. Critical to this achievement was the development of a transducing polymer with high thin-film quantum yield, a high optical damage threshold in ambient atmosphere and a record low lasing threshold. Trace vapours of the explosives 2,4,6-trinitrotoluene (TNT) and 2,4-dinitrotoluene (DNT) introduce non-radiative deactivation pathways that compete with stimulated emission. We demonstrate that the induced cessation of the lasing action, and associated sensitivity enhancement, is most pronounced when films are pumped at intensities near their lasing threshold. The combined gains from amplifying materials and lasing promise to deliver sensors that can detect explosives with unparalleled sensitivity.
D. Zhao and Swager, T. M., "Conjugated polymers containing large soluble diethynyl iptycenes.", Organic letters, vol. 7, pp. 4357–60, 2005.
An efficient synthesis of large iptycenes appended with alkoxy and ethynyl substituents is reported. The rigid shape-persistent iptycene scaffold prevents interactions between the polymer backbones and can be used to solubilize polymers containing less soluble but readily accessible comonomers to prepare functional, solution-processible poly(p-phenyleneethynylene) (PPE)-conjugated polymers. These polymers are highly emissive in thin films without significant excimer/exciplex formation as a result of the effective chain isolation enforced by the iptycene units.
T. Deng, Breen, C., Breiner, T., Swager, T. M., and Thomas, E. L., "A block copolymer nanotemplate for mechanically tunable polarized emission from a conjugated polymer", Polymer, vol. 46, pp. 10113–10118, 2005.
A polymer blend system consisting of polystyrene grafted onto poly(p-phenylene ethynylene) (PS-g-PPE) and poly(styrene-block-isoprene-block-styrene) triblock copolymer (SIS) yields highly polarized emission due to the unidirectional alignment of the PPE molecules. During roll casting, the triblock copolymer microphase-separates and creates unidirectionally aligned PS cylindrical microdomains in the rubbery PI matrix. PPE, a fluorescent conjugated polymer, was grafted with polystyrene (PS) side chains that enabled sequestration and alignment of these rigid-backbone emitter molecules in the PS microdomains of the SIS triblock copolymer. Deforming the thermoplastic elastomer in a direction perpendicular to the orientation direction of the cylinders causes rotation of the PS cylinders and the PPE emitter molecules and affords tunable polarized emission, due both to reorientation of the PPE-containing PS cylinders and to film thinning from the Poisson effect.
C. A. Breen, Tischler, J. R., Bulović, V., and Swager, T. M., "Highly Efficient Blue Electroluminescence from Poly(phenylene ethynylene) via Energy Transfer from a Hole-Transport Matrix", Advanced Materials, vol. 17, pp. 1981–1985, 2005.
Blue poly(phenylene ethynylene) (PPE) electroluminescence (EL) is observed in a hybrid organic light-emitting diode (OLED) with Commission Internationale de l'Eclairage coordinates of (x = 0.17; y = 0.20). Energy transfer from a hole-transport host to polystyrene-grafted PPEs is utilized to improve the poor LED characteristics of traditional PPE-based systems.
J. Zheng and Swager, T. M., "Poly(arylene ethynylene)s in chemosensing and biosensing.", Advances in Polymer Science, vol. 177, pp. 151–179, 2005.
A review. Poly(arylene ethynylene)s (PArEs) have been used in recent years as effective transducers for a variety of sensing purposes, ranging from organic molecules such as methyl viologen and TNT to biological analytes. Their superior sensitivity to minor perturbations is fundamentally governed by the energy transport properties resulting from the extended conjugation of the polymer backbone. An understanding of the underlying principles of energy transport allows the design of sensors with greater sensitivity and specificity. Pioneering work with methyl viologen as an electron-transfer quencher demonstrated that connecting receptors in series amplifies the sensing response compared to that of individual receptors. Since then, factors such as the electronic and structural nature of the polymers and their assembly architecture have proven to be important in improving sensory response. In this review, the authors present an overview of work to date by various groups in the field of PArE chemosensors and biosensors.
E. E. Nesterov, Skoch, J., Hyman, B. T., Klunk, W. E., Bacskai, B. J., and Swager, T. M., "In vivo optical imaging of amyloid aggregates in brain: design of fluorescent markers.", Angewandte Chemie, International Edition, vol. 44, pp. 5452–6, 2005.
Routine diagnostics and studies of Alzheimer's disease might benefit from the noninvasive optical imaging of amyloid-β plaques in the brain. A rational design strategy for in vivo amyloid-imaging agents that enter the brain and selectively stain amyloid plaques is presented, and the properties of a promising lead biomarker candidate are reported.
T. M. Long and Swager, T. M., "Molecular design of free volume as a route to low-kappa dielectric materials.", Journal of the American Chemical Society, vol. 125, pp. 14113–9, 2003.
Polymers incorporating the triptycene subunit were prepared for the molecular-level design of low dielectric constant (low-kappa) materials that can be used to manufacture faster integrated circuits. Triptycenes having restricted rotation by multiple-point attachment to the polymer backbone are shown to introduce free volume into the films, thereby lowering their dielectric constants. The triptycene-containing polymers exhibit a number of desirable properties, including low water absorption and high thermal stability. Systematic studies comparing two separate classes of triptycene polymers with their non-triptycene-containing analogues demonstrate that proper insertion of triptycenes into a polymer backbone can reduce the material's dielectric constant while also improving its mechanical properties. These characteristics are desired by the semiconductor industry for the next generation of microprocessors and memory, to provide insulation of increasingly shrinking features.
C. A. Breen, Deng, T., Breiner, T., Thomas, E. L., and Swager, T. M., "Polarized photoluminescence from poly(p-phenylene-ethynylene) via a block copolymer nanotemplate.", Journal of the American Chemical Society, vol. 125, pp. 9942–3, 2003.
A new approach based on a conjugated polymer/block copolymer guest/host system for the generation of polarized photoluminescence is reported. Synthetic modification of a poly(p-phenylene-ethynylene) (PPE) conjugated polymer is used for domain-specific incorporation into a cylindrical-morphology block copolymer host matrix. Subsequent ordering of the host nanostructure via roll-cast processing templates uniaxial alignment of the guest PPE. The ordered films are optically anisotropic, displaying both polarized absorption with a dichroic ratio of 3.0 at 440 nm and polarized emission with a polarization ratio of 7.3 at 472 nm.
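Background, assuming the usual definitions of the quoted anisotropy figures (the abstract does not spell them out): the dichroic ratio in absorption and the polarization ratio in emission are

    $D = A_{\parallel}/A_{\perp} = 3.0$ at 440 nm, $\quad r = I_{\parallel}/I_{\perp} = 7.3$ at 472 nm,

with parallel and perpendicular taken relative to the cylinder (roll-casting) alignment direction.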
D. Lee and Swager, T. M., "Defining space around conducting polymers: reversible protonic doping of a canopied polypyrrole.", Journal of the American Chemical Society, vol. 125, pp. 6870–1, 2003.
A canopy-shaped pyrrole derivative 2 was prepared, in which a sterically demanding pendant group is juxtaposed to the pyrrole fragment to minimize interstrand pi-pi stacking interactions in the resulting polymer. Anodic polymerization of 2 afforded highly conductive poly(2), the electronic structure of which was probed by various spectroelectrochemical techniques. A limited charge delocalization within poly(2) translates into a well-defined conductivity profile, properties important for resistivity-based sensing. Notably, the bulk conductivity was precisely modulated by a rapid and reversible deprotonation and reprotonation of the polymer backbone.
S. - W. Zhang and Swager, T. M., "Fluorescent detection of chemical warfare agents: functional group specific ratiometric chemosensors.", Journal of the American Chemical Society, vol. 125, pp. 3420–1, 2003.
Indicators providing a highly sensitive and functional-group-specific fluorescent response to diisopropyl fluorophosphate (DFP, a nerve gas (G-agent) simulant) are reported. Nonemissive indicator 2 reacts with DFP to give a cyclized compound 2+A- that shows strong emission due to its highly planar and rigid structure; only very weak emission was observed upon the addition of HCl. Another indicator, based on pyridyl naphthalene, exhibits a large shift in its emission spectrum after reaction with DFP, which provides for quantitative ratiometric detection.
H. - H. Yu, Xu, B., and Swager, T. M., "A proton-doped calix[4]arene-based conducting polymer.", Journal of the American Chemical Society, vol. 125, pp. 1142–3, 2003.
Segmented conducting polymers based upon a calix[4]arene scaffold are reported. The cone conformation creates a zigzag orientation of the polymer segments. Their acid-dependent conductivities resemble the strong pH-dependent conductivity of polyaniline, which is said to be acid dopable. On the other hand, they have a segmented structure that imposes greater localization of the carriers. The conductivity of such a system can be considered to result from rapid self-exchange between discrete units. Hence, electron exchange between radical cations and p-diquinone salts produces the high conductivity of these polymers.
K. Kuroda and Swager, T. M., "Self-amplifying sensory materials: Energy migration in polymer semiconductors.", Macromolecular Symposia, vol. 201, pp. 127–134, 2003.
A review. Signal amplification for ultra-sensitive detection was achieved by energy migration in conjugated semiconducting polymeric assemblies. Critical to optimizing this effect are the synthesis of non-aggregating polymers, the multi-dimensional directional transport of excited states (excitons), and extension of the intrinsic excited-state lifetime of conjugated polymers. We developed new water-soluble, non-ionic conjugated polymers for use in biosensory applications, which can provide highly sensitive and specific ultra-trace detection that is immune to the specificity problems that plague ionic conjugated polymers.
M. Fasolka, Goldner, L., Hwang, J., Urbas, A., DeRege, P., Swager, T., and Thomas, E., "Measuring Local Optical Properties: Near-Field Polarimetry of Photonic Block Copolymer Morphology", Physical Review Letters, vol. 90, p. 016107, 2003.
Ultrahigh molecular weight polystyrene-b-polyisoprene block copolymers (BCs), noted for their photonic behavior, were imaged using transmission near-field scanning optical microscopy (NSOM) and NSOM polarimetry. The authors' improved scheme for polarization-modulation (PM) polarimetry, which accounts for optical anisotropies of the NSOM aperture probe, enables mapping of the local di-attenuation and birefringence (with separately aligned di-attenuating and fast axes) in these specimens with sub-diffraction-limited resolution. PM-NSOM micrographs illuminate the mesoscopic optical nature of these BC specimens by resolving individual microphase domains and defect structures.
T. - H. Kim and Swager, T. M., "A fluorescent self-amplifying wavelength-responsive sensory polymer for fluoride ions.", Angewandte Chemie, International Edition, vol. 42, pp. 4803–6, 2003.
The authors have introduced a new system for the detection of fluoride ion and successfully amplified the response using exciton migration in a semiconducting organic polymer. In contrast to most semiconducting-polymer sensor schemes that rely on changes in emission intensity, this sensory system uses a new fluorescence signal. The approach of directly interconnecting the indicator electronically with the polymer's band structure is a promising alternative to FRET schemes, and the authors intend to apply this approach to other analytes of interest.
K. Kuroda and Swager, T. M., "Synthesis of a nonionic water soluble semiconductive polymer", Chemical Communications, pp. 26–27, 2003.
A new nonionic, water-soluble fluorescent conjugated poly(phenylene ethynylene) is reported, with hydroxyl and amide side chains surrounding an aromatic polymer backbone.
F. Araoka, Shin, K. - C., Takanishi, Y., Ishikawa, K., Takezoe, H., Zhu, Z., and Swager, T. M., "How doping a cholesteric liquid crystal with polymeric dye improves an order parameter and makes possible low threshold lasing.", Journal of Applied Physics, vol. 94, pp. 279–283, 2003.
Lasing conditions in a dye-doped cholesteric liquid crystal (ChLC) were studied in view of the optical modes of light propagating in ChLCs, using a polymeric dye with its transition dipole moment parallel to the local director of the ChLC host. Lasing always occurs at the lower-energy edge of the photonic gap. This is because the optical eigenmode at the lower-energy edge is linearly polarized parallel to the director, whereas at the higher-energy edge it is perpendicular. Because of this well-defined lasing condition, low-threshold lasing was successfully achieved.
T. Swager and Okamoto, Y., "Foreword", Journal of Polymer Science Part A: Polymer Chemistry, vol. 41, pp. 3469–3469, 2003.
J. D. Tovar and Swager, T. M., "Cofacially constrained organic semiconductors.", Journal of Polymer Science, Part A: Polymer Chemistry, vol. 41, pp. 3693–3702, 2003.
The synthesis, spectroscopy, and electrochemistry of bis[oligo(thiophene)] monomers, in which the sterics of covalently attached subunits enforce oblique spatial orientations, are presented. The synthetic scheme applied to a variety of chromophores; standard bromination and cross-coupling chemistry afforded bis(terthienyl)benzene systems in high yield. Model systems were prepared consisting of mono(terthiophene)s, to probe the effects that van der Waals contacts impose during electropolymerization, and of methyl-blocked analogues, to examine the electrochemical properties of derivatives that do not undergo polymerization. The extent of delocalization between chromophores, as deduced from electrochemical studies, is discussed, and viable electrochromic polymers are demonstrated.
M. H. Song, Park, B., Shin, K. - C., Ohta, T., Tsunoda, Y., Hoshi, H., Takanishi, Y., Ishikawa, K., Watanabe, J., Nishimura, S., Toyooka, T., Zhu, Z., Swager, T. M., and Takezoe, H., "Effect of Phase Retardation on Defect-Mode Lasing in Polymeric Cholesteric Liquid Crystals", Advanced Materials, vol. 16, pp. 779–783, 2004.
The authors studied the effects of an anisotropic nematic liquid crystal (NLC) defect layer introduced between polymeric cholesteric liquid crystal (PCLC) layers. It was found that the modulation of reflectance exceeds the 50% limit of simple CLC photonic band gaps (PBGs) owing to the birefringence of the defect layer. Moreover, by adjusting the PBG region to coincide with the fluorescent emission band of the guest polymer dye, sharp defect-mode lasing was successfully observed at the defect mode in the middle of the PBG. These results demonstrate that the configuration of a PCLC with a polymer-dye-doped anisotropic defect layer is very attractive for lasing.
H. - H. Yu, Pullen, A. E., Büschel, M. G., and Swager, T. M., "Charge-specific interactions in segmented conducting polymers: an approach to selective ionoresistive responses.", Angewandte Chemie, International Edition, vol. 43, pp. 3700–3703, 2004.
J. Zheng and Swager, T. M., "Biotinylated poly(p-phenylene ethynylene): unexpected energy transfer results in the detection of biological analytes.", Chemical Communications, pp. 2798–9, 2004.
Decreased spectral overlap between a donor biotinylated poly(p-phenylene ethynylene) and a chromophore-labeled streptavidin acceptor leads to better observed fluorescence resonance energy transfer.
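Background, assuming the standard Förster analysis (the abstract itself gives no equations): the expected energy-transfer rate scales as

    $k_{\mathrm{ET}} \propto r^{-6}\, J(\lambda)$, with overlap integral $J(\lambda) = \int F_D(\lambda)\,\epsilon_A(\lambda)\,\lambda^4\,d\lambda$,

where $F_D$ is the area-normalized donor emission and $\epsilon_A$ the acceptor extinction coefficient. The unexpected result above is that a smaller $J(\lambda)$ nevertheless gave better observed transfer.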
J. H. Wosnick and Swager, T. M., "Enhanced fluorescence quenching in receptor-containing conjugated polymers: a calix[4]arene-containing poly(phenylene ethynylene).", Chemical Communications, pp. 2744–5, 2004.
A fluorescent poly(phenylene ethynylene) containing calix[4]arene-based receptor units has a sensitivity to quenching by the N-methylquinolinium ion that is over three times larger than that seen in a control polymer lacking calix[4]arenes.
L. Gutman, Cao, J., Swager, T. M., and Thomas, E. L., "Orientational ordering of short LC rods in an anisotropic liquid crystalline polymer glass.", Chemical Physics Letters, vol. 389, pp. 198–203, 2004.
The orientational phase diagram and ordering of guest liquid crystalline (LC) rods in a host liquid crystalline polymer (LCP) matrix quenched below the glass transition are determined by field theory. Microscopic anisotropic interactions can align the LC rods to each other and can also align the LCP matrix side chains and the LC rods in the plane normal to the local LCP chain contour. The authors' numerical analysis suggests ways to exploit host entropy and the anisotropy of microscopic interactions, and to manipulate the properties of LC rods for modern applications. The authors predict a discontinuous nematic-nematic orientational transition from a guest-stabilized to a guest-host-stabilized region, and a reentrant transition from a guest-stabilized nematic region to a host-only-stabilized regime. A detailed analysis of phase boundaries, transitions, and ordering is presented.
H. - H. Yu and Swager, T. M., "Molecular Actuators—Designing Actuating Materials at the Molecular Level", IEEE Journal of Oceanic Engineering, vol. 29, pp. 692–695, 2004.
K. - C. Shin, Araoka, F., Park, B., Takanishi, Y., Ishikawa, K., Zhu, Z., Swager, T. M., and Takezoe, H., "Advantages of Highly Ordered Polymer-Dyes for Lasing in Chiral Nematic Liquid Crystals", Japanese Journal of Applied Physics, vol. 43, pp. 631–636, 2004.
Lasing in dye-doped chiral nematic liquid crystals (N*LCs) was studied. To demonstrate the advantages of using a polymer dye that is highly aligned along the local director of the N*LC over commercial small-molecule dyes such as 4-(dicyanomethylene)-2-methyl-6-(4-dimethylaminostyryl)-4H-pyran (DCM), comparative studies of the fluorescence, lasing conditions, and order parameters were made using polymer-dye-doped N*LC and DCM-doped N*LC cells. Right- and left-circularly polarized fluorescence spectra for both dyes were accurately simulated by taking account of their density of modes and order parameters. The greater alignment afforded by the polymer dye in N*LCs provides ideal conditions for lasing.
M. D. Disney, Zheng, J., Swager, T. M., and Seeberger, P. H., "Detection of bacteria with carbohydrate-functionalized fluorescent polymers.", Journal of the American Chemical Society, vol. 126, pp. 13343–6, 2004.
Many pathogens that infect humans use cell surface carbohydrates as receptors to facilitate cell-cell adhesion. The hallmark of these interactions is their multivalency, or the simultaneous occurrence of multiple interactions. We have used a carbohydrate-functionalized fluorescent polymer, which displays many carbohydrate ligands on a single polymer chain, to allow for multivalent detection of pathogens. Incubation of a mannose-functionalized polymer with Escherichia coli yields brightly fluorescent aggregates of bacteria. These results show that carbohydrate-functionalized fluorescent polymers are a versatile detection method for bacteria. Future design of detectors for other pathogens only requires information on the carbohydrates bound by the organisms, which has been exhaustively reported in the literature.
K. - N. Hu, Yu, H. - H., Swager, T. M., and Griffin, R. G., "Dynamic nuclear polarization with biradicals.", Journal of the American Chemical Society, vol. 126, pp. 10844–5, 2004.
Dynamic nuclear polarization (DNP) experiments in rotating solids have been performed for the first time using biradicals rather than monomeric paramagnetic centers as polarizing agents. Specifically, two TEMPO radicals were tethered with a poly(ethylene glycol) chain of variable length where the number of glycol units was 2, 3, or 4. NMR experiments show that the signal observed in DNP experiments is approximately inversely proportional to the length of the chain. Thus, the shorter chain with larger electron dipolar couplings yields larger enhancements. The size of the enhancement is a factor of 4 larger than obtained with the identical concentration of monomeric nitroxide radicals achieving a value of approximately 175 for the n = 2 chain.
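Background, assuming the conventional definition of the quoted enhancement (not stated explicitly above): the DNP enhancement is

    $\varepsilon = S_{\mathrm{\mu w\,on}} / S_{\mathrm{\mu w\,off}}$,

the ratio of NMR signal intensity with and without microwave irradiation of the electron spins. The reported $\varepsilon \approx 175$ for the n = 2 biradical, being a factor of 4 above monomeric nitroxides at identical radical concentration, implies roughly $175/4 \approx 44$ for the monomeric case.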
P. H. Kwan, MacLachlan, M. J., and Swager, T. M., "Rotaxanated Conjugated Sensory Polymers.", Journal of the American Chemical Society, vol. 126, pp. 8638–8639, 2004.
The preparation of two highly emissive conjugated polyacetylenes with tethered rotaxane repeat units is reported. Hydrogen bonding between acidic alcohols and the N-heteroaromatic groups in the rotaxanes attenuates the polymer fluorescence. In addition, the rotaxane groups create precise three-dimensional pockets for metal binding, which results in fluorescence quenching. Exposing thin films of Zn-doped polymers to alcohol vapors reverses the quenching by up to 25%.
Y. Kim, Zhu, Z., and Swager, T. M., "Hyperconjugative and inductive perturbations in poly(p-phenylene vinylenes).", Journal of the American Chemical Society, vol. 126, pp. 452–3, 2004.
New polymers having high solid-state fluorescence quantum yields and the ability to tune their electron affinity, without affecting their band gap, via hyperconjugative interactions are reported. The novel three-dimensional poly(phenylene vinylenes) having [2.2.2] bicyclic ring systems were synthesized, and the different hyperconjugative perturbations provide differential fluorescence sensory quenching responses to electron-rich and electron-deficient analytes in solution and in solid thin films.
J. P. Amara and Swager, T. M., "Incorporation of Internal Free Volume: Synthesis and Characterization of Iptycene-Elaborated Poly(butadiene)s", Macromolecules, vol. 37, pp. 3068–3070, 2004.
The preparation and characterization of three high molecular weight iptycene-containing polybutadienes, as well as their starting monomers (derived from anthracene, tetracene, and 1,4-anthraquinone via initial Diels-Alder reactions followed by operations such as hydrogenation, dehydrohalogenation, and dehydration), are described. Each monomer possessed at least one butadiene unit that was used for radical polymerization, leading to polymers presenting an internal free volume that could be utilized to enhance miscibility and reduce phase separation in polymer blends.
K. Kuroda and Swager, T. M., "Fluorescent Semiconducting Polymer Conjugates of Poly( N -isopropylacrylamide) for Thermal Precipitation Assays", Macromolecules, vol. 37, pp. 716–724, 2004.
New thermally responsive fluorescent polymers conjugated with poly(N-isopropylacrylamide) (polyNIPA) were synthesized. A nonionic water-soluble poly(phenyleneethynylene) (PPE) was end-capped with a di-tert-butylnitroxide derivative, and subsequent nitroxide-mediated radical polymerization of NIPA afforded PPE-polyNIPA block copolymers. These copolymers phase-separate from aqueous solutions upon heating, and the resulting precipitates are efficiently collected by filtration. The fluorescence spectra of the precipitates indicate the absence of strong associations between the PPE π-systems. Furthermore, the fluorescence intensities of the collected solids correlate linearly with the polymer concentrations in the solutions of origin. When the copolymers are thermally coprecipitated with dye-labeled (rhodamine B) polyNIPA materials, the dye is localized to the PPE segments, inducing fluorescence resonance energy transfer from the PPE segment (donor) to the dye (acceptor).
A. Paraskos, Nishiyama, Y., and Swager, T., "Synthesis and Characterization of Triphenylene-Dione Half-Disc Mesogens", Molecular Crystals and Liquid Crystals, vol. 411, pp. 363–375, 2004.
Novel half-disk mesogens of general structure 6,7,10,11-tetrakis(alkoxy)triphenylene-1,4-dione were synthesized and their phases characterized by polarized microscopy, DSC, and X-ray diffraction. These compounds form hexagonal columnar mesophases upon heating, despite their half-disk molecular shapes. X-ray diffraction suggests that a dimeric mesogenic subunit, driven by dipolar forces, exists within the mesophases, with the molecules oriented antiparallel to one another. Such a dimer is approximately disk-shaped and may explain the formation of columnar phases by these half-disk molecules.
K. R. Villazor and Swager, T. M., "Chiral Supramolecular Materials from Columnar Liquid Crystals", Molecular Crystals and Liquid Crystals, vol. 410, pp. 247–253, 2004.
Several β-diketonate ligands were synthesized that have a chiral directing element and/or a polymerizable moiety. Octahedral iron complexes of these ligands were crosslinked in the columnar hexagonal phase using acyclic diene metathesis (ADMET) polymerization. The resulting liquid crystal materials were chiral and retained the order of the mesophase.
T. M. Swager, "Polymer electronics for explosives detection.", NATO Science Series, II: Mathematics, Physics and Chemistry, vol. 159, pp. 29–37, 2004.
A review of the amplifying ability of semiconducting organic polymers in sensory schemes and its use for the detection of nitroaromatic explosives. Semiconducting organic polymers serve as extremely efficient conduits for the transport of optically induced excitations, and it is this transport property that allows for the high sensitivity of these materials to trinitrotoluene (TNT) and dinitrotoluene (DNT), the primary explosives used in landmines. Systematic molecular designs for sensory materials of improved sensitivity are described.
M. H. Song, Shin, K. - C., Park, B., Takanishi, Y., Ishikawa, K., Watanabe, J., Nishimura, S., Toyooka, T., Zhu, Z., Swager, T. M., and Takezoe, H., "Polarization characteristics of phase retardation defect mode lasing in polymeric cholesteric liquid crystals", Science and Technology of Advanced Materials, vol. 5, pp. 437–441, 2004.
The lasing characteristics were studied of a dye-doped nematic layer sandwiched between two polymeric cholesteric liquid crystal (CLC) films acting as photonic band gap (PBG) materials. The nematic layer acts as a defect layer whose anisotropy brings about the following remarkable optical characteristics: (1) reflectance in the PBG region exceeds 50% due to the retardation effect, a result unpredictable from a single CLC film; (2) efficient lasing occurs either at the defect-mode wavelength or at the photonic band edge; and (3) the lasing emission due to both the defect mode and the photonic band-edge mode contains both right- and left-circular polarizations, while the lasing emission from a dye-doped single CLC layer with a left-handed helix is left-circularly polarized.
D. Lee and Swager, T. M., "Defining Space Around Conjugated Polymers: New Vistas in Self-Amplifying Sensory Materials", Synlett, pp. 149–154, 2004.
A review. Synthetic strategies to control interchain electronic communication within conjugated polymers (CPs) are described. Novel chemical architectures built on iptycenes, metallorotaxanes, and canopied pyrroles restrict the dimensionality of the electronic structures responsible for exciton and charge transport. Structure-property relationships emerging from studies of selected systems are discussed, focusing on their implications for the sensitivity of these materials as sensors.
J. M. Nadeau and Swager, T. M., "New β-linked pyrrole monomers: approaches to highly stable and conductive electrochromic polymers.", Tetrahedron, vol. 60, pp. 7141–7146, 2004.
An efficient synthetic route to β-linked dipyrrole monomers has been developed. Electrochemical polymerization of these monomers leads to the incorporation of polycyclic aromatic residues into the polymer backbone. The resulting conjugated polymer films are electroactive, robust electrochromic materials that are highly delocalized in their oxidized forms.
L. Gutman, Cao, J., and Swager, T. M., "Phase and orientational ordering of low molecular weight rod molecules in a quenched liquid crystalline polymer matrix with mobile side chains.", The Journal of Chemical Physics, vol. 120, pp. 11316–26, 2004.
We study the phase diagram and orientational ordering of guest liquid crystalline (LC) rods immersed in a quenched host made of a liquid crystalline polymer (LCP) matrix with mobile side chains. The LCP matrix lies below the glass transition of the polymer backbone. The side chains are mobile and can align to the guest rod molecules in a plane normal to the local LCP chain contour. A field theoretic formulation for this system is proposed, and the effects of the LCP matrix on LC ordering are determined numerically. We obtain simple analytical equations for the nematic/isotropic phase diagram boundaries. Our calculations show a nematic-nematic (N/N) first-order transition from a guest-stabilized to a guest-host-stabilized region and the possibility of a reentrant transition from a guest-stabilized nematic region to a host-only-stabilized regime separated by an isotropic phase. A detailed study of the effects of thermodynamic variables and interactions on orientational ordering and phases is carried out, and the relevance of our predictions to experiments and computer simulations is presented.
S. Yamaguchi and Swager, T. M., "Oxidative Cyclization of Bis(biaryl)acetylenes: Synthesis and Photophysics of Dibenzo[g,p]chrysene-Based Fluorescent Polymers", Journal of the American Chemical Society, vol. 123, pp. 12087–12088, 2001.
J. Kim, McQuade, T. D., Rose, A., Zhu, Z., and Swager, T. M., "Directing Energy Transfer within Conjugated Polymer Thin Films", Journal of the American Chemical Society, vol. 123, pp. 11488–11489, 2001.
A. Rose, Lugmair, C. G., and Swager, T. M., "Excited-State Lifetime Modulation in Triphenylene-Based Conjugated Polymers", Journal of the American Chemical Society, vol. 123, pp. 11298–11299, 2001.
M. J. MacLachlan, Rose, A., and Swager, T. M., "A Rotaxane Exciplex", Journal of the American Chemical Society, vol. 123, pp. 9180–9181, 2001.
A. Vigalok, Zhu, Z., and Swager, T. M., "Conducting Polymers Incorporating Tungsten-Capped Calixarenes", Journal of the American Chemical Society, vol. 123, pp. 7917–7918, 2001.
I. A. Levitsky, Kim, J., and Swager, T. M., "Mass and Energy Transport in Conjugated Polymer Langmuir-Blodgett Films: Conductivity, Fluorescence, and UV-Vis Studies.", Macromolecules, vol. 34, pp. 2315–2319, 2001.
We have investigated a model thin-film sensory system in which analytes diffuse into multilayers of a fluorescent conjugated polymer. The film thickness is precisely controlled by depositing discrete monolayers by the Langmuir-Blodgett (LB) technique. The effects of analyte mass transport and energy migration on the photophysical properties of the films were investigated by UV-vis, fluorescence, and electrical conductivity measurements. Thin films show different properties compared to relatively thick films because surface phenomena prevail. The diffusion constant of the analyte through the films is estimated to be ~7 × 10^-14 cm^2/s from an analysis based on a phenomenological model. A bilayer LB film exposed to the analyte shows higher sensitivity in fluorescence quenching than a solution system, owing to fast interpolymer energy migration in the condensed phase. However, as the number of layers increases, the efficiency of fluorescence quenching decreases. The difference between a sensory system with emissive surface traps and one with bulk-distributed quenching traps is discussed.
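For orientation, a back-of-the-envelope Fickian estimate (not from the abstract itself): the characteristic penetration depth after time $t$ is $\ell \approx \sqrt{Dt}$, so with $D \approx 7 \times 10^{-14}$ cm$^2$/s an analyte needs roughly

    $t = \ell^2/D \approx (10^{-6}\ \mathrm{cm})^2 / (7 \times 10^{-14}\ \mathrm{cm^2/s}) \approx 14\ \mathrm{s}$

to traverse a 10 nm multilayer, consistent with few-layer LB films responding far faster than thick films.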
A. E. Pullen and Swager, T. M., "Regiospecific Copolyanilines from Substituted Oligoanilines: Electrochemical Comparisons with Random Copolyanilines", Macromolecules, vol. 34, pp. 812–816, 2001.
Regiospecific substituted polyanilines were prepared via electropolymerization of methoxy-substituted dimeric and trimeric oligoanilines. The oligoaniline monomers were synthesized utilizing Pd-catalyzed aryl amination cross-coupling chemistry, and the single-crystal X-ray structure of one of the oligomers is presented. The oligoaniline monomers were electropolymerized in 1 M H2SO4, and the electrochemical behavior and potential-dependent in situ conductivity of the regiospecific polyaniline films were compared to those of random copolymers obtained from solutions of aniline and o-anisidine of the same molar ratio. The regiospecific polyanilines exhibited higher conductivity, which may be attributed to a more crystalline and regular structure. Differences in the oxidation potentials of the polymers are observed depending on the degree of methoxy substitution.
J. Kim and Swager, T. M., "Control of conformational and interpolymer effects in conjugated polymers.", Nature, vol. 411, pp. 1030–4, 2001.
The role of conjugated polymers in emerging electronic, sensor and display technologies is rapidly expanding. In spite of extensive investigations, the intrinsic spectroscopic properties of conjugated polymers in precise conformational and spatial arrangements have remained elusive. The difficulties of obtaining such information are endemic to polymers, which often resist assembly into single crystals or organized structures owing to entropic and polydispersity considerations. Here we show that the conformation of individual polymers and interpolymer interactions in conjugated polymers can be controlled through the use of designed surfactant poly(p-phenylene-ethynylene) Langmuir films. We show that by mechanically inducing reversible conformational changes of these Langmuir monolayers, we can obtain the precise interrelationship of the intrinsic optical properties of a conjugated polymer and a single chain's conformation and/or interpolymer interactions. This method for controlling the structure of conjugated polymers and establishing their intrinsic spectroscopic properties should permit a more comprehensive understanding of fluorescent conjugated materials.
Z. Zhu and Swager, T. M., "Conjugated Polymers Containing 2,3-Dialkoxybenzene and Iptycene Building Blocks", Organic Letters, vol. 3, pp. 3471–3474, 2001.
New poly(phenylene ethynylene)s (PPEs) and poly(phenylene vinylene)s (PPVs) that are highly emissive in solution and thin films were prepared utilizing palladium-catalyzed cross-coupling between new 1,4-diiodo-2,3-dialkoxybenzene- and iptycene-containing monomers. The absorption and emission spectra of the resulting polymers consistently showed a significant blue shift relative to the corresponding polymer analogues containing 2,5-dialkoxyphenylenes.
T. M. Swager, Long, T. M., Williams, V., and Yang, J. - S., "Polymers containing iptycenes.", Polymeric Materials Science and Engineering, vol. 84, pp. 304–305, 2001.
Recent research on producing polyiptycenes (polymers containing triptycene units in the backbone) is discussed. The highly rigid structure of these polymers provides large degrees of free volume and a host of novel properties.
J. D. Tovar and Swager, T. M., "Poly(naphthodithiophene)s: robust, conductive electrochromics via tandem cyclization-polymerizations.", Advanced Materials (Weinheim, Germany), vol. 13, pp. 1775–1780, 2001.
To achieve highly stable electrochromic materials, 4,5-bis(2-/3-thiophenyl)-1,2-dialkoxybenzene precursors were prepared and cyclized (aromatized), and the resulting naphthodithiophene monomers were subjected to oxidative polymerization (chemical as well as electrochemical). The precursors, monomers, and polymers were characterized by spectroscopic and electrochemical methods.
T. M. Long and Swager, T. M., "Minimization of free volume. Alignment of triptycenes in liquid crystals and stretched polymers.", Advanced Materials (Weinheim, Germany), vol. 13, pp. 601–604, 2001.
Two triptycene derivatives were incorporated into a poly(vinyl chloride) (PVC) film or into the nematic phase of 4-(trans-4-pentylcyclohexyl)benzonitrile. The resulting supramolecular alignment was visualized by polarized UV-vis and IR spectroscopy. The director axis of the nematic liquid crystal, or the stretching direction of the polymer film, is parallel to the long axis of the guest molecule. The degree of alignment of the two triptycene derivatives was determined via the aspect ratio in comparison to anthracene. The alignment provided by the uniaxial PVC film was lower than that for the liquid crystal. A threading mechanism based on the minimization of free volume was proposed for the supramolecular arrangement.
A. C. Edrington, Urbas, A. M., DeRege, P., Chen, C. X., Swager, T. M., Hadjichristidis, N., Xenidou, M., Fetters, L. J., Joannopoulos, J. D., Fink, Y., and Thomas, E. L., "Polymer-based photonic crystals.", Advanced Materials (Weinheim, Germany), vol. 13, pp. 421–425, 2001.
A review, with 28 references, on the development of polymers as one-dimensional photonic crystals, with a focus on self-assembled block copolymers. The use of plasticizer and homopolymer blends of diblock copolymers to increase periodicity and the role of self-assembly in producing 2D and 3D photonic crystals are discussed. The use of inorganic nanoparticles to increase the dielectric contrast, and of a biasing field during self-assembly to control long-range domain order and orientation, are outlined. In-situ tunable materials fabricated via a mechanochromic materials system are also described. The inherent optical anisotropy of extruded polymer films and side-chain liquid-crystalline polymers provides flexibility for novel optical designs.
M. Takeuchi, Shioya, T., and Swager, T. M., "Allosteric fluoride anion recognition by a doubly strapped porphyrin.", Angewandte Chemie, International Edition, vol. 40, pp. 3372–3376, 2001.
A highly selective allosteric fluoride recognition system is described, consisting of a doubly strapped porphyrin (I) that contains two small hydrogen-bonding cavities unable to bind larger anions. Compound I was prepared by condensation of strap moieties with an αβαβ-tetrakis(2-aminophenyl)porphyrin under high-dilution conditions. By comparison with related strapped porphyrins containing four linkers, the cofacial distance between the phenyls in the straps and the porphyrin was estimated to be 3-4 Å, creating a small pocket suitable for F- binding but excluding larger anions. The proximate location of the strap moieties to the porphyrin in I was confirmed by NMR spectroscopy in CDCl3. The Soret band of I in dichloromethane was split into two bands of equal intensity, indicating that the strap presents a significant perturbation caused by π-π interactions of the bis(dithienyl)benzene moieties with the porphyrin. Upon addition of F- to a solution of I in DMSO, the split Soret band shifted to a single band with a simultaneous red shift of the Q band, whereas no absorbance changes were observed upon exposure to larger anions, including Cl-, Br-, I-, CN-, and H2PO3-. Plots of absorbance changes vs. TBAF showed a sigmoidal curve, indicating that F- binding to I is cooperative. This binding was analyzed with the Hill equation and further characterized using Scatchard plots. Given the observed 1:2 binding stoichiometry, energy-minimized geometries were computed, suggesting that the cavities in I are contracted; the calculated structure with two bound F- showed an expansion of the cavity that separated the planes of the porphyrin and the straps by 4.9 Å. From studies of the influence of a second coexisting halide ion on the UV/Vis spectrum of the I(F-)2 complex, it was estimated that the affinity of I for F- is ~10^4 times higher than for the other halides. A conducting polymer based upon I displays both electrochemical and conductivity responses to F- and no response to Cl-.
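Background, assuming the standard form of the Hill analysis mentioned above (the abstract gives no explicit equation): the fractional occupancy of the receptor is fit as

    $\theta = [\mathrm{F}^-]^n / (K_d + [\mathrm{F}^-]^n)$,

where a Hill coefficient $n > 1$ (bounded here by the 1:2 stoichiometry, $n \leq 2$) reproduces exactly the sigmoidal binding curve described, the signature of positive cooperativity.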
C. J. Cumming, Aker, C., Fisher, M., Fok, M., la Grone, M. J., Reust, D., Rockley, M. G., Swager, T. M., Towers, E., and Williams, V., "Using novel fluorescent polymers as sensory materials for above-ground sensing of chemical signature compounds emanating from buried landmines", IEEE Transactions on Geoscience and Remote Sensing, vol. 39, pp. 1119–1128, 2001.
R. Gimenez and Swager, T. M., "Dinuclear pincer-palladium(II) complexes and their use as homogeneous or heterogeneous catalyst for the aldol reaction of methyl isocyanoacetate.", Journal of Molecular Catalysis A: Chemical, vol. 166, pp. 265–273, 2001.
Two new bimetallic Pd complexes I (R = t-Bu, Ph) have been synthesized and characterized, and their catalytic activity tested in the aldol reaction of aldehydes with methyl isocyanoacetate. Each palladium atom is coordinated to an SCS-type ligand, and the two pincer units are linked by a chiral spacer. The catalytic aldol reaction of methyl isocyanoacetate with aldehydes proceeds quickly, but no significant diastereoselectivity or enantioselectivity is found. Comparison with a homologous mononuclear Pd complex shows no differences from the bimetallic compounds, leading to the conclusion that there is no cooperativity between the metal centers. Two silica-supported catalysts prepared with a bimetallic compound show catalytic activity with very minor enantioselectivity.
A. Vigalok and Swager, T. M., "Conducting polymers of tungsten(VI)-oxo calixarene: Intercalation of neutral organic guests.", Advanced Materials (Weinheim, Germany), vol. 14, pp. 368–371, 2002.
A series of host-guest complexes of tungsten(VI)-oxo calixarenes substituted with two bithiophene groups at the upper rim were prepared. The bithienyl tungsten(VI)-oxo calixarenes were prepared by Stille coupling of the complex with 2-tributylstannyl bithiophene and hydrolysis of the glycol ligand, yielding the monomer precursor as the DMSO adduct. Upon removal of DMSO, the complex readily reacted with substituted formamides to produce the host-guest adduct monomers. These adducts undergo electrochemical polymerization that results in conducting polymers containing guest molecules inside the calixarene cavity. The conductivity of the polymers depends on the nature of the guest, being significantly lower for the disubstituted formamide complexes.
S. Zahn and Swager, T. M., "Three-dimensional electronic delocalization in chiral conjugated polymers.", Angewandte Chemie, International Edition, vol. 41, pp. 4225–30, 2002.
T. Shioya and Swager, T. M., "A reversible resistivity-based nitric oxide sensor", Chemical Communications, pp. 1364–1365, 2002.
A sensor for nitric oxide is reported that uses a novel redox-matching mechanism to induce a resistance change upon binding of this important ligand to cobalt.
A. J. Paraskos and Swager, T. M., "Effects of Desymmetrization on Thiophene-Based Bent-Rod Mesogens.", Chemistry of Materials, vol. 14, pp. 4543–4549, 2002.
The synthesis and characterization of various thiophene-based bent-rod liquid crystals are reported, and the effects of varying the lateral dipole and of core desymmetrization on mesophase behavior are described. Incorporation of the desymmetrized core 7 into the molecular framework has very different consequences depending on whether n-alkoxy or tetracatenar-type end groups are used. Tetracatenar-type mesogens 8-11 are significantly less mesogenic than the previously reported symmetric series 3. When symmetric straight-chain compounds 13-17 and unsymmetric straight-chain compounds 18-21 were studied, however, the desymmetrized core gave rise to mesophases with much broader temperature ranges. Variable-temperature X-ray diffraction of these compounds suggests the formation of antiparallel dimers of molecules within the liquid crystal phase, which may explain the relatively stable mesophases formed by these compounds and their incompatibility with chiral induction. The effects of altering the lateral substituents were also explored; 3,4-difluorothiophene-based compounds 24-27 exhibit broad nematic mesophases.
T. M. Long and Swager, T. M., "Triptycene-containing bis(phenylethynyl)benzene nematic liquid crystals.", Journal of Materials Chemistry, vol. 12, pp. 3407–3412, 2002.
A new class of nematic liquid crystals with triptycenes built into a p-dialkoxybis(phenylethynyl)benzene mesomorphic core was prepared. The triptycenes are appended on the center or terminal ring of the mesogen, leading to symmetric and reduced-symmetry liquid crystals, respectively. Both types displayed monotropic behavior, with the asymmetric compounds having unusual phase behavior, lacking distinct crystallization transitions, and forming a glassy mesophase. A chiral analogue is nonmesomorphic but induces chiral nematic phases when doped into achiral triptycene-containing analogues. Rotation is physically hindered normal to the director; hence, this new liquid crystal architecture may allow the synthesis of a single-component liquid crystal that displays a biaxial nematic phase.
J. D. Tovar and Swager, T. M., "Exploiting the versatility of organometallic cross-coupling reactions for entry into extended aromatic systems.", Journal of Organometallic Chemistry, vol. 653, pp. 215–222, 2002.
Several pendant arenes, e.g., I (R = Me, MeO), were prepared using a variety of Pd-catalyzed cross-coupling techniques. Under strongly acidic conditions, these arenes underwent double annulations to afford the corresponding polycyclic aromatics, e.g., II, in good to excellent yields. The scope of this chemistry also includes heteroaromatic and heteroatom-substituted bis(arylethynyl)arenes, thus providing an electronically diverse array of extended aromatic systems.
H. S. Eichhorn, Paraskos, A. J., Kishikawa, K., and Swager, T. M., "The Interplay of Bent-Shape, Lateral Dipole and Chirality in Thiophene Based Di-, Tri-, and Tetracatenar Liquid Crystals", Journal of the American Chemical Society, vol. 124, pp. 12742–12751, 2002.
A range of mesogenic mols. varying in both bend angle and strength of lateral dipole were synthesized, and their phase behavior was characterized by polarizing microscopy, thermal anal., x-ray diffraction, and electrooptical measurements. The authors find the general destabilization of the liq. crystallinity caused by strong lateral dipolar groups and the bent mol. shape are off-set in mesomorphic tetracatenars, which display stable nematic, smectic, columnar, and cubic mesophases. The broad mesomorphism of the tetracatenars contg. lateral dipoles and their incompatibility with chiral induction are explained by considering that loosely correlated dimers exist within the mesophases. Chiral mesophases of derivs. with strong lateral dipoles were achieved by attaching fewer or different side chains to each end of the mesogen. [on SciFinder(R)]
Z. Zhu and Swager, T. M., "Conjugated Polymer Liquid Crystal Solutions: Control of Conformation and Alignment.", Journal of the American Chemical Society, vol. 124, pp. 9670–9671, 2002.
Fluorescent poly(phenylene vinylene)s and poly(phenylene ethynylene)s contg. rigid triptycene groups were prepd. using Suzuki and Sonogashira protocols. The triptycene groups impart extraordinary soly. to conjugated polymers even in the absence of flexible side chains; addnl. t-Bu groups were introduced, which were found to further enhance soly. Stable solns. of the poly(phenylene vinylene)s and poly(phenylene ethynylene)s in the nematic liq. crystal 1-(trans-4-hexylcyclohexyl)-4-isothiocyanatobenzene (6CHBT) were obtained. [on SciFinder(R)]
J. Kim, Levitsky, I. A., McQuade, T. D., and Swager, T. M., "Structural Control in Thin Layers of Poly(p-phenyleneethynylene)s: Photophysical Studies of Langmuir and Langmuir-Blodgett Films.", Journal of the American Chemical Society, vol. 124, pp. 7710–7718, 2002.
We present the relationship between the spatial arrangement and the photophys. properties of fluorescent polymers in thin films with controlled structures. Eight surfactant poly(p-phenyleneethynylene)s were designed and studied. These detailed studies of the behavior of the polymers at the air-water interface, and of the photophys. properties of their transferred LB films, revealed key structure-property relationships. Some of the polymers displayed $π$-aggregates that are characteristic of an edge-on structure at the air-water interface. Monolayer LB films of these polymers showed greatly reduced quantum yields relative to soln. values. Other polymers exhibited a highly emissive face-on structure at the air-water interface, and did not form $π$-aggregates. The combination of pressure-area isotherms and the surface pressure dependent in situ UV-vis spectra of the polymers at the air-water interface revealed different behavioral details. In addn., the UV-vis spectra, fluorescence spectra, and quantum yields of the LB films provide design principles for making highly emissive films. [on SciFinder(R)]
J. D. Tovar, Rose, A., and Swager, T. M., "Functionalizable Polycyclic Aromatics through Oxidative Cyclization of Pendant Thiophenes.", Journal of the American Chemical Society, vol. 124, pp. 7762–7769, 2002.
We present a general strategy for obtaining large sulfur-contg. polycyclic aroms. from thienyl precursors through iron(III) chloride mediated oxidative cyclizations. By placing thienyl moieties in close proximity to adjacent arenes, we have directed the oxidized intermediates into controlled cyclization pathways, effectively suppressing polymer formation. Utilizing these cyclized compds. and their thienyl precursors, we have studied cyclization/polymn. pathways of polymers such as poly(2). The unsubstituted positions $\alpha$ to the sulfur atoms within these arom. cores allowed for efficient halogenation and further functionalization. As a demonstration, we prepd. a series of arylene-ethynylene polymers with varying degrees of chromophore aromatization and used them to probe the effects of synthetically imposed rigidity on polymer photophys. behavior. The symmetries and effective conjugation pathways within the monomers play a key role in detg. photophys. properties. We obsd. that rigid, aromatized chromophores generally led to increased excited-state lifetimes by decreasing radiative rates of fluorescence decay. [on SciFinder(R)]
T. M. Long and Swager, T. M., "Using 'Internal Free Volume' to Increase Chromophore Alignment.", Journal of the American Chemical Society, vol. 124, pp. 3826–3827, 2002.
Triptycenes have general applicability for increasing the alignment of fluorescent and dichroic dyes in LC hosts. Dyes contg. varying nos. of triptycenes were synthesized to study the effect of free-vol. alignment of triptycenes on the alignment of dyes. These dyes were designed such that multiple triptycenes could be incorporated and the triptycene-free vol. is coincident to the aspect ratio of the dye, allowing a cooperative effect to increase their overall av. alignment. With increasing triptycene incorporation, a stepwise increase in the alignment parameters of each dye was seen. It was also found that the attachment of one triptycene group has a negligible effect on the optical switching response times of the dyes. This can be a powerful tool for designing dyes with higher alignments for a variety of applications including guest-host reflective LCDs and holog. data storage. [on SciFinder(R)]
W. Haase, Kilian, D., Athanassopoulou, M. A., Knawby, D., Swager, T. M., and Wrobel, S., "Enhanced conductivity and dielectric absorption in discotic liquid crystalline columnar phases of a vanadyl complex.", Liquid Crystals, vol. 29, pp. 133–139, 2002.
A liq. cryst. vanadyl complex was studied by DSC, polarizing optical microscopy, the reversal current technique, x-ray diffraction, and frequency domain dielec. spectroscopy. The compd. exhibits three columnar phases: rectangular ordered (Colro), rectangular disordered (Colrd), and hexagonal disordered (Colhd), all of which show a dielec. relaxation process at low frequencies. In the Colro low temp. phase this process seems to be connected with a slow relaxation of polarized polymeric chains inside the columns (mHz frequency range). However, in the Colhd high temp. disordered phase this relaxation is faster (Hz range). It is interesting that the liq. cryst. phases studied show enhanced cond., which changes by four orders of magnitude from 10⁻⁹ S m⁻¹ in the orientationally disordered crystal (an ODIC phase) to 10⁻⁵ S m⁻¹ in the Colhd high temp. phase. Such a value of the cond. is typical for semiconducting materials. [on SciFinder(R)]
J. H. Moon and Swager, T. M., "Poly(p-phenylene ethynylene) Brushes.", Macromolecules, vol. 35, pp. 6086–6089, 2002.
A novel approach to the fabrication of poly(p-phenylene ethynylene) (PPE) brushes on oxidized silicon surfaces using ring-opening metathesis polymn. was demonstrated. Chem. bound 71-110 Å thick PPE brushes were constrained from forming aggregates and gave higher emission quantum yields than spin-cast films. [on SciFinder(R)]
T. M. Swager and Wosnick, J. H., "Self-amplifying semiconducting polymers for chemical sensors.", MRS Bulletin, vol. 27, pp. 446–450, 2002.
The ability of excited states (excitons) to migrate rapidly and efficiently through conjugated polymers makes these materials ideal for use in sensors based on fluorescence quenching or amplification of fluorescence signals. The structural features of the conducting polymers allow for design of highly sensitive fluorescent sensors for specific analytes such as the explosive trinitrotoluene (TNT) and to create assemblies that control energy transfer along a predetd. path. The principles involved have broad utility in the design of sensory materials and of electronic devices and display components based on electronic polymers. [on SciFinder(R)]
T. Swager, "Polymer light-emitting devices: Light from insulated organic wires.", Nature Materials, vol. 1, pp. 151–152, 2002.
In a contribution to an article in the same issue, the advantages of the rotaxane approach (a precise mol. insulation technique) in the design of polymer light-emitting devices are highlighted. Semiconducting polymers are threaded through cyclodextrins and are capable of both emitting light and transporting charge. [on SciFinder(R)]
I. A. Levitsky, Kim, J., and Swager, T. M., "Energy Migration in a Poly(phenylene ethynylene): Determination of Interpolymer Transport in Anisotropic Langmuir-Blodgett Films.", Journal of the American Chemical Society, vol. 121, pp. 1466–1472, 1999.
The photophys. and energy transport properties of poly(p-phenylene ethynylene) were studied in thin films. Highly aligned films of a precise thickness, prepd. by sequential monolayer deposition using the Langmuir-Blodgett technique, were surface modified with luminescent traps (Acridine Orange, AO) for energy transfer studies. The degree of energy transfer to the traps was studied as a function of the AO concn. and the no. of polymer layers. An increased efficiency of energy transfer to the traps was obsd. with increasing nos. of layers up to an approx. thickness of 16 layers. This behavior is consistent with a transition to a three-dimensional energy migration topol. A phenomenol. model for the transport was proposed, and solns. were obtained by numerical methods. The model yields a fast (>6 × 10¹¹ s⁻¹) rate of energy transfer between polymer layers and a diffusion length of more than 100 Å in the Z direction (normal to the film surface). [on SciFinder(R)]
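As an illustration of how a transport model of this kind can be explored numerically, the toy Monte Carlo sketch below estimates the root-mean-square exciton displacement normal to the film from an interlayer hop rate and an intrinsic decay rate. Only the order of magnitude of the hop rate comes from the abstract; the decay rate and layer thickness are assumed placeholder values, not the paper's fitted parameters.

```python
import random

# Toy 1D random-walk sketch of exciton migration normal to the film (Z axis).
# HOP takes its order of magnitude from the abstract; DECAY and
# LAYER_THICKNESS are assumed illustrative values, not fitted parameters.
HOP = 6e11              # interlayer energy-transfer rate, s^-1
DECAY = 2e9             # intrinsic exciton decay rate, s^-1 (assumed)
LAYER_THICKNESS = 7.0   # monolayer thickness in Angstroms (assumed)

def rms_z_displacement(n_excitons=20000):
    # Probability that the exciton hops again before it decays.
    p_hop = HOP / (HOP + DECAY)
    total_sq = 0.0
    for _ in range(n_excitons):
        z = 0
        while random.random() < p_hop:
            z += random.choice((-1, 1))  # unbiased hop to an adjacent layer
        total_sq += z * z
    return (total_sq / n_excitons) ** 0.5 * LAYER_THICKNESS

# With these rates an exciton makes ~300 hops on average, giving an RMS
# displacement of roughly sqrt(300) layers, i.e. on the order of 100 Angstroms,
# consistent in magnitude with the diffusion length quoted above.
print(f"RMS Z displacement ~ {rms_z_displacement():.0f} Angstroms")
```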
J. Kim, McHugh, S. K., and Swager, T. M., "Nanoscale Fibrils and Grids: Aggregated Structures from Rigid-Rod Conjugated Polymers.", Macromolecules, vol. 32, pp. 1500–1507, 1999.
Langmuir-Blodgett (LB) mol. processing of conjugated polymers [poly(phenylene ethynylenes)] into highly aligned films has revealed conditions for the formation of liq. cryst. monolayer films that structurally evolve into fibril aggregates. The structural requirements for poly(phenylene ethynylene)s to display liq. cryst. phases capable of alignment by LB methods were detd. The reconstruction of monolayers into fibril structures requires a low glass-transition temp. (Tg), weak surface anchoring, and a monolayer with a high energy that can be stabilized by reorganization. This assembly of polymers into aggregated structures produces rigid structural units analogous to naturally occurring fibrous proteins such as collagen and elastin. These oriented, shape-persistent nanoscale structures create new possibilities for the construction of complex supramol. structures, and this capability was demonstrated by the formation of a nanoscale grid. [on SciFinder(R)]
H. W. Gibson, Dotson, D. L., Marand, H., and Swager, T. M., "Synthesis and characterization of liquid crystalline triaryloxy-s-triazines.", Molecular Crystals and Liquid Crystals Science and Technology, Section A: Molecular Crystals and Liquid Crystals, vol. 326, pp. 113–138, 1999.
2,4,6-Tris[p-(p'-n-alkylphenyliminomethylene)phenoxy]-s-triazines (3) are calamitic liq. crystals based on x-ray diffraction patterns, optical textures, and mol. modeling results. Replacement of the Schiff's base moieties in the mesogenic arms to form 2,4,6-tris(p-n-octyloxycarbonylphenoxy)-s-triazine (7) did not result in a liq. cryst. compd. The tricarbonate 2,4,6-tris(p-cholesteryloxycarbonyloxyphenoxy)-s-triazine (11) is liq. cryst. based on the optical textures obsd., although the mesophase type could not be detd. due to the high melting transition and thermal instability of this compd. The use of six ester groups around the triazine nucleus, as 2,4,6-tris(3,5-dicarboalkoxyphenoxy)-s-triazines (13), resulted in compds. which displayed normal melting behavior and no detectable mesomorphism. [on SciFinder(R)]
T. M. Swager, "Design of sensory polymers.", Polymeric Materials Science and Engineering, vol. 80, pp. 248–249, 1999.
A review with 11 refs. is given on the authors' work with emphasis on fluorescence-based sensor schemes and new chemoresistive transducer mechanisms. [on SciFinder(R)]
R. P. Kingsborough and Swager, T. M., "Transition metals in polymeric $π$-conjugated organic frameworks.", Progress in Inorganic Chemistry, vol. 48, pp. 123–231, 1999.
A review with 356 refs. on the wide variety of transition metal-centered conjugated/conducting polymers and networks. Organometallic-based systems, such as organometallic acetylenic polymers, metallacycle polymers, polyferrocenylenes, transition metal aryl complexes, pendant organometallic polymers, pendant ferrocene polymers, and organometallic coordination polymers, as well as various types of coordination complexes, such as one-dimensional and two-dimensional phthalocyanine polymers, conjugated porphyrin polymers, polymers derived from Schiff-base complexes, polymeric bipyridine and related complexes, and diisocyanoaryl ligands, are described. [on SciFinder(R)]
F. Meghdadi, Leising, G., Wang, Y. Z., Gebler, D. D., Swager, T. M., and Epstein, A. J., "Full color LEDs with parahexaphenyl and polypyridine based copolymer.", Synthetic Metals, vol. 102, pp. 1085–1086, 1999.
Results are presented of photoluminescence (PL) and electroluminescence (EL) studies on color-variable LEDs based on parahexaphenyl (PHP) and polypyridine (PPy)-related polymers and copolymers. In the fabricated bilayer and three-layer heterostructure EL devices, PPy and poly(pyridyl vinylene phenylene vinylene) (PPyVPV) are used as electron and hole transport layers, resp. For the ITO/PPyVPV/PHP/PPy/Al EL device, multicolor emission over a wide range of the visible spectrum, from blue (425 nm) to green (530 nm) and near-IR (700 nm), was obtained. The optical and elec. characteristics of the bilayer and multilayer EL devices are presented and discussed. [on SciFinder(R)]
Y. Z. Wang, Sun, R. G., Meghdadi, F., Leising, G., Swager, T. M., and Epstein, A. J., "Color variable multilayer light emitting devices based on conjugated polymers and oligomers.", Synthetic Metals, vol. 102, pp. 889–892, 1999.
The authors report the fabrication and study of color-variable multilayer light-emitting devices based on pyridine-contg. conjugated polymers and the para-sexiphenyl (6P) oligomer. Polarity-controlled two-color devices were fabricated by sandwiching the emitting layer between the emeraldine base and sulfonated (SPAN) forms of polyaniline. The emitting layer typically is a blend of two polymers, one of which is a pyridine-based copolymer (PPyVPV). The devices can be operated under either polarity of driving voltage, with different colors of light being emitted from different locations. Under forward bias, red light is generated from the PPyVPV/SPAN interface. Under reverse bias, light is generated from the bulk of the emitting layer, whose color depends on the materials used. Voltage-controlled multicolor devices were fabricated by combining the pyridine-based polymers with the 6P oligomer. Voltage-dependent multicolor emission was obsd. in both bilayer and trilayer devices. The emission colors of single devices cover a wide range of the visible spectrum, with CIE color coordinates varying from blue to white to green with increasing voltage. [on SciFinder(R)]
T. M. Swager, "Handbook of liquid crystals. (4-volume set.) Edited by D. Demus, J. Goodby, G. W. Gray, H. W. Spiess and V. Vill.", Angewandte Chemie, International Edition, vol. 38, pp. 2279–2281, 1999.
Y. Z. Wang, Sun, R. G., Wang, D. K., Swager, T. M., and Epstein, A. J., "Polarity- and voltage-controlled color-variable light-emitting devices based on conjugated polymers.", Applied Physics Letters, vol. 74, pp. 2593–2595, 1999.
Voltage- and polarity-controlled multilayer multicolor light-emitting test devices based on pyridine-contg. conjugated polymers and derivs. of polyacetylene were fabricated and evaluated. The polymer blends of poly(pyridyl vinylene deriv.) and poly(bis(hexadecyloxy)phenylene-vinylene) and poly(di-Ph Bu acetylene) or poly(hexyl Ph acetylene) are used as the emitting material. Sulfonated poly(o-methoxyaniline) is the redox layer and ITO and Al were used as electrodes. The devices emit red light under forward bias and multiple colors of light (from orange-red to green) under reverse bias. The colors under reverse bias are controlled by the magnitude of the applied voltages. [on SciFinder(R)]
K. Kishikawa, Harris, M. C., and Swager, T. M., "Nematic Liquid Crystals with Bent-Rod Shapes: Mesomorphic Thiophenes with Lateral Dipole Moments.", Chemistry of Materials, vol. 11, pp. 867–871, 1999.
A no. of bent-rod mols. of thiophene derivs. were prepd., and their phase behavior was studied. Most of these compds. exhibit a nematic phase. The crystal structure of 3,4-dicyano-2,5-bis[(methoxyphenyl)ethynyl]thiophene shows that an antiparallel arrangement of the lateral dipoles is most likely. Crystals are monoclinic, space group P2₁/n, with a 11.5757(2), b 10.03300(10), c 17.3438(4) Å, $\beta$ 100.4580(10)°. [on SciFinder(R)]
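For readers who want to sanity-check the reported cell parameters, the monoclinic cell volume follows from the standard formula (this arithmetic is a check not given in the abstract):

$$V = abc\sin\beta \approx 11.5757 \times 10.0330 \times 17.3438 \times \sin(100.458^\circ)\ \text{Å}^3 \approx 1.98 \times 10^{3}\ \text{Å}^3.$$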
S. T. Trzaska, Zheng, H., and Swager, T. M., "Eight-Vertex Metallomesogens: Zirconium Tetrakis-$\beta$-diketonate Liquid Crystals.", Chemistry of Materials, vol. 11, pp. 130–134, 1999.
Square antiprism Zr tetrakis-$\beta$-diketonate complexes with 24 alkoxy chains organize in columnar liq. crystal phases. X-ray diffraction and polarized microscopy studies on complexes with n-alkoxy side chains revealed a columnar hexagonal phase. These sandwich-shaped compds. have much lower transition temps. than their discotic analogs, which leads to the desirable attribute of room temp. liq. crystallinity. The addn. of two branching Me groups to the alkoxy chains dramatically alters the properties of these materials. The branched side chain analogs exhibited a higher clearing point while the liq. crystallinity is maintained at room temp. The branching Me groups also induced a bulk reorganization of the material to a rare columnar oblique phase (Colob). [on SciFinder(R)]
S. S. Zhu, Kingsborough, R. P., and Swager, T. M., "Conducting redox polymers: investigations of polythiophene-Ru(bpy)₃ⁿ⁺ hybrid materials.", Journal of Materials Chemistry, vol. 9, pp. 2123–2131, 1999.
A series of thiophene-appended Ru(II)(bpy)₃ derivs., Ru(1)₃, Ru(2)₃, Ru(3)₃, Ru(bpy)₂(1), Ru(bpy)₂(2), and their resulting polymers were synthesized and characterized. The bpy ligands 5,5'-bis(5-(2,2'-bithienyl))-2,2'-bipyridine (1), 4,4'-bis(5-(2,2'-bithienyl))-2,2'-bipyridine (2), and 4-(5-(2,2'-bithienyl))-2,2'-bipyridine (3) all contain electrochem. polymerizable bithienyl moieties. The monomers Ru(2)₃, Ru(3)₃, Ru(bpy)₂(1), and Ru(bpy)₂(2) display spectroscopic features similar to the ligand-based and MLCT (metal-to-ligand charge-transfer) bands found for Ru(bpy)₃. The cyclic voltammograms of all of these polymers display both metal-centered and thiophene-based electroactivity. High redox cond. was found in poly(Ru(2)₃) and poly(Ru(3)₃) for both the thiophene-based oxidn. and metal-based redn. processes. These results indicate that the polymers display charge localization at both the metal complexes and the tetrathienyl connecting units. The degree of interconnection (no. of linkages) and the substitution pattern were found to control the cond. of these polymers. The highest cond. (3.3 × 10⁻³ S cm⁻¹) was found for poly(Ru(2)₃), which can form up to six linkages with other ruthenium complexes and possesses a 4,4'-substitution pattern that allows effective orbital overlap of the conjugated polymer backbone with the ruthenium centers. [on SciFinder(R)]
J. D. Tovar and Swager, T. M., "Pyrylium Salts via Electrophilic Cyclization: Applications for Novel 3-Arylisoquinoline Syntheses.", Journal of Organic Chemistry, vol. 64, pp. 6499–6504, 1999.
o-Ethynylphenylcarbonyl compds. undergo cyclization to 2-benzopyrylium salts on treatment with acid. Treatment of these compds. with NH3 gave isoquinolines. 1-Ethoxy-2-benzopyrylium salts were partially hydrolyzed to the hydroxy analogs, and 1-dimethylaminoisoquinolines were accompanied by the amino analogs. [on SciFinder(R)]
R. P. Kingsborough and Swager, T. M., "Polythiophene Hybrids of Transition-Metal Bis(salicylidenimine)s: Correlation between Structure and Electronic Properties.", Journal of the American Chemical Society, vol. 121, pp. 8825–8834, 1999.
The synthesis, electrochem., and spectroscopic behavior of tetradentate bis(salicylidenimine) transition metal complexes are reported. Appending these complexes with 3,4-ethylenedioxythiophene (EDOT) moieties allows for electrochem. polymn. at much lower potentials than for the parent SALEN complexes. The resulting polymers display well-defined org.-based electrochem. at potentials <0.5 V vs. Fc/Fc+. The EDOT-modified N,N'-ethylene bis(salicylidene), N,N'-o-phenylene bis(salicylidene), and N,N'-trans-cyclohexylene bis(salicylidene) complexes I and II, III and IV, and V and VI, resp., display cyclic voltammograms with four org.-based redox waves. Increasing the interchain sepn. through the use of nonplanar bis(salicylidene) ligands results in only two redox waves. The cond. of the copper-based polymers decreases with increasing interchain spacing, with the max. cond. being 92 S cm⁻¹ for poly(I) and 16 S cm⁻¹ for the stilbenediamine complex polymer. The nickel complexes were less sensitive to increased interchain sepn. and showed cond. greater than 48 S cm⁻¹ regardless of interchain spacing, and near 100 S cm⁻¹ in the case of poly(IV). In situ spectroelectrochem. was consistent with the segmented electronic nature of these polymers. Cyclic voltammetry of an analogous uranyl complex revealed that two electrons per repeat unit were removed during oxidn. Electrochem. and in situ EPR spectroscopic studies suggest that $π$-aggregation processes take place in those polymers in which close interchain spacing is allowed. [on SciFinder(R)]
S. T. Trzaska, Hsu, H. - F., and Swager, T. M., "Cooperative Chirality in Columnar Liquid Crystals: Studies of Fluxional Octahedral Metallomesogens.", Journal of the American Chemical Society, vol. 121, pp. 4518–4519, 1999.
Liq. crystals displaying a Colh (columnar hexagonal) phase can display a cooperative chiral state. In the authors' model the hexagonal symmetry is an integral feature that favors the chiral state. This latter point is also supported by recent work on fluxional eight-vertex Zr⁴⁺ mesogens (Trzaska et al., 1999) with the same ligands as used in this paper, wherein no increase was found in the CD signal on cooling from the isotropic phase into the oblique columnar phase. [on SciFinder(R)]
J. Kim, McQuade, T. D., McHugh, S. K., and Swager, T. M., "Ion-Specific Aggregation in Conjugated Polymers: Highly Sensitive and Selective Fluorescent Ion Chemosensors", Angewandte Chemie, International Edition, vol. 39, pp. 3868–3872, 2000.
A new transduction mechanism based on the aggregation of conjugated sensory polymers induced by K+ ions is reported; this new system displays enhanced sensitivity because of energy migration processes and has a high selectivity for K+ over Na+ ions. The poly(p-phenylene ethynylene)s were synthesized by the Sonogashira-Hagihara coupling reaction. [on SciFinder(R)]
R. P. Kingsborough and Swager, T. M., "A highly conductive macrocycle-linked metallophthalocyanine polymer.", Angewandte Chemie, International Edition, vol. 39, pp. 2897–2900, 2000.
A highly electroactive polythiophene-metallophthalocyanine hybrid material was prepd. which exhibits cond. more than three orders of magnitude higher than that of known macrocycle-connected polymers. The triptycene-contg. phthalocyanine macrocycle monomers have electrochem. polymerizable thiophene moieties on alternating subunits, which gives a nearly linear polymer backbone with the metal center in direct electronic communication with the conjugated polymer backbone. The electrochem. polymn. proceeds through oxidative coupling of the pendant thiophenes. The cyclic voltammogram of the Ni complex polymer displays prominent features in the reductive region at -1.35 and -1.75 V, assignable to the Ni redox and ligand-centered processes, resp. The polymer shows both metal-centered electroactivity and high cond., although the Ni redox wave does not contribute to the cond. [on SciFinder(R)]
J. Buey and Swager, T. M., "Three-strand conducting ladder polymers: two-step electropolymerization of metallorotaxanes.", Angewandte Chemie, International Edition, vol. 39, pp. 608–612, 2000.
Rotaxane recognition and assembly methods were used to produce new architectures based on a metallorotaxane (Cu, Zn) monomer bearing two independently electropolymerizable groups in the threading unit and the macrocycle host. The controlled assembly and polymn. of the metallorotaxane monomers leads to three-stranded ladder polymers, wherein one of the conjugated chains is sandwiched between two other chains. In this structure the internal polymer behaves as a partially isolated mol. wire when the outer strands are in an insulating state. Neglecting minor effects arising from conformational issues (there are two enantiomeric conformations), the polymer formed from the threading unit is twice as long as the polymer contg. the macrocycle. When the outer polymer chains are insulating, the electroactivity of the Cu or Zn ions can assist in interchain transport. [on SciFinder(R)]
T. D. McQuade, Pullen, A. E., and Swager, T. M., "Conjugated polymer-based chemical sensors.", Chemical reviews, vol. 100, pp. 2537–2574, 2000.
A review with 359 refs. is given on conjugated polymers with synthetic receptors and functional groups, biol. sensors, conjugated polymers with entrapped materials to aid in specificity, and unmodified conjugated polymers as sensors. [on SciFinder(R)]
R. P. Kingsborough and Swager, T. M., "Electrocatalytic Conducting Polymers: Oxygen Reduction by a Polythiophene-Cobalt Salen Hybrid.", Chemistry of Materials, vol. 12, pp. 872–874, 2000.
Highly conducting polythiophene-cobalt salen hybrid material catalyzed the redn. of O2. Rotating ring-disk measurements suggested a selective four-electron redn. process took place to produce H2O as the sole product. [on SciFinder(R)]
J. H. Wosnick and Swager, T. M., "Molecular photonic and electronic circuitry for ultra-sensitive chemical sensors", Current opinion in chemical biology, vol. 4, pp. 715–720, 2000.
Molecular wires have progressed from an intellectual curiosity to become the basis for chemical sensors with unprecedented sensitivity. Particularly exciting opportunities are those that make use of biological superstructures to effect conduction through assemblies of molecular wires. [on SciFinder(R)]
V. E. Williams and Swager, T. M., "An improved synthesis of poly(p-phenylenebutadiynylene)s.", Journal of Polymer Science, Part A: Polymer Chemistry, vol. 38, pp. 4669–4676, 2000.
A new methodol. is described for the synthesis of poly(p-phenylenebutadiynylene)s based on the Pd/Cu-catalyzed, benzoquinone-mediated homocoupling of terminal acetylenes. Homopolymers synthesized from the 2,5-dialkoxy-1,4-diethynylbenzene monomers 1,4-bis(decyloxy)-2,5-diethynylbenzene and 1,4-bis(hexadecyloxy)-2,5-diethynylbenzene were largely insol., with the sol. portion from the polymn. of 1,4-bis(hexadecyloxy)-2,5-diethynylbenzene exhibiting a no.-av. mol. wt. of 14,000. Completely sol. polymers were obtained from these precursors by the random copolymn. of these monomers. The materials exhibited no.-av. mol. wts. ranging from 67,000 to over 150,000. The UV-visible and emission spectra of these polymers were examd. and found to be very similar to those of structurally analogous poly(p-phenyleneethynylene)s and smaller poly(p-phenylenebutadiynylene)s reported by Kijima et al. [on SciFinder(R)]
T. D. McQuade, Hegedus, A. H., and Swager, T. M., "Signal Amplification of a 'Turn-On' Sensor: Harvesting the Light Captured by a Conjugated Polymer.", Journal of the American Chemical Society, vol. 122, pp. 12389–12390, 2000.
A chemosensor design is presented which substantially amplifies the output of a pH-sensitive fluorophore using energy harvested from a conjugated polymer. The construction of thin films was accomplished via layer-by-layer deposition of a new water-sol., cationic poly(p-phenylene ethynylene) (PPE) and an anionic polyacrylate. [on SciFinder(R)]
D. L. Simone and Swager, T. M., "A Conducting Poly(cyclophane) and Its Poly([2]-catenane).", Journal of the American Chemical Society, vol. 122, pp. 9300–9301, 2000.
Cyclization of 1,4-bis[2-(2-(2-(2-toluene-p-sulfonylethoxy)ethoxy)ethoxy)ethoxy]benzene with 1,4-diiodo-2,5-dihydroxybenzene, followed by coupling with (3,4-ethylenedioxy)thiophene, produced 1,4-bis((3,4-ethylenedioxy)thiophene)-7,10,13,16,19,26,29,32,35,38-decaoxa[13.13]paracyclophane (I), a highly fluorescent, electropolymerizable monomer and electron donor. Addn. of Paraquat to I in CH3CN resulted in formation of a deep-green colored soln. with a charge-transfer absorption band at $\lambda$ = 589 nm ($\epsilon$ = 204 M⁻¹ cm⁻¹), indicative of the highly electron-donating nature of the thiophene-phenylene-thiophene arom. scaffold. The tetrakis-hexafluorophosphate I-[2]-catenane complex (II) was prepd. by reaction of I with NH4PF6 and 1,4-bis(bromomethyl)benzene. The deep-green complex II exhibits a charge-transfer absorption at $\lambda$ = 626 nm ($\epsilon$ = 1230 M⁻¹ cm⁻¹), which is red-shifted relative to that of the Paraquat:I complex, indicating greater intimacy between donor and acceptor in the [2]-catenane. The crystal structure of II indicates an interlocked $π$-stacked geometry with the inner bipyridinium moieties within the cyclophane cavity and the outer bipyridinium on the periphery of the cavity. Electrochem. polymn. of I and II proceeds via two propagating sites at the 5-position of 3,4-ethylenedioxythiophene and affords conducting polymers. Oxidn. and redn. potentials for poly-II are identical to those of the II monomer, suggesting that the neutral polymer backbone has the same electronic influence as the thiophene-phenylene-thiophene unit in II. For both poly-I and poly-II, the multiple redn. peaks obsd. are indicative of the energetic inequality between the inner- and outer-bipyridinium groups. The cond. of poly-II rapidly reaches a max. of 0.2 S/cm at 0.12 V vs. Fc/Fc+, which decays quickly thereafter. In contrast, the cond. profile for poly-I shows that oxidn. of the backbone occurs over a broad range of potentials without decay. The absorption spectra of both conducting polymers are similar; in the neutral (insulating) form, the $\lambda$max for poly-I is 527 nm (2.35 eV) and for poly-II 542 nm (2.29 eV). When oxidatively doped, both display a longer wavelength band at 767-796 nm (1.62-1.56 eV), indicative of new states formed within the band gap upon reaching a conductive state. Further lower-energy absorptions occur at higher oxidn. potentials, leading to an absorption at >1100 nm (>1.13 eV) owing to the formation of free carriers. The films differ in that poly-II requires higher oxidn. potentials to reach its conductive state than does poly-I. [on SciFinder(R)]
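The wavelength-to-energy conversions quoted in this abstract follow from the standard relation $E = hc/\lambda$, with $hc \approx 1239.84$ eV nm:

$$E(527\ \text{nm}) = \frac{1239.84}{527} \approx 2.35\ \text{eV}, \qquad E(542\ \text{nm}) = \frac{1239.84}{542} \approx 2.29\ \text{eV}.$$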
R. Deans, Kim, J., Machacek, M. R., and Swager, T. M., "A Poly(p-phenyleneethynylene) with a Highly Emissive Aggregated Phase.", Journal of the American Chemical Society, vol. 122, pp. 8565–8566, 2000.
A cyclophane-based poly(p-phenyleneethynylene) was prepd. and shown to undergo unusual solid-state aggregation to give highly emissive materials. This aggregation was demonstrated to be controllable via the method of film prepn. (Langmuir vs. spin-casting), with facile de-aggregation possible by the introduction of specific analytes or by annealing. The subsequent decrease in the fluorescence intensities of spin-cast films after chem.-thermal modifications (due to chain alignment) has potential applications in sensor technologies. [on SciFinder(R)]
T. D. McQuade, Kim, J., and Swager, T. M., "Two-Dimensional Conjugated Polymer Assemblies: Interchain Spacing for Control of Photophysics.", Journal of the American Chemical Society, vol. 122, pp. 5885–5886, 2000.
The relation between cofacial interpolymer distance and the solid-state photophysics of random hydrophilic (Me, iso-Pr, isopentyl)- and hydrophobic (hexadecyloxy)-substituted poly(p-phenylene ethynylene)s was studied. The side-chain bulk influences the packing of the polymers at the air-water interface by providing greater polymer-polymer spacing, and interchain distance has a strong influence on the spectral properties of PPEs. Thin films were prepd. on glass substrates by drop casting, by spin casting, and by using the Langmuir-Schaefer (LS) method. Normalized UV-vis and PL spectra of the isopropyl-substituted PPE spin-cast film and soln. are similar, illustrating the lack of order in the spin-cast film relative to the LS film. The drop-cast film, however, has a PL max. at 463 nm arising from non-aggregated polymer chains, along with three other red-shifted peaks at 494, 512, and 553 nm that may arise from multiple aggregated states, indicating that the drop-cast film is intermediate between the LS film (fully aggregated case) and the spin-cast film (least aggregated state). [on SciFinder(R)]
I. A. Levitsky, Kishikawa, K., Eichhorn, H. S., and Swager, T. M., "Exciton coupling and dipolar correlations in a columnar liquid crystal: photophysics of a bent-rod hexacatenar mesogen.", Journal of the American Chemical Society, vol. 122, pp. 2474–2479, 2000.
A novel bent-rod hexacatenar liq. crystal is reported that displays a hexagonal columnar (Colh) phase. The organization of conjugated hexacatenar mesogens in the columnar phase is of interest for their anisotropic electronic properties. The emissive nature of the mesogens varies over the temp. range of the Colh phase and the spectral shifts were analyzed in terms of an exciton-coupling model. The variation of the emission band in this phase is consistent with varying degrees of rotational disorder between the mesogens. The bent-rod shape and highly dipolar nature of the liq. crystal core (mesogen) promotes (as suggested by computation, x-ray diffraction, and photophys. studies) a high degree of antiparallel intermol. correlations between nearest neighbors. The antiparallel organization is novel and differs from structures previously identified in other polycatenars. These studies illustrate the utility of the exciton-coupling model to probe the nature and degree of intermol. correlations in highly dipolar liq. crystals. [on SciFinder(R)]
D. Kilian, Knawby, D., Athanassopoulou, M. A., Trzaska, S. T., Swager, T. M., Wrobel, S., and Haase, W., "Columnar phases of achiral vanadyl liquid crystalline complexes.", Liquid Crystals, vol. 27, pp. 509–521, 2000.
Two liq. cryst. vanadyl complexes were studied by frequency domain dielec. spectroscopy over the range 10 mHz to 13 MHz. The materials exhibit two or three columnar phases denoted Colro, Colrd, and Colhd that were identified by x-ray diffraction. In the higher temp. Colrd phase, a relaxation process in the kHz range is obsd. that is attributed to the reorientation about the mol. short axis. A pronounced dielec. relaxation process shows up in the low temp. Colro phase at hertz and sub-hertz frequencies. This slow relaxation is assigned to reorientation of the mol. dipoles within the polar linear chains, which are aligned along the column axis. Triangular wave switching studies at low frequency reveal processes inside the Colro phase which are most probably due to ionic/charge relaxations, but a ferroelec. switching for an achiral discotic system cannot be ruled out completely. Below the Colro phase there is an orientationally disordered cryst. Crx phase with disordered side chain dipoles. A dielec. relaxation process connected with the intramol. relaxation of the alkoxy side chains, similar to the $\beta$-process of polymers, was found in the lower temp. Crx phase. [on SciFinder(R)]
V. E. Williams and Swager, T. M., "Iptycene-Containing Poly(aryleneethynylene)s.", Macromolecules, vol. 33, pp. 4069–4073, 2000.
The syntheses of two novel iptycene monomers, 1,4-diiodotriptycene (I) and 2,3-diiodo-4,9-dihydro-4,9-benzonaphtho[2,3-c]thiophene (II), are described herein. These monomers were subsequently copolymd. with a no. of diethynylphenyl monomers via a Sonogashira-Hagihara coupling to afford both regiodefined polymers and random terpolymers. Terpolymers derived from coupling I and 2,5-dihexadecyloxy-1,4-diiodobenzene with a diethynylpentiptycene exhibit emission spectra that are only slightly perturbed from soln. to the solid state, suggesting that polymer assocn. is effectively inhibited in condensed phases. An "all-iptycene" polymer derived from the copolymn. of II with a diethynylpentiptycene monomer is also notable in that it owes its soly. to a combination of its nonlinearity and the presence of rigid iptycene groups rather than to flexible side chains. [on SciFinder(R)]
E. Gorecka, Nakata, M., Mieczkowski, J., Takanishi, Y., Ishikawa, K., Watanabe, J., Takezoe, H., Eichhorn, S. H., and Swager, T. M., "Induced Antiferroelectric Smectic-C*A Phase by Doping Ferroelectric-C* Phase with Bent-Shaped Molecules.", Physical Review Letters, vol. 85, pp. 2526–2529, 2000.
Doping of the ferroelec. Sm-C* phase with bent-shaped mols. induces the antiferroelec. Sm-C*A phase. The effect was obsd. by electrooptic and dielec. measurements in systems with weak interlayer interactions in which the relative strength of anticlinic-synclinic order between mols. in adjacent layers is easily controlled by external factors. FTIR spectroscopy studies suggest that the bent-shaped mols. are not flat. They reorient upon the elec. field-induced antiferroelec.-ferroelec. transition to adopt a position in which the av. direction of the carbonyl groups is in the smectic plane and a bending tip along the C2 symmetry axis. [on SciFinder(R)]
T. D. McQuade, Kim, J., and Swager, T. M., "Controlling conjugated polymer photophysics: role of interchain spacing within two-dimensional assemblies.", Polymeric Materials Science and Engineering, vol. 83, pp. 529–530, 2000.
Prepn. and photophys. characterization of three polydiacetylene derivs. are discussed with the emphasis on the role of interchain spacing within two-dimensional Langmuir monolayer assemblies. Photophys. and geometrical properties and normalized UV visible and photoluminescence of these films are also discussed. [on SciFinder(R)]
H. - H. Yu, Pullen, A. E., Xu, B., and Swager, T. M., "Toward new actuating devices: synthesis and electrochemical studies of poly(11,23-bis([2,2'-bithiophen]-5-yl)-26,28-dimethoxycalix[4]arene-25,27-diol).", Polymeric Materials Science and Engineering, vol. 83, pp. 523–524, 2000.
The prepn. and cond. measurements of the title polymer were discussed. The cond. studies showed a large hysteresis suggesting that a mol. level compression took place. Thus the materials based on this polymer should have potential use in actuating devices or as artificial muscle. [on SciFinder(R)]
Y. Z. Wang, Gebler, D. D., Spry, D. J., Fu, D. K., Swager, T. M., Macdiarmid, A. G., and Epstein, A. J., "Novel light-emitting devices based on pyridine-containing conjugated polymers.", IEEE Transactions on Electron Devices, vol. 44, pp. 1263–1268, 1997.
The authors present novel light-emitting devices based on several pyridine-contg. conjugated polymers and copolymers in various device configurations. The high electron affinity of pyridine-based polymers improves stability and electron transport properties of the polymers and enables the use of relatively stable metals such as Al as electron injecting contacts. Bilayer devices utilizing poly(9-vinyl carbazole) (PVK) as a hole-transporting/electron-blocking polymer show dramatically improved efficiency and brightness as compared to single layer devices. This is attributed to charge confinement and exciplex emission at the PVK/emitting polymer interface. The incorporation of conducting polyaniline network electrode into PVK reduces the device turn-on voltage significantly while maintaining the high efficiency. Two novel device configurations that enable the use of high work function metals as electrodes are pointed out. [on SciFinder(R)]
K. Hasharoni, Keshavarz-K., M., Sastre, A., Gonzalez, R., Bellavia-Lund, C., Greenwald, Y., Swager, T., Wudl, F., and Heeger, A. J., "Near IR photoluminescence in mixed films of conjugated polymers and fullerenes.", Journal of Chemical Physics, vol. 107, pp. 2308–2312, 1997.
The photoinduced electron transfer between conjugated polymers and a series of functionalized fullerenes was studied. A new photoluminescence signal was obsd. in the near IR (∼1.4 eV). This weak IR photoluminescence does not result from direct excitation of the fullerene, but from radiative electron-hole recombination between the fullerene excited state and the polymer ground state. The intensity of this recombination luminescence depends on the electrochem. nature of the functional group; it is obsd. only for fullerenes with first redn. potential higher than that of C60. [on SciFinder(R)]
S. S. Zhu and Swager, T. M., "Conducting Polymetallorotaxanes: Metal Ion Mediated Enhancements in Conductivity and Charge Localization.", Journal of the American Chemical Society, vol. 119, pp. 12568–12577, 1997.
Polymers of two metallorotaxane systems are described, Rot(1,M) and Rot(2,M) (M = Zn²⁺ or Cu⁺), which are formed by complexing a macrocyclic phenanthroline, a 5,5'-bis([2,2'-bithiophen]-5-yl)-2,2'-bipyridine (ligand 1) or 5,5'-bis(3,4:3',4'-bis(ethylenedioxy)[2,2'-bithiophen]-5-yl)-2,2'-bipyridine (ligand 2), and Zn²⁺ or Cu⁺ ions. The corresponding polymetallorotaxanes, PolyRot(1,M) and PolyRot(2,M), are produced by oxidative polymn. of Rot(1,M) and Rot(2,M). Investigations of the electrochem., conducting, and optical properties of the metallorotaxanes and polymetallorotaxanes as well as the related nonrotaxane polymers Poly(1) and Poly(2) are reported. The combined electrochem. and cond. studies of PolyRot(1,M) and PolyRot(2,M) indicated that the polymetallorotaxanes' redox and conducting properties are dramatically affected by the Lewis acidic and redox properties of the coordinated metal ions. The Lewis acidity produces charge localization and a redox conduction process in both polymetallorotaxane systems. The matching of the polymer and Cu⁺/²⁺ couple redox potentials in PolyRot(2,Cu) resulted in a Cu⁺/²⁺ contribution to cond. The metal-free PolyRot(1) and PolyRot(2) were produced by extg. the metal ions, and these polymers reversibly bound Zn²⁺ or Cu²⁺ ions in soln. The Cu²⁺ dopes the films of PolyRot(2) and Poly(2), which have lower oxidn. potentials than those of PolyRot(1), to produce 10⁶-10⁷-fold cond. increases. In the case of PolyRot(1) and Poly(1), the rotaxane structure was demonstrated to be key for reversible complexation of metal ions. [on SciFinder(R)]
M. B. Goldfinger, Crawford, K. B., and Swager, T. M., "Directed Electrophilic Cyclizations: Efficient Methodology for the Synthesis of Fused Polycyclic Aromatics.", Journal of the American Chemical Society, vol. 119, pp. 4578–4593, 1997.
A versatile method for the synthesis of complex, fused, polycyclic, arom. systems in high chem. yield is described. Construction is achieved using a general two-step synthetic sequence. Pd-catalyzed Suzuki and Negishi type cross-coupling chemistries allow for the prepn. of non-fused skeletal ring systems in yields consistently >80%. The crit. ring-forming step, which generally proceeds in very high to quant. yield, utilizes (4-alkoxyphenyl)ethynyl groups and is induced by strong electrophiles such as trifluoroacetic acid and iodonium tetrafluoroborate. The reaction in essence produces phenanthrene moieties which are integrated into extended polycyclic arom. structures. Fused polycyclic benzenoids as well as benzenoid/thiophene systems may be prepd. by this methodol. The scope of the described cross-coupling/cyclization chem. including mechanistic insights and problematic side reactions are described. [on SciFinder(R)]
A. J. Epstein, Wang, Y. Z., Jessen, S. W., Blatchford, J. W., Gebler, D. D., Lin, L. B., Gustafson, T. L., Swager, T. M., and Macdiarmid, A. G., "Pyridine-based conjugated polymers. Photophysical properties and light-emitting devices.", Macromolecular Symposia, vol. 116, pp. 27–38, 1997.
The authors study the photophys. properties of the pyridine-based polymers poly(p-pyridyl vinylene) (PPyV) and poly(p-pyridine) (PPy). The primary photoexcitations in the pyridine-based polymers are singlet excitons. The authors observe direct intersystem crossing (ISC) on picosecond time scales, with the vol. d. of triplet excitons varying with the sample morphol. (film or powder). These effects are demonstrated clearly by examg. the millisecond photoinduced absorption characteristics of powder and film forms of PPyV. The pyridine-based polymers were shown to be promising candidates for polymer light-emitting devices, in both conventional diode and sym. configured a.c. light-emitting (SCALE) device configurations. Here the authors examine the role of insulating layers and their interfaces with the emitting layer and electrodes in the SCALE device operation, with emphasis on the central role of the polymer-polymer interfaces. [on SciFinder(R)]
T. M. Swager, "New approaches to sensory materials: molecular recognition in conjugated polymers new transduction methodology.", NATO ASI Series, Series C: Mathematical and Physical Sciences, vol. 492, pp. 133–141, 1997.
A review, with 12 refs., is given on the use of conjugated polymers for the development of new sensory methodologies. [on SciFinder(R)]
D. A. Vanden Bout, Yip, W. - T., Hu, D., Fu, D. - K., Swager, T. M., and Barbara, P. F., "Discrete Intensity Jumps and Intramolecular Electronic Energy Transfer in the Spectroscopy of Single Conjugated Polymer Molecules", Science, vol. 277, pp. 1074–1077, 1997.
Single-mol. fluorescence spectroscopy of a multichromophoric conjugated polymer (mol. wt. ∼ 20,000) revealed surprising single-step photobleaching kinetics and acute jumps in fluorescence intensity. These jumps were shown not to result from spectral diffusion and were attributed to fluctuations in the quantum yield of emission for the mols. The data indicate efficient intramol. electronic energy transfer along the polymer chain to a localized fluorescence-quenching polymer defect. The defects are created by reversible photochem. of the polymer. These findings have implications for the use of conjugated polymers in light-emitting diode displays and sensors. [on SciFinder(R)]
D. D. Gebler, Wang, Y. Z., Jessen, S. W., Blatchford, J. W., MacDiarmid, A. G., Swager, T. M., Fu, D. K., and Epstein, A. J., "Exciplex emission in heterojunctions of poly(pyridyl vinylene phenylene vinylene)s and poly(vinyl carbazole).", Synthetic Metals, vol. 85, pp. 1205–1208, 1997.
Photoluminescence and electroluminescence spectra were obtained of heterojunctions formed from poly(vinyl carbazole) (PVK) and poly(pyridyl vinylene phenylene vinylene) (PPyVPV) conjugated polymers. Bilayers of PVK and PPyVPV show a photoluminescence peak which cannot be assigned to either the PVK or the PPyVPV layer. Absorption spectra show that the addnl. feature results from an exciplex at the bilayer interface. The electroluminescence spectrum from the heterojunctions is due to exciplex emission, with an internal efficiency of ∼0.1-0.5%. [on SciFinder(R)]
Y. Z. Wang, Gebler, D. D., Fu, D. K., Swager, T. M., MacDiarmid, A. G., and Epstein, A. J., "Light-emitting devices based on pyridine-containing conjugated polymers.", Synthetic Metals, vol. 85, pp. 1179–1182, 1997.
S. W. Jessen, Blatchford, J. W., Lin, L. - B., Gustafson, T. L., Partee, J., Shinar, J., Fu, D. - K., Marsella, M. J., Swager, T. M., MacDiarmid, A. G., and Epstein, A. J., "Absorption and luminescence of pyridine-based polymers.", Synthetic Metals, vol. 84, pp. 501–506, 1997.
The low energy photophysics of the pyridine-based polymers poly(p-pyridine) (PPy) and poly(p-pyridylvinylene) (PPyV), and of copolymers made up of PPyV and poly(p-phenylenevinylene) (PPyVPV), are reported. The absorption and luminescence properties are morphol. dependent. The primary photoexcitations within these polymers are singlet excitons, which may emit from individual chains following a random walk to lower energy segments, depending upon the excitation energy. Films display red-shifted absorption and emission properties with a decrease in photoluminescence efficiency, which can be attributed to aggregate formation, in comparison to powder and soln. forms. Photoinduced absorption (PA) studies show direct conversion of singlet to triplet excitons on the ps time scale. Polaron signatures and the transition between triplet exciton states are seen in powder forms using ms PA techniques. Film forms display only a polaron signature at millisecond times, indicating that morphol. plays a key role in the long-time photophysics of these systems. Photoluminescence-detected magnetic resonance studies also show signatures due to both polarons and triplet excitons. The size of the triplet exciton is limited to a single ring, suggesting that the triplet exciton may be trapped by extrinsic effects. [on SciFinder(R)]
D. - K. Fu, Xu, B., and Swager, T. M., "Alternating poly(pyridyl vinylene phenylene vinylene)s: synthesis and solid state organizations.", Tetrahedron, vol. 53, pp. 15487–15494, 1997.
Poly(pyridylvinylenephenylenevinylene)s were synthesized by Heck coupling procedures. These materials display large red shifts in their optical absorption upon protonation or alkylation of the pyridyl nitrogen. Some of the polymers were found to be liq. cryst. The protonated or alkylated versions exhibit highly organized structures due to charge-transfer interactions between polymer chains. [on SciFinder(R)]
Y. Z. Wang, Gebler, D. D., Fu, D. K., Swager, T. M., and Epstein, A. J., "Color variable bipolar/ac light-emitting devices based on conjugated polymers", Applied Physics Letters, vol. 70, p. 3215, 1997.
The fabrication is reported of color-variable bipolar/a.c. LEDs based on conjugated polymers. The devices consist of blends of pyridine-phenylene and thiophene-phenylene based copolymers sandwiched between the emeraldine base form and the sulfonated form of polyaniline. ITO and Al are used as electrodes. The devices operate under either polarity of driving voltage with different colors of light being emitted: red under forward bias and green under reverse bias. The relatively fast time response allows rapid switching of colors and a.c. operation. [on SciFinder(R)]
D. D. Gebler, Wang, Y. Z., Blatchford, J. W., Jessen, S. W., Fu, D. - K., Swager, T. M., MacDiarmid, A. G., and Epstein, A. J., "Exciplex emission in bilayer polymer light-emitting devices.", Applied Physics Letters, vol. 70, pp. 1644–1646, 1997.
Photoluminescent and electroluminescent studies of bilayer heterojunctions formed from poly(pyridyl-vinylene-phenylene-vinylene) (PPyVPV) derivs. and poly(vinyl carbazole) (PVK) show an emission peak which cannot be ascribed to either the PPyVPV or PVK layer. This peak results from an exciplex at the bilayer interface as demonstrated through studies of absorption and photoluminescence excitation spectra. The photoluminescence efficiency of the exciplex is greater than 20%. The electroluminescence spectrum from the bilayer devices is entirely due to exciplex emission, with internal efficiencies initially achieved exceeding 0.1%. [on SciFinder(R)]
D. M. Knawby and Swager, T. M., "Liquid-Crystalline Heterocyclic Phthalocyanine Analogs Based on Thiophene.", Chemistry of Materials, vol. 9, pp. 535–538, 1997.
The syntheses and mesomorphism of novel octaalkoxymethyl-substituted tetra-2,3-thiophenoporphyrazines, heterocyclic phthalocyanine analogs in which the benzene rings are replaced by thiophene rings, are reported. NMR anal. indicates that these materials exist as isomeric mixts. due to the asymmetry induced by the thiophene ring. Among the attractive features of liq.-cryst. 2,3-thiophenoporphyrazines over phthalocyanines are their lower mesomorphic and isotropic temps. These features resulted in liq. crystallinity at room temp. Similar to liq.-cryst. phthalocyanine derivs., metalation with Cu raises the transition temps. [on SciFinder(R)]
T. M. Swager, "The Molecular Wire Approach to Sensory Signal Amplification.", Accounts of Chemical Research, vol. 31, pp. 201–207, 1998.
A review, with 24 refs., is given on conceptual aspects of how mol. wires (conjugated polymers) can be used to amplify mol. chemosensors. The properties that are responsible are universal and can be utilized in a multitude of schemes. The sensitivity and diversity available suggest that these materials will be important for future sensor technologies. [on SciFinder(R)]
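A minimal way to see the amplification effect (a sketch consistent with, but not taken verbatim from, the review): if an exciton samples $N$ receptor sites along a wire before emitting, and a single occupied site quenches it, then at fractional receptor occupancy $\theta$ the residual emission of the wire is roughly

$$\frac{I}{I_0}\Big|_{\text{wire}} \approx (1-\theta)^N \approx 1 - N\theta \quad (\theta \ll 1),$$

compared with $1-\theta$ for an equivalent isolated fluorophore, i.e. an approx. $N$-fold sensitivity gain at low occupancy.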
R. P. Kingsborough and Swager, T. M., "Electroactivity enhancement by redox matching in cobalt salen-based conducting polymers.", Advanced Materials (Weinheim, Germany), vol. 10, pp. 1100–1104, 1998.
The synthesis and the electrochem. properties of two polymer-transition metal hybrids are reported. The oxidn. potential of the Co-contg. N,N'-ethylenebis(salicylidenimine) polymer films was investigated by cyclic voltammetry, and the influence of the film thickness on the redox potential was analyzed. The test results for both polymers are shown and discussed in detail. It was concluded from the exptl. data that the Co-polymer hybrids display a high cond. and sensitivity to coordinating ligands. Moreover, possible application fields for the hybrid materials are outlined. [on SciFinder(R)]
D. V. Baxter, Chisholm, M. H., Lynn, M. A., Putilina, E. F., Trzaska, S. T., and Swager, T., "Studies of Thermotropic Properties and the Mesophase of Mixtures of n-Alkanoates and Perfluoro-n-alkanoates of Dimolybdenum (M&M).", Chemistry of Materials, vol. 10, pp. 1758–1763, 1998.
S. T. Trzaska and Swager, T. M., "Columnar Mesophases from Nondiscoid Metallomesogens: Rhodium and Iridium Dicarbonyl Diketonates.", Chemistry of Materials, vol. 10, pp. 438–443, 1998.
Nondiscoid Rh and Ir dicarbonyl $\beta$-diketonate complexes organize in columnar liq. crystal phases with hexagonal disordered structures (Colhd). The shape and dipolar attributes of these materials produce highly correlated antiparallel pairwise arrangements. This organization was studied by x-ray diffraction and miscibility studies. The liq. crystallinity is stabilized by decreasing the no. of C atoms in the side chains, and single component room-temp. liq. crystals were developed. The nature of the metal is also important, and Ir mesogens exhibited higher clearing points than the analogous Rh materials. [on SciFinder(R)]
M. Fahlman, Gebler, D. D., Piskun, N., Swager, T. M., and Epstein, A. J., "Experimental and theoretical study of ring substituent induced effects on the structural and optical properties of poly(p-pyridylvinylene phenylenevinylene)s.", Journal of Chemical Physics, vol. 109, pp. 2031–2037, 1998.
Optical absorption and photoluminescence spectroscopy have been carried out on a new class of (phenylene) ring-substituted p-pyridylvinylene phenylenevinylene polymers used as active materials in light-emitting diodes. The effects of the ring substitutions on the optical absorption and photoluminescence energies are qual. explained through the use of semiempirical quantum chem. modeling of the ring torsion angles. Reduced aggregation through the use of 'strap' substituents on the phenylene rings is also discussed. [on SciFinder(R)]
D. D. Gebler, Wang, Y. Z., Fu, D. - K., Swager, T. M., and Epstein, A. J., "Exciplex emission from bilayers of poly(vinyl carbazole) and pyridine based conjugated copolymers.", Journal of Chemical Physics, vol. 108, pp. 7842–7848, 1998.
We present photoluminescence and electroluminescence studies of bilayers and blends formed from poly(vinyl carbazole) (PVK) and poly(pyridyl vinylene phenylene vinylene) (PPyVPV) copolymer derivs. Bilayers of PVK and the PPyVPV copolymers have a photoluminescence emission which cannot be assigned to either the photoluminescence of PVK or the PPyVPV layer. The blends of the two polymers show a similar new photoluminescence emission for a large range of concns. Absorption and photoluminescence excitation spectra confirm that the addnl. feature is an excited state species which results from an exciplex at the polymer/polymer interface. Bilayer light-emitting devices utilizing the PPyVPV copolymers show an electroluminescence spectrum consistent with emission from the exciplex. The efficiency of the bilayer devices as compared to single layer devices increases by over three orders of magnitude due to the exciplex formation and the elimination of exciton formation near the luminescence quenching electrodes. [on SciFinder(R)]
B. Xu, Miao, Y. - J., and Swager, T. M., "Palladium Couplings on Metallocalix[4]arenes: An Efficient Synthesis of New Functionalized Cavities.", Journal of Organic Chemistry, vol. 63, pp. 8561–8564, 1998.
An efficient one-pot procedure has been developed for the prepn. of rigid non-fluxional dialkoxy tungsten calix[4]arenes I [HOCHRCHROH = ethyleneglycol, methylcatechol, (S)-(-)-1,1'-bi-2-naphtho
M. B. Goldfinger, Crawford, K. B., and Swager, T. M., "Synthesis of Ethynyl-Substituted Quinquephenyls and Conversion to Extended Fused-Ring Structures.", Journal of Organic Chemistry, vol. 63, pp. 1676–1686, 1998.
An organometallic coupling electrophile-induced cyclization strategy for the synthesis of p-terphenyl compds. has been extended to the synthesis of p-quinquephenyl systems. In this work the authors report the synthesis of various polycyclic arom. systems contg. nine annelated rings including the synthesis of functionalized polycyclic arom. systems. An interesting side reaction which leads to an indenyl spiro ring system is also described. This side reaction can be suppressed by changing the electrophile (from H+ to I+) or by modification of the cyclization precursor. The UV-vis and fluorescence spectra of several of these polycyclic aroms. and the p-quinquephenyl precursors are also reported. [on SciFinder(R)]
J. - S. Yang and Swager, T. M., "Fluorescent Porous Polymer Films as TNT Chemosensors: Electronic and Structural Effects.", Journal of the American Chemical Society, vol. 120, pp. 11864–11873, 1998.
The synthesis, spectroscopy, and fluorescence quenching behavior of pentiptycene-derived phenyleneethynylene polymers, 1-3, are reported. The incorporation of rigid three-dimensional pentiptycene moieties into conjugated polymer backbones offers several design advantages for solid-state (thin film) fluorescent sensory materials. First, they prevent $\pi$-stacking of the polymer backbones and thereby maintain high fluorescence quantum yields and spectroscopic stability in thin films. Second, reduced interpolymer interactions dramatically enhance the soly. of polymers 1-3 relative to other poly(phenyleneethynylenes). Third, the cavities generated between adjacent polymers are sufficiently large to allow diffusion of small org. mols. into the films. These advantages are apparent from comparisons of the spectroscopic and fluorescence quenching behavior of 1-3 to a related planar electron-rich polymer 4. The fluorescence attenuation (quenching) of polymer films upon exposure to analytes depends on several factors, including the exergonicity of electron transfer from excited polymer to analytes, the binding strength (polymer-analyte interactions), the vapor pressure of the analyte, and the rates of diffusion of the analytes in the polymer films. Films of 1-3 are particularly selective toward nitro-arom. compds. The dependence of fluorescence quenching on film thickness provides an addnl. criterion for the differentiation of nitro-arom. compds. from other species, such as quinones. In short, thinner films show a larger response to nitro-arom. compds., but show a lower response to quinones. Such differences are explained in terms of polymer-analyte interactions, which appear to be electrostatic in nature. The rapid fluorescence response (quenching) of the spin-cast films of 1-3 to nitro-contg. compds. qualifies these materials as promising TNT chemosensory materials. [on SciFinder(R)]
K. B. Crawford, Goldfinger, M. B., and Swager, T. M., "Na+ Specific Emission Changes in an Ionophoric Conjugated Polymer.", Journal of the American Chemical Society, vol. 120, pp. 5187–5192, 1998.
The emission and absorption characteristics of a conjugated poly(phenylene bithiophene) and a monomeric model compd. were investigated as a function of [Li+], [Na+], [K+], and [Ca2+]. The calix[4]arene bithiophene receptor that is present in both compds. provides selectivity for Na+ and the absorption and emission characteristics are not affected by Li+, K+, or Ca2+. Both systems display absorption spectra which are relatively insensitive to Na+; however, the Stokes shift of the emission is reduced by added Na+. For the model system, increasing [Na+] provides a shift of the emission that is consistent with an equil. mixt. of bound and unbound receptor. The polymer displays a larger shift in the emission in response to Na+ and due to multiple binding sites lacks an isoemissive point. The chain length of the polymer also has an effect on this behavior. This behavior may be due to energy migration to regions of the polymer which do not have bound Na+ and can relax to lower energy conformations. This description is also borne out by the redn. in the lifetimes of the excited states with increasing [Na+] for both the polymer and the model system. This mechanism may provide a route to systems which can function as digital indicators at crit. concns. of analytes. [on SciFinder(R)]
J. - S. Yang and Swager, T. M., "Porous Shape Persistent Fluorescent Polymer Films: An Approach to TNT Sensory Materials.", Journal of the American Chemical Society, vol. 120, pp. 5321–5322, 1998.
Spin-cast films of a pentiptycene-derived phenyleneethynylene polymer, 2, display a fast fluorescence response (seconds) to vapors of 2,4,6-trinitrotoluene (TNT), 2,4-dinitrotoluene (DNT), and 1,4-benzoquinone (BQ). The fluorescence attenuation of 2 is dependent on the time of exposure to these quenchers and on the thickness of thin films. Thinner films show a better response to TNT and DNT but the opposite is true for BQ. Such differences are attributed to different polymer-analyte interactions. For comparison, corresponding studies on an alkoxy-substituted phenylenethynylene polymer, 3, were also carried out. The results indicate that 2 is superior to 3 as a fluorescent chemosensor in terms of sensitivity, selectivity, solvent soly. and solvent stability. [on SciFinder(R)]
D. Ofer, Swager, T. M., and Wrighton, M. S., "Solid-State Ordering and Potential Dependence of Conductivity in Poly(2,5-dialkoxy-p-phenyleneethynylene).", Chemistry of Materials, vol. 7, pp. 418–425, 1995.
Soln.-cast films of four different poly(2,5-dialkoxy-p-phenyleneethynylene) mols. with varying backbone chain lengths and varying alkoxy substituent chain lengths and 1,4-diethynyl-2,5-dihexadecyloxybenzene-9,10-dibromoanthracene alternating copolymer (I) have been characterized electrochem. and by x-ray diffraction and DSC. The polymers have varying degrees of order and crystallinity based on long-range lamellar structure. Cyclic voltammetry in liq. SO2/electrolyte shows that the onset of oxidn. for poly(2,5-dialkoxy-p-phenyleneethynylene) occurs at ∼1.05 V vs SCE with the more cryst. polymers having slower electrochem. response than the less cryst. ones. In situ characterization of the potential dependence of cond. in the same medium shows that the max. conductivities of poly(2,5-dialkoxy-p-phenyleneethynylene) range from ∼0.2 to ∼5 Ω−1 cm−1, suggesting that higher cond. is assocd. with lower long-range order in the polymer films but showing little dependence on av. polymer chain length. Poly(2,5-dialkoxy-p-phenyleneethynylene) all have max. cond. at ∼1.6 V vs SCE and finite potential windows of high cond. ∼0.55 V wide, indicating that the potential of max. cond. and the width of the window of high cond. are detd. by mol. rather than bulk properties. For I, the onset of oxidn. occurs at ∼0.8 V vs SCE, the potential of max. cond. is ∼1.5 V vs SCE, and the width of the potential window of high cond. is ∼0.85 V. [on SciFinder(R)]
D. D. Gebler, Wang, Y. Z., Blatchford, J. W., Jessen, S. W., Lin, L. B., Gustafson, T. L., Wang, H. L., Swager, T. M., MacDiarmid, A. G., and Epstein, A. J., "Blue electroluminescent devices based on soluble poly(p-pyridine).", Journal of Applied Physics, vol. 78, pp. 4264–4266, 1995.
Noncommutative Geometry Seminar - Jean-Michel Bismut
Cockins Hall 240
Title: The Hypoelliptic Laplacian
Speaker: Jean-Michel Bismut (Université Paris-Sud)
Abstract: In this series of three lectures, I will explain the theory of the hypoelliptic Laplacian. If $X$ is a compact Riemannian manifold, and if $\mathcal{X}$ is the total space of its tangent bundle, there is a canonical interpolation between the classical Laplacian of $X$ and the generator of the geodesic flow by a family of hypoelliptic operators $L^X_b\rvert_{b>0}$ acting on $\mathcal{X}$. This interpolation extends to all the classical geometric Laplacians. There is a natural dynamical system counterpart, which interpolates between Brownian motion and the geodesic flow.
The hypoelliptic deformation preserves certain spectral invariants, such as the Ray-Singer torsion, the holomorphic torsion and the eta invariants. In the case of locally symmetric spaces, the spectrum of the original Laplacian remains rigidly embedded in the spectrum of the deformation. This property has been used in the context of Selberg's trace formula. Another application of the hypoelliptic Laplacian is in complex Hermitian geometry, where the extra degrees of freedom provided by the hypoelliptic deformation can be used to solve a question which is unsolvable in the elliptic world.
In the first lecture, 'Hypoelliptic Laplacian and analytic torsion', I will give the structure of the hypoelliptic Laplacian, which is essentially a combination of the fibrewise harmonic oscillator and of the geodesic flow. I will also describe its construction in de Rham theory, and explain some properties of the hypoelliptic torsion.
In the second lecture 'Hypoelliptic Laplacian and the trace formula', I will concentrate on the case of symmetric spaces, and on applications to the evaluation of orbital integrals and to the Selberg trace formula.
In the third (colloquium) lecture 'Hypoelliptic Laplacian, Brownian motion and the geodesic flow', I will explain the basic elements of the theory and emphasize its connections with dynamical systems.
What Is Unit Vector In Physics
Using Components To Add And Subtract Vectors
Using Unit Vectors in Physics
Another way of adding vectors is to add the components. Previously, we saw that vectors can be expressed in terms of their horizontal and vertical components. To add vectors, merely express both of them in terms of their horizontal and vertical components and then add the components together.
Vector with Horizontal and Vertical Components: The vector in this image has a magnitude of 10.3 units and a direction of 29.1 degrees above the x-axis. It can be decomposed into a horizontal part and a vertical part as shown.
For example, a vector with a length of 5 at a 36.9 degree angle to the horizontal axis will have a horizontal component of 4 units and a vertical component of 3 units. If we were to add this to another vector of the same magnitude and direction, we would get a vector twice as long at the same angle. This can be seen by adding the horizontal components of the two vectors and the two vertical components. These additions give a new vector with a horizontal component of 8 and a vertical component of 6. To find the resultant vector, simply place the tail of the vertical component at the head of the horizontal component and then draw a line from the origin to the head of the vertical component. This new line is the resultant vector. It should be twice as long as the original, since both of its components are twice as large as they were previously.
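To make the arithmetic concrete, here is a minimal sketch of component-wise addition (Python with NumPy is assumed; the numbers are the ones from the example above):

```python
import numpy as np

# Two identical vectors: horizontal component 4, vertical component 3
# (each has magnitude 5 at ~36.9 degrees above the x-axis).
a = np.array([4.0, 3.0])
b = np.array([4.0, 3.0])

# Component-wise addition: horizontal parts add, vertical parts add.
resultant = a + b                                           # -> [8., 6.]

magnitude = np.linalg.norm(resultant)                       # -> 10.0, twice the original
angle = np.degrees(np.arctan2(resultant[1], resultant[0]))  # -> ~36.9, unchanged

print(resultant, magnitude, angle)
```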
Adding And Subtracting Vectors
One of the ways in which representing physical quantities as vectors makes analysis easier is the ease with which vectors may be added to one another. Since vectors are graphical visualizations, addition and subtraction of vectors can be done graphically.
The graphical method of vector addition is also known as the head-to-tail method. To start, draw a set of coordinate axes. Next, draw out the first vector with its tail at the origin of the coordinate axes. For vector addition it does not matter which vector you draw first since addition is commutative, but for subtraction ensure that the vector you draw first is the one you are subtracting from. The next step is to take the next vector and draw it such that its tail starts at the previous vector's head. Continue to place each vector at the head of the preceding one until all the vectors you wish to add are joined together. Finally, draw a straight line from the origin to the head of the final vector in the chain. This new line is the vector result of adding those vectors together.
Graphical Addition of Vectors: The head-to-tail method of vector addition requires that you lay out the first vector along a set of coordinate axes. Next, place the tail of the next vector on the head of the first one. Draw a new vector from the origin to the head of the last vector. This new vector is the sum of the original two.
How Do You Find The Unit Vector Perpendicular To Two Vectors
The cross-product of two non-parallel vectors results in a vector that is perpendicular to both of them. So, for the given two vectors $\vec{a}$ and $\vec{b}$, we know that $\vec{a} \times \vec{b}$ will be a vector that is perpendicular to both $\vec{a}$ and $\vec{b}$. Further, we find the unit vector of this resultant vector to obtain the unit vector perpendicular to the two given vectors.
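A short sketch of that recipe (Python with NumPy assumed; the two input vectors are arbitrary non-parallel examples):

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0])    # any two non-parallel vectors will do
b = np.array([0.0, 1.0, 3.0])

n = np.cross(a, b)               # perpendicular to both a and b
n_hat = n / np.linalg.norm(n)    # normalize to unit length

# Sanity checks: unit length, orthogonal to both inputs.
assert np.isclose(np.linalg.norm(n_hat), 1.0)
assert np.isclose(np.dot(n_hat, a), 0.0)
assert np.isclose(np.dot(n_hat, b), 0.0)
print(n_hat)
```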
Application Of Unit Vector
Unit vectors specify the direction of a vector. Unit vectors can exist in both two- and three-dimensional space. Every vector can be represented with its unit vector in the form of its components. The unit vectors of a vector are directed along the axes. Unit vectors in 3-d space can be represented as follows: v = xi^ + yj^ + zk^.
In 3-d space, the vector v will be identified by three perpendicular axes (x, y, and z). In mathematical notations, the unit vector along the x-axis is represented by i^. The unit vector along the y-axis is represented by j^, and the unit vector along the z-axis is represented by k^.
The vector v can hence be written as:
v = xi^ + yj^ + zk^
Electromagnetics deals with electric forces and magnetic forces. Here vectors are come in handy to represent and perform calculations involving these forces. In day-to-day life, vectors can represent the velocity of an airplane or a train, where both the speed and the direction of movement are needed.
Properties Of A Unit Vector
The contents of this topic can be summarized by the following properties of a unit vector:
A unit vector has a magnitude of 1.
Unit vectors are only used to specify the direction of a vector.
Unit vectors exist in both two and three-dimensional planes.
Every vector has a unit vector in the form of its components.
The unit vectors of a vector are directed along the axes.
The component unit vectors i^, j^, and k^ are mutually perpendicular.
Define Unit Vector Class 11
The unit vector definition in Physics is: a unit vector is a direction vector. The unit vector symbol is similar to the mathematical symbol for an exponent, namely ^. The unit vector symbol in Physics is pronounced as cap or hat.
For calculating the magnitude of any given vector, we use the coordinate system as follows:
A unit vector, i cap indicates the direction of an object along the x-axis.
A unit vector, j cap indicates the direction of an object along the y-axis.
A unit vector, k cap indicates the direction of an object along the z-axis.
So, do you know what a unit vector in Physics is?

Well! We define a unit vector in Physics with the following equation:
Unit vector = vector / magnitude of the vector

So, a unit vector v^ having the same direction as a vector v is given as

v^ = v / |v|

where

v^ = the unit vector

v = the vector itself

|v| = the magnitude of the vector
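The same definition, written as a small Python helper (NumPy assumed; the sample vector is just an illustration):

```python
import numpy as np

def unit_vector(v):
    """Return v / |v|, the unit vector in the direction of v."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        raise ValueError("the zero vector has no direction, hence no unit vector")
    return v / norm

v = np.array([3.0, 4.0, 12.0])   # example vector with |v| = 13
u = unit_vector(v)               # -> [3/13, 4/13, 12/13]
print(u, np.linalg.norm(u))      # a unit vector always has magnitude 1.0
```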
Unit Vector Vs Basis Vector
When reading about vectors I sometimes have seen unit vectors multiplied by the components and other times I've seen basis vectors used instead. $$v=x \hat i+y \hat j+z \hat k$$
$$v=xe_x+ye_y+ze_z$$

Occasionally, I've seen both used in a single source. As far as I can tell, they seem to be doing the same thing, i.e., showing what direction each component is pointing while not changing the numerical value of any of the components.
My questions are: What is the difference between a unit vector and a basis vector? And are they interchangeable in specifying the directions of components?
It's worth pointing out that it doesn't really make sense to talk about 'a basis vector': any nonzero vector is part of some basis, so it's not really helpful. Rather you need to talk about a particular basis, which is a set of vectors with nice properties of which the vector of interest is one, and this is what people mean usually. It is useful to talk about a unit vector since that is defined as a property of the vector on its own. – user107153
A unit vector $v$ is a vector whose norm is unity: $||v||=1$. That's all. Any non-zero vector $w$ can define a unit vector $w/||w||$.
For example, $(1,1)$ and $(1,-1)$ form a basis of the plane (as an $\mathbb{R}$-vector space). So both $(1,1)$ and $(1,-1)$ are basis vectors. $(1,0)$ is a unit vector, but not a basis vector in that case. But you could also consider another basis made of $(1,0)$ and $(0,1)$; then $(1,0)$ would also be a basis vector.
Don't Miss: Theory Of Everything 2 All Coins
Radial And Normal Unit Vectors
In addition to i, j, and k, two other unit vectors are used in this text. A vector of unit magnitude in the radially outward direction is designated by r^ and is formed by taking the position vector r and dividing it by its magnitude:

r^ = r / |r|

This division leaves r^ with unit magnitude but preserves the radial direction of r. Unlike the three Cartesian unit vectors, r^ may vary in direction because the position vector r varies in direction.

The unit vector n^ is normal, or perpendicular, to a surface at a given point. For the special case of a spherical surface, the normal vector n^ points in the same direction as the radial unit vector r^.
Figure 2.25. The unit vector r points radially outward. The unit vector n is normal to the surface.
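A minimal sketch of this position dependence (Python with NumPy assumed): the rule r^ = r / |r| is the same everywhere, but the resulting direction changes with the position vector.

```python
import numpy as np

def radial_unit(r):
    """r_hat = r / |r|: always unit length, direction set by the position r."""
    return r / np.linalg.norm(r)

print(radial_unit(np.array([2.0, 0.0, 0.0])))  # -> [1. 0. 0.]
print(radial_unit(np.array([0.0, 0.0, 5.0])))  # -> [0. 0. 1.], same rule, new direction
```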
How can the direction cosines be negative? Describe the orientation of a vector having all three direction cosines negative.
Sarhan M. Musa, 2016
[Physics] Unit Vectors and Multiplying Vectors
A vector quantity bears both magnitude and direction. For example, displacement is the shortest distance taken to reach your destination. It is represented as a displacement vector.
A vector reverses its sign when an object takes the opposite direction; the magnitude itself is never negative. There is one more term called a unit vector. Unit vectors have a magnitude of 1.
A speciality of unit vectors in Physics is that any vector can be represented in space using a unit vector.
On this page, we will understand what a unit vector is in Physics Class 11 and look at the unit vector definition in Physics in detail.
Vectors Vs Unit Vectors
grzz said: A UNIT vector has magnitude of 1.
Ascendant78 said: Actually, I think I might get it now… If I am understanding right, say if I calculated a unit vector force of a gravitational field between two objects to be say 500N and used meters for the units, then if they were 10m away from each other when I calculated this force, it means the force between them is 500N per m, making the overall force 5000N, or is that wrong?
Ascendant78 said: Oh, I get their relation and the concept behind them in that a unit vector is 1 of whatever quantity is being measured. What I don't understand is exactly what they tell you relative to one another? Say if I found the vector force of gravity from a planet, then found the unit vector force of gravity from that same planet, I'm not sure of what each one would be telling me relative to each other? Like I said before, I only ever used the vector force of gravity, I never used the unit vector force, so I don't know what that would tell me about the gravitational force?
Some Special Cases Of Vector Addition
The following are some special cases to make vector calculation easier to represent.
1. θ = 0°: Here θ is the angle between the two vectors. That is, if the value of θ is zero, the two vectors point in the same direction. In this case, the magnitude of the resultant vector will be

R = √(a² + b² + 2ab cos 0°) = a + b

Thus, the absolute value of the resultant vector will be equal to the sum of the absolute values of the two main vectors.

2. θ = 180°: Here, if the angle between the two vectors is 180°, then the two vectors are opposite to each other. That is, the value of cos θ here will be -1, so

R = |a − b|

Here the absolute value of the resultant vector is equal to the absolute value of the subtraction of the two vectors. And the resultant vector will be oriented towards the one whose absolute value is higher than the other's.

3. a = b and θ = 180°: Here the two vectors are of equal value and are in opposite directions to each other. In this case, the absolute value of the resultant vector will be zero. That is, the resultant vector is a null vector.

4. θ = 90°: If the angle between the two vectors is 90 degrees, the value of cos θ will be zero. So, here the resultant vector will follow the formula of Pythagoras:

R = √(a² + b²)

In this case, the two vectors are perpendicular to each other. And the resultant vector will be located at a specified angle with the two vectors. Suppose the two vectors are a and b, and the resultant vector c is located at angle α with a. Then the direction of the resultant vector is given by

tan α = b / a
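These special cases can be checked numerically against the general formula R = √(a² + b² + 2ab cos θ); a small sketch (Python with NumPy assumed, magnitudes chosen arbitrarily):

```python
import numpy as np

def resultant_magnitude(a, b, theta_deg):
    """Magnitude of the resultant of two vectors of magnitudes a and b
    separated by the angle theta (in degrees)."""
    theta = np.radians(theta_deg)
    return np.sqrt(a**2 + b**2 + 2.0 * a * b * np.cos(theta))

print(resultant_magnitude(3, 4, 0))    # -> 7.0, same direction: a + b
print(resultant_magnitude(3, 4, 180))  # -> 1.0, opposite directions: |a - b|
print(resultant_magnitude(4, 4, 180))  # -> 0.0, equal and opposite: null vector
print(resultant_magnitude(3, 4, 90))   # -> 5.0, perpendicular: Pythagoras
```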
What Is The Origin Of Unit Vector Notation
Comment: I'm retagging your question, since it is really about history. – Oct 24 '11 at 4:16
Comment: While this is a mildly interesting historical question, I do not think MO should become a repository for asking 'where does this common notation come from?' style questions. Searching on Google for "history of mathematical notation" gives a number of interesting pages. – Oct 24 '11 at 6:41
Maybe it originates from Hamilton's quaternions $\mathbb{H}$, which has a basis $1,i,j,k$ as a real vector space, and the multiplications there, namely $i\cdot j=k$, $j\cdot k=i$, $k\cdot i=j$, correspond exactly to the wedge product in $\mathbb{R}^3$. So $\mathbb{R}^3$ can be viewed as the imaginary part of $\mathbb{H}$.
Anyway, this is just my understanding or my guess.
What Are Vectors In Physics
A physical quantity is a quantity that can be measured, such as mass, force, velocity, displacement, or temperature.
Suppose you are told to measure your happiness. You can never measure your happiness; that is, you cannot describe or quantify how much happiness you have. So, happiness here is not a physical quantity.
Suppose you have a fever. And the doctor ordered you to measure your body temperature. Then you measured your body temperature with a thermometer and told the doctor. When you tell your doctor about your body temperature, you need to use the word degree centigrade or degree Fahrenheit.
So, the temperature here is a measurable quantity. So we will use temperature as a physical quantity.
In general, we will divide physical quantities into three types: scalars, vectors, and tensors.
In this tutorial, we will only discuss vector quantities. But before that, let's talk about scalars.
Read Also: Books Never Written Math Worksheet Answers Geometry
Multiplying Vectors By A Scalar
Multiplying a vector by a scalar changes the magnitude of the vector but not the direction.
Similarly, if you take the number 3, which is a pure and unit-less scalar, and multiply it by a vector, you get a version of the original vector which is 3 times as long.
Most of the units used in vector quantities are intrinsically scalars multiplied by the vector.
For example, the unit of meters per second used in velocity, which is a vector, is made up of two scalars, which are magnitudes: the scalar of length in meters and the scalar of time in seconds.
In order to make this conversion from magnitudes to velocity, one must multiply the unit vector in a particular direction by these scalars.
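For instance, a velocity vector can be assembled by scaling a dimensionless unit direction by a scalar speed (a sketch with arbitrary numbers):

```python
import numpy as np

speed = 12.0                      # scalar magnitude in m/s
direction = np.array([0.6, 0.8])  # unit vector, dimensionless (|direction| = 1)

velocity = speed * direction      # -> [7.2, 9.6] m/s: same direction, scaled length
print(np.linalg.norm(velocity))   # -> 12.0, the original speed
```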
Finding Unit Vectors In Three
In three-dimensional space, the vector v will be represented by three perpendicular axes, namely the x-, y-, and z-axes. In mathematical notations, the unit vector along the x-axis is represented by i^. The unit vector along the y-axis is represented by j^, and the unit vector along the z-axis is represented by k^.
For writing these unit vectors in component form, we can make use of parentheses: i^ = (1, 0, 0), j^ = (0, 1, 0), and k^ = (0, 0, 1).
Hence, we can rewrite the formula for finding the unit vector in two ways:
u = v / |v|
u = v / √(x² + y² + z²)

Or we can also write it as:

u = (x, y, z) / √(x² + y² + z²)
For a vector v = (x, y, z) with x² + y² + z² = 169, find its unit vector.

We are familiar with the formula for finding the unit vector:

So, by inserting the necessary parameters:

u = (x, y, z) / √169

u = (x, y, z) / 13

So u is the required unit vector.

For proof, let's find the magnitude of u:

|u| = √((x/13)² + (y/13)² + (z/13)²) = √169 / 13 = 1

Hence, it is proved that the vector u is the unit vector of v.
As a second example, find the unit vector of a vector v in the same way.

The formula for finding the unit vector is stated as:

u = v / |v|

For the second example the computed magnitude comes out as |u| = 0.99, which is 1 up to rounding.
Matter Exists In Space And Time
Logically, a beginning knowledge of vectors, vector spaces and vector algebra is needed to understand his ideas.
Examples of this section relate to representation of space as an origin, coordinates and a unit vector basis.
Every vector has a component form and a magnitude-direction form.
Newton used vectors and calculus because he needed that mathematics.
Vectors can be used to represent physical quantities.
Vectors are a combination of magnitude and direction, and are drawn as arrows.
Because vectors are constructed this way, it is helpful to analyze physical quantities as vectors.
For example, when drawing a vector that represents a magnitude of 100, one may draw a line that is 5 units long at a scale of $\frac{1}{20}$.
In drawing the vector, the magnitude is only important as a way to compare two vectors of the same units.
Note that there are two vectors that are perpendicular to any plane.
The units of angular velocity are radians per second.
However, its angular velocity is constant since it continually sweeps out a constant arc length per unit time.
A vector diagram illustrating circular motion.
The red vector is the angular velocity vector, pointing perpendicular to the plane of motion and with magnitude equal to the instantaneous velocity.
All vectors have a length, called the magnitude, which represents some quality of interest so that the vector may be compared to another vector.
I am wondering if anyone knows any more on the history of the term 'co-domain' as it relates to functions.
Two sources I found:
Russell and Whitehead, Principia Mathematica, 1915, page 34:
the class of all terms to which something or other has the relation $R$ is called the converse domain of $R$; it is the same as the domain of the converse of $R$.
Cassius Keyser, Mathematical Philosophy, 1922, page 168:
A relation $R$ has what is called a domain, - the class of all the terms such that each of them has the relation to something or other, - and also a codomain - the class of all the terms such that, given any one of them, something has the relation to it.
It seems to me that when Keyser talks about a 'codomain', he is talking about the same thing as Russell and Whitehead's 'converse domain'. So, it looks like we went from 'converse domain' to 'codomain' .... to 'co-domain'? That would seem to make sense.
Also, both texts talk about relations, not functions. But, a function is of course a special kind of relation. So ... it still makes sense.
However! (and this is really why I am asking this question): the way these two texts talk about the 'converse domain' and 'codomain' is (when applied to functions) what we nowadays call the 'range' or 'image' of the function, and not what we nowadays call its 'co-domain'.
Concrete example:
Take a function $f$ whose domain is defined as $\mathbb{R} - \{ 0 \}$, whose co-domain is defined as $\mathbb{R}$, and whose mapping is defined as $f(x) = 1/x$.
For this function, the range or image is $\mathbb{R} - \{ 0 \}$, and that is what (again, if we see this function as a relation) Russell & Whitehead would consider its 'converse domain' and what Keyser would call its 'codomain'.
But the 'co-domain' of this function was defined as $\mathbb{R}$.
So I think there has been a shift in the use of the term ... That is, it seems like we got:
'converse domain' -> 'codomain' -> 'range'
... while 'co-domain' is something different!
This is weird! What happened? Does anyone have some insight into any of this?
– Bram28
The domain and codomain are both part of what it means to define a function: using domain $\mathbf R-\{0\}$, the codomain could be $\mathbf R - \{0\}$ or $\mathbf R$ or $\mathbf C$ or ... With codomain $\mathbf R - \{0\}$ the function is invertible, but not in the other two cases. Therefore I disagree that in modern times we always would say the domain is $\mathbf R$. It depends on what you are doing. – KCd Aug 17 '20 at 20:20

Related: hsm.stackexchange.com/questions/4929/… – Spencer Aug 17 '20 at 22:28

You are quite right: just by saying that $f(x)=1/x$ I can't say what the co-domain is ... or even the domain. My mistake, and I'll fix my post accordingly! So let's say that this function $f$ has as its domain $\mathbb{R} - \{0 \}$, and as its co-domain $\mathbb{R}$. I can do that, right? But this is weird. Because we now have a situation where the function is not invertible, i.e. there is no converse. And yet, there is a 'converse domain' (R&W), as well as a codomain (Keyser), which is $\mathbb{R} - \{ 0 \}$ ... and yet this is not the 'co-domain'! – Bram28 Aug 18 '20 at 1:59

You can certainly take the domain of $1/x$ to be $\mathbf R - \{0\}$ and the co-domain to be $\mathbf R$, just as you can take the domain of $1/x$ to be $(0,1)$ and the co-domain to be $(0,5)$. There is nothing weird about this. It all depends on what it is you want to do with the functions. For example, it's natural to consider all rational functions with real coefficients to be "real-valued functions defined where they maximally make sense", so $1/(x^2-x)$ has domain $\mathbf R - \{0,1\}$ and co-domain $\mathbf R$. It is not every function's aspiration in life to be invertible. – KCd Aug 18 '20 at 3:08

Right ... but my question is not about the current use of domain and co-domain. If I had a question about that I would simply go to the Mathematics Stack Exchange. My question is a historical question about how the use of the terms evolved: clearly what Russell and Whitehead and Keyser described as 'converse domain' and 'codomain' doesn't match up with the modern usage of 'co-domain'. That to me is the weird part, and I am hoping that someone could tell me how or why that shift happened historically. – Bram28 Aug 18 '20 at 3:13
It's an early recognition of duality in set theory. Domain vs Codomain suggests a relationship that is missing from domain and range.
This is hidden in set theory, as functions are biased in that they are not symmetrically defined: set theory does not conceptualise one-to-many functions naturally, although it handles the dual, many-to-one functions, naturally.
This is fixed in category theory where duality is made explicit, rather than in the secret, furtive way it's done in set theory. Moreover, category theory is the correct conceptualisation of covariance as in the notion of general covariance that Einstein used heuristically in his investigations into the general character of physical law.
Interestingly, one of the major discoveries of string theory is the role that dualities play in physics. (In ordinary physics, we see duality manifest itself in the duality between the electric and magnetic fields). It wouldn't surprise me if at bottom this had the same root as dualities in category theory.
Mozibur Ullah
RNA structure refinement using NMR solvent accessibility data
Christoph Hartlmüller, Johannes C. Günther, Antje C. Wolter, Jens Wöhnert, Michael Sattler & Tobias Madl
Scientific Reports volume 7, Article number: 5393 (2017)
Subjects: Computational biophysics, Solution-state NMR
NMR spectroscopy is a powerful technique to study ribonucleic acids (RNAs) which are key players in a plethora of cellular processes. Although the NMR toolbox for structural studies of RNAs expanded during the last decades, such studies often remain challenging. Here, we show that solvent paramagnetic relaxation enhancements (sPRE) induced by the soluble, paramagnetic compound Gd(DTPA-BMA) provide a quantitative measure for RNA solvent accessibility and encode distance-to-surface information that correlates well with RNA structure and improves accuracy and convergence of RNA structure determination. Moreover, we show that sPRE data can be easily obtained for RNAs with any isotope labeling scheme and are advantageous regarding sample preparation, stability and recovery. sPRE data show a large dynamic range and reflect the global fold of the RNA suggesting that they are well suited to identify interaction surfaces, to score structural models and as restraints in RNA structure determination.
In the last decades, ribonucleic acids (RNAs) have been found to be key regulators for numerous important cellular processes including transcription, translation, splicing, cell differentiation as well as cell death and proliferation1. Besides regulatory functions, RNAs are essential for catalysis in ribosomes and spliceosomes, sensing environmental parameters such as temperature or metabolite concentration, as well as storing the genomic information of RNA viruses. Understanding the underlying molecular mechanisms of these processes requires insights into the structure and dynamics at an atomic level. In the last decades, NMR spectroscopy has developed into a powerful technique to study RNA structure and dynamics2,3,4,5,6,7,8. However, technical challenges are caused by poor chemical shift dispersion leading to spectral overlap as well as by the low proton density and the low number of intramolecular interactions limiting the number of observable distance restraints that can be used to define RNA structure. Moreover, the RNA backbone is defined by several torsion angles, requiring a large set of restraints to obtain an accurate structural model. To overcome these limitations, different labeling strategies, including for example specific or uniformly 13C- and/or 15N-labeling and segmental labelling of RNAs were developed to reduce spectral complexity and signal overlap9,10,11,12,13,14,15,16,17,18,19,20,21,22. Large sets of restraints including NOE-based distances, torsion angles and orientational restraints derived from RDCs and J-couplings, respectively, as well as base pairing contacts obtained from J-couplings across hydrogen bonds are typically collected to provide a sufficient number of restraints for high-resolution models of RNAs2, 6, 23,24,25,26.
In previous NMR studies, solvent accessibility data were obtained by measuring the NOE between water and protein protons, but the data were found to be dominated by chemical exchange27, 28. Here, we demonstrate that solvent accessibility data derived from solvent paramagnetic relaxation enhancements (sPREs) provide valuable information to characterize the conformation of RNAs and will be useful to enable NMR studies of larger RNAs. sPRE data are a quantitative measure for solvent accessibility and thus readily provide distance-to-surface information29 (Fig. 1). Directly observing the solvent accessible areas is a promising approach not only for mapping interaction surfaces but also for structure determination, as has been shown for proteins30,31,32. Recently, computational protocols using sPRE data were developed for XplorNIH27 and the Rosetta framework33. The implementation and use of sPRE data has several advantages: i) no covalent labeling of RNAs is required, ii) the sample can be recovered, iii) complete chemical shift assignment is not required and iv) any type of NMR spectrum such as proton 1D, 1H, 15N HSQC or 1H, 13C HSQC experiments can be used. Direct measurements of solvent accessibility data of RNAs have been reported previously using the soluble compound TEMPOL34. While the correlation of TEMPOL-induced sPREs agreed with RNA structure for some parts, the data were limited to a qualitative validation of models. Moreover, unexpectedly large sPREs were found for certain nucleotides and it was later suggested that TEMPOL forms intermolecular hydrogen bonds leading to preferred binding to specific sites35. Here, sPRE data are obtained by titrating the RNA sample with Gd(DTPA-BMA), a soluble, well-characterized Gd3+ chelating paramagnetic contrast agent that was originally developed for MRI imaging31, 36.
Concept of sPRE and exemplary sPRE data of the UUCG tetraloop. (a) sPRE data provide a quantitative measure for solvent accessibility and encode distance-to-surface information. sPRE data are acquired by titrating the RNA with the paramagnetic compound Gd(DTPA-BMA). (b) The NMR solution structure of the UUCG tetraloop (PDB code 2KOC) is shown and, for illustration purposes, two solvent-exposed protons (blue spheres) and two buried protons (orange spheres) are highlighted. (c) Schematic representation of the UUCG tetraloop (top) and the GTP-bound GTP aptamer. (d) NMR spectra of the UUCG tetraloop are shown in the absence (black) and presence (magenta) of Gd(DTPA-BMA) with circles around the peaks corresponding to the protons shown in (b). (e) Quantitative sPRE data are obtained by measuring the longitudinal proton R1 relaxation rate as a function of the concentration of Gd(DTPA-BMA). (f) Proton R1 rates increase linearly with the concentration of the paramagnetic compound and the slopes correspond to the sPRE values of the respective resonances highlighted in (b).
Gd(DTPA-BMA) provides quantitative solvent-accessibility data of RNAs
To first assess potential binding to specific RNA moieties, fingerprint spectra in the absence and presence of the paramagnetic compound were recorded for two RNAs exhibiting distinct folds and structural elements. The resulting NMR spectra of the UUCG tetraloop37, 38 and the GTP class II aptamer39, 40 are shown in Supplementary Figures 1 and 2, respectively. As expected, the presence of Gd(DTPA-BMA) causes line-broadening of NMR signals. However, even in the presence of the paramagnetic compound, the vast majority of peaks are still well observable and the absence of chemical shift perturbations confirms the absence of specific binding of the compound to the RNA (Fig. 1d and Supplementary Figures 1 and 2). Thus Gd(DTPA-BMA) can be used as a paramagnetic probe that screens the accessible surface area of RNA molecules.
As unchelated lanthanide ions can efficiently hydrolyze RNA41, we checked for potential cleavage products of the RNAs in the presence of the Gd3+ compound. No degradation products were observed for any RNA studied and sPRE data obtained from a single RNA sample were reproducible even after several sPRE measurements, each followed by a removal of Gd(DTPA-BMA) (data not shown). The absence of phosphodiester hydrolysis can be explained by the very high chelating stability of DTPA-BMA36 and the presence of a slight excess of Ca2+-bound chelator.
The sPRE data were acquired by measuring proton longitudinal relaxation rates (1H-R1) as a function of increasing concentrations of Gd(DTPA-BMA). As expected, 1H-R1 rates were found to increase linearly with the concentration of Gd(DTPA-BMA) in the case of carbon-bound protons. For imino and amino protons, the increase of 1H-R1 rates is larger at small concentrations of the paramagnetic compound (below about 1 mM), and becomes linear at higher concentrations (Supplementary Figure 3). This observation likely reflects an additional exchange contribution that mixes the relaxation mechanisms of water and RNA protons. Nevertheless, a quantitative measure for the solvent accessibility can be obtained for nitrogen-bound exchangeable protons by determining the minimum concentration of Gd(DTPA-BMA) for which a linear response is observed. Only titration points at this concentration or above are used to fit the linear sPRE model (0.5 mM or above for the UUCG tetraloop and 1.29 mM or above for the GTP aptamer; compare Supplementary Tables 1 and 2). For details about data acquisition and analysis, refer to the methods section in the supplementary information.
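As an illustration of this analysis step, the sPRE of a resonance is the slope of a straight line fitted to R1 versus the Gd(DTPA-BMA) concentration, using only titration points in the linear regime. The sketch below assumes Python with NumPy; all numbers are hypothetical:

```python
import numpy as np

# Hypothetical titration of one proton: Gd(DTPA-BMA) concentration (mM)
# and the measured longitudinal 1H R1 rates (s^-1).
conc = np.array([0.0, 0.5, 1.0, 1.75, 2.75, 4.0])
r1 = np.array([1.10, 1.62, 2.08, 2.86, 3.88, 5.09])

# For exchangeable imino/amino protons, discard points below the
# concentration at which the response becomes linear (here assumed 0.5 mM).
linear = conc >= 0.5

# The sPRE is the fitted slope, in s^-1 mM^-1.
spre, intercept = np.polyfit(conc[linear], r1[linear], 1)
print(f"sPRE = {spre:.2f} s^-1 mM^-1")
```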
sPRE data correlate well with RNA structure
Next, we analyzed how sPRE data reflect the structural features of the RNAs studied. To this end, the NMR structures of the UUCG tetraloop and the GTP-bound GTP class II aptamer were used to predict the expected sPRE data using a previously published grid-based approach30, 31 (see methods section for details). The experimental sPRE data correlate very well with the predicted data (Fig. 2a and Supplementary Figures 4a and 5). This demonstrates that sPRE data provide a quantitative measure of solvent-accessibility that correlates well with RNA structure. In particular, all buried protons of both RNAs show a small sPRE. Protons for which significant deviations between experimental and back-calculated sPREs are observed are found in loop regions or in terminal nucleotides. These regions are flexible and thus are expected to show a population-weighted average of the sPRE. The structural NMR ensembles used to predict the sPRE data represent the most stable conformations in solution and depend on the algorithm and experimental restraints used for the structure determination. Thus, the NMR ensembles do not necessarily reflect the equilibrium distribution of the different conformations of these solvent-exposed regions of the RNA. Since all conformations of the ensemble were equally weighted in the prediction of the sPRE data, the prediction is expected to deviate for dynamic regions of the RNA. It is further worth noting that, despite the compact fold of the UUCG tetraloop, the experimental sPRE data show a large dynamic range, which renders them a powerful source to characterize the structure of RNAs. The excellent correlation between experimental data and RNA structure in combination with a large dynamic range suggests sPRE data as promising parameters to probe RNA structures by NMR.
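The grid-based prediction can be sketched as follows: embed the structure in a grid, keep only the grid points the probe centre can reach, and sum an r−6 contribution from every accessible point for each proton. The code below is a simplified, hypothetical re-implementation of that idea (Python with NumPy; parameter values such as the probe radius are assumptions, and the code favors clarity over speed and memory):

```python
import numpy as np

def predict_spre(protons, heavy_atoms, vdw_radii,
                 probe_radius=3.5, grid_step=1.0, pad=15.0):
    """Grid-based sPRE prediction sketch.

    protons:     (N, 3) proton coordinates (Angstrom)
    heavy_atoms: (M, 3) atom coordinates defining the molecular surface
    vdw_radii:   (M,)   van der Waals radii of those atoms
    Returns one unnormalized sPRE value per proton.
    """
    lo = heavy_atoms.min(axis=0) - pad
    hi = heavy_atoms.max(axis=0) + pad
    axes = [np.arange(l, h, grid_step) for l, h in zip(lo, hi)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    grid = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])

    # A grid point is solvent-accessible if the probe centre clears the
    # van der Waals sphere of every atom, inflated by the probe radius.
    dist = np.linalg.norm(grid[:, None, :] - heavy_atoms[None, :, :], axis=2)
    solvent = grid[np.all(dist > vdw_radii[None, :] + probe_radius, axis=1)]

    # Each accessible grid point contributes ~ r^-6 to each proton's sPRE.
    spre = np.empty(len(protons))
    for i, p in enumerate(protons):
        r = np.linalg.norm(solvent - p, axis=1)
        spre[i] = np.sum(r ** -6) * grid_step ** 3
    return spre
```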
sPRE data correlates well with RNA structure. (a) Experimental sPRE data (black) are compared to predicted sPRE data based on NMR solution structures of the UUCG tetraloop (2KOC) and the GTP-bound aptamer (5LWJ). Data are shown for sugar protons (H1′) and aromatic protons (H6 and H8) as indicated. (b) NMR solution structures of the UUCG tetraloop37, 38 (2KOC, left) and the GTP-bound aptamer39, 40 (5LWJ, right, heavy atoms of GTP ligand shown in green) are shown. All protons for which a sPRE value was obtained are shown as spheres and colored according to the sPRE (blue corresponding to high and orange corresponding to low sPRE values).
As described above, significant deviations between predicted and measured sPRE data are observed for some terminal nucleotides. Among all 258 observed sPRE values (93 for the UUCG tetraloop and 165 for the GTP aptamer), only two protons that are located in non-terminal nucleotides show significant deviations (9Gua H1 and 6Ura H3 in the UUCG tetraloop). Both protons are located in the loop region of the RNA in close spatial proximity to each other (Supplementary Figure 4b) and the corresponding sPRE values are underestimated based on the NMR solution structure, independent of which structural model of the UUCG loop motif (PDB codes 1HLX, 1K2G, 1TLR, 1Z31, 2KHY, 2KOC, 2KZL, 2LHP, 2LUB and 2N6X) was used. Although the sPREs of these protons are expected to have an additional contribution due to chemical exchange with water, the sPRE values of 2.2 and 0.44 s−1 mM−1 of 6Ura H3 and 9Gua H1, respectively, are significantly larger compared to the average of 0.079 ± 0.037 s−1 mM−1 for all other imino protons. This suggests that conformational dynamics in this region may cause the unusually high experimental sPREs. Indeed, recent molecular dynamics studies that included a large set of NMR relaxation data of the UUCG tetraloop suggest the loop region to be more flexible than the RNA stem42,43,44,45,46,47,48. Moreover, intermediate structural states with the bases of the loop region experiencing higher solvent exposure were observed in these studies. This suggests that the increased sPREs are a result of increased flexibility and dynamics of the loop region. However, future studies are required to fully understand the usability of sPRE data to detect dynamics in RNAs.
sPRE data provide orthogonal restraints for RNA structure determination
Next, the potential of sPRE data to facilitate the structure determination of RNAs was investigated (Fig. 3). To this end, structural models of the UUCG tetraloop as well as the GTP-bound aptamer were computed using the XplorNIH framework27, 49,50,51 in combination with different sets of experimental NOE data (Supplementary Tables 3 and 4, Supplementary Figure 7). The results show that sPRE data significantly improve convergence and accuracy of the structure determination, in particular in cases where only limited sets of NOEs are available. For example, using only 258 experimental NOEs of the GTP-bound aptamer (30% of all NOEs), the average RMSD of the obtained models was significantly reduced from 7.80 Å to 3.51 Å (Fig. 3b; Random set 2 in Supplementary Table 3). Next, the performance of the sPRE potential in combination with other experimental NMR-based restraints was investigated (Supplementary Table 5). The benchmark revealed that solvent accessibility data are an orthogonal restraint since their usage improves structure determination of the UUCG tetraloop in combination with NOE, RDC and torsion angle restraints. Strikingly, using only hydrogen-bond-derived restraints in combination with sPRE data, the structure of the UUCG tetraloop can be determined with an RMSD of 2.8 Å compared to the published NMR-based structure (Fig. 3a). Even sparse sPRE data sets, in some cases down to 25%, improve structural quality of the UUCG tetraloop (Supplementary Table 6).
sPRE data improve structure determination of RNAs. Structural models of the UUCG tetraloop (a) and the GTP-bound GTP aptamer (b) were obtained without (left) and with (right) sPRE data using XplorNIH. The 10 best scored models (light gray) in terms of total energy were selected from a total of 200 models and aligned to the corresponding NMR structure (magenta). All restraints used in the computations are indicated below the respective models. In (b) heavy atoms of the GTP ligand are shown as sticks (Reference in green, computed models in dark gray) and the positions of the intrinsically flexible nucleotides A13 and U2140 are indicated.
In summary, we show that sPRE data are efficient NMR observables to probe RNA structure. sPRE data obtained with the paramagnetic compound Gd(DTPA-BMA) yield quantitative information of solvent accessibility as they correlate well with RNA structure, provide structural information for both buried and surface-exposed atoms and can be used as restraints to drive molecular dynamics-based structure determination of RNAs. Since sPRE data reflect the global fold of a RNA they are well suited to identify tertiary contacts or map interaction surfaces with other molecules32, for example, in RNA-protein complexes. As the measurement of sPRE data does not require complete chemical shift assignments, surface accessibility restraints can be recorded for structural studies also in the case of large RNAs, in particular in combination with specific or segmental labelling, where it is difficult to obtain a comprehensive set of experimental restraints for structural analysis. Compared to proteins, several considerations have to be taken into account for sPRE-based studies of RNAs. First, different NMR pulse sequences might be needed, in particular for triple resonance experiments. Second, multiple data sets acquired for different RNA labelling schemes and different solvent conditions need to be combined. In order to compensate for minor differences between these data sets we have proposed an approach in which we normalize the sPRE data using the sPRE of water protons in each sample. Third, lower concentrations of Gd(DTPA-BMA) are used for RNAs compared to proteins due to the higher sPRE caused by the lack of large hydrophobic cores resulting in smaller distances to the RNA surface. Last, chemical exchange of protons with water is typically more efficient in RNAs and modulates sPREs of nitrogen-bound amino and imino protons. In these cases, only the region of linear response is used for quantitative analysis.
Our results show that sPRE-derived restraints are particularly promising to define the global fold of larger RNAs and provide valuable information for structure determination that is orthogonal to other NMR-based restraints in molecular dynamics-based approaches. Recently, the accuracy of chemical-shift-based RNA de novo structure prediction using the Rosetta framework was significantly enhanced by improving the chemical shift scoring function for RNAs52. Combining this improved score with a sampling algorithm driven by sPRE data33 and with specific or segmental labeling could provide powerful tools for structural modeling of large RNAs based on NMR data in the future.
RNA sample preparation
A uniformly 13C, 15N labeled sample of the UUCG tetraloop (5′-GGCACUUCGGUGCC-3′) was purchased from Silantes (Munich, Germany). The NMR buffer contained 20 mM K2HPO4 and 0.4 mM EDTA and was adjusted to pH 6.4. Transcription and purification of the GTP class II aptamer in complex with GTP have been described previously39. The sample was measured in NMR buffer containing 25 mM KH2PO4, 25 mM KCl and 2 mM magnesium acetate. The buffer was adjusted to pH 6.3 and uniformly 13C, 15N labeled RNA was measured in the presence of a two-fold excess of unlabeled GTP. NMR assignments and a solution NMR structure of the UUCG tetraloop were obtained from the literature37, 38 (PDB code 2KOC, BMRB 5705). For the GTP class II aptamer in complex with GTP, the recently reported NMR assignments and solution NMR structure were used39, 40 (PDB code 5LWJ, BMRB 25661).
NMR Experiments
NMR sPRE data were obtained by measuring proton R 1 relaxation rates as a function of the concentration of Gd(DTPA-BMA) using a saturation-recovery approach that has been described previously for protein applications30. Briefly, a 7.5 ms proton trim pulse followed by a gradient is used to dephase magnetization. Next, longitudinal magnetization is recovered during a recovery delay of variable length. The recovery delay is followed by a read-out spectrum, such as a 1H,15N or 1H,13C HSQC. This saturation-recovery scheme allows the use of a short interscan delay (about 50 ms), which dramatically reduces the experimental time compared to measuring proton R2 rates. Pseudo-3D experiments were acquired for 4 to 7 different concentrations of the paramagnetic compound. To recover the RNA NMR sample, the paramagnetic compound was first removed by dialysis against water using a membrane with a 1 kDa molecular weight cut-off; the sample was then lyophilized and resuspended in the corresponding NMR buffer.
sPRE data for nitrogen-bound protons of the UUCG tetraloop were recorded with a 0.8 mM uniformly 13C, 15N labeled sample at 283 K on an Avance III Bruker 900 MHz NMR spectrometer equipped with a cryo-TXI probe head. Proton R 1 relaxation rates were measured in the absence and presence of 0.5, 1, 1.75, 2.75 and 4.0 mM Gd(DTPA-BMA) using 1H,15N HSQC-based pseudo-3D experiments. sPRE data for carbon-bound protons were acquired with the same sample at 298 K and R 1 relaxation rates were measured in the absence and presence of 0.75, 1.5, 2.25, 3.25 and 4.5 mM Gd(DTPA-BMA) using 1H,13C HSQC-based pseudo-3D experiments. Delay times and other NMR parameters for all experiments as well as the total NMR time are presented in Supplementary Table 1. To obtain sPRE data of the GTP aptamer in complex with GTP, samples containing about 150 µM of (13C, 15N)-GU, (13C, 15N)-C or (13C, 15N)-A labeled RNA and 300 µM unlabeled GTP were measured at 293 K on an Avance III Bruker 900 MHz NMR spectrometer equipped with a cryo-TXI probe head and on an Oxford Instruments 600 MHz NMR spectrometer equipped with a Bruker Avance III console and a cryo-TCI probe head. 1H,13C and 1H,15N HSQC-based pseudo-3D experiments were used to acquire proton R 1 relaxation rates for 4 to 7 different concentrations of Gd(DTPA-BMA). Delay times and other NMR parameters for all experiments as well as the total NMR time are presented in Supplementary Table 2. For all RNA samples, amino and imino protons were measured in buffer containing 5 to 10% D2O, whereas carbon-bound protons were acquired in buffer containing ≥99.990% D2O (Sigma-Aldrich). For every titration step of Gd(DTPA-BMA), the solvent water R 1 was measured and the derived sPRE of the water solvent was used to normalize the RNA sPRE values. The normalized sPRE data were then rescaled by the mean sPRE of the solvent, obtained by averaging the sPRE of water protons in all relevant experiments. This procedure makes it possible to combine sPRE measurements from different samples or different magnetic field strengths (e.g. in different buffers or with different labeling schemes; see Supplementary Figure 6). Furthermore, by plotting the solvent water R 1 against the concentration of Gd(DTPA-BMA), errors in pipetting can be detected and, if necessary, compensated.
In the case of nitrogen-bound protons, plotting the proton R 1 against the concentration of Gd(DTPA-BMA) revealed a non-linear correlation, with rates at low concentrations of the paramagnetic compound being lower than expected according to a linear model. Consequently, the R 1 values for low concentrations (0 mM in the case of the UUCG tetraloop, as well as 0 and 0.645 mM in the case of the GTP aptamer) were omitted from the computation of the sPRE values and only the linear region was used (compare Supplementary Tables 1 and 2 as well as Supplementary Figure 3). In all cases, at least 4 different concentrations were used to derive the sPRE.
For large RNAs, specific isotope labeling schemes are required to reduce signal overlap9, 10, 53. In a simple and straightforward approach, only one or two nucleotide types are enriched in NMR-active nuclei. While this approach reduces spectral complexity, the measurement of sPREs with different samples might introduce systematic errors, for example due to pipetting errors. In addition, the usage of different magnetic field strengths affects the scaling of the sPRE data31. To ensure that sPRE data acquired on multiple samples and field strengths are consistent, the sPRE data sets are scaled by referencing to the sPRE of the water solvent in each sample. This referencing notably improves the correlation of calculated and experimental sPRE (Supplementary Figure 6). To normalize the sPRE data, the sPRE of water protons was acquired by measuring the R 1 rate of the water proton signal. The R 1 rate was measured using a pseudo-2D experiment with a simple 1D proton read-out (the proton pulse length was set between 0.3 and 1 µs). The change of the R 1 rate of the water protons during the titration with Gd(DTPA-BMA) was used to obtain the sPRE of the water protons. The sPRE of the water solvent was then used to normalize the sPRE values of the RNA signals. The normalized sPRE values of different titration experiments (with different labeling schemes or at different field strengths) were then combined to generate a large sPRE data set for the corresponding RNA. To obtain an absolute quantity, the normalized sPRE data were then multiplied by the mean sPRE of the solvent, obtained by averaging the sPRE of water protons in all relevant experiments.
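To make this referencing scheme concrete, the following minimal Python sketch normalizes each titration data set by its own water sPRE and rescales by the mean water sPRE over all experiments; the data layout and variable names are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (assumed data layout): combine sPRE data sets measured on
# different samples or at different field strengths by referencing each set
# to the water sPRE measured in the same sample.
import numpy as np

def combine_spre(datasets):
    """Each entry: {'rna_spre': array of sPRE values, 'water_spre': float}."""
    mean_water = np.mean([d["water_spre"] for d in datasets])
    combined = []
    for d in datasets:
        normalized = np.asarray(d["rna_spre"]) / d["water_spre"]  # dimensionless
        combined.append(normalized * mean_water)  # rescale to absolute units
    return np.concatenate(combined)
```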
The measurement times for the collection of sPRE data can be quite long, as sPRE data are recorded as a series of proton R 1 relaxation experiments. Typical 1H-R 1 rates in the absence of Gd(DTPA-BMA) are relatively low, with an average 1H-R 1 rate of 0.23 s−1 (±0.10 s−1) at 900 MHz and 0.30 s−1 (±0.14 s−1) at 600 MHz. As a consequence, long delay times of several seconds (Supplementary Table 2) are required, which in turn increase the overall experimental time. To overcome this drawback, the data sets of both RNAs were re-evaluated by determining the sPREs based on R 1 rates measured only in the presence of 0.75, 1.5, 3.25 and 4.5 mM Gd(DTPA-BMA) for the UUCG tetraloop and 0.8, 1.6, 3.2 and 4 mM for the GTP aptamer. Notably, the obtained sPRE data are fully consistent with those derived from the full set of measurements, while the optimized acquisition scheme reduces the experimental time by about 40%. It should be noted that the experimental time could be further reduced by applying selective saturation schemes that employ longitudinal relaxation-enhancing techniques, as demonstrated in a previous study54.
NMR data analysis
NMR spectra were processed with the NMRPipe software package55. Peak positions from the literature were transferred using CcpNmr Analysis56 and peak intensities of well-resolved peaks were obtained using the nmrglue Python package57. Peak intensities of the pseudo-3D experiments were fitted to an exponential recovery function according to equation (1):
$$I(\tau )={I}_{0}-A\cdot {e}^{-{R}_{1}\cdot \tau }$$
where τ is the recovery delay, I(τ) is the peak intensity measured for the recovery delay τ, I 0 is the maximum peak intensity, R 1 is the longitudinal proton relaxation rate and A is the amplitude of the relaxation.
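A fit of equation (1) can be sketched in a few lines of Python; the delay and intensity values below are synthetic placeholders, not experimental data.

```python
# Minimal sketch: fit the saturation-recovery curve of equation (1),
# I(tau) = I0 - A*exp(-R1*tau), to peak intensities from a pseudo-3D experiment.
import numpy as np
from scipy.optimize import curve_fit

def recovery(tau, i0, a, r1):
    return i0 - a * np.exp(-r1 * tau)

tau = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4])         # delays (s)
intensity = recovery(tau, 1.0, 1.0, 0.9) + np.random.normal(0, 0.01, tau.size)

popt, pcov = curve_fit(recovery, tau, intensity,
                       p0=(intensity.max(), intensity.max(), 1.0))
i0_fit, a_fit, r1_fit = popt   # r1_fit is the longitudinal relaxation rate
```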
To estimate the error of the peak intensities ε, the recovery delay τ d was measured twice to obtain two intensity values I(τ d , i, 1) and I(τ d , i, 2), where i is the index of the peak. For every peak i, the difference of the duplicates \({\rm{\Delta }}{I}_{i}=\,I({\tau }_{d},\,i,1)-I({\tau }_{d},i,\,2)\) was computed. The overall error of the pseudo-3D experiment ε pseudo-3D was then computed according to equation (2):
$${\varepsilon }_{\mathrm{pseudo}-\mathrm{3D}}=\sqrt{\frac{1}{2N}\,\cdot \,\sum _{i}^{N}{({\rm{\Delta }}{I}_{i}-\overline{{\rm{\Delta }}I})}^{2}}$$
where N is the number of peaks, and \(\overline{{\rm{\Delta }}I}=\frac{1}{N}\sum _{i}^{N}{\rm{\Delta }}{I}_{i}\) is the average difference of the duplicates. The subtraction of \(\overline{{\rm{\Delta }}I}\) accounts for systematic errors, and in all cases it was significantly smaller in magnitude than the individual differences of the duplicates \({\rm{\Delta }}{I}_{i}\).
The error of the peak intensities ε pseudo-3D was then used to estimate the error of the fitted proton relaxation rates ΔR 1. To this end, the experimental data were resampled using the following combined Monte Carlo-type bootstrapping approach: for every peak in a given pseudo-3D experiment, a set of delay-intensity data points, I(τ), was measured. This set is used to generate 1000 new random sets \(\tilde{{I}_{j}(\tau )}\), each having 2.5 times as many data points as the original data set I(τ) (recovery delays measured in duplicate are counted only once). These sets were created by randomly selecting data points from the original data set I(τ), allowing values to be selected multiple times (increasing the size of the new sets by a factor of 2.5 yields a more diverse ensemble of sets). For every set \(\tilde{{I}_{j}(\tau )}\), the intensity values were randomly altered by adding a random number drawn from a normal distribution centered at 0 and with a standard deviation of ε pseudo-3D. Every set \(\tilde{{I}_{j}(\tau )}\) was then fitted according to equation (1), giving rise to 1000 different \({R}_{1}^{j}\) rates. The error of the proton relaxation rate ΔR 1 was computed as the standard deviation of all values \({R}_{1}^{j}\). This procedure was repeated for every peak and every pseudo-3D experiment.
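The resampling scheme translates directly into code; this is a minimal sketch under the stated assumptions (numpy arrays tau and intensity, and the fitted model recovery from equation (1)), not the authors' implementation.

```python
# Minimal sketch of the Monte Carlo-type bootstrap: resample the delay-intensity
# points with replacement (2.5x the original size), perturb intensities with
# Gaussian noise of width eps_pseudo3d, refit equation (1), and report the
# standard deviation of the fitted R1 rates as the error Delta R1.
import numpy as np
from scipy.optimize import curve_fit

def recovery(tau, i0, a, r1):
    return i0 - a * np.exp(-r1 * tau)

def r1_error(tau, intensity, eps_pseudo3d, n_sets=1000, seed=0):
    rng = np.random.default_rng(seed)
    n_points = int(2.5 * len(tau))
    rates = []
    for _ in range(n_sets):
        idx = rng.integers(0, len(tau), size=n_points)   # draw with replacement
        noisy = intensity[idx] + rng.normal(0.0, eps_pseudo3d, n_points)
        try:
            popt, _ = curve_fit(recovery, tau[idx], noisy,
                                p0=(intensity.max(), intensity.max(), 1.0),
                                maxfev=5000)
            rates.append(popt[2])
        except RuntimeError:
            continue   # skip the rare sets where the fit does not converge
    return np.std(rates)
```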
To obtain the sPRE value, the proton R 1 rates and the corresponding errors ΔR 1 were collected for all measured concentrations c para of Gd(DTPA-BMA), and a weighted linear regression using equation (3) was performed
$${R}_{1}({c}_{{\rm{para}}})={m}_{{\rm{sPRE}}}\cdot {c}_{{\rm{para}}}+{R}_{1}^{0}$$
where R 1(c para) is the proton R 1 measured at the concentration c para of the paramagnetic compound, the slope m sPRE corresponds to the sPRE and \({R}_{1}^{0}\) is the fitted proton R 1 in the absence of the paramagnetic compound. The errors ΔR 1 (obtained from the resampling approach described above) were used as weights in a weighted linear regression and the error of the sPRE value \({\rm{\Delta }}{m}_{{\rm{sPRE}}}\) as well as the error of the relaxation rate \({\rm{\Delta }}{R}_{1}^{0}\) were directly obtained from the weighted linear regression.
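In code, the weighted fit of equation (3) reduces to a single call; the numbers below are illustrative, and using 1/ΔR 1 as the weight follows numpy's convention for Gaussian uncertainties.

```python
# Minimal sketch (illustrative numbers): weighted linear regression of R1
# versus Gd(DTPA-BMA) concentration; the slope is the sPRE (equation (3)).
import numpy as np

c_para = np.array([0.0, 0.75, 1.5, 2.25, 3.25, 4.5])      # mM
r1     = np.array([0.25, 0.48, 0.71, 0.92, 1.23, 1.58])   # s^-1
dr1    = np.array([0.02, 0.02, 0.03, 0.03, 0.04, 0.05])   # errors from bootstrap

coeffs, cov = np.polyfit(c_para, r1, deg=1, w=1.0 / dr1, cov=True)
m_spre, r1_0 = coeffs                   # slope = sPRE, intercept = R1 without Gd
dm_spre, dr1_0 = np.sqrt(np.diag(cov))  # standard errors from the covariance
```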
Prediction of sPRE Data
To predict sPRE data based on a published NMR solution structure, a previously published grid-based approach was used31. Briefly, the structural model was placed in a regularly spaced grid representing the uniformly distributed paramagnetic compound; the grid was built with a point-to-point distance of 0.1 Å and a minimum distance of 20 Å between the RNA model and the outer border of the grid. Next, grid points that overlap with the RNA model were removed, assuming a molecular radius of 3.5 Å for the paramagnetic compound. To compute the sPRE for a given RNA proton \({{\rm{sPRE}}}_{{\rm{predicted}}}^{i}\), the distance-dependent paramagnetic effect29 was numerically integrated over all remaining grid points according to equation (4):
$${{\rm{sPRE}}}_{{\rm{predicted}}}^{i}=c\,\cdot \sum _{{d}_{i,j}\, < \,20\,{\rm{\AA }}}\frac{1}{{{d}_{i,j}}^{6}}$$
where i is the index of a proton of the RNA, j is the index of the grid point, d i, j is the distance between the i-th proton and the j-th grid point and c is an arbitrary constant to scale the sPRE values. Here, c was chosen such that the sum of all predicted sPRE values is equal to the sum of all experimental sPRE values.
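A compact version of this grid integration is sketched below; the array layouts are assumptions, and a coarser grid spacing than the 0.1 Å used above is chosen so that the example runs quickly.

```python
# Minimal sketch of the grid-based sPRE prediction (equation (4)).
import numpy as np
from scipy.spatial import cKDTree

def predict_spre(protons, atoms, spacing=1.0, pad=20.0, r_probe=3.5, cutoff=20.0):
    """protons, atoms: (N, 3) coordinate arrays in Angstrom."""
    lo, hi = atoms.min(axis=0) - pad, atoms.max(axis=0) + pad
    axes = [np.arange(l, h, spacing) for l, h in zip(lo, hi)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)

    # discard grid points overlapping the RNA (probe radius 3.5 Angstrom)
    d_nearest, _ = cKDTree(atoms).query(grid)
    grid = grid[d_nearest > r_probe]

    # numerically integrate the d^-6 effect over grid points within the cutoff
    gtree = cKDTree(grid)
    spre = np.empty(len(protons))
    for i, p in enumerate(protons):
        idx = gtree.query_ball_point(p, cutoff)
        d = np.linalg.norm(grid[idx] - p, axis=1)
        spre[i] = np.sum(1.0 / d**6)    # up to the arbitrary scaling constant c
    return spre
```

Rescaling the result so that its sum matches the sum of the experimental sPRE values reproduces the choice of the constant c described above.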
Structure Determination Benchmark
To demonstrate the potential of sPRE data for the structure determination of RNAs, structural models of the UUCG tetraloop as well as the GTP class II aptamer in complex with GTP were computed using the XplorNIH 2.40 framework49, 50. The protocol used is based on the gb1_rdc example that is included in the XplorNIH package and was adjusted to fold the respective RNA during an annealing procedure starting from an extended structure. The initial temperature was set to 3500 and the simulation at this high temperature was stopped after reaching 1000 ps or 10,000 steps. The temperature was then reduced to a final temperature of 25 in steps of 12.5 degrees, and simulations were run for at least 4 ps or 2000 steps at every temperature step. The annealing procedure was repeated to obtain 200 models. The protocol made use of the recently published torsion potential RNA-ff151, and the sPRE data were included in the structure determination using the nbTargetPot module27 of the framework. The scaling factor (weight) of the sPRE potential depends mainly on the scaling factors of the other energy terms and does not need to scale with the size of the RNA. For the benchmark presented here, the weight of the sPRE potential was determined by testing different weights: one weight was determined for every RNA, and the weight that produced the best models was then used for all computations of the corresponding RNA. The lower value for the UUCG tetraloop can be explained by the fact that the Xplor runs of the UUCG tetraloop included more energy functions than those of the GTP aptamer (in the case of the UUCG tetraloop, torsion angle and RDC restraints, both obtained from PDB entry 2KOC, were used). The sPRE data sets of the UUCG tetraloop and GTP aptamer were filtered to remove all data points with an experimental error above 10% and all data points of nitrogen-bound protons. The nbTargetPot energy was initialized using the slope and intercept parameters as returned by the calibrate function of the nbTargetPotTools module in combination with the NMR structure of the corresponding RNA. The weight of the nbTargetPot energy was set to 1000 in the case of the UUCG tetraloop and to 3000 for the GTP aptamer. For further details such as the weight factors of all potentials, please refer to the Python code of the protocols, which is shown at the end of this document for both RNAs.
The annealing protocol was then used to benchmark the benefit of the sPRE data for structure prediction. To this end, structural models of the UUCG tetraloop and the GTP-bound aptamer were computed in the absence and presence of the sPRE potential and in combination with experimental restraints derived from hydrogen bond information and experimental NOEs (PDB entries 2KOC and 5LWJ)37, 38, 40. To simulate different levels of NOE assignment, random experimental NOE subsets were created by randomly drawing a certain percentage of restraints from the full NOE set (compare Supplementary Tables 3 and 4). To account for the effect of the random selection process, at least 3 different random sets with the same number of NOE restraints were created and used to benchmark the impact of the sPRE data. It should be noted that the randomly created NOE subsets not only account for different levels of assignment, but also simulate different qualities of NOE data, as random subsets with the same number of NOE restraints perform differently in driving the structure determination to the correct fold. For every given set of restraints, 200 structure models were computed with and without the sPRE data. To quantify the impact of the sPRE data, the 20 best models of each run (10%, scored according to the total energy) were selected and the RMSD to the published NMR structures was computed. Computation of the RMSD was performed on all carbon, nitrogen and phosphorus atoms in the case of the UUCG tetraloop. For the GTP-bound aptamer, the RMSD was computed using all carbon, nitrogen and phosphorus atoms of the GTP ligand and of all non-terminal nucleotides except the intrinsically flexible nucleotides A13 and U21 (ref. 40).
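The random subsets themselves are straightforward to generate; a minimal sketch follows (the representation of the restraint list is an assumption for illustration).

```python
# Minimal sketch: draw several random NOE subsets containing a fixed
# percentage of the full restraint list, as in Supplementary Tables 3 and 4.
import random

def noe_subsets(noes, fraction, n_sets=3, seed=42):
    rng = random.Random(seed)
    k = int(round(fraction * len(noes)))
    return [rng.sample(noes, k) for _ in range(n_sets)]

# e.g. three independent random 30% subsets of the full NOE list:
# subsets = noe_subsets(all_noes, 0.30)
```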
A second benchmark was performed to address the performance of the sPRE potential in the absence of any other experimental restraint as well as in combination with other experimental NMR restraints, such as hydrogen bonds, NOEs, RDCs and torsion angles. Experimental restraints for the UUCG tetraloop were obtained from PDB entry 2KOC37, 38. Two hundred structure models of the UUCG tetraloop were computed with and without the sPRE data for every restraint set (compare Supplementary Table 5) and the 20 best-scored models according to the total energy were selected. The average RMSD to the published NMR structure was computed for the top 20 models and used to quantify the impact of the sPRE data (computation of the RMSD was performed on all carbon, nitrogen and phosphorus atoms).
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. All scripts used for data analysis and back-calculation of sPRE data are openly available (http://mbbc.medunigraz.at/forschung/forschungseinheiten-und-gruppen/forschungsgruppe-tobias-madl/software/).
Morris, K. V. & Mattick, J. S. The rise of regulatory RNA. Nat Rev Genet 15, 423–437, doi:10.1038/nrg3722 (2014).
Furtig, B., Richter, C., Wohnert, J. & Schwalbe, H. NMR spectroscopy of RNA. Chembiochem 4, 936–962, doi:10.1002/cbic.200300700 (2003).
Latham, M. P., Brown, D. J., McCallum, S. A. & Pardi, A. NMR methods for studying the structure and dynamics of RNA. Chembiochem 6, 1492–1505, doi:10.1002/cbic.200500123 (2005).
Bothe, J. R. et al. Characterizing RNA dynamics at atomic resolution using solution-state NMR spectroscopy. Nat Methods 8, 919–931, doi:10.1038/nmeth.1735 (2011).
Dominguez, C., Schubert, M., Duss, O., Ravindranathan, S. & Allain, F. H. Structure determination and dynamics of protein-RNA complexes by NMR spectroscopy. Prog Nucl Magn Reson Spectrosc 58, 1–61, doi:10.1016/j.pnmrs.2010.10.001 (2011).
Wijmenga, S. S. & van Buuren, B. N. M. The use of NMR methods for conformational studies of nucleic acids. Progress in Nuclear Magnetic Resonance Spectroscopy 32, 287–387, doi:10.1016/s0079-6565(97)00023-x (1998).
Wu, H., Finger, L. D. & Feigon, J. Structure determination of protein/RNA complexes by NMR. Methods Enzymol 394, 525–545, doi:10.1016/S0076-6879(05)94022-6 (2005).
Varani, G., Aboul-ela, F. & Allain, F. H. NMR investigation of RNA structure. Prog Nucl Magn Reson Spectrosc 29, 51–127, doi:10.1016/0079-6565(96)01028-X (1996).
Duss, O., Lukavsky, P. J. & Allain, F. H. Isotope labeling and segmental labeling of larger RNAs for NMR structural studies. Adv Exp Med Biol 992, 121–144, doi:10.1007/978-94-007-4954-2_7 (2012).
Liu, Y. et al. Synthesis and applications of RNAs with position-selective labelling and mosaic composition. Nature 522, 368–372, doi:10.1038/nature14352 (2015).
Lu, K., Miyazaki, Y. & Summers, M. F. Isotope labeling strategies for NMR studies of RNA. J Biomol NMR 46, 113–125, doi:10.1007/s10858-009-9375-2 (2010).
Milligan, J. F., Groebe, D. R., Witherell, G. W. & Uhlenbeck, O. C. Oligoribonucleotide synthesis using T7 RNA polymerase and synthetic DNA templates. Nucleic Acids Res 15, 8783–8798 (1987).
Milligan, J. F. & Uhlenbeck, O. C. Synthesis of small RNAs using T7 RNA polymerase. Methods Enzymol 180, 51–62 (1989).
Hennig, M., Williamson, J. R., Brodsky, A. S. & Battiste, J. L. Recent advances in RNA structure determination by NMR. Curr Protoc Nucleic Acid Chem Chapter 7, Unit 7.7, doi:10.1002/0471142700.nc0707s02 (2001).
Wunderlich, C. H. et al. Synthesis of (6-(13)C)pyrimidine nucleotides as spin-labels for RNA dynamics. J Am Chem Soc 134, 7558–7569, doi:10.1021/ja302148g (2012).
Johnson, J. E. Jr., Julien, K. R. & Hoogstraten, C. G. Alternate-site isotopic labeling of ribonucleotides for NMR studies of ribose conformational dynamics in RNA. J Biomol NMR 35, 261–274, doi:10.1007/s10858-006-9041-x (2006).
Alvarado, L. J. et al. Regio-selective chemical-enzymatic synthesis of pyrimidine nucleotides facilitates RNA structure and dynamics studies. Chembiochem 15, 1573–1577, doi:10.1002/cbic.201402130 (2014).
Longhini, A. P. et al. Chemo-enzymatic synthesis of site-specific isotopically labeled nucleotides for use in NMR resonance assignment, dynamics and structural characterizations. Nucleic Acids Res 44, e52, doi:10.1093/nar/gkv1333 (2016).
Wenter, P., Reymond, L., Auweter, S. D., Allain, F. H. & Pitsch, S. Short, synthetic and selectively 13C-labeled RNA sequences for the NMR structure determination of protein-RNA complexes. Nucleic Acids Res 34, e79, doi:10.1093/nar/gkl427 (2006).
Tzakos, A. G., Easton, L. E. & Lukavsky, P. J. Complementary segmental labeling of large RNAs: economic preparation and simplified NMR spectra for measurement of more RDCs. J Am Chem Soc 128, 13344–13345, doi:10.1021/ja064807o (2006).
Xu, J., Lapham, J. & Crothers, D. M. Determining RNA solution structure by segmental isotopic labeling and NMR: application to Caenorhabditis elegans spliced leader RNA 1. Proc Natl Acad Sci USA 93, 44–48 (1996).
Nelissen, F. H. et al. Multiple segmental and selective isotope labeling of large RNA for NMR structural studies. Nucleic Acids Res 36, e89, doi:10.1093/nar/gkn397 (2008).
Clore, G. M. & Iwahara, J. Theory, practice, and applications of paramagnetic relaxation enhancement for the characterization of transient low-population states of biological macromolecules and their complexes. Chem Rev 109, 4108–4139, doi:10.1021/cr900033p (2009).
Lipsitz, R. S. & Tjandra, N. Residual dipolar couplings in NMR structure analysis. Annu Rev Biophys Biomol Struct 33, 387–413, doi:10.1146/annurev.biophys.33.110502.140306 (2004).
Dallmann, A. & Sattler, M. Detection of hydrogen bonds in dynamic regions of RNA by NMR spectroscopy. Curr Protoc Nucleic Acid Chem 59, 7.22.1–19, doi:10.1002/0471142700.nc0722s59 (2014).
Grzesiek, S., Cordier, F., Jaravine, V. & Barfield, M. Insights into biomolecular hydrogen bonds from hydrogen bond scalar couplings. Progress in Nuclear Magnetic Resonance Spectroscopy 45, 275–300, doi:10.1016/j.pnmrs.2004.08.001 (2004).
Wang, Y., Schwieters, C. D. & Tjandra, N. Parameterization of solvent-protein interaction and its use on NMR protein structure determination. J Magn Reson 221, 76–84, doi:10.1016/j.jmr.2012.05.020 (2012).
Modig, K., Liepinsh, E., Otting, G. & Halle, B. Dynamics of protein and peptide hydration. Journal of the American Chemical Society 126, 102–114, doi:10.1021/ja038325d (2004).
Hocking, H. G., Zangger, K. & Madl, T. Studying the structure and dynamics of biomolecules by using soluble paramagnetic probes. Chemphyschem 14, 3082–3094, doi:10.1002/cphc.201300219 (2013).
Madl, T., Bermel, W. & Zangger, K. Use of relaxation enhancements in a paramagnetic environment for the structure determination of proteins using NMR spectroscopy. Angew Chem Int Ed Engl 48, 8259–8262, doi:10.1002/anie.200902561 (2009).
Pintacuda, G. & Otting, G. Identification of protein surfaces by NMR measurements with a paramagnetic Gd(III) chelate. J Am Chem Soc 124, 372–373 (2002).
Madl, T., Guttler, T., Gorlich, D. & Sattler, M. Structural analysis of large protein complexes using solvent paramagnetic relaxation enhancements. Angew Chem Int Ed Engl 50, 3993–3997, doi:10.1002/anie.201007168 (2011).
Hartlmuller, C., Gobl, C. & Madl, T. Prediction of Protein Structure Using Surface Accessibility Data. Angew Chem Int Ed Engl 55, 11970–11974, doi:10.1002/anie.201604788 (2016).
Venditti, V., Niccolai, N. & Butcher, S. E. Measuring the dynamic surface accessibility of RNA with the small paramagnetic molecule TEMPOL. Nucleic Acids Res 36, e20, doi:10.1093/nar/gkm1062 (2008).
Bernini, A. et al. NMR studies on the surface accessibility of the archaeal protein Sso7d by using TEMPOL and Gd(III)(DTPA-BMA) as paramagnetic probes. Biophys Chem 137, 71–75, doi:10.1016/j.bpc.2008.07.003 (2008).
Caravan, P., Ellison, J. J., McMurry, T. J. & Lauffer, R. B. Gadolinium(III) Chelates as MRI Contrast Agents: Structure, Dynamics, and Applications. Chem Rev 99, 2293–2352 (1999).
Furtig, B., Richter, C., Bermel, W. & Schwalbe, H. New NMR experiments for RNA nucleobase resonance assignment and chemical shift analysis of an RNA UUCG tetraloop. J Biomol NMR 28, 69–79, doi:10.1023/B:JNMR.0000012863.63522.1f (2004).
Nozinovic, S., Furtig, B., Jonker, H. R., Richter, C. & Schwalbe, H. High-resolution NMR structure of an RNA model system: the 14-mer cUUCGg tetraloop hairpin RNA. Nucleic Acids Res 38, 683–694, doi:10.1093/nar/gkp956 (2010).
Wolter, A. C. et al. NMR resonance assignments for the class II GTP binding RNA aptamer in complex with GTP. Biomol NMR Assign 10, 101–105, doi:10.1007/s12104-015-9646-7 (2016).
Wolter, A. C. et al. A Stably Protonated Adenine Nucleotide with a Highly Shifted pKa Value Stabilizes the Tertiary Structure of a GTP-Binding RNA Aptamer. Angewandte Chemie. doi:10.1002/anie.201609184 (2016).
Matsumura, K. & Komiyama, M. Enormously fast RNA hydrolysis by lanthanide(III) ions under physiological conditions: eminent candidates for novel tools of biotechnology. J Biochem 122, 387–394 (1997).
Deng, N. J. & Cieplak, P. Free energy profile of RNA hairpins: a molecular dynamics simulation study. Biophys J 98, 627–636, doi:10.1016/j.bpj.2009.10.040 (2010).
Giambasu, G. M., York, D. M. & Case, D. A. Structural fidelity and NMR relaxation analysis in a prototype RNA hairpin. RNA 21, 963–974, doi:10.1261/rna.047357.114 (2015).
Villa, A., Widjajakusuma, E. & Stock, G. Molecular dynamics simulation of the structure, dynamics, and thermostability of the RNA hairpins uCACGg and cUUCGg. J Phys Chem B 112, 134–142, doi:10.1021/jp0764337 (2008).
Borkar, A. N., Vallurupalli, P., Camilloni, C., Kay, L. E. & Vendruscolo, M. Simultaneous NMR characterisation of multiple minima in the free energy landscape of an RNA UUCG tetraloop. Phys Chem Chem Phys 19, 2797–2804, doi:10.1039/c6cp08313g (2017).
Vallurupalli, P. & Kay, L. E. A suite of 2H NMR spin relaxation experiments for the measurement of RNA dynamics. J Am Chem Soc 127, 6893–6901, doi:10.1021/ja0427799 (2005).
Williams, D. J. & Hall, K. B. Experimental and computational studies of the G[UUCG]C RNA tetraloop. J Mol Biol 297, 1045–1061, doi:10.1006/jmbi.2000.3623 (2000).
Akke, M., Fiala, R., Jiang, F., Patel, D. & Palmer, A. G. 3rd. Base dynamics in a UUCG tetraloop RNA hairpin characterized by 15N spin relaxation: correlations with structure and stability. RNA 3, 702–709 (1997).
Schwieters, C. D., Kuszewski, J. J. & Clore, G. M. Using Xplor-NIH for NMR molecular structure determination. Progress in nuclear magnetic resonance spectroscopy 48, 47–62 (2006).
Schwieters, C. D., Kuszewski, J. J., Tjandra, N. & Clore, G. M. The Xplor-NIH NMR molecular structure determination package. Journal of magnetic resonance 160, 65–73 (2003).
Bermejo, G. A., Clore, G. M. & Schwieters, C. D. Improving NMR Structures of RNA. Structure 24, 806–815, doi:10.1016/j.str.2016.03.007 (2016).
Sripakdeevong, P. et al. Structure determination of noncanonical RNA motifs guided by (1)H NMR chemical shifts. Nat Methods 11, 413–416, doi:10.1038/nmeth.2876 (2014).
Dallmann, A. et al. Site-Specific Isotope-Labeling of Inosine Phosphoramidites and NMR Analysis of an Inosine-Containing RNA Duplex. Chemistry. doi:10.1002/chem.201602784 (2016).
Farjon, J. et al. Longitudinal-relaxation-enhanced NMR experiments for the study of nucleic acids in solution. Journal of the American Chemical Society 131, 8571–8577, doi:10.1021/ja901633y (2009).
Delaglio, F. et al. NMRPipe: a multidimensional spectral processing system based on UNIX pipes. J Biomol NMR 6, 277–293 (1995).
Vranken, W. F. et al. The CCPN data model for NMR spectroscopy: development of a software pipeline. Proteins 59, 687–696, doi:10.1002/prot.20449 (2005).
Helmus, J. J. & Jaroniec, C. P. Nmrglue: an open source Python package for the analysis of multidimensional NMR data. J Biomol NMR 55, 355–367, doi:10.1007/s10858-013-9718-x (2013).
This work was supported by the Integrative Metabolism Research Center Graz (to T.M.), the Austrian infrastructure programme 2016/2017 (to T.M.), BioTechMed/Graz (to T.M.), the Omics Center Graz (to T.M.), the Bavarian Ministry of Sciences, Research and the Arts (Bavarian Molecular Biosystems Research Network, to T.M.), the President's International Fellowship Initiative of CAS (No. 2015VBB045, to T.M.), the National Natural Science Foundation of China (No. 31450110423, to T.M.), the Austrian Science Fund (FWF: P28854 and W1226-B18 to T.M.) as well as the Deutsche Forschungsgemeinschaft (DFG) with the grants Wo901/1-1 (to J.W.), SFB1035 (to M.S.), GRK1721 (to M.S.) and MA5703/1-1 (to T.M.).
Christoph Hartlmüller and Johannes C. Günther contributed equally to this work.
Center for Integrated Protein Science Munich, Department Chemie, Technical University of Munich, Lichtenbergstr. 4, 85748, Garching, Germany
Christoph Hartlmüller, Johannes C. Günther, Michael Sattler & Tobias Madl
Institute of Structural Biology, Helmholtz Zentrum München, Ingolstadter Landstr. 1, 85764, Neuherberg, Germany
Institut für Molekulare Biowissenschaften and Zentrum für Biomolekulare Magnetische Resonanz (BMRZ), Goethe-Universität Frankfurt, Max-von-Laue Str. 9, 60438, Frankfurt/M, Germany
Antje C. Wolter & Jens Wöhnert
Institute of Molecular Biology and Biochemistry, Center of Molecular Medicine, Medical University of Graz, Harrachgasse 21, 8010, Graz, Austria
Tobias Madl
C.H. and T.M. designed the studies, established and tested the NMR experiments, and performed the prediction of sPRE data. J.G., A.W., J.W. and M.S. prepared the isotope-labelled RNA samples. C.H. and J.G. performed the NMR titrations. C.H. processed and fitted the NMR data, performed the structure prediction computations and analyzed the obtained models. C.H. and T.M. wrote the manuscript and all authors gave critical feedback and approved the final version of the manuscript.
Correspondence to Tobias Madl.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Hartlmüller, C., Günther, J.C., Wolter, A.C. et al. RNA structure refinement using NMR solvent accessibility data. Sci Rep 7, 5393 (2017). https://doi.org/10.1038/s41598-017-05821-z
Communications on Pure & Applied Analysis
March 2017, Volume 16, Issue 2
Singular periodic solutions for the p-Laplacian in a punctured domain
Shanming Ji, Yutian Li, Rui Huang and Xuejing Yin
2017, 16(2): 373-392 doi: 10.3934/cpaa.2017019
Abstract. In this paper we are interested in studying singular periodic solutions for the p-Laplacian in a punctured domain. We find an interesting phenomenon that there exists a critical exponent $p_c = N$ and a singular exponent $q_s = p-1$. Precisely speaking, only if $p > p_c$ can singular periodic solutions exist; while if $1 < p \le p_c$ then all of the solutions have no singularity. By the singular exponent $q_s = p-1$, we mean that in the case when $q = q_s$, completely different from the remaining case $q \ne q_s$, the problem may or may not have solutions depending on the coefficients of the equation.
Shanming Ji, Yutian Li, Rui Huang, Xuejing Yin. Singular periodic solutions for the p-Laplacian in a punctured domain. Communications on Pure & Applied Analysis, 2017, 16(2): 373-392. doi: 10.3934/cpaa.2017019.
Long-term stability for KdV solitons in weighted $H^s$ spaces
Brian Pigott and Sarah Raynor
In this work, we consider the stability of solitons for the KdV equation below the energy space, using spatially-exponentially-weighted norms. Using a combination of the I-method and spectral analysis following Pego and Weinstein, we are able to show that, in the exponentially weighted space, the perturbation of a soliton decays exponentially for arbitrarily long times. The finite time restriction is due to a lack of global control of the unweighted perturbation.
Brian Pigott, Sarah Raynor. Long-term stability for KdV solitons in weighted $H^s$ spaces. Communications on Pure & Applied Analysis, 2017, 16(2): 393-416. doi: 10.3934/cpaa.2017020.
Center conditions for generalized polynomial Kukles systems
Jaume Giné
Abstract. In this paper we study the center problem for certain generalized Kukles systems $\dot{x} = y, \qquad \dot{y} = P_0(x) + P_1(x)y + P_2(x)y^2 + P_3(x)y^3,$ where $P_i(x)$ are polynomials of degree $n$, $P_0(0) = 0$ and $P_0'(0) < 0$. Computing the focal values and using modular arithmetic and Gröbner bases, we find the center conditions for such systems when $P_0$ is of degree 2 and $P_i$ for $i = 1, 2, 3$ are of degree 3 without constant terms. We also establish a conjecture about the center conditions for such systems.
Jaume Giné. Center conditions for generalized polynomial Kukles systems. Communications on Pure & Applied Analysis, 2017, 16(2): 417-426. doi: 10.3934/cpaa.2017021.
Diffusive predator-prey models with stage structure on prey and Beddington-DeAngelis functional responses
Seong Lee and Inkyung Ahn
In this paper, we examine a diffusive predator-prey model with Beddington-DeAngelis functional response and stage structure on prey under homogeneous Neumann boundary conditions, where the discrete time delay covers the period from the birth of immature prey to their maturity. We investigate the dynamics of their permanence and the extinction of the predator, and provide sufficient conditions for the global attractiveness and the locally asymptotical stability of the semi-trivial and coexistence equilibria.
Seong Lee, Inkyung Ahn. Diffusive predator-prey models with stage structure on prey and Beddington-DeAngelis functional responses. Communications on Pure & Applied Analysis, 2017, 16(2): 427-442. doi: 10.3934/cpaa.2017022.
Existence and upper semicontinuity of $(L^2, L^q)$ pullback attractors for a stochastic p-Laplacian equation
Linfang Liu and Xianlong Fu
2017, 16(2): 443-474 doi: 10.3934/cpaa.2017023
In this paper we study the dynamical behavior of solutions for a non-autonomous $p$-Laplacian equation driven by a white noise term. We first establish the abstract results on existence and continuity of bi-spatial pullback random attractors for a cocycle. Then, by conducting some tail estimates and applying the obtained abstract results, we show the existence and upper semi-continuity of $(L^{2}(\mathbb{R}^{n}), L^{q}(\mathbb{R}^{n}))$-pullback attractors for this $p$-Laplacian equation.
Linfang Liu, Xianlong Fu. Existence and upper semicontinuity of $(L^2, L^q)$ pullback attractors for a stochastic p-Laplacian equation. Communications on Pure & Applied Analysis, 2017, 16(2): 443-474. doi: 10.3934/cpaa.2017023.
Estimates for eigenvalues of a system of elliptic equations with drift and of bi-drifting Laplacian
Feng Du and Adriano Cavalcante Bezerra
In this paper, we firstly study the eigenvalue problem of a system of elliptic equations with drift and get some universal inequalities of Payne-Pólya-Weinberger-Yang type on a bounded domain in Euclidean spaces and in Gaussian shrinking solitons. Furthermore, we study two kinds of the clamped plate problems and the buckling problems for the bi-drifting Laplacian and get some sharp lower bounds for the first eigenvalue for these eigenvalue problems on compact manifolds with boundary and positive m-weighted Ricci curvature or on compact manifolds with boundary under some condition on the weighted Ricci curvature.
Feng Du, Adriano Cavalcante Bezerra. Estimates for eigenvalues of a system of elliptic equations with drift and of bi-drifting Laplacian. Communications on Pure & Applied Analysis, 2017, 16(2): 475-491. doi: 10.3934/cpaa.2017024.
Multi-peak solutions for nonlinear Choquard equation with a general nonlinearity
Minbo Yang, Jianjun Zhang and Yimin Zhang
In this paper, we study a class of nonlinear Choquard type equations involving a general nonlinearity. By using a penalization argument, we show that there exists a family of solutions having multiple concentration regions which concentrate at the minimum points of the potential V. Moreover, neither the monotonicity of $f(s)/s$ nor the so-called Ambrosetti-Rabinowitz condition is required.
Minbo Yang, Jianjun Zhang, Yimin Zhang. Multi-peak solutions for nonlinear Choquard equation with a general nonlinearity. Communications on Pure & Applied Analysis, 2017, 16(2): 493-512. doi: 10.3934/cpaa.2017025.
Liouville theorems for elliptic problems in variable exponent spaces
Sylwia Dudek and Iwona Skrzypczak
We investigate nonexistence of nonnegative solutions to a partial differential inequality involving the p(x)-Laplacian of the form
$$-\Delta_{p(x)} u \geqslant \Phi(x, u(x), \nabla u(x))$$
in $\mathbb{R}^n$, as well as in an outer domain $\Omega \subseteq \mathbb{R}^n$, where $\Phi(x, u, \nabla u)$ is a locally integrable Carathéodory function. We assume that $\Phi(x, u, \nabla u) \ge 0$ or is compatible with $p$ and $u$. Growth conditions on $u$ and $p$ lead to Liouville-type results for $u$.
Sylwia Dudek, Iwona Skrzypczak. Liouville theorems for elliptic problems in variable exponent spaces. Communications on Pure & Applied Analysis, 2017, 16(2): 513-532. doi: 10.3934/cpaa.2017026.
Asymptotic behavior of solutions to a nonlinear plate equation with memory
Yongqin Liu
In this paper we consider the initial value problem of a nonlinear plate equation with memory-type dissipation in multi-dimensional space. Due to the memory effect, more technique is needed to deal with the global existence and decay property of solutions compared with the frictional dissipation. The model we study is an inertial one and the rotational inertia plays an important role in the part of energy estimates. By exploiting the time-weighted energy method we prove the global existence and asymptotic decay of solutions under smallness and suitable regularity assumptions on the initial data.
Yongqin Liu. Asymptotic behavior of solutions to a nonlinear plate equation with memory. Communications on Pure & Applied Analysis, 2017, 16(2): 533-556. doi: 10.3934/cpaa.2017027.
Global dynamics of solutions with group invariance for the nonlinear Schrödinger equation
Takahisa Inui
We consider the focusing mass-supercritical and energy-subcritical nonlinear Schrödinger equation (NLS). We are interested in the global behavior of the solutions to (NLS) with group invariance. By the group invariance, we can determine the global behavior of the solutions above the ground state standing waves.
Takahisa Inui. Global dynamics of solutions with group invariance for the nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2017, 16(2): 557-590. doi: 10.3934/cpaa.2017028.
Existence and stability of periodic solutions for relativistic singular equations
Jifeng Chu, Zaitao Liang, Fangfang Liao and Shiping Lu
In this paper, we study the existence, multiplicity and stability of positive periodic solutions of relativistic singular differential equations. The proof of the existence and multiplicity is based on the continuation theorem of coincidence degree theory, and the proof of stability is based on a known connection between the index of a periodic solution and its stability.
Jifeng Chu, Zaitao Liang, Fangfang Liao, Shiping Lu. Existence and stability of periodic solutions for relativistic singular equations. Communications on Pure & Applied Analysis, 2017, 16(2): 591-609. doi: 10.3934/cpaa.2017029.
The existence and nonexistence results of ground state nodal solutions for a Kirchhoff type problem
Xiao-Jing Zhong and Chun-Lei Tang
In this paper, we investigate the existence and nonexistence of ground state nodal solutions to a class of Kirchhoff type problems
$$-\left(a + b\int_{\Omega}|\nabla u|^{2}\,dx\right)\triangle u = \lambda u + |u|^{2}u, \qquad u \in H_{0}^{1}(\Omega),$$
where $a, b>0$, $\lambda < a\lambda_1$, and $\lambda_1$ is the principal eigenvalue of $(-\triangle, H_0^{1}(\Omega))$. With the help of the Nehari manifold, we obtain that there is $\Lambda>0$ such that the Kirchhoff type problem possesses at least one ground state nodal solution $u_b$ for all $0 < b < \Lambda$ and $\lambda < a\lambda_1$, and prove that its energy is strictly larger than twice that of ground state solutions. Moreover, we give a convergence property of $u_b$ as $b\searrow 0$. Besides, we first establish the nonexistence result of nodal solutions for all $b\geq\Lambda$. This paper can be regarded as the extension and complementary work of W. Shuai (2015)[21], X.H. Tang and B.T. Cheng (2016)[22].
Xiao-Jing Zhong, Chun-Lei Tang. The existence and nonexistence results of ground state nodal solutions for a Kirchhoff type problem. Communications on Pure & Applied Analysis, 2017, 16(2): 611-628. doi: 10.3934/cpaa.2017030.
Regularity estimates for continuous solutions of α-convex balance laws
Laura Caravenna
This paper proves new regularity estimates for continuous solutions to the balance equation
$$\partial_t u + \partial_x f(u) = g, \qquad g\ \text{bounded},\quad f \in C^{2n}(\mathbb{R})$$
when the flux $f$ satisfies a convexity assumption that we denote as 2n-convexity. The results are known in the case of the quadratic flux by very different arguments in [14,10,8]. We prove that the continuity of $u$ must be in fact $1/2n$-Hölder continuity and that the distributional source term $g$ is determined by the classical derivative of $u$ along any characteristics; part of the proof consists in showing that this classical derivative is well defined at any `Lebesgue point' of $g$ for suitable coverings. These two regularity statements fail in general for $C^{\infty}(\mathbb{R})$, strictly convex fluxes, see [3].
Laura Caravenna. Regularity estimates for continuous solutions of α-convex balance laws. Communications on Pure & Applied Analysis, 2017, 16(2): 629-644. doi: 10.3934/cpaa.2017031.
S-shaped and broken S-shaped bifurcation curves for a multiparameter diffusive logistic problem with Holling type-Ⅲ functional response
Tzung-shin Yeh
We study exact multiplicity and bifurcation curves of positive solutions for a multiparameter diffusive logistic problem with Holling type-Ⅲ functional response
$$\left\{\begin{array}{ll} u''(x) + \lambda\left[ru\left(1 - \dfrac{u}{q}\right) - \dfrac{u^p}{1+u^p}\right] = 0, & -1 < x < 1,\\ u(-1) = u(1) = 0, & \end{array}\right.$$
where $u$ is the population density of the species, $p > 1$, $q, r$ are two positive dimensionless parameters, and $\lambda > 0$ is a bifurcation parameter. For fixed $p > 1$, assume that $q, r$ satisfy one of the following conditions: (ⅰ) $r \le \eta_{1,p}^{*}\,q$ and $(q, r)$ lies above the curve
$$\Gamma_1 = \left\{(q,r): q(a) = \frac{a[2a^p - (p-2)]}{a^p - (p-1)},\ r(a) = \frac{a^{p-1}[2a^p - (p-2)]}{(a^p+1)^2},\ \sqrt[p]{p-1} < a < C_p^*\right\};$$
(ⅱ) $r \le \eta_{2,p}^{*}\,q$ and $(q, r)$ lies on or below the curve $\Gamma_1$, where $\eta_{1,p}^{*}$ and $\eta_{2,p}^{*}$ are two positive constants, and $C_p^* = \left(\frac{p^2+3p-4+p\sqrt{p^2+6p-7}}{4}\right)^{1/p}$. Then on the $(\lambda, \|u\|_\infty)$-plane, we give a classification of three qualitatively different bifurcation curves: an S-shaped curve, a broken S-shaped curve, and a monotone increasing curve. Hence we are able to determine the exact multiplicity of positive solutions by the values of $q, r$ and $\lambda$.
Tzung-shin Yeh. S-shaped and broken S-shaped bifurcation curves for a multiparameter diffusive logistic problem with Holling type-Ⅲ functional response. Communications on Pure & Applied Analysis, 2017, 16(2): 645-670. doi: 10.3934/cpaa.2017032.
A concentration phenomenon of the least energy solution to non-autonomous elliptic problems with a totally degenerate potential
Shun Kodama
In this paper we study the following non-autonomous singularly perturbed Dirichlet problem:
$$\varepsilon^2 \Delta u - u + K(x)f(u) = 0,\; u > 0 \quad \text{in}\ \Omega, \qquad u = 0 \quad \text{on}\ \partial\Omega,$$
for a totally degenerate potential K. Here ε > 0 is a small parameter, $\Omega \subset \mathbb{R}^N$ is a bounded domain with a smooth boundary, and f is an appropriate superlinear subcritical function. In particular, f satisfies $0 < \liminf_{ t \to 0+} f(t)/t^q \leq \limsup_{ t \to 0+} f(t)/t^q < + \infty$ for some $1 < q < + \infty$. We show that the least energy solutions concentrate at the maximal point of the modified distance function $D(x) = \min \{ (q+1) d(x, \partial A), 2 d(x, \partial \Omega) \}$, where $A = \{ x \in \bar{ \Omega } \mid K(x) = \max_{ y \in \bar{ \Omega } } K(y) \}$ is assumed to be a totally degenerate set satisfying $A^{\circ} \ne \emptyset$.
Shun Kodama. A concentration phenomenon of the least energy solution to non-autonomous elliptic problems with a totally degenerate potential. Communications on Pure & Applied Analysis, 2017, 16(2): 671-698. doi: 10.3934/cpaa.2017033.
A sustainability condition for stochastic forest model
Tôn Việt Tạ, Linh Thi Hoai Nguyen and Atsushi Yagi
A stochastic forest model of young and old age class trees is studied. First, we prove existence, uniqueness and boundedness of global nonnegative solutions. Second, we investigate asymptotic behavior of solutions by giving a sufficient condition for sustainability of the forest. Under this condition, we show existence of a Borel invariant measure. Third, we present several sufficient conditions for decline of the forest. Finally, we give some numerical examples.
Tôn Việt Tạ, Linh Thi Hoai Nguyen, Atsushi Yagi. A sustainability condition for stochastic forest model. Communications on Pure & Applied Analysis, 2017, 16(2): 699-718. doi: 10.3934/cpaa.2017034.
Formula For Z Test Statistic
Other results for Formula For Z Test Statistic:
Z Test Formula in Statistics | Step by Step Calculation ...
Step 2: Finally, the z-test statistic is computed by subtracting the population mean from the variable and then dividing the result by the population standard deviation, as shown below. Z = (x - μ) / σ. The formula for the z-test statistic for a sample is derived using the following steps: Step 1: First, calculate the sample mean and the sample standard deviation, as above.
Link: https://www.wallstreetmojo.com/z-test-formula/
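As a quick check of the formulas quoted above, here is a minimal Python sketch (the numbers are illustrative only):

```python
# Minimal sketch: z statistic for a sample mean, z = (x_bar - mu) / (sigma/sqrt(n)).
import math

def z_statistic(sample_mean, pop_mean, pop_sd, n):
    return (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))

z = z_statistic(sample_mean=105.0, pop_mean=100.0, pop_sd=15.0, n=36)
print(z)  # 2.0
```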
Z-Test, t-Test, F-Test & χ²-Test Statistic Calculator
getcalc.com's statistic calculator & formulas to estimate Z0 for the Z-test, t0 for Student's t-test, F0 for the F-test & (χ²)0 for the χ²-test, for sample mean, proportion, or difference between two means or proportions hypothesis testing in statistics & probability experiments.
Link: https://getcalc.com/statistics-statistic-calculator.htm
Z-test - Wikipedia
A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution. The Z-test tests the mean of a distribution in which we already know the population variance σ². Because of the central limit theorem, many test statistics are approximately normally distributed for large samples.
Link: https://en.wikipedia.org/wiki/Z-test
Z Test Statistics Formula | Calculator (Examples With ...
Z Test Statistics Formula - Example #1. Suppose a person wants to check or test whether tea and coffee are equally popular in the city. In that case, he can use the z-test statistic method to obtain the results by taking a sample size of, say, 500 from the city, out of which suppose 280 are tea drinkers.
Link: https://www.educba.com/z-test-statistics-formula/
Statistics - One Proportion Z Test - Tutorialspoint
The test statistic is a z-score (z) defined by the following equation. ${z = \frac{(p - P)}{\sigma}}$ where P is the hypothesized value of population proportion in the null hypothesis, p is the sample proportion, and ${\sigma}$ is the standard deviation of the sampling distribution.
Link: https://www.tutorialspoint.com/statistics/one_proportion_z_test.htm
Test Statistic Calculator - Learning about Electronics
The formula for the test statistic for a single proportion is Z = (p̂ - p₀) / √(p₀(1 - p₀)/n), where p̂ is the observed sample proportion of people who have a particular characteristic of interest (for example, the proportion of women in the sample who are currently pregnant), p₀ is the claimed value for the null hypothesis, and n is the sample size.
Link: http://www.learningaboutelectronics.com/Articles/Test-statistic-calculator.php
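The one-proportion test described above can be sketched in plain Python; the two-sided p-value uses the standard normal CDF via math.erf, and the numbers reuse the tea-drinkers example quoted earlier on this page.

```python
# Minimal sketch: one-proportion z test, z = (p_hat - p0) / sqrt(p0*(1-p0)/n),
# with a two-sided p-value from the standard normal distribution.
import math

def proportion_z_test(successes, n, p0):
    p_hat = successes / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return z, 2 * (1 - phi)                            # two-sided p-value

z, p = proportion_z_test(successes=280, n=500, p0=0.5)  # z ~ 2.68, p ~ 0.007
```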
How to Use the Z.TEST Function in Excel - ThoughtCo
Calculate the test statistic, which is a z-score. Calculate the p-value by using the normal distribution. In this case the p-value is the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true.
Link: https://www.thoughtco.com/hypothesis-tests-z-test-function-excel-3126622
Difference Between t-test and z-test (with Comparison ...
T-test refers to a univariate hypothesis test based on t-statistic, wherein the mean is known, and population variance is approximated from the sample. On the other hand, Z-test is also a univariate test that is based on standard normal distribution.
Link: https://keydifferences.com/difference-between-t-test-and-z-test.html
One-Sample z-test
Hypothesis test. Formula: z = (x̄ - Δ) / (σ/√n), where x̄ is the sample mean, Δ is a specified value to be tested, σ is the population standard deviation, and n is the size of the sample. Look up the significance level of the z‐value in the standard normal table (Table in Appendix B). A herd of 1,500 steer was fed a special high‐protein grain for a month. A random sample of 29 were weighed and had gained an ...
Link: https://www.cliffsnotes.com/study-guides/statistics/univariate-inferential-tests/one-sample-z-test
Z Test: Formula & Example - Video & Lesson Transcript ...
In statistics, z-tests are calculations that can be used to compare sample and population means to see if there is a significant difference. In this lesson, you will learn about these useful ...
Link: https://study.com/academy/lesson/z-test-formula-example.html
Statistics Formulas - Statistics and Probability
List of common statistics formulas (equations) used in descriptive statistics, inferential statistics, and survey sampling. Includes links to web pages that explain how to use the formulas, including sample problems with solutions.
Link: https://www.stattrek.com/statistics/formulas.aspx
Stats: Testing a Single Mean - Richland Community College
This is true not only for means, but all of the testing we're going to be doing. Population Standard Deviation Known. If the population standard deviation, sigma, is known, then the population mean has a normal distribution, and you will be using the z-score formula for sample means. The test statistic is the standard formula you've seen before.
Link: https://people.richland.edu/james/lecture/m170/ch09-mu.html
T Test Formula with Solved Examples | Statistical ...
T-Test Formula. The t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis. It can be used to determine whether two sets of data are significantly different from each other, and it is most commonly applied when the test statistic would follow a normal distribution if the value ...
Link: https://byjus.com/t-test-formula/
Test statistic - Wikipedia
A test statistic is a statistic (a quantity derived from the sample) used in statistical hypothesis testing. A hypothesis test is typically specified in terms of a test statistic, considered as a numerical summary of a data-set that reduces the data to one value that can be used to perform the hypothesis test.
Link: https://en.wikipedia.org/wiki/Test_statistic
Calculating a z statistic in a test about a proportion ...
Calculating a z statistic in a one-sample z test about a proportion. - [Instructor] The mayor of a town saw an article that claimed the national unemployment rate is 8%.
Link: https://www.khanacademy.org/math/ap-statistics/tests-significance-ap/one-sample-z-test-proportion/v/calculating-a-z-statistic-in-a-significance-test
How to Perform z-test Calculations in Excel - dummies
If you know the variance or standard deviation of the underlying population, you can calculate z-test values in Excel by using the Data Analysis add-in. You might typically work with z-test values to calculate confidence levels and confidence intervals for normally distributed data. To do this, take these steps: To select the z-test tool, click […]
Link: https://www.dummies.com/software/microsoft-office/excel/how-to-perform-z-test-calculations-in-excel/
Z TEST Formula - Guide, Examples, How to Use Z Test Excel
The Z.TEST Function is categorized under Excel Statistical functions. It will calculate the one-tailed P-value (probability value) of a Z-test. As a financial analyst, the Z Test Excel formula is useful for various analyses. For example, we can decide if we should invest in a stock when it provides a specific average daily return. Formula
Link: https://corporatefinanceinstitute.com/resources/excel/functions/z-test-excel-formula/
The P-Value "Formula", Testing Your Hypothesis — Trending ...
Performing the test. The formula for testing a proportion is based on the z statistic. We don't need to use the t distribution in this case, because we don't need a standard deviation to do the test. Here is the formula: z = (p̂ − p₀) / √(p₀(1 − p₀)/n). Unfortunately, the proportion test often yields inaccurate results when the proportion is small.
Link: https://trendingsideways.com/the-p-value-formula-testing-your-hypothesis
Hypothesis Testing Formula | Calculator (Examples with ...
Step 4: Also, find the z score from the z table given the level of significance and mean. Step 5: Compare these two values: if the test statistic is greater than the z score, reject the null hypothesis. In case the test statistic is less than the z score, you cannot reject the null hypothesis. Examples of Hypothesis Testing Formula (With Excel Template)
Link: https://www.educba.com/hypothesis-testing-formula/
Z-Test with Examples - SlideShare
Z-Test with Examples 1. Z-TEST BY GROUP 04 B.Sc. (Hons.) Agriculture 2nd Semester Assignment presented as the partial fulfillment of the requirement of Course STAT-102 College of Agriculture BZU, Bahadur Sub-Campus Layyah 2. DEFINITION Z test is a statistical procedure used to test an alternative hypothesis against a null hypothesis.
Link: https://www.slideshare.net/MuhammadAnas96/ztest-with-examples
Z Test Statistics, Sample Formula - Statistical Test
Z Test Statistics, Sample formula. Statistical Test formulas list online.
Link: https://www.easycalculation.com/formulas/z-test-stats.html
Statistics For Dummies Cheat Sheet - dummies
From Statistics For Dummies, 2nd Edition. By Deborah J. Rumsey . Whether you're studying for an exam or just want to make sense of data around you every day, knowing how and when to use data analysis techniques and formulas of statistics will help.
Link: https://www.dummies.com/education/math/statistics/statistics-for-dummies-cheat-sheet/
Z-statistics vs. T-statistics (video) | Khan Academy
Z-statistics vs. T-statistics. This is the currently selected item. Small sample hypothesis test. Large sample proportion hypothesis testing. Video transcript. I want to use this video to kind of make sure we intuitively and otherwise and understand the difference between a Z-statistic-- something I have trouble saying-- and a T-statistic. So ...
Link: https://www.khanacademy.org/math/statistics-probability/significance-tests-one-sample/more-significance-testing-videos/v/z-statistics-vs-t-statistics
Z-Test (Z0, Ze & H0) Calculator, Formulas & Examples
Z-test calculator, formulas & example work with steps to estimate z-statistic (Z0), critical value of normal distribution (Ze) & test of hypothesis (H0) for large sample mean, proportion & two means or proportions difference in statistical surveys & experiments.
Link: https://getcalc.com/statistics-z-test-statistic-calculator.htm
P Value Formula | Step by Step Examples to Calculate P-Value
Guide to P-Value Formula. Here we discuss steps for calculation of p value, z statistic with practical examples and downloadable excel template.
Link: https://www.wallstreetmojo.com/p-value-formula/
Excel Z.TEST Function
Cells B1 and B2 of the example spreadsheet show the Excel Z.Test function used to calculate the one-tailed probability value of the Z-Test for two different hypothesized sample means. For the hypothesized sample mean 5.0, the one-tailed probability value of the Z-Test is calculated by the formula:
Link: https://www.excelfunctions.net/excel-z-test-function.html
Two-Sample z-test for Comparing Two Means
Hypothesis test. Formula: z = ((x̄₁ − x̄₂) − Δ) / √(σ₁²/n₁ + σ₂²/n₂), where x̄₁ and x̄₂ are the means of the two samples, Δ is the hypothesized difference between the population means (0 if testing for equal means), σ₁ and σ₂ are the standard deviations of the two populations, and n₁ and n₂ are the sizes of the two samples. The amount of a certain trace element in blood is known to vary with a standard deviation of 14.1 ppm (parts per ...
Link: https://www.cliffsnotes.com/study-guides/statistics/univariate-inferential-tests/two-sample-z-test-for-comparing-two-means
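A corresponding Python sketch for the two-sample case; the trace-element numbers here are invented for illustration, since the snippet above is truncated:

import math

def two_sample_z(mean1, mean2, delta, sigma1, sigma2, n1, n2):
    # z = ((x̄₁ − x̄₂) − Δ) / √(σ₁²/n₁ + σ₂²/n₂)
    se = math.sqrt(sigma1**2 / n1 + sigma2**2 / n2)
    return ((mean1 - mean2) - delta) / se

# illustrative: two groups measured with known sd 14.1 ppm each
print(two_sample_z(mean1=41.0, mean2=36.0, delta=0.0,
                   sigma1=14.1, sigma2=14.1, n1=75, n2=75))  # ≈ 2.17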
Z-Test Definition - Investopedia
Z-Test: A z-test is a statistical test used to determine whether two population means are different when the variances are known and the sample size is large. The test statistic is assumed to have ...
Link: https://www.investopedia.com/terms/z/z-test.asp
Z (Normal Distribution) Tests - StatsDirect
Z (Normal Distribution) Tests ... then you should consider using a nonparametric alternative such as the Wilcoxon signed ranks test or the Mann-Whitney U test. The single sample test statistic is calculated as z = (x̄ − µ) / √(s²/n), where x bar is the sample mean, s² is the sample variance, n is the sample size, µ is the specified population mean and z is a ...
Link: https://www.statsdirect.com/help/parametric_methods/z_normal.htm
Test Statistic? | Yahoo Answers
1) The test statistic is z. The general formula for finding z is: z = ( p̂ - p ) / ( √ [pq/n] ), where p̂ is the sample proportion, p is the population proportion, and n is the sample size. Since the problem you posted did not include any population proportion, it is assumed that p = 0.5 and q = 0.5.
Link: https://answers.yahoo.com/question/index?qid=20141203140140AAvb1YL
Step 2: Finally, the z-test statistic is computed by deducting the population mean from the variable and then dividing the result by the population standard deviation, as shown below. Z = (x – μ) / σ. The formula for z-test statistics for a sample is derived by using the following steps: Step 1: Firstly, calculate the sample mean and sample standard deviation the same as above.
Z Test Statistics Formula – Example #1. Suppose a person wants to check or test if tea and coffee both are equally popular in the city. In that case, he can use a z test statistics method to obtain the results by taking a sample size say 500 from the city out of which suppose 280 are tea drinkers.
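A Python sketch of the one-proportion version, applied to the tea/coffee example above (280 tea drinkers out of 500, tested against the 50/50 null); the formula is the standard one quoted elsewhere on this page:

import math

def one_proportion_z(p_hat, p0, n):
    # z = (p̂ − p) / √(pq/n), with q = 1 − p
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

print(one_proportion_z(p_hat=280/500, p0=0.5, n=500))  # ≈ 2.683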
T-Test Formula The t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis. It can be used to determine if two sets of data are significantly different from each other, and is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known.
Z Test Formula in Statistics | Step by Step Calculation (Examples)
Z Test Formula in statistics refers to the hypothesis test used to determine whether two calculated sample means are different when the standard deviations are available and the sample size is large. The formula for z-test statistics for a population is derived by using the following steps
Statistics - One Proportion Z Test - Tutorialspoint | Formula
Test statistic is defined and given by the following function: z = (p̂ − p₀) / √(p₀(1 − p₀)/n). A survey claims that 9 out of 10 doctors recommend aspirin for their patients with headaches. To test this claim, a random sample of 100 doctors is obtained.
Calculating a z statistic in a test about a proportion... | Khan Academy
The name test statistic seems a little confusing. Sometimes, you will see a formula that looks something like this that you say, hey look, you have your sample proportion, you find the difference between that and the assumed proportion in the null hypothesis, that's what this little zero says, that...
Z Test Statistics Formula | Calculator (Examples With Excel Template)
Z Test statistics is a statistical procedure used to test an alternative hypothesis against the null hypothesis. It is any statistical hypothesis used to determine Z Test Statistics Formula - Example #1. Suppose a person wants to check or test if tea and coffee both are equally popular in the city.
A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution.
Z-Test Definition
A z-test is a statistical test used to determine whether two population means are different when the variances are known and the sample size is large. Z-tests are closely related to t-tests, but t-tests are best performed when an experiment has a small sample size. Also, t-tests assume the standard...
In statistics & probability, the Z-statistic is an inferential statistics function used to analyze the variance of large samples to estimate the unknown value of population parameters. The t-test formula is for a test of hypothesis for the difference between two sample means.
Statistical Tests — When to use Which ? - Towards Data Science
This test-statistic is then compared with a critical value and if it is found to be greater than the critical value the hypothesis is rejected. Before we move forward with different statistical tests it is imperative to understand the difference between a sample and a population.
Link: https://towardsdatascience.com/statistical-tests-when-to-use-which-704557554740
Statistics Formulas | Hypothesis Testing
List of common statistics formulas (equations) used in descriptive statistics, inferential statistics, and survey sampling. This web page lists statistics formulas used in the Stat Trek tutorials. Each formula links to a web page that explains how to use the formula.
Link: https://stattrek.com/statistics/formulas.aspx
Z Test: Definition & Two Proportion Z-Test - Statistics How To
Set this number aside for a moment. Step 3: Insert the numbers from Step 1 and Step 2 into the test statistic formula.
Link: https://www.statisticshowto.com/z-test/
Z Score Calculator for 2 Population Proportions
Which Statistics Test? The z score test for two population proportions is used when you want to know whether two populations or groups (e.g., males and females; theists and atheists) differ significantly on some single (categorical) characteristic - for example, whether they are vegetarians.
Link: https://www.socscistatistics.com/tests/ztest/
How to Find a P-Value from a Z-Test Statistic Tutorial | Sophia Learning
First, select "Formulas", choose the "Statistical" option, and pick "NORM.DIST". Because this is qualitative data, meaning the students answer yes or no to suffering from test anxiety, this is a population proportion and we can use the following formula to calculate the z-test statistic
Link: https://www.sophia.org/how-to-find-a-p-value-from-a-z-test-statistic-top
One sample Z-tests - CRAN
Calculating p-values. In R this looks like:
n <- length(x)
# calculate the z-statistic
z_stat <- (mean(x) - 3) / (2 / sqrt(n))
z_stat
#> [1] 2.371708
To calculate a ...
Link: https://cran.r-project.org/web/packages/distributions3/vignettes/one-sample-z-test.html
Test Statistic Calculator
The Test Statistic for One Population Mean Calculator is a calculator that is used when the variable is numerical and only one population or group is being studied. So going back to this example, we use the formula for the test statistic for one population mean, which is, Z= (x - μ0)/(σ/√n).
t test formula - Easy Guides - Wiki - STHDA
Once the t-test statistic value is determined, you have to read in the t-test table the critical value of Student's t distribution corresponding to the significance level alpha of your test. Paired t-test formula: to compare the means of the two paired sets of data, the differences between all pairs must be, first, calculated.
Link: http://www.sthda.com/english/wiki/t-test-formula
The P-Value "Formula", Testing Your Hypothesis — Trending Sideways
Again, the formula for the test is based on the z statistic, but it takes on a different form since it involves two samples: The null hypothesis for this test is that both sample proportions come from the same population proportion. If you understand the previous tests, it should be fairly straightforward...
Z-test for Two Proportions Calculator - MathCracker.com
This calculator conducts a Z-test for two population proportions p1 and p2. Select the null and alternative hypotheses, significance level, the sample sizes, and the number of favorable cases (or the sample proportions). What is the z-test formula in this case? The formula for a z-statistic for two population proportions is.
Link: https://mathcracker.com/z-test-for-two-proportions
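A Python sketch of the pooled two-proportion z statistic; this is the standard pooled form, and whether MathCracker's calculator pools the proportions in exactly this way is an assumption here, as the snippet cuts off before the formula:

import math

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 = p2
    se = math.sqrt(p_pool * (1 - p_pool) * (1/n1 + 1/n2))
    return (p1 - p2) / se

print(two_proportion_z(x1=45, n1=100, x2=60, n2=100))  # illustrative counts, ≈ -2.12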
Obtain z test statistic for testing Spearman rho with z-score formula
I know the approach using the Student t-test to derive a p-value from the Spearman rho coefficient. I also saw a permutation test for this operation, but I am somehow confused and prefer to stick to the z-score formula. Link to related unanswered question: Spearman rho statistical significance value (z).
Link: https://stats.stackexchange.com/q/261457
Formula of Z Test Statistic for proportion - Medium
The test statistic gives us an idea of how far away our sample result is from our null hypothesis. Understanding the formula: the statistic-minus-parameter difference gives the DISTANCE from the sample proportion to the population proportion.
Link: https://medium.com/statistical-guess/z-test-z-statistics-e2dd1782656d
What is the formula that calculates p value from the Z test statistic?
For example, if you have a z test statistic of 1.96, you will find that there is 0.975 area under the curve up to 1.96, so your p-value is 1 − 0.975 = 0.025. For a two-tailed test your p-value is twice the p-value of the one-tailed test; for this example it would be 0.05.
Link: https://answers.yahoo.com/question/index?qid=20100427121611AA6xpei
Statistics/Testing Data/t-tests - Wikibooks, open books for an open...
The t-test is the most powerful parametric test for calculating the significance of a small sample mean. A one sample t-test has the following null hypothesis: H₀: µ = c, where the Greek letter µ (mu) represents the population mean and c represents its assumed (hypothesized) value.
Link: https://en.wikibooks.org/wiki/Statistics/Testing_Data/t-tests
One Proportion Z Test Statistics Calculator | Formula
One Proportion Z Test is a hypothesis test that compares a group to a specified population proportion. The null and alternative hypotheses are mutually exclusive: if one is true, the other must be false, and vice versa. Use this One Proportion Z Test statistics calculator to find the value of the Z test statistic by entering observed...
Link: https://www.easycalculation.com/statistics/test-for-one-proportion.php
Hypothesis testing, Test Statistic (z, p, t, F) | Statistical Hypothesis...
Here the test statistic falls within the critical region. C.K. Taylor: Suppose that a doctor claims that 17 year olds have an average body temperature that is ... A table of z-scores will be necessary. The test statistic is found by the formula for the mean of a sample, rather than the standard deviation we use...
Link: https://www.scribd.com/doc/134856024/Hypothesis-testing-Test-Statistic-z-p-t-F
Z Score Table - Z Table and Z score calculation
George was among the test takers and he got 700 points (X) out of 1000. The average score was 600 (µ) and the standard deviation was 150 (σ). Now we need to standardize his score (i.e. calculate a z-score corresponding to his actual test score) and use a z-table to determine how well he did on the...
Link: http://www.z-table.com/
Finding the P-value given the Test Statistic (for Z-tests) - YouTube
Hypothesis Testing Problems Z Test & T Statistics One & Two Tailed Tests 2 - Duration: 13:34. The Organic Chemistry Tutor, 80,590 views. Calculate the P-Value in Statistics - Formula to Find the P-Value in Hypothesis Testing - Duration: 22:42. Math and Science...
Link: https://www.youtube.com/watch?v=Xs97g6gOPLc
Comparing means: z and t tests | 1.2 Test statistics
1.2 Test statistics. A test statistic is a numerical summary of the data that is compared to what would be expected under the null hypothesis. Test statistics can take on many forms such as the z-tests (usually used for large datasets) or t-tests (usually used when datasets are small).
Link: https://mgimond.github.io/Stats-in-R/z_t_tests.html
8.2.3.3 - One Sample Mean z Test (Optional) | STAT 200
The formula for computing a \(z\) test statistic ... MinitabExpress - Performing a One Sample Mean z Test. Research question: Are the IQ scores of students at one college-prep school above the national average? Open Minitab Express without data. On a PC: Select STATISTICS > One Sample > z On a Mac: Select...
Link: https://online.stat.psu.edu/stat200/lesson/8/8.2/8.2.3/8.2.3.3
One-Sample z-test | Statistics
Hypothesis test. Formula: z = (x̄ − Δ) / (σ/√n), where x̄ is the sample mean, Δ is a specified value to be tested, σ is the population standard deviation, and n is the size of the sample. The null hypothesis of no difference will be rejected if the computed z statistic falls outside the range of -1.96 to 1.96.
This research is in contrast to the other substances I like, such as piracetam or fish oil. I knew about withdrawal of course, but it was not so bad when I was drinking only tea. And the side-effects like jitteriness are worse on caffeine without tea; I chalk this up to the lack of theanine. (My later experiences with theanine seem to confirm this.) These negative effects mean that caffeine doesn't satisfy the strictest definition of nootropic (having no negative effects), but is merely a cognitive enhancer (with both benefits & costs). One might wonder why I use caffeine anyway if I am so concerned with mental ability.
Amphetamine – systematic reviews and meta-analyses report that low-dose amphetamine improved cognitive functions (e.g., inhibitory control, episodic memory, working memory, and aspects of attention) in healthy people and in individuals with ADHD.[21][22][23][25] A 2014 systematic review noted that low doses of amphetamine also improved memory consolidation, in turn leading to improved recall of information in non-ADHD youth.[23] It also improves task saliency (motivation to perform a task) and performance on tedious tasks that required a high degree of effort.[22][24][25]
This calculation - reaping only 7/9 of the naive expectation - gives one pause. How serious is the sleep rebound? In another article, I point to a mouse study finding that sleep deficits can take 28 days to repay. What if the gain from modafinil is entirely wiped out by repayment and all it did was defer sleep? Would that render modafinil a waste of money? Perhaps. Thinking on it, I believe deferring sleep is of some value, but I cannot decide whether it is a net profit.
Along with the previous bit of globalization, an important factor is that shipping is ridiculously cheap. The most expensive S&H in my modafinil price table is ~$15 (and most are international). To put this in perspective, I remember in the 90s you could easily pay $15 for domestic S&H when you ordered online - but it's 2013, and the dollar has lost at least half its value, so in real terms, ordering from abroad may be like a quarter of what it used to cost, which makes a big difference to people dipping their toes in and contemplating a small order to try out this 'nootropics' thing they've heard about.
There is no official data on their usage, but nootropics as well as other smart drugs appear popular in the Silicon Valley. "I would say that most tech companies will have at least one person on something," says Noehr. It is a hotbed of interest because it is a mentally competitive environment, says Jesse Lawler, an LA-based software developer and nootropics enthusiast who produces the podcast Smart Drug Smarts. "They really see this as translating into dollars." But Silicon Valley types also do care about safely enhancing their most prized asset – their brains – which can give nootropics an added appeal, he says.
Accordingly, we searched the literature for studies in which MPH or d-AMP was administered orally to nonelderly adults in a placebo-controlled design. Some of the studies compared the effects of multiple drugs, in which case we report only the results of stimulant–placebo comparisons; some of the studies compared the effects of stimulants on a patient group and on normal control subjects, in which case we report only the results for control subjects. The studies varied in many other ways, including the types of tasks used, the specific drug used, the way in which dosage was determined (fixed dose or weight-dependent dose), sample size, and subject characteristics (e.g., age, college sample or not, gender). Our approach to the classic splitting versus lumping dilemma has been to take a moderate lumping approach. We group studies according to the general type of cognitive process studied and, within that grouping, the type of task. The drug and dose are reported, as well as sample characteristics, but in the absence of pronounced effects of these factors, we do not attempt to make generalizations about them.
The infinite promise of stacking is why, whatever weight you attribute to the evidence of their efficacy, nootropics will never go away: With millions of potential iterations of brain-enhancing regimens out there, there is always the tantalizing possibility that seekers haven't found the elusive optimal combination of pills and powders for them—yet. Each "failure" is but another step in the process-of-elimination journey to biological self-actualization, which may be just a few hundred dollars and a few more weeks of amateur alchemy away.
Like caffeine, nicotine tolerates rapidly and addiction can develop, after which the apparent performance boosts may only represent a return to baseline after withdrawal; so nicotine as a stimulant should be used judiciously, perhaps roughly as frequent as modafinil. Another problem is that nicotine has a half-life of merely 1-2 hours, making regular dosing a requirement. There is also some elevated heart-rate/blood-pressure often associated with nicotine, which may be a concern. (Possible alternatives to nicotine include cytisine, 2'-methylnicotine, GTS-21, galantamine, Varenicline, WAY-317,538, EVP-6124, and Wellbutrin, but none have emerged as clearly superior.)
"My husband and I (Ryan Cedermark) are so impressed with the research Cavin did when writing this book. If you, a family member or friend has suffered a TBI, concussion or are just looking to be nicer to your brain, then we highly recommend this book! Your brain is only as good as the body's internal environment and Cavin has done an amazing job on providing the information needed to obtain such!"
For 2 weeks, upon awakening I took close-up photographs of my right eye. Then I ordered two jars of Life-Extension Sea-Iodine (60x1mg) (1mg being an apparently safe dose), and when it arrived on 10 September 2012, I stopped the photography and began taking 1 iodine pill every other day. I noticed no ill effects (or benefits) after a few weeks and upped the dose to 1 pill daily. After the first jar of 60 pills was used up, I switched to the second jar, and began photography as before for 2 weeks. The photographs were uploaded, cropped by hand in Gimp, and shrunk to more reasonable dimensions; both sets are available in a Zip file.
The majority of studies seem to be done on types of people who are NOT buying nootropics. Like the elderly, people with blatant cognitive deficits, etc. This is analogous to some of the muscle-building research but more extreme. Like there are studies on some compound increasing muscle growth in elderly patients or patients with wasting, and supplement companies use some of those studies to back their supplements.
Brain-imaging studies are consistent with the existence of small effects that are not reliably captured by the behavioral paradigms of the literature reviewed here. Typically with executive function tasks, reduced activation of task-relevant areas is associated with better performance and is interpreted as an indication of higher neural efficiency (e.g., Haier, Siegel, Tang, Abel, & Buchsbaum, 1992). Several imaging studies showed effects of stimulants on task-related activation while failing to find effects on cognitive performance. Although changes in brain activation do not necessarily imply functional cognitive changes, they are certainly suggestive and may well be more sensitive than behavioral measures. Evidence of this comes from a study of COMT variation and executive function. Egan and colleagues (2001) found a genetic effect on executive function in an fMRI study with sample sizes as small as 11 but did not find behavioral effects in these samples. The genetic effect on behavior was demonstrated in a separate study with over a hundred participants. In sum, d-AMP and MPH measurably affect the activation of task-relevant brain regions when participants' task performance does not differ. This is consistent with the hypothesis (although by no means positive proof) that stimulants exert a true cognitive-enhancing effect that is simply too small to be detected in many studies.
Sarter is downbeat, however, about the likelihood of the pharmaceutical industry actually turning candidate smart drugs into products. Its interest in cognitive enhancers is shrinking, he says, "because these drugs are not working for the big indications, which is the market that drives these developments. Even adult ADHD has not been considered a sufficiently attractive large market."
I can test fish oil for mood, since the other claimed benefits like anti-schizophrenia are too hard to test. The medical student trial (Kiecolt-Glaser et al 2011) did not see changes until visit 3, after 3 weeks of supplementation. (Visit 1, 3 weeks, visit 2, supplementation started for 3 weeks, visit 3, supplementation continued 3 weeks, visit 4 etc.) There were no tests in between the test starting week 1 and starting week 3, so I can't pin it down any further. This suggests randomizing in 2 or 3 week blocks. (For an explanation of blocking, see the footnote in the Zeo page.)
The stimulant now most popular in news articles as a legitimate "smart drug" is Modafinil, which came to market as an anti-narcolepsy drug, but gained a following within the military, doctors on long shifts, and college students pulling all-nighters who needed a drug to improve alertness without the "wired" feeling associated with caffeine. Modafinil is a relatively new smart drug, having gained widespread use only in the past 15 years. More research is needed before scientists understand this drug's function within the brain – but the increase in alertness it provides is uncontested.
Noopept is a Russian stimulant sometimes suggested for nootropics use as it may be more effective than piracetam or other -racetams, and its smaller doses make it more convenient & possibly safer. Following up on a pilot study, I ran a well-powered blind randomized self-experiment between September 2013 and August 2014 using doses of 12-60mg Noopept & pairs of 3-day blocks to investigate the impact of Noopept on self-ratings of daily functioning in addition to my existing supplementation regimen involving small-to-moderate doses of piracetam. A linear regression, which included other concurrent experiments as covariates & used multiple imputation for missing data, indicates a small benefit to the lower dose levels and harm from the highest 60mg dose level, but no dose nor Noopept as a whole was statistically-significant. It seems Noopept's effects are too subtle to easily notice if they exist, but if one uses it, one should probably avoid 60mg+.
A randomized non-blind self-experiment of LLLT 2014-2015 yields a causal effect which is several times smaller than a correlative analysis and non-statistically-significant/very weak Bayesian evidence for a positive effect. This suggests that the earlier result had been driven primarily by reverse causation, and that my LLLT usage has little or no benefits.
"We stumbled upon fasting as a way to optimize cognition and make yourself into a more efficient human being," says Manuel Lam, an internal medicine physician who advises Nootrobox on clinical issues. He and members of the company's executive team have implanted glucose monitors in their arms — not because they fear diabetes but because they wish to track the real-time effect of the foods they eat.
Pharmaceutical, substance used in the diagnosis, treatment, or prevention of disease and for restoring, correcting, or modifying organic functions. (See also pharmaceutical industry.) Records of medicinal plants and minerals date to ancient Chinese, Hindu, and Mediterranean civilizations. Ancient Greek physicians such as Galen used a variety of drugs in their profession.…
Either prescription or illegal, daily use of testosterone would not be cheap. On the other hand, if I am one of the people for whom testosterone works very well, it would be even more valuable than modafinil, in which case it is well worth even arduous experimenting. Since I am on the fence on whether it would help, this suggests the value of information is high.
In the largest nationwide study, McCabe et al. (2005) sampled 10,904 students at 119 public and private colleges and universities across the United States, providing the best estimate of prevalence among American college students in 2001, when the data were collected. This survey found 6.9% lifetime, 4.1% past-year, and 2.1% past-month nonmedical use of a prescription stimulant. It also found that prevalence depended strongly on student and school characteristics, consistent with the variability noted among the results of single-school studies. The strongest predictors of past-year nonmedical stimulant use by college students were admissions criteria (competitive and most competitive more likely than less competitive), fraternity/sorority membership (members more likely than nonmembers), and gender (males more likely than females).
Because executive functions tend to work in concert with one another, these three categories are somewhat overlapping. For example, tasks that require working memory also require a degree of cognitive control to prevent current stimuli from interfering with the contents of working memory, and tasks that require planning, fluency, and reasoning require working memory to hold the task goals in mind. The assignment of studies to sections was based on best fit, according to the aspects of executive function most heavily taxed by the task, rather than exclusive category membership. Within each section, studies are further grouped according to the type of task and specific type of learning, working memory, cognitive control, or other executive function being assessed.
Prescription smart pills are common psychostimulants that can be purchased and used after receiving a prescription. They are most commonly given to patients diagnosed with ADD or ADHD, as well as narcolepsy. However many healthy people use them as cognitive enhancers due to their proven ability to improve focus, attention, and support the overall process of learning.
Clearly, the hype surrounding drugs like modafinil and methylphenidate is unfounded. These drugs are beneficial in treating cognitive dysfunction in patients with Alzheimer's, ADHD or schizophrenia, but it's unlikely that today's enhancers offer significant cognitive benefits to healthy users. In fact, taking a smart pill is probably no more effective than exercising or getting a good night's sleep.
There are also premade 'stacks' (or formulas) of cognitive enhancing superfoods, herbals or proteins, which pre-package several beneficial extracts for a greater impact. These types of cognitive enhancers are more 'subtle' than the pharmaceutical alternative with regards to effects, but they work all the same. In fact, for many people, they work better than smart drugs as they are gentler on the brain and produce fewer side-effects.
"They're not regulated by the FDA like other drugs, so safety testing isn't required," Kerl says. What's more, you can't always be sure that what's on the ingredient label is actually in the product. Keep in mind, too, that those that contain water-soluble vitamins like B and C, she adds, aren't going to help you if you're already getting enough of those vitamins through diet. "If your body is getting more than you need, you're just going to pee out the excess," she says. "You're paying a lot of money for these supplements; maybe just have orange juice."
Since the discovery of the effect of nootropics on memory and focus, the number of products on the market has increased exponentially. The ingredients used in a supplement can tell you about the effectiveness of the product. Brain enhancement pills that produce the greatest benefit are formulated with natural vitamins and substances, rather than caffeine and synthetic ingredients. In addition to better results, natural supplements are less likely to produce side effects, compared with drugs formulated with chemical ingredients.
The advantage of adrafinil is that it is legal & over-the-counter in the USA, so one removes the small legal risk of ordering & possessing modafinil without a prescription, and the retailers may be more reliable because they are not operating in a niche of dubious legality. Based on comments from others, the liver problem may have been overblown, and modafinil vendors post-2012 seem to have become more unstable, so I may give adrafinil (from another source than Antiaging Central) a shot when my modafinil/armodafinil run out.
(People aged <=18 shouldn't be using any of this except harmless stuff - where one may have nutritional deficits - like fish oil & vitamin D; melatonin may be especially useful, thanks to the effects of screwed-up school schedules & electronics use on teenagers' sleep. Changes in effects with age are real - amphetamines' stimulant effects and modafinil's histamine-like side-effects come to mind as examples.)
"The author's story alone is a remarkable account of not just survival, but transcendence of a near-death experience. Cavin went on to become an advocate for survival and survivors of traumatic brain injuries, discovering along the way the key role played by nutrition. But this book is not just for injury survivors. It is for anyone who wants to live (and eat) well."
1 PM; overall this was a pretty productive day, but I can't say it was very productive. I would almost say even odds, but for some reason I feel a little more inclined towards modafinil. Say 55%. That night's sleep was vile: the Zeo says it took me 40 minutes to fall asleep, I only slept 7:37 total, and I woke up 7 times. I'm comfortable taking this as evidence of modafinil (half-life 10 hours, 1 PM to midnight is only 1 full halving), bumping my prediction to 75%. I check, and sure enough - modafinil.
And yet aside from anecdotal evidence, we know very little about the use of these drugs in professional settings. The Financial Times has claimed that they are "becoming popular among city lawyers, bankers, and other professionals keen to gain a competitive advantage over colleagues." Back in 2008 the narcolepsy medication Modafinil was labeled the "entrepreneur's drug of choice" by TechCrunch. That same year, the magazine Nature asked its readers whether they use cognitive-enhancing drugs; of the 1,400 respondents, one in five responded in the affirmative.
When Giurgea coined the word nootropic (combining the Greek words for mind and bending) in the 1970s, he was focused on a drug he had synthesized called piracetam. Although it is approved in many countries, it isn't categorized as a prescription drug in the United States. That means it can be purchased online, along with a number of newer formulations in the same drug family (including aniracetam, phenylpiracetam, and oxiracetam). Some studies have shown beneficial effects, including one in the 1990s that indicated possible improvement in the hippocampal membranes in Alzheimer's patients. But long-term studies haven't yet borne out the hype.
Natural nootropic supplements derive from various nutritional studies. Research shows the health benefits of isolated vitamins, nutrients, and herbs. By increasing your intake of certain herbal substances, you can enhance brain function. Below is a list of the top categories of natural and herbal nootropics. These supplements are mainstays in many of today's best smart pills.
Eugeroics (armodafinil and modafinil) – are classified as "wakefulness promoting" agents; modafinil increased alertness, particularly in sleep deprived individuals, and was noted to facilitate reasoning and problem solving in non-ADHD youth.[23] In a systematic review of small, preliminary studies where the effects of modafinil were examined, when simple psychometric assessments were considered, modafinil intake appeared to enhance executive function.[27] Modafinil does not produce improvements in mood or motivation in sleep deprived or non-sleep deprived individuals.[28]
The Secret of the Piano 2
Damping of two out of three strings when tuning a piano tone.
We return to the model of The Secret of String Instruments 1 with $N$ strings connected to a common soundboard by a common bridge: For $n=1,..,N,$ and $t\gt 0$
$\ddot u_n + f_n^2u_n=B(U-u_n)$ (1)
$\ddot U + F^2U+D\dot U=B(u - U)$ (2)
where $u_n=u_n(t)$ is the displacement of string $n$ of eigen-frequency $f_n$ at time $t$ and the dot represents time differentiation, $U$ is the displacement of the soundboard with eigen-frequency $F$ and small damping coefficient $D$ representing outgoing sound, $u=\frac{1}{N}\sum_nu_n$ and the right hand side represents the connection between strings and soundboard through the bridge as a spring with spring constant $B$ (with $B\le F$ say). We consider a case of near-resonance with $f_n\approx F$ for $n=1,...,N$ (with a difference of about 1 Hz in a basic case with $F=440$ Hz say).
We are interested in the phase shift between $u_n$ and $U$ in the two basic cases: (i) zero phase shift with strings and soundboard moving together in "unison" mode and (ii) half period phase shift with strings and soundboard moving in opposition in "breathing" mode.
We have, by summing over $n$ in (1) and concentrating on the interaction between strings and soundboard, thus omitting the damping from outgoing sound by setting $D=0$:
$\ddot u + F^2u+\frac{1}{N}\sum_n(f_n^2-F^2)u_n=B(U-u)$ (3)
Introducing $\phi = U+u$ and $\psi =U-u$ representing the two basic modes, we have by summing and subtracting (2) and (3):
$\ddot \phi + F^2\phi = -\frac{1}{N}\sum_n(f_n^2-F^2)u_n\approx 0$
$\ddot \psi + (F^2+2B)\psi =\frac{1}{N}\sum_n(f_n^2-F^2)u_n\approx 0$
with $\phi\approx 2U$ and $\psi\approx 0$ in case (i), and $\phi\approx 0$ and $\psi\approx 2U$ in case (ii), and $F^2+2B\approx F^2$.
The difference between the two cases comes out in (1): in case (i) the average of $B(U-u_n)$ is small, while in case (ii) the average of $B(U-u_n)$ is $\approx 2BU$. The right hand side $B(U-u_n)$ in (1) therefore acts to keep the different $u_n$ in phase in case (ii), but does not exercise this stabilising effect in case (i), nor in the case of only one string.
The result is that the "breathing" mode of case (ii) can sustain a long aftersound with a sustained energy transfer from strings to soundboard until the strings and soundboard come to rest together.
On the other hand, in case (i) the strings will without the stabilising effect quickly go out of phase with the result that the energy transfer to the soundboard ceases and the outgoing sound dies while the strings are still oscillating, thus giving short aftersound.
You can follow these scenarios in case (i) here and in case (ii) here. We see 10 oscillating strings in blue and a common soundboard in red with strings in yellow, and staples showing string energy in blue and soundboard energy in red. We see strings and soundboard fading together in case (ii) with long aftersound, and soundboard fading before the strings in case (i) with short aftersound.
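For readers who want to reproduce the qualitative behaviour, here is a minimal numerical sketch of (1)-(2) in Python (NumPy/SciPy); all parameter values are illustrative choices, not the ones behind the linked animations:

import numpy as np
from scipy.integrate import solve_ivp

N = 10
F = 2 * np.pi * 440.0                          # soundboard eigenfrequency (rad/s)
f = F + 2 * np.pi * np.linspace(-1.0, 1.0, N)  # near-resonant strings, ~1 Hz spread
B = 0.05 * F**2                                # bridge coupling (illustrative)
D = 0.01 * F                                   # radiation damping (illustrative)

def rhs(t, y):
    u, du, U, dU = y[:N], y[N:2*N], y[2*N], y[2*N+1]
    ddu = -f**2 * u + B * (U - u)                  # equation (1)
    ddU = -F**2 * U - D * dU + B * (u.mean() - U)  # equation (2)
    return np.concatenate([du, ddu, [dU], [ddU]])

# case (ii): strings initialised in "breathing" opposition to the soundboard
y0 = np.concatenate([np.ones(N), np.zeros(N), [-0.1], [0.0]])
sol = solve_ivp(rhs, (0.0, 0.5), y0, max_step=1e-5)

# string and soundboard energies over time, to compare the two cases
E_strings = 0.5 * (sol.y[N:2*N]**2 + f[:, None]**2 * sol.y[:N]**2).sum(axis=0)
E_board = 0.5 * (sol.y[2*N+1]**2 + F**2 * sol.y[2*N]**2)

Starting instead from in-phase initial data (case (i), e.g. U(0) = +0.1) lets one compare how quickly the soundboard energy dies out in the two cases.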
We saw in The Secret of the Piano 1 that the hammer initialises case (ii) and we have thus now uncovered the reason that there are 2-3 strings nearly equally tuned for most tones/keys of the piano: long aftersound with strings and soundboard fading slowly together.
More precisely, initialising the soundboard from rest by force interaction through the bridge with already initialised string oscillation, will in start-up have the soundboard lagging one quarter of period after the strings with corresponding quick energy transfer, and the phase shift will then tend to increase because the soundboard is dragged by the damping until the "breathing" mode with half period phase shift of case (ii) is reached with slower energy transfer and long aftersound. The "unison" mode with a full period (zero) phase shift will thus not be reached.
The analysis of the interaction string-soundboard may have relevance also for radiative interaction between different bodies as exposed on Computational Blackbody Radiation by suggesting an answer to the following question which has long puzzled me:
What coordinates the atomic oscillations underlying the radiation of a radiating body?
"What coordinates the atomic oscillations underlying the radiation of a radiating body?"
Why coordination? Thermal radiation isn't especially coordinated.
Claes Johnson 21 december 2015 09:05
I think some coordination is needed to give an output: with all phases present to the same degree, the net is zero, right?
No, the net is not zero. I cannot do the full derivation here, of course, but from the standard treatment of superposing N waves the resulting intensity is

$E_0^2 = \sum_{i=1}^N E_{0i}^2 + 2 \sum_{j>i}\sum_{i=1}^N E_{0i}E_{0j} \cos(\alpha_j - \alpha_i)$,

where $\alpha_i$ is the phase. When the phases are uncorrelated, that is, present to the same degree, the double sum vanishes since there are as many positive as negative contributions and we are left with

$E_0^2 = \sum_{i=1}^N E_{0i}^2$.

The intensities of the individual waves add up to the resulting intensity $E_0^2$.
So I don't see how there can be a puzzle.
Well, I guess it comes down to what "uncorrelated" means. It is clear that full cancellation is possible, and one thing is then to explain why this case does not appear, or does indeed appear. I will return to this...
With uniformly distributed phases and equal amplitudes the energy of the superposition/sum of waves is much much smaller than the sum of the energy of the individual waves, as a result of massive cancellation. The question I seek to answer is if there is a mechanism coordinating phases so that this case does not arise in reality.
"With uniformly distributed phases and equal amplitudes the energy of the superposition/sum of waves is much much smaller than the sum of the energy of the individual waves, as a result of massive cancellation."
But if there are uniformly distributed phases and equal amplitudes the total energy is just proportional to $N E_{01}^2$ if there are N added waves with equal amplitudes $E_{01}$. The energy just adds the contributing parts.
The cosine will take all values between -1 and 1. Since the amplitudes are the same the double sum vanishes and the result is just the addition of all amplitudes squared.
Do you disagree?
See first random set of slides from a google search showing the expressions.
http://www.erbion.com/index_files/Modern_Optics/Ch5.pdf
Especially slides 4 to 8.
The difference between in-phase and random-phase is a factor N which can be very large. Maybe this says that coherent laser light is much more energetic than incoherent light, rather than saying that incoherent light does not transfer any energy. But then there is a whole range of light with different coherence and energy transfer capability. Right?
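A quick numerical check of that factor-N difference (a Python/NumPy sketch; N, the unit amplitudes, and the trial count are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
N, trials = 1000, 200
phases = rng.uniform(0, 2 * np.pi, size=(trials, N))
# intensity of N superposed unit-amplitude waves with random phases
E_random = np.abs(np.exp(1j * phases).sum(axis=1))**2
print(E_random.mean())                            # ≈ N: intensities add
print(np.abs(np.exp(1j * np.zeros(N)).sum())**2)  # = N²: amplitudes add (coherent)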
You wrote
"I think some coordination is needed to give an output: with all phases present to the same degree, the net is zero, right?"
So you seem to have thought that random phases evens out and don't transfer any energy. That was the thing I objected to.
But you seems to agree now. There is nothing strange with the fact that a solid with thermal energy radiates.
I understand what you are saying. But if you think that you understand the interaction matter-light as something trivial without anything strange whatsoever, then you may be missing something essential, and that is what I am searching to understand.
No I agree.
Light-matter interactions in general are very complicated under some settings. That's the reason why there are so many active fields doing research on the matter. But in the case of "ordinary" transfer where the statistical limits hold I don't think it's so interesting.
There are many, many interesting and cool things to investigate. Take near-field radiation transfer for instance, where one can get a blackbody to transfer many orders of magnitude above the Planck limit that you get from Planck's law. How cool isn't that?!
ProfEngPeter 31 december 2015 03:47
Hi Claes, I should have wished you a happy Christmas, but it's late, so I wish you a happy and healthy New Year.
Something interesting for you. I was pointed to an article on the Catt question in Progress in Physics by Stephen Crothers; see here http://ptep-online.com/index_files/issues.html (the article by Weng - a classical model of the photon - is also interesting). I wondered about the Catt question and found Ivor Catt's articles on his site here http://www.ivorcatt.co.uk/x111.htm . His questioning of basic electrical theory made me think of you, with your questioning of fluid dynamic theory (and of course the AGW heat hypothesis). Something about Ivor Catt can be found here on Wiki https://en.wikipedia.org/wiki/Ivor_Catt.
Interesting! Maybe there is a coupling with my idea that "transmission" of light or EM is not performed by busy little photons traveling from source to goal, but rather is a resonance phenomenon carried by standing EM waves capable of two-way propagation of disturbances but with the effect of one-way transfer of heat from hot to cold, for example.
Why does the breaking of a 'continuous' symmetry lead to gapless excitations but not that of a discrete symmetry?
Goldstone theorem, especially in the context of condensed matter physics, can be stated as:
Whenever there is spontaneous breakdown of a continuous global symmetry, the spectrum of the theory contains gapless excitations. However, this is not true for breakdown of discrete symmetries.
For example, the spin wave of the Heisenberg ferromagnet is a classic example of a gapless excitation resulting from the spontaneous breakdown of the spin rotational symmetry of the Heisenberg model. There exist rigorous, formal proofs of this theorem.
Question But physically, can we understand why this happens (or doesn't happen)?
It's true that in the Heisenberg model $$H=-J\sum_{\langle i,j\rangle}{\bf s}_i\cdot {\bf s}_j\tag{1}$$ the spins ${\bf s}_i$ can rotate in ${\rm 3D}$ while the Ising spins can only point up or down. It's tempting to think, if the spins can rotate continuously, we can excite the system with arbitrarily small energy by gradually tilting the spins infinitesimally w.r.t its immediate neighbors. This is not allowed if the spins are only allowed to flip (e.g., Ising model) and therefore, it requires a finite energy cost to excite the system! In the first case, we thus expect gapless excitation and gapped in the second. But this argument (based only on the freedom of movement of spins) is half-baked!
Continuous movement in ${\rm 3D}$ doesn't seem to ensure emergence of gapless excitations. The Hamiltonian must also have a continuous symmetry. For example, if we think of a generalized Heisenberg model $$H=-\sum_{<ij>}(Js_{ix}s_{jx}+J's_{iy}s_{jy}+J''s_{iz}s_{jz}),\tag{2}$$ the spins can still rotate continuously in ${\rm 3D}$, but the Hamiltonian of Eq.$(2)$ has no rotational symmetry unless the constants $J,J',J''$ are all equal (in which case $(2)$ reduces to form $(1)$)! Therefore, Goldstone's theorem cannot be applied here and there is no certainty that the spectrum will have gapless excitations.
So the question is why is it necessary to have a continuous symmetry in the first place to have gapless excitations? I am not looking for a formal proof.
Tags: condensed-matter, symmetry, symmetry-breaking, ferromagnetism, collective-excitations
SRS
DISCLAIMER: the heuristic argument provided below largely pertains to classical systems. I make no pretense of rigor, nor of simple generalization to quantum systems.
It's tempting to think, if the spins can rotate continuously, we can excite the system with arbitrarily small energy by gradually tilting the spins infinitesimally w.r.t its immediate neighbors.
Let me start by reviewing the basic physical argument for Goldstone modes. If a continuous global symmetry is broken in the ground state, then the ground state picks out one particular broken symmetry state. By slowly varying the system over space by the continuous symmetry, the system always appears to be in the ground state locally. Globally, there can be as small a "twist" as you want, so the energy must be gapless.
Now, suppose your system has continuous degrees of freedom, but does not have a continuous symmetry. Then by slowly varying the spins, there is no way the system could appear to be locally in the ground state everywhere -- indeed, if it did look like this, then you would certainly have a continuous symmetry! Instead, your system will have larger and larger energy densities the more you "twist".
Can you work this out? I tried to write an answer along those lines, but when I did, the terms of the "small" twists did not add up to something small (essentially, each twist contributes a sin(1/N) term, and there are N of those, so this is O(1)). So it seems you need to put some extra information in to make sure that the small twists only contribute O(1/N^2). (To put it more bluntly: Your statement "Globally, there can be as small a "twist" as you want, so the energy must be gapless." is ad hoc completely unjustified, as you are summing many of those small things!)
– Norbert Schuch
Just because many people iterate a supposedly intuitive explanation doesn't make it correct. The multiple of 2pi/L is not 1D specific. My point is more general: Your argument says "each energy contribution is as small as you want". I am saying "But you are adding more and more of them." It is entirely unclear from this "argument" whether this growing number of smaller and smaller numbers will add up to zero, 1, or infinity. As it stands, the intuition could be just as well plain wrong. You should at least try to estimate how this small energy scales with the amount of twist!
There is, by the way, also the possibility that the overall explanation is indeed plain wrong (and not just missing an argument): From the way spin wave excitations are constructed, it does in fact not look very much like a system which has been slowly twisted (indeed, you apply one spin raising operator at some position, which does not seem to change the average state in any region!)
Of course, your point is well-taken and I certainly agree. Perhaps I can assuage your initial concern, now that I've thought about it a bit. Consider a 1D Heisenberg or XY model -- I'll use an XY model, $H = -J \sum_i \cos(\theta_{i+1} - \theta_i)$, for simplicity. The lowest energy Goldstone mode has the small "twist" $\delta \theta = 2\pi / N$ from spin to spin. Then the energy of this mode is $JN (1 - \cos(\delta \theta)) \sim 2\pi^2 J / N$ as $N \rightarrow \infty$ (I'm not sure where your $\sin(1/N)$ came from, now that I think about it).
Stackexchange is warning me not to have an extended discussion in the comments, so I will end it here, although I'd be happy to discuss elsewhere. The last thing I'll say is this: of course I agree that my answer is not extremely careful, and also that not all commonly-held dogma is correct in physics. But that does not mean that these sorts of heuristic arguments are not useful: clearly, just by looking at a few examples, the picture I described is at least true of Goldstone modes in classical systems, and is at least a plausible picture for "why" Goldstone modes are gapless.
Short answer: There are other gapless excitations than Nambu-Goldstone modes.
Longer answer:
The Goldstone theorem guarantees that there is at least one gapless excitation if a continuous symmetry is broken spontaneously, in the absence of long-range forces. That's it. There could be plenty of other gapped and gapless excitations other than the Nambu-Goldstone mode(s).
In fact, the theorem is constructive, and will tell you how to define this excitation: namely, by acting with the broken symmetry generator on the broken-symmetry state. Therefore, this excitation is sometimes called a "non-linear realization of the symmetry".
Aron Beekman
If your Hamiltonian has a continuous symmetry, you can take the ground state and rotate it by an arbitrarily small amount, and it is still a ground state. If you patch together two blocks which differ by a small rotation, you only pay a price at the boundary between the regions, and this penalty can become arbitrarily small as you make the rotation smaller and smaller.
You can then gradually change this small rotation throughout your system, and the price you pay is only the change in rotation, which can be made arbitrarily small. (Symmetry breaking ensures that this state is different from the ground state; otherwise, you might just construct a state which is equal to the ground state.)
However (fortunately? unfortunately?), this is only half the story, since you have to add up these small energy penalties over a large number of positions (since you need to complete a full $2\pi$ rotation globally), and there is no guarantee that the sum of many small things will be small.
To show that this sum is indeed small, you also have to use that the ground state has an extremality property (namely that it is the ground state), which implies that the dependence of the energy on the rotation angle must be quadratic in the angle (a linear term would increase the energy in one direction, but it would necessarily decrease the energy in the other direction), while the number of terms is linear in the inverse angle, so the sum goes to zero for a large system (and thus small angles) at least as one over the system size.
What is different if the symmetry is discrete? For a discrete symmetry, a small rotation of the ground state gives a state which is no longer a ground state, so when you patch together two regions which differ by a small rotation, you pay a price proportional to the volume of one of the regions. On the other hand, if you patch together two regions related by a discrete symmetry transformation, the price you pay at the boundary is constant (as the angle is large). Thus, the excitations you can construct this way have a constant energy.
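A small numerical illustration of this contrast for classical chains (a Python/NumPy sketch; the couplings are set to J = 1 and energies are measured from the ground state):

import numpy as np

def twist_energy_xy(L):
    # XY chain, E = sum_i [1 - cos(theta_{i+1} - theta_i)]:
    # a full 2*pi twist spread over L bonds costs ~ 2*pi^2/L -> 0
    dtheta = 2 * np.pi / L
    return L * (1 - np.cos(dtheta))

def wall_energy_ising(L):
    # Ising chain, E = sum_i [1 - s_i s_{i+1}]: one domain wall
    # flips a single bond and costs 2, independent of L (constant gap)
    return 2.0

for L in (10, 100, 1000):
    print(L, twist_energy_xy(L), wall_energy_ising(L))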
Consider a 1D Hamiltonian with a continuous symmetry $U_g$, $$ U_g^{\otimes N}H(U_g^\dagger)^{\otimes N} = H\ . $$ Then, if $\vert\psi\rangle$ is a ground state of $H$, so is $U_g^{\otimes N}\vert\psi\rangle$.
Now split the chain into two parts A and B, and apply $U_g$ to the sites in A, and $U_h$ to the sites in B. Then, the energy for the Hamiltonian terms inside A and B will still be the same. However, the terms at the boundary will have some higher energy. Assuming a two-body Hamiltonian $H=\sum h$, and denoting the two-site reduced density matrix by $\rho$, this energy will be $$ e'=\mathrm{tr}[h(U_g\otimes U_h)\rho(U_g\otimes U_h)^\dagger] = \mathrm{tr}[h(I\otimes U_{g^{-1}h})\rho(I\otimes U_{g^{-1}h})^\dagger]\ , $$ where I have used the symmetry of the Hamiltonian.
Now if the group is continuous, you can choose $g^{-1}h$ such that $$ \|U_{g^{-1}h}-I\|\le \epsilon\ . $$ This is what is special about the continuous symmetry, and what fails for a discrete symmetry!
Then, you will find that the change in energy at each boundary is $$ \Delta e = e' - \mathrm{tr}[h\rho] \le 2 \|h\|\epsilon\ .\tag{1} $$
You might now think that we are done, since we can now proceed to combine this by splitting the system into $L$ blocks which you rotate by $2\pi/L$ each. Then, $\epsilon \propto 1/L\to 0$. However, the problem is that you are adding $L$ such terms, so from (1), you only find that the energy is $O(\epsilon L)=O(1)$, which comes as no surprise.
So to make sure this energy is actually vanishing as $L\to\infty$, we have to make sure that in fact $\epsilon\propto 1/L^2$ or smaller. How can this be the case?
Clearly, this requires that the first-order term in (1) when expanding $$ U_{g^{-1}h} = 1+\epsilon G+O(\epsilon^2) $$ vanishes; this term is nothing but $$ R:=\epsilon\,\mathrm{tr}[h(I\otimes G)\rho-h\rho(I\otimes G)]\ . $$
It is easy to see that $R\stackrel{!}{=}0$ does not follow from the symmetry of the Hamiltonian. Instead, it must be a property of the ground state, or the ground state and Hamiltonian together!
Indeed, imagine $\Delta e = \epsilon K + O(\epsilon^2)$ for some non-zero constant $K$. Then, summed up over the whole system, the total energy difference is $$ \Delta E = \epsilon L K + O(L\epsilon^2)\ . $$ Indeed, if $\epsilon>0$, this gives a term of order $1$. However, we can also change the sign of $\epsilon$, such that $\epsilon L K<0$ -- in that case, we get an ansatz for a state such that $$ \Delta E < 0\ ! $$ Clearly, this is impossible since $\Delta E$ is the energy difference to the ground state!
Thus, the fact that we are in a ground state rules out a term linear in $\epsilon$, and thus, the energy per site/boundary is $O(\epsilon^2)=O(1/L^2)$, and thus the total energy difference is $$ \Delta E = O(1/L^2)\ , $$ where $L$ can be taken to be the system size (if we twist at each site by an angle proportional to the position).
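A quick numerical illustration (a sketch added here, not part of the original answer): for the classical XY chain $H = -J\sum_i \cos(\theta_{i+1}-\theta_i)$ mentioned in the comments above, the total energy of a uniform $2\pi$ twist spread over $L$ bonds falls off as $1/L$, i.e. the cost per bond is $O(1/L^2)$:

```python
import numpy as np

J = 1.0
for L in (10, 100, 1000, 10000):
    dtheta = 2 * np.pi / L                # twist per bond
    E = L * J * (1 - np.cos(dtheta))      # total excess energy over the aligned state
    print(f"L={L:6d}  E={E:.3e}  L*E={L * E:.4f}")
```

The product $L \cdot E$ converges to $2\pi^2 J \approx 19.74$, confirming the $1/L$ total (and $1/L^2$ per-bond) scaling derived above.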
Norbert Schuch
$\begingroup$ I should also acknowledge that Zack's classical example gave me the intuition why it is $1/N^2$ -- also in the classical case a $\sin(x-y)$ coupling would be linear in the difference, but then the FM state would not be the ground state! $\endgroup$
$\begingroup$ Is there a typo in the line where you say $\Vert U_{g^{-1}h} \Vert < \epsilon$? If $U$ is unitary, then $\Vert U\Vert = 1$. Presumably you mean $\Vert U - \mathbb I\Vert$? $\endgroup$
– J. Murray
$\begingroup$ An interesting follow-up question is how (whether?) this is related to the usual spin wave ansatz which is obtained by applying an operator $\sum e^{ikx} S^+_x$ -- this does not seem obvious. $\endgroup$
$\begingroup$ Glad that your answer worked out, the result is very pretty. The missing piece about the necessity of being in the ground state is a good catch. It's also nice to know that the commonly held classical intuition applies to the quantum case as well (although I would have been very surprised if it didn't). $\endgroup$
$\begingroup$ Thanks. And conversely, the classical case requires (unsurprisingly) the same extra condition of being a ground state to actually work. I'd still be very curious to understand how this links to a spin-wave ansatz, which seems rather different. -- In any case, I had the whole answer written more than a week ago and then I got stuck on the last part - why the $1/N$ part should vanish. In retrospect, it is obvious one has to use this condition. It was really the classical case which made me realize it! $\endgroup$
But this argument (based only on the freedom of movement of spins) is half-baked! Continuous movement in 3D doesn't seem to ensure emergence of gapless excitations.
No, certainly not, but it's close. You're skipping over the part about the energy cost. In your proposed generalized Heisenberg model, $J\neq J'\neq J''$ would generically imply the existence of one or more discrete ground states, separated from one another by higher energy configurations. Starting from the ground state, it would require a finite amount of energy to excite the system, which means that it's gapped.
If all of the $J$'s are very close to one another, then you have an approximate (not exact) symmetry, which would correspond to low-energy (but not exactly massless) excitations. This is the case e.g. for pions, which can be thought of as the pseudo-Goldstone modes corresponding to the approximate chiral-flavor symmetries which are broken by the different masses of the quarks, and which therefore have far smaller masses than most hadronic particles.
So, it's not continuous movement but rather continuous movement at no energy cost which implies the existence of massless Goldstone modes. If the energy cost is small, then the corresponding pseudo-Goldstone modes will be light, and we would say that we have an approximate (or weakly-broken) symmetry.
J. Murray
Sustained preventive chemotherapy for soil-transmitted helminthiases leads to reduction in prevalence and anthelminthic tablets required
Denise Mupfasoni ORCID: orcid.org/0000-0002-8052-82721,
Mathieu Bangert1,
Alexei Mikhailov1,
Chiara Marocco1 &
Antonio Montresor1
The goal of soil-transmitted helminthiases (STH) control programmes is to eliminate STH-associated morbidity in the target population by reducing the prevalence of moderate- and heavy-intensity infections and the overall STH infection prevalence mainly through preventive chemotherapy (PC) with either albendazole or mebendazole. Endemic countries should measure the success of their control programmes through regular epidemiological assessments. We evaluated changes in STH prevalence in countries that conducted effective PC coverage for STH to guide changes in the frequency of PC rounds and the number of tablets needed.
We selected countries from World Health Organization (WHO)'s Preventive Chemotherapy and Transmission control (PCT) databank that conducted ≥5 years of PC with effective coverage for school-age children (SAC) and extracted STH baseline and impact assessment data using the WHO Epidemiological Data Reporting Form, Ministry of Health reports and/or peer-reviewed publications. We used pooled and weighted means to plot the prevalence of infection with any STH and with each STH species at baseline and after ≥5 years of PC with effective coverage. Finally, using the WHO STH decision tree, we estimated the reduction in the number of tablets needed.
Fifteen countries in four WHO regions conducted annual or semi-annual rounds of PC for STH for 5 years or more and collected data before and after interventions. At baseline, the pooled prevalence was 48.9% (33.1–64.7%) for any STH, 23.2% (13.7–32.7%) for Ascaris lumbricoides, 21.01% (9.7–32.3%) for Trichuris trichiura and 18.2% (10.9–25.5%) for hookworm infections, while after ≥5 years of PC for STH, the prevalence was 14.3% (7.3–21.3%) for any STH, 6.9% (1.3–12.5%) for A. lumbricoides, 5.3% (1.06–9.6%) for T. trichiura and 8.1% (4.0–12.2%) for hookworm infections.
Countries endemic for STH have made tremendous progress in reducing STH-associated morbidity, but very few countries have data to demonstrate that progress. In this study, the data show that nine countries should adapt their PC strategies and the frequency of PC rounds to yield a 36% reduction in drug needs. The study also highlights the importance of impact assessment surveys to adapt control strategies according to STH prevalence.
Soil-transmitted helminths are a group of intestinal parasites that are transmitted to humans through ingestion of infective eggs or transcutaneous penetration of larvae excreted in human faeces which contaminate the soil and water sources [1]. Soil-transmitted helminthiases (STH) are among the most common neglected tropical diseases (NTDs) in developing countries. Worldwide, it is estimated that 820 million people are infected with roundworm (Ascaris lumbricoides), 460 million with whipworm (Trichuris trichiura) and 460 million with hookworms (Necator americanus and Ancylostoma duodenale) in 102 countries [2]. In 2010, these infections contributed 3.4 million disability-adjusted life-years to the global burden of disease [1].
The World Health Organization (WHO) recommends preventive chemotherapy (PC) with albendazole or mebendazole to control STH-related morbidity alongside targeted health education and improved water and sanitation [3]. PC is the regular, coordinated administration of anthelminthic medicines to population groups at risk for STH morbidity. The target populations include preschool-age children (PSAC), school-age children (SAC), women of reproductive age (WRA) and adult groups particularly exposed to STH. The 2020 WHO goal for control of STH is to treat ≥75% of PSAC and SAC in all STH endemic countries [4]. In 2017, more than 500 million SAC (69% of total SAC in need) received PC for STH globally, with 73% of implementation units reaching 75% effective coverage [5].
The main objective of the STH control programme is to eliminate morbidity in the target population by reducing the prevalence of moderate- and heavy-intensity infections to < 2% [3]. A meta-analysis conducted by Marocco et al. showed that after five years of PC, 80% of individuals with infections of moderate and heavy intensity were cured or remained with only light-intensity infection [6].
Since 2010, WHO with the support of drug producers has donated albendazole or mebendazole for the control of STH. In 2018 alone, more than 485 million tablets were donated to endemic countries [5].
The WHO manual on STH control in SAC recommends that control programmes managers collect parasitological indicators every 3–5 years of PC with effective coverage in order to measure the effect of the intervention on the health of the population at risk [4] and, eventually, to reduce the number of PC rounds if the prevalence of STH infection has reduced to a certain level using the WHO decision tree (Additional file 2) [4].
In this study we evaluated the changes in STH prevalence from countries that conducted STH PC for ≥5 years with effective treatment coverage (that is, national treatment coverage ≥75% as defined by WHO) for SAC. These data also give an indication of expected changes in prevalence if countries conduct ≥5 years of PC, and therefore also the expected number of drugs needed by endemic countries over time.
National PC coverage data
We accessed WHO's Preventive Chemotherapy and Transmission control (PCT) databank, an open web-based database on PC implementation in 102 endemic countries [7], to identify countries that had implemented effective coverage of PC (defined as ≥75% national coverage) among SAC for ≥5 years.
Epidemiological data reporting form
The Epidemiological Data Reporting Form (EPIRF) is a standardized form designed to collect epidemiological data on all diseases targeted by PC, namely STH, lymphatic filariasis (LF), onchocerciasis and schistosomiasis. Indicators reported include: survey type, number of rounds of PC delivered prior to survey, date of survey, number of people examined, and numbers of people diagnosed positive for each STH species and for overall STH. Countries receiving donated anthelminthic medicines are invited to periodically report to WHO any changes in the epidemiological situation using the EPIRF. We extracted baseline and impact EPIRF data from endemic countries previously identified as having achieved ≥5 years of effective coverage.
For countries in which the EPIRF was not available we reviewed the available literature on baseline and impact assessment surveys for STH prevalence. Only surveys targeting SAC were considered. We searched official publications such as national NTD master plans and Ministry of Health survey reports as well as data published in the peer-reviewed scientific literature between 2000 and 2017. For online searches we used the terms: mapping, baseline, impact assessment, soil-transmitted helminths and country names, using Google search and MEDLINE.
Studies were eligible for inclusion if, for baseline, the data were collected before the national STH control programme was initiated and, for impact assessment, the data were collected after PC was implemented for ≥5 years with effective coverage. In addition, for all studies, SAC had to be the study population, and information on STH prevalence, overall and/or for each STH species, had to be available. From each publication identified we extracted year, type of survey, target population, prevalence of STH and of each species, number of people examined, intensity of STH infection and diagnostic tool used. For studies where prevalence was reported by species only, we calculated the prevalence of any STH infection using the following equation [4]:
$$ \frac{(a+t+h)-(at+ah+th)+(ath)}{1.06} $$
Where a = prevalence of ascariasis; t = prevalence of trichuriasis and h = prevalence of hookworm infections (all expressed as a proportion).
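A minimal sketch of this calculation (the function name is ours; inputs are proportions, not percentages):

```python
def combined_sth_prevalence(a, t, h):
    """Prevalence of infection with any STH from species-level prevalences.

    a, t, h: prevalence of ascariasis, trichuriasis and hookworm infection
    as proportions (e.g. 0.232 for 23.2%). The inclusion-exclusion terms
    assume independent infections; 1.06 is the empirical correction factor
    in the WHO formula above.
    """
    return ((a + t + h) - (a * t + a * h + t * h) + (a * t * h)) / 1.06

# With the pooled baseline species prevalences reported below:
print(combined_sth_prevalence(0.232, 0.2101, 0.182))  # ~0.475
```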
Data were collated, cleaned, analysed and visualized using the R statistical program (version 3.5; R Core Team, Vienna, Austria). For data in the EPIRF form, we estimated national means using weighted cluster data for each country and for each STH species. We plotted the prevalence of any STH infection and of each species before the start of PC for STH and the prevalence after ≥5 years of PC with effective coverage, using pooled and weighted means. Finally, according to the WHO STH decision tree (Additional file 2), after ≥5 years of PC with effective coverage, countries with STH prevalence < 2% should suspend PC, those with < 10% should conduct PC once every 2 years, and those with < 20% should conduct PC once a year. In this study, we also estimated the decrease in drug needs for countries according to the change in STH prevalence during the ≥5 years of PC.
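The threshold logic can be sketched as follows (only the cut-offs quoted above are encoded; the full WHO decision tree in Additional file 2 covers further cases, e.g. high-prevalence settings, which we do not reproduce here):

```python
def pc_frequency(prevalence_pct):
    """Map post-assessment STH prevalence (%) to a PC schedule,
    using only the thresholds quoted in the text."""
    if prevalence_pct < 2:
        return "suspend PC, maintain surveillance"
    if prevalence_pct < 10:
        return "PC once every 2 years"
    if prevalence_pct < 20:
        return "PC once per year"
    return "maintain current frequency (see WHO decision tree)"
```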
Number of countries that conducted ≥5 years of PC for STH
From the WHO/PCT databank we identified 24 countries that conducted ≥5 years of PC for STH (Additional file 3) between 2003 and 2017. Three countries (Burkina Faso, Mali and Togo) reported their baseline data and 11 countries (Burkina Faso, Burundi, Mali, Ghana, Sierra Leone, Cameroon, Rwanda, Mexico, Nicaragua, Bangladesh and Lao People's Democratic Republic) reported their impact assessment data through EPIRF. Two countries (Burkina Faso and Mali) collected impact assessment data for STH through transmission assessment surveys (TAS) for LF. TAS is conducted after 5 years of annual LF MDA with coverage over 65% to confirm that the prevalence of infection is below a threshold at which recrudescence is unlikely to occur; WHO recommends collecting data on STH epidemiology simultaneously in the area to determine whether STH PC should be continued after LF MDA has stopped. Ten countries conducted national surveys. In addition, we identified baseline surveys conducted before PC for 12 countries, as well as two impact assessment surveys through publications and two unpublished Ministry of Health reports on STH PC impact. In total, 15 countries in four WHO regions conducted PC for STH for ≥5 years and collected data before and after interventions (Fig. 1).
Flow chart summary on countries selected and data sources used for baseline and impact assessment surveys. SAC: School-age children; STH: Soil-transmitted helminth
Prevalence of STH infection at baseline before PC
Most of the baseline data were collected during 2002–2009 (Table 1). The average size of the population surveyed was 8868 (range 266–22 166) and 80% of the surveys were designed to be nationally representative. Six, four and four countries reported high, moderate and low STH prevalence, respectively. Figure 2 (left column) shows the pooled prevalence by country; the cumulative prevalence was 48.9% (33.1–64.7%) for any STH, 23.2% (13.7–32.7%) for A. lumbricoides, 21.01% (9.7–32.3%) for T. trichiura and 18.2% (10.9–25.5%) for hookworm infections.
Table 1 Soil-transmitted helminth infection prevalence in SAC before and five or more years after initiation of preventive chemotherapy for STH in the 15 countries
Prevalence of STH (first row), Ascaris lumbricoides (second row), hookworm (third row) and Trichuris trichiura (fourth row) infections in countries before (left column) and after (right column) ≥ 5 years of preventive chemotherapy with effective coverage. Each dot represents a published study or WHO EPIRF data. The size of the dot indicates the number of people assessed. The red square indicates pooled and weighted mean prevalence with 95% confidence intervals. WHO: World Health Organization; EPIRF: Epidemiological data reporting form
Prevalence of STH infection after ≥5 years of PC with effective coverage
Between 2003 and 2017, the countries included in this study conducted and reported 5–14 rounds of PC, averaging eight rounds of PC for STH for SAC with effective coverage. Eleven countries conducted national surveys to measure the impact of PC on STH infection, while two countries collected the data during LF TAS (Table 1). These surveys were conducted during 2012–2017; the average population size surveyed was 6439 (range 990–16 890). In five countries STH prevalence was moderate while in nine STH prevalence was low. The pooled prevalence was 14.3% (7.3–21.3%) for any STH, 6.9% (1.3–12.5%) for A. lumbricoides, 5.3% (1.06–9.6%) for T. trichiura and 8.1% (4.0–12.2%) for hookworm infections (Fig. 2, right column). In countries with low-level STH prevalence at the beginning of their STH control programme, the prevalence was reduced to < 2% while countries with high prevalence moved to low or moderate prevalence (Table 1).
Estimated reduction in number of anthelminthic tablets
For the ≥5 years of PC for STH, five countries conducted one round annually while 10 countries conducted semi-annual rounds. Figure 3 shows that if the 15 countries that conducted PC for ≥5 years had applied WHO thresholds for changing the frequency of PC rounds using the impact assessment results on STH infection, the number of anthelminthic tablets needed for STH PC would have been reduced by an average of 36%. According to the WHO decision tree (Additional file 2), three countries would have suspended STH PC in some areas (because the STH prevalence was < 2%), six would have decreased the frequency of PC from semi-annual to annual or biennial, and six would have maintained their previous frequency of PC.
Anthelminthic drug reduction after ≥5 years of STH PC with effective coverage. STH: Soil-transmitted helminth; PC: Preventive chemotherapy
In this study we report data from 15 STH endemic countries that conducted ≥5 years of PC with effective coverage among SAC. The overall result demonstrates a reduction in the prevalence of any STH and of each STH species, representing tremendous progress for these countries. Importantly, in three countries the prevalence of any STH in the follow-up survey was less than 2%, meaning that they successfully eliminated STH morbidity.
A change in prevalence status implies a change in the frequency of PC rounds according to the WHO decision tree (Additional file 2). Countries with an STH prevalence < 2% after ≥5 years of PC should suspend the intervention and maintain surveillance to detect possible rebounds of prevalence, whereas countries with a prevalence of 2–10% should proceed to PC once every two years. To maintain the gains made when scaling down or discontinuing PC for STH, the national STH control programme should ensure that the water, sanitation and hygiene (WASH) component is also implemented. A modelling study on the impact of deworming and WASH on STH transmission has shown that discontinuing PC and sustaining PC gains require continuation of WASH interventions [22]. Only by achieving Sustainable Development Goal 6, which seeks to ensure universal access to basic WASH in communities, schools and healthcare facilities by 2030, will reductions of morbidity due to STH infection and other NTDs associated with water and sanitation be sustained [23].
Among the 15 countries included in the study, 10 are co-endemic for LF so community-based deworming was the strategy used. Most of these countries have already started scaling down mass drug administration (MDA) for LF. Three countries (Burkina Faso, Mali and Ghana) had a low STH prevalence at the beginning of the programme, but by conducting LF MDA with ivermectin and albendazole or diethylcarbamazine citrate and albendazole, STH infections were also treated. Meanwhile, in 2019, many implementation units have already stopped PC for LF and STH.
The limitations of this study are that the data used as the baseline for STH prevalence may have been collected after the start of LF elimination programmes and thus may underestimate the real STH prevalence before mass treatment was initiated. For countries in which we knew this was the case, we used survey data collected before the start of MDA for LF, as in Cameroon, where we used baseline data collected in 1987. Moreover, impact assessment surveys were never implemented in the same individuals who had participated in the baseline surveys, sometimes not even in the same area, making comparison difficult. Finally, not all surveys were nationally representative and thus did not truly reflect the status of STH endemicity in the country. Standardized implementation of impact surveys in all endemic countries would strengthen these data.
This study shows the importance of conducting impact assessment surveys after ≥5 years of STH PC with effective coverage to guide national STH control programmes in adapting the number of PC rounds according to the new epidemiological situation. Furthermore, a reduction in the frequency of PC and the consequent reduction of associated costs would help endemic countries to achieve national support for, and ownership of, their PC programmes. It would be necessary to confirm that, in different countries and in different epidemiological situations, the reduction in frequency of PC suggested by the decision tree is sufficient to maintain the improvement achieved.
All relevant data are within the manuscript and its Additional files.
EPIRF:
Epidemiological data reporting form
NTD:
Neglected tropical disease
PC:
Preventive chemotherapy
PCT:
Preventive chemotherapy and transmission control
PSAC:
Preschool-age children
STH:
Soil-transmitted helminthiases
WASH:
Water, sanitation and hygiene
Montresor A, Trouleau W, Mupfasoni D, Bangert M, Joseph SA, Mikhailov A, Fitzpatrick C. Preventive chemotherapy to control soil-transmitted helminthiasis averted more than 500 000 DALYs in 2015. Trans R Soc Trop Med Hyg. 2018. https://doi.org/10.1093/trstmh/trx082.
Guideline: preventive chemotherapy to control soil-transmitted helminth infections in at-risk population groups. Geneva: World Health Organization; 2017. https://apps.who.int/iris/bitstream/handle/10665/258983/9789241550116-eng.pdf. Accessed 20 Mar 2019.
Soil-transmitted helminthiases: eliminating soil-transmitted helminthiases as a public health problem in children: progress report 2001–2010 and strategic plan 2011–2020. Geneva: World Health Organization; 2012. https://apps.who.int/iris/bitstream/handle/10665/44804/9789241503129_eng.pdf. Accessed 20 Mar 2019.
Helminth control in school-age children: a guide for managers of control programmes, 2nd. Geneva: World Health Organization; 2011. https://apps.who.int/iris/bitstream/handle/10665/44671/9789241548267_eng.pdf. Accessed 13 Mar 2019.
Schistosomiasis and soil-transmitted helminthiases: number of people treated in 2017. Wkly Epidemiol Rec. 2018;93:681–92 https://apps.who.int/iris/bitstream/handle/10665/276933/WER9350.pdf. Accessed 15 Mar 2019.
Marocco C, Bangert M, Joseph SA, Fitzpatrick C, Montresor A. Preventive chemotherapy in one year reduces by over 80% the number of individuals with soil-transmitted helminthiases causing morbidity: results from meta-analysis. Trans R Soc Trop Med Hyg. 2017. https://doi.org/10.1093/trstmh/trx011.
Yajima A, Mikhailov A, Mbabazi PS, Gabrielli AF, Minchiotti S, Montresor A, et al. Preventive chemotherapy and transmission control (PCT) databank: a tool for planning, implementation and monitoring of integrated preventive chemotherapy for control of neglected tropical diseases. Trans R Soc Trop Med Hyg. 2012;106:215–22.
Ortu G, Assoum M, Wittmann U, Knowles S, Clements M, Ndayishimiye O, et al. The impact of an 8-year mass drug administration programme on prevalence, intensity and co-infections of soil-transmitted helminthiases in Burundi. Parasit Vectors. 2016. https://doi.org/10.1186/s13071-016-1794-9.
Soares Magalhães RJ, Biritwum N-K, Gyapong JO, Brooker S, Zhang Y, Blair L, et al. Mapping helminth co-infection and co-intensity: geostatistical prediction in Ghana. PLoS Negl Trop Dis. 2011. https://doi.org/10.1371/journal.pntd.0001200.
Baker JM, Trinies V, Bronzan RN, Dorkenoo AM, Garn JV, Sognikin S, et al. The associations between water and sanitation and hookworm infection using cross-sectional data from Togo's national deworming program. PLoS Negl Trop Dis. 2018. https://doi.org/10.1371/journal.pntd.0006374.
Koroma JB, Peterson J, Gbakima AA, Nylander FE, Sahr F, Soares Magalhães RJ, et al. Geographical distribution of intestinal schistosomiasis and soil-transmitted helminthiasis and preventive chemotherapy strategies in Sierra Leone. PLoS Negl Trop Dis. 2010. https://doi.org/10.1371/journal.pntd.0000891.
Ratard RC, Kouemeni LE, Ekani Bessala MM, Ndamkou CN, Sama MT, Cline BL. Ascariasis and trichuriasis in Cameroon. Trans R Soc Trop Med Hyg. 1991. https://doi.org/10.1016/0035-9203(91)90170-4.
Ratard RC, Kouemeni LE, Ekani Bessala MK, Ndamkou CN. Distribution of hookworm infection in Cameroon. Ann Trop Med Parasitol. 1992;86:413–8.
Mupfasoni D, Ruberanziza E, Karibushi B, Rujeni N, Kabanda G, Kabera M. National school prevalence survey on soil-transmitted helminths and schistosomiasis, Rwanda. Int J Antimicrob Agents. 2009;34(Suppl 2):S15.
Champetier de Ribes G, Fline M, Désormeaux AM, Eyma E, Montagut P, Champagne C, Pierre J, et al. Intestinal helminthiasis in school children in Haiti in 2002. Bull Soc Pathol Exot. 2005;98:127–32.
Flisser A, Valdespino JL, García-García L, Guzman C, Aguirre MT, Manon ML, et al. Using national health weeks to deliver deworming to children: lessons from Mexico. J Epidemiol Community Health. 2008. https://doi.org/10.1136/jech.2007.066423.
Rosewell A, Robleto G, Rodríguez G, Barragne-Bigot P, Amador JJ, Aldighieri S. Soil-transmitted helminth infection and urbanization in 880 primary school children in Nicaragua, 2005. Trop Doc. 2010. https://doi.org/10.1258/td.2010.090425.
A situation analysis: neglected tropical diseases in Bangladesh. Dhaka: Ministry of Health & Family Welfare, Government of Bangladesh; 2010 http://pdf.usaid.gov/pdf_docs/pnady849.pdf. Accessed 15 Mar 2019.
Montresor A, Zin TT, Padmasiri E, Allen H, Savioli L. Soil-transmitted helminthiasis in Myanmar and approximate costs for countrywide control. Tropical Med Int Health. 2004. https://doi.org/10.1111/j.1365-3156.2004.01297.
Tun A, Myat SM, Gabrielli AF, Montresor A. Control of soil-transmitted helminthiasis in Myanmar: results of 7 years of deworming. Tropical Med Int Health. 2013. https://doi.org/10.1111/tmi.12130.
Rim HJ, Chai JY, Min DY, Cho SY, Eom KS, Hong SJ, et al. Prevalence of intestinal parasite infections on a national scale among primary schoolchildren in Laos. Parasitol Res. 2003. https://doi.org/10.1007/s00436-003-0963.
Coffeng LE, Vaz Nery S, Gray DJ, Bakker R, de Vlas SJ, Clements ACA. Predicted short and long-term impact of deworming and water, hygiene, and sanitation on transmission of soil-transmitted helminths. PLoS Negl Trop Dis. 2018. https://doi.org/10.1371/journal.pntd.0006758.
Bangert M, Molyneux DH, Lindsay SW, Fitzpatrick C, Engels D. The cross-cutting contribution of the end of neglected tropical diseases to the sustainable development goals. Infect Dis Poverty. 2017. https://doi.org/10.1186/s40249-017-0288-0.
We are thankful to the national NTD programme managers and the WHO country and regional offices for providing the data through the WHO PCT databank. We also thank Karen CICERI-REYNOLDS and Albis Francesco GABRIELLI for proofreading the article.
Department of Control of Neglected Tropical Diseases, World Health Organization, Geneva, Switzerland
Denise Mupfasoni, Mathieu Bangert, Alexei Mikhailov, Chiara Marocco & Antonio Montresor
Denise Mupfasoni
Mathieu Bangert
Alexei Mikhailov
Chiara Marocco
Antonio Montresor
DM, MB and AMo conceptualized the study. DM and AM did data collection. DM and MB did data analysis. DM and MB drafted the full paper. AMo revised and edited the manuscript. All authors contributed to final development of the article. All authors read and approved the final manuscript.
Correspondence to Denise Mupfasoni.
WHO decision tree for STH control programmes. (DOCX 178 kb)
Soil-transmitted helminthiases (STH) preventive chemotherapy (PC) coverage, by country, 2003–2017. (DOCX 22 kb)
Mupfasoni, D., Bangert, M., Mikhailov, A. et al. Sustained preventive chemotherapy for soil-transmitted helminthiases leads to reduction in prevalence and anthelminthic tablets required. Infect Dis Poverty 8, 82 (2019). https://doi.org/10.1186/s40249-019-0589-6
Soil-transmitted helminthiases
Control; morbidity
The exact mechanism of energy release during bond formation on the atomic level
Imagine a situation:
We have two hydrogen atoms.
We magically collide them together dead center, head-on.
Each atom has an exactly specified amount of kinetic energy.
We collide them with precisely enough energy to form molecular hydrogen (H2).
Chemical reaction occurs: H + H -> H2 + energy
My question is:
How is the bond-formation energy released?
Is it some kind of EM radiation? Photons? If it is photons, what determines their energy and direction?
If it is not EM, what is it? Vibration of the new molecule? If it is "vibration", why does the hydrogen molecule not decay back into two hydrogen atoms?
Is there only one mechanism of energy release? Does it depend on the bond type?
I tried to search, but I can find only formulas/laws without a description of how it happens in 3D space, step by step, with drawings/schemas/simulations.
thermodynamics physical-chemistry
OlegSerov
$\begingroup$ Most probably, it will release EM. $\endgroup$
– philip_0008
$\begingroup$ What do you need all that magic "dead center" stuff for? Collisions happen at different impact parameters and we can deal with that. The energy to form molecular hydrogen is zero. It's an exothermal reaction and quite a hot one, given that the dissociation energy of H2 is 4.48eV. $\endgroup$
– CuriousOne
$\begingroup$ I need that "dead center" to simplify everything, because if they were not in dead center - the resulting energy probably will be in the rotation moment of a molecule. $\endgroup$
– OlegSerov
$\begingroup$ Often this needs a third body. $\endgroup$
This is an interesting question. If you have two isolated atoms (and they don't need to be $\mathrm{H}$) in the gas phase moving towards one another and colliding, the situation is quite straightforward, as total energy is conserved. If we assume that one atom is stationary, it is easier to describe what happens, and it makes no difference to the answer.
As there is no barrier between the reactants and products, the initial kinetic energy is converted into potential energy of, say, $\mathrm{H_2}$ (I am ignoring effects of the centrifugal barrier for the sake of simplicity). The atoms move together until they reach a minimum separation caused by their mutual repulsion. (This is not the equilibrium separation but shorter than this.) At this point they move apart again.
The reason for this is that in this isolated situation total energy is conserved; all that happens is that the kinetic energy is converted into potential energy of the molecule, but then, as the atoms repel one another, they are still above the dissociation energy of the molecule and so fly apart. Actually, they were always above the dissociation energy; they had to be, in order to move together in the first place.
The figure shows a one-dimensional cut through the spherically symmetric potential energy surface, which is appropriate as we deal with atoms. (If an atom reacts with a diatomic then a 2D contour plot is needed.) The energy profile is shown in the diagram.

The initial kinetic energy is the energy of the green line above zero, which we take as the energy of the two atoms when distant from one another. As the atoms approach, the red line shows the potential energy, which increases (in the negative sense) as the molecule forms; the potential is attractive at separations larger than $r_e$, the equilibrium internuclear separation, and repulsive at shorter separations. Only the zero-point vibrational energy level is shown.

The kinetic energy rises as the potential energy falls, as total energy is constant, and mirrors the potential energy. When the inner repulsive wall is reached, the two atoms start to move apart again. The time for this approach and separation across the potential well is approximately half a vibrational period, so typically a few femtoseconds (up to about $0.1$ picosecond at most, depending on the atoms; $1\,\rm fs = 10^{-15}\, s$). This is shown by the top horizontal arrow. Not much can happen in this time. The $\mathrm{H_2}$ cannot radiate if it is in its ground electronic state because of radiation selection rules, but $\mathrm{HI}$ could do this and emit an infra-red photon. However, the chance of this happening is vanishingly small, as the radiative lifetime in the infra-red is vast compared to the vibrational period. (If the kinetic energy is so large that excited states are produced, fluorescence may occur, but again the chance is very small for the same reason.)
If there is a surface involved, for example one atom is on a dust particle, then energy can be lost to the particle and the $\mathrm{H_2}$ stabilised. (I'm assuming no separate dust-particle collision with $\mathrm{H}$ atoms and diffusion together on the dust to form $\mathrm{H_2}$.) Similarly, if there is another collision by a third body, for example if the $\mathrm{H}$ atoms are in a gas at some reasonable pressure (either $\mathrm{H_2}$ or an unreactive gas, say at a few hundred torr), then there is a good chance that the collision will remove energy from the nascent $\mathrm{H_2}$ and so stabilise it. (Here I assume that the gas is cold enough to remove vibrational energy on average rather than add it.)
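A minimal numerical sketch of this "in and out" trajectory (added here, not part of the original answer; a Morse potential with illustrative, loosely $\mathrm{H_2}$-like parameters in arbitrary consistent units):

```python
import numpy as np

# Morse potential V(r) = De * ((1 - exp(-a*(r - re)))**2 - 1), zero at infinity
De, a, re = 4.75, 1.0, 0.74   # illustrative well depth, range parameter, bond length
mu = 0.5                      # reduced mass

def force(r):
    e = np.exp(-a * (r - re))
    return -2.0 * De * a * e * (1.0 - e)   # F = -dV/dr

r, v = 8.0, -0.5   # start far apart, moving inward, so total energy E > 0
dt = 1e-3
for _ in range(40000):        # symplectic (semi-implicit) Euler integration
    v += force(r) / mu * dt
    r += v * dt

print(f"final separation: {r:.1f}")   # back out beyond the start: the atoms fly apart
```

With no third body or radiative channel in the model, total energy is conserved, so after reaching the inner turning point the pair necessarily separates again, exactly as described above.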
porphyrin
$\begingroup$ I read it once and did not understand it. I will try and read it again later. So the answer is that the energy produced via forming bonds between hydrogen atoms is the vibration of the hydrogen molecule? In other words, while the hydrogen atoms stay "within" the hydrogen molecule, they are moving back and forth relative to each other? $\endgroup$
$\begingroup$ yes, this is on the right lines. To summarise, if there is no energy loss then its an 'in and out' process and the $H_2$ molecule only lasts for a few femtoseconds. If there is sufficient energy loss (due to collisions with 3rd party) then the $H_2$ molecule is formed and has some large amount of vibrational energy and vibrates back and forth. Other collisions will remove more vibrational energy. If you now consider a large number of such molecules they will eventually have the Boltzmann energy distribution appropriate to the prevailing temperature. $\endgroup$
– porphyrin
$\begingroup$ So, if there were no energy loss due to interaction with other matter/energy, the H2 molecule would last only a moment? $\endgroup$
$\begingroup$ @OlegSerov; yes thats correct, typically a few tens of femtoseconds, longer for $I_2$ than $H_2$ as you might expect. $\endgroup$
$\begingroup$ On the second thought, never mind. $\endgroup$
For the situation you've outlined, yes, the only way for the two atoms to form a molecule is via the emission of a photon. This is known as 'photoassociation', which isn't very common (although its reverse process, photodissociation, is much more frequent).
In the absence of a radiative energy sink, no associative processes are possible, and this helps to underline a misconception in the way you set up the question:

"We collide them with precisely enough energy to form molecular hydrogen (H2)."
There is no such energy! If you have two hydrogen atoms, then by definition they have positive energy, but the bound states of the system have negative energy, i.e., they do not have enough energy to form two free hydrogen atoms at infinity. What you've proposed is a scattering process, and unless there's somewhere to put that energy difference, the atoms will retain their initial positive energy, and end up breaking up and remaining as free atoms.
In a real-world cloud of atomic hydrogen, the atoms will quickly associate into diatomic hydrogen molecules, but they rarely do so via the route that you've specified. Instead, in most gases in the laboratory, the crucial process is three-body collisions: these are (obviously) much less frequent than two-body collisions, but they allow for associative processes where two of the atoms join up into a bound molecular state and dump the energy difference into the kinetic energy of the third atom: $$ \rm H+H+H \to H_2 + fast \ H $$ (That said, the relative importance of this process depends critically on the density of the gas, and if the gas is too dilute, then this process won't happen. Indeed, you don't necessarily have any guarantee that any association process will happen meaningfully. This is particularly important in cold-atoms BEC work, where many of the experimental conditions are designed to keep associative processes at a minimum (including keeping the cloud very dilute, so that the low density inhibits three-body collisions).)
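As a rough scaling sketch (an addition, not part of the original answer): per unit volume the three-body association rate goes as $k_3 n_{\mathrm{H}}^3$ while two-body processes go as $k_2 n_{\mathrm{H}}^2$, so halving the density suppresses three-body events by a factor of $8$ but two-body ones only by $4$; in a sufficiently dilute gas the three-body channel effectively switches off.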
Now, to dig into the specific questions posed in the current bounty:
The existing answer proposes dissipation through a collision with a third body. Another possible mechanism is electronic excitations. Can either of these occur, depending on temperature and density? Is there data on temperature and density dependence of the reaction rate that would support either or both explanations?
Electronic excitations are indeed an expected feature, but they're a part of the photoassociation mechanism: if the electronic excitation energy is not returned into the nuclear degrees of freedom (which will lead to dissociation), then it needs to be emitted radiatively. There's nowhere else for the energy to go, and there's no chance of the energy getting retained indefinitely in an electronic excitation.
Similarly, there's essentially no way to produce photoassociation (i.e. radiative disposal of the energy excess) other than through electronic excitations $-$ the nuclei just aren't that optically active. Thus, you expect photoassociation to depend sensitively on the energy of the incoming nuclei: if it matches that of an electronic excitation of the bound state (with some considerable leeway, though, since there's a significant energy span of bound states from the rovibrational ground state all the way to threshold) then you expect this resonance to have a higher signal.
From a (brief) look at the literature, I don't get a sense that the photoassociation of hydrogen has been the object of much attention, but it does have some literature dedicated to it (most notably A.P. Mosk, "Photoassociation of Spin-Polarized Hydrogen", Phys. Rev. Lett. 82, 307 (1999), from what I can tell). The reaction $$ \rm H+H \to H_2 + \it some\rm\text{ form of energy} \tag 1 $$ is obviously of clear importance to astrophysics, so there's bound to be a huge literature about it, and the array of possible conditions of interest is much too broad for a Physics SE post (particularly once you introduce dust grains that can act as catalysts and as third-bodies that can absorb the extra energy).
As regards actual astrophysical environments, and how the astrophysical literature views the reaction $(1)$, from what I can tell, V. Pirronello et al. offer a good summary ["Formation of Molecular Hydrogen: The Mother of All Molecules: An Experimental Investigation", in Exobiology: Matter, Energy, and Information in the Origin and Evolution of Life in the Universe, J Chela-Flores & F Raulin (eds), Springer, Dordrecht, 1998]:
The problem of the formation of molecular hydrogen arises from the fact that such simple molecule is not efficiently formed by the direct reaction of two hydrogen atoms colliding in the gas phase. This is a consequence of the fact that the protomolecule, just formed in a high vibrational level, has to release quickly (roughly in a time comparable with its vibrational period) an energy excess of about four and a half electronvolt (or at least a good fraction of it) to became stable. In the gas phase such a proto-molecule is isolated and the only way to achieve this goal is through the emission of a photon; this process is however quite slow because involves roto-vibrational forbidden transitions and the usual unavoidable result is that the two hydrogen atoms restart wondering in the cloud. This mechanism, that is important at the early stages of the universe for the formation of primordial molecular hydrogen, is inside already developed galaxies unable to explain the observed abundances of H$_2$. It was then proposed already decades ago (Gould and Salpeter, 1963; Hollenbach and Salpeter, 1970; Hollenbach, Werner and Salpeter, 1971; Williams, 1968) that a three body reactions (with the third one taking the excess energy) would have been the way to overcome the problem. In the interstellar medium, due to the low densities involved ($n_\mathrm H$ of the order of $10^2 \:\rm cm^{-3}$, in the so called "diffuse clouds"), the third body cannot be another atom, not even H that is the most abundant, but it must be a dust grain.
The series of processes that have to occur are: collision and sticking of two H atoms with a grain of interstellar dust, mobility of both of them and, upon encounter, recombination with formation of H$_2$ and in most of cases release in the gas phase.
Predictive mapping of the global power system using open data
Data Descriptor
C. Arderne ORCID: orcid.org/0000-0002-7904-22161,
C. Zorn ORCID: orcid.org/0000-0003-3101-10042,3,
C. Nicolas1 &
E. E. Koks ORCID: orcid.org/0000-0002-4953-45272,4
Scientific Data volume 7, Article number: 19 (2020)
Limited data on global power infrastructure makes it difficult to respond to challenges in electricity access and climate change. Although high-voltage data on transmission networks are often available, medium- and low-voltage data are often non-existent or unavailable. This presents a challenge for practitioners working on the electricity access agenda, power sector resilience or climate change adaptation. Using state-of-the-art algorithms in geospatial data analysis, we create a first composite map of the global power system with an open license. We find that 97% of the global population lives within 10 km of a MV line, but with large variations between regions and income levels. We show an accuracy of 75% across our validation set of 14 countries, and we demonstrate the value of these data at both a national and regional level. The results from this study pave the way for improved efforts in electricity modelling and planning and are an important step in tackling the Sustainable Development Goals.
Measurement(s) electric power system • public utility line
Technology Type(s) digital curation • computational modeling technique
Factor Type(s) geographic location
Sample Characteristic - Location Earth (planet)
Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.11298584
Background & Summary
Reliable infrastructure networks form the backbone of a prosperous modern economy, with electricity networks providing a central role in helping to deliver health, education, and other infrastructure services. However, 11% of the global population still lacks access to a reliable electricity supply1 with lower income countries disproportionately represented. Burundi, for instance, still has over 90% of the population without electricity access. In response, Sustainable Development Goal 7 (SDG 7) calls for universal access to affordable, reliable, and modern energy services by 20302. Achieving this goal is similarly important in achieving other SDGs3,4,5 and as such requires a coordinated effort – not only in the targeted expansion of electricity networks, but also towards increases in renewable energy share of the global energy mix and improved network resilience. To measure the joint global progress on achieving SDG 7, detailed spatial information on the current locations and properties of electricity infrastructure is essential.
Physical assets, such as generation plants, transmission networks (typically higher voltage lines used for the bulk movement of electricity), distribution networks (typically lower voltage lines for distributing electricity from transmission networks to end consumers), substations (for transforming between voltages), and related assets are important to assess the practicalities of connecting isolated communities to wider transmission networks. While more developed countries and their infrastructure owners/operators could be expected to hold suitably complete inventories and geospatial datasets, in many lower-income countries data on infrastructure and planned investments are often outdated or incomplete6. This makes it difficult for government bodies and utility companies to plan for extending, strengthening, and modifying their electricity networks. Even in advanced economies, where this network data may exist, it can be difficult to find it in an open and standardized way. Projects such as OpenStreetMap have greatly democratized access to geospatial data generally7, and while there have been significant efforts on collating data on electricity generation infrastructure8, and high voltage transmission lines in OpenGridMap9, a globally consistent database of medium and lower voltage distribution networks still lags. Reasons for this can be attributed to the significantly larger number of distribution infrastructure owners/operators required to contribute data (compared to transmission operators, which are often national-scale bodies), non-existent datasets in GIS-ready formats, restricted public access for security reasons, and the difficulties in digitizing from openly available aerial photographs, whether due to the generally smaller overhead line structures or the fact that cables are often buried in many urban centers.
In addition to the physical assets, the geospatial locations of electrified settlements within each country is often still unknown. To fill this gap, several studies have developed methods to map these locations and compute accessibility metrics when combined with high-resolution population maps. However, applications have largely been focused on country or sub-continental scales10,11,12. While these methods are showing great promise, a globally consistent approach and map of accessibility is still lacking which can ultimately aid development and planning efforts in attaining SDG 7.
In response, we present the first composite map of the global power grid using publicly available open data – generated through the new open-source tool gridfinder13, based on work by Rohrer at Facebook: https://code.fb.com/connectivity/electrical-grid-mapping. This tool applies multiple filtering algorithms to night-time light imagery to identify locations most likely to be producing light from electricity. These light sources (target-locations) are then connected to known electricity networks through a least-cost routing algorithm following roads and known distribution lines (adopted from OpenStreetMap). This results in connected networks at two voltage levels, which we define as high voltage (HV, >70 kV) and medium voltage (MV, 10–70 kV). The dataset shows 97% of the global population reside within 10 km of electricity lines of >10 kV, with the vast majority of those beyond 10 km being across Sub-Saharan Africa.
The dataset is validated two-fold. Firstly, against 16 electricity networks across 14 countries representing the range of World Bank income groupings: High, Upper-Middle, Lower-Middle, and Low. Across an equal-area grid (edge length 15 km), we compare observed and gridfinder delineated networks to show a predictive accuracy of 75%. Secondly, we evaluate the effectiveness of the dataset in predicting accessibility with respect to income groups, human development index (HDI), and investment requirements as commonly reported in the literature and used as key reporting metrics.
A further accompanying dataset is presented representing the density of low voltage (LV) infrastructure, herein defined as below 1 kV. The small geographic scale and prevalence of buried LV cables push the boundaries of what satellite imagery can detect. To create this dataset we therefore turn to spatial heuristics to create a weighted layer of electricity access rate by applying an algorithm calibrated to national urban and rural accessibility statistics.
Results of this study pave the way for improved efforts in electricity modelling and planning. Although only predictive, this standardized global dataset of transmission, distribution and low-voltage lines will be a valuable starting point for electrification planners and researchers in several fields, such as assessing social inequalities14, estimating exposure to natural hazards15, and quantifying electricity infrastructure roll-out requirements12,16. An important limitation is that this dataset does not attempt to replicate actual network configurations or precise structures, as would be needed for electrical modelling such as power flow modelling; this was seen as out of scope, and is unlikely to be possible at a global scale with current data sources. The use of open data now allows anyone to freely use or develop this tool (see Code Availability) further to model the power network for any location, as we have demonstrated for the entire globe.
A schematic overview of the computational process for generating a map of the global power grid is illustrated in Fig. 1.
Simplified overview of methodology. Data inputs in green; filtering steps in yellow; intermediate results in red; outputs in blue. The sources of data inputs are either given in Table 1 or provided in the Technical Validation section.
Input data and processing
The datasets used in generating the global datasets are described in Table 1. To develop the initial high-voltage network layer, there are several existing power network datasets, both open and proprietary. Chief among these is OpenStreetMap, which is relatively comprehensive for transmission lines. Additionally, there are many national and regional datasets released under various licenses, such as SciGRID17, and outputs from ENTSO-E, but there has been no attempt at a complete picture of the world. Machine learning methods for feature detection of transmission networks from satellite imagery have made some progress, but still rely heavily on manual tagging18. For this exercise, we have determined that the data from OpenStreetMap represent a sufficiently detailed picture of high-voltage transmission lines (here defined as above 70 kV). In some areas, it is comprehensively traced from satellite imagery and ground truth, while in others (particularly developing countries) accuracy and completeness are variable, depending on whether someone has taken the effort to map an area. Although it is often unsuitable for power flow analysis, it is enough for the goal of mapping approximate infrastructure. The vast majority (92%) of the well-tagged data fit our definition of high-voltage, resulting in a 6.6 million km global transmission network.
Table 1 Input data sources. All data sources listed are openly available online.
The goal of this research is to produce a single global dataset that is valuable to researchers and practitioners and easy to replicate, and to serve as a starting point for future improvements and region-specific efforts. This guided the decision to rely exclusively on open data sources such as OpenStreetMap and publicly released satellite imagery and derived products from NASA, NOAA, ESA and others. Focusing on a specific country or region allows the inclusion of locally valuable data sources (e.g. improved grid and road networks, substation locations, improved data on isolated grids, disaggregated access levels), as well as fine-tuning parameters with attention to local results. In addition, a smaller geographic context or additional effort could be used to better understand temporal changes in electricity access and infrastructure.
Identifying electrification targets
The first step in predicting distribution lines is predicting the points that they connect. The definition of electrification targets as locations likely to be connected to a medium-voltage network informed the approach described below. To filter transient values such as fires and reflections, twelve months of night-time lights imagery from VIIRS19 are merged, using for each pixel the 70th percentile value. This value is chosen based on manual testing over several sample countries covering different development levels and land types; higher values (above 80) did not sufficiently filter transient events and extraneous sources (such as fires, gas flaring, snow, and lunar reflections), while lower values (below 60) erased significant sources of real light-emissions. In order to highlight locations that are brighter than their immediate surroundings, thus accentuating even dim points that nonetheless stand out above background light, a two-dimensional filter is convolved over the image with
$$F=\left\{\begin{array}{ll}1/{(1+\left|d\right|)}^{3}, & d\ne 0\\ 0, & d=0\end{array}\right.$$
where d is that pixel's perpendicular distance from a given square sample's centroid. The goal of this filter is to find pixels with a higher value than their neighborhood, biased towards closer areas. To achieve this bias, a non-linear function is needed, and a cubic function was found to achieve better results than a square function. The filter is normalized so that ΣF = 0, and the output is then subtracted from the original image.
A threshold is then applied to create a binary raster of electrification targets. A more lenient threshold value provides higher granularity in detecting small or marginal locations, but creates many more false positives. As this analysis focuses on distribution-connected locations, we settled on a value of 0.1; lower values (below 0.05) allowed too many false positives (rural areas confirmed via satellite imagery to contain no habitation), while higher values (above 0.2) quickly lost identifiably electrified locations. A different filtering function would require this threshold to be re-examined, and, as above, a closer inspection of specific geographical areas might reveal a more appropriate location-specific threshold. Additional filters are applied to remove locations with no population, on slopes above 25°, or on permanent ice caps, where it is very unlikely that distribution lines cross. The output is a global two-dimensional array of target coordinates that must be connected by network lines. An example is given in Fig. 2.
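Continuing the sketch above, the thresholding and masking step could look as follows; the population and slope rasters are placeholders, while the 0.1 threshold and the 25° slope cut-off are the values quoted in the text.

# Placeholder ancillary rasters aligned with the night-time lights grid
pop = np.random.rand(200, 200) * 100      # population per cell
slope = np.random.rand(200, 200) * 40     # terrain slope in degrees

# Binary raster of electrification targets after all filters
targets = (ntl_filtered > 0.1) & (pop > 0) & (slope <= 25)
target_coords = np.argwhere(targets)      # (row, col) pairs to be connected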
Night-time lights as a proxy for electricity access in four countries with different income levels. (a) Kenya, low-income but with regions of dense light emission; (b) Bolivia, lower-middle income; (c) Azerbaijan, higher-middle income and dense population; (d) New Zealand's North Island, high-income but largely rural. This data can also be visualized interactively here: https://gridfinder.org.
Several studies have validated this approach with ground-truth data20,21,22, but there is as yet no standardized methodology. In addition, validations have been done with limited geographical and socio-cultural scope, and with small datasets. Thus, there is significant uncertainty in extrapolating these techniques globally. Most indoor lights do not show up in night-time lights imagery; in effect, the images measure output from street lights and other large industrial light sources. Since our primary purpose is to identify locations very likely connected to the distribution network, and not to assess the connection status of individual buildings or households, these issues are of less concern. Attempting to highlight every area with access at a smaller scale would necessitate a closer examination of this issue and more detailed data.
Deriving MV network distribution lines
To produce data on medium-voltage lines (here defined as 10 to 60 kV), we turn to predictive methods and the tool gridfinder13. Previous efforts have used Voronoi diagrams and minimum spanning trees, but these require highly detailed data on substations and demand points9. The typical approach to a shortest path between points is some form of Dijkstra's algorithm23 (which assumes only two points), while the typical approach to creating a minimum spanning tree is Prim's algorithm (which assumes a fixed, known distance between points)24. Starting with the electrification targets developed above, we apply a many-to-many variant of Dijkstra's algorithm23 with existing road networks as a cost function. The algorithm (see Box 1) greedily adds points as it grows, starting at a single point and branching outwards, keeping track of cost/distance and adding targets as they are found. This process repeats until all cells have been visited.
The algorithm attempts to connect every location with the shortest possible distance, while following roads (preferring larger ones) where possible. In other words, it attempts to predict distribution networks based on the assumptions of optimum network topology and of network lines tending to follow roads. Although both assumptions are frequently false, for a variety of historical reasons, our work indicates that they hold true most of the time: while for cost reasons transmission lines are often built in straight lines, distribution lines generally follow roads, making them easier to build and maintain. The algorithm relies exclusively on open and easily accessible datasets (but could be improved with proprietary data sources) and produces medium-voltage network data for any given area and model parameters.
The algorithm is weighted by a cost function based on existing roads from OpenStreetMap. Existing grid lines from OSM are assigned a cost of 0.00; the largest roads (motorways) are assigned a cost of 0.10, the smallest roads considered (tertiary) a cost of 0.17, and the other classes values in between; these values are arbitrary but produce a reasonable preference for larger roads. Areas with no road are assigned a cost of 1.00. This creates a two-dimensional array representing the cost of traversing each cell between the target points. The algorithm in Box 1 is then applied, using the electrified targets and this cost array as inputs. The result is post-processed by removing lines that replicate existing OSM lines, so that the result only shows additions to the OSM data. The final result is therefore disaggregated between lines directly modelled from OSM data and new lines discovered by the algorithm. The algorithm is run individually for each country, under the assumption that MV lines typically do not cross international borders, and the results are then merged into a single global raster. The output is an estimate for each cell of whether it contains a medium-voltage line. To remove predicted lines in remote areas that are unlikely to be connected to the main network, we used a buffer of 100 km on the OSM data and removed all lines beyond it. A similar process was applied to predicted submarine lines, for example in Indonesia. Moreover, several administrative areas had to be split or removed, such as exclaves, foreign island territories, or spread-out islands. In addition, some very small city-states and islands were removed, as the algorithm does not yet function well below a certain scale.
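A minimal sketch of this cost surface is shown below. The motorway and tertiary costs are those quoted above, while the intermediate road-class values are illustrative assumptions, since the text only states that they lie in between.

import numpy as np

ROAD_COSTS = {
    "motorway": 0.10,    # largest roads: cheapest to follow
    "trunk": 0.12,       # intermediate values are assumptions
    "primary": 0.13,
    "secondary": 0.15,
    "tertiary": 0.17,    # smallest road class considered
}

def build_cost_raster(roads, grid):
    # roads: 2D array of OSM highway tags ("" where there is no road)
    # grid: boolean 2D array marking existing OSM grid lines
    costs = np.ones(roads.shape, dtype=float)   # no road: cost 1.00
    for tag, cost in ROAD_COSTS.items():
        costs[roads == tag] = cost
    costs[grid] = 0.0                           # existing lines are free
    return costs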
The algorithm is shown in operation for Uganda at https://gridfinder.org/animated. The resulting dataset, given in Fig. 3 at the global scale (Fig. 3a) and at national/regional scales (Fig. 3b–d), comprises 7 million km of distribution lines.
Results of the distribution line modelling. (a) Global visualization of predicted distribution lines (b) Kenya; (c) Bolivia; (d) Azerbaijan.
Overall, gridfinder predicts that 97% of the global population lives within 10 km of an MV line; however, large variations are observed between regions, income levels, and across the human development index (HDI) indicator (Fig. 4a,b). In Sub-Saharan Africa, 6% of the population is further than 10 km from the distribution network, while this is around 1% for all other regions. In contrast, in South Asia most of the population is close to the network, yet 10% of the population does not have access to electricity1. This highlights an important issue in energy access: network extension is a necessary but not sufficient condition for electrification. If households' willingness to pay is too low, living close to the network will not change their electrification status. This is an important topic to examine if the Sustainable Development Goals are to be reached.
Regional electrical infrastructure proximity and investment value, by country and income level. (a) Proportion of population within 10 km of distribution lines by HDI; (b) Population within 10 km in low-access countries compared with electricity access rate; (c) Infrastructure investment per capita by HDI; (d) Infrastructure investment in low-access countries compared with electricity access rate. See Table 1 for data sources with additional data collated for HDI: https://gateway.euro.who.int/en/indicators/hfa_42-0500-undp-human-development-index-hdi.
The investment values (see Fig. 4c,d) are calculated based on the predicted amount of distribution lines, multiplied by a constant cost per kilometer. They show a large discrepancy in resources between different regions and income levels. Geographically spread-out and energy-intensive populations (such as Sweden) have a large amount of infrastructure, while more compact countries can get by on less. Most income and HDI groups are represented below the 50 USD/capita line, indicating that this is not a hard limit to development. These regional differences are likely even bigger, since the gridfinder tool connects locations with a single line when there will frequently be multiple lines, either on the same or different towers, to provide redundancy in the network. Therefore, these investment values may significantly underpredict the divergence, as more developed areas, particularly those at greater risk from natural hazards, could be more likely to have such redundancies built in.
Local access heuristics
In countries with universal access to electricity, accessibility maps mirror population distribution. However, in countries with lower access rates, we turn to night-time lights, urban extents, the output of our gridfinder analysis, and national statistics to quantify the number of people in each grid cell with meaningful electricity access. We apply an iterative algorithm (outlined in Box 2) to produce a gridded surface representing the number of people estimated to have electricity access in each cell. The goal of this algorithm is to match national statistics on urban and rural access rates, and to heuristically apply these rates based on population density, overall brightness, and brightness per capita. The algorithm works by splitting each country into eight groups, with the urban-rural split on one dimension and four quantiles of population density on the other, aiming to capture variations in brightness per capita between and within each of these groups. This process is iterated, modifying the factor connecting brightness and access for each group, until a result satisfactorily close to national statistics is achieved. The approach is limited by being based purely on national statistics (as opposed to sub-nationally disaggregated data), and subsequent research for specific countries or regions could improve on this.
The resulting geospatial database is used to estimate the length of lines in each cell based on population, demand and household size16, according to:
$$length=\frac{demand\sqrt{area\times hh\times pop}}{capacity}$$
where length is the length of low-voltage lines in that cell, demand is the peak per capita demand, area is the cell size, hh is the number of people per household, pop is the number of electrified people in that cell (as estimated in Box 2), and capacity is the power capacity of the low-voltage line. This equation is derived from a consideration of the number of lines needed to serve a given capacity, as well as the physical length needed to reach all buildings in an area17. Parameters were varied by region and calibrated against detailed low-voltage network data. These are then combined to create a single global raster of low-voltage infrastructure comprising 69 million km, which can be used to estimate investment costs and, combined with the access estimate, to calculate connection costs per household. Results are presented in Fig. 5, highlighting the disparities between low- and high-access countries.
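A direct transcription of this equation into code is straightforward; the parameter values in the usage example below are purely illustrative and not the calibrated regional values.

import math

def lv_length(demand, area, hh, pop, capacity):
    # length = demand * sqrt(area * hh * pop) / capacity
    return demand * math.sqrt(area * hh * pop) / capacity

# e.g. 0.2 kW peak demand per capita, a 1 km² cell with 500 electrified
# people, 5 people per household, and 50 kW capacity per LV line:
print(lv_length(0.2, 1.0, 5, 500, 50))   # -> 0.2 (km of LV line in the cell)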
Choropleth of predicted global low-voltage infrastructure per capita. Map shows the level of installed low-voltage lines, disaggregated into four levels.
Box 1. Simplified algorithm for connecting electrification targets. The complete code is available in the code repository (see Code Availability, where the earlier Rohrer algorithm is also referenced). An example showing how this works in practice is also available: https://gridfinder.org/algo
targets: binary array of locations with electrified equal to 1
costs: array of same shape with values [0, 1] defining cost of traversing
dist = array of same shape as targets storing distance to last electrified
queue = heap of locations sorted by distance to grid
start = first location with targets == 1
add start to queue
while length(queue) > 0:
    current = top location from queue
    for each location around current:
        if location is connected:
            set current as parent of location
            follow back from location setting all as grid
        calculate distance from previous
        if location already visited and distance < previous distance:
            change distance for location
        set location as visited
        add location to queue
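The complete implementation lives in the gridfinder repository. Purely as a rough, runnable interpretation of Box 1 (not the authors' production code), the Python sketch below runs a priority-queue, Dijkstra-style search over the cost raster and traces paths back whenever a target is reached. It omits one refinement of the full algorithm: cells already claimed by the network should become free (cost 0) to traverse.

import heapq
import numpy as np

def connect_targets(targets, costs):
    # targets: boolean 2D array of electrification targets
    # costs: float 2D array in [0, 1], cost of traversing each cell
    rows, cols = targets.shape
    dist = np.full((rows, cols), np.inf)
    parent = {}
    network = np.zeros_like(targets, dtype=bool)

    start = tuple(np.argwhere(targets)[0])   # first target seeds the tree
    dist[start] = 0.0
    queue = [(0.0, start)]

    while queue:
        d, (r, c) = heapq.heappop(queue)
        if d > dist[r, c]:
            continue                         # stale queue entry
        if targets[r, c]:
            # Reached a target: follow parents back, marking the path as grid
            node = (r, c)
            while node in parent and not network[node]:
                network[node] = True
                node = parent[node]
            network[node] = True
        for dr in (-1, 0, 1):                # 8-connected neighbourhood
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + costs[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        parent[(nr, nc)] = (r, c)
                        heapq.heappush(queue, (nd, (nr, nc)))
    return network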
Box 2. Algorithm for estimating population with access to electricity from night light data. The complete code is available online in the code repository (see Code Availability)
pop: 2D array of population
urban: same-shape binary array of urban/rural
ntl: same-shape array with night-time lights values
targets: same-shape binary array showing whether electrified
access: access rates disaggregated into total, urban, rural
n: percentile for weights calculation
if access.urban > 90%:
    pop_access = pop × access.urban + pop × access.rural
remove all > 1 km from targets
for each level in urban:
    for each quartile of pop:
        get pop at this level and quartile
        B = brightness per person for each cell
        b = brightness at nth percentile of group
        W = 1 − (b − B)/B
combine all W
pop_access = W × pop
if abs(pop_access − access.total) > 2%:
    adjust n and repeat from the beginning
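As a rough sketch of the weighting step inside Box 2 (not the authors' exact implementation), the function below computes W = 1 − (b − B)/B for one urban/rural-by-density group; clipping W to [0, 1] and the zero-weight handling of unlit or unpopulated cells are assumptions added here for robustness.

import numpy as np

def access_weights(ntl, pop, n=70):
    # B: brightness per person; cells without population get B = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        B = np.where(pop > 0, ntl / pop, 0.0)
        b = np.percentile(B[pop > 0], n)   # group reference brightness
        W = 1.0 - (b - B) / B              # W = 1 - (b - B)/B
    # Assumed clean-up: undefined weights become 0, weights capped at 1
    return np.clip(np.nan_to_num(W, nan=0.0, posinf=1.0, neginf=0.0), 0.0, 1.0)

# pop_access = sum over groups of W * pop; if the implied national total
# differs from official statistics by more than 2 percentage points,
# adjust n and repeat.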
Data Records
All resulting data are hosted online25 at https://doi.org/10.5281/zenodo.3538890 in both vector and raster formats and are freely available under a CC BY 4.0 license. The data are further visualized at https://gridfinder.org. All datasets used in the study are either cited in the Methods section or listed in Table 1.
Technical Validation
MV network validation
To validate the medium-voltage results, we consider 16 different networks from 14 countries, chosen to represent a wide range of electricity network development and accessibility. Datasets were downloaded as open data or under license to ITRC researchers from Enexis (Netherlands); Ergon and Western Power (Australia); Western Power and Energy NorthWest (UK); GeoBolivia (Bolivia); and Westpower (New Zealand). Remaining validation datasets were supplied by the World Bank Group, some of which are openly available (https://datacatalog.worldbank.org). Countries represented in each income group are: Australia, Netherlands, New Zealand, United Kingdom (High Income), Namibia (Higher Middle Income), Bolivia, Kenya, Nigeria, Zambia (Lower Middle Income), Burundi, Ethiopia, Malawi, Tanzania, and Uganda (Low Income).
Model performance was then measured by comparing the presence of observed lines (from the above country data) to those produced by the gridfinder model across a grid of equal-area cells fitted to the electricity distribution extent of each validation set. These grid squares ranged from 500 m up to 100 km (Supplementary Fig. 1 compares predictions at varying resolutions). For each cell, the presence of an observed line from the validation set and the predicted presence of an MV line are computed and given a binary classification. Each cell is then assigned a true-positive (TP), true-negative (TN), false-positive (FP) or false-negative (FN) attribute. A TP reading is given if an MV line is observed in the validation dataset and gridfinder predicts the presence of an MV line; a TN reading is given if no MV line is observed in the validation dataset and none is predicted by gridfinder. An FP reading indicates that gridfinder predicts an MV line that is not observed, and an FN reading indicates the opposite: gridfinder does not predict an MV line that is observed.
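For clarity, this per-cell classification reduces to elementwise logic on two boolean grids; a minimal sketch, assuming the observed and predicted rasters have already been aggregated to the same equal-area grid:

import numpy as np

def confusion_counts(observed, predicted):
    # observed, predicted: boolean grids of MV-line presence per cell
    tp = int(np.sum(observed & predicted))
    tn = int(np.sum(~observed & ~predicted))
    fp = int(np.sum(~observed & predicted))
    fn = int(np.sum(observed & ~predicted))
    return tp, tn, fp, fn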
We do not attempt to validate the electrical validity or accuracy of the results, as this is out of scope of our objective and unlikely to be possible on a global scale with current data sources. Follow-up work could attempt to verify these aspects for individual countries, using location-specific data on topology and structures. This limitation means these data are less suited for detailed electrical modelling.
Performance metrics considered include (i) precision, (ii) recall, (iii) accuracy, and (iv) intersection-over-union (IOU). Precision (Eq. 3) is a measure of how likely an MV line is to be observed given that the model has predicted one is in place. The precision metric penalizes gridfinder for predicting the presence of an MV line within a cell when no line is observed. Maximizing precision ensures gridfinder is not overestimating the presence of lines and thus providing a false sense of accessibility. This is therefore a critical metric to consider.
$${\rm{Precision}}\,{\rm{(}} \% {\rm{)}}={\rm{100}}\frac{{\rm{TP}}}{{\rm{TP}}+{\rm{FP}}}$$
Accuracy (Eq. 4) is simply a measure of the total correct predictions over the total validation set. This metric provides a good overview but can be misleading should there be large areas where it is comparatively easy to predict an absence of an electricity network – such as over extreme topographies (mountain ranges or deserts) or large forested areas, amongst others. In such instances, the overall accuracy metric is heavily influenced by the large proportion of TN.
$${\rm{Accuracy}}\,{\rm{(}} \% {\rm{)}}={\rm{100}}\frac{{\rm{TP}}+{\rm{TN}}}{{\rm{TP}}+{\rm{TN}}+{\rm{FP}}+{\rm{FN}}}$$
The IOU (Eq. 5) is an alternative metric that excludes TN predictions. Adopted from the object detection literature, the IOU here evaluates how well the model predicts the known presence of distribution lines, disregarding any areas where the model and observed data agree there are no MV lines. While 100% is ideal, object detection models commonly treat IOU > 50% as satisfactory, and this should be regarded as a minimum target.
$${\rm{IOU}}\,( \% )=100\frac{{\rm{TP}}}{{\rm{TP}}+{\rm{FN}}+{\rm{FP}}}$$
where all of the metrics in Eqs. 3–5 range between 0 and 100%, with 100% being the most desirable.
Supplementary Fig. 1 tracks these metrics for a range of grid sizes. Under the above assumption of IOU > 50% being a minimum target, a grid size of 15 km is suggested as most appropriate for viewing the dataset (Supplementary Table 1). At this scale, the validation dataset comprises 34800 individual cells, with gridfinder predicting the following readings: TP: 8832 (25% of dataset), TN: 17270 (50%), FN: 5938 (17%), and FP: 2760 (8%). These results are shown spatially in Fig. 6.
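The metrics follow directly from these counts; using the reported 15 km figures as a quick check (recall, listed above but not given an equation here, would be TP/(TP + FN)):

def precision(tp, fp):
    return 100.0 * tp / (tp + fp)

def accuracy(tp, tn, fp, fn):
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)

def iou(tp, fp, fn):
    return 100.0 * tp / (tp + fn + fp)

# Reported counts at 15 km: TP = 8832, TN = 17270, FN = 5938, FP = 2760
print(precision(8832, 2760))              # ≈ 76.2%
print(accuracy(8832, 17270, 2760, 5938))  # ≈ 75.0%
print(iou(8832, 2760, 5938))              # ≈ 50.4%, just above the target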
Gridfinder validation at 15 km grid resolution with confusion matrix categories across continents. Each panel represents the countries across different regions namely: (a) Africa; (b) Europe; (c) South America; (d) Australasia.
Overall, we observe that validation performance does not necessarily correlate directly with country income group. While all networks show validation precision of >50% (Supplementary Table 1, Supplementary Figs. 2 and 4), lower accuracies are observed for Australia (both Queensland and Western Australia). We attribute this to relatively low night light in rural areas, such that gridfinder does not have as many target electrification sites; however, given that accessibility is in fact 100%, one would expect more MV lines to be present across the distribution extent. While this result may be concerning for predicting accessibility in geographies comparable to Australia, the already-universal access and highly detailed geospatial datasets (often openly available) mean that gridfinder is not necessarily required in such areas, with much more focus and application expected in the regions targeting SDG7.
In the regions targeting SDG7, where gridfinder is of the greatest initial interest, the main inconsistencies in these results stem from connected villages that fail to show up in the filtered data because they do not produce enough light above background noise. In many cases, these villages will be technically connected but very few households will have electricity access. Incorrect routing assumptions also produce mistakes, where distribution lines do not follow a road or deviate to avoid a topographical feature, which is not currently accounted for. Finally, as the algorithm produces a minimum spanning tree, there are no predicted redundant lines, which are frequently built for reliability reasons.
This type of experimental work presents a difficult conundrum: we are developing new data with no precedent, so there is a limit to the validation that is possible, and therefore the ability to iterate and improve results is severely limited. For example, the results could be improved by using daily, rather than monthly, satellite imagery, in order to finely control the filtering process. However, optimizing this process will require highly detailed ground-truth data, which are only available in limited areas. Further, strong geographic variations limit the ability to generalize findings based on these data. These findings should be viewed as a step forwards, and an enabler of new analysis and additional ground-truthing; they should not be relied upon for decision-making without independent validation.
Local access validation
Overall, our results compare favorably with recent work in the literature performed at regional scales11,26. However, at the global scale, further calibration will need to be sought over time as more datasets become available. In urban areas, we find an over-prediction of access rates, whereas the algorithm under-predicts in rural areas. The over-prediction is primarily caused by bright industry and streetlights, which do not necessarily imply household connections. Conversely, rural villages may have access that provides valuable services, even if weak or intermittent, but may not produce enough light to rise above the noise. In most cases, it was possible to calibrate the results to closely match official World Bank statistics (see Supplementary Fig. 5). However, in several cases it was not possible, and the model results deviate significantly from the World Bank statistics. These cases highlighted limitations of our modelling approach, but also surfaced interesting realities about the distribution of electricity access between urban and rural areas. The clearest failures were Pakistan and Libya, both with modelled access rates close to 100% but official rates around 70%. In Pakistan, the 100% official urban access rate is automatically assigned to urban areas. In this case, the limited quality of the population and urban extents, along with differing definitions of urban, means that this alone over-predicts the actual electricity access rate. In Libya, falling access rates over the last 20 years create a difficult situation for an automated algorithm to handle. On the other hand, the model sometimes predicts access rates lower than the official statistics. This is mostly the case for countries where the reliability of the grid is low and/or where off-grid access is high. Indeed, the World Bank statistics capture both on-grid and off-grid access under the same access rate, while our model computes on-grid access only. For example, in Myanmar and Ethiopia, where the World Bank access rates are 70% and 44%, respectively, the model predicts lower rates of 50% and 35%, respectively. These are much closer to the official on-grid access rates of 40% and 33%, respectively. Unfortunately, this comparison is only possible for the few countries for which the World Bank statistics were established using the MTF survey27,28.
Datasets are available in standard formats (GeoTIFF for raster data and GeoPackage for vector data). All data can be viewed and analyzed without any further edits, as shown on gridfinder.org, and are available for download25 at https://doi.org/10.5281/zenodo.3538890. The HV and MV data are shared as a single vector data file in the standard GeoPackage format and in the WGS84 coordinate system (EPSG:4326). The LV data are shared as raster data in the standard GeoTIFF format in the World Mollweide projected coordinate system (EPSG:54009), while the electrified targets data are also in raster GeoTIFF format but in the WGS84 coordinate system (EPSG:4326). Access, investment, and total line length results are aggregated to the country level in Supplementary Table 2.
As raised above, the dataset has the potential to be used in a wide range of applications. It will be a valuable starting point for electrification planners, who have called for further application of this methodology11,12. Similarly, others assessing social inequalities14 and estimating exposure to natural hazards15 will be able to apply such datasets. With the code easily accessible via GitHub repositories (as given in the Code Availability section), users can apply their own parameters that could better represent their region of interest. Similarly, as more NTL data are published at higher resolutions and continuous additions are made to OpenStreetMap, these datasets can be easily updated with little delay.
Code availability
All scripts used are available at https://github.com/carderne/predictive-mapping-global-power (scripts and routines for preparing and finalizing data), https://github.com/carderne/gridfinder (primary code for preparing night-time lights and creating medium-voltage predictions), and https://github.com/carderne/access-estimator (access estimations and low-voltage predictions). The output data are available at https://doi.org/10.5281/zenodo.3538890. See also the original Facebook algorithm here: https://github.com/facebookresearch/many-to-many-dijkstra.
IEA, IRENA, UNSD, WB & WHO. Tracking SDG 7: The Energy Progress Report 2019. (The World Bank, 2019).
United Nations. The Sustainable Development Goals Report 2018. (United Nations, 2018).
Adair-Rohani, H. et al. Limited electricity access in health facilities of sub-Saharan Africa: a systematic review of data on electricity access, sources, and reliability. Global Health: Science and Practice 1, 249–261 (2013).
McCollum, D. L. et al. Connecting the sustainable development goals by their energy inter-linkages. Environ. Res. Lett. 13, 033006 (2018).
Nerini, F. et al. Mapping synergies and trade-offs between energy and the Sustainable Development Goals. Nat. Energy 3, 10–15 (2018).
Gurara, D., Klyuev, V., Mwase, N. & Presbitero, A. Trends and Challenges in Infrastructure Investment in Developing Countries. International Development Policy | Revue internationale de politique de développement 10 (2018).
Haklay, M. & Weber, P. Openstreetmap: User-generated street maps. IEEE Pervasive Comput. 7, 12–18 (2008).
Byers, L. et al. A Global Database of Power Plants: June 2019 Technical Note. (World Resources Institute, 2019).
Rivera, J., Leimhofer, J. & Jacobsen, H.-A. OpenGridMap: towards automatic power grid simulation model generation from crowdsourced data. Computer Science-Research and Development 32, 13–23 (2017).
Dornan, M. Access to electricity in Small Island Developing States of the Pacific: Issues and challenges. Renew. Sust. Energy Rev. 31, 726–735 (2014).
Falchetta, G., Pachauri, S., Parkinson, S. & Byers, E. A high-resolution gridded dataset to assess electrification in sub-Saharan Africa. Sci. Data 6, 110 (2019).
Korkovelos, A., Khavari, B., Sahlberg, A., Howells, M. & Arderne, C. The role of open access data in geospatial electrification planning and the achievement of SDG7: an OnSSET-based case study for Malawi. Energies 12, 1395 (2019).
Arderne, C. carderne/gridfinder. Zenodo, https://doi.org/10.5281/zenodo.3266988 (2019).
Weiss, D. et al. A global map of travel time to cities to assess inequalities in accessibility in 2015. Nature 553, 333 (2018).
Koks, E. et al. A global multi-hazard risk analysis of road and railway infrastructure assets. Nat. Commun. 10, 2677 (2019).
Mentis, D. et al. Lighting the World: the first application of an open source, spatial electrification tool (OnSSET) on Sub-Saharan Africa. Environ. Res. Lett. 12, 085003 (2017).
Medjroubi, W., Müller, U. P., Scharf, M., Matke, C. & Kleinhans, D. Open data in power grid modelling: new approaches towards transparent grid models. Energy Reports 3, 14–21 (2017).
Development Seed. Machine Learning for High Resolution High Voltage Grid Mapping: Pilot project for Nigeria, Zambia, Pakistan. (World Bank, 2018).
NOAA & NCEI Earth Observations Group. Version 1 VIIRS Day/Night Band Nighttime Lights. (NOAA, 2017).
Doll, C. & Pachauri, S. Estimating rural populations without access to electricity in developing countries through night-time light satellite imagery. Energy Policy 38, 5661–5670 (2010).
Min, B. & Gaba, K. Tracking electrification in Vietnam using nighttime lights. Remote Sens. 6, 9511–9529 (2014).
Min, B., Gaba, K. M., Sarr, O. F. & Agalassou, A. Detection of rural electrification in Africa using DMSP-OLS night lights imagery. Int. J. Remote Sens. 34, 8118–8141 (2013).
Dijkstra, E. W. A note on two problems in connexion with graphs. Numerische mathematik 1, 269–271 (1959).
Prim, R. C. Shortest connection networks and some generalizations. The Bell System Technical Journal 36, 1389–1401 (1957).
Arderne, C., Nicolas, C., Zorn, C. & Koks, E. Data from: Predictive mapping of the global power system using open data. Zenodo, https://doi.org/10.5281/zenodo.3538890 (2019).
Falchetta, G., Gernaat, D., Hunt, J. & Sterl, S. Hydropower dependency and climate change in sub-Saharan Africa: A nexus framework and evidence-based review. J. Clean. Prod. 231, 1399–1417 (2019).
Koo, B., Yoo, H., Keller, S., Rysankova, D. & Portale, E. Myanmar Beyond Connections: Energy Access Diagnostic Report Based on the Multi-Tier Framework. (World Bank, 2019).
Padam, G. et al. Ethiopia Beyond Connections: Energy Access Diagnostic Report Based on the Multi-Tier Framework. (World Bank, 2018).
We thank Brandon Rohrer, Dimitry Gershenson and Anna Lerner for their work on the methodology (see https://code.fb.com/connectivity/electrical-grid-mapping); Brian Min for discussions in analyzing VIIRS imagery; Benjamin Stewart, Albertine Potter van Loon, Nicolina Lindblad, Hicham Latnai, and Tom Russell for their valuable feedback. C.Z. and E.K. acknowledge support from the UK Engineering and Physical Science Research Council under grant EP/N017064/1: MISTRAL: Multi-scale InfraSTRucture systems AnaLytics.
World Bank Group, Washington, D.C., USA
C. Arderne & C. Nicolas
Environmental Change Institute, University of Oxford, Oxford, UK
C. Zorn & E. E. Koks
Department of Civil and Environmental Engineering, University of Auckland, Auckland, New Zealand
C. Zorn
Institute for Environmental Studies, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
E. E. Koks
C. Arderne
C. Nicolas
C.J.A. analyzed data, developed gridfinder, produced results, wrote manuscript. C.Z. performed sensitivity analysis, contributed to code, wrote manuscript. C.N. developed access rate/low-voltage methodology, analyzed results, wrote manuscript. E.K. overall guidance, check results, wrote manuscript.
Correspondence to C. Arderne.
The Creative Commons Public Domain Dedication waiver http://creativecommons.org/publicdomain/zero/1.0/ applies to the metadata files associated with this article.
Arderne, C., Zorn, C., Nicolas, C. et al. Predictive mapping of the global power system using open data. Sci Data 7, 19 (2020). https://doi.org/10.1038/s41597-019-0347-4
Received: 30 August 2019
The impact of contextualization on immersion in healthcare simulation
Henrik Engström1,
Magnus Andersson Hagiwara2,
Per Backlund1,
Mikael Lebram1,
Lars Lundberg2,3,
Mikael Johannesson1,
Anders Sterner2 &
Hanna Maurin Söderholm4
Advances in Simulation volume 1, Article number: 8 (2016)
The aim of this paper is to explore how contextualization of healthcare simulation scenarios impacts immersion, by using a novel objective instrument, the Immersion Score Rating Instrument. This instrument consists of 10 triggers that indicate reduced or enhanced immersion among participants in a simulation scenario. Triggers refer to events such as jumps in time or space (a sign of reduced immersion) and natural interaction with the manikin (a sign of enhanced immersion) and can be used to calculate an immersion score.
An experiment using a randomized controlled crossover design was conducted to compare immersion between two simulation training conditions for prehospital care: one basic and one contextualized. The Immersion Score Rating Instrument was used to compare the total immersion score for the whole scenario, the immersion score for individual mission phases, and to analyze differences in trigger occurrences. A paired t test was used to test for significance.
The comparison shows that the overall immersion score for the simulation was higher in the contextualized condition. The average immersion score was 2.17 (sd = 1.67) in the contextualized condition and −0.77 (sd = 2.01) in the basic condition (p < .001). The immersion score was significantly higher in the contextualized condition in five out of six mission phases. Events that might be disruptive for the simulation participants' immersion, such as interventions of the instructor and illogical jumps in time or space, are present to a higher degree in the basic scenario condition; while events that signal enhanced immersion, such as natural interaction with the manikin, are more frequently observed in the contextualized condition.
The results suggest that contextualization of simulation training with respect to increased equipment and environmental fidelity as well as functional task alignment might affect immersion positively and thus contribute to an improved training experience.
This work draws on a case study collaboration bringing together expertise from different simulation areas (medical simulation and serious games) with the overall focus to improve simulator training in prehospital emergency care. The prehospital context, i.e., all activities taking place from an initial alarm call until a patient is delivered at the hospital emergency unit, is a complex process [1] that includes many different dimensions and challenges: e.g., communication and teamwork skills, transport and driving, medical skills, and decision-making. Today, current training practice (e.g., in the regional ambulance organizations that participated in our project) is that different aspects are trained in isolation, e.g., medical skills using patient manikins, driving using driving simulators, and teamwork in teamwork sessions. As discussed by Rice [2], many types of skills, e.g., cognitive, social, and non-technical skills are neither immediately challenged nor synthesized in traditional manikin-based simulation, unless time and effort is put into the environment in which the manikin is placed. In cases when the learning goal is to train complex healthcare processes or team collaboration, better learning could potentially be created through increasing both breadth and detail in the simulated scenario. In the context of this research, breadth refers to the number of activities and phases, i.e., the overall process from dispatch to ER handover. Detail refers to increasing the realism and enriching each phase of that process so that it better mirrors the whole array of activities in terms of interactions, skills, tools, and information (audio, tactile, interactive, communicative, technical, etc.) that prehospital nurses carry out and use during an ambulance mission. The idea of having increased richness in terms of covering the whole prehospital chain is in line with the 1990s refinement of aviation simulators when context was taken into account by using full missions. According to, e.g., Rudolph et al. [3] and Dieckmann et al. [4], this was one of the factors that led to a dramatically improved aviation safety. Similarly, we believe that taking mission context into account, e.g., by including the driving route activity from the ambulance station and back and recreating the interior characteristics of a patient's home, as well as present changes in the emotional and physical state of the patient, is likely to increase the physical, conceptual, and emotional realism [3] of the simulation.
In this work, we assess a highly contextualized mixed-reality approach to simulation design. This approach strives to increase immersion through a combination of role-play, technical props, and contextualization of work tasks. In healthcare education, role-play has been used for over 40 years varying from dialogue between students portraying a nurse and a patient to more complex situations [5]. In simulation, role-play can be used as a way to integrate the communication processes that normally are present in real care situations [6]. When designing our simulation approach, we strived to create environments, activities, and scenarios that included natural role-play, e.g., interaction between the trainees (prehospital nurses) and the patient, with bystanders or family members, with attending physicians through phone and handover, and within-team collaboration when prehospital nurses are working together. In addition, a combination of physical and digital props was used to create a more engaging environment.
The participants' involvement in a simulated scenario can be characterized by a number of different terms (Andersson Hagiwara M, Backlund P, Maurin Söderholm H, Lundberg L, Lebram M, Engström H: Measuring participants' immersion in healthcare simulation: the development of an instrument, Forthcoming), such as flow, presence, cognitive absorption, buy-in, suspension of disbelief, and the as-if concept. This paper studies participants' involvement in simulator training in terms of immersion [7, 8]. In healthcare simulation, the term immersion is often used in relation to virtual reality (VR) and seen as being determined by technical components. Here, we use a definition more commonly adopted in the game research community, pertaining to immersion as a subjective psychological experience [7–11]: "the subjective impression that one is participating in a comprehensive, realistic experience" [12] (p.66). This definition emphasizes immersion as lived experience rather than a property of a technical environment, and thus helps us to conceptualize what might bring prehospital personnel engaged in a training scenario into and out of an immersive state, and how. One important property of immersion, which differentiates it from, e.g., flow, is that it is a continuum (from engagement to total immersion). This makes it meaningful to quantify the level of immersion. The primary approach to measuring immersion today is through questionnaires, e.g., Jennett et al. [8], capturing participants' experience after taking part in an activity. To our knowledge, no measure yet exists that can be used to observe immersion in a non-intrusive, objective way. In another paper, we present the development and validation process of the Immersion Score Rating Instrument (ISRI), designed to observe and measure immersion in mixed-reality healthcare simulation. This instrument consists of 10 triggers that indicate reduced or enhanced immersion among participants in a simulation scenario. Triggers refer to events such as jumps in time or space (a sign of reduced immersion) and natural interaction with the manikin (a sign of enhanced immersion) and can be used to calculate an immersion score.
In the study presented in this paper, we have manipulated breadth and detail in two scenarios to explore how this might affect participants' immersion (see Table 1 for an overview). Thus, the research question addressed in this paper is: How is participants' immersion during a simulated scenario affected by contextualization? In order to investigate this, we apply the newly developed ISRI, which allows us to (1) observe and measure immersion at a general level, (2) identify variations during different phases of a healthcare scenario, and (3) analyze individual triggers that might reduce or enhance participants' immersion.
Table 1 Experiment condition design per ambulance mission phase
Fidelity in healthcare simulation
During a simulation, immersion is affected by several different factors (e.g., physical, structural, communicative, personal, or contextual). In healthcare simulation, these are commonly referred to as different categories of fidelity [13] and often discussed in relation to the fidelity level of a manikin. Archer et al. [14] describe fidelity by using three dimensions: equipment fidelity, which concerns how closely the simulator resembles the real system it refers to. The second dimension, environmental fidelity, refers to the context in which the simulator is placed. Finally, the third dimension is called psychological fidelity and refers to the degree to which the trainee perceives or accepts the simulation to be "real". According to Tun et al. [15], contemporary definitions of fidelity typically refer to the level of realism of a simulation. Furthermore, their review of the fidelity concept reveals that definitions are not clear and may refer to either the physical (engineering) aspects of a simulation, i.e., the extent to which a simulation reflects the physical properties of the real-world concept, or its subjective dimension, called psychological or perceptual fidelity. As fidelity is not a clearly defined concept, it has recently been criticized for being too imprecise [16]. Hamstra et al. even propose to abandon the term fidelity and replace it with the terms physical resemblance and functional task alignment [16]. The benefit of doing so is that it allows us to focus more on the functional alignment with the learning task rather that the current overemphasis of physical resemblance. Accordingly, we classify the first two dimensions proposed by Archer et al. [14] as dimensions of the physical resemblance whereas the third dimension concerns the buy-in to the simulation, i.e., the degree to which the trainee accepts the situation as believable and suitable for its purpose. Functional task alignment is another matter, and Hamstra et al. [16] emphasize the importance of close alignment between the clinical task and the simulation task. Functional task alignment can be strengthened by an appropriate correspondence between the simulator and the applied context. Similar staffing and spatial arrangements can help to achieve this. In the case of prehospital training, we argue that these features may be present in an enriched scenario context for the patient simulator. This does not mean that the physical resemblance of the patient simulator is unimportant, only that it should be considered with respect to the training goal, in our case prehospital care.
As can be seen from the above discussion, fidelity is not a clear concept and neither is its relation to learning. According to Hamstra et al. [16], there is a positive relation between cognitive engagement and learning outcome. However, physical resemblance is only one parameter when enhancing learner engagement. Rettedal [17] discusses participants' perceptions of realism regarding simulated scenarios and points out the suspension of disbelief as a central concept for successful simulation. Horcik et al. [18] refer to this and claim that involvement in simulation requires that participants suspend disbelief. Based on studies of various immersive interfaces, Huiberts [11] asserts that immersion, in a digital environment, can enhance education by allowing multiple perspectives, situated learning, and transfer. The results from a relatively recent study of the learning outcomes of science educational games by Cheng et al. [19] indicate that immersion leads to a higher gaming performance, which in turn plays a role in learning performance.
To summarize, we see that fidelity is a complex phenomenon which lacks a clear definition. We acknowledge that physical resemblance and functional task alignment are important factors when discussing fidelity and the effectiveness of simulation training. Physical resemblance does not only relate to the manikin but also includes equipment as well as environmental fidelity. We summarize our view of fidelity in Fig. 1. In our case, this means that we not only utilize a patient simulator manikin and physical props to create some sense of realism, but also consider functional task alignment by including tasks from the whole prehospital process, to be carried out according to standard procedure.
The dimensions of fidelity
Study setting
The experiment was conducted in November 2014 as a collaborative multidisciplinary effort between serious games and prehospital care researchers from two universities in Sweden and training officers from a regional ambulance center.
Twelve teams (24 professionally active ambulance nurses) from four different healthcare organizations in the surrounding region participated in the study. All participants were working full time as ambulance nurses and had earlier experience from simulation training.
The study was approved by the research ethics adviser at the University of Borås, Sweden and conducted in accordance with the ethical recommendations of the Swedish Research Council [20]. During an introductory session to the experiment, the principal investigator informed study participants about the participation in the study, their rights, and our responsibilities as researchers. Informed oral and written consent was obtained from all participants.
The study had a randomized controlled crossover design comparing immersion between two types of simulation training conditions: one basic, mirroring how training currently is done in the regional ambulance organizations participating in the study, and one contextualized, where we strived to capture more of the complexity of the prehospital work process. Table 1 illustrates the design of the two experiment conditions per central phases of an ambulance mission. The phases were determined by factors such as change of physical location (transport) or different segments of the on-scene assessment and treatment where the highest number of, or most important, decisions are made [21]. This resulted in the six phases presented in Table 1. In the contextualized simulation design, we utilized a mixed-reality approach, recreating parts of the environment through physical props, e.g., using a real ambulance as interface to the simulated driving. The same ambulance was also used for actual loading and for patient care during transport to hospital. In both conditions, a Laerdal SimMan 3G simulator was used. The manikin was operated via Wi-Fi where the operator was playing the role of the patient by communicating via the manikin's integrated speaker system.
Randomization and control
In each condition, two different medical scenarios were used. Upon arrival at our facility, participants were randomly assigned to which condition and medical scenario to start with. Hence, the scenarios were organized in blocks in order to vary: (1) the type of medical scenario ("elderly man with respiratory distress" or "drug addict with respiratory distress") in each of the conditions (contextualized/basic) and (2) the order in which participants did the scenarios (Fig. 2).
Flowchart, randomized controlled crossover design
Experiment protocol
When arriving at the ambulance station, participants were given an introduction to the study and its aims, their participation including reading and signing consent forms, and responding to a background information questionnaire. Next, they were subjected to the two different simulation conditions (contextualized/basic). Before each condition, participants were introduced to the simulation by the experiment leader and given time to familiarize themselves with the manikin and any equipment provided. During each condition, participants were working through an ambulance mission as described in Table 1. Each block was concluded by a debriefing session with the attending emergency physician who participated in the handover phase. In all, each team spent 5 h (including lunch and refreshment breaks) at the ambulance facility where the study was carried out.
All simulations were recorded in their entirety by a number of video recorders and one handheld audio recorder. To analyze the video-recorded sessions, we utilized a recently developed instrument, ISRI. This instrument consists of 10 triggers (T1–T10) that are used to determine participants' immersion during the simulation. Here, a trigger refers to an event in the simulation that is considered a sign of reduced or enhanced participant immersion. As illustrated in Table 2, triggers T1–T7 indicate issues that reduce immersion, i.e., breaks in immersion, while triggers T8–T10 indicate enhanced immersion. Details of the ISRI development process, including the complete trigger inventory, are reported in (Andersson Hagiwara M, Backlund P, Maurin Söderholm H, Lundberg L, Lebram M, Engström H: Measuring participants' immersion in healthcare simulation: the development of an instrument, Forthcoming).
Table 2 Trigger definitions and directions (i.e., if they indicate reduced or enhanced immersion)
Trigger and timestamp assignment
When applying ISRI to the recorded sessions, five researchers first took part in interrater training and then analyzed two to three teams each (i.e., in total four to six sessions per rater). Here, a rater watched the recording of a session in a computerized system where video and trigger assignment input was integrated (Fig. 3). When a situation arose that indicated reduced or enhanced immersion, the rater stopped the video and selected an appropriate trigger, optionally including a subheading. For each assigned trigger, the rater also indicated the trigger strength from 1 (weak indication) to 3 (strong indication). A two-way mixed, consistency, average-measures intraclass correlation (ICC) of overall interrater reliability showed excellent results (ICC = 0.92). Next, all video recordings were manually timestamped in time intervals corresponding to the phases defined in Table 1.
Video analysis interface (showing video from phase 1, ambulance en route)
ISRI score calculation
The triggers assigned during a time interval are used to compute a summarizing ISRI score. During a time interval of ∆ minutes, each trigger t (1 ≤ t ≤ 10) occurs $n_t$ times. The total number of trigger occurrences during the interval is hence $\sum_{t=1}^{10} n_t$. Each occurrence i (1 ≤ i ≤ $n_t$) of a trigger t has been assigned a strength $\omega_{ti}$ (1 ≤ $\omega_{ti}$ ≤ 3). The total immersion score, s, for the interval is computed as
$$ s=\frac{\sum_{t=8}^{10}\sum_{i=1}^{n_t}\omega_{ti}-\sum_{t=1}^{7}\sum_{i=1}^{n_t}\omega_{ti}}{\Delta} $$
The immersion score is hence the sum of all strengths assigned to positive triggers (T8–T10) minus the sum of all strengths assigned to negative triggers (T1–T7), divided by the length of the interval in minutes. The scores reported in this paper are computed for the whole simulation and for its individual phases, whose lengths vary depending on teams' performances and the nature of the condition. By dividing the trigger strength by time, the immersion score can be used to compare sessions and phases with different durations; in this way, the ISRI score is normalized to provide an intensity value. An alternative would be to use only the sum of strengths, which would result in a metric aggregating the trend over time. However, a long-running scenario with a positive immersion trend would then get a much higher score than a similar short scenario, making it difficult to compare scores across conditions of different durations, as is possible, for example, with a post-questionnaire instrument.
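A minimal sketch of this computation, with trigger numbering and strengths as defined above (the data layout, a dict of strength lists per trigger, is an assumption for illustration):

def isri_score(trigger_strengths, minutes):
    # trigger_strengths: dict mapping trigger number (1-10) to the list of
    # strengths (1-3) assigned in the interval; minutes: interval length
    positive = sum(sum(trigger_strengths.get(t, [])) for t in range(8, 11))
    negative = sum(sum(trigger_strengths.get(t, [])) for t in range(1, 8))
    return (positive - negative) / minutes

# e.g. two strong T9 events and one weak T3 event in a 5-minute phase:
print(isri_score({9: [3, 3], 3: [1]}, 5))   # (6 - 1) / 5 = 1.0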
Comparison between conditions
To determine differences in ISRI score between the two conditions, a paired t test was used.
Order effects
Potential order effects were explored by calculating the difference in ISRI score between the contextualized and basic condition. Independent t tests on the differences were then conducted with order and type of scenario as independent variables.
All statistical analyses were performed using the statistical software program SPSS 21.0 (SPSS Inc., Chicago, IL).
All teams worked through the entire simulation in both conditions. On average, the contextualized condition took 34 min (sd = 3.5), ranging from 28 to 39 min, and the basic condition took on average 15 min (sd = 3.4), ranging from 10 to 20 min. This is a reasonable difference since the contextualized condition included more and longer steps, e.g., actual driving mirroring realistic transport times.
Overall immersion differences between conditions and phases
For all groups, the overall immersion score for the simulation was higher in the contextualized condition. The average immersion score was 2.17 (sd = 1.67) in the contextualized condition and −0.77 (sd = 2.01) in the basic condition (Fig. 4). The difference is significant at p < .001, using a paired t test.
The ISRI score (contextualized vs. basic) for the whole scenario (n = 12). The difference is significant at p < .001
In all, the overall immersion trigger analysis clearly shows higher immersion in the contextualized condition than in the basic one. The assignment of scenarios ("elderly man…" or "drug addict…") to conditions did not have any effect (p = .911) on the difference in ISRI score between conditions. The average difference was 2.89 for teams with the "elderly man…" scenario in the contextualized condition and 2.99 for teams with the "drug addict…" scenario in the contextualized condition. There is a tendency for the order of conditions to have an effect on the ISRI score: the average difference was 3.44 for teams starting with the basic condition and 2.44 for teams starting with the contextualized condition. Although this difference is not significant (p = .246), it is still notable and should be considered in future studies.
These results do not, however, tell us anything about when during the simulation the participants' immersion is higher or lower. Therefore, we have explored in which of the mission phases (as defined in Table 1) differences in immersion were located. Immersion score within each condition was calculated per each of the phases. Figure 5 illustrates how teams' immersion varies during the different phases of the simulation.
The ISRI score (contextualized vs. basic) for each phase of the scenario (n = 12)
Here, we can clearly see that the contextualized condition has less variance during the process, and that immersion increases during phase 2 (on scene assessment) and 4 (on scene treatment), and decreases in phases 3 (initial patient assessment), 5 (scene departure and transport), and 6 (patient handover to emergency department). In the basic condition, immersion is more fluctuating, with its relative highest points in phase 3 (initial patient assessment) and lowest in phases 2 (on scene assessment) and 5 (scene departure and transport). As Fig. 5 illustrates, phase 3 is the only positive peak in the basic condition.
The fact that the differences between the conditions are largest in phases 2 and 5 is not surprising. In these phases, the participants in the basic condition had to pretend that they were loading and transporting, while the contextualized condition allowed actual loading and real-time driving in a real ambulance vehicle (integrated with a driving simulator). Immersion differences are smallest in phases 3 and 6, the two phases that were most similar in terms of simulation design and equipment (see Table 1). Hence, in order to understand what the immersion differences consist of, and what fidelity dimensions, activities, or props in our simulation design that might affect these, we need to investigate the trigger distribution per each phase in more detail.
Understanding immersion differences for different activities and mission phases
The immersion score is computed from the individual trigger groups, which can contribute negatively or positively to the total score (see Eq. 1). To explore the underlying factors, the scores shown in Fig. 5 have been decomposed into individual trigger group components, shown in Fig. 6. The sum of all trigger groups for a phase in Fig. 6 corresponds with the mean value of the corresponding phase for that condition (shown in Fig. 5). For example, phase 6 in the basic condition has a mix of positive and negative triggers which balance out and result in an immersion score close to zero, as can be seen in Fig. 5. Hence, Fig. 6 helps in visualizing the distribution of the different triggers beyond the computed mean value.
The ISRI score per phase split into the individual triggers. The sum of the 10 trigger values constitutes the immersion score shown in Fig. 5. Each trigger value is the mean of all teams (n = 12)
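The following minimal sketch illustrates this decomposition under stated assumptions: Eq. 1 is not reproduced here, so we simply assume that each trigger group contributes its signed occurrence count, normalized by phase duration; the counts are hypothetical.

```python
NEGATIVE = {"T1", "T2", "T3", "T4", "T5", "T6", "T7"}  # immersion-reducing triggers
POSITIVE = {"T8", "T9", "T10"}                         # immersion-enhancing triggers

def phase_score(trigger_counts, duration_min):
    """Signed, time-normalized sum of trigger occurrences for one phase."""
    total = 0.0
    for trigger, count in trigger_counts.items():
        sign = -1.0 if trigger in NEGATIVE else 1.0
        total += sign * count
    return total / duration_min

# Example resembling phase 6 in the basic condition: positive and negative
# triggers roughly balance out, giving a phase score close to zero.
print(phase_score({"T1": 2, "T3": 1, "T9": 2, "T10": 1}, duration_min=4.0))  # 0.0
```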
Differences in trigger occurrences
The basic condition is dominated by two negative triggers: destructive interactions (T1—typically interventions by the instructor) and jumps in time and/or space (T3). These triggers dictate the score during phase 2 (on-scene assessment) and phase 5 (scene departure and transport) in the basic condition, whereas in the contextualized condition positive triggers dominate these phases. In fact, jumps in time and/or space (T3) are almost absent in the contextualized condition, and destructive interactions (T1) are present at a relatively constant, low level throughout the scenario except in the last phase. For the whole simulation, both these differences are significant at p < .001, using a paired t test.
Changes in trigger occurrences
As can be seen in Fig. 5, the basic condition exhibits a shift from a positive overall immersion score during phase 3 (initial patient assessment) to a negative one during phase 4 (on-scene treatment). As illustrated in Fig. 6, this shift consists of increased destructive interactions (T1), accompanied by an almost proportional decrease in natural responses to stimuli (T8), natural interaction with the simulator (T9), and natural interaction with participants (T10). A natural response in the contextualized condition is, e.g., when a participant comforts the patient manikin by patting its arm during transport to hospital. An unnatural response could be the participant driving the ambulance letting go of the steering wheel to do something else while en route (contextualized), or both participants engaging in patient treatment even though they have verbalized that they are driving to the ER (basic). There is also a clear increase in pretended operations (T4), such as a participant simply putting down a heap of ECG cables on the patient's chest while verbalizing "I'm taking an ECG".
In contrast, the difference between phase 3 and phase 4 is much less apparent in the contextualized condition, where the total immersion score instead increases slightly (Fig. 5). Here, negative triggers (T1–T7) are almost unchanged between phases; instead, there is a change in the distribution of positive triggers. Natural interaction with the simulator (T9) increases, while there are fewer natural responses to stimuli in the simulation (T8).
Unnatural execution of operations and technological distractions
There is an almost complete absence of pretended operations (T4) in the contextualized condition, whereas they appear in most phases of the basic condition. The presence of this difference in phases 3 and 4 is somewhat surprising, as the same manikin and the same technical and medical equipment are used in both conditions, hence presenting similar conditions and tools for patient care activities. These differences are significant at p < .05, using a paired t test. Interestingly, technological distractions (T7) is the least frequent trigger; it is barely present in either condition or any phase.
Differences in similarly designed phases
The three phases initial patient assessment, on-scene treatment, and patient handover to the ER (phases 3, 4, and 6) are the most similar between the conditions in terms of how the simulation was designed. As shown in Fig. 5, this similarity is reflected in small differences in total immersion. We now turn to what this looks like in terms of individual triggers. The mix of triggers during phase 3 (initial patient assessment) is similar in the contextualized and basic conditions. The dominating trigger in both conditions is natural interaction with the simulator (T9), which is necessary here to determine a diagnosis. After this phase, T9 gradually decreases in the basic condition. Instances of unnatural interaction with the manikin or participants (T5) are, however, clearly more frequent in the basic condition. For the whole simulation, the difference in T5 occurrences is significant at p < .05, using a paired t test.
Overall, our analysis shows higher immersion in the contextualized condition than in the basic one. The overall immersion score is higher, and participants' immersion does not fluctuate as much across the different phases as it does during the basic scenario. Triggers pertaining to events that might disrupt participants' immersion, such as interventions by the instructor and illogical jumps in time/space, are less frequent in the contextualized condition, while triggers referring to natural interaction with the manikin or other participants are more frequent. Furthermore, operations, tasks, and interactions are to a higher extent conducted in a natural, more realistic way in the contextualized condition. This suggests that contextualization might support a better workflow during a simulation scenario, cause fewer interruptions, reduce uncertainty about what to do next, and promote natural execution of tasks as well as natural interaction with the manikin or other participants. Although overall immersion is higher in the contextualized condition, both conditions show a mixture of triggers that enhance or reduce immersion. This resembles the results reported in [18], where participants' concerns shifted between issues related to the targeted work and issues related to the simulation. They state that "a unique and stable immersion was never observed" [18] (p. 98).
Contextualization does, however, involve a larger number of technological components to increase environmental fidelity and functional task alignment, e.g., identical functional replicas of the IT and telecommunication equipment normally used by prehospital nurses. Even so, there were minimal occurrences of technological distractions (T7) in both conditions. Hence, it appears that the additional technical components did not introduce additional distractions.
Phases 3, 4, and 6 were the ones most similar between conditions in terms of fidelity. Although the differences in total immersion between the conditions are smallest in these phases, there are some interesting differences in trigger occurrences. In the basic condition, for example, natural interaction with the simulator declines after the initial patient assessment. This is in contrast to the contextualized condition, where it increases. Hence, even though the equipment fidelity (manikin, medical equipment, and tools) was nearly the same in these phases, it appears that the increased environmental fidelity (dog barking, worried neighbors present) and functional task alignment (interactions with medical control, dispatch info, patient data) might compensate for potential frustration or immersion disruptions induced by the manikin. The increase of T9 (natural interaction with the simulator, here by, e.g., physically touching, talking calmly to, and comforting the patient) in phase 4 may indicate that the participants at this point in the simulation perceive the manikin as a real patient who needs to be involved when treatment is given. This suggests that role-play is crucial to our results, bringing a positive reinforcement of natural interaction with the manikin, within the team, and with other participants in the simulation. This resonates with the suggestion by Dieckmann et al. [4] that different types of fidelity can influence immersion in different ways. For example, it is probably possible to reach a high level of immersion without a high level of physical fidelity; instead, functional task alignment that creates a realistic sense of stress or time pressure, or natural, compassionate interaction with a distressed patient, influences immersion positively.
According to the findings by Dieckmann et al. [22] on perceived realism in healthcare simulation, it is the interaction between interrelated subparts, such as the simulation manikin, and the interaction and role-play in the team that creates the sense of perceived realism. Our results expand on this idea, showing that increased fidelity, natural integration of different phases, and role-play as a way to promote interaction increase immersion, and that these components might actually compensate for unrealistic or interruptive events or equipment during a healthcare simulation scenario.
The present study has some limitations. For example, it is difficult to say how the difference in duration between the basic and the contextualized scenarios has affected immersion. The ISRI score is computed using time as a denominator. In this way, the score is not inflated by the longer durations forced by the nature of some setups (e.g., that loading and transport have to be actually executed in the contextualized condition). A potential criticism of using time as a denominator is that teams could aim to increase their score by executing operations faster. This is, however, not an issue, as the ISRI score is not used to evaluate team performance. Moreover, in the presented study, teams were not aware of any details of the ISRI evaluation.
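A hypothetical worked example of this normalization (the numbers are invented for illustration):

```python
# With time in the denominator, a team that takes twice as long with the same
# trigger mix receives the same rate-based score, so the longer phases of the
# contextualized condition do not inflate ISRI by themselves.
net_triggers = 6                  # assumed net positive trigger count
print(net_triggers / 5.0)         # 1.2 per minute over a 5-minute phase
print(2 * net_triggers / 10.0)    # 1.2 per minute over a 10-minute phase
```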
Although the ISRI score is a normalized metric, differences in duration are still an important factor, and further work is needed to better understand how they affect the comparison of conditions and the practicalities of training sessions.
The present experiment was not preceded by a power calculation. Since ISRI is a newly developed instrument, a reliable power calculation is difficult. We hope that the results from this study can be used for power calculations in future studies involving the ISRI instrument.
The presented study reveals a tendency that the immersion difference may be affected by the order of conditions. This effect is not significant and does not invalidate the main result of the study, but more studies are needed to better understand if and why it appears.
The variables manipulated between the conditions are numerous (e.g., medical scenario/clinical condition, home environment, and physical, psychological, and environmental fidelity factors), which makes it difficult to isolate the relative impact of specific manipulations or factors on immersion.
It is also impossible to estimate how factors outside the simulation scenarios affect immersion, such as individual differences, earlier experience of simulation, and expectations. The present study does not evaluate how immersion affects learning or performance. More research is needed on the appropriate level of immersion in relation to different learning goals.
This study addresses how immersion in simulated prehospital training scenarios is affected by contextualization. We have studied this by applying the ISRI tool, which allows us to observe and objectively measure immersion. We conclude that contextualization of training scenarios has a positive effect on participants' immersion experience, contributes to a better workflow, and promotes realistic interactions and task executions, compared to a basic simulation scenario. This suggests that efforts put into increasing physical resemblance and functional task alignment affect immersion positively. Future studies are, however, needed to further explore how immersion is affected by specific fidelity components (e.g., noise, home environment, or equipment props) and, perhaps more urgently, to provide structured evaluations of the impact of immersion on learning and performance in healthcare simulations.
Söderström E, van Laere J, Backlund P, Maurin Söderholm H. Combining work process models to identify training needs in the prehospital care process. In: Johansson B, Andersson B, Holmberg N, editors. Perspectives in business informatics research. Lund: Springer International Publishing; 2014. p. 375–89.
Rice A. The use of simulation mannequins in education. J Paramedic Pract. 2013;5:550–1.
Rudolph JW, Simon R, Raemer DB. Which reality matters? Questions on the path to high engagement in healthcare simulation. Simul Healthc. 2007;2(3):161–3.
Dieckmann P, Gaba D, Rall M. Deepening the theoretical foundations of patient simulation as social practice. Simul Healthc. 2007;2:183–93.
Nehring WM, Lashley FR. Nursing simulation: a review of the past 40 years. Simul Gaming. 2009;40:528–52.
Clapper TC. Role play and simulation. Educ Dig. 2010;75:39–43.
Brown E, Cairns P. A grounded investigation of game immersion. In: CHI '04 extended abstracts on human factors in computing systems. Vienna: ACM; 2004. p. 1297–300.
Jennett C, Cox AL, Cairns P, Dhoparee S, Epps A, Tijs T, et al. Measuring and defining the experience of immersion in games. Int J Hum Comput Stud. 2008;66:641–61.
Ermi L, Mäyrä F. Fundamental components of the gameplay experience: analysing immersion. In: Changing views: worlds in play: 2nd international DiGRA conference. 2005. p. 15–27.
Qin H, Patrick Rau PL, Salvendy G. Measuring player immersion in the computer game narrative. Int J Hum Comput Interact. 2009;25:107–33.
Huiberts S. Captivating sound: the role of audio for immersion in games, Doctoral thesis. Portsmouth: University of Portsmouth and Utrecht School of the Arts; 2010.
Dede C. Immersive interfaces for engagement and learning. Science. 2009;323:66–9.
Gates M, Parr M, Hughen JE. Enhancing nursing knowledge using high-fidelity simulation. J Nurs Educ. 2012;51:9–15.
Archer F, Wyatt A, Fallows B. Use of simulators in teaching and learning: paramedics' evaluation of a patient simulator. Australasian J Paramedicine. 2007;5(2):6. http://ro.ecu.edu.au/jephc/vol5/iss2/6.
Tun JK, Alinier G, Tang J, Kneebone RL. Redefining simulation fidelity for healthcare education. Simul Gaming. 2015;46(2):159–74.
Hamstra SJ, Brydges R, Hatala R, Zendejas B, Cook DA. Reconsidering fidelity in simulation-based training. Acad Med. 2014;89(3):387–92.
Rettedal A. Illusion and technology in medical simulation: if you cannot build it, make them believe. In: Dieckmann P, editor. Using simulations for education, training and research. Lengerich: Pabst Science Publishers; 2009. p. 202–14.
Horcik Z, Savoldelli G, Poizat G, Durand MA. Phenomenological approach to novice nurse anesthetists' experience during simulation-based training sessions. Simul Healthc. 2014;9(2):94–101.
Cheng MT, She HC, Annetta LA. Game immersion experience: its hierarchical structure and impact on game-based science learning. J Comput Assist Learn. 2015;31:232–53.
Hermerén G. Good research practice. Stockholm: The Swedish Research Council; 2011.
Jensen J, Croskerry P, Travers A. EMS: consensus on paramedic clinical decisions during high-acuity emergency calls: results of a Canadian Delphi study. CJEM. 2011;13(5):310–8.
Dieckmann P, Manser T, Wehner T, Rall M. Reality and fiction cues in medical patient simulation: an interview study with anesthesiologists. J Cogn Eng Decis Mak. 2007;1:148–68.
We would like to acknowledge the ambulance personnel that participated in the original study. We would also like to express our gratitude to Laerdal and SAAB Venture AB for providing equipment and personnel.
School of Informatics, University of Skövde, Box 408, 541 28, Skövde, Sweden
Henrik Engström, Per Backlund, Mikael Lebram & Mikael Johannesson
Centre for Prehospital Research, Faculty of Caring Science, Work Life and Social Welfare, University of Borås, 501 90, Borås, Sweden
Magnus Andersson Hagiwara, Lars Lundberg & Anders Sterner
Swedish Armed Forces Centre for Defence Medicine, Box 5155, 426 05, Västra Frölunda, Sweden
Lars Lundberg
Centre for Prehospital Research, Swedish School of Library and Information Science, Faculty of Librarianship, Information, Education and IT, University of Borås, 501 90, Borås, Sweden
Hanna Maurin Söderholm
Correspondence to Lars Lundberg.
HE and HMS conceived of the study and participated in its design and coordination and drafted the manuscript. MAH, PB, ML, LL, MJ, and AS participated in its design and drafted the manuscript. All authors were involved in the design and implementation of the original experiment, read, and approved the final manuscript.
Engström, H., Andersson Hagiwara, M., Backlund, P. et al. The impact of contextualization on immersion in healthcare simulation. Adv Simul 1, 8 (2016). https://doi.org/10.1186/s41077-016-0009-y
Accepted: 09 February 2016
Research | Open Access | Published: 26 February 2019
Impact of silver nanoparticles (AgNP) on soil microbial community depending on functionalization, concentration, exposure time, and soil texture
Anna-Lena Grün, Werner Manz, Yvonne Lydia Kohl, Florian Meier, Susanne Straskraba, Carsten Jost, Roland Drexel & Christoph Emmerling (ORCID: orcid.org/0000-0002-1286-7504)
Environmental Sciences Europe, volume 31, Article number: 15 (2019)
Exposure to engineered inorganic nanoparticles is currently increasing in both terrestrial and aquatic ecosystems worldwide. Although harmful effects of AgNP on the soil bacterial community are already known, information about how the factors functionalization, concentration, exposure time, and soil texture shape the expression of AgNP effects is still scarce. Hence, in this study, three soils of different grain size were exposed for up to 90 days to bare and functionalized AgNP at concentrations ranging from 0.01 to 1.00 mg/kg soil dry weight. Effects on the soil microbial community were quantified by various biological parameters, including 16S rRNA gene, photometric, and fluorescence analyses.
Multivariate data analysis revealed significant effects of AgNP exposure for all factors and factor combinations investigated. Analysis of the individual factors (silver species, concentration, exposure time, soil texture) in the unifactorial ANOVA explained the largest part of the variance compared to the error variance. In-depth analysis of factor combinations explained the variance even better. For the biological parameters assessed in this study, the combination of soil texture and silver species and the combination of soil texture and exposure time were the two most relevant factor combinations. The factor AgNP concentration contributed less to the effect expression than silver species, exposure time, and the physico–chemical composition of the soil.
The factors functionalization, concentration, exposure time, and soil texture significantly impacted the expression of AgNP effects on the soil microbial community. Long-term exposure scenarios in particular are strongly needed for a reliable environmental impact assessment of AgNP exposure in various soil types.
The production volume and the application fields of silver nanoparticles (AgNP) have increased continuously in the last decade owing to their unique properties, such as a high surface-to-volume ratio, high chemical reactivity, and specific optical properties [1,2,3]. Apart from their initial medical utilization, AgNP are now used in households, industry, and agriculture, for example for water purification, plant growth promotion, and textile cleaning [4, 5]. In consequence, their emission into the environment during all stages of the life cycle, including production, product use, disposal, and weathering, is unavoidable [6]. The exact extent of the release is unknown owing to a lack of reliable and robust analytical methods for detecting trace concentrations of AgNP in complex matrices [7]. Thus, several scientists have modeled the fate and concentrations of AgNP in the environment and named the soil compartment as one of the main sinks of AgNP released into the environment [2, 6, 8,9,10,11,12,13,14]. For Europe, an annual AgNP increase of 0.6 t and 2.09 t was calculated for soils and sediments, respectively [8].
Today, information about the impact of AgNP on the soil microbiome is still scarce, although microbial communities are important and sensitive targets for determining the environmental hazards of AgNP [15]. Recently, we registered significant negative effects on soil microbial biomass (− 38.0%), bacterial ammonia oxidizers (− 17.0%), and the beta-Proteobacteria population (− 14.2%) after 1-year exposure to 0.01 mg AgNP/kg in a loamy soil, while Acidobacteria (44.0%), Actinobacteria (21.1%), and Bacteroidetes (14.6%) were significantly stimulated [16, 17]. Therefore, a detrimental disturbance of soil ecosystem functions, such as nitrification, organic carbon transformation, and chitin degradation, can be assumed.
Numerous studies have documented the differing physico–chemical and concomitant toxicological behavior of AgNP depending on the soil type. As a function of pH, ionic strength, temperature, the amount of dissolved ions and of natural organic matter, oxygen concentration, grain size distribution, and other parameters [18,19,20,21], AgNP can undergo various physico–chemical transformations such as reduction, oxidation, aggregation, dissolution, complexation, and further secondary reactions [21,22,23,24]. These transformations in turn affect the toxicity mechanism of AgNP as well as their bioavailability. For example, in comparative studies with different soil types, Schlich and Hund-Rinke [19] as well as Rahmatpour et al. [25] showed that AgNP caused lower toxicity in soils with higher clay content owing to AgNP immobilization by heteroaggregation with clay particles [19, 23, 25].
In addition, the AgNP species itself may significantly affect its environmental behavior. Size, shape, surface-coating agent, charge, and stability are only a few of the properties by which AgNP can differ [1]. Today, extensive functionalization strategies are available to modify the surface chemistry of a variety of engineered nanoparticles (NP) [26]. Such coatings are used to stabilize NP against aggregation when stable suspensions are required for product functionality or for improved delivery of the product. The coating may also provide other functionalities, such as biocompatibility or the targeting of specific cells in biomedical applications [27]. For example, AgNP can be coated with citrate or polyvinylpyrrolidone to increase their stability [28], modified with ATP to act as a selective antibiotic [29], or equipped with COOH– and NH2-groups to adjust their surface charge for imaging and drug delivery applications [30]. Once released into the environment, the surface functionalization of AgNP significantly determines its physico–chemical fate, its bioavailability, and its toxicity [22, 26]. For example, Wu et al. [31] observed in a nanocosm experiment that polyethylene glycol AgNP had the highest overall toxicity, followed by silica AgNP and lastly aminated silica-coated AgNP, due to their different dissolution rates and thus stability.
In addition to soil type and AgNP functionalization, several studies have documented a significant impact of exposure time on the toxicity of AgNP in soils [16, 32,33,34]. By a statistically significant regression and correlation analysis between silver toxicity and exposure time, we recently confirmed loamy soils as a sink for silver nanoparticles and their concomitant silver ions, owing to ageing processes of the silver species and their slow return to the biological soil system [17].
Considering the predicted increase of AgNP in the soil environment, the known toxicity of AgNP to the soil microbial community, and the variable fate of AgNP in the soil compartment, the aim of this study was to give a more holistic view of the impact of AgNP exposure on the soil microbial community as a function of the factors AgNP functionalization, AgNP concentration, exposure time, and soil texture. The study was conducted with a long-term incubation period of 90 days using three soil textures (loam, clay, sand) and two differently charged AgNP at concentrations in the range of 0.01–1.00 mg AgNP/kg soil. We quantified the effects on several biological parameters: the microbial biomass, the abundance of bacteria, the enzymatic activity, as well as marker genes for selected processes of the inorganic nitrogen cycle and for selected higher bacterial taxa. Furthermore, we used AgNO3 as a control to determine the effect of Ag+ ions on the AgNP results. Based on our preceding observation that the nitrate content of 36.5% in the AgNO3 compound might also have effects on the microbial community [16], we used NO3− as a further control. This experiment was restricted to the loamy soil. Here, we analysed the impact of charged and uncharged AgNP as well as the effects of AgNO3 and NO3− on the microbial community.
Silver nanoparticles and controls
Two differently functionalized AgNP were used: Ag10-COOH, functionalized with carboxy groups, and Ag10-NH2, functionalized with amino groups. The AgNP were synthesized by ligand exchange, starting from hydrophobic silver particles (Ag-HPB) with the addition of toluene and mercaptopropionic acid or cysteamine hydrochloride in MeOH to obtain the final Ag10-COOH or Ag10-NH2 colloidal solutions. The concentrations of the stock solutions were 180 mg/L for Ag10-COOH and 21 mg/L for Ag10-NH2. The size, shape, and nanoparticle surface charge (ζ-potential) of the AgNP were analysed by transmission electron microscopy (Philips CM 12, Netherlands), dynamic light scattering (DLS, Zetasizer Nano S, Malvern Instruments Ltd., UK), asymmetrical flow field-flow fractionation (AF4, AF2000 MT, Postnova Analytics GmbH, Germany), and laser Doppler microelectrophoresis (Malvern Zetasizer Nano-ZS, Malvern Instruments Ltd., UK).
Methods of synthesis and particle characterization can be found in detail in Additional file 1.
In addition to the analysis of the stock solutions, the ζ-potential and hydrodynamic diameter of the AgNP were determined at different pH values (pH 4, pH 7.4, and pH 10). Prior to the measurements, the stock solution was diluted in pure water (Millipore) at pH 4, 7.4, or 10 to a concentration of 10 µg/mL, vortexed for 10 s, incubated for 1 h or 24 h under permanent rotation (100 rpm at 37 °C), and vortexed for 30 s prior to analysis via Zetasizer Nano-ZS.
Silver nitrate (AgNO3) was used as a positive control. Silver concentrations in the AgNO3 controls were the same as those in the AgNP treatments. As a further control, NO3− was used in the form of KNO3. The nitrate concentrations in the KNO3 controls were the same as those in the AgNO3 treatments.
Test soils
Three soil textures were selected: a silty sand, a loamy clay, and a silty loam. Approximately 20 kg of each soil was sampled from the A-horizon (0–30 cm depth) in spring 2015 near Trier, Germany. Land use was forest for the sandy soil and arable field for the clayey and loamy soils. The clay content was 0–5% for the sandy, 45–65% for the clayey, and 25–35% for the loamy soil. After sampling, the soils were thoroughly sieved to < 2 mm and stored at 6 °C until further use. Characteristic soil parameters are listed in Table 1.
Table 1 Characterization of the test soils
Before starting the experiment, the soil was moistened and incubated at 18 °C for 7 days. The application of the test materials was performed in petri dishes, each filled with soil equivalent to 25 g dry weight.
AgNP test solutions were prepared immediately before use: AgNP stock solutions were sonicated at 42 W/L for 15 min and gradually diluted with UPW. Then, 1 mL of the agent species Ag10-COOH, Ag10-NH2, AgNO3, or KNO3 solution, at different concentrations, was added in small drops onto the soil surface to obtain final concentrations of 0.01, 0.10, and 1.00 mg/kg dry weight. Negative controls received only an application of UPW. Soil water contents after the addition of the test solutions were on average 22.0% (sand), 19.4% (clay), and 18.5% (loam), equivalent to 42.6%, 41.4%, and 37.4% WHCmax, respectively. For each soil texture, agent species, concentration, and day, separate samples in different soil dishes were prepared as 4 replicates (e.g., 4 × 0.01 mg Ag10-COOH kg−1 sand for day 1). Subsequently, the soils were extensively mixed by stirring with a spoon, transferred to plastic containers (centrifuge tubes, 50 mL, VWR, Darmstadt, Germany), and sealed with Parafilm®. They were incubated at 15.1 ± 1.8 °C in the dark for 1, 14, 28, and 90 days. Water evaporation was determined gravimetrically and compensated by the addition of UPW. Samples were finally stored at − 20 °C. For analyses, samples were defrosted by incubation overnight at 6 °C. Each replicate was analysed for the effect expressions of the target variables leucine aminopeptidase activity and microbial biomass as well as of functional and taxonomic genes.
Furthermore, results from our previous studies [16, 17] on the effect assessment of AgPure, with an average size of 20 nm and polyacrylate stabilization, were used to calculate the impact of the five agent species AgPure, Ag10-COOH, Ag10-NH2, AgNO3, and KNO3 at concentrations of 0.01, 0.1, and 1.0 mg Ag/kg soil after exposure of 1, 14, 28, and 90 days in the silty loam soil on the same target variables.
Analysis of biological parameters
Leucine aminopeptidase activity
Leucine aminopeptidase (EC 3.4.11.1; LAP) was investigated according to Marx et al. [35], with modifications [36]. Briefly, 1 mol/L L-leucine-7-AMC was used as the substrate for LAP, and 7-amino-4-methylcoumarin [37] was used as a standard. Incubation of the soil slurry with the substrate was performed at 30 °C. Fluorescence was measured after 0 and 2 h using a Victor Multilabel Plate Reader (Perkin Elmer, Germany; excitation wavelength: 355 nm, emission wavelength: 460 nm). LAP activity was calculated as substrate turnover per g dry soil and hour.
The potential contribution of AgNP to the total fluorescence signal was checked for each AgNP concentration: AgNP were added to autoclaved soil, the same procedure was conducted, and the resulting fluorescence signal was compared to the fluorescence intensity of the pure autoclaved soil. The AgNP used did not exhibit autofluorescence (data not shown).
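As a minimal sketch of the activity calculation described above (the calibration slope, readings, and function name are illustrative assumptions, not values from the study):

```python
def lap_activity(f_2h, f_0h, amc_slope, soil_dw_g, incubation_h=2.0):
    """LAP activity as nmol AMC released per g dry soil and hour.

    amc_slope: fluorescence units per nmol AMC, taken from the
    7-amino-4-methylcoumarin standard curve (assumed to be linear).
    """
    amc_nmol = (f_2h - f_0h) / amc_slope  # fluorescence gain -> nmol AMC
    return amc_nmol / (soil_dw_g * incubation_h)

# Hypothetical 0 h and 2 h readings for one soil-slurry well:
print(lap_activity(f_2h=5200.0, f_0h=400.0, amc_slope=12.0, soil_dw_g=0.5))
```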
DNA extraction and microbial biomass measurements
DNA extraction and purification were performed using the Genomic DNA from soil kit (Macherey–Nagel, Düren, Germany) according to the manufacturer's instructions and stored at − 20 °C. For the measurement of microbial biomass, 10 µL of DNA was transferred into the well of a 96-well microplate and shaken for 5 s before absorbance was measured at 260 nm [38] using Victor Multilabel Plate Reader (Perkin Elmer, Germany).
Quantitative detection of functional and taxonomic genes
16S rRNA genes were used as a proxy to quantify the abundance of bacteria, as described by Bach et al. [39]. The abundance of bacteria harboring the nifH gene was used as a marker for the potential to fix nitrogen and measured according to Rösch et al. [40]. To analyze the effects of the different silver materials on the ammonia-oxidizing bacteria, the amoA primer system described by Rotthauwe et al. [41] was used. To quantify the abundance of taxon-specific 16S rRNA gene copy numbers, qPCR assays for Acidobacteria [42, 43], Actinobacteria [43, 44], alpha-Proteobacteria [45], Bacteroidetes [43, 46], and beta-Proteobacteria [47, 48] were performed. Detailed descriptions of the used assays are given by Grün and Emmerling [17] and Grün et al. [16].
All qPCR reactions were conducted on a thermal cycler equipped with an optical module (Analytik Jena, Jena, Germany). All samples were run in triplicate wells. Single qPCR reactions were prepared in a total volume of 20 µL. The InnuMix SYBR-Green qPCR Master-Mix was purchased from Analytik Jena (Jena, Germany). Primer concentrations were 10 pmol/µL, and amplification specificity was assessed by melting curve analysis and gel electrophoresis on a 1.5% agarose gel after qPCR. Standard curves were based on cloned PCR products from the respective genes [43].
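The following sketch shows, under stated assumptions, how gene copy numbers are typically back-calculated from qPCR Ct values via a standard curve of cloned PCR products; the curve values are hypothetical and not taken from the study.

```python
import numpy as np

# Hypothetical standards: known log10 copy numbers and their measured Ct values.
log10_copies = np.array([7.0, 6.0, 5.0, 4.0, 3.0])
ct_values = np.array([14.1, 17.5, 20.9, 24.3, 27.8])

m, b = np.polyfit(log10_copies, ct_values, 1)  # linear fit: Ct = m*log10(N) + b
efficiency = 10 ** (-1.0 / m) - 1.0            # ~1.0 corresponds to 100%

def copies_from_ct(ct):
    """Back-calculate the gene copy number of a sample from its Ct value."""
    return 10 ** ((ct - b) / m)

print(f"slope = {m:.2f}, efficiency = {efficiency:.0%}")
print(f"copies at Ct 22.0: {copies_from_ct(22.0):.2e}")
```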
All data were processed using IBM SPSS Statistics for Windows, version 23.0 (IBM Corp., Armonk, USA). The biological values obtained from qPCR (gene copy number per kg dry soil), leucine aminopeptidase activity (substrate turnover per g dry soil and hour), and the measurement of microbial biomass (ng DNA per g dry soil) of the negative controls (0.00 mg Ag/kg) of the 4 sample replicates were averaged for each biological parameter and day. Subsequently, the relative variation of each silver-treated sample for a given concentration and sampling date was calculated as follows:
$$\text{Relative variation}\,(\%) = \frac{\text{biological value of treated sample}}{\text{averaged biological value of untreated samples}} \times 100$$
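In code, this is a one-line computation; the following sketch (with invented replicate values) makes the normalization explicit:

```python
def relative_variation(treated_value, control_values):
    """Treated sample as a percentage of the mean of the untreated controls."""
    control_mean = sum(control_values) / len(control_values)
    return treated_value / control_mean * 100.0

# Hypothetical example: one treated replicate vs. four negative-control replicates.
print(relative_variation(4.2e8, [5.0e8, 4.8e8, 5.2e8, 5.0e8]))  # 84.0
```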
In the following, the relative variation was set as the target variable. Exposure time (1, 14, 28, 90 days), concentration (0.01, 0.1, 1.0 mg Ag/kg soil), soil texture (loam, sand, clay), and silver species (Ag10-COOH, Ag10-NH2, AgNO3) were set as factors, each with its respective factor levels.
For the effect assessment of a factor level on a target variable, the mean of the target variable (e.g., leucine aminopeptidase) at a distinct factor level (e.g., Ag10-COOH) was calculated. Within each factor, the target variable was also pre-evaluated for normal distribution by the Shapiro–Wilk test and for variance homogeneity by the Levene test.
To test the influence of the four factors as well as of the factor combinations on the target variables, multi-factorial ANOVA was performed. Here, tests of between-subject effects provided information about significant relationships. The significance of group mean differences within a factor was assessed by means of the Bonferroni post hoc test.
Finally, a multivariate analysis of variance was performed to assess simultaneously the influence of the four factors on the 10 dependent target variables. The factors were set as independent variables, whereas the relative variations of the biological parameters were set as dependent variables. The test statistic was computed using Pillai's trace.
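A hedged sketch of this pipeline in Python (statsmodels in place of SPSS; the file name and column names are illustrative assumptions):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("relative_variations.csv")  # hypothetical tidy data table

# Multi-factorial ANOVA for one target variable (here: LAP activity) with all
# main effects and interactions, analogous to tests of between-subject effects.
model = ols("lap ~ C(species) * C(concentration) * C(time) * C(texture)",
            data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# MANOVA over the ten target variables; mv_test() reports Pillai's trace.
mv = MANOVA.from_formula(
    "lap + biomass + gene16s + amoa + nifh + acido + actino + alphap "
    "+ bacteroidetes + betap ~ C(species) * C(concentration) * C(time) * C(texture)",
    data=df)
print(mv.mv_test())
```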
Characterization of the AgNP suspensions prior to application into soils
Characteristics of the Ag10-COOH stock solution in water were as follows: hydrodynamic diameter (DLS, z-average): 85.3 ± 2.9 nm; polydispersity index (PDI, DLS): 0.234 ± 0.031; ζ-potential: − 41.4 ± 1.1 mV (in UPW). AF4-UV-DLS measurement revealed an average hydrodynamic diameter of 18.1 ± 0.5 nm at the UV peak maximum. Over the main UV peak, the hydrodynamic diameter ranged from 18 nm to around 28 nm. Larger particles with hydrodynamic diameters up to around 118 nm were detected as well, but these fractions were low in concentration, based on the corresponding UV signal (Fig. 1a).
AF4-UV-DLS measurements of Ag10-COOH (a) and Ag10-NH2 (b)
Characteristics of the Ag10-NH2 stock solution in water were as follows: hydrodynamic diameter (DLS, z-average): 62.1 ± 3.1 nm; polydispersity index (PDI) (DLS) = 0.379 ± 0.072; ζ-potential: + 39.2 ± 0.2 mV (in UPW). Average hydrodynamic diameter at the UV peak maximum obtained from AF4-UV-DLS measurement: 82.3 nm ± 3.1 nm with a size distribution from around 8 nm to 142 nm (Fig. 1b).
The analysis of the ζ-potential and hydrodynamic diameter of the AgNP at different pH values revealed pH-dependent characteristics of the NP (Fig. 2). For Ag10-NH2, the hydrodynamic diameter was 197 ± 13.9 nm after 1 h of incubation in a pH 4 solution and 169 ± 9.2 nm in a pH 7.4 solution (Fig. 2a). There was no difference between the results after 1 h and 24 h of incubation. Under alkaline conditions (pH 10), the hydrodynamic diameter increased to 601 ± 85 nm (1 h) or 1911 ± 475 nm (24 h). Starting from the stock solution, with a positive surface charge of + 39.2 ± 0.2 mV, the surface charge decreased with increasing pH value (Fig. 2b). After 1 h of incubation of Ag10-NH2 in aqueous pH 4 solution, the ζ-potential was + 36.9 ± 2.5 mV; it decreased to − 7.4 ± 1.5 mV after 1 h of incubation in a matrix with pH 10. Again, there was no significant difference between the results after 1 h and 24 h of incubation.
pH-dependent hydrodynamic diameter and ζ-potential of Ag10-NH2 (a, b) and Ag10-COOH (c, d). N = 6
Ag10-COOH behaved differently. After 1 h of incubation, the hydrodynamic diameters of Ag10-COOH at pH 4, 7.4, and 10 were comparable. After 24 h, the size had increased significantly at pH 4, but not at pH 7.4 or 10 (Fig. 2c). The ζ-potential largely did not vary significantly between the different pH values. After incubation for 24 h at pH 10, however, the surface charge became significantly more negative (− 54.1 ± 2.9 mV) (Fig. 2d).
Transmission electron microscopy revealed an average diameter of 5.7 ± 2.3 nm for Ag10-COOH (Fig. 3a). The particles were not distributed homogeneously across the grid but rather arranged in clusters. Between the dark grey and black single particles, there was a lighter colored layer with very small objects in it, which could be a residue from the production process. The single particles were roundish to oval in shape. The Ag10-NH2 NP showed an average diameter of 6.2 ± 3.4 nm (Fig. 3b). They were not distributed homogeneously across the grid and agglomerated into secondary particles with diameters of over 100 nm. Besides these particles, there were areas with single nanoparticles, which had a roundish to oval shape and a smooth surface.
Transmission electron microscope (TEM) image of Ag10-COOH (a) and Ag10-NH2 (b) in water
Impact of silver species, exposure time, concentration, soil texture and their combinations on biological parameters
Results of the multi-factorial ANOVA revealed significant main effects of the factors silver species, concentration, exposure time, and soil texture on the relative variation of the bacterial phyla Actinobacteria, Acidobacteria, alpha-Proteobacteria, Bacteroidetes, and beta-Proteobacteria (Table 2). Comparable significant results were observed for leucine aminopeptidase (LAP) activity, microbial biomass, the abundance of all bacteria (16S rRNA), and the ammonia-oxidizing bacteria (amoA) (Table 2). In the case of the relative variation of the free-living nitrogen-fixing bacteria (nifH), only the factors and factor levels of exposure time and soil texture caused significant effects (Table 2); silver species and concentration provoked no significant impacts on nifH (Table 2). The interaction effects of the factor combinations were predominantly significant for all variables (Table 2).
Table 2 Results of the (M)ANOVA including the main factors silver species, concentration, time, and soil texture, N = 4320
The considered main factors as well as their interactions could explain at least 81.5% (R2 = 0.815) of the variance of the respective target variables; in the case of beta-Proteobacteria even 93.9% (R2 = 0.939, Table 2).
Results of the multivariate analysis revealed significant main effects for all factors and factor combinations (Table 2). The MANOVA scored the factor combination soil texture × silver species as the strongest option (F = 53.2), followed by the combination soil texture × exposure time (F = 46.8) (Table 2). Nevertheless, the main factors silver species (F = 44.2), exposure time (F = 43.9), and soil texture (F = 46.5) could also explain a large share of the variance relative to the error variance (Table 2). The main factor concentration explained the least variance (F = 12.7) (Table 2).
In Fig. 4, the influence of the highest scored factor combinations, silver species × soil texture and exposure time × soil texture, on the relative variation of the biological parameters is highlighted.
Impact of factor combinations on the entirety of the relative variation of the biological parameters. a Silver species × soil texture, b exposure time × soil texture. The error bars represent the 95% confidence interval. N = 4320. The relative variations (%) were set as the dependent variable; the factors were set as independent variables. To calculate the impact of the factor combination silver species × soil texture on the relative variation of the biological parameters, the relative variations were considered independently of the factors concentration and exposure time (a). To calculate the impact of the factor combination exposure time × soil texture, the relative variations were considered independently of the factors concentration and silver species (b)
The silver species Ag10-COOH caused similar effects in the loamy (101.2%) and clayey (99.4%) soils, whereas it diminished the relative variation of the biological parameters significantly in the sandy soil in pairwise comparison (93.3%; p = 0.000) (Fig. 4a). The effects of Ag10-NH2 were also very similar between the loamy (95.8%) and clayey (98.0%) soils, causing no significant differences. The pairwise comparison of the clayey and sandy (94.0%) soils exhibited significant differences in the biological parameters (p = 0.003), while no differences could be detected between the sandy and loamy soils. The AgNO3 control slightly stimulated the entirety of the relative variation of the biological parameters in the loamy (101.2%) and sandy (103.6%) soils. In contrast, AgNO3 caused a significant decrease of the relative variation (89.1%, p = 0.000) in the clayey soil. The order of increasing toxicity was, for the loamy soil, Ag10-COOH = AgNO3 < Ag10-NH2; for the clayey soil, Ag10-COOH = Ag10-NH2 < AgNO3; and for the sandy soil, AgNO3 < Ag10-NH2 = Ag10-COOH.
As shown in Fig. 4b, a 1-day exposure to the silver species at different concentrations led to a clear distinction between the effect in the sandy soil and those in the clayey (p = 0.029) and loamy soils (p = 0.001). Here, the sandy soil exhibited the lowest toxicity. During the mid-term exposure of 14 and 28 days, this trend reversed, and the silver treatment proved more toxic in the sandy soil than in the loamy soil (p ≤ 0.05). After 90 days of silver exposure, however, an increase in the relative deviation of the biological variables from their untreated controls could be observed from the sandy to the clayey to the loamy soil (Fig. 4b). Here, significant pairwise differences were found only between the sandy and loamy soils (p = 0.000) and between the sandy and clayey soils (p = 0.001).
Impact of agent species, exposure time, concentration and their combinations on biological parameters in a loamy soil
Results of the multi-factorial ANOVA revealed significant main effects of the factors agent species and exposure time on the relative variation of all biological target variables in the loamy soil (Table 3). In the case of the main factor concentration, only LAP activity, microbial biomass, amoA, Actinobacteria, Acidobacteria, and beta-Proteobacteria were significantly affected by the factor levels. The main factor exposure time explained the largest part of the variance compared to the error variance for all target variables except Acidobacteria and beta-Proteobacteria (Table 3); for these, the highest F-values were produced by the main factor agent species. The main factor concentration explained the least variance.
Table 3 Results of the (M)ANOVA including the main factors agent species, concentration, and time, N = 2400
The majority of significant mean differences within a factor were found between Ag10-COOH and AgPure, between 0.01 and 0.10 mg/kg silver, between 0.01 and 1.00 mg/kg silver, and between 1 and 14 days, 14 and 90 days, as well as 28 and 90 days (Table 3). No factor combination achieved a higher variance explanation than the single main factors (Table 3).
Based on the results of the MANOVA, all factors and factor combinations revealed significant main effects (Table 3). The main factor exposure time explained the largest amount of variance, with an F-value of 56.0, followed by the main factor agent species (F = 18.8) (Table 3). The main factor concentration again explained the least variance. The means of the highest scored individual factor levels of exposure time and agent species are visualized in Fig. 5.
Impact of the factors exposure time (a) and agent species (b) in a loamy soil. The error bars represent the 95% confidence interval. N = 2400. The relative variations (%) were set as the dependent variable; the factors were set as independent variables. To calculate the effect of exposure time on the relative variation of the biological parameters, the relative variations were considered independently of the factors concentration and agent species (a). For the calculation of the effect expression of the factor agent species, the relative variations were considered independently of the factors concentration and exposure time (b)
The relative variation of the biological parameters in the presence of the different silver and nitrate concentrations changed significantly with exposure time (p = 0.000) (Fig. 5a). After a decrease of the relative variation on day 1, the relative variation increased by 14.5% to 109.4% on day 14 (p = 0.000). Within the mid-term exposure, a similar but negative change of 10.8% was observed on day 28 (p = 0.000). At the end of the experiment, a further, but not significant, decline of the relative variation of the biological parameters was observed (Fig. 5a).
Considering the entirety of the relative variation of the biological parameters, the means of AgPure (96.4%) and Ag10-NH2 (95.8%) showed no significant pairwise disparity in their effect characteristics (Fig. 5b). The same holds for the means of Ag10-COOH (101.2%), AgNO3 (101.2%), and KNO3 (104.2%) (Fig. 5b). In contrast, AgPure and Ag10-NH2 each differed significantly from Ag10-COOH, AgNO3, and KNO3 in their pairwise comparisons (p ≤ 0.007).
The factor combinations in the multivariate multi-factorial analysis revealed lower F-values than the main factors agent species and exposure time individually. However, the factor combination agent species × exposure time was scored as the strongest option (F = 14.5) among the factor combinations (Table 3). After 1 day of exposure, Ag10-NH2 (89.8%) was the agent species with the strongest influence on the relative variation of the biological parameters in the loamy soil, followed by Ag10-COOH (92.3%) (Fig. 6). AgPure and AgNO3 showed only small influences on the relative variation, and no significant pairwise difference could be observed between AgNO3 and KNO3. On day 14, all agent species provoked an increase of the relative variation; the biological parameters were stimulated by 1.8% (AgPure) to 16.0% (AgNO3) (p = 0.000). Again, no significant pairwise difference could be observed between AgNO3 and KNO3. After 4 weeks of exposure, the stimulating effect of all agent species weakened (Fig. 6). In the case of Ag10-COOH and KNO3, decreases of the relative variation of 7.3% and 5.8%, respectively, were observed. The strongest effects of the different agent species on the relative variation of the biological parameters occurred on day 90 (Fig. 6). The agent species Ag10-COOH and KNO3 provoked significant stimulations of the biological parameters of 12.5% and 15.1%, respectively (Table 4). In contrast, AgPure, AgNO3, and Ag10-NH2 significantly diminished the relative variation by 14.8%, 16.3%, and 17.8%, respectively (Table 4).
Impact of the factor combination exposure time × agent species in a loamy soil. The error bars represent the 95% confidence interval. N = 240. The relative variations (%) were set as the dependent variable; the factors were set as independent variables. To calculate the impact of the factor combination exposure time × agent species on the relative variation of the biological parameters, the relative variations were considered independently of the factor concentration
Table 4 Pairwise comparisons (p) of the post-hoc test, N = 2400
The soil microbial community is responsible for several soil ecosystem services, such as the recycling of organic matter, soil fertility and structure, the biogeochemical nutrient cycles, toxin degradation, and pathogen control [49,50,51,52,53]. Bacteria in particular are the main performers of the functional processes that are integral to the maintenance of healthy soil environments [54].
In this study, we used the DNA content of soil samples as a proxy for microbial biomass and the abundance of 16S rRNA genes as an indicator of bacterial abundance. Furthermore, we measured LAP activity as well as the gene abundances of nifH and amoA as proxies for the nitrogen cycle, which drives many ecosystem activities in soils, including plant production [40, 41, 55]. Finally, we documented the responses of Acidobacteria, Actinobacteria, alpha-Proteobacteria, Bacteroidetes, and beta-Proteobacteria as representatives of the main soil bacterial phyla. Although phylogenetic diversity [56], non-cultivability [57], and functional redundancy [58] still make it difficult to link members of bacterial phyla in soils to their functions, some specific soil functions could be assigned to specific soil bacteria [41, 59,60,61,62,63].
Freezing and defrosting are known to influence the structure and function of microbial communities [64, 65]. Nevertheless, this procedure was adopted considering the large number of samples to be processed. Since the method was applied to all samples, the results are comparable and allow statistical inferences about the influence of the factors silver/agent species, concentration, exposure time, and soil texture.
Silver/agent species
The main determining factor silver species caused significant effects on almost all investigated biological parameters in the three test soils (Table 2), and the main factor agent species caused significant effects on all investigated biological parameters in the loamy soil (Table 3).
When all biological parameters were considered simultaneously in the MANOVAs, toxicity increased in the order KNO3 (104.2%) = Ag10-COOH (98.0%/101.2%) = AgNO3 (97.95%/101.2%) < AgPure (96.4%) = Ag10-NH2 (95.9%/95.8%), although no significant pairwise difference could be calculated between Ag10-COOH and AgNO3 (Additional file 1: Fig. S1). Values of the main factor agent species are shown in italics.
The coating of NP is crucial for their reactivity and physico–chemical transformations, such as the dissolution rate, the availability of surface area, the surface charge, the aggregation rate, and the long-term stability [27, 66, 67]. Analysis of the ζ-potential of the AgNP at different pH values (Fig. 2b, d) indicated no significant impact of pH on the ζ-potential of the Ag10-COOH particles. The highly negative ζ-potentials of − 33.9 ± 1.8 mV at pH 4 and − 39.4 ± 1.5 mV at pH 7.4 indicate high stability of these nanoparticles. The COOH coating could minimize Ag+ ion release as well as direct contact of the AgNP with microorganisms or soil constituents such as clay particles or natural organic matter, owing to its function as a physical barrier [27, 31, 66]. The investigation by Long et al. [68] supports this hypothesis: they examined the Ag+ release and toxicity of differently coated AgNP, which were very similar to our Ag10-COOH particles, and observed only a slight release of Ag+ and a low associated toxicity to Escherichia coli. In addition, the negative surface charge could lead to the attachment of soil cations such as Ca2+, Mg2+, or K+ to the Ag10-COOH particles, reinforcing their physico–chemical barrier.
Furthermore, the coating of AgNP plays a crucial role in determining their cellular uptake mechanism [26]; the negative surface charge of Ag10-COOH indicates a low affinity to the negatively charged membranes of microorganisms. Nevertheless, Ag10-COOH induced both adverse and beneficial effects with regard to the investigated individual biological parameters. Here, individual defense strategies [28, 69] as well as species-dependent toxicological susceptibility [17, 52, 70] of the biological parameters might be the underlying reasons.
In contrast, the ζ-potential of Ag10-NH2 decreased with increasing pH in our experiment (Fig. 2b), from which a decrease of stability in the test soils can be deduced. As a consequence of the missing physico–chemical barrier, a high availability of AgNP surface area as well as a high release rate of Ag+ ions can be expected. Both the mean of the relative variation of all biological parameters (95.9%/95.8%) and the means for LAP activity (96.5%/95.1%), 16S rRNA (95.7%/92.0%), amoA (92.6%/92.9%), Acidobacteria (89.9%/78.7%), and Bacteroidetes (93.5%/96.6%) (Tables 2, 3) underline the toxic effects of Ag10-NH2. Moreover, the positive surface charge of Ag10-NH2 could have led to a strong association with the negatively charged membranes of microorganisms [71, 72] and affected the surface tension of the membrane, resulting in increased pore formation [26].
AgPure showed harmful effects similar to those of Ag10-NH2, which could also be attributed to the release of Ag+ ions. The low particle concentration, the polyacrylate stabilization, and the high pH value of the soil could have prevented initial aggregation and agglomeration of AgPure [16, 17, 73]. In addition, the high concentration of divalent cations, such as Ca2+ and Mg2+, in the loamy soil (Table 1) could promote AgPure and Ag10-NH2 dissolution, resulting in the displacement of Ag+ ions from the nanoparticle surface and thus in toxicity [74].
Surprisingly, the AgNO3 control showed low toxicity, with average relative variations of 97.95% and 101.2%. Silver nitrate is ubiquitously used as a control in toxicological studies with AgNP to estimate the influence of dissolved Ag+ ions from AgNP [1, 4, 75, 76]. Here, the toxicity order from the MANOVAs suggests a high release of Ag+ ions from Ag10-COOH (98.0%/101.2%) relative to Ag10-NH2 (95.9%/95.8%) and AgPure (96.4%), and thus a high direct toxic impact of the Ag10-NH2 and AgPure NP themselves. However, an individual view of the biological parameters with regard to the ANOVAs gives a different picture. The bacterial abundance (16S rRNA) and the abundance of Actinobacteria were stimulated by Ag10-COOH exposure and diminished by AgNO3, whereas LAP activity, microbial biomass, and Acidobacteria were stimulated or not influenced by AgNO3 and diminished by Ag10-COOH (Table 2). These individual responses of the biological parameters explain the majority of significant mean differences found between Ag10-COOH and AgNO3 within the main factor silver species. By averaging in the MANOVA, these important observations are lost, which could lead to misleading conclusions. Based on the individual ANOVAs (Tables 2, 3) as well as the particle and soil characterizations, the hypothesis of a high release of Ag+ by Ag10-COOH should be rejected; a high release should rather be attributed to Ag10-NH2 and AgPure.
In detail, the results presented in Tables 2 and 3 revealed no effects or stimulating effects of AgNO3 in the case of LAP, amoA, Acidobacteria, and Bacteroidetes, whereas the AgNP used caused predominantly lower relative variations of these biological parameters. Similar observations were documented in short-term studies dealing with the effect of low concentrations of AgNP and AgNO3 on organisms related to the nitrogen cycle, where AgNO3 caused lower or even stimulating effects relative to the AgNP. Quite recently, we observed stimulating effects of AgNO3 on ammonia-oxidizing and nitrogen-fixing bacteria after short-term exposure, whereas AgNP led to their decrease [16]. Schlich et al. [77] documented a significant stimulation of nitrite production of up to 19.4% after 7-day exposure to 0.19 mg/kg AgNO3, whereas AgNP caused an inhibition. Yang et al. [78] found that 2.5 µg/L of Ag+ as AgNO3 upregulated the amoA genes amoA1 and amoC2 by 2.1- and 3.3-fold. Furthermore, Choi et al. [79], Masrahi et al. [24], and Liang et al. [80] observed lower effects on microbial nitrification due to AgNO3 relative to AgNP after short-term exposure. In view of the very similar effect expressions of AgNO3 (101.2%) and KNO3 (104.2%), we suspect that the nitrate contained in the AgNO3 control might impact the biological parameters just as the silver itself does. LAP activity and amoA are proxies for the nitrogen cycle. Because the spatial structure imposed by soil particles results in local variations in oxygen availability over small distances [81], both aerobic and anaerobic conditions can be found in the same soil sample. Thus, the AgNO3 control could act as a substrate for nitrate-reducing bacteria, such as gamma-Proteobacteria or Acidobacteria [63, 82, 83], which are less sensitive to AgNP or to Ag+ in the form of AgNO3 [78, 84, 85]; their increase due to nitrate utilization via denitrification and/or dissimilatory nitrate reduction [81, 83] would in turn create new nitrogen sources for the ammonia-oxidizing and nitrogen-fixing bacteria.
The increase in the abundance of Bacteroidetes might be the result of harboring silver resistance genes [69] and of a simultaneously increased availability of carbon from declining silver-sensitive microbes, which promoted the Bacteroidetes, known as r-strategists [56]. Nevertheless, the average factor expression of AgNO3 in the case of Actinobacteria (90.8%/88.7%) and beta-Proteobacteria (94.0%/94.3%) (Tables 2, 3) still indicated harmful effects. Consequently, there could be an interplay of stimulating and detrimental effects, which might originate in the agent itself, but also in consequential shifts of the microbial community and nutrient availability.
The F-values of the biological parameters in both MANOVAs indicated that the main factors silver species and agent species explain large parts of the variances compared to the error variances (Tables 2, 3). With respect to the three different AgNP used, our results confirmed a distinctive impact of the AgNP functionalization on their fate and toxicity in the soil environment. The general applicability of AgNO3 as a suitable positive control should be the subject of further investigations.
The main determining factor concentration likewise caused significant effects on almost all investigated biological parameters in the three test soils (Table 2). The majority of significant mean differences within this factor were found between 0.01 and 1.00 mg/kg silver. Here, the toxicity of the silver species increased predominantly with increasing concentrations (Additional file 1: Fig. S2), as already observed in a variety of recent soil studies [5, 24, 86, 87].
The concentration of AgNP strongly influences their physico-chemical transformation. Once released into the environment, the AgNP concentration is crucial for dissolution and aggregation mechanisms [28]. Merrifield et al. [63] documented AgNP homoaggregation as insignificant at realistic environmental concentrations, whereas dissolution and heteroaggregation processes were more likely. Based on the low test concentrations of 0.01–1.00 mg/kg silver, dissolution seemed to be the most probable transformation type in our study. The dissolution hypothesis was supported by the on-average high pH value of the soils, which could have prevented initial aggregation and agglomeration of AgNP [73], as well as by the on-average high concentration of divalent cations such as Ca2+ and Mg2+ (Table 1), which promoted AgNP dissolution and resulted in the displacement of Ag+ ions from the nanoparticle surface [74]. Furthermore, the high ζ-potentials of +36.8 and +39.2 mV for Ag10-NH2 as well as of −33.9 and −39.4 mV for Ag10-COOH at pH 4 and 7.4, respectively, indicated high interparticular repulsive forces, which diminished the aggregation probability due to high stability [88]. However, according to Klitzke et al. [14], a decrease of the ζ-potential after AgNP exposure to soil solution has to be assumed.
A direct interaction of the microbes with the differently charged AgNP is also conceivable. However, the relative variation of the biological parameters differed by only 2.0% between 0.01 and 1.00 mg/kg silver. This low effect expression can be attributed to the low bioavailability of the silver species as well as to bacterial resistance mechanisms. The released silver ions could bind to clay particles [19, 33] or organic material [20, 89, 90] and thus become less bioavailable. In addition, the likelihood of a microbe encountering a silver particle or ion is generally low at the low test concentrations applied. In the event of an encounter, bacteria possess various common defense mechanisms to escape the toxic influence of silver, such as the thickness of their peptidoglycan membrane as the first line of defense [91], efflux systems to extrude heavy metal ions [4, 92, 93], and the production of extracellular proteins and exopolysaccharides [94] to neutralize small amounts of toxic compounds [95, 96]. Furthermore, some bacteria possess specific silver resistance genes, which encode a periplasmic silver-specific binding protein (SilE), silver efflux pumps (P-type ATPase), and a membrane potential-dependent polypeptide cation/proton antiporter (SilCBA) [4].
Nevertheless, compared to the F-values of the other main factors, the factor concentration explained the least variance of the mean values considering all variables (Tables 2, 3). Based on our results, the main factor concentration at environmentally relevant levels therefore has only a very weak influence on the AgNP effect characteristics with respect to the microbial community in soils.
The main determining factor exposure time caused consistently significant effects on the investigated biological parameters in the three test soils after silver exposure (Table 2), with the majority of significant mean differences between the factor expressions 1 day and 28 days. With the exception of the relative variations of the Bacteroidetes and the ammonia-oxidizing bacteria (amoA), all biological parameters were negatively affected at day 1 (Table 2). This observation was rather unusual, because several studies measured a high and fast sensitivity of ammonia-oxidizing bacteria to AgNP and AgNO3 [24, 77, 78]. Apart from that, we recently recognized a tolerance of amoA-harboring bacteria after short-term exposure to AgNP [16]. Furthermore, Schlich et al. [77] and Samarajeewa et al. [86] observed stimulatory effects of ionic and nanoparticulate silver on ammonia-oxidizing bacteria, which could be ascribed to hormesis-like responses to low silver concentrations. By contrast, the stimulation of Bacteroidetes due to silver addition was in agreement with previous observations [52, 69, 70, 97, 98]; they harbor silver resistance genes [78].
As the exposure time increased towards day 28, the injury to the microbial community decreased (Additional file 1: Fig. S3). It is likely that the short-term effects after 1 day resulted from the initial release of bioavailable Ag+ by AgNP and AgNO3, causing toxic effects on the microbial community. Dissolution experiments [99, 100] confirmed the fast solubility of AgNP in soils. With increasing exposure time, the silver species became less bioavailable due to interactions with organic matter, clay minerals or pedogenic oxides [19, 33]. Furthermore, emerging mechanisms such as self-protection [4, 91,92,93, 95, 96], resistance [4], resilience [58] and/or cryptic growth [101] might also explain the limited effects on the microbial community in the soils.
The ANOVA using data of the loamy soil alone documented a similar trend (Table 3). Here, too, all biological parameters were negatively affected at day 1, with the exception of the relative variations of the Bacteroidetes and the ammonia-oxidizing bacteria (amoA). Owing to the high clay content (approximately 30%) of the loamy soil and its high content of organic carbon (2.9%) (Table 1), the retention of AgNP and Ag+ ions set in earlier, and the toxicity of the agents had already decreased by day 14 (Table 3) [14, 20, 102, 103].
On day 90, an impairment of the microbial community could be observed relative to day 28 in both ANOVAs (Tables 2, 3). Similar trends were observed in our previous studies [16, 17]. As AgNP and Ag+ ions gradually aged, they slowly returned to the biological soil system, acting as a continuous source of silver agents. Diez-Ortiz et al. [33] also documented a progressive increase in AgNP toxicity with time and assumed a time-dependent increase of silver in soil pore water due to slow dissolution.
The F-values of the biological parameters in both MANOVAs indicated the main factor exposure time as a strong option to explain a large part of the variances compared to the error variances (Tables 2, 3). Especially when the factor soil texture was excluded from the MANOVA, the factor exposure time yielded the maximum F-value (Table 3). These results strongly underline the significance of the exposure time for AgNP ecotoxicity investigations, attributable to the changes of silver bioavailability and its speciation status during long-term experiments in soils.
Soil texture
The main factor soil texture also caused significant effects on all investigated biological parameters in the three test soils (Table 2). The majority of significant mean differences were found between the loamy soil and the clayey soil. This was notable, because their investigated soil characteristics such as pH, TOC, TN and CEC were very similar. The most distinct difference resulted from their grain size distributions of clay (45–65% vs. 25–35%) and sand (5–40% vs. 25–45%) (Table 1). Recent studies have shown a positive correlation between the grain size of soils and the toxicity of AgNP: a lower grain size (i.e., a higher clay content) resulted in a lower toxicity of AgNP [19, 23, 25]. Indeed, this held for the sandy soil with a mean of 97.0% and the loamy soil with a mean of 99.4% relative abundance of all biological parameters (Additional file 1: Fig. S4). However, the mean relative abundance of the clayey soil (95.5%) illustrated the opposite situation: the lower grain size of the clayey soil resulted in a higher toxicity of silver (Additional file 1: Fig. S4). In fact, the particle size distribution alone cannot be responsible for the silver toxicity. Schlich and Hund-Rinke [19] documented that the highest AgNP toxicity was associated with more acidic soils, whereas the lowest toxicity was associated with more alkaline soils. They supposed that the soil pH value influences AgNP dissolution and the release of ions [19]. A similar pattern could be observed for the loamy soil (pH = 7.2) and the sandy soil (pH = 3.2) in this study (Table 2). At the same time, the results of the clayey soil with a pH value of 7.4 seemed to contradict the hypothesis again.
The results of the factor combination silver species × soil texture (Fig. 4a) might elucidate the crux of the observed mean silver toxicity order loamy soil < sandy soil < clayey soil. In the case of Ag10-NH2 and Ag10-COOH, the loamy and the clayey soil exhibited lower toxicity compared to the sandy soil (Fig. 4a). This confirmed the positive correlation between the grain size distribution of soils and the toxicity of AgNP, as well as the association of high AgNP toxicity with more acidic soils [19, 23, 25]. In contrast, AgNO3 caused a strong decrease of the entirety of the relative variations of the biological parameters in the clayey soil (Fig. 4a), which exerted an overwhelming influence on the averaging over the silver species in the MANOVA. This indicated a completely different fate of AgNO3 compared to AgNP in the test soils.
Soil texture × silver species
The factor combination soil texture × silver species yielded the highest F-value in the MANOVA considering the entirety of the relative variation of the biological parameters (Table 2).
With regard to the investigated AgNP, it could be observed that their toxicity was lower in the loamy and clayey soil than in the sandy soil (Fig. 4a). In the case of Ag10-NH2, the high availability of soil cations in the loamy and the clayey soil (Table 1) might have promoted the dissolution of these less stable AgNP [14], resulting in the release of Ag+ ions. These were bound to clay particles and became less bioavailable [33]. The absent effect of the AgNO3 control, as a measure of the Ag+ impact in the loamy soil, might confirm this hypothesis. Furthermore, the high content of natural organic matter, derived from the land uses of the loam and clay locations, could have reduced the toxicological effects of the Ag10-NH2 due to inhibition of oxidation [104]. Here, positively charged Ag10-NH2 were adsorbed by negatively charged organic matter, which then dominated the surface properties, leading to higher steric stability and thus also lower bioavailability [18].
In contrast, the sandy soil exhibited contrary soil properties and thus contrary toxicity (Table 1). Although the low amount of soil cations reduced the dissolution affinity of AgNP, the low pH of 3.2 and the concomitantly higher concentration of H3O+ ions might have decreased the Ag10-NH2 stability by altering the protonation state of the amino groups. Under aerobic conditions, AgNP oxidation followed and Ag+ ions were released [104]. The small clay content of the sandy soil (0.0–5.0%) did not provide enough capacity for Ag+ retention, resulting in direct harmful interactions of Ag+ with the microorganisms, such as interactions with enzymes of the respiratory chain, increased DNA mutation frequencies or morphological changes of the cell wall membrane [105]. However, the stimulating effects of the AgNO3 control on the biological parameters in the sandy soil disagreed with the Ag+ dissolution theory (Fig. 4a). Therefore, it might be that the Ag10-NH2 particles themselves caused the negative effects in the sandy soil. Positively charged NP have been observed to strongly associate with membranes, which leads to a higher cellular uptake [26].
The lower toxicity of the Ag10-COOH relative to the Ag10-NH2 particles was generally based on their higher stability and their surface barrier, which reduced Ag+ dissolution. Furthermore, their negative surface charge promoted the attachment of soil cations in the loamy and the clayey soil and could have prevented direct interactions with the negatively charged membranes of microorganisms [26, 27, 31, 66, 68]. Analogously to Ag10-NH2, Ag10-COOH showed the strongest toxicity in the sandy soil (Fig. 4a). Here, interactions with soil cations as well as natural organic material seemed less probable considering the soil and AgNP characteristics (Table 1). In addition to a low oxidation of Ag10-COOH and the concomitantly lower Ag+ release, a direct interaction of these AgNP with microorganisms can be assumed. An internalization of negatively charged nanoparticles could occur through nonspecific binding and clustering of the particles at cationic sites on the plasma membrane and subsequent endocytosis, or by direct diffusion through the cell membrane [106]. Here, Ag10-COOH could act as a Trojan horse: metabolization processes in food vacuoles and lysosomes allow the uncoating of AgNP and enable the direct release of Ag+ ions into the cytoplasm, causing intracellular damage [32, 107].
However, it is always a challenge to accurately differentiate what proportion of the toxicity stems from the ionic form and what proportion from the nanoform [76]. To resolve this question, AgNO3 was used as a measure of the Ag+ impact in our study. The results of silver nitrate exposure in the loamy soil confirmed the low contribution of Ag+ ions to the AgNP toxicity (Fig. 4a). Ag+ ions seemed to be bound to clay particles or other soil compartments and became less bioavailable. Conspicuously, AgNO3 caused the highest toxicity in the clayey soil and a low stimulation in the sandy soil. In fact, Schlich and Hund-Rinke [19] also observed a lower toxicity of AgNO3 compared to AgNP in a sandy soil (RefeSol 04A) when investigating the potential ammonium oxidation, but no promotion. The high toxicity of AgNO3 in the clayey soil resulted from a decrease of the bacterial taxa, whereas LAP activity, nifH and amoA were hardly influenced or even stimulated (data not shown). In the case of the proxies for the nitrogen cycle, we suspect a significant contribution of the nitrate compound of the AgNO3 control to the results. However, reasons for the detrimental effect on the bacterial community structure were difficult to identify. Possibly, interactions with existing soil contaminants could be a reason for the high AgNO3 toxicity. The clay location was farmed and treated with herbicides and pesticides. Thus, AgNO3 might have bound to such contaminants, creating a new toxic agent. For example, Uwizeyimana et al. [108] indicated that pesticide and metal mixtures negatively affected earthworms. Detailed studies on the impact of nanoparticle–pesticide combinations on the microbial community in soils are currently lacking, but necessary to perform a comprehensive risk assessment of AgNP in soils.
Soil texture × exposure time
The factor combination soil texture × exposure time revealed the second highest F-value of 46.8 in the MANOVA (Table 2). With regard to the results in Fig. 4b, it could be clearly observed that the spans of the effect characteristics of the individual soils differ significantly from each other over the four examination dates. While the effect levels in the sandy soil on days 1, 14, 28 and 90 extended over a span of 6.3% (lowest value of 93.7% on day 14, highest value of 100.0% on day 90), the spans for the loamy and the clayey soil were 21.5% and 20.3%, respectively. This indicated that the loamy and the clayey soil are more complex soil systems considering the interplay of physico-chemical and biological interactions and transformations of the silver species, in contrast to the sandy soil. Whereas the effect expressions of the biological parameters in the sandy soil were indicative of consistent conditions, the effect expressions of the loamy and the clayey soil suggested a variety of temporal changes in the complex soil–time framework.
In the case of the sandy soil, it can be assumed that there were consistently few interactions of the silver species with the soil characteristics, causing similar effect expressions throughout. In addition, the indigenous microbial community might show no observable adaptation ability over time.
In contrast, the relative variations of the biological parameters in the loamy and the clayey soil differed not only in dependence on the exposure time, but also in their effect strength from each other, although their soil characteristics were very similar (Table 1). For example, on day 14, it could be assumed for the loamy soil that the silver species were transformed, possibly bound to the clay particles or capped by organic matter [33, 104], resulting in a decrease of toxicity and a stimulation of the biological parameters due to the microbial defense arsenal [4, 91,92,93, 95, 96] (Fig. 4b). Based on the similar soil characteristics, the same should hold for the clayey soil. However, the biological parameters were diminished by 14.0% in the clayey soil at day 14, suggesting divergent indigenous microbial communities with different capacities to resist and adapt to silver emission. Also, interactions with herbicides and pesticides in the clayey soil might have led to the creation of a new toxicant, which provoked the harmful effect. At day 28, the effect expressions of the loamy and the clayey soil converged, and no effects or stimulating effects were documented (Fig. 4b). Here, ageing of the silver species and their return to the biological soil system could have reduced the stimulating effects on the biological parameters in the loamy soil. Conversely, the ageing of the silver species or the new toxicants in the clayey soil, as well as the defense mechanisms the microbial community had developed after 4 weeks of exposure, caused a stimulation by 6.3% (Fig. 4b).
Based on our data, however, we can only speculate about these events. More detailed investigations are necessary to reveal the time-dependent fate of silver bioavailability in different soils, as well as the time-dependent responses of the microbial community to these silver species. For this purpose, the development of reliable and robust analytical methods for detecting specific silver species at trace concentrations in complex matrices is a fundamental requirement. Building on this, batch experiments can reveal time-dependent silver transformations and the concomitant effects on the microbial community. Furthermore, it would be useful to deepen the soil characterization further to evaluate possible effects of anthropogenic residues from, e.g., herbicides or pesticides on the ecotoxicological potential of silver nanoparticles.
Agent species × exposure time
Regarding the MANOVA with the main factors agent species, concentration and exposure time in the loamy soil, the factor exposure time was able to explain the largest part of the variance. The factor combinations achieved lower F-values. Nevertheless, the factor combination agent species × exposure time still achieved the highest F-value among the factor combinations, 14.5. Furthermore, the results of the pairwise comparisons of the post hoc test (Table 4), as well as the ambiguous role of the AgNO3 control as a measure of Ag+ dissolution, gave reason to pay attention to this interaction.
Short-term exposure led to only small differences between the investigated agents AgPure, Ag10-COOH, Ag10-NH2, AgNO3 and KNO3 (Fig. 6). Ag10-COOH diminished the relative variations of the biological parameters by 7.7%. Given their higher stability and the short exposure time, physico-chemical transformation could be excluded, which led to the assumption of a direct harmful interaction of these AgNP with microorganisms. In contrast, Ag10-NH2 was reported to be a rather unstable AgNP due to its surface functionalization. Here, a higher dissolution of Ag+ ions as the toxicological agent might be a probable explanation for the decrease of the biological parameters by 10.2% (Fig. 6). However, given the low effects of the AgNO3 control (97.95%), the Ag+ dissolution theory seemed less likely. Therefore, a direct interaction could be assumed as well. AgPure caused on average no effects (100.7%) on day 1 and thus proved to be inert to physico-chemical transformations after short-term exposure. The KNO3 control diminished the biological parameters on average by 6.3%, but without a significant difference to AgNO3 (Fig. 6, Table 4). This slightly adverse effect might be the result of NO2− accumulation due to nitrate reduction [109, 110].
After 14 days of exposure, the differences between the effect characteristics of the agent species changed. Ag10-NH2 (111.2%), AgNO3 (116.0%) and KNO3 (112.0%) stimulated the biological parameters significantly, whereas AgPure (101.8%) and Ag10-COOH (107.0%) caused only low stimulatory effects. In the case of the silver species, their bioavailability, as well as that of the released Ag+ ions, could have been reduced at this time point due to interactions with organic matter, clay minerals or pedogenic oxides [14, 20, 102, 103]. Furthermore, emerging self-protection mechanisms, such as the production of extracellular proteins or polysaccharides by the soil microbiome, could neutralize toxic ions or cap AgNP [95, 96]. In addition, resilience mechanisms [58, 78, 101] might be possible explanations for the limited effects on the microbial community in the soil at day 14. The increase of the biological parameters due to AgNO3 and KNO3 exposure might result from an increase of nitrate reduction and the concomitant increase of nitrogen for the microbiome [81, 83]. Due to the synergy of microbial silver resistance mechanisms and the nitrogen substrate source, AgNO3 increased the relative variations of the biological parameters. As already mentioned, several authors documented stimulation or minor negative effects on microorganisms related to the nitrogen cycle after exposure to low concentrations of AgNO3 in short-term experiments [16, 24, 77,78,79,80].
Starting from day 28, ageing of the silver species and their slow return to the biological soil system presented a continuous source of bioavailable silver, reducing the stimulating effects on the biological parameters (Fig. 6). An increase in AgNP toxicity with time can be linked to a time-dependent increase of silver in soil pore water due to dissolution [33, 111]. However, the defense arsenal of the bacteria was still sufficient to resist silver toxicity at day 28. After 3 months of exposure, there seemed to be a shock load of silver in the case of AgPure (85.2%), Ag10-NH2 (82.4%) and AgNO3 (83.8%) for which the bacterial community was not immediately prepared. Small-scale bioavailability, chemical alterations and possible transformations (oxidation, reduction, dissolution, sulfidation) of AgNP and Ag+ [23, 24, 33, 90, 112, 113] in the loamy soil are possible physico-chemical causes for the abrupt toxicity. Furthermore, it might be assumed that after short- and mid-term adaptation to the silver contamination and the sequestration of the silver species in the soil system, the bacterial population might have lost members with silver tolerance and was then unexpectedly shocked by the return of the silver toxicant at day 90, resulting in strong reductions of the biological parameters [17].
In contrast, Ag10-COOH caused a significant stimulation of the investigated parameters by 12.5%, confirming their high stability. With prolonged exposure time, the likelihood of binding between the negatively charged Ag10-COOH and soil cations increased, and with that their physico-chemical barrier, resulting in lower bioavailability. Together with the presumably low number of free Ag+ ions in the case of Ag10-COOH, no toxicity could be observed.
Based on the missing significant differences between AgNO3 and KNO3 at days 1 and 14, it could not be determined whether the effects were caused only by the Ag+ of the AgNO3 control or also by the NO3− of the KNO3 control. Only at the exposure times of 28 and 90 days could significantly different effect characteristics of AgNO3 and KNO3, by 13.0% (p = 0.000) and 31.5% (p = 0.000), respectively, be observed (Fig. 6, Table 4). This led us to suspect that only at these late time points did the nitrate contained in the AgNO3 control no longer exceedingly influence the results of the AgNO3 control. In consequence, especially in the case of biological parameters related to the nitrogen cycle, such as LAP activity or the abundance of amoA-harboring bacteria, the use of AgNO3 as a proxy for Ag+ release in AgNP short-term effect assessments could be deceptive. There is an urgent need for further research into the suitability of AgNO3 as a measure of Ag+ release. For example, batch experiments investigating all steps within the soil nitrogen cycle (in particular nitrogen fixation, nitrification, denitrification, and dissimilatory nitrate reduction to ammonium) after short- and long-term exposure to AgNO3 and NO3− would help to resolve in detail at which step the nitrate released from the AgNO3 control presents an advantage for the microbial community and at which step the harmful influence of the silver predominates.
The impacts of the factors silver species, agent species, concentration, exposure time and soil texture on the relative variations of 10 biological parameters were analysed by 16S rRNA qPCR as well as fluorometric and photometric techniques. The AgNP used were characterized in detail by electron microscopy, dynamic light scattering and asymmetrical flow field-flow fractionation. Analyses of variance revealed the factors silver species, exposure time and soil texture as the most relevant determinants for the effect expressions of the biological parameters. Furthermore, the factor combinations soil texture × silver species as well as soil texture × exposure time explained even larger parts of the variance of the biological parameters.
Overall, the presented results demonstrate the importance of considering several factors in the effect assessment of AgNP. Based on our study, soil texture × silver species is the most significant factor combination for the environmental fate and toxicity of AgNP in soils.
AgNP: silver nanoparticle(s)
AF4: asymmetrical flow field-flow fractionation
Ag-HPB: hydrophobic silver particle(s)
DLS: dynamic light scattering
MPA: mercaptopropionic acid
NP: nanoparticle(s)
PDI: polydispersity index
UPW: ultrapure water
Reidy B, Haase A, Luch A, Dawson K, Lynch I (2013) Mechanisms of silver nanoparticle release, transformation and toxicity: a critical review of current knowledge and recommendations for future studies and applications. Materials 6:2295–2350
Sun TY, Mitrano DM, Bornhöft NA, Scheringer M, Hungerbühler K, Nowack B (2017) Envisioning Nano release dynamics in a changing world: using dynamic probabilistic modeling to assess future environmental emissions of engineered nanomaterials. Environ Sci Technol 51:2854–2863
Abbasi E, Milani M, Fekri Aval S, Kouhi M, Akbarzadeh A, Tayefi Nasrabadi H, Nikasa P, Joo SW, Hanifehpour Y, Nejati-Koshki K (2016) Silver nanoparticles: synthesis methods, bio-applications and properties. Crit Rev Microbiol 42:173–180
Pareek V, Gupta R, Panwar J (2018) Do physico–chemical properties of silver nanoparticles decide their interaction with biological media and bactericidal action? A review. Mater Sci Eng C 90:739–749
Hänsch M, Emmerling C (2010) Effects of silver nanoparticles on the microbiota and enzyme activity in soil. J Plant Nutr Soil Sci 173:554–558
Gottschalk F, Kost E, Nowack B (2013) Engineered nanomaterials in water and soils: a risk quantification based on probabilistic exposure and effect modeling. Environ Toxicol Chem 32:1278–1287
Nowack B, Baalousha M, Bornhöft N, Chaudhry Q, Cornelis G, Cotterill J, Gondikas A, Hassellöv M, Lead J, Mitrano DM (2015) Progress towards the validation of modeled environmental concentrations of engineered nanomaterials by analytical measurements. Environ Sci Nano 2:421–428
Sun TY, Gottschalk F, Hungerbühler K, Nowack B (2014) Comprehensive probabilistic modelling of environmental emissions of engineered nanomaterials. Environ Pollut 185:69–76
Gottschalk F, Sonderer T, Scholz RW, Nowack B (2009) Modeled environmental concentrations of engineered nanomaterials (TiO2, ZnO, Ag, CNT, fullerenes) for different regions. Environ Sci Technol 43:9216–9222
Gottschalk F, Nowack B (2011) The release of engineered nanomaterials to the environment. J Environ Monit 13:1145–1155
Benn TM, Westerhoff P (2008) Nanoparticle silver released into water from commercially available sock fabrics. Environ Sci Technol 42:4133–4139
Sun TY, Conroy G, Donner E, Hungerbühler K, Lombi E, Nowack B (2015) Probabilistic modelling of engineered nanomaterial emissions to the environment: a spatio-temporal approach. Environ Sci Nano 2:340–351
Dale A, Casman E, Lowry G, Lead J, Viparelli E, Baalousha M (2015) Modeling nanomaterial environmental fate in aquatic systems. Environ Sci Technol 49:2587
Klitzke S, Metreveli G, Peters A, Schaumann GE, Lang F (2014) The fate of silver nanoparticles in soil solution—sorption of solutes and aggregation. Sci Total Environ 535:54–60
Holden PA, Schimel JP, Godwin HA (2014) Five reasons to use bacteria when assessing manufactured nanomaterial environmental hazards and fates. Curr Opin Biotechnol 27:73–78
Grün A-L, Straskraba S, Schulz S, Schloter M, Emmerling C (2018) Long-term effects of environmentally relevant concentrations of silver nanoparticles on microbial biomass, enzyme activity, and functional genes involved in the nitrogen cycle of loamy soil. J Environ Sci 69:12–22
Grün A-L, Emmerling C (2018) Long-term effects of environmentally relevant concentrations of silver nanoparticles on major soil bacterial phyla of a loamy soil. Environ Sci Eur 30:1–13
Cornelis G, Hund-Rinke K, Kuhlbusch T, Van den Brink N, Nickel C (2014) Fate and bioavailability of engineered nanoparticles in soils: a review. Crit Rev Environ Sci Technol 44:2720–2764
Schlich K, Hund-Rinke K (2015) Influence of soil properties on the effect of silver nanomaterials on microbial activity in five soils. Environ Pollut 196:321–330
Settimio L, McLaughlin MJ, Kirby JK, Langdon KA, Janik L, Smith S (2015) Complexation of silver and dissolved organic matter in soil water extracts. Environ Pollut 199:174–184
Bundschuh M, Filser J, Lüderwald S, McKee MS, Metreveli G, Schaumann GE, Schulz R, Wagner S (2018) Nanoparticles in the environment: where do we come from, where do we go to? Environ Sci Eur 30:1–17
Lowry GV, Gregory KB, Apte SC, Lead JR (2012) Transformations of nanomaterials in the environment. Environ Sci Technol 46:6893–6899
Pachapur VL, Larios AD, Cledón M, Brar SK, Verma M, Surampalli R (2016) Behavior and characterization of titanium dioxide and silver nanoparticles in soils. Sci Total Environ 563:933–943
Masrahi A, VandeVoort AR, Arai Y (2014) Effects of silver nanoparticle on soil-nitrification processes. Arch Environ Con Tox 66:504–513
Rahmatpour S, Shirvani M, Mosaddeghi MR, Nourbakhsh F, Bazarganipour M (2017) Dose-response effects of silver nanoparticles and silver nitrate on microbial and enzyme activities in calcareous soils. Geoderma 285:313–322
Saei AA, Yazdani M, Lohse SE, Bakhtiary Z, Serpooshan V, Ghavami M, Asadian M, Mashaghi S, Dreaden EC, Mashaghi A (2017) Nanoparticle surface functionality dictates cellular and systemic toxicity. Chem Mat 29:6578–6595
Louie SM, Tilton RD, Lowry GV (2016) Critical review: impacts of macromolecular coatings on critical physicochemical processes controlling environmental fate of nanomaterials. Environ Sci Nano 3:283–310
Lead JR, Batley GE, Alvarez PJ, Croteau MN, Handy RD, McLaughlin MJ, Judy JD, Schirmer K (2018) Nanomaterials in the environment: behavior, fate, bioavailability, and effects—an updated review. Environ Toxicol Chem 27:2029–2063
Datta LP, Chatterjee A, Acharya K, De P, Das M (2017) Enzyme responsive nucleotide functionalized silver nanoparticles with effective antimicrobial and anticancer activity. New J Chem 41:1538–1548
Fröhlich E (2012) The role of surface charge in cellular uptake and cytotoxicity of medical nanoparticles. Int J Nanomed 7:1–15
Wu F, Harper BJ, Harper SL (2017) Differential dissolution and toxicity of surface functionalized silver nanoparticles in small-scale microcosms: impacts of community complexity. Environ Sci Nano 4:359–372
Grün A-L, Scheid P, Hauröder B, Emmerling C, Manz W (2017) Assessment of the effect of silver nanoparticles on the relevant soil protozoan genus Acanthamoeba. J Plant Nutr Soil Sci 180:602–613
Diez-Ortiz M, Lahive E, George S, Ter Schure A, Van Gestel CA, Jurkschat K, Svendsen C, Spurgeon DJ (2015) Short-term soil bioassays may not reveal the full toxicity potential for nanomaterials; bioavailability and toxicity of silver ions (AgNO3) and silver nanoparticles to earthworm Eisenia fetida in long-term aged soils. Environ Pollut 203:191–198
Zhai Y, Hunting ER, Wouterse M, Peijnenburg W, Vijver MG (2016) Silver nanoparticles, ions and shape governing soil microbial functional diversity: nano shapes micro. Front Microbiol 7:1–9
Marx MC, Wood M, Jarvis SC (2001) A microplate fluorimetric assay for the study of enzyme diversity in soils. Soil Biol Biochem 33:1633–1640
Ernst G, Henseler I, Felten D, Emmerling C (2009) Decomposition and mineralization of energy crop residues governed by earthworms. Soil Biol Biochem 41:1548–1554
Kreuzer K, Adamczyk J, Iijima M, Wagner M, Scheu S, Bonkowski M (2006) Grazing of a common species of soil protozoa (Acanthamoeba castellanii) affects rhizosphere bacterial community composition and root architecture of rice (Oryza sativa L.). Soil Biol Biochem 38:1665–1672
Töwe S, Kleineidam K, Schloter M (2010) Differences in amplification efficiency of standard curves in quantitative real-time PCR assays and consequences for gene quantification in environmental samples. J Microbiol Methods 82:338–341
Bach H-J, Tomanova J, Schloter M, Munch J (2002) Enumeration of total bacteria and bacteria with genes for proteolytic activity in pure cultures and in environmental samples by quantitative PCR mediated amplification. J Microbiol Methods 49:235–245
Rösch C, Mergel A, Bothe H (2002) Biodiversity of denitrifying and dinitrogen-fixing bacteria in an acid forest soil. Appl Environ Microbiol 68:3818–3829
Rotthauwe J-H, Witzel K-P, Liesack W (1997) The ammonia monooxygenase structural gene amoA as a functional marker: molecular fine-scale analysis of natural ammonia-oxidizing populations. Appl Environ Microbiol 63:4704–4712
Barns SM, Takala SL, Kuske CR (1999) Wide distribution and diversity of members of the bacterial kingdom Acidobacterium in the environment. Appl Environ Microbiol 65:1731–1737
Muyzer G, De Waal EC, Uitterlinden AG (1993) Profiling of complex microbial populations by denaturing gradient gel electrophoresis analysis of polymerase chain reaction-amplified genes coding for 16S rRNA. Appl Environ Microbiol 59:695–700
Stach JE, Maldonado LA, Ward AC, Goodfellow M, Bull AT (2003) New primers for the class Actinobacteria: application to marine and terrestrial environments. Environ Microbiol 5:828–841
Bacchetti De Gregoris T, Aldred N, Clare AS, Burgess JG (2011) Improvement of phylum- and class-specific primers for real-time PCR quantification of bacterial taxa. J Microbiol Methods 86:351–356
Manz W, Amann R, Ludwig W, Vancanneyt M, Schleifer K-H (1996) Application of a suite of 16S rRNA-specific oligonucleotide probes designed to investigate bacteria of the phylum cytophaga-flavobacter-bacteroides in the natural environment. Microbiol 142:1097–1106
Overmann J, Coolen MJ, Tuschak C (1999) Specific detection of different phylogenetic groups of chemocline bacteria based on PCR and denaturing gradient gel electrophoresis of 16S rRNA gene fragments. Arch Microbiol 172:83–94
Lane D (1991) 16S/23S rRNA sequencing. In: Stackebrandt E, Goodfellow M (eds) Nucleic acid techniques in bacterial systematics. Wiley, England
Kuramae EE, Yergeau E, Wong LC, Pijl AS, van Veen JA, Kowalchuk GA (2012) Soil characteristics more strongly influence soil bacterial communities than land-use type. FEMS Microbiol Ecol 79:12–24
Kallenbach CM, Frey SD, Grandy AS (2016) Direct evidence for microbial-derived soil organic matter formation and its ecophysiological controls. Nat Commun 7:1–10
Ma W, Jiang S, Assemien F, Qin M, Ma B, Xie Z, Liu Y, Feng H, Du G, Ma X (2016) Response of microbial functional groups involved in soil N cycle to N, P and NP fertilization in Tibetan alpine meadows. Soil Biol Biochem 101:195–206
McGee CF, Storey S, Clipson N, Doyle E (2018) Concentration-dependent responses of soil bacterial, fungal and nitrifying communities to silver nano and micron particles. Environ Sci Pollut Res 25:18693–18704
Emmerling C, Schloter M, Hartmann A, Kandeler E (2002) Functional diversity of soil organisms-a review of recent research activities in Germany. J Plant Nutr Soil Sci 165:408–420
Rincon-Florez VA, Carvalhais LC, Schenk PM (2013) Culture-independent molecular tools for soil and rhizosphere microbiology. Diversity 5:581–612
Zhang X, Liu W, Schloter M, Zhang G, Chen Q, Huang J, Li L, Elser JJ, Han X (2013) Response of the abundance of key soil microbial nitrogen-cycling genes to multi-factorial global changes. PLoS ONE 8:e76500
Fierer N, Bradford MA, Jackson RB (2007) Toward an ecological classification of soil bacteria. Ecology 88:1354–1364
Torsvik V, Øvreås L (2002) Microbial diversity and function in soil: from genes to ecosystems. Curr Opin Microbiol 5:240–245
Allison SD, Martiny JB (2008) Resistance, resilience, and redundancy in microbial communities. PNAS 105:11512–11519
Ventura M, Canchaya C, Tauch A, Chandra G, Fitzgerald GF, Chater KF, van Sinderen D (2007) Genomics of Actinobacteria: tracing the evolutionary history of an ancient phylum. Microbiol Mol Biol Rev 71:495–548
Zhang Y, Cong J, Lu H, Li G, Qu Y, Su X, Zhou J, Li D (2014) Community structure and elevational diversity patterns of soil Acidobacteria. J Environ Sci 26:1717–1724
Li X, Rui J, Xiong J, Li J, He Z, Zhou J, Yannarell AC, Mackie RI (2014) Functional potential of soil microbial communities in the maize rhizosphere. PLoS ONE 9:e112609
Ward NL, Challacombe JF, Janssen PH, Henrissat B, Coutinho PM, Wu M, Xie G, Haft DH, Sait M, Badger J (2009) Three genomes from the phylum Acidobacteria provide insight into the lifestyles of these microorganisms in soils. Appl Environ Microbiol 75:2046–2056
Kielak AM, Barreto CC, Kowalchuk GA, van Veen JA, Kuramae EE (2016) The ecology of Acidobacteria: moving beyond genes and genomes. Front Microbiol 7:1–16
Sharma S, Szele Z, Schilling R, Munch JC, Schloter M (2006) Influence of freeze-thaw stress on the structure and function of microbial communities and denitrifying populations in soil. Appl Environ Microbiol 72:2148–2154
Feng X, Nielsen LL, Simpson MJ (2007) Responses of soil organic matter and microorganisms to freeze–thaw cycles. Soil Biol Biochem 39:2027–2037
Liu C, Leng W, Vikesland PJ (2018) Controlled evaluation of the impacts of surface coatings on silver nanoparticle dissolution rates. Environ Sci Technol 52:2726–2734
Badawy AME, Luxton TP, Silva RG, Scheckel KG, Suidan MT, Tolaymat TM (2010) Impact of environmental conditions (pH, ionic strength, and electrolyte type) on the surface charge and aggregation of silver nanoparticles suspensions. Environ Sci Technol 44:1260–1266
Long Y-M, Hu L-G, Yan X-T, Zhao X-C, Zhou Q-F, Cai Y, Jiang G-B (2017) Surface ligand controls silver ion release of nanosilver and its antibacterial activity against Escherichia coli. Int J Nanomed 12:1–14
Yang Y, Quensen J, Mathieu J, Wang Q, Wang J, Li M, Tiedje JM, Alvarez PJ (2014) Pyrosequencing reveals higher impact of silver nanoparticles than Ag+ on the microbial community structure of activated sludge. Water Res 48:317–325
McGee C, Storey S, Clipson N, Doyle E (2017) Soil microbial community responses to contamination with silver, aluminium oxide and silicon dioxide nanoparticles. Ecotoxicology 26:449–458
Cho EC, Xie J, Wurm PA, Xia Y (2009) Understanding the role of surface charges in cellular adsorption versus internalization by selectively removing gold nanoparticles on the cell surface with a I2/KI etchant. Nano Lett 9:1080–1084
Villanueva A, Cañete M, Roca AG, Calero M, Veintemillas-Verdaguer S, Serna CJ, del Puerto MM, Miranda R (2009) The influence of surface functionalization on the enhanced internalization of magnetic nanoparticles in cancer cells. Nanotechnology 20:1–9
Wang D, Jaisi DP, Yan J, Jin Y, Zhou D (2015) Transport and retention of polyvinylpyrrolidone-coated silver nanoparticles in natural soils. Vadose Zone J 14:vzj2015.01.0007
Li X, Lenhart JJ, Walker HW (2010) Dissolution-accompanied aggregation kinetics of silver nanoparticles. Langmuir 26:16690–16698
Maillard J-Y, Hartemann P (2013) Silver as an antimicrobial: facts and gaps in knowledge. Crit Rev Microbiol 39:373–383
McShan D, Ray PC, Yu H (2014) Molecular toxicity mechanism of nanosilver. J Food Drug Anal 22:116–127
Schlich K, Klawonn T, Terytze K, Hund-Rinke K (2013) Hazard assessment of a silver nanoparticle in soil applied via sewage sludge. Environ Sci Eur 25:1–14
Yang Y, Wang J, Xiu Z, Alvarez PJ (2013) Impacts of silver nanoparticles on cellular and transcriptional activity of nitrogen-cycling bacteria. Environ Toxicol Chem 32:1488–1494
Choi O, Deng KK, Kim N-J, Ross L, Surampalli RY, Hu Z (2008) The inhibitory effects of silver nanoparticles, silver ions, and silver chloride colloids on microbial growth. Water Res 42:3066–3074
Liang Z, Das A, Hu Z (2010) Bacterial response to a shock load of nanosilver in an activated sludge treatment system. Water Res 44:5432–5438
Giles ME, Morley NJ, Baggs EM, Daniell TJ (2012) Soil nitrate reducing processes-drivers, mechanisms for spatial variation, and significance for nitrous oxide production. Front Microbiol 3:407–423
Ji B, Yang K, Zhu L, Jiang Y, Wang H, Zhou J, Zhang H (2015) Aerobic denitrification: A review of important advances of the last 30 years. Biotechnol Bioprocess Eng 20:643–651
Tiedje JM (1988) Ecology of denitrification and dissimilatory nitrate reduction to ammonium. Biol Anaerob Microor 717:179–244
Asadishad B, Chahal S, Akbari A, Cianciarelli V, Azodi M, Ghoshal S, Tufenkji N (2018) Amendment of agricultural soil with metal nanoparticles: effects on soil enzyme activity and microbial community composition. Environ Sci Technol 52:1908–1918
Panáček A, Kvítek L, Smékalová M, Večeřová R, Kolář M, Röderová M, Dyčka F, Šebela M, Prucek R, Tomanec O (2018) Bacterial resistance to silver nanoparticles and how to overcome it. Nat Nanotechnol 13:65
Samarajeewa AD, Velicogna JR, Princz JI, Subasinghe RM, Scroggins RP, Beaudette LA (2017) Effect of silver nano-particles on soil microbial growth, activity and community diversity in a sandy loam soil. Environ Pollut 220:504–513
He S, Feng Y, Ni J, Sun Y, Xue L, Feng Y, Yu Y, Lin X, Yang L (2016) Different responses of soil microbial metabolic activity to silver and iron oxide nanoparticles. Chemosphere 147:195–202
Metreveli G, Frombold B, Seitz F, Grün A, Philippe A, Rosenfeldt RR, Bundschuh M, Schulz R, Manz W, Schaumann GE (2016) Impact of chemical composition of ecotoxicological test media on the stability and aggregation status of silver nanoparticles. Environ Sci Nano 3:418–433
Jacobson AR, McBride MB, Baveye P, Steenhuis TS (2005) Environmental factors determining the trace-level sorption of silver and thallium to soils. Sci Total Environ 345:191–205
Levard C, Hotze EM, Lowry GV, Brown GE Jr (2012) Environmental transformations of silver nanoparticles: impact on stability and toxicity. Environ Sci Technol 46:6900–6914
Tripathi DK, Tripathi A, Singh S, Singh Y, Vishwakarma K, Yadav G, Sharma S, Singh VK, Mishra RK, Upadhyay R (2017) Uptake, accumulation and toxicity of silver nanoparticle in autotrophic plants, and heterotrophic microbes: a concentric review. Front Microbiol 8:1–16
Jung WK, Koo HC, Kim KW, Shin S, Kim SH, Park YH (2008) Antibacterial activity and mechanism of action of the silver ion in Staphylococcus aureus and Escherichia coli. Appl Environ Microbiol 74:2171–2178
Nies DH (1999) Microbial heavy-metal resistance. Appl Microbiol Biotechnol 51:730–750
Keesstra SD, Bouma J, Wallinga J, Tittonell P, Smith P, Cerdà A, Montanarella L, Quinton JN, Pachepsky Y, van der Putten WH (2016) The significance of soils and soil science towards realization of the United nations sustainable development goals. Soil 2:111–128
Wu B, Wang Y, Lee Y-H, Horst A, Wang Z, Chen D-R, Sureshkumar R, Tang YJ (2010) Comparative eco-toxicities of nano-ZnO particles under aquatic and aerosol exposure modes. Environ Sci Technol 44:1484–1489
Sudheer Khan S, Bharath Kumar E, Mukherjee A, Chandrasekaran N (2011) Bacterial tolerance to silver nanoparticles (SNPs): aeromonas punctata isolated from sewage environment. J Basic Microbiol 51:183–190
Juan W, Kunhui S, Zhang L, Youbin S (2017) Effects of silver nanoparticles on soil microbial communities and bacterial nitrification in suburban vegetable soils. Pedosphere 27:482–490
Ma Y, Metch JW, Vejerano EP, Miller IJ, Leon EC, Marr LC, Vikesland PJ, Pruden A (2015) Microbial community response of nitrifying sequencing batch reactors to silver, zero-valent iron, titanium dioxide and cerium dioxide nanomaterials. Water Res 68:87–97
Cornelis G, Kirby JK, Beak D, Chittleborough D, McLaughlin MJ (2010) A method for determination of retention of silver and cerium oxide manufactured nanoparticles in soils. Environ Chem 7:298–308
Shoults-Wilson WA, Reinsch BC, Tsyusko OV, Bertsch PM, Lowry GV, Unrine JM (2011) Role of particle size and soil type in toxicity of silver nanoparticles to earthworms. Soil Sci Soc Am J 75:365–377
Postgate JR (1967) Viability measurements and the survival of microbes under minimum stress. Adv Microb Physiol 1:1–23
Cornelis G, DooletteMadeleine Thomas C, McLaughlin MJ, Kirby JK, Beak DG, Chittleborough D (2012) Retention and dissolution of engineered silver nanoparticles in natural soils. Soil Sci Soc Am J 76:891–902
Sagee O, Dror I, Berkowitz B (2012) Transport of silver nanoparticles (AgNPs) in soil. Chemosphere 88:670–675
Liu J, Hurt RH (2010) Ion release kinetics and particle persistence in aqueous nano-silver colloids. Environ Sci Technol 44:2169–2175
Marambio-Jones C, Hoek EM (2010) A review of the antibacterial effects of silver nanomaterials and potential implications for human health and the environment. J Nanopart Res 12:1531–1551
Verma A, Stellacci F (2010) Effect of surface properties on nanoparticle–cell interactions. Small 6:12–21
Park E-J, Yi J, Kim Y, Choi K, Park K (2010) Silver nanoparticles induce cytotoxicity by a Trojan-horse type mechanism. Toxicol Vitro 24:872–878
Uwizeyimana H, Wang M, Chen W, Khan K (2017) The eco-toxic effects of pesticide and heavy metal mixtures towards earthworms in soil. Environ Toxicol Pharmacol 55:20–29
Stein LY, Arp DJ (1998) Loss of ammonia monooxygenase activity in Nitrosomonas europaea upon exposure to nitrite. Appl Environ Microbiol 64:4098–4102
Bollag J-M, Henninger NM (1978) Effects of nitrite toxicity on soil bacteria under aerobic and anaerobic conditions. Soil Biol Biochem 10:377–381
Das P, Barua S, Sarkar S, Chatterjee SK, Mukherjee S, Goswami L, Das S, Bhattacharya S, Karak N, Bhattacharya SS (2018) Mechanism of toxicity and transformation of silver nanoparticles: inclusive assessment in earthworm-microbe-soil-plant system. Geoderma 314:73–84
Cornelis G, Pang L, Doolette C, Kirby JK, McLaughlin MJ (2013) Transport of silver nanoparticles in saturated columns of natural soils. Sci Total Environ 463:120–130
Gunsolus IL, Mousavi MP, Hussein K, Bühlmann P, Haynes CL (2015) Effects of humic and fulvic acids on silver nanoparticle stability, dissolution, and toxicity. Environ Sci Technol 49:8078–8086
JC designed and manufactured the Ag10-NH2 and Ag10-COOH nanoparticles. SS performed and evaluated the transmission electron microscopy of AgNP. KY, MF, and DR performed the ζ-potential and size measurements of AgNP as well as the asymmetrical flow field-flow fractionation (AF4) of AgNP. GAL and EC designed and performed the soil experiments. GAL, EC and MW analysed the data and wrote the manuscript. All authors read and approved the final manuscript.
We thank Elvira Sieberger (University of Trier) for her excellent laboratory support and assistance.
All datasets on which the conclusions of the paper rely are presented in the main manuscript (Additional files 1 and 2).
The study was financially supported by the BMBF (Grant No. 03X0150) and the German Research Foundation (DFG Research unit FOR 1536, MA 3273/3-2).
Department of Biology, Institute for Integrated Natural Sciences, University of Koblenz-Landau, Universitätsstr. 1, 56070, Koblenz, Germany
Anna-Lena Grün
& Werner Manz
Fraunhofer Institute for Biomedical Engineering IBMT, Joseph-von-Fraunhofer-Weg 1, 66280, Sulzbach, Germany
Yvonne Lydia Kohl
Postnova Analytics GmbH, Max-Planck Straße 14, 86899, Landsberg, Germany
Florian Meier
& Roland Drexel
Institute of Molecular Biosciences, J.W. Goethe University, Max-von-Laue-Straße 9, 60438, Frankfurt am Main, Germany
Susanne Straskraba
PlasmaChem GmbH, Schwarzschildstraße 10, 12489, Berlin, Germany
Carsten Jost
Department of Soil Science, Faculty of Regional and Environmental Science, University of Trier, Campus II, 54286, Trier, Germany
& Christoph Emmerling
Correspondence to Christoph Emmerling.
Additional file 1. Characterization of AgNP.
Additional file 2. Raw data.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Keywords: Silver nanoparticles · Soil microbial community · Functional diversity
Applied Network Science
A road network simplification algorithm that preserves topological properties
Jinyoung Pung1,
Raissa M. D'Souza1,2,3,
Dipak Ghosal1 &
Michael Zhang4
Applied Network Science volume 7, Article number: 79 (2022)
A road network can be represented as a weighted directed graph with the nodes being the traffic intersections, the edges being the road segments, and the weights being some attribute of a road segment. Such a representation enables researchers to analyze road networks in consistent and automatable ways from the perspective of graph theory. For example, analysis of the graph along with the traffic demand pattern can identify critical road segments based on centrality measures. However, due to the complexity of real-world road networks and the computationally expensive algorithms, it is challenging to extend such methods to a large-scale road network. In this paper, we present a simple yet efficient network simplification framework based on graph theory that sub-samples and simplifies the graph while preserving key topological characteristics of the original network. Our method iteratively identifies and removes network elements that do not contribute to transportation functionality, such as self-loops, dead-ends, and interstitial nodes that lie on the same road line. We applied this method to three small cities with distinct street patterns and one large city, and show that the topological characteristics of the original networks are preserved by comparing two distinct kinds of centrality distributions in the original and simplified networks.
In an urban road network, many roads and streets have complex inter-connections. Due to this complexity, researchers often represent road networks as spatial graphs for systematic analysis. This enables graph-theoretic algorithms to be applied to road networks. However, even a small city contains thousands of road segments, and a detailed graph representation makes it challenging to use original road networks as input to analysis models (Bazzi et al. 2010).
To deal with this issue, researchers have used subsampled versions of city-level road networks. Porta et al. (2006) investigated spatial graphs of four different cities by Multiple Centrality Assessment, which assesses four different centrality measures to capture topological and geometric characteristics from different perspectives. They limited the area of study to 1-square-mile sub-regions of the original networks. Park and Yilmaz (2010) investigated centrality measures and their entropy in road networks. In their study, small graphs with a maximum of 104 nodes were used to represent sub-regions of cities. Youn et al. (2008) assessed the price of anarchy by computing the difference in travel time between an origin and destination pair under social optima and Nash equilibria. Instead of the original network, they used the skeleton network of each city, which consists of only principal arterial roads. Although these subsampling approaches make the studies computationally feasible, the results obtained from only a small portion of a city are biased towards that selected part and may not be representative of the city. Also, selecting specific types of roads omits roads of other types regardless of their topological importance, which can result in unexpected disconnections.
Similarly, traffic simulation studies of city-wide networks using microscopic traffic simulators such as VISSIM (Fellendorf and Vortisch 2010) also suffer from expensive computational costs. Many different user groups, such as researchers and transportation system authorities, have used traffic simulators to test new ideas and to easily collect data without interfering with real-world networks. However, these simulators have considerable execution runtimes, even for moderately sized cities. Furthermore, since traffic simulators must account for the stochastic nature of traffic demand, they need to be run multiple times for reliable results. This further exacerbates the poor scalability with the size of the road network (Antoniou et al. 2014).
Motivated by these issues, we propose a novel road network simplification framework that preserves topological and geometric characteristics of the original network. Our method iteratively removes cul-de-sacs and gridiron patterns made of low-level roads from a road network represented as a directed graph. Porta et al. (2006) found that betweenness centrality and information centrality well represent the backbone and collective behaviors of road networks, so we assessed these two centrality distributions of the original and simplified networks to examine differences in topological characteristics. We found that this simple process significantly reduced the volume of the original networks, while the simplified networks have centrality distributions very similar to those of the original networks.
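As a concrete illustration of this kind of pruning, the sketch below iteratively removes self-loops and dead-end nodes from a directed graph with networkx and then computes the two centrality measures before and after. It is a minimal reading of the described procedure, not the authors' published implementation, and the random graph stands in for a real road network.

```python
import networkx as nx

def prune_road_graph(G: nx.DiGraph) -> nx.DiGraph:
    """Iteratively drop self-loops and dead-ends (cul-de-sac-like nodes)."""
    H = G.copy()
    H.remove_edges_from(list(nx.selfloop_edges(H)))
    while True:
        und = H.to_undirected()
        # A dead-end touches at most one other node when directions are ignored.
        dead_ends = [n for n in H if und.degree(n) <= 1]
        if not dead_ends:
            break
        H.remove_nodes_from(dead_ends)
    return H

# Stand-in network; a real study would load the city's directed road graph.
G = nx.gnp_random_graph(300, 0.02, seed=1, directed=True)
H = prune_road_graph(G)
print(len(G), "->", len(H), "nodes after pruning")

bc_orig = nx.betweenness_centrality(G)
bc_simp = nx.betweenness_centrality(H)

# Information centrality requires a connected undirected graph (and scipy),
# so here it is evaluated on the largest connected component only.
und = H.to_undirected()
giant = und.subgraph(max(nx.connected_components(und), key=len))
ic_simp = nx.information_centrality(giant)
```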
Given that our framework preserves key topological characteristics of the original network after simplification, a simplified network can be used for road network analysis instead of the large and detailed original network. This makes road network analysis more scalable with the size of the road network. Also, unlike most previous approaches, our method takes as input a directed graph that preserves the directionality of roads, which allows the simplification framework to be used for more diverse purposes of road network analysis, such as routing of vehicles.
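A tiny example of why directionality matters for downstream uses like routing: in the hypothetical directed graph below, the shortest route from A to C differs from the route back, which an undirected representation could not express.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edge("A", "B", length=1.0)  # one-way segment A -> B
G.add_edge("B", "C", length=1.0)  # one-way segment B -> C
G.add_edge("C", "A", length=1.0)  # one-way segment C -> A

print(nx.shortest_path(G, "A", "C", weight="length"))  # ['A', 'B', 'C']
print(nx.shortest_path(G, "C", "A", weight="length"))  # ['C', 'A']
```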
In practice, reduced representations of road networks are commonly used for road network generalization in cartographic maps. Since real-world space contains an enormous amount of information, maps with limited space need to deliver information efficiently by highlighting the relatively more important elements. In this context, road network generalization methods usually work as a selective omission process: network elements are sorted according to given criteria, and less important elements with low importance ranks are selectively omitted.
There are various techniques for road network generalization. Following early work by Mackaness and Beard (1993), many researchers (Mackaness 1995; Thomson and Richardson 1995; Jiang and Claramunt 2004; Jiang and Harrie 2004) proposed graph-based approaches for road network generalization, in which road segments and intersections are represented as nodes and edges respectively (or vice versa). These studies identified important network elements to be retained in generalized maps by incorporating graph-theoretic algorithms such as minimum spanning trees, shortest paths, and centrality. However, since they consider mostly topological relationships, traditional graph-based approaches ignore the semantic and geometric properties of roads.
Thomson and Richardson (1999), based on the principle of 'good continuity', defined a stroke as a group of road segments that have the same road type and intersect at a small angle. Instead of individual road segments, they used strokes as the units of their generalization model, computed the importance rank of strokes, and selected the important ones. In this approach, output networks retain continuous network elements after generalization. Inspired by the stroke-based method, several studies (Thomson and Brooks 2001; Liu et al. 2010; Yu et al. 2020) developed methods with strokes as selection units. Chen et al. (2009) used a mesh, a closed region bounded by road segments, as the network element unit. Their method starts by identifying meshes whose density exceeds a given threshold, which are then merged with their adjacent meshes. Compared to the stroke-based approach, the mesh-based approach achieves a more uniform distribution of roads in the generalized network.
More recently, data-driven approaches have been proposed to reflect traffic flow patterns in addition to the geometric and topological properties of networks. Yu et al. (2020) modified the stroke-based method by including the relationship between road segments and traffic flows when computing the importance score of strokes. Van De Kerkhof et al. (2020) sorted a set of car trajectories consisting of consecutive road segments and selected the road segments that belong to high-rank trajectories. These approaches can better preserve connectivity along routes that are frequently used by drivers.
The above-mentioned road network generalization methods can reduce the volume of road networks by any given threshold in a continuous way. However, their primary objective is to make maps at different scales more "readable", and the threshold is set regardless of the structure of the network, causing unexpected changes in connectivity and topological characteristics after generalization. Although recent data-driven approaches attempt to prevent unexpected disconnection of functionally important road segments, they rely on historical data, so potentially useful road segments can still be removed, causing undesired disconnections.
There are several road network simplification methods that work without any pre-determined threshold on the size of the simplified network. Boeing (2017) simplified road networks by removing interstitial nodes that lie on the same road line. Since these nodes merely extend roads that do not branch, they are treated as redundant and replaced with a single edge that concatenates the road segments into one. Huynh and Selvakumar (2020) proposed a simplification method that iteratively cuts short dangling paths, identifies clusters of road network components, and then collapses each cluster into a single node.
However, these approaches have several drawbacks. First, the method proposed by Boeing (2017) only addresses sequential road chunks on the same road line. Second, the method proposed by Huynh and Selvakumar (2020) is applicable only to undirected graphs and thus cannot be used to analyze the dynamic nature of road networks. Their method also assumes that road networks are strictly planar, i.e. that they can be well represented by a two-dimensional model without any underpasses or overpasses, which is not always true. In this paper, we propose a novel road network simplification method that preserves key topological characteristics of the original, in which a directed graph converges to a simplified directed graph without any pre-determined threshold.
OSMnx (Boeing 2017) is a Python package that downloads road networks from OpenStreetMap (Haklay and Weber 2008) and constructs them as primal, non-planar, weighted multidigraphs. This means that nodes and directed edges represent intersections and roads respectively (primal); grade-separated roads such as overpasses and underpasses do not form an intersection (non-planar); and geographic and spatial information such as road length is included in the edge attributes, which can be used as weights. We used OSMnx for our study since these distinctive features take the dynamic nature of road networks into account.
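For illustration, the construction step can be sketched in a few lines of Python; this is a minimal sketch assuming a recent OSMnx version in which graph_from_place is available, and the place name is only an example:

```python
import osmnx as ox

# Download the drivable street network of a city as a primal, non-planar,
# weighted MultiDiGraph; each edge carries a "length" attribute in meters.
G = ox.graph_from_place("Davis, California, USA", network_type="drive")

print(G.number_of_nodes(), G.number_of_edges())
print(G.is_directed(), G.is_multigraph())  # True, True: a multidigraph
```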
Urban road networks have hierarchical structures. High-level roads (e.g., motorways and arterial roads) transport a large number of vehicles at high speeds, while low-level roads (e.g., residential roads) have lower speed limits and provide access between high-level roads and local areas. Low-level roads have little impact on vehicular dynamics from the perspective of global transportation. However, some low-level roads may provide detours and shortcuts between sub-regions or distribute traffic away from congestion on high-level roads. Thus, arbitrary removal of low-level roads may have a non-trivial impact on traffic flow, and topological context should be considered in any road network simplification process. In our method, we distinguish trivial, superfluous low-level roads from topologically important roads, then selectively omit the former so that the topological characteristics of the original network are preserved after simplification. To identify the redundant roads, we utilize three patterns in residential street networks suggested by Southworth and Ben-Joseph (2013): loops and lollipops, lollipops on a stick, and gridiron. Figure 1 shows an example network for each pattern.
Fig. 1 Example networks of residential street patterns: a loops and lollipops, b lollipops on a stick, and c gridiron
The loops and lollipops pattern is characterized by the presence of loops and cul-de-sacs. Neither loops nor cul-de-sacs are likely to contribute to transportation functionality, since a loop returns to its starting point and a cul-de-sac does not provide any through passage to the rest of the network. The lollipops on a stick pattern consists of a few through streets with cul-de-sacs branching off them; these cul-de-sacs are considered redundant as explained above. Some studies highlight the effect of cul-de-sacs on road networks. Batac and Cirunay (2022) and Distel (2015) point out that travel from one dead-end node to another is sinuous, especially if the path is very short, which may translate into a degradation in the quality of travel. Also, in the study of Li et al. (2022), cul-de-sacs may provide access points where traffic flows into a traffic analysis zone (TAZ) of interest from outside the TAZ; as some features of a TAZ are computed using the number of access points, cul-de-sacs are not trivial in their study. In short, Batac and Cirunay (2022), Li et al. (2022), and Distel (2015) show that cul-de-sacs have local impacts on road networks. Since we perform a city-level analysis of road networks and focus on network-wide properties, we remove cul-de-sacs in the proposed simplification framework, ignoring their potential local impacts.
The gridiron pattern is a simple system of two series of parallel streets crossing at right angles to form rectangular blocks, which provides many route choices. Distel (2015) and Daganzo et al. (2011) argue that the gridiron pattern may have a critical impact on the road network, as it may cause gridlock congestion, especially when traffic demand is very high. In our study, however, we simplify only gridiron patterns that consist entirely of low-level roads. These low-level roads remain largely unused, since they have the same direction and length as their nearest high-level roads and will only be used when the high-level roads are highly degraded or disrupted. We therefore consider them trivial from a network-wide traffic perspective and remove them during simplification. We used the road type information tagged in OSM to identify low-level roads; roads with the residential tag are considered low-level roads.
Our method identifies the target patterns using the topological, geometric, and semantic information of a road network. The framework is depicted in Fig. 2 and consists of five steps:
1. Parallel edges are removed from the input graph, leaving only the shortest edge between two adjacent nodes. The removed edges are relatively long and generally provide access from main roads to residential areas.
2. Self-loop edges are removed from the graph. Circular ends for easy turning at the end of roads, which are represented as self-loops in a graph, not only add unnecessary overhead but also make a dead-end node have at least two adjacent nodes (itself and its neighbor). Removing self-loops ensures dead-end nodes have a single adjacent node for the next step.
3. The graph is simplified by removing dead ends, i.e. nodes that have only one adjacent node, together with their incident edges. These components only provide access to the end node and can be collapsed to the entrance of each cul-de-sac.
4. Areas with the gridiron pattern are simplified by removing low-level components. Nodes that satisfy all of the following conditions are removed along with their incident edges: (1) the node has exactly 4 adjacent neighbors, (2) the maximum length of its incident edges is less than 300 m, (3) the road type of all incident edges is residential, and (4) at least two nodes satisfying conditions (1)-(3) are adjacent.
5. The interstitial nodes on a single road line are removed by replacing the sub-edges with a single unified edge; we used the method proposed by Boeing (2017) for this step. These five steps are iterated until the input graph converges to a final graph upon which no further simplification can be made (a minimal sketch of steps 1-3 is given below).
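To make steps 1-3 concrete, the sketch below implements them on a networkx MultiDiGraph. It is an illustrative reconstruction under our own assumptions (edge lengths stored in a "length" attribute, as OSMnx does), not the authors' released code, and it omits steps 4 and 5:

```python
import networkx as nx

def simplify_once(G):
    """One pass of steps 1-3 on a MultiDiGraph with 'length' edge attributes.
    Returns the number of dead-end nodes removed in this pass."""
    # Step 1: among parallel edges between the same node pair, keep the shortest.
    for u, v in {(u, v) for u, v, _ in G.edges(keys=True)}:
        keys = list(G[u][v])
        if len(keys) > 1:
            best = min(keys, key=lambda k: G[u][v][k].get("length", 0.0))
            for k in keys:
                if k != best:
                    G.remove_edge(u, v, key=k)
    # Step 2: remove self-loop edges (e.g., circular turning ends).
    G.remove_edges_from(list(nx.selfloop_edges(G)))
    # Step 3: remove dead ends, i.e. nodes with at most one adjacent node.
    dead = [n for n in G.nodes
            if len(set(G.predecessors(n)) | set(G.successors(n))) <= 1]
    G.remove_nodes_from(dead)
    return len(dead)

# Toy example: a two-way street, a longer parallel edge, a self-loop,
# and a dead-end spur. Iterate until no further change occurs.
G = nx.MultiDiGraph()
G.add_edge(1, 2, length=100.0); G.add_edge(2, 1, length=100.0)
G.add_edge(2, 3, length=80.0);  G.add_edge(2, 3, length=120.0)  # parallel edge
G.add_edge(3, 3, length=30.0)                                   # self-loop
G.add_edge(3, 4, length=50.0)                                   # dead end
while simplify_once(G) > 0:
    pass
```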
Fig. 2 The five-step simplification framework and the components removed in each step
Tracking regional node density of the original network
The simplified network has lower node density than the original, especially in residential areas, as the framework removes nodes in the target areas. This can lead to different topological characteristics (e.g. centrality measures) after simplification. To circumvent this, we introduce a node attribute aggr_node_number to keep track of the regional node density of the original network. The value of the attribute is initialized to 1 for each node in the original graph. When a node or a group of nodes is removed during simplification, the aggr_node_number value of the node, or the sum of the aggr_node_number values of the group of nodes being removed, is distributed equally among its neighbors. Intuitively, this attribute of a node in the simplified network represents the number of other nodes that were collapsed into it, which can be used to approximate the node density of the original network. In our study, we used it to better estimate the centrality measures of original networks from simplified networks, as explained in detail later. The attribute has other potential uses, such as generating origin and destination pairs in traffic simulations.
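This bookkeeping can be sketched as follows; a minimal illustration of the redistribution rule, with function names of our own choosing:

```python
import networkx as nx

def init_aggr(G):
    """Give every node of the original graph an aggr_node_number of 1."""
    nx.set_node_attributes(G, 1.0, "aggr_node_number")

def remove_with_redistribution(G, nodes):
    """Remove a node (or group of nodes) and spread its accumulated
    aggr_node_number equally over the surviving neighbors of the group."""
    group = set(nodes)
    total = sum(G.nodes[n]["aggr_node_number"] for n in group)
    neighbors = set()
    for n in group:
        neighbors |= set(G.predecessors(n)) | set(G.successors(n))
    neighbors -= group
    G.remove_nodes_from(group)
    if neighbors:
        share = total / len(neighbors)
        for m in neighbors:
            G.nodes[m]["aggr_node_number"] += share
```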
Centrality measures and estimation
In graph theory and network analysis, the basic idea of centrality is that some nodes and edges in a network are relatively more central or important than others. Since the first set of centrality indices was defined for social network analysis by Freeman (1978), various centrality indices have been suggested and widely applied to many other fields of study, including road network analysis (Porta et al. 2006; Park and Yilmaz 2010; Zhang et al. 2011; Huynh and Selvakumar 2020). In our study, we utilized two centrality indices: betweenness centrality and information centrality. Porta et al. (2006) showed that these indices nicely capture the backbone structure and collective behavior of a road network. Observing the difference in the distribution of centrality measurements before and after simplification provides a measure of how much a simplification method distorts the topological characteristics of a road network.
Edge betweenness centrality
Edge betweenness centrality generalizes Freeman's betweenness centrality to edges (Girvan and Newman 2002); it measures how frequently an edge lies on the shortest paths connecting pairs of nodes in a graph. In a road network, the higher the betweenness of an edge, the more shortest routes it provides and the more it is likely to contribute to transportation in a city. The betweenness centrality \(C^B\) of an edge e is defined by
$$\begin{aligned} C^B(e) = \frac{1}{N(N-1)} \sum _{s,t \in V} \frac{\sigma (s,t|e)}{\sigma (s,t)} \end{aligned}$$
where N is the number of nodes in the graph, V is the set of nodes, \(\sigma (s,t)\) is the number of shortest paths between an origin and destination pair (s, t), and \(\sigma (s,t|e)\) is the number of those paths that pass through edge e.
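In practice this quantity is available directly in networkx; a short sketch on a toy graph (for directed graphs, normalized=True applies the same 1/(N(N-1)) factor as in the definition above):

```python
import networkx as nx

# Toy directed road graph with uniform segment lengths.
G = nx.cycle_graph(6, create_using=nx.DiGraph)
nx.set_edge_attributes(G, 1.0, "length")

# Length-weighted edge betweenness centrality, normalized by N(N-1).
cb = nx.edge_betweenness_centrality(G, weight="length", normalized=True)
print(max(cb, key=cb.get), max(cb.values()))
```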
Edge information centrality
Based on the concept of efficient propagation of information over a social network, Latora and Marchiori defined information centrality as the relative drop in network efficiency caused by the removal of a node (Latora and Marchiori 2004), where network efficiency represents how efficiently information is exchanged over the network (Latora and Marchiori 2001). Fortunato et al. (2004) generalized information centrality to edges, defining edge information centrality. Applying edge information centrality to a road network, we interpret network efficiency as the sum, over all origin and destination pairs (s, t), of the ratio between the straight-line distance and the shortest-path length. The normalized network efficiency E for a weighted graph G, as proposed in Vragović et al. (2005), is given by:
$$\begin{aligned} E(G) = \frac{1}{N(N-1)} \sum _{s,t \in V; s \ne t } \varepsilon _{st} = \frac{1}{N(N-1)} \sum _{s,t \in V; s \ne t } \frac{d^{Eucl}_{st}}{d_{st}} \end{aligned}$$
where \(\varepsilon _{st}\) is the efficiency of travel from node s to t, \(d^{Eucl}_{st}\) is the Euclidean distance between nodes s and t, and \(d_{st}\) is the length of the shortest path from s to t. In case there is no path from s to t, \(d_{st}=\infty\) and, consequently, \(\varepsilon _{st}=0\). Thus edge information centrality is well defined for weakly connected or disconnected graphs. The removal of an edge forces origin and destination pairs to choose alternative paths, or leaves no alternative at all; in either case the network suffers a drop in efficiency. For example, the removal of a bridge, or of the only connection to a large sub-graph, is likely to cause a high drop in efficiency, so such edges have high information centrality. The information centrality \(C^I\) of an edge e is defined by
$$\begin{aligned} C^I(e) = \frac{E(G_{org})-E(G^e_{cut})}{E(G_{org})} \end{aligned}$$
where \(G_{org}\) is the original graph and \(G^e_{cut}\) is the graph obtained by removing edge e from \(G_{org}\).
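A direct transcription of these definitions is sketched below, assuming projected node coordinates in "x"/"y" attributes and "length" edge weights; because every edge removal triggers a full recomputation of E, this is feasible only for small graphs:

```python
import math
import networkx as nx

def efficiency(G):
    """E(G): mean over ordered pairs (s, t) of Euclidean distance divided
    by shortest-path length; unreachable pairs contribute 0."""
    N = G.number_of_nodes()
    total = 0.0
    for s in G.nodes:
        dist = nx.shortest_path_length(G, source=s, weight="length")
        for t, d in dist.items():
            if s != t and d > 0:
                d_eucl = math.hypot(G.nodes[s]["x"] - G.nodes[t]["x"],
                                    G.nodes[s]["y"] - G.nodes[t]["y"])
                total += d_eucl / d
    return total / (N * (N - 1))

def edge_information_centrality(G):
    """C^I(e) = (E(G) - E(G without e)) / E(G) for every edge e."""
    e_org = efficiency(G)
    cI = {}
    for edge in list(G.edges(keys=True) if G.is_multigraph() else G.edges()):
        H = G.copy()
        H.remove_edge(*edge)
        cI[edge] = (e_org - efficiency(H)) / e_org
    return cI
```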
Estimating centrality from a simplified network
Roads and intersections in residential areas are significantly simplified by our method. Although the removed nodes have minimal effect on transportation functionality, the decreased node density in those areas would cause a difference in topological characteristics and centrality distributions between the simplified and original networks. To compensate, we use the aggr_node_number attribute discussed above, which preserves the original node density.
Specifically, we estimate centrality in the original network from a simplified network using the aggr_node_number attribute, which represents the number of nodes that were removed in the vicinity of a node. When computing centrality in a simplified network, we use this attribute as a weight to estimate the centrality of the same element in the original network.
The estimated betweenness centrality \({\hat{C}}^B\) of an edge e is defined by
$$\begin{aligned} {\hat{C}}^B(e) = \frac{1}{N(N-1)} \sum _{s,t \in V'} \frac{\sigma (s,t|e)}{\sigma (s,t)} \times aggr_s \times aggr_t \end{aligned}$$
where \(V'\) is the set of nodes in the simplified graph, and \(aggr_v\) is the aggr_node_number value of a node v. Similarly to \({\hat{C}}^B\), we estimate the network efficiency E of the original graph G from a simplified graph \(G_{simple}\). The estimated network efficiency \({\hat{E}}\) is defined by
$$\begin{aligned} {\hat{E}}(G_{simple}) = \frac{1}{N(N-1)} \sum _{s,t \in V'; s \ne t } \frac{d^{Eucl}_{st}}{d_{st}} \times aggr_s \times aggr_t \end{aligned}$$
By plugging \({\hat{E}}\) into the definition of \(C^I\) above, the estimated information centrality \({\hat{C}}^I\) can be derived.
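A sketch of the estimated efficiency follows, weighting each ordered pair by the product of aggr_node_number values. The normalization constant is taken here as the node count of the original network, which is our assumption, since the normalization for the estimated case is not spelled out above:

```python
import math
import networkx as nx

def estimated_efficiency(G_simple, n_original):
    """Estimate E of the original graph from the simplified one, weighting
    each pair (s, t) by aggr_s * aggr_t (node attribute aggr_node_number)."""
    total = 0.0
    for s in G_simple.nodes:
        a_s = G_simple.nodes[s]["aggr_node_number"]
        dist = nx.shortest_path_length(G_simple, source=s, weight="length")
        for t, d in dist.items():
            if s != t and d > 0:
                a_t = G_simple.nodes[t]["aggr_node_number"]
                d_eucl = math.hypot(
                    G_simple.nodes[s]["x"] - G_simple.nodes[t]["x"],
                    G_simple.nodes[s]["y"] - G_simple.nodes[t]["y"])
                total += (d_eucl / d) * a_s * a_t
    return total / (n_original * (n_original - 1))
```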
The time complexity of computing centrality is O(|V||E|) for betweenness centrality (Girvan and Newman 2002) and \(O(|V||E|^3)\) for information centrality (Fortunato et al. 2004), where |V| is the number of nodes and |E| is the number of edges in a network. The attribute aggr_node_number is simply a multiplicative factor in the computation; the cost is dominated by the size of the graph, defined by its numbers of nodes and edges. Thus, estimating the centrality measures of the original network from a simplified network has a substantially lower computational cost than computing the centrality measures of the original network directly.
Experiment and results
We simplified the road networks of four cities in the United States to evaluate our method. In the first experiment, we investigate three small cities, each with very distinct features. The first selected city is Davis, a small college town in California; the city has plenty of residential areas with cul-de-sacs. In contrast, the second selected city, Portales in New Mexico, mostly consists of a gridiron pattern of two series of parallel streets crossing at right angles. Finally, we selected Petaluma in California, which has multiple street patterns and a geographical constraint, a river that crosses the city. Petaluma has more general and combined street patterns than the other two cities. Also, as the regions separated by the river are interconnected by only a few bridges, it is likely to be sensitive to possible distortions of topological characteristics after simplification. In the second experiment, we extend the study to a big city, Columbia in South Carolina, where streets and roads in various patterns are interconnected in a complex manner. Since this city has multiple patterns in its road network, its analysis demonstrates that our simplification algorithm works for general cities. The distinctive features of the road networks of the selected cities are summarized in Table 1.
Table 1 Four cities with distinct features and descriptions of their street patterns
In addition to our proposed simplification method, we considered as a benchmark a naive but frequently used method that omits all residential roads in a network.
Fig. 3 Visualization of original and simplified road networks. Left column (a, d, g): input network; middle column (b, e, h): simplified by the proposed method; right column (c, f, i): simplified by omitting all residential roads. Top row (a, b, c): Davis; middle row (d, e, f): Portales; bottom row (g, h, i): Petaluma
Table 2 Statistical results of the simplification
Figure 3 and Table 2 show the visualizations and statistical results of the network simplification process for the selected cities. We observed that the proposed method efficiently simplified the road networks of all three cities by about a factor of two. The naive heuristic that omits all residential roads can reduce the networks further than our method. However, it oversimplifies the networks and makes arbitrary disconnections regardless of transportation functionality, distorting the topological characteristics of the original networks.
To identify the differences in topological characteristics resulting from simplification, we measured betweenness centrality \(C^B\) and information centrality \(C^I\) in each network before and after simplification. For the networks simplified by our proposed method, each centrality is measured in two ways: the standard centrality C and the estimated centrality \({\hat{C}}\) using the aggr_node_number approach. Figures 4 and 5 visualize the distributions of \(C^B\) and \(C^I\) in the city of Davis, respectively. In the visualizations, the simplified maps resulting from the proposed method share key features and details with the original network, especially with respect to the centrality measures. However, the map that simply omits residential roads shows a clearly different distribution, with more widespread centrality or high values on erroneous components. The centrality distributions of Portales and Petaluma showed the same tendency as Davis.
Fig. 4 \(C^B\) distribution in the Davis road network: a original, b proposed method, c proposed method with centrality estimation, and d simplified by omitting all residential roads
Fig. 5 \(C^I\) distribution in the Davis road network: a original, b proposed method, c proposed method with centrality estimation, and d simplified by omitting all residential roads
For quantitative evaluation, we used correlation coefficients to compare the centrality of the edges of an original network with that of the corresponding edges of the simplified network. If the correlation coefficient is high, there is a strong association between the centrality of the original network and that of the simplified network, which supports the claim that the key topological properties of the original network are preserved after simplification.
Specifically, we used the Pearson (\(\rho\)) and Spearman (R) correlation coefficients. The Pearson correlation coefficient measures the linear relationship between two variables, i.e. whether a change in the centrality of the simplified network is proportional to the centrality of the original network. The Spearman correlation coefficient evaluates the monotonic relationship between two variables, i.e. how similar the rank of the centrality measurements is between the original and simplified networks.
Since edge segments that lie on the same road line are replaced with a single unified edge by our proposed method, the maximum centrality measurement over such edge segments in the original network is matched to the centrality measurement of the unified edge in the simplified network when computing the correlation coefficients.
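The evaluation itself only needs two paired vectors of centrality values; a minimal sketch with scipy, where the numbers are made up for illustration and c_orig already holds the per-unified-edge maxima described above:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# c_orig[i]: max centrality over the original segments of unified edge i;
# c_simp[i]: centrality of unified edge i in the simplified network.
c_orig = np.array([0.012, 0.034, 0.005, 0.051, 0.020])  # made-up values
c_simp = np.array([0.013, 0.031, 0.006, 0.048, 0.022])  # made-up values

rho, _ = pearsonr(c_orig, c_simp)   # linear association
r, _ = spearmanr(c_orig, c_simp)    # rank (monotonic) association
print(f"Pearson rho = {rho:.3f}, Spearman R = {r:.3f}")
```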
Table 3 presents the results of the quantitative analysis. In every case, the proposed method has the highest level of correlation for the two centrality measures, with very high values ranging from 0.761 to 0.998. This implies that the topological characteristics of the simplified networks are very similar to those of the original networks, and that the centrality distribution of the original network can be accurately and efficiently approximated using a simplified network. On the other hand, omitting all residential roads resulted in low correlation coefficients, from 0.382 to 0.857, and turned out to change the topological characteristics of the original network considerably.
Table 3 Pearson (\(\rho\)) and Spearman (R) correlation coefficient between centrality measurement of original and simplified networks
We extended our experiment to a big city where streets and roads in various patterns are interconnected in a more complicated manner: Columbia, the capital of South Carolina in the United States. The original network has 11,113 nodes and 29,781 edges. Our method reduced the network to 5457 nodes (−51%) and 15,275 edges (−49%), whereas simply omitting all residential roads reduced it to 3433 nodes (−69%) and 6851 edges (−77%) (Fig. 6).
Fig. 6 Visualization of the road network in the city of Columbia: a input network, b simplified by the proposed method, and c simplified by omitting all residential roads
As in the first experiment, the numbers of nodes and edges are reduced by about a factor of two by our proposed method. Although the naive approach that omits all residential roads can further reduce the size of the road network, it ends up with undesired disconnections between sub-regions. Figure 7 and Table 4 clearly show that our method yields a \(C^B\) distribution very similar to the original network and a high correlation coefficient, while the naive method yields a different distribution and a low correlation coefficient.
Fig. 7 \(C^B\) distribution in the Columbia road network: a original, b proposed method, c proposed method with centrality estimation, and d simplified by omitting all residential roads
Table 4 Pearson (\(\rho\)) and Spearman (R) correlation coefficient between centrality measurement of original and simplified networks in Columbia
Considering the heavy computational cost of computing information centrality for Columbia, we did not obtain this result. However, we provide an estimate of the time it would take to compute information centrality on the original and simplified networks with our machine, which has a 2.6 GHz CPU. On the original network, it would take about 240 days; on the network simplified by our method, it is estimated to take less than 42 days, about six times faster.
This article proposes a road network simplification framework that selectively removes network components that have little impact on transportation functionality: cul-de-sacs and gridiron patterns consisting of low-level roads. In this way, the topological characteristics of a road network are preserved while the network is efficiently de-densified. The method keeps track of the regional node density of the original network, which can be used to estimate topological characteristics of the original network, such as centrality measurements, more precisely. We applied our method to three small cities with distinct street patterns and to a big, complex city, then computed centrality distributions for the road networks of these cities. By measuring the correlation coefficients between the centrality measurements of the original and simplified networks, we quantitatively showed that the topological characteristics of the original network are successfully preserved after simplification. We argue that this is an important step towards enabling large-scale road network analysis. Having topological characteristics similar to the original network, the simplified network can be used to analyze vehicular dynamic behavior instead of a massive, and thus computationally expensive, original network.
We have used publicly available road network data. More information about these data is provided in the corresponding references.
\(C^B\): Betweenness centrality
\({\hat{C}}^B\): Estimated betweenness centrality
\(C^I\): Information centrality
\({\hat{C}}^I\): Estimated information centrality
Antoniou C, Barcelò J, Brackstone M, Celikoglu H, Ciuffo B, Punzo V, Sykes P, Toledo T, Vortisch P, Wagner P (2014) Traffic simulation: case for guidelines
Batac RC, Cirunay MT (2022) Shortest paths along urban road network peripheries. Phys A Stat Mech Appl 597:127255
Bazzi A, Masini BM, Pasolini G, Torreggiani P (2010) Telecommunication systems enabling real time navigation. In: 13th International IEEE conference on intelligent transportation systems. IEEE, pp 1057–1064
Boeing G (2017) OSMnx: new methods for acquiring, constructing, analyzing, and visualizing complex street networks. Comput Environ Urban Syst 65:126–139
Chen J, Hu Y, Li Z, Zhao R, Meng L (2009) Selective omission of road features based on mesh density for automatic map generalization. Int J Geogr Inf Sci 23(8):1013–1032
Daganzo CF, Gayah VV, Gonzales EJ (2011) Macroscopic relations of urban traffic variables: bifurcations, multivaluedness and instability. Transp Res B Methodol 45(1):278–288
Distel MB (2015) Connectivity, sprawl, and the cul-de-sac: an analysis of cul-de-sacs and dead-end streets in Burlington and the surrounding suburbs
Fellendorf M, Vortisch P (2010) Microscopic traffic flow simulator VISSIM. In: Fundamentals of traffic simulation. Springer, New York, pp 63–93
Fortunato S, Latora V, Marchiori M (2004) Method to find community structures based on information centrality. Phys Rev E 70(5):056104
Freeman LC (1978) Centrality in social networks conceptual clarification. Soc Netw 1(3):215–239
Girvan M, Newman ME (2002) Community structure in social and biological networks. Proc Natl Acad Sci 99(12):7821–7826
Haklay M, Weber P (2008) Openstreetmap: user-generated street maps. IEEE Pervasive Comput 7(4):12–18
Huynh HN, Selvakumar R (2020) Extracting backbone structure of a road network from raw data. In: International conference on computational science. Springer, pp 582–594
Jiang B, Claramunt C (2004) A structural approach to the model generalization of an urban street network. GeoInformatica 8(2):157–171
Jiang B, Harrie L (2004) Selection of streets from a network using self-organizing maps. Trans GIS 8(3):335–350
Latora V, Marchiori M (2001) Efficient behavior of small-world networks. Phys Rev Lett 87(19):198701
Latora V, Marchiori M (2004) A measure of centrality based on the network efficiency. Open-Access J Phys. https://arxiv.org/pdf/math/0402050.pdf
Li H, Wu D, Zhang Z, Zhang Y (2022) Safety impacts of the discrepancies and accesses between adjacent traffic analysis zones. J Transp Saf Secur 14(3):359–381
Liu X, Zhan FB, Ai T (2010) Road selection based on Voronoi diagrams and "strokes" in map generalization. Int J Appl Earth Observ Geoinf 12:194–202
Mackaness W (1995) Analysis of urban road networks to support cartographic generalization. Cartogr Geogr Inf Syst 22(4):306–316
Mackaness WA, Beard KM (1993) Use of graph theory to support map generalization. Cartogr Geogr Inf Syst 20(4):210–221
Park K, Yilmaz A (2010) A social network analysis approach to analyze road networks. In: ASPRS annual conference, San Diego, pp 1–6
Porta S, Crucitti P, Latora V (2006) The network analysis of urban streets: a primal approach. Environ Plann B Plann Des 33(5):705–725
Southworth M, Ben-Joseph E (2013) Streets and the shaping of towns and cities
Thomson RC, Brooks R (2001) Exploiting perceptual grouping for map analysis, understanding and generalization: the case of road and river networks. In: International workshop on graphics recognition. Springer, pp 148–157
Thomson RC, Richardson DE (1995) A graph theory approach to road network generalisation. In: Proceeding of the 17th international cartographic conference, pp 1871–1880
Thomson RC, Richardson DE (1999) The 'good continuation' principle of perceptual organization applied to the generalization of road networks
Van De Kerkhof M, Kostitsyna I, Van Kreveld M, Löffler M, Ophelders T (2020) Route-preserving road network generalization. In: Proceedings of the 28th international conference on advances in geographic information systems, pp 381–384
Vragović I, Louis E, Díaz-Guilera A (2005) Efficiency of informational transfer in regular and complex networks. Phys Rev E 71(3):036122
Youn H, Gastner MT, Jeong H (2008) Price of anarchy in transportation networks: efficiency and optimality control. Phys Rev Lett 101(12):128701
Yu W, Zhang Y, Ai T, Guan Q, Chen Z, Li H (2020) Road network generalization considering traffic flow patterns. Int J Geogr Inf Sci 34(1):119–149
Zhang Y, Wang X, Zeng P, Chen X (2011) Centrality characteristics of road network patterns of traffic analysis zones. Transp Res Rec 2256(1):16–24
JP acknowledges the support of the Republic of Korea Navy.
Raissa M. D'Souza, Dipak Ghosal and Michael Zhang assert joint second authorship
Department of Computer Science, University of California, Davis, CA, 95616, USA
Jinyoung Pung, Raissa M. D'Souza & Dipak Ghosal
Department of Mechanical and Aerospace Engineering, University of California, Davis, CA, 95616, USA
Raissa M. D'Souza
Santa Fe Institute, Santa Fe, NM, 87501, USA
Department of Civil and Environmental Engineering, University of California, Davis, CA, 95616, USA
Jinyoung Pung
Dipak Ghosal
JP, RD, DG, and MZ conceived the work and coordinated the research. JP is responsible for the design, implementation, and evaluation of the simplification framework, and authored the manuscript. RD, DG and MZ contributed research expertise and edited the manuscript. All authors read and approved the final manuscript.
Correspondence to Jinyoung Pung.
Pung, J., D'Souza, R.M., Ghosal, D. et al. A road network simplification algorithm that preserves topological properties. Appl Netw Sci 7, 79 (2022). https://doi.org/10.1007/s41109-022-00521-8
Keywords: Network pruning
A quantitative risk assessment for human Taenia solium exposure from home slaughtered pigs in European countries
Marina Meester1,
Arno Swart1,
Huifang Deng1,
Annika van Roon1,
Chiara Trevisan2,
Pierre Dorny2,
Sarah Gabriël3,
Madalena Vieira-Pinto4,5,
Maria Vang Johansen6 &
Joke van der Giessen1
Taenia solium, a zoonotic tapeworm, is responsible for about a third of all preventable epilepsy human cases in endemic regions. In Europe, adequate biosecurity of pig housing and meat inspection practices have decreased the incidence of T. solium taeniosis and cysticercosis. Pigs slaughtered at home may have been raised in suboptimal biosecurity conditions and slaughtered without meat inspection. As a result, consumption of undercooked pork from home slaughtered pigs could pose a risk for exposure to T. solium. The aim of this study was to quantify the risk of human T. solium exposure from meat of home slaughtered pigs, in comparison to controlled slaughtered pigs, in European countries. A quantitative microbial risk assessment model (QMRA) was developed; porcine cysticercosis prevalence data, the percentage of home slaughtered pigs, meat inspection sensitivity, the cyst distribution in pork, and pork consumption in five European countries (Bulgaria, Germany, Poland, Romania and Spain) were included as variables in the model. This was combined with literature about cooking habits to estimate the number of infected pork portions eaten per year in a country.
The results of the model showed a 13.83 times higher prevalence of contaminated pork portions from home slaughtered pigs than from controlled slaughtered pigs. This difference is brought about by the higher prevalence of cysticercosis in pigs that are home raised and slaughtered. Meat inspection did not offset the higher exposure from pork that is home slaughtered. Cooking meat effectively lowered the risk of exposure to T. solium-infected pork.
This QMRA showed that there is still a risk of acquiring a T. solium infection through pork consumption, especially when pigs are reared and slaughtered at home, based on data from five European countries that reported porcine cysticercosis cases. We propose systematically reporting cysticercosis cases found in slaughterhouses, and in addition molecularly confirming suspected cases, to gain more insight into the presence of T. solium in pigs and the risk for humans in Europe. When more data become available, this QMRA model can be used to evaluate human exposure to T. solium in Europe and beyond.
Taenia solium is a zoonotic tapeworm, with pigs as intermediate hosts and humans as definitive hosts. Pigs can become infected by ingestion of T. solium eggs. When eggs are ingested, oncospheres hatch from them, penetrate the intestinal walls and migrate towards the muscles. The oncospheres develop into T. solium cysticerci within 60 to 70 days [1]. Humans can become infected when pork with T. solium cysticerci is eaten raw or undercooked [2]. The adult tapeworm manifests in the human intestines, causing taeniosis. Human taeniosis is often undiagnosed, with mainly abdominal pain and bloating as reported symptoms [3].
Humans can obtain cysticercosis from direct contact with tapeworm carriers, contaminated food or water, or through autoinfection or self-infection due to lack of sanitation [4]. Besides muscles, human predilection sites are the eyes, subcutaneous tissues and brain. In contrast to human taeniosis, human cysticercosis may cause major health problems. Neurocysticercosis (NCC) is the most severe form of human cysticercosis, where cysticerci localize in the central nervous system. NCC is responsible for almost a third of all preventable epilepsy in endemic regions, mostly situated in low income countries [5].
The risk factors for human cysticercosis include poor personal hygiene, poor pig-raising practices [6], a lack of safe drinking water and sanitary latrines [7], consumption of infected, undercooked pork, and poor knowledge about cysticerci in meat products [6, 8]. These conditions prevail in low income countries where pigs are raised and consumed, i.e. most countries in Latin America, sub-Saharan Africa and South and Southeast Asia [5]. In Europe, 4% of all pig holders raise 91% of all pigs [9]. These farms hold at least 200 pigs and have biosecurity measures designed to minimize the transmission of pathogens like T. solium. Besides the structure and hygiene of European farms, meat inspection is obligatory at slaughterhouses in the European Union (EU), according to European Regulation 854/2004, chapter IV [10]. As a result, every pig carcass in a slaughterhouse is checked for cysticerci. Since almost no cases are reported [11], T. solium seems to be only a minor foodborne agent in Europe. Nevertheless, several recently published papers conclude differently [12,13,14,15,16]. A systematic review of the epidemiology of T. solium and Taenia saginata showed that one or more T. solium taeniosis cases were diagnosed in 4 out of 18 countries in western Europe. Human cysticercosis was even reported in all countries except Iceland. Most of these patients had visited endemic countries, which might explain the acquired infection, but there are also patients who had never left their own country [13, 15]. Autochthonous cysticercosis cases could originate from travellers carrying a taeniosis infection. However, this does not explain the porcine cysticercosis notified in Austria, Bulgaria, Germany, Poland, Romania, Serbia and Spain between 1999 and 2015 [12, 13, 16].
Apparently, the conditions necessary for the transmission of T. solium between pigs and humans still persist in some European countries. Home slaughter of pigs was considered a risk factor for exposure to T. solium after a questionnaire administered to members of the COST action TD1302, the European Network on Taeniosis/Cysticercosis (CYSTINET), reported that home slaughter of pigs takes place in several countries, often without proper meat inspection [17].
The aim of this study was to analyse the risk of T. solium exposure from home slaughtered pigs, in comparison to controlled slaughtered pigs, in European countries, using a quantitative microbial risk assessment model (QMRA) that addresses the chain from production to consumption of pork.
We developed a QMRA model that follows the steps from porcine cysticercosis prevalence up to human exposure to infected pork portions. First, a general description of the model steps is given (Fig. 1). Second, the data sources and the calculations needed to assess the risk of exposure per country are described in detail. The calculation steps are visualized in Fig. 2.
Fig. 1 Conceptual risk chain for T. solium exposure
Fig. 2 Model layout (formula numbers in parentheses)
The model was divided into three subsections: production, inspection and consumption. All steps were carried out at country level. The following steps were included (Fig. 1):
(1) The model sets off with the reported prevalence in pigs.
(2A) With calculations to determine the exposure rate and the sensitivity of meat inspection, the adjusted prevalence and infection load of the porcine cysticercosis cases were defined.
(2B) The adjusted prevalence and infection load of home slaughtered pigs were obtained using prevalence data of a country (Spain) where pork of home slaughtered pigs is inspected.
(3) National data on the number of pigs slaughtered in slaughterhouses and outside slaughterhouses were multiplied by the prevalences of porcine cysticercosis to calculate the number of infected pigs under controlled conditions ('controlled') and home slaughtered conditions ('home') separately.
(4) Meat inspection, including test sensitivity, was included in the 'controlled' branch of the model. For the 'home' branch, a comparison was made between meat inspection and no meat inspection.
(5) All carcasses that tested false negative were not withdrawn from the food chain and passed on to the consumption section.
(6) With the aid of the infection load of the carcasses and the cyst distribution in pork cuts, the probability of a cysticercus entering a cut was predicted (by "cut" we denote an anatomical part of the pig, such as "heart", "loin", etc.).
(7) The weight of the cuts and a standard portion size were used to calculate the cyst distribution over portions. Taking into account the total number of portions eaten in a country in a year, the portion prevalence and the total number of infected portions could be obtained.
(8) A subdivision between portions cooked and portions eaten raw was estimated; portions consumed well-cooked were assigned zero risk, to calculate the final portion prevalence after cooking.
(9) The risk of exposure was calculated.
Data sources and calculations
Test sensitivity of meat inspection
Official European examination of swine carcasses is described in chapter IV of European Regulation 854/2004 [10]. This regulation lists all organs and muscles that need to be visually inspected. Regarding T. solium, the tongue, diaphragm, pericardium and heart need to be visually inspected. Before 2013, the heart also had to be incised lengthwise once, in order to view the ventricles and the septum of the heart. As only the heart was cut to detect cysticerci, we assumed that the incision of the heart was the basis of European meat inspection for T. solium when analyzing the data from Europe.
The sensitivity of meat inspection depends on the pig's infection load [18]. To model this, the probability (f) of uncovering a single cyst in the heart was calculated (formula 1).
$$ f=\frac{\mathrm{Mean}\ \mathrm{heart}\ \mathrm{surface}\ \mathrm{revealed}\ \mathrm{by}\ \mathrm{meat}\ \mathrm{inspection}\ \left({cm}^2\right)}{\mathrm{Mean}\ \mathrm{heart}\ \mathrm{surface}\ \mathrm{revealed}\ \mathrm{by}\ \mathrm{total}\ \mathrm{slicing}\ \left({cm}^2\right)} $$
The surface revealed by meat inspection is the area that can be inspected after the lengthwise incision mentioned above. Total slicing is the gold-standard method to find T. solium cysticerci: organs and muscles are sliced into 0.5 cm thick slices so that all cysticerci are uncovered. As such, total slicing gives the largest possible area that can be checked for T. solium cysticerci. The surfaces in formula 1 are adopted from Boa et al. [19].
The probability of finding at least one cyst in the heart during inspection was obtained with formula 2. When the total number of cysticerci in the heart (\(n_{\mathrm{heart}}\)) increases, the detection probability increases accordingly.
$$ P\left(\mathrm{detect}>0\ \mathrm{cysticerci}\mid \mathrm{cysticerci}={n}_{\mathrm{heart}}\right)=1-{\left(1-f\right)}^{n_{\mathrm{heart}}} $$
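For illustration, formulae 1 and 2 can be evaluated directly; a minimal sketch using the inspection and total-slicing surfaces reported by Boa et al. (136 and 425 cm², see the Results section):

```python
# Probability to uncover a single heart cyst (formula 1).
f = 136.0 / 425.0  # = 0.32

# Sensitivity of heart inspection as a function of infection load (formula 2).
for n_heart in (1, 2, 5, 10, 20):
    sensitivity = 1.0 - (1.0 - f) ** n_heart
    print(f"{n_heart:>2} cysticerci in the heart -> sensitivity {sensitivity:.2f}")
```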
Exposure rate and infection load
The exposure of pigs to T. solium eggs depends on risk factors that differ between countries and regions. We supposed that pigs are exposed to the eggs at an exposure rate (\(\lambda_{\mathrm{heart}}\)) of eggs reaching the heart per lifetime. The probability of having an infection with \(n_{\mathrm{heart}}\) cysticerci was described by a Poisson distribution (formula 3), which is used for events that happen at random with a constant rate to an individual, i.e. an animal [20]. A higher exposure to eggs leads to a higher infection probability and infection load. When the exposure rate of a country is known, the formula can be used to determine the number of infected pigs and their infection load in that country, adjusted for the meat inspection sensitivity.
$$ P\left(\mathrm{cysticerci}={n}_{\mathrm{heart}}\right)=\mathrm{Poisson}\left({\lambda}_{\mathrm{heart}}\right)=\frac{\lambda_{\mathrm{heart}}^{\,n_{\mathrm{heart}}}}{n_{\mathrm{heart}}!}\,{e}^{-{\lambda}_{\mathrm{heart}}} $$
Note that the exposure rate is the rate of exposure of the heart per lifetime, since this is the muscle from which the prevalence is derived. In the section "Cyst distribution and weight of pork cuts", scaling factors are introduced to derive infection loads in other muscles. Combining formulae 2 and 3 yields the following formula:
$$ P\left(\mathrm{detect}\right)=1-{e}^{-f{\lambda}_{\mathrm{heart}}} $$
where P(detect) is the probability of finding a positive pig, given f and \(\lambda_{\mathrm{heart}}\). This probability of finding a positive pig is analogous to the reported prevalence in European countries, as the sensitivity of meat inspection and the exposure rate together determine the cases found in the slaughterhouse.
We entered f and the reported prevalences as P(detect) in formula 4, yielding \(\lambda_{\mathrm{heart}}\) for each country. To derive \(\lambda_{\mathrm{pig}}\), the exposure rate of the whole pig instead of the heart, the exposure rate was divided by the probability of a cyst developing in the heart (\(p_{\mathrm{heart}}\)) (formula 5).
$$ {\lambda}_{\mathrm{pig}}=\frac{\lambda_{\mathrm{heart}}}{p_{\mathrm{heart}}} $$
A binomial distribution was used to find all infected and non-infected pigs in the model, with n the number of pigs and P(detect) (formula 4) the probability of detection.
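Formula 4 can be inverted in closed form to recover \(\lambda_{\mathrm{heart}}\) from a reported prevalence, after which formula 5 scales it to the whole pig; a sketch with a placeholder prevalence (the country data are in Table 1):

```python
import math

f = 136.0 / 425.0   # per-cyst detection probability in the heart (formula 1)
p_heart = 3.6e-2    # probability that a cyst develops in the heart (Boa et al.)

def lambda_pig(reported_prevalence):
    """Invert P(detect) = 1 - exp(-f * lambda_heart) (formula 4),
    then scale to the whole pig with formula 5."""
    lam_heart = -math.log(1.0 - reported_prevalence) / f
    return lam_heart / p_heart

# Placeholder value, of the order of the Spanish controlled prevalence (0.02%).
print(lambda_pig(0.0002))
```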
The reported prevalences were acquired in three steps. Firstly, the number of porcine cysticercosis cases per country was adopted from two reviews about the epidemiology of T. solium and T. saginata [13, 16]. An additional literature search was done for European countries that were lacking from the reviews [21, 22]. Secondly, for all countries that reported an annual number of cases but no total number of tested pigs, the total number of pigs slaughtered in slaughterhouses was taken from Eurostat [23]. Thirdly, the annual number of cases was divided by the annual number of slaughtered pigs to generate a prevalence of reported cases. This is the controlled reported prevalence, because all reported cases were found in slaughterhouses [13, 16, 21, 22].
The adjusted number of infected pigs originating from controlled housing was divided by the total number of pigs assessed to obtain the adjusted prevalence in a controlled setting. By "adjusted prevalence" we mean the reported prevalence, adjusted for the sensitivity of meat inspection (formula 3).
Home slaughtered pigs are more likely to be reared in uncontrolled housing systems, which could imply that home slaughtered pigs have also had a higher exposure to T. solium. This assumption is supported by data from Spain, where home slaughtered animals are inspected by the same method as regularly slaughtered animals. The reported prevalence in Spanish pigs under controlled conditions ranges between 0.02–0.03%, while among home slaughtered pigs a prevalence of 0.16–0.43% is reported (2011–2013) [13]. The ratio between the controlled and home reported prevalences in Spain was used to calculate the home prevalence for the other countries in our model.
Initially, the controlled and home reported prevalences of Spain were entered in formula 4, yielding two exposure rates: the controlled exposure rate \({\lambda}_{\mathrm{heart}}^c\) and the home exposure rate \({\lambda}_{\mathrm{heart}}^h\). These were divided by \(p_{\mathrm{heart}}\) to obtain \({\lambda}_{\mathrm{pig}}^c\) and \({\lambda}_{\mathrm{pig}}^h\) (formula 5).
Formula 6 shows the exposure conversion, which was applied in the model for all countries to convert the adjusted controlled prevalence into the adjusted home prevalence.
$$ \mathrm{Exposure}\ \mathrm{conversion}=\frac{\lambda^c\ }{\lambda^h} $$
Number of slaughtered pigs
The Eurostat database records the annual number of slaughtered pigs per European country, as well as the number of pigs slaughtered at places other than slaughterhouses [23, 24]. Slaughtering 'outside the slaughterhouse' was taken as home slaughtering in our calculations. The yearly slaughter records taken into account cover the same years for which the national number of porcine cysticercosis cases is known; the average over these years was used in the model to calculate an average prevalence.
Cyst distribution and weight of pork cuts
The distribution of T. solium cysticerci in pig carcasses is not homogeneous; described predilection sites include the pork shoulder, pork leg and psoas muscle [25]. To take the cyst distribution into account in the model, literature data were used. In the paper of Boa et al. [19], naturally infected pigs were slaughtered and, in every half carcass, the cysticerci per muscle group or organ were counted by the total slicing method. The average number of cysticerci per cut was divided by the average total number of cysticerci of the 24 pigs. The mean percentage of total cysticerci in a cut was divided by the mean percentage of the weight of that cut to calculate the relative cyst density [19]. The relative cyst density is the probability of a cyst being present in a cut. The relative cyst density of the heart was used in formula 5 as \(p_{\mathrm{heart}}\). The relative cyst density was also used in a binomial function defined in the section "Cysticerci per consumed portion".
The weights of the pork cuts were not available from the literature; only the weights relative to the average carcass weight were given (Mean Weight%) [19]. To obtain the actual cut weights in kilograms, literature on porcine brain weights of pigs in the same age class was used, since brain weight is a stable proxy for age [26]. This \(\mathrm{Weight}_{\mathrm{brain}}\) was used to convert the Mean Weight% of cuts to \(\mathrm{Weight}_{\mathrm{cut}}\), as shown in formula 7.
$$ {\mathrm{Weight}}_{\mathrm{cut}}=\frac{{\mathrm{Weight}}_{\mathrm{brain}}}{\mathrm{Mean}\ \mathrm{Weight}{\%}_{\mathrm{brain}}}\ast \mathrm{Mean}\ \mathrm{Weight}{\%}_{\mathrm{cut}} $$
The trunk muscles, musculus psoas, musculus triceps brachii, forelimb, abdominal muscles and hindlimb were not derived from the brain weight, because those are only parts of the pork cuts loin, tenderloin, shoulder, foreleg, belly and ham, respectively. For these cuts we assumed a homogeneous distribution within the complete cut, so that the relative cyst distribution of the muscles described in Boa et al. [19] could be used for the entire pork cuts that we assessed. The weights of these cuts were collected from the literature [27,28,29,30].
Cysticerci per consumed portion
Several steps were followed to determine how many cysticerci end up in the consumed portions of all pork cuts. First, the number of portions per cut was calculated. For this purpose, the cut fraction and the total number of portions consumed in a country were determined with the following formulae:
$$ \mathrm{Cut}\ \mathrm{fraction}=\frac{{\mathrm{Weight}}_{\mathrm{cut}}}{{\mathrm{Weight}}_{\mathrm{carcass}}} $$
$$ \mathrm{Total}\ \mathrm{portions}=\mathrm{Population}\ \mathrm{size}\ast \frac{\mathrm{Pork}\ \mathrm{consumption}\ \left(\frac{kg}{inhab}/ yr\right)}{\mathrm{Portion}\ \mathrm{size}\ (g)}\ast 1000\ (g) $$
$$ \mathrm{Total}\ \mathrm{portions}\ \mathrm{home}\ \mathrm{slaughter}\mathrm{ed}\ \mathrm{pigs}=\mathrm{Total}\ \mathrm{portions}\ast \mathrm{Fraction}\ \mathrm{home}\ \mathrm{slaughter} $$
Using samples from a multinomial distribution, with probabilities given by formula 8 and the number of trials by formula 9 ('controlled') or formula 10 ('home'), a distribution of cuts compliant with formula 8 was generated. Second, a binomial function was used to calculate the number of cysticerci that end up in a certain cut; its number of trials is the number of cysticerci in the pig, calculated in step 2 of the risk chain model, and the probability of a cyst entering a cut is equal to the relative cyst density described before. Third, the probability of a cyst in a cut being present in a portion from this cut is equal to the fraction portion (formula 11).
$$ \mathrm{Fraction}\ \mathrm{portion}=\frac{{\mathrm{Weight}}_{\mathrm{portion}}}{{\mathrm{Weight}}_{\mathrm{cut}}} $$
With this proportion as the probability, and the cysticerci per cut as the number of trials, a second binomial distribution provided the number of cysticerci in a portion. The above binomial distributions were applied to every portion annually eaten in a country, giving the total number of infected portions. The total number of portions eaten from controlled pigs was derived from formula 9; this number was multiplied by the fraction of home slaughtered pigs to obtain the total number of home slaughtered portions in a country (formula 10). The final outcome is the portion prevalence (formula 12). A sketch of these sampling steps is given after formula 12.
$$ \mathrm{Portion}\ \mathrm{prevalence}\ \left(\%\right)=\frac{\mathrm{No}.\mathrm{of}\ \mathrm{infected}\ \mathrm{portions}}{\mathrm{Total}\ \mathrm{no}.\mathrm{of}\ \mathrm{portions}}\times 100\% $$
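The two binomial steps can be sketched per pig as follows; the densities and portion fractions below are hypothetical placeholders, not the values of Table 4:

```python
import numpy as np

rng = np.random.default_rng(1)

def infected_portions_per_pig(n_cysts, cyst_density, fraction_portion):
    """One pig: draw the cysticerci of each cut (binomial with the relative
    cyst density as probability), spread them uniformly over the cut's
    portions, and count portions with >= 1 cyst. An illustrative sketch."""
    infected = 0
    for cut, p_cut in cyst_density.items():
        cysts_in_cut = rng.binomial(n_cysts, p_cut)
        n_portions = max(1, int(round(1.0 / fraction_portion[cut])))
        counts = rng.multinomial(cysts_in_cut, [1.0 / n_portions] * n_portions)
        infected += int((counts > 0).sum())
    return infected

density = {"ham": 0.25, "shoulder": 0.15}    # hypothetical cyst densities
portion = {"ham": 0.01, "shoulder": 0.012}   # hypothetical portion fractions
print(infected_portions_per_pig(40, density, portion))
```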
As only raw or undercooked meat poses an actual risk to public health, cooking practices were incorporated in the model. Two approaches were taken to differentiate between raw and cooked consumed portions. The first approach, cooking scenario 1, was an indicative estimation of raw consumption, based on personal communication with traditional Dutch farmers who used to prepare home slaughtered pork and on websites addressing pork cuts and cooking methods; in this approach, a specific estimate is given of the fraction of each cut eaten raw. The second approach, cooking scenario 2, was based on three sub-scenarios (2A, 2B and 2C): cooking 10, 50 and 90% of the cuts; here a standard fraction of every cut is assumed to be eaten raw. We assumed perfect inactivation of cysticerci during cooking, so only the fractions of the cuts estimated to be eaten raw carry viable cysticerci in the model.
After the cooking step, the final portion prevalence and the total number of infected pork portions, originating from pigs raised under controlled housing and from home slaughtered pigs, could be determined for every country included in the model. Furthermore, the separate contribution of each cut to the total portion prevalence was assessed.
Inclusion of variability
In several places in the model we employed variability distributions (e.g. the Poisson distribution for the number of cysticerci). A single run of the model simulates a large number of pigs (one million); the result is a distribution over individual pigs, from which prevalences and numbers of infected portions could be calculated. Five hundred iterations of this model were performed and the outputs stored. The means and the 2.5% and 97.5% percentiles were calculated. These numbers represent variability in the output, not uncertainty; one should interpret them as indications of the natural variation to be expected due to chance.
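The authors ran the model in R (see below); purely to illustrate the variability bookkeeping, here is a Python sketch of the 500-iteration loop with a hypothetical exposure rate:

```python
import numpy as np

rng = np.random.default_rng(7)

def one_run(n_pigs, lam_pig):
    """Stand-in for a single model run: Poisson cyst counts for n_pigs,
    returning the fraction of infected pigs. The full model would carry
    each pig through inspection and portioning as described above."""
    return (rng.poisson(lam_pig, size=n_pigs) > 0).mean()

runs = np.array([one_run(1_000_000, 0.02) for _ in range(500)])
lo, hi = np.percentile(runs, [2.5, 97.5])
print(f"mean {runs.mean():.5f}, 95% V.I. [{lo:.5f}, {hi:.5f}]")
```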
The quantitative risk assessment model was run in R v.3.4.3 [31], with data stored in Microsoft Excel 2010 spreadsheets.
Data were available for five countries on the prevalence of porcine cysticercosis and the number of home slaughtered pigs, namely Bulgaria, Germany, Poland, Romania and Spain. The results for these countries are presented henceforth.
Test sensitivity of meat inspection
As mentioned in the methods section, European meat inspection does not reveal all cysticerci present in pig carcasses. According to Boa et al. [19], the lengthwise incision of the heart performed during meat inspection gives access to 136 cm² of the heart, while total slicing reveals 425 cm². The inspected proportion of the area is thus 32%; in other words, when cutting the heart, each heart cysticercus has a probability of f = 0.32 of being detected [19]. Entering f = 0.32 into formula 2 gives the relation between the infection load of the heart and the sensitivity of the current method of European examination of swine carcasses. Figure 3 demonstrates this relationship, showing that meat inspection sensitivity is very low when there are only a few cysticerci in the heart and nearly 100% when there are ten or more.
Sensitivity of meat inspection, dependent on cysticerci in the pigs' heart
With the aid of the reported prevalences and the probability of finding a cysticercus in the heart, formula 3 yielded the exposure rate of pig hearts to T. solium eggs (Fig. 4). The reported prevalences are given in Table 1. The calculated heart exposure rates for every country were corrected for the probability of any cysticercus being located in the heart, \( p_{\mathrm{heart}} = 3.6 \times 10^{-2} \) [19], to obtain \( {\lambda}_{\mathrm{pig}}^c \). The calculated values of \( {\lambda}_{\mathrm{pig}}^c \) are given in Table 1.
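One plausible reconstruction of this inversion, assuming Poisson-distributed heart cysticerci thinned by the per-cyst detection probability f, is sketched below; this is our reading of formula 3, not necessarily its exact published form, and the prevalence value is illustrative.

# Assumed inversion: detected heart cysts ~ Poisson(f * lambda_heart), so the
# reported prevalence is 1 - exp(-f * lambda_heart); invert and rescale.
f <- 0.32
p_heart <- 3.6e-2
reported_prev <- 2e-5                          # illustrative, not a Table 1 value
lambda_heart <- -log(1 - reported_prev) / f    # eggs establishing in the heart
lambda_pig <- lambda_heart / p_heart           # lifetime exposure rate, whole pig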
Exposure rate of pigs to T. solium eggs in a lifetime, by country and housing
Table 1 Data inputs and parameters
The reported prevalence for the countries included in the model is shown in Table 1. The adjusted prevalence of pigs that were raised uncontrolled and slaughtered at home was calculated via the exposure conversion (Table 2).
Table 2 Exposure conversion
Column 3 of Table 3 shows that the calculated adjusted prevalence of pigs in controlled housing is approximately 86 times higher than the reported prevalence, owing to the low sensitivity of meat inspection, especially at low infection loads. The calculated adjusted prevalence of home slaughtered animals is a further 12–14 times higher than the calculated prevalence for controlled pigs (Table 3, column 5). The highest prevalences are found in Spain and Bulgaria (Fig. 5).
Table 3 Prevalence of controlled and home slaughter
Adjusted T. solium prevalence of controlled and home slaughtered pigs, by country
Slaughter data
The average fraction of home slaughter in the different countries is given in Table 1. Spain has the lowest fraction of home slaughter, namely 6.0 × 10⁻⁴. In Romania, more than half of the pigs are slaughtered outside slaughterhouses.
Cysticerci distribution and weight of pork cuts
The relative cyst density and weight of the cuts can be reviewed in Table 4. Pork organs or cuts that did not contain any cysticerci, for instance the liver and kidneys [19], are not listed as they are not relevant for the model. The output of formula 6 is shown in column 4 of Table 2. \( \mathrm{Weight}_{\mathrm{brain}} \) was set at 0.135 kg [26].
Table 4 Taenia solium cysticerci distribution and weight of pork cuts
The cut fraction determined with formula 8 and the portion fraction determined with formula 11 are shown in the last two columns of Table 4. The portion weight and the number of portions annually eaten in the five included countries are given in Table 1. The results of the binomial distributions used in this step give the number of infected portions that consumers would actually be exposed to if all portions were eaten raw.
The portion prevalence is highest in Spain and Bulgaria, where 0.03% and 0.02% of the 100 g portions, respectively, are infected when pigs are slaughtered at home (Fig. 6; no cooking). The variability intervals (V.I.) are relatively small, which can be interpreted as little natural variation when prevalences are repeatedly calculated from hypothetical large samples of portions.
Taenia solium cysticerci infected pork portions prevalence by country and housing, before and after cooking compared
In Spain and Germany, the total number of infected portions (actual number of portions consumed times the portion prevalence of contamination) is higher under controlled conditions than for home slaughter, in Poland it is almost equal, and in the other countries the pattern is reversed (Fig. 7; no cooking). Again, variability is limited, with some notable exceptions, namely Germany and Poland under controlled conditions. This means that estimates of the number of contaminated portions are likely to vary over multiple (hypothetically comparable) surveillance rounds. The V.I.s for Spain are large in an absolute sense, but not when viewed relative to the large absolute number of contaminated portions.
Total exposure to T. solium cysticerci infected pork portions by country and housing, before and after cooking scenario 1 compared
Cooking scenario 1 is described in Table 5. Fraw.prep is the fraction of a cut that is prepared in a way that allows raw consumption. For example, the esophagus may be eaten raw when it is a component of ground pork; Fprep.eaten.raw is the fraction of this ground pork that is eaten raw instead of cooked. For the tenderloin, the whole cut is prepared undercooked, so Fraw.prep is 1. The tenderloin is eaten medium/rare, so the whole cut has an Fprep.eaten.raw of 0.4. This gives a total raw fraction (Fraw.prep × Fprep.eaten.raw) for the tenderloin of 0.4. Cooking scenario 2 is based on the second approach: as described in the methods, a fixed fraction of 0.1, 0.5 or 0.9 is considered to be eaten raw.
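The per-cut raw fraction in scenario 1 is thus simply the product of the two Table 5 factors; the tenderloin values quoted above give:

# Raw fraction for the tenderloin, using the two factors quoted in the text.
F_raw_prep <- 1.0        # whole cut is prepared undercooked
F_prep_eaten_raw <- 0.4  # fraction of that preparation eaten medium/rare
F_raw <- F_raw_prep * F_prep_eaten_raw   # = 0.4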
Table 5 Cooking scenario 1: Fraction of pork cuts eaten raw
The results of cooking pork are shown in Figs. 6, 7 and 8. Figure 7 gives the total annual exposure in each country when the population eats everything raw and when it cooks the portions as estimated in scenario 1; these results are also listed in Table 6. Cooking according to scenario 1 gives a 4 times lower total exposure to infected portions.
Prevalence of T. solium cysticerci infected pork portions after different cooking scenarios
Table 6 Total exposure of the population to infected T. solium portions per country, before and after cooking the pork
The portion prevalence also decreases after cooking scenario 1 is applied. Figure 6 shows the portion prevalence before and after cooking: a 50 times smaller portion prevalence remains after cooking. This is only shown for controlled pork, but for home slaughtered pigs the relative difference between before and after cooking is the same.
Scenario 2 is compared with scenario 1 in Fig. 8 for Spain. The figure demonstrates that as the raw fraction increases (from 10 to 90%), more of the portions eaten contain viable T. solium cysticerci according to the model. Only the scenario 2 raw fraction of 0.1 yields a lower portion prevalence than cooking according to scenario 1. The figure shows only the results for pigs slaughtered under controlled conditions in Spain, since comparable results were obtained with the same scenarios for the other countries. The variability shown in Fig. 8 is limited, meaning that differences between these hypothetical cooking practices would in principle be observable with an appropriate population survey; natural variation alone will not make the scenarios indistinguishable.
The attribution of the different cuts to the total exposure of consumers is shown in Fig. 9. The muscles are responsible for 80% (V.I. 62–100%) of the infected portions, and the organs for 20% (V.I. 10–30%). The organs contributing to this 20% are the esophagus, heart and diaphragm; the other organs are not eaten raw (Table 5). There is considerable overlap between the variability intervals, which means that due to chance alone the real ordering might differ. However, cuts for which the intervals do not overlap will retain their relative ordering in the attribution.
Attribution of cuts to the total of T. solium cysticerci infected pork portions
We have built a quantitative microbiological risk assessment (QMRA) model for T. solium from pork production to consumption and implemented published data about the porcine cysticercosis prevalence, home slaughter numbers, the distribution of cysticerci in pork cuts and consumption quantities of pork. We present the results of the QMRA model for the risk of human exposure to T. solium due to consumption of pork in five European countries.
We demonstrated that the detection of T. solium cysticerci during meat inspection depends on the area of the body that is inspected and on the infection load of the carcass. The probability of finding an infected pig is low, since the reported T. solium prevalences in European countries are very low [13, 16], as is the sensitivity of meat inspection. This finding is in line with a study that evaluated meat inspection and other tests for the detection of cysticercosis [18]. We obtained the original data of that study to test our model (Dorny et al. [18]; unpublished data). The data consist of 65 pigs that were slaughtered, inspected for T. solium according to the routine meat inspection protocol in that country, and finally sliced to find all T. solium cysticerci [18]. Thirty-two pigs were infected with T. solium. We used our \( p_{\mathrm{heart}} \) and formula 2 to determine which infected pigs would be found by meat inspection according to our model and compared this to the pigs that were actually found by meat inspection. With an arbitrary cut-off of 0.5 in our model to distinguish between 'detects' and 'non-detects', our model had an error rate of 4/32. Just as meat inspection is a predictor of infection with a certain sensitivity and specificity, our model of meat inspection efficiency is a predictor of "measured infection status", with an associated sensitivity and specificity. The sensitivity of our model for meat inspection is 75%, the specificity is 100%, and the positive predictive value is 100%. The four pigs that were found infected during meat inspection but not according to our model may have been missed by chance and because of the sharp cut-off of 0.5. Additionally, those four missed pigs had fairly high numbers of cysticerci, casting doubt on the experimental outcome for those pigs. Furthermore, the meat inspection in the study of Dorny et al. [18] included the heart and other organs such as the masseter muscles, while we only included the heart. Altogether, the predictive value of our model regarding meat inspection seems very high.
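The quoted predictor metrics follow from a standard 2 × 2 confusion table. The counts below are our reading of the text (75% sensitivity with 4 missed pigs implies 16 inspection-positive pigs out of 65), so treat them as a reconstruction rather than the authors' tabulation:

# Reconstruction of the validation metrics; counts deduced from the text.
TP <- 12; FN <- 4   # inspection-positive pigs: predicted vs missed by the model
FP <- 0;  TN <- 49  # inspection-negative pigs (65 - 16), none predicted positive
sensitivity <- TP / (TP + FN)   # 0.75
specificity <- TN / (TN + FP)   # 1.00
ppv <- TP / (TP + FP)           # 1.00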
The exposure rates of home slaughtered pigs are a factor of 13.5 higher than those of pigs slaughtered under controlled conditions, as calculated from the exposure conversion, which is derived from data of only one country, Spain. A home slaughter exposure rate based on prevalence data specific to the different countries would improve the outcomes of the model, because uncontrolled housing of pigs or backyard pig keeping covers a wide range of husbandry practices, which affects the exposure of the pigs in a country. Although the exposure conversion is a substantial uncertainty in the model, the Spanish prevalence data did indicate that it is highly pertinent to take into account that home slaughtered pigs may have been exposed to a higher number of eggs in their lifetime than pigs slaughtered under controlled conditions. Furthermore, the Spanish data showed similar exposure conversions over the years, strengthening the idea that the estimate is robust.
By combining the sensitivity of meat inspection and the exposure rate, we predicted the adjusted prevalence. That the calculated adjusted prevalence is about 86 times higher than the reported prevalence is not surprising when we bear in mind the low sensitivity of meat inspection. Nonetheless, the prediction could be an overestimation because species misclassification might occur during inspection, since the reported Taenia spp. cases were (except for Portugal) not confirmed by a diagnostic tool such as polymerase chain reaction (PCR) [13, 16]. Other Taenia spp. for which pigs can serve as intermediate hosts are T. hydatigena and T. asiatica. However, there is no proof that T. asiatica is present in Europe, so we assume the contribution of this Taenia species to be negligible [32,33,34]. In addition, the porcine cysticercosis findings reported in slaughterhouses may be T. hydatigena cases, as this is a common parasite in Europe, especially in sheep raising areas. Nevertheless, the main predilection sites of T. hydatigena differ from those of T. solium, so a meat inspector should be able to distinguish between T. hydatigena and T. solium cases [35, 36].
We do realize that a bias is possible in the prevalence differences between countries. As the prevalence data depend on what is reported in the slaughterhouses, it is plausible that the countries with more comprehensive reporting systems end up with the highest prevalence. As such, the fact that Spain and Bulgaria have the highest adjusted prevalence does not necessarily mean that their prevalence is indeed highest. In the ideal situation, the conversion from reported to adjusted prevalence would have been done separately for all countries. However, we only had these data for Spain and needed to assume the conversion is the same for the other countries. This also affects the remainder of the results: because we used identical data for different countries, one can see patterns that are constant across the countries, e.g. in Fig. 4. As a consequence, we discussed the results in general terms, instead of addressing them separately for each country.
In every country, the home slaughter portion prevalence is a factor of 13 higher than the controlled slaughter portion prevalence. Regardless, the total number of infected portions was higher under controlled than home reared conditions in Germany and Spain. This can be explained by the small share of home slaughter in those countries, giving a very small total number of home slaughtered portions.
The portion size of 100 grams was chosen because the estimated consumption of pork is given in grams per capita per day in the FAO database [37]. For the five countries in the model, a consumption between 69 (Bulgaria) and 149 (Germany) grams per day has been reported. For the sake of clarity, variation in portion sizes over years and countries was not included in this risk assessment.
Since the portion prevalence is very low (Fig. 6; the highest is 0.036%), the chance that someone is exposed to more than one infected portion per year is very low, so we assume that every infected portion is eaten by a different person. For home slaughter, though, this assumption might not hold. If a family keeps some pigs for their own consumption and those pigs are all reared at the same place at the same time, all pigs might be infected, and the family then has a much higher risk of exposure to T. solium cysticerci than average. This illustrates that the portion prevalence for home slaughter is more complicated to translate into a quantitative risk at population level. The portion prevalence of home slaughtered pork was also assessed assuming meat inspection of home slaughtered carcasses. This is not shown in the results because the difference between the portion prevalences was negligible: the prevalence remained as high as without meat inspection, which we attribute to the low sensitivity of meat inspection. In a country with a higher exposure of pigs, more pigs would be found infected in slaughterhouses (as animals would potentially carry multiple cysticerci in the heart, increasing the sensitivity of meat inspection), so meat inspection would then make a difference.
Fortunately, cooking of the pork portions reduces the number of infective portions. If the scenario presented in Table 5 is a good estimate of cooking practices, the risk is decreased by a factor of 3 due to cooking. We chose scenarios where meat is either raw or perfectly cooked before being eaten, instead of a model where inactivation is a function of cooking time and temperature, as was done in a QMRA for another meat-borne parasite, Trichinella spp. [27]. Although a publication about heat inactivation of Taenia cysticerci was available, the time to inactivation was not reported, so we could not adopt these results for our model [38].
Estimating the fraction of a cut that is eaten raw is complex. Furthermore, the raw cuts are often dried, smoked or pickled with salt, and when a whole pig is slaughtered for one family, a large quantity will be frozen. Freezing for four days at -5 °C, three days at -15 °C or one day at -24 °C effectively kills cysticerci [39]. Salt pickling lowers the viability of Taenia metacestodes through changes in the osmotic potential, causing membrane rupture [40]. The other preparation methods have not been evaluated as far as we know. Thus, the raw fraction of pork cuts eaten is a limitation of this study. Even so, our model remains highly relevant, since raw meat consumption is common in European countries, although consumption preferences depend on cultural background and personal customs [41,42,43,44].
We identified a heterogeneous distribution of cysticerci in the pig carcasses in our model. The cyst density was used to calculate the share of each cut in the total of infected portions. The largest attribution after cooking comes from the tenderloin and ham. This might change when cooking is performed differently. For example, the masseter muscles currently do not add to the risk because they are always eaten thoroughly cooked according to various sources [45,46,47,48,49]. They do, however, have a very high relative cyst density, so if cooking habits change or a niche group prefers them raw, the masseters would contribute to the risk. Another factor that could change the attributions of the cuts is the cysticerci viability in the meat. Taenia cysticerci can survive for three years after experimental infection [50]. In a study of pigs that were slaughtered 26 weeks post-infection, the mean total viability of cysticerci was 99% (SD ± 1) [51]. Pigs are often slaughtered around 20 weeks of age, so the assumption of 100% viability seems reasonable. Yet, these were experimentally infected pigs that received a single high dose of eggs, while naturally infected pigs are likely exposed to eggs throughout their lives. Furthermore, backyard pigs may be slaughtered at a later age, as they do not grow as efficiently as pigs in controlled housing. Studies with naturally infected pigs show that the viability fraction depends on the total infection load of the pigs (Dorny et al. [18], unpublished results) and varies over different cuts [19]. As we did not have enough data to include cysticerci viability in the model, we assumed that all cysticerci were viable.
In some countries the exposure from controlled housing is higher than from home slaughter (e.g. Spain, Germany), even though the portion prevalence of home slaughter is higher. This is due to the much higher consumption of pork produced under controlled conditions. The average number of infected portions in the five countries assessed was 5.3 × 10³ according to the model.
To validate the model, the calculated portion prevalence must be compared with reported human taeniosis cases. However, many tapeworm cases will never be diagnosed because of the mild and vague symptoms [52]. Moreover, often only the overall number of taeniosis infections is reported, without specifying the Taenia spp. [13, 16]. Despite these concerns, in Poland from 2007 to 2009 a total of 278 human cases were reported: 180 cases were due to T. saginata and the other 98 cases (i.e. 35% of the total) were 'other tapeworms' [16]. If the other tapeworms were almost all T. solium, there were on average around 33 cases per year. This would mean that of the annual 42644 infected portions, 0.08% result in reported infections. In Romania from 2007 to 2009, 1463 taeniosis cases were reported. If in Romania 35% were also due to T. solium, around 170 cases per year would have been T. solium taeniosis cases [16]. This would imply that 0.8% of the total infected portions cause an infection. Although the difference between these countries is a factor of 10, it is not implausible when we take into consideration the earlier described food customs in Romania and the percentage of home slaughter, which is nine times higher in Romania than in Poland. As such, the number of people exposed as estimated by the QMRA model is reasonable, taking into account that the number of reported human cases is assumed to underestimate the real number of cases.
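The arithmetic behind the Poland figures can be checked directly:

# Check of the Poland arithmetic quoted above.
98 / 278          # ~0.35: share of 'other tapeworms' among the 278 cases
(98 / 3) / 42644  # ~0.00077, i.e. ~0.08% of the annual infected portions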
In conclusion, we developed a model to assess the relative exposure to T. solium in Europe, comparing pork originating from home slaughtered pigs with pork originating from pigs raised in controlled housing. Our model takes into account different stages of the food chain, from the prevalence at the pig farm to the portion prevalence on the consumer's plate. This makes it possible to look at the effect of every step in the chain on the final exposure. The most important finding is that there is still a potential risk of a T. solium infection in Europe. This risk depends first on the reported porcine cases after meat inspection, which has a very low sensitivity, especially when pigs have a low infection load; as we showed, the adjusted prevalence of T. solium is therefore much higher than reported. Secondly, the portion prevalence of pork from home slaughter is 13.83 times higher than that from pigs housed under controlled conditions; thus, home slaughter is a very important risk factor for exposure to T. solium. Finally, exposure to T. solium depends on many factors and differs per country due to pig husbandry and cooking habits. The results of the model can be improved if more information about the prevalence among pigs (controlled and home slaughtered) and consumer behaviour regarding raw meat consumption is acquired. Therefore, it would be useful if European countries developed a better monitoring system for T. solium in pigs, preferably based on a more sensitive method than visual inspection [53], with molecular confirmation of suspected findings in the slaughterhouse. In addition, a comprehensive survey of raw meat consumption would reduce uncertainty in the estimates of raw consumed portions and give a better perception of cultural differences (e.g. following the methodology of [54]). When these factors become better known, the QMRA model could support the assessment of human exposure to T. solium, both in and outside Europe.
kg:
Kilogram
NCC:
Neurocysticercosis
QMRA:
Quantitative microbial risk assessment
V.I.:
Variability interval
Yoshino K. Studies on the postembryonal development of Taenia solium. Pt. III. On the development of Cysticercus cellulosae within the definitive intermediate host. Taiwan Igakkai Zasshi J Med Assoc Formosa. 1933;32:166–9.
Taylor MA, Coop RL, Wall RL. Parasites of pigs. Veterinary Parasitology. Oxford: Blackwell Publishing Ltd; 2007. p. 343–5.
Gonzales I, Rivera JT, Garcia HH. Pathogenesis of Taenia solium taeniasis and cysticercosis. Parasite Immunol. 2016;38:136–46.
Mwanjali G, Kihamia C, Kakoko DV, Lekule F, Ngowi H, Johansen MV, et al. Prevalence and risk factors associated with human Taenia solium infections in Mbozi District, Mbeya Region, Tanzania. PLoS Negl Trop Dis. 2013;7:e2102.
Coral-Almeida M, Gabriel S, Abatih EN, Praet N, Benitez W, Dorny P. Taenia solium human cysticercosis: a systematic review of sero-epidemiological data from endemic zones around the world. PLoS Negl Trop Dis. 2015;9:e0003919.
Cao W, van der Ploeg CP, Xu J, Gao C, Ge L, Habbema JD. Risk factors for human cysticercosis morbidity: a population-based case-control study. Epidemiol Infect. 1997;119:231–5.
Sanchez AL, Medina MT, Ljungstrom I. Prevalence of taeniasis and cysticercosis in a population of urban residence in Honduras. Acta Trop. 1998;69:141–9.
Schantz PM, Sarti E, Plancarte A, Wilson M, Criales JL, Roberts J, et al. Community-based epidemiological investigations of cysticercosis due to Taenia solium: comparison of serological screening tests and clinical findings in two populations in Mexico. Clin Infect Dis. 1994;18:879–85.
Eurostat. Population on 1 January by age and sex, 2013. http://ec.europa.eu/eurostat/web/products-datasets/-/ef_lspigaa. Accessed 14 Sept 2017.
European Commission. Regulation (EC) No. 854/2004 of the European Parliament and of the Council laying down specific rules for the organisation of official controls on products of animal origin intended for human consumption. In: L 226/88. Official Journal of the European Union; 2004. https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2004:226:0083:0127:EN:PDF.
EFSA. Scientific report on harmonised epidemiological indicators for public health hazards to be covered by meat inspection of swine. EFSA J. 2011;9:2371.
Devleesschauwer B, Allepuz A, Dermauw V, Johansen MV, Laranjo-Gonzalez M, Smit GS, et al. Taenia solium in Europe: still endemic? Acta Trop. 2017;165:96–9.
Laranjo-Gonzalez M, Devleesschauwer B, Trevisan C, Allepuz A, Sotiraki S, Abraham A, et al. Epidemiology of taeniosis/cysticercosis in Europe, a systematic review: western Europe. Parasit Vectors. 2017;10:349.
Zammarchi L, Strohmeyer M, Bartalesi F, Bruno E, Munoz J, Buonfrate D, et al. Epidemiology and management of cysticercosis and Taenia solium taeniasis in Europe, systematic review 1990–2011. PLoS One. 2013;8:e69537.
Fabiani S, Bruschi F. Neurocysticercosis in Europe: still a public health concern not only for imported cases. Acta Trop. 2013;128:18–26.
Trevisan C, Sotiraki S, Laranjo-Gonzalez M, Dermauw V, Wang Z, Karssin A, et al. Epidemiology of taeniosis/cysticercosis in Europe, a systematic review: eastern Europe. Parasit Vectors. 2018;11:569.
van Roon AM, Vieira-Pinto M, Trevisan C, Gabriel S, Thys S, Vang Johansen M, et al. Identification of practices of home slaughtering of livestock and official meat inspection management in slaughterhouses in the different EU countries to assess the risk of T. solium and T. saginata. In: Cystinet Working Group and Management Committee Meeting; 2015. https://www.cystinet.org/wp-content/uploads/2014/04/CYSTINET_PARIS_ABSTRACTS_FINAL.pdf.
Dorny P, Phiri IK, Vercruysse J, Gabriel S, Willingham AL 3rd, Brandt J, et al. A Bayesian approach for estimating values for prevalence and diagnostic test characteristics of porcine cysticercosis. Int J Parasitol. 2004;34:569–76.
Boa ME, Kassuku AA, Willingham AL 3rd, Keyyu JD, Phiri IK, Nansen P. Distribution and density of cysticerci of Taenia solium by muscle groups and organs in naturally infected local finished pigs in Tanzania. Vet Parasitol. 2002;106:155–64.
Lindsey JK. Poisson distribution. Introductory Statistics. New York, USA: Oxford University Press Inc.; 1995. p. 102–106.
World Organisation for Animal Health (OIE). World Animal Health Information Database (WAHIS Interface) Disease Information. 2013. http://www.oie.int/wahis_2/public/wahid.php/Diseaseinformation/. Accessed 17 Sept 2017.
Oleleu AM, Cozma A, Toma-Naic A, Oleleu I, Cozma V. Detection of high endemic and zoonotic risk areas regarding the infestation with Taenia solium larvae in pigs in Romania. Bulletin UASVM CN. 2016;73:214–9.
Eurostat. Slaughtering in slaughterhouses - annual data. 2017. http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=apro_mt_pann&lang=en. Accessed 21 Aug 2017.
Eurostat. Estimates of slaughtering, other than in slaughterhouses - annual data. 2017. http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=apro_mt_sloth&lang=en. Accessed 21 Aug 2017.
Viljoen NF. Cysticercosis in swine and bovines, with special reference to South African conditions. Onderstepoort J Vet. 1937;9:337–570.
Minervini S, Accogli G, Pirone A, Graic JM, Cozzi B, Desantis S. Brain mass and encephalization quotients in the domestic industrial pig (Sus scrofa). PLoS One. 2016;11:e0157378.
Franssen F, Swart A, van der Giessen J, Havelaar A, Takumi K. Parasite to patient: a quantitative risk model for Trichinella spp. in pork and wild boar meat. Int J Food Microbiol. 2017;241:262–75.
Skewes O, Morales R, Gonzalez F, Lui J, Hofbauer P, Paulsen P. Carcass and meat quality traits of wild boar (Sus scrofa s. L.) with 2n = 36 karyotype compared to those of phenotypically similar crossbreeds (2n = 37 and 2n = 38) raised under the same farming conditions. 1. Carcass quantity and meat dressing. Meat Sci. 2008;80:1200–4.
Marcoux M, Pomar C, Faucitano L, Brodeur C. The relationship between different pork carcass lean yield definitions and the market carcass value. Meat Sci. 2007;75:94–102.
Monziols M, Collewet G, Bonneau M, Mariette F, Davenel A, Kouba M. Quantification of muscle, subcutaneous fat and intermuscular fat in pig carcasses and cuts by magnetic resonance imaging. Meat Sci. 2006;72:146–54.
R Development Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2017. https://www.R-project.org/
Ale A, Victor B, Praet N, Gabriel S, Speybroeck N, Dorny P, et al. Epidemiology and genetic diversity of Taenia asiatica: a systematic review. Parasit Vectors. 2014;7:45.
Galan-Puchades MT, Fuentes MV. Updating Taenia asiatica in humans and pigs. Parasitol Res. 2016;115:4423–5.
Singh SK, Prasad KN, Singh AK, Gupta KK, Chauhan RS, Singh A, et al. Identification of species and genetic variation in Taenia isolates from human and swine of North India. Parasitol Res. 2016;115:3689–93.
Nguyen MT, Gabriel S, Abatih EN, Dorny P. A systematic review on the global occurrence of Taenia hydatigena in pigs and cattle. Vet Parasitol. 2016;226:97–103.
Braae UC, Kabululu M, Normark ME, Nejsum P, Ngowi HA, Johansen MV. Taenia hydatigena cysticercosis in slaughtered pigs, goats, and sheep in Tanzania. Trop Anim Health Prod. 2015;47:1523–30.
FAOSTAT. Food Supply - Livestock and Fish Primary Equivalent. 2017. http://www.fao.org/faostat/en/#data/CL. Accessed 23 Aug 2017.
Buttar BS, Nelson ML, Busboom JR, Hancock DD, Walsh DB, Jasmer DP. Effect of heat treatment on viability of Taenia hydatigena eggs. Exp Parasitol. 2013;133:421–6.
Sotelo J, Rosas N, Palencia G. Freezing of infested pork muscle kills cysticerci. JAMA. 1986;256:893–4.
Rodriguez-Canul R, Argaez-Rodriguez F, de la Gala DP, Villegas-Perez S, Fraser A, Craig PS, et al. Taenia solium metacestode viability in infected pork after preparation with salt pickling or cooking methods common in Yucatan, Mexico. J Food Prot. 2002;65:666–9.
Neghina R. Trichinellosis, a Romanian never-ending story. An overview of traditions, culinary customs, and public health conditions. Foodborne Pathog Dis. 2010;7:999–1003.
Neghina R, Neghina AM, Marincu I, Iacobiciu I. Human taeniasis in western Romania and its relationship to multicultural food habits and influences. Foodborne Pathog Dis. 2010;7:489–92.
Bremer V, Bocter N, Rehmet S, Klein G, Breuer T, Ammon A. Consumption, knowledge, and handling of raw meat: a representative cross-sectional survey in Germany, March 2001. J Food Prot. 2005;68:785–9.
Kurdova-Mintcheva R, Jordanova D, Ivanova M. Human trichinellosis in Bulgaria - epidemiological situation and trends. Vet Parasitol. 2009;159:316–9.
Bereiding varkensvlees. http://www.slagerijvdwijngaart.nl/bereiding-varkensvlees. Accessed 2 Oct 2017.
Know your cuts: Pork. http://www.marketmeats.com/2013/06/06/know-cuts-pork/. Accessed 2 Oct 2017.
Pork cut chart. http://www.ontariopork.on.ca/recipes/Cut-Chart. Accessed 2 Oct 2017.
Pasture raised pork. http://boulderridgefarm.net/index.php/pasture-raised-meat/pork. Accessed 2 Oct 2017.
Raw-cooked meat products. http://www.fao.org/docrep/010/ai407e/AI407E12.htm. Accessed 2 Oct 2017.
Hird DW, Pullen MM. Tapeworms, meat and man: a brief review and update of cysticercosis caused by Taenia saginata and Taenia solium. J Food Prot. 1979;42:58–64.
Sikasunge CS, Johansen MV, Willingham AL 3rd, Leifsson PS, Phiri IK. Taenia solium porcine cysticercosis: viability of cysticerci and persistency of antibodies and cysticercal antigens after treatment with oxfendazole. Vet Parasitol. 2008;158:57–66.
Dupuy C, Morlot C, Gilot-Fromont E, Mas M, Grandmontagne C, Gilli-Dunoyer P, et al. Prevalence of Taenia saginata cysticercosis in French cattle in 2010. Vet Parasitol. 2014;203:65–72.
Dorny P, Vallée I, Alban L, Boes J, Boireau P, Boué F, et al. Development of harmonised schemes for the monitoring and reporting of Cysticercus in animals and foodstuffs in the European Union. Scientific Report, European Food Safety Authority; 2010.
Chardon J, Swart A. Food consumption and handling survey for quantitative microbiological consumer phase risk assessments. J Food Prot. 2016;79:1221–33.
The authors thank Frits Franssen for fruitful discussions on QMRA of foodborne parasites.
This work was carried out in collaboration with COST Action TD1302, the European Network on Taeniosis/Cysticercosis, CYSTINET.
The data supporting the results and conclusions of this article are included within the article. The raw datasets and script used to run the model are available from the corresponding author upon reasonable request.
National Institute for Public Health and the Environment (RIVM), Center for Infectious Disease Control, P.O. Box 1, 3720, BA, Bilthoven, The Netherlands
Marina Meester, Arno Swart, Huifang Deng, Annika van Roon & Joke van der Giessen
Department of Biomedical Sciences, Institute of Tropical Medicine, Nationalestraat 155, 2000, Antwerp, Belgium
Chiara Trevisan & Pierre Dorny
Department of Veterinary Public Health and Food Safety, Faculty of Veterinary Medicine, Ghent University, Salisburylaan 133, 9820, Merelbeke, Belgium
Sarah Gabriël
Department of Veterinary Medicine, Universidade de Trás-os-Montes e Alto Douro, UTAD, Quinta de Prados, 5000-801, Vila Real, Portugal
Madalena Vieira-Pinto
CECAV, Centro de Ciência Animale Veterinária, Universidade de Trás-os-Montes e Alto Douro, Quinta de Prados, 5000-801, Vila Real, Portugal
Department of Veterinary and Animal Sciences, Faculty of Health and Medical Sciences, University of Copenhagen, Dyrlægevej 100, 1870, Frederiksberg C, Denmark
Maria Vang Johansen
MM performed the data collection, participated in developing the model and wrote the final manuscript. AS and HD designed the model and AS created all figures. AvR developed and analysed the questionnaire sent to all European countries. JvdG initiated and supervised the study. All authors were involved in the design of the study and read and approved the final manuscript.
Correspondence to Joke van der Giessen.
Meester, M., Swart, A., Deng, H. et al. A quantitative risk assessment for human Taenia solium exposure from home slaughtered pigs in European countries. Parasites Vectors 12, 82 (2019) doi:10.1186/s13071-019-3320-3
Taenia solium
Cysticercosis
Portion prevalence
Nano Express
Hybrid solar cell on a carbon fiber
Dmytro A. Grynko1,
Alexander N. Fedoryak1,
Petro S. Smertenko1,
Oleg P. Dimitriev1,
Nikolay A. Ogurtsov2 &
Alexander A. Pud2
Nanoscale Research Letters volume 11, Article number: 265 (2016)
In this work, a method to assemble nanoscale hybrid solar cells in the form of a brush of radially oriented CdS nanowire crystals around a single carbon fiber is demonstrated for the first time. A solar cell was assembled on a carbon fiber with a diameter of ~5–10 μm which served as a core electrode; inorganic CdS nanowire crystals and organic dye or polymer layers were successively deposited on the carbon fiber as active components resulting in a core-shell photovoltaic structure. Polymer, dye-sensitized, and inverted solar cells have been prepared and compared with their analogues made on the flat indium-tin oxide electrode.
Hybrid organic-inorganic solar cells based on organic molecules and inorganic semiconductor crystals, which serve as electron donor and electron acceptor, respectively, attract great attention due to the mutual advantages of both materials used in the same device [1]. One of the key factors influencing the efficient operation of hybrid photovoltaic devices is the interface area between the counterparts: the larger the interface area, the larger the number of excitons that dissociate into free carriers per amount of absorbed photons. Therefore, the design of the interface geometry is of crucial importance for charge carrier generation and collection. The evolution of typically applied geometries/morphologies is illustrated by the scheme in Fig. 1.
Illustration of evolution of a flat heterojunction (HJ) for charge separation into various HJ configurations leading to the increased interface area: textured, nanorod/nanowire/nanoparticle network, branched core-shell morphology
The advantages of the nanowire (NW) or nanorod morphology compared to the textured or flat one were demonstrated in our previous works [2–5]. In particular, application of semiconductor NW arrays in solar cells leads to better light absorption, due to reduced reflection and stronger light trapping (the so-called shadow effect), and to improved charge collection from the active layer, since charge carriers move straight to the respective electrode through a NW crystal [6].
Among the typically used inorganic NW components of hybrid PV cells, CdS nanocrystals and their aligned arrays have attracted much attention due to their relative cheapness, comparatively easy preparation, and adhesion to different surfaces [7, 8]. Over the past few years, tremendous efforts have been made to decrease the size and to control the shape of CdS nanocrystals, and a number of new methods have been reported for the synthesis of CdS nanocrystals integrated into low-dimensional nano- and microstructures [9, 10]. Specifically, a significant extension of the surface area of a CdS NW array can be obtained by using a branched morphology. This approach has resulted in effective application of branched and hyper-branched semiconductor nanocrystals in energy conversion devices [11], sensors [12], electronic logic gates [13], etc. The advantage of branched morphologies stems from the increased surface of such nanocrystals and therefore from the higher contribution of processes at the interface between the nanocrystal and the environment. Accordingly, modern trends in photovoltaics show a gradual change from the flat morphology of hybrid heterojunctions to the textured one and further to the branched NW configuration (Fig. 1).
Although the solar cells based on CdS nanocrystals usually do not demonstrate superior power conversion efficiency (PCE) (Table 1), this material is convenient to study as a proof of concept for new cell geometries and to model hybrid interface properties [4, 14–16] due to the abovementioned advantages.
Table 1 Overview of some recent results worldwide for PV cells based on CdS.
In this work, we discuss an original method for the design of nanoscale hybrid solar cells of four different types based on a single carbon fiber (CF) as a core electrode supporting the active layers of the solar cell in the form of successive shells. It should be noted that fiber-shaped solar cells have attracted great attention recently in view of their potential integration into large-scale and low-cost textile and wearable electronic devices. It has been shown that carbon-based materials can be used both as a core electrode [17, 18] and as a counter-electrode [19–22] in such solar cells. Here, we use a novel approach to construct a shell in the form of a brush of radially oriented CdS NWs around a single CF. In such a cell, the flexible hybrid core-shell CF-CdS nanobrush serves as the inorganic acceptor component, whereas an organic shell of Zn phthalocyanine (ZnPc) or poly(3-hexylthiophene) (P3HT) is used as the donor light-absorbing overlayer.
Single carbon fibers (CFs; diameter about 5–10 μm, length 15–30 mm) were taken from the commercial carbon cloth LU-3 (Ukraine), produced by carbonization of polyacrylonitrile cloth and characterized by a Young's modulus E ~ 250 GPa and a tensile strength of 2.5–3.0 GPa. The CF separation was handled under an optical microscope with the help of a homemade microinstrument. The resistivity of the CF was ca. 3.8 × 10⁻³ Ohm · cm. Studies of fiber flexibility showed that a single CF could be bent by 180° without cracking at a bending radius as small as 500 μm. CdS powder of chemical grade (purity 99.998%) was used for the growth of NW crystals on the surface of a single CF. Zinc 2,9,16,23-tetra-tert-butyl-29H,31H-phthalocyanine (ZnPc-4R) (Sigma-Aldrich) or P3HT (Rieke Metals) served as the organic donor counterpart. To prepare the top electrode, poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) (PEDOT:PSS) (Aldrich) was drop-cast from a 1.3 wt% water dispersion.
Preparation of the CdS/CF Nanobrush Structure
Details of the growth of the CdS NW array on CF are given elsewhere [23]. In short, the synthesis was performed by the vapor-solid (VS) condensation technique [10] in a high-temperature reactor inside a vacuum chamber, at temperatures between 650 and 750 °C with a base pressure of ~10⁻⁵ Torr. No gold or other metal islands were used as nucleation centers/seeds for nanocrystal growth; the CdS nanocrystals were grown directly on the bare CF surface (Fig. 2). The adhesion of CdS crystals to the CF was checked by electrical measurements, which showed an ohmic or quasi-ohmic character of the contact between CdS and CF [17].
SEM images of (a) a bare CF and (b, c) the CdS NW array on a single CF under different magnifications
Solid-State Dye-Sensitized Solar Cell (SSDSSC)
A layer of ZnPc-4R was drop-cast from a 1 wt% ethanol solution onto the CdS nanobrush surface to prepare the core-shell CF/CdS/ZnPc-4R active heterojunction layer, followed by deposition of the PEDOT:PSS top electrode to form the SSDSSC. For the electrical measurements, indium pads with corresponding leads were clamped directly to a metallic holder attached to the CF and to the PEDOT:PSS counter-electrode, respectively.
Electrochemical Dye-Sensitized Solar Cell (DSSC)
A drop of Na2S-S electrolyte (10 μl) was cast onto the same CF/CdS/ZnPc-4R nanobrush structure and covered by a glassy carbon electrode via a 15-μm PTFE spacer.
Solid-State Polymer-Sensitized Solar Cell (SSPSSC)
P3HT was drop-cast from a 1 wt% chlorobenzene solution onto the CF/CdS nanobrush array, followed by annealing for 10 min at 110 °C under an argon atmosphere. It should be noted that the polymer deposition can somewhat damage the fragile CdS NWs (Fig. 2). Analogous PV cells were prepared on a flat indium-tin-oxide (ITO) surface by a similar successive deposition of the CdS NW array, the organic overlayer, and the PEDOT:PSS electrode, in order to compare the PV performance of cells of different geometry but similar CdS NW arrays.
Inverted Solar Cell (ISC)
A mixture of P3HT and the fullerene derivative [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) (1:2 molar ratio) was drop-cast from a 1 wt% chlorobenzene solution onto the CF/CdS nanobrush array, followed by annealing for 10 min at 110 °C under an argon atmosphere. Analogous PV cells were prepared on the flat ITO surface by the same successive deposition of the CdS NW array, organic overlayer, and PEDOT:PSS electrode for comparison.
I-V measurements were performed using an HP 4140B source meter interfaced to a computer. White light illumination of the samples was provided by a 50-W halogen lamp. The results below refer to a light intensity of 100 mW/cm² unless indicated otherwise. The I-V curves were processed by a differential approach to analyze their fine structure [24–26]. This approach allowed us (i) to determine the differential slope α according to the equation
$$ \alpha (V)=\frac{d(\lg I)}{d(\lg V)}=\frac{dI}{dV}\times \frac{V}{I}; $$
(ii) to describe the I-V curves by the dependence
$$ I(V)\sim {V}^{\alpha } $$
(iii) to determine the differential slope of the second order γ according to the equation
$$ \gamma (V)=\frac{d\left( \lg \alpha \right)}{d\left( \lg V\right)}=\frac{d\alpha }{dV}\times \frac{V}{\alpha }; $$
(iv) to describe the I-V curves by the dependence
$$ I(V)\sim \exp \left({V}^{\gamma}\right); $$
(v) to determine the ideality factor η by substitution of current density from the diode equation
$$ j(V)={j}_{\mathrm{S}}\left( \exp \left\{\frac{eV}{\eta kT}\right\}-1\right)\cong {j}_{\mathrm{S}} \exp \left\{\frac{eV}{\eta kT}\right\} $$
to Eq. (1), which results in
$$ \eta =\frac{e}{kT}\cdot \frac{V}{\alpha }; $$
where k is the Boltzmann constant, e the electron charge, and T the absolute temperature.
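As a numerical illustration of this differential approach, the following R sketch evaluates Eqs. (1), (3) and (6) by finite differences; the synthetic diode curve, step size and temperature (kT/e taken as 0.02585 V) are our assumptions, and measured arrays would replace them in practice.

# Differential analysis of an I-V curve; a synthetic diode with eta = 2 stands
# in for measured data.
V <- seq(0.02, 0.7, by = 0.02)
I <- 1e-9 * (exp(V / (2 * 0.02585)) - 1)          # synthetic forward branch
alpha <- diff(log10(I)) / diff(log10(V))          # Eq. (1), finite differences
gamma <- diff(log10(alpha)) / diff(log10(V[-1]))  # Eq. (3)
eta <- (1 / 0.02585) * V[-1] / alpha              # Eq. (6); ~2 once V >> kT/e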
The major charge carrier type in the CF was determined by a hot-probe experiment. In this method, a bundle of CFs was fixed by two crocodile clamps and then heated from one side while the potential difference between the clamps was measured. The sign of the potential difference indicates the main type of charge carriers.
Morphologies of the samples were studied by scanning electron microscopy (SEM) using JEOL JSM35 and JXA-8200 instruments and by an ULAB XY-B2 optical microscope.
Needle-like CdS crystals were obtained on the surface of the CF (Fig. 2). Their growth mechanism has been discussed in detail elsewhere [10]. In particular, it was shown that the formation of CdS NWs on the CF proceeds through the vapor-solid (VS) mechanism due to adsorption of the reactive gas phase directly on dangling bonds, polar groups, defects, etc., along the CF surface. In this way, a rather robust contact of the CdS nanocrystals to the CF can be formed, resulting in a quasi-ohmic behavior of the CF/CdS NW heterostructure. From the SEM images of the heterostructures (Fig. 2), the average diameter and length of the needle-like CdS crystals were estimated to be 300–700 nm and up to 10 μm, respectively; in some cases, the crystal length reached 50 μm. The estimated surface density was several CdS NW crystals per square micron.
The prepared CF/CdS nanobrush structure served as the electron acceptor component of the PV cell and was then covered by an additional layer of donor organic material (ZnPc-4R, P3HT or P3HT:PCBM), resulting in penetration of the donor material into the porous CdS structure and formation of bulk heterojunction (BHJ) micron-sized solar cells (Fig. 3). The different types of hybrid CF/CdS nanobrush cells have somewhat different principles of operation, in particular a different role of the CdS layer in each cell type, and therefore different PV performances can be expected. In the SSPSSC and SSDSSC, CdS acts as an electron acceptor, while in the DSSC and ISC assemblies it plays the role of an electron-selective (hole-blocking) layer directing electrons from the organic counterpart to the cathode. In the SSPSSC, SSDSSC and DSSC, an exciton dissociates at the organic-inorganic CdS interface, while in the ISC an exciton dissociates at the organic-organic interface, followed by drift of electrons to the CdS layer. More details about the operation of PV cells of the same compositions but assembled on flat supporting electrodes can be found elsewhere [4]. The PV performance was found to depend both on the type of contact (solid or liquid) of the active BHJ structure with the counter hole-collecting electrode (anode) and on the organic donor material. In particular, the SSDSSC (CF/CdS/ZnPc-4R/PEDOT:PSS) showed V_oc of ~0.08 V and I_sc of 20 μA/cm² (Fig. 4), while the same charge-generating hybrid structure CF/CdS/ZnPc-4R in the electrochemical DSSC gave somewhat worse PV characteristics, with V_oc of 0.04 V and I_sc of ~12 μA/cm² (Fig. 5). However, both the photocurrent and the open-circuit voltage of the SSDSSC degraded rapidly during the first minutes of measurements under illumination, by factors of ~3 and ~2, respectively, most probably due to photooxidation processes, since CdS is known as a strong catalyst under certain conditions [27, 28]. The PV parameters of the above cells are given here after the transient processes have practically finished; these parameters for the SSDSSC then remained stable for at least 1 week. It should be noted that the photocurrent density and PCE of the CF cells can be evaluated only approximately because of the developed surface of the CdS/dye interface, whose area could not be calculated exactly. The surface area through which the photocurrent was measured was roughly evaluated as the product of the CF diameter and the CF length covered by the PEDOT:PSS counter-electrode.
The assembled PV device: 1 CF, 2 supporting conductive block, 3 ITO/PEDOT:PSS counter-electrode. The insert shows a part of the CF/CdS NW assembly after soaking with the polymer solution; some NWs become broken and the arrows show CF thickening due to the polymer adsorption
a I-V characteristics and b–e α(V) dependence of SSDSSC (CF/CdS/ZnPc-4R/PEDOT:PSS): squares correspond to positive and circles to negative potential on CF, respectively; black curves correspond to the dark and red ones to illumination conditions, respectively
a I-V characteristics and b–e α(V) dependence of DSSC (CF/CdS/ZnPc-4R/glassy carbon): squares correspond to positive and circles to negative potential on CF, respectively; black curves correspond to the dark and red ones to illumination conditions, respectively
It should be noted that the SSDSSC based on the hyper-branched core-shell morphology yielded a PCE increased by more than two orders of magnitude compared with the cells prepared on the flat ITO electrode, which had a PCE of less than 10⁻⁵% (Table 2). We suggest that the improved PV performance of the core-shell SSDSSC is primarily due to the increased CdS-dye interface area of the hyper-branched morphology, which is more prominent when it grows in the radial direction than on a flat surface, and because the insulating dye layer better prevents short-circuiting in the SSDSSC, in contrast to the polymer-containing cells discussed later. However, the performance of the SSDSSC may also be overestimated owing to a possibly underestimated surface area in the calculation of the photocurrent density, as discussed above. Nevertheless, we expect that future optimization of the cell structure, i.e., application of a hole-transporting layer (HTL) and a better contact of the top electrode, will further improve the PV performance of solar cells of this type.
Table 2 Comparison of PV performance of the different types of CdS NW array solar cells prepared on CF core and flat ITO electrode, respectively.
The SSPSSC (CF/CdS/P3HT/PEDOT:PSS) showed poorer characteristics, with V_oc of 0.04 V and I_sc of about 1 μA/cm² (Fig. 6a), far below the corresponding characteristics of the counterpart PV cell based on the flat ITO electrode (Table 2). At the same time, the ISC (CF/CdS/P3HT:PCBM/PEDOT:PSS, measured under a light intensity of 11 mW/cm²) showed V_oc of 0.062 V and I_sc of 120 μA/cm² (Fig. 6b). Although this performance is poorer compared with the best P3HT:PCBM-based PV cells known in the literature, it is comparable with our flat counterpart PV cell of the same composition (Table 2). Therefore, an obvious advantage of the core-shell morphology over the flat one was found only for the SSDSSC, which showed superior characteristics compared with the reference CdS NW cell prepared on the flat ITO electrode, with V_oc of 0.08 V and I_sc of 20 μA/cm², while the fill factor of all developed cells was approximately the same, in the range of 0.22 to 0.30 (Table 2).
I-V characteristics of a SSPSSC (CF/CdS/P3HT/PEDOT:PSS) and b ISC (CF/CdS/P3HT:PCBM/PEDOT:PSS): squares correspond to positive and circles to negative potential on CF, respectively; black curves correspond to the dark and red ones to illumination conditions, respectively. Illumination was 11 mW/cm2 for b
Analysis of the I-V characteristics was performed for the related SSDSSC and DSSC structures (which have the same charge-generating CF/CdS/ZnPc-4R heterostructure), as illustrated in Figs. 4 and 5. In most cases, the I-V curves follow a power-law behavior, except for the currents under illumination in the range from 0 to 0.2 V, where the I-V curve can be described by an exponential with γ(V) = 1 and α(V) = 12·V (Fig. 4c, Eqs. (3) and (5)).
Formally, we can also describe the I-V curve by an exponential with γ(V) = 1 and α(V) = 3.95·V in the range between 0.2 and 0.5 V (Fig. 5b) with negative potential on the CF, and between 0.5 and 0.7 V with positive potential on the CF, under illumination and in the dark (Fig. 4c). A large part of the α curves follows the power dependence with α = 2 (Fig. 4b, e), which corresponds to monomolecular recombination with p >> n, i.e., the concentration of injected charge carriers in the structure is not sufficient for bimolecular recombination with p ≈ n, which corresponds to α = 1.5. The regime of bimolecular recombination is essential for solar cells, where both types of charge carriers contribute equally to the photocurrent. Such behavior is seen only in a very small range of voltages, from 0.1 to 0.2 V (Fig. 4b, e). Here, the formal approximation by an exponential with α = 3.95·V (Fig. 4b) or even with α = 12·V gives ideality factors (according to Eq. (6)) of η = 9.77 and η = 3.21, respectively, which is far from the ideal diode characteristic. So, for a better PV performance of this type of structure it would be necessary, first, to improve the injection of both types of charge carriers from the contacts and, second, to decrease the bulk resistance, which would improve the ideality factor of the structure.
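For reference, these ideality factors follow directly from Eq. (6) when α(V) is linear in V, α = a·V, so that V/α = 1/a; the short check below assumes room temperature with kT/e ≈ 0.0259 V:

# eta = (e/kT) * V / alpha = 1 / (a * kT/e) for alpha = a * V.
kT_e <- 0.0259       # V, room temperature (assumed)
1 / (3.95 * kT_e)    # ~9.77, as quoted for the SSDSSC
1 / (12 * kT_e)      # ~3.22, quoted as 3.21
1 / (35 * kT_e)      # ~1.10, the DSSC dark value discussed next
1 / (32 * kT_e)      # ~1.21, the DSSC value under illumination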
In the case of the DSSC structure (Fig. 5), the current with negative potential on the CF can be described by an exponential function in the range from 0 to 0.1 V. Here the I-V curve follows an exponential with γ(V) = 1 and α(V) = 35·V in the dark and α(V) = 32·V under illumination. This gives ideality factors (according to Eq. (6)) of η = 1.10 and η = 1.2, respectively, which is quite close to the ideal diode I-V characteristic described by Eq. (5). At higher voltages, the current follows a power function with α = 3, which corresponds to high injection into a dielectric medium, when the concentration of injected charge carriers greatly exceeds the bulk one [25, 29]. Thus, the injection behavior in this case is better than in the SSDSSC (Fig. 4). With positive potential on the CF (Fig. 5d), there are saturation regions with α = 0.5. Therefore, in this case there is an almost ideal diode behavior of the I-V curves. To improve the PV performance of this structure, a substantial decrease of the bulk resistance is therefore necessary.
The above analysis of the I-V curves suggests that in both nanobrush structures (SSDSSC and DSSC), the interface of the CdS NWs with ZnPc-4R plays the major role in the generation of photoexcited charge carriers. This suggestion agrees well with the fact that at high voltages the difference between dark and light currents is very small (Figs. 4 and 5), while notable photosensitivity is observed only at small voltages (up to ~0.08 V). Therefore, despite the better overall performance of the SSDSSC found in this study, the DSSC structure displays better PV behavior from the viewpoint of charge injection and could therefore be more attractive for further improvement for PV application.
It should be noted that here we demonstrate only a proof of concept of new nanoscale solar cells using a commercial carbon textile (carbon cloth in our case). Naturally, the cells should be optimized, because several factors limit their performance. First, the application of a CF itself implies that its work function should be consistent with the energy levels of the other materials in the solar cell assembly. In particular, the difference in the work functions of the anode and the cathode is the driving force (the built-in potential) in BHJ solar cells, which moves the electrons and holes in opposite directions. The exact work function of CF is not known, but it should be close to that of other carbon-related materials. It is known, for example, that the work function of carbon nanotubes is about 5.0 eV, while that of highly oriented pyrolytic graphite is 4.8–4.9 eV [30, 31]. Therefore, one can suggest that the CF material has a work function around 5 eV as well. However, this value is close to that of PEDOT:PSS (5.0 eV), which is the counter-electrode in the above system. There is thus only a very small driving force separating electrons and holes in a PV cell of this type, which can be the reason for the observed small open-circuit voltages. Moreover, the hot-probe experiment revealed that the major charge carriers in the CF used are holes, which can worsen the collection of electrons from CdS and, since the major charge carriers in the PEDOT:PSS counter-electrode are holes as well, generally leads to disadvantages in charge collection. On the other hand, this situation suggests a way to solve the above problem and to improve the PV performance: replacement of CdS by a p-type semiconductor, for example CdTe, together with replacement of the PEDOT:PSS counter-electrode by one possessing a low work function and electrons as the major charge carriers.
The other drawback of the above system is the loosely distributed CdS array on the CF surface, possessing large pores, which allows the polymer to penetrate deeply into the CdS nanobrush structure (see inset in Fig. 3), to contact the CF electrode, and thus to contribute to undesirable current leakage. This drawback is clearly seen upon comparison of the performance of solar cells prepared on the CF and on the flat ITO electrode. The latter geometry provides a tighter CdS layer and better mitigates the short-circuiting problem, which is particularly important in cells where a polymer layer (SSPSSC and ISC) or a liquid contact (DSSC) is used. As a result, a significant increase in the open-circuit voltage can be achieved (Table 2). Therefore, deposition of a tighter CdS shell layer around the CF core electrode is necessary to solve the above problem. Finally, there is a problem with establishing a reliable contact between the organic layer and the top PEDOT:PSS electrode. We have found that casting the PEDOT:PSS electrode from solution leads to short-circuiting, while a PEDOT:PSS film mechanically pressed onto the top of the assembly does not provide a tight contact. These contact problems substantially affect the reproducibility of the device, with the photocurrent varying within one order of magnitude depending on the contact quality (cast or pressed, etc.).
We hope that solving the above problems will result in a flexible fiber-based PV cell with better performance.
In this work, we have demonstrated textile-based hybrid solar cells using inorganic CdS nanocrystals and an organic dye or polymer as photoactive components. As the textile component, a conductive CF taken from carbon cloth was used. We have shown that a single CF can serve as an aligned core electrode for the growth of a CdS NW array, followed by deposition of an organic donor layer (ZnPc-4R, P3HT or P3HT:PCBM), resulting in active BHJ layers in new micron-sized core-shell PV structures.
It was found that the behavior of charge carriers in the SSDSSC structure mainly obeys a power-law dependence with α = 2, which corresponds to first-order (monomolecular) recombination with p >> n, meaning that the concentration of injected minority charge carriers in the structure is insufficient. In the case of the DSSC structure, the charge carrier behavior follows a cubic dependence with α = 3. In both structures based on hyperbranched CdS nanobrushes, the interface of the CdS nanowires with ZnPc-4R plays the main role in the formation of photogenerated charge carriers.
Analysis of the I-V curves allowed us to suggest ways of optimizing the above PV structures, namely, to substantially decrease the bulk resistance in the SSDSSC and DSSC and to improve the injection of both types of charge carriers from the contacts in the case of the SSDSSC. In the SSPSSC and ISC, the use of the polymer layer requires a tighter CdS layer around the core CF electrode to avoid short-circuiting problems. Replacement of CdS with a p-type semiconductor would also be useful for future experiments.
Although considerable work should still be undertaken to obtain better performance of the respective PV cells, in principle, the developed technology can be considered a major step towards "photovoltaics on curtains" [32].
Nozik AJ, Beard MC, Luther JM, Law M, Ellingson RJ, Johnson JC (2010) Semiconductor quantum dots and quantum dot arrays and applications of multiple exciton generation to third-generation photovoltaic solar cells. Chem Rev 110:6873–6890
Grynko DO, Fedoryak OM, Smertenko PS, Ogurtsov NA, Pud AA, Noskov YV, Dimitriev OP (2013) Application of CdS nanostructured layer in inverted solar cells. J Phys D Appl Phys 46:495114
Grynko DO, Fedoryak OM, Smertenko PS, Ogurtsov NA, Pud AA, Noskov YV, Dimitriev OP (2014) Hybrid solar cells based on CdS nanowire arrays. Adv Mat Res 854:75–82
Grynko DO, Fedoryak OM, Smertenko PS, Ogurtsov NA, Pud AA, Noskov YV, Dimitriev OP (2015) Multifunctional role of nanostructured CdS interfacial layers in hybrid solar cells. J Nanosci Nanotechnol 15:752–758
Kislyuk VV, Dimitriev OP (2008) Nanorods and nanotubes for solar cells. J Nanosci Nanotechnol 8:131–148
Garnett EC, Brongersma ML, Cui Y, McGehee MD (2011) Nanowire solar cells. Annu Rev Mater Res 41:269–295
Buonsanti R, Carlino E, Giannini C, Altamura D, De Marco L, Giannuzzi R, Manca M, Gigli G, Cozzoli PD (2011) Hyperbranched anatase TiO2 nanocrystals: nonaqueous synthesis, growth mechanism, and exploitation in dye-sensitized solar cells. J Am Chem Soc 133:19216–19239
Wang K, Qian XM, Zhang L, Li YG, Liu HB (2013) Inorganic-organic p-n heterojunction nanotree arrays for a high-sensitivity diode humidity sensor. ACS Appl Mater Interfaces 5:5825–5831
Chen N, Chen S, Ouyang C, Yu Y, Liu T, Li Y, Liu H, Li Y (2013) Electronic logic gates from three-segment nanowires featuring two p–n heterojunctions. NPG Asia Mater 5:e59
Grynko DA, Fedoryak AN, Dimitriev OP, Lin A, Laghumavarapu RB, Huffaker DL (2013) Growth of CdS nanowire crystals: vapor–liquid–solid versus vapor–solid mechanisms. Surf Coat Techn 230:234–238
Grynko DA, Fedoryak AN, Dimitriev OP, Lin A, Laghumavarapu RB, Huffaker DL, Kratzer M, Piryatinski YP (2015) Template-assisted synthesis of CdS nanocrystal arrays in chemically inhomogeneous pores by vapor-solid mechanism. RSC Adv 5:27496–27501
Jie J, Zhang W, Bello I, Lee CS, Lee ST (2010) One-dimensional II–VI nanostructures: Synthesis, properties and optoelectronic applications. Nano Today 5:313–336
Li H, Wang X, Xu J, Zhang Q, Bando Y, Golberg D, Ma Y, Zhai T (2013) One-dimensional CdS nanostructures: a promising candidate for optoelectronics. Adv Mater 25:3017–3037
Smertenko PS, Kostylev VP, Kislyuk VV, Syngaevsky AF, Zynio SA, Dimitriev OP (2008) Photovoltaic cells based on cadmium sulphide–phthalocyanine heterojunction. Sol Energ Mater Sol Cells 92:976–979
Grynko DO, Kislyuk VV, Smertenko PS, Dimitriev OP (2009) Bulk heterojunction photovoltaic cells based on vacuum evaporated cadmium sulfide–phthalocyanine hybrid structures. J Phys D Appl Phys 42:195104
Kislyuk VV, Fedorchenko MI, Smertenko PS, Dimitriev OP, Pud AA (2010) Interfacial properties and formation of a Schottky barrier at the CdS/PEDOT:PSS hybrid junction. J Phys D Appl Phys 43:185301
Xu W, Choi S, Allen MG (2010) Hairlike carbon-fiber-based solar cell. Proc IEEE Int Conf MEMS, pp 1187–1190
Unalan HE, Wei D, Suzuki K, Dalal S, Hiralal P, Matsumoto H, Imaizumi S, Minagawa M, Tanioka A, Flewitt AJ, Milne WI, Amaratunga GAJ (2008) Photoelectrochemical cell using dye sensitized zinc oxide nanowires grown on carbon fibers. Appl Phys Lett 93:133116
Pan S, Yang Z, Li H, Qiu L, Sun H, Peng H (2013) Efficient dye-sensitized photovoltaic wires based on an organic redox electrolyte. J Am Chem Soc 135:10622–10625
Zhang Z, Chen X, Chen P, Guan G, Qiu L, Lin H, Yang Z, Bai W, Luo Y, Peng H (2014) Integrated polymer solar cell and electrochemical supercapacitor in a flexible and stable fiber format. Adv Mater 26:466–470
Chen T, Wang S, Yang Z, Feng Q, Sun X, Li L, Wang ZS, Peng H (2011) Flexible, light-weight, ultrastrong, and semiconductive carbon nanotube fibers for a highly efficient solar cell. Angew Chem Int Ed 50:1815–1819
Zhang Z, Li X, Guang G, Pan S, Zhu Z, Ren D, Peng H (2014) A Lightweight polymer solar cell textile that functions when illuminated from either side. Angew Chem Int Ed 53:11571–11574
Smertenko PS, Grynko DA, Osipyonok NM, Dimitriev OP, Pud AA (2013) Carbon fiber as a flexible quasi-ohmic contact to cadmium sulfide micro- and nanocrystals. Phys Stat Solidi A210:1851–1855
Smertenko P, Fenenko L, Brehmer L, Schrader S (2005) Differential approach to the study of integral characteristics in polymer films. Adv Colloid Interface Sci 116:255–261
Ciach R, Dotsenko YP, Naumov VV, Shmyryeva AN, Smertenko PS (2003) Injection technique for study of solar cells test structures. Sol Energ Mater Sol Cells 76:613–624
Luka G, Kopalko K, Lusakowska E, Nittler L, Lisowski W, Sobczak JW, Jablonski A, Smertenko PS (2015) Charge injection in metal/organic/metal structures with ZnO:Al/organic interface modified by Zn1−x Mg x O:Al layer. Org Electron 25:135–142
Wilcoxon JP (2000) Catalytic photooxidation of pentachlorophenol using semiconductor nanoclusters. J Phys Chem B 104:7334–7343
Chae WS, Ko JH, Choi KH, Jung JS, Kim YR (2010) Photocatalytic efficiency analysis of CdS nanoparticles with modified electronic states. J Anal Sci Technol 1:25–29
Baron R, Mayer JW (1970) Double injection in semiconductors. In: Willardson RK, Beer RC (eds) Semiconductors and Semimetals, vol 6. Academic Press, New York, pp 201–313
Shiraishi M, Ata M (2001) Work function of carbon nanotubes. Carbon 39:1913–1917
Tipler PA, Lewellyn RA (2008) Modern physics, 5th edn. W.H. Freeman, New York
Fan Z, Javey A (2008) Photovoltaics: solar cells on curtains. Nature Mater 7:835–836
This publication is based on work supported by Award No. UKE2-7035-KV-11 of the U.S. Civilian Research & Development Foundation (CRDF).
DAG and ANF performed the synthesis of CdS nanowires on CF, NAO performed the organic layer deposition, OPD carried out the photoelectrical measurements, PSS carried out the treatment and interpretation of current-voltage characteristics, and OPD and AAP prepared the manuscript. All authors took part in the discussion of the results. All authors read and approved the final manuscript.
V. Lashkaryov Institute of Semiconductor Physics, NAS of Ukraine, pr. Nauki 45, Kyiv, 03028, Ukraine
Dmytro A. Grynko, Alexander N. Fedoryak, Petro S. Smertenko & Oleg P. Dimitriev
Institute of Bioorganic Chemistry and Petrochemistry, NAS of Ukraine, 50 Kharkivske shose, Kyiv, 02160, Ukraine
Nikolay A. Ogurtsov & Alexander A. Pud
Correspondence to Oleg P. Dimitriev.
Grynko, D.A., Fedoryak, A.N., Smertenko, P.S. et al. Hybrid solar cell on a carbon fiber. Nanoscale Res Lett 11, 265 (2016) doi:10.1186/s11671-016-1469-7
Accepted: 05 May 2016
CdS nanowire
Flexible nanobrush
Hybrid solar cells
Energy Informatics
Framework for dimensioning battery energy storage systems with applied multi-tasking strategies in microgrids
Volume 5 Supplement 4
Proceedings of the Energy Informatics.Academy Conference 2022 (EI.A 2022)
Troels S. Nielsen, Jens S. P. Thomsen, Athila Q. Santos & Jutte Kaad
Energy Informatics volume 5, Article number: 52 (2022)
The shift from the traditional centralized electric sector to a distributed and renewable system presents some challenges. Battery energy storage technologies have proven effective in relieving some aspects of this transition by facilitating load control and providing flexibility to non-dispatchable renewable production. Therefore, this paper investigates how to dimension battery energy storage systems (BESS) with applied multi-tasking strategies in microgrids. To this end, it proposes a framework to depict how a BESS can be made financially and technically feasible by deploying multi-tasking strategies that fit the system characteristics of a microgrid while providing arguments for the financial incentive. The framework development is based on the principles of the analytical approach and is conceptualized in a three-part funnel structure. The framework has been tested on the case study of the Aeroe microgrid and resulted in a proposed battery energy storage configuration. Based on the findings, the BESS implementation contributes to improving load behavior and increasing the utilization of internal production. A sensitivity analysis was performed to investigate the robustness of the configurations. Collectively, the framework has proven to provide feasible results within a wide range of parameters and could support the preliminary investigation phase when analyzing future battery energy storage system investments.
The electrical power system is experiencing a period of rapid evolution worldwide. More specifically, the Danish energy sector has seen a yearly increase in renewable capacity of around 5.7% in the period 2010–2019 (IRENA 2020) and reached a saturation level of 60.5% in 2018 (Danish Energy Agency 2019). The Danish national energy and climate plans (NECPS) now strive for a 100% fossil-free energy sector by 2050, which will ensure a further increase in the saturation of RES (European Commission 2020). The task at hand, paired with the overall complexity of the transition inferred by the NECPS, will pose both economic and technical challenges. Therefore, finding pathways for balancing the intermittent and stochastic nature of RES while ensuring security of supply is of great importance.
The current transition phase is slowly decommissioning conventional, controllable units and shifting towards a more decentralized energy sector reliant on distributed energy resources (DER). This phase imposes the need to integrate technologies capable of performing tasks that can restore some of the controllable aspects of the conventional power system (IEA (International Energy Agency) 2018; Munuera and Pavarini 2020). The mechanisms used to ensure the balance of the power grid and to handle the increased implementation of DERs include ideas such as demand management and distributed storage. Energy storage technologies (EST) are also identified as a balancing mechanism due to their ability to provide multiple grid-related services, e.g. acting as capacity reserves, frequency response, and congestion relief (Taylor et al. 2012). One way to look at ESTs is to investigate the added benefits of integrating them into a system that can utilize the technology to an optimal extent. Energy storage is an interesting proposal for relieving some of the pressure on future power grids during the decarbonization of the energy system, allowing integration between RES and the power grid's new 'smart' configuration. Energy storage deployment in 2018 reached nearly double the 2017 investment (Munuera and Pavarini 2020). Furthermore, the World Energy Council has projected that as much as 150 GWh of energy storage could be installed by as early as 2030, with a large share covered by battery energy storage systems (BESS) (Gardner et al. 2016; Renewable Energy Agency 2017). This has been driven by declining investment costs, technology improvements, and supportive governments. However, this wide-scale implementation of EST calls for investigating how to maximize the usage of these systems.
BES technologies in particular have proven to be viable options for enabling this transition because they can operate both as a controllable load and as a dynamic generator. This controllability gives the energy storage system the capability to offer a variety of grid benefits, depending on its configuration and relation to the grid. The consensus throughout recent years has been that energy storage technologies lack financial viability. However, the levelized cost of storage (LCOS) of many ESTs is decreasing year by year. ESTs are also gaining momentum due to the prospects seen in multi-tasking configurations. The multi-tasking concept relates to the use of ESTs that can handle different requests from several grid aggregators, thereby diversifying the financial prospects and furthering profitability (Tian et al. 2018; Truong et al. 2018).
Another mechanism that can provide reliability, stability, and security while grouping localized clusters of DERs is the microgrid (MG). The use of MGs is considered a viable option for enabling the extensive integration of DERs and can facilitate a cost-effective and reliable grid infrastructure (Mehigan et al. 2018; Allan et al. 2015; Alsaidan et al. 2018). Linking different energy integration technologies, such as energy storage, MGs and demand-side management, will create valued assets of the future distributed energy system, owing to their inherent ability to increase grid flexibility and resilience.
The underlying question that this paper aims to answer is: how to dimension a BESS with applied multi-tasking strategies in microgrids? Because the answer is not unique, the first contribution of the paper is to provide a framework to accurately depict how a BESS can be made financially and technically feasible by deploying multi-tasking strategies. Herein, it is investigated whether the framework can obtain a BESS configuration that fits the system characteristics of a microgrid (MG) while providing arguments for financial viability. The developed framework uses a real operational environment to test the viability of the solution. Thus, it provides solutions that reflect the critical parameters informing the recommendation of the type, size, and capabilities of the BESS best suited for the MG. A second contribution of the paper is to examine ways of optimizing the utilization, performance, and integration possibilities of BESS by applying a revenue stacking concept and comparing operational differences and their impact on system performance through sensitivity analysis.
With the Danish energy sector moving towards an entirely new energy production format and showing willingness to invest in the needed energy infrastructure, the future of the sector looks bright but somewhat challenging to manage. This paper analyzes some of the transition-related aspects and tries to provide a perspective on how to achieve this in a sufficient and financially viable way, and hereby investigates the overall viability of BESS in the context of MGs integrating DERs in the Danish energy scheme.
This section presents the methodological framework used to identify BESS performance indicators and characteristics. It entails a transparent approach that encapsulates the essence of determining the optimal stacking of revenues for BESS technologies, and of how these systems behave as DERs in an MG context. The methodology investigates the overall viability of BESS within the boundaries set by environmental constraints, technical and financial indicators, and the energy market in which the system operates. Furthermore, the qualitative benefits that a BESS delivers when implemented into an existing energy portfolio are investigated.
The overall structure of the framework is based on a funneling template, which forces information towards the bottom of the funnel, effectively creating a focal point, condensing data, and refining the proposed solution. The framework structure has been divided into three dependent stages: data acquisition, system analysis and optimization, as illustrated in Fig. 1. The overall goal of this framework structure is to develop a series of steps that ensures the right sequence of actions, enabling the user to quantify and conceptualize BESS-related properties and behavior. Each stage reflects a process and is designed to reduce uncertainties by gathering adequate information (Cooper 1990, 2008).
Illustration of the funnel framework
The data acquisition stage is used to establish the system foundation. It includes the definition of external and internal boundaries, empirical data, simulation data, relevant constraints, and simulation objective functions. The data acquisition describes the methods of data collection and selection relevant to the case, and it also entails the prerequisites of the project. Furthermore, the data acquisition describes the sub-optimal BESS characteristics that can be used as a transitional input for the next stages in the funnel framework.
Definition of external and internal boundaries
This step defines the behavior of the BESS and sets the stage for the upcoming optimization. The internal boundaries are defined by the production, consumption, performance indicators, and local grid connections. The external boundaries are defined by the climate profiles, transmission and distribution connections, geographical location and power system integration. The relevant data can be collected through publicly available data hubs, production prediction software or on-site quantitative measurements. This section also entails the identification of relevant stakeholders and the perspectives of the MG operators, which often amounts to the formulation of system goals or objectives.
Objective functions and constraints
The data acquisition stage also entails the identification of project-relevant constraints and the goals of the project, listed as objective functions. The objective function demonstrates the direction of the project and tells the developer where to focus attention. It reflects the desired end goal and the problem that needs solving. Consequently, this comes down to the individual stakeholders relevant for project decision making and how they want to manage resources. The constraints are used to define the operating parameters that depend on the environment of the system.
Analysis of load and production
The preliminary analysis of the system revolves around investigating the load and production characteristics of the existing system. This investigation encompasses energy data visualization tools such as the net load curve (NLC), the load duration curve (LDC), and normal distributions of energy data. By arranging energy data in ways that reflect system behavior, these visualizations can help quantify feasible BESS characteristics. Designing and sizing a system based on such tools is commonly used as a rule of thumb, because it gives a quick overview of the existing system behavior. However, this approach is often subject to several limitations, such as lack of load information, lack of prediction, and the assumption of ideal operation.
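As a minimal sketch of these tools, the snippet below derives a net load curve and a load duration curve from hourly series; the synthetic load and production profiles are placeholders for the metered and simulated data described above.

```python
import numpy as np

hours = np.arange(8760)
# Placeholder hourly profiles (MW); real inputs would come from metering
# and from production simulation tools.
load = 3.4 + 1.2 * np.sin(2 * np.pi * hours / 24)
production = 4.0 * np.clip(np.random.default_rng(0).normal(1.0, 0.6, 8760), 0, None)

net_load = load - production    # NLC: positive = deficit, negative = surplus
ldc = np.sort(load)[::-1]       # LDC: load sorted in descending order

print(f"peak load     : {ldc[0]:.1f} MW")
print(f"base load     : {ldc[-1]:.1f} MW")
print(f"mean net load : {net_load.mean():.2f} MW")
```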
The analysis and evaluation stage enlists investigation points such as system perspectives, identification of grid applications, the selection of suitable BES technologies and the overall prioritization of stacked revenue streams, thereby further narrowing the funnel of the framework towards an optimal system configuration.
Analyse system perspectives
The foundation of this stage is the empirical data gathered in the data acquisition stage, yielding an unordered assortment of information. This assortment of relevant data can then be analyzed, from which the system perspectives, such as business economics and environmental assessments, can be identified. In life cycle assessments, adequately defining the scope of the system is highly important; comparatively, it consists of investigating the ecosphere (not intentionally 'man-made') and the technosphere (everything that is intentionally 'man-made') (Hofstetter 2000; Hauschild et al. 2017). Analyzing a system on the basis of eco- and technospheres helps to create awareness and to categorize the BESS characteristics. By categorizing the data, the work of modeling the system gets noticeably faster, and the data becomes more manageable.
Identify grid applications
The clustering of grid applications is based on labeling applications into predefined categories. Dividing the applications into categories, and considering the BESS constraints, narrows the number of grid applications down to only those that fit the BESS. An example of this is looking at the ecosphere of the BESS and identifying where in the power system the BESS is integrated, thus excluding all other applications. The identification of ancillary services consequently follows a natural selection process, in that only the services that function within the domain can be considered. This is further amplified by the technical constraints and other performance indicators.
Selection of BES technology
Knowledge of technical and financial indicators proves valuable here. The definition of a BES technology consists of a range of performance indicators, which are appropriate when determining the optimal characteristics of a BESS. A strategy that can be applied when determining BES technologies is comparative analysis. The goal is to compare technology characteristics and cross-reference these against each other. A common unit for the individual characteristics is often used to make the technologies comparable. This comparison can be made for one or several parameters of the BES technology and often leads to tangible evidence of which technology performs best. The comparison can effectively be based on the LCOS.
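A sketch of such an LCOS comparison is given below: lifetime discounted costs are divided by lifetime discounted discharged energy, giving a cost per MWh that makes technologies directly comparable. The cost and throughput numbers for the two candidates are illustrative assumptions, not catalogue values.

```python
def lcos(capex, opex_per_year, discharged_per_year, lifetime, rate):
    """Levelized cost of storage: discounted lifetime costs divided by
    discounted lifetime discharged energy (currency per MWh)."""
    disc = [(1 + rate) ** -t for t in range(1, lifetime + 1)]
    costs = capex + opex_per_year * sum(disc)
    energy = discharged_per_year * sum(disc)
    return costs / energy

# Illustrative figures for two candidate technologies (DKK, MWh/yr):
print(f"technology A: {lcos(80e6, 1.0e6, 11_000, 20, 0.04):,.0f} DKK/MWh")
print(f"technology B: {lcos(55e6, 1.4e6,  7_500, 12, 0.04):,.0f} DKK/MWh")
```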
Prioritization of grid applications and revenue stacking
This step increases the value of the BESS by choosing applications according to multi-tasking strategies. This can be done by focusing on increasing the utilization of the storage system. If a battery is only used for one application, say load following, there is a large untouched potential. When investing in a capital-heavy BESS, it is ideal to utilize most of its operational potential, which can be achieved by finding the best multi-tasking configuration for the investigated system. Multi-tasking makes it possible to obtain the best financial feasibility of the setup, since it allows revenues to be stacked. Revenue stacking is one of the strategies that can make storage technologies a viable business case, and it is based on adding additional utilization of the battery when it is not doing load following or peak shaving.
In order to determine these extra revenues, one needs to find the BESS type and the tasks best suited to the chosen BESS. The tasks also depend on the size, location and type of MG, so by analyzing the constraints that the MG is subject to, the different tasks can be ascertained. The potential tasks can then be reduced to those that add the most value while being stackable or operable in symbiosis, meaning that the additional tasks must be able to operate alongside each other while still providing resilience to the system without significant interference. A specific application is often given the highest priority for the operation; this can be decided on the basis of the objective function, the stakeholders or the constraints present in the ecosphere.
The focus of the optimization is to find the best-suited BESS technology performing optimal multi-tasking. It consists of three main areas of optimization: operational strategies, storage capacity and power capacity of the BESS. The control mechanism is an objective function, which is maximized or minimized subject to system constraints. The last step of the framework is to identify and evaluate the internal benefits of the MG, in order to include the non-tangible benefits and system perspectives in the decision process.
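A minimal sketch of this optimization step, as a grid search over power and storage capacity that maximizes NPV, is shown below. The cost and revenue coefficients are placeholder assumptions; only the 5 MW cap mirrors the TSO limit on frequency-reserve bids discussed in the case study.

```python
import itertools

def npv(cashflows, rate):
    """NPV of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def annual_revenue(p_mw, e_mwh):
    """Placeholder revenue model; a real study would simulate dispatch.
    Revenue from reserve availability saturates at the 5 MW bid cap."""
    return 0.9e6 * min(p_mw, 5.0) + 0.04e6 * e_mwh

def project_npv(p_mw, e_mwh, lifetime=20, rate=0.04):
    capex = 10e6 * p_mw + 1.2e6 * e_mwh   # assumed cost structure (DKK)
    return npv([-capex] + [annual_revenue(p_mw, e_mwh)] * lifetime, rate)

best = max(itertools.product(range(1, 9), range(10, 61, 10)),
           key=lambda pe: project_npv(*pe))
print(f"best configuration: {best[0]} MW / {best[1]} MWh")
```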
Case study of Aeroe microgrid
The proposed framework was tested by investigating the island of Aeroe, located in the southern part of Denmark, which has an area of 88 km² and a population of around 6000 inhabitants (Kommnue 2020). The island is characterized as a grid-connected community MG, powered by its own electricity production and connections to the main grid (Santos et al. 2020; Danish Energy Agency 2015). The MG strives to become 100% fossil-free and solely reliant on RES by the year 2030 (Ærø Kommune 2018). The goal covers all areas of the MG, including industry, heating, transport and electricity. The motivation is seen in the island's visionary plan and the facilitation of a more flexible and energy-efficient energy sector. This sets the clear objective of finding a BESS size and type that can assist Aeroe in reaching its vision.
Framework inputs
Three meteorological datasets were collected and used to simulate the wind and solar production patterns of Aeroe. For the solar data, Heliostat SARAH 54.85N 10.40Ø 2019 dataset is used and contains both the direct and indirect solar radiation of Aeroe (EMD International 2022). The CFSR 54.80N 10.31Ø 2019 dataset provides the wind and temperature data used to simulate the production of the MG.
The costs of operating the MG are based primarily on the technology catalogues published by the Danish Energy Agency (Energistyrelsen 2022). In a Danish setting, there are several tariffs related to energy trade, on both the production and the consumption side. On the consumption side, there are three tariffs: transmission, system and balance. On the production side, there are two minor tariffs: the injection tariff and the production-based balance tariff. All these tariffs are included in the simulation of the MG and are based on 2020 tariff price levels (Energinet 2020).
Current energy portfolio of Aeroe
The energy portfolio of Aeroe is composed of wind, solar and a small amount of heat-linked electricity generation. The wind power production comes from six 2 MW wind turbines (Vestas V80). The power curve of these turbines is used along with the CFSR wind dataset in the simulation. The electricity production from the wind turbines is simulated using the software tool windPRO, which results in a yearly power production of approximately 43 GWh (consistent with the current energy accounts of Aeroe (Energistyrelsen 2018)). The installed photovoltaic (PV) capacity is based on generic residentially installed panels with a capacity of around 1.7 MW. The MG also has a small amount of heat-bound electricity production from the district heating sector, which is also accounted for. These figures are based on the 2017 energy accounts of Aeroe.
The island is connected with two 60 kV lines to the main electricity grid with a capacity of 21.3 and 23.4 MW. These will serve as boundaries for determining the BESS capacities. The load of the system in 2019 is around 30 GWh per year. This load includes the addition of the recently installed E-ferry (Huang et al. 2020).
Energy portfolio of Aeroe
With its high saturation of RES, Aeroe has a very intermittent production. Even though the MG has a larger annual production than consumption, Aeroe is not able to cover its demand with internal production at all times, as shown in Fig. 2a. This makes the island very dependent on the connections to the main electricity grid. With a maximum net load of around 12 MW and a connection capacity larger than 20 MW, no bottlenecks are limiting the system.
Figure 2b displays the LDC of Aeroe, from which three load attributes can be drawn. The peak load of the system is represented by the upper grid pattern of Fig. 2b and reflects the highest load experienced by the system; the interval for this was found to be from 5 to 7.3 MW. The intermediate load of the system is illustrated by the white section between peak and base load and constitutes the most significant portion of the system load. The base load of the Aeroe MG is identified to be around 1.2 MW.
The data gathered is in the form of hourly electricity prices from the Nordpool spot price market of 2019 (Energinet 2020). All electricity spot prices have been extrapolated using Energinet price projections for 2020–2040. Energinet estimates that during the 20 years, the average electricity price will increase from approximately 350 DKK/MWh to 450 DKK/MWh, which can be seen in Fig. 3 (Danish Energy Agency 2020).
Average electricity price projections
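A simple way to reproduce such a price trajectory is linear interpolation between the projected averages, sketched below; the flat hourly profile used for scaling is a placeholder for the actual Nordpool 2019 series, and the endpoint averages are the ones quoted above.

```python
import numpy as np

years = np.arange(2020, 2041)
# Linear trajectory between the projected period averages (DKK/MWh).
avg_price = np.interp(years, [2020, 2040], [350.0, 450.0])

# Scale a placeholder hourly 2019 spot profile so each year matches its
# projected average; the real input would be the Nordpool 2019 series.
spot_2019 = np.full(8760, 320.0)
scaled = {y: spot_2019 * (p / spot_2019.mean()) for y, p in zip(years, avg_price)}
print(f"2030 average: {scaled[2030].mean():.0f} DKK/MWh")   # ~400
```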
Sub-optimal BESS characteristics
The task is to choose initial capacities for the BES technology that satisfy the constraints and the identified objectives. Following the framework, a Gaussian distribution is fitted to both the net load and the accumulated deficits. The distribution of the net load yields the power of the system, while the distribution of the accumulated deficits gives a representation of the system storage capacity. Together they provide a representation of the system and are used to determine the initial size of the BESS.
The two distributions, seen in Fig. 4, show a relatively high spread of potential usage for both investigated characteristics, especially the accumulated deficits, i.e. the valleys of the system. Even though the distribution of the net load seems to have a minor spread, one standard deviation is almost double the mean value. Therefore, the mean is used for both the power and the storage capacity. These values initialize the revenue stacking calculation and serve as the initial optimization point. The initial capacities are found to be 3.6 MW and 42 MWh.
Probability density of the Aeroe microgrid
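A sketch of this initialization step is given below: the mean of the hourly net load provides the initial power capacity, and the mean of the accumulated deficits (the energy content of each contiguous run of deficit hours) provides the initial storage capacity. The synthetic net load series is a placeholder; with the real Aeroe data this procedure yields the 3.6 MW and 42 MWh quoted above.

```python
import numpy as np

# Placeholder hourly net load (MW); mean ~3.6 MW with a spread roughly
# twice the mean, mimicking the distributions in Fig. 4.
net_load = np.random.default_rng(1).normal(3.6, 6.0, 8760)

# Accumulated deficits: energy content of each contiguous run of hours
# where load exceeds production (net load > 0); 1-hour steps, so MW -> MWh.
deficits, acc = [], 0.0
for nl in net_load:
    if nl > 0:
        acc += nl
    elif acc > 0:
        deficits.append(acc)
        acc = 0.0

p_init = net_load.mean()      # initial power capacity (MW)
e_init = np.mean(deficits)    # initial storage capacity (MWh)
print(f"initial sizing: {p_init:.1f} MW, {e_init:.0f} MWh")
```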
Analysis and evaluation of BESS
In order to identify the grid services that the system can utilize for revenue stacking, one needs to evaluate the boundaries of the investigated system. Since the MG of this case study is situated in a distribution setting, it is limited to services that can be performed within this scope. Another boundary is that it is situated within the Danish electricity market, meaning that it can only perform services that the TSO aggregates or utilize the incentives present in this ecosphere. Given the system perspectives of Aeroe, the first and foremost application area of the BESS is load following, reflected by the following priority: Microgrid > BESS > Grid services.
Within the considered ecosphere, two groups of grid services can be identified: frequency and voltage control (Energinet 2019). The initial size of the BESS allows the system to perform within different groups of grid services. These grid services can be used as additional revenue streams and are described in Table 1.
Table 1 Chosen grid applications
Since the main task of the BES technology is to complement the MG, load following will be the highest prioritized part of the revenue stacking. Load following provides revenue in the form of avoiding electricity grid taxation by utilizing internal production, while also adding value in the form of flexibility.
By investigating the potential grid services, it is found that frequency regulation within the primary reserve is the most beneficial revenue that can be utilized by the BESS. The investigation of historical market data showed that the highest price incentive is obtained through frequency regulation. One of the benefits of frequency regulation is that it is mainly based on availability payments, which can be a valuable asset for the storage since this can increase the value of the system without adding excessive degradation of the battery (Alsaidan et al. 2018).
Most of the grid services within the Danish context are closely linked, and it is therefore difficult to stack them as revenue streams for a single BESS. For this reason, the last stacking task for this case study is energy arbitrage. This task is chosen because it can operate alongside the other two chosen tasks, which ensures the symbiosis of the stacked revenue streams.
LCOS comparison
The LCOS is used to choose one of the four common types of BES technologies. Averages are used for the prices and technical specifications of the batteries, to ensure comparability of the LCOS results. The results of the LCOS analysis can be seen in Fig. 5. This shows that lithium-ion batteries have the best financial indicator based on the costs of storing energy.
LCOS results of the four investigated BESS
Revenue stacking
The stacking of the tasks is implemented using the following steps:
Produce load following strategy: investigate how the BESS would perform load following without the other tasks. This is done by simulating the load following task, which produces the optimal load following strategy for the MG.
Produce arbitrage strategy: implement an operational strategy based on the costs of buying from and selling to the grid, while also including the characteristics of the BESS, e.g. efficiency, O&M, etc.
Implement frequency availability payments: determined by the amount of energy stored in the battery. The bids for frequency response are released in four-hour intervals six times each day. To make sure that the BESS can provide the service during the full interval if needed, it is only allowed to enter the frequency stabilization market if the stored capacity is above 20 MWh in the given and the following hour. All accepted bids for upwards frequency regulation receive an availability payment that corresponds to the price of the highest accepted bid (a sketch of this eligibility rule is shown below).
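A minimal sketch of this eligibility rule, assuming an hourly state-of-charge forecast, could look as follows; the 24-hour profile and block layout are illustrative.

```python
def frequency_bid_eligible(soc_mwh, hour, min_capacity=20.0):
    """True if the BESS may bid into the frequency reserve for the
    4-hour block starting at `hour`: the stored energy must be above
    `min_capacity` in the given and the following hour."""
    return soc_mwh[hour] >= min_capacity and soc_mwh[hour + 1] >= min_capacity

# Bids are released in six 4-hour blocks per day.
soc = [25, 24, 22, 19, 18, 21, 23, 26] * 3   # placeholder 24-hour SOC forecast (MWh)
print([frequency_bid_eligible(soc, h) for h in range(0, 24, 4)])
```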
System model simulation
The findings of the methodology are used to compose a model of the system. The initial model of the system is simulated using the software tool energyPRO, a simulation tool developed by the company EMD International A/S (EMD International 2022). This program is modelling software for combined techno-economic optimization and the study of a variety of energy scenarios using different technologies for a wide range of applications.
Figure 6 shows the visual representation of the model from energyPRO. The model includes the present production sources at Aeroe, together with its load and a connection to the main electricity grid, found through the identified framework inputs. The electricity grid can be seen as the day-ahead market.
Aeroe microgrid model in EnergyPro
Optimization of BESS
Since the load following strategy is already based on optimally matching the net load of the system, this parameter remains fixed throughout the optimization. The part that can be optimized is the arbitrage strategy. The optimal operation strategy is determined by assigning the operation a priority function. The priority function generally represents the production cost of each task, and the task with the lowest production cost is assigned the lowest priority number. The arbitrage task is optimized by increasing or decreasing the operational thresholds.
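A sketch of such a threshold rule is shown below, assuming illustrative price thresholds and a 90% round-trip efficiency. Note that a cycle is only profitable if the sell threshold times the round-trip efficiency exceeds the buy threshold (plus any O&M), which is why a lower efficiency narrows the band the optimizer can work with.

```python
def arbitrage_action(price, buy_threshold, sell_threshold, rt_efficiency=0.90):
    """Threshold-based arbitrage: charge when electricity is cheap,
    discharge when it is expensive. Energy bought at `buy_threshold`
    returns only `rt_efficiency` of itself, so a cycle pays off only
    if sell_threshold * rt_efficiency > buy_threshold."""
    if sell_threshold * rt_efficiency <= buy_threshold:
        return "idle"   # no profitable band at this efficiency
    if price <= buy_threshold:
        return "charge"
    if price >= sell_threshold:
        return "discharge"
    return "idle"

for p in (180.0, 350.0, 520.0):   # illustrative spot prices (DKK/MWh)
    print(p, arbitrage_action(p, buy_threshold=250.0, sell_threshold=450.0))
```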
The optimization of the storage and power capacities, and consequently, the final BESS solution, is based on implementing the BESS into the current Aeroe MG.
The NPV results for the different power capacities can be seen in Fig. 7a. The initial storage capacity of 42 MWh is used for this investigation. The investigation of the power capacity shows that a higher power capacity results in a better NPV, although with a decrease after 5 MW. The reason for the decrease in NPV is a system-imposed limitation: the system has a maximum bid capacity of 5 MW in the frequency regulation market. This constraint reflects that the Danish TSO aggregators may disregard bids of more than 5 MW for frequency response if such a bid leads to excess fulfilment of the required reserves (Energinet 2019).
NPV of the BESS
The reason that this boundary has such a substantial effect on the overall system is that frequency regulation contributes a large share of the system revenue. This means that the additional power capacity added to the system cannot increase the revenues enough to justify the added investment costs. The investigation of the storage capacity can be seen in Fig. 7b, which shows the NPV relative to the size of the storage. The optimal storage size for the proposed BESS configuration is found to be 40 MWh, which gives the highest income relative to the investment costs.
Final BESS
Table 2 shows the financial and technical aspects of the system, including the total investment, operational costs and revenues, together with the final NPV of the entire system. This system is based on installing a lithium-ion BESS with a lifetime of 20 years and a system discount rate of 4%.
The results shown represent the operation of the entire MG system, which means that the OPEX and revenues are based on both the production of the current system and the added BESS. The total investment cost is based only on the BESS implementation. The technical characteristics are also described in Table 2 and show the installed features of the BESS, found through the framework, paired with the operational specifications of the system.
Table 2 BESS characteristics
An approximation of the proposed BESS size is based on the capacities and the rated energy and power density of the technology. This calculation can be seen in Eq. (1).
$$\begin{aligned} \mathrm{BESS}_{\mathrm{size}} = \frac{40\ \mathrm{MWh}}{0.13\ \mathrm{MWh/m^3}} + \frac{5\ \mathrm{MW}}{0.39\ \mathrm{MW/m^3}} \simeq 320\ \mathrm{m^3}. \end{aligned}$$
The placement of the BESS would have to be close to the 60 kV transmission line, to minimize capital expenditures and to ensure a connection that can handle the additional capacity of the BESS. Only a few areas are protected under Natura 2000 (Sundseth 2008), leaving areas still available on Aeroe for additional DER integration.
A visual representation of the revenue contribution of the stacked tasks is shown in Fig. 8b. The chart shows the distribution of revenue across the different tasks, with frequency response as the highest contributing task of the system. It is also seen that the load following task provides very little direct financial value to the system. The reason for this is the very small difference between the cost of importing relative to the income from exporting.
This section highlights the differences between the original system, the base case, and the system with the integrated BES technology. The aspects of the case study that are relevant to investigate are the production, load, and overall resilience of the system, with a minor focus on elements such as full-load hours and BESS cycles.
To see how the system behavior changes, representations of the electricity import for internal load and of the average yearly load and production can be seen in Fig. 9. This figure includes the current situation of Aeroe and the Aeroe MG after the integration of the BESS. Figure 9a shows the first 70 h of electricity imported to cover the demand of Aeroe, where the current situation of Aeroe is noted as 'BASE' and the one including the battery is noted as 'BESS'. The graph shows how the system can decrease its dependency on the main grid for covering its load, achieved by performing load following. The analysis of the import for internal load shows that the load following task is able to decrease the dependency on the grid by 38%. The decrease is based on the reduction in electricity import used solely for covering the internal demand of Aeroe.
Load following task
The result of the load following task is visualized in Fig. 9b. This shows the average yearly load and production before and after the integration of the BESS, where the effect of the load following can be seen in 'BESS Load'. The curve has been shifted to hours where there is more production available. The new load profile now has a curve that fits better with the MG production curve. This shift increases the system's ability to utilize its production by 26%.
The load following operation can be seen in Fig. 10. The top graph contains the electricity production and consumption of the system, the middle graph shows the BESS charge and discharge, and the bottom graph visualizes the state of charge. The graphs show that the battery charges when the system has an excess of energy and discharges when there is a deficit. This can be seen as a form of validation, in that the goal of the load following task is to improve the match between production and load. This indicates added flexibility and resilience for both the MG and the main grid.
EnergyPro load following production
Figure 11 shows how both the load following task and the arbitrage task operate in the simulation. The top graph contains the spot price of the day-ahead market, the middle graph shows the system production and consumption including the BESS, and the third shows the storage state of charge. The system performs arbitrage by utilizing the fluctuating electricity prices, which can be seen by comparing the top and the middle graph. The state of charge of the battery shows how the load following task and the arbitrage task can operate in symbiosis.
EnergyPro production graphics of BESS including load following
Analysis of BESS technical and financial indicators
The analysis of the technical characteristics of the BESS is used to evaluate the validity of the lithium-ion battery technology. With a total of 8202 cycles for the investigated battery, the system is deemed valid, since lithium-ion batteries are rated for an operational performance of between 6000 and 20,000 cycles.
In order to analyse how the BESS affects the financial aspects of the system, an evaluation of the NPV is made, based on comparing the BESS simulation to the current situation of the MG. Simulating Aeroe without the implementation of the BESS results in an NPV of 47.7 mio. DKK, which is approximately 9 mio. DKK better than the simulated MG including the BESS. This reveals that the additional revenue provided by the BESS cannot justify its investment, even with the utilization of the revenue stacking strategy. This finding correlates with the general tendency of battery technologies not being able to rationalize their high CAPEX (Forsight 2017). However, it could be argued that with the increase in resilience and flexibility, the system has gained value beyond the financial aspect. The system has been improved with what can be referred to as non-tangible benefits, which lie within security of supply and power quality. The BESS makes it possible for Aeroe to utilize more of its production, thereby making the island more energy-efficient and self-reliant, which in turn helps Aeroe come closer to its future perspectives.
Also, with the major developments within the battery sector, a decrease in costs is expected in the near future and could very well result in a more feasible solution. This will be investigated in the following section.
The sensitivity analysis of the system parameters investigates the overall robustness of the BESS and how well it reacts to certain changes within its environment. These parameters do not belong to a particular group of characteristics but rather reflect a broad spectrum of system boundaries and constraints which, if changed, could induce challenging behavior of the system.
Changes in costs related to BESS
Lithium-ion based BESS suffer from high investment costs. Therefore, the investigation strives to find the magnitude of the overall price reduction needed for the BESS to become financially justifiable. The analysis in section 5 found that the overall cost of the BESS was approximately 80 mio. DKK, resulting in an NPV of around 38.5 mio. DKK. This is approximately 9 mio. DKK less than the base case of the Aeroe MG system. Table 3 shows the relevant economics and highlights the financial differences between the base case and the BESS scenario.
Table 3 Cost comparison between Base and BESS cases [mio DKK]
In order to make the BESS scenario viable and competitive with the base case, a reduction in the overall capital expenditures needs to occur. The main reason for focusing solely on capital expenditures is that they are a fixed value tied indirectly to the maturity of the technology, whereas the operational expenditure is not subject to the same change, being a product of running the system and thus resulting in a more stable price level. By solving the equation that can be formed from the numerical values in Table 3, it is found that the capital expenditures need a reduction of around 12% to achieve an NPV equal to that of the base case (the NPV gap of roughly 9 mio. DKK corresponds to about 11.5% of the 80 mio. DKK investment).
Change of energy portfolio
Aeroe strives to become 100% fossil-free (Ærø Kommune 2018). For that to happen, a change in the overall composition of the energy portfolio of the MG has to occur. The governing municipality has stated that this transition is going to be achieved by a combination of the implementation of additional RES production units, the refurbishment of existing units to run on RES, and an increase in internal energy efficiency. Having a connection to the main grid greatly increases the possibilities of which RES the MG can rely on. However, going through each of the identified transitions stated by the municipality, it is found that certain areas hold greater potential than others.
The refurbishment strategy of changing existing units to run on RES is a necessity for the system to achieve its vision, but not of great importance for this sensitivity analysis, since the Aeroe MG does not have any fossil-based electricity production. The third proposed transitional action point is to increase energy efficiency internally, which correlates well with the framework proposition regarding the EU's future energy sector guidelines (European Commission 2019). One of the ways the EU is planning to increase energy efficiency is through the use of distributed energy storage; consequently, this analysis is already assessing this transition, leaving only the last proposal to be assessed. The last proposal of the municipality was the implementation of additional RES production units. To see how an increase in these would affect the system, it is investigated using the developed model. The units responsible for the energy production at Aeroe consist mainly of wind turbines with a comparably small portion of PV.
Since the existing energy portfolio is heavily invested in wind turbines, there is an argument for moving towards more PV implementation. In relation to the MG, adding more wind power would only increase the difficulty of achieving a stable production curve. With its complementary production curve, an increase in installed PV capacity makes for an interesting investigation.
It can be challenging to identify which PV capacities would be installed in the MG. Therefore, three options at different intervals are investigated. The capacities of 1, 5, and 10 MW were chosen due to their relative sizes compared to the overall production of the MG. The results of this incremental PV capacity increase can be seen in Fig. 12.
NPV increase with additional PV
The three scenarios depict the results, given as increased NPV, for both the base case and the BESS case, excluding the additional investment costs of the PV. The BESS case performed noticeably better than the base case in each of the increments. The highest difference is found at the increment of 1 MW installed PV capacity, where the BESS case gains 26% more value than the base case with the same setup. The other two increments show slightly smaller differences of 14% and 12%, respectively. This demonstrates the benefits of the increased amount of solar power, but also the supplementary benefit of having a storage system that can amplify the value of the additional production.
Figure 13 depicts the total NPV of the investigated subsidies added to the BESS case. The dashed line indicates the NPV of the base case and thereby also the breakeven point at which the BESS reaches the same financial profitability. Following the tendency seen in Fig. 13, the BESS reaches breakeven at a subsidy of approximately 56 DKK/MWh.
Subsidy effect on the total NPV of the BESS case
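A back-of-envelope check of this breakeven figure can be made by dividing the NPV gap by the discounted lifetime discharged energy, as sketched below; the annual throughput of roughly 11,800 MWh (about 0.8 full cycles per day of the 40 MWh store) is an assumption made for illustration.

```python
npv_gap = 9.0e6                 # DKK, base case NPV minus BESS case NPV
rate, lifetime = 0.04, 20
annuity = sum((1 + rate) ** -t for t in range(1, lifetime + 1))   # ~13.6

annual_discharge = 11_800.0     # MWh/yr, assumed throughput (illustrative)
breakeven_subsidy = npv_gap / (annual_discharge * annuity)
print(f"breakeven subsidy: {breakeven_subsidy:.0f} DKK/MWh")      # ~56
```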
BESS technical performance indicators
In the case study, the average efficiency of the lithium-ion batteries was used; since the efficiency has a significant impact on the LCOS, it was decided to investigate how much a lower or higher efficiency would affect the system performance. To test this, the model was simulated using the highest and lowest identified efficiencies of the lithium-ion battery. The NPV of the two simulations can be seen in Table 4, where they are compared to the final result from the case study, which uses a 90% round-trip efficiency.
Table 4 Technical performance indicator
The simulations show a better NPV with the higher efficiency and a worse NPV with the lower efficiency, as expected. The interesting result of this analysis is the magnitude of the two differences. A round-trip efficiency of 85% makes the NPV of the system significantly worse. This correlates well with the findings of the LCOS analysis, where efficient use of energy can outweigh higher investment costs. The significant decrease is partly due to the operational strategies of the BESS: with the lower efficiency, the price band within which the system can operate profitably narrows, which has a significant impact on the total operation of the system. This efficiency dependency further confirms that the lithium-ion battery is the right type of BESS for the identified system and for the perspective of utilizing grid services as revenue streams.
The Aeroe MG was chosen because it is considered a smaller energy distribution system with a saturated energy portfolio, which made it an ideal case for investigating how DERs and RESs interact. The system of Aeroe satisfies most of the MG definition criteria with the introduction of a BESS. The comparative analysis is used to identify the contribution of the BESS, showing how the system can increase the flexibility of the MG, but also how it becomes more "dependent" on the grid connection. This is not seen as a setback, given the stacked revenues that rely on additional exchange with the main grid; however, it makes the output of the simulation less transparent.
The electricity price projection reflects an average price increase based on extrapolated historical data. This input could be improved to provide better price predictions by a more sophisticated analysis. Furthermore, the load of the Aeroe MG has been assumed to remain constant, resulting in no annual load increase throughout the lifetime of the simulation. This will likely not be the case, since the municipality of Aeroe is experiencing a small yearly decrease in population (Kommnue 2020), together with the tendency towards higher energy efficiency.
The LCOS analysis decides which BES technology is carried further through the funnel framework, and this decision occurs rather early. Such an early commitment to a specific technology can have a negative effect on the proposed solution, since it is not specifically tailored to the case. This is especially seen in relation to the full load hours used in the LCOS equation, which in the framework are based on an analysis of the estimated average system operation. A more complete approach would have been to test every technology through the framework and then compare the findings based on a common denominator such as NPV or a return-on-investment analysis.
The simulation of the BESS is made with a depth of discharge of 100%. This choice can have a degrading impact on the battery, which can decrease its life expectancy. However, with the obtained 8202 cycles, it could be argued that the system should be able to perform as the simulation states.
Related costs
The inherent tradeoff between investment cost and the additional revenues does not justify the implementation of the BESS. The Danish TSO estimates a 39% decrease in the overall cost of lithium-ion batteries towards 2030 (Danish Energy Agency and Energinet 2018). This expected decrease in CAPEX indicates that the needed reduction of 12% could be obtained in the near future. The potential is already identified in the use of electric vehicles as household batteries, where the battery investment can be justified.
However, with the added flexibility provided by the BESS, together with the socio-economic benefits, the additional non-tangible benefits can provide a value large enough to justify the investment in the proposed BESS. Furthermore, the perspectives of Aeroe also speak for the implementation of the BESS, even with a lower potential revenue than business as usual.
Change in the energy portfolio
The addition of PV was identified to be the most relevant and beneficial supplement to the MG. The 'International Renewable Energy Agency' (International Renewable Energy Agency-IRENA 2019) estimates that the average cost of PVs has decreased by 77% in the period between 2010 and 2018. By 2050, they expect PVs to be among the cheapest RES available. A decrease in costs of this magnitude would provide even larger incentives to expand the energy portfolio of Aeroe with solar.
The simulation only yields the additional production from solar, meaning that the BESS is not optimized to fit the new net load of the system. The system could be optimized to match the new net load, but the idea is to see how the proposed system would handle the added production without changing the configuration. Since the system already shows an increased profit from the changed energy portfolio, the overall optimization is disregarded, as it does not convey any new information.
This work presents a framework that utilizes a top-down funnel structure to accurately depict how a BESS can be made financially and technically feasible by deploying multi-tasking strategies. The framework functionality is investigated by applying the framework steps to the Aeroe MG case study. Ways of optimizing the utilization, performance, and integration possibilities of the BESS are found. The framework identified the optimal operation of the system by applying multi-tasking strategies and priority functions; this was found to be a system performing load following, energy arbitrage and frequency regulation.
The proposed solution for Aeroe entails a lithium-ion based BESS with a power capacity of 5 MW and a storage capacity of 40 MWh, obtained by maximizing the financial indicator, NPV, while adhering to the identified constraints. The comparative analysis found that implementing the BESS results in an NPV approximately 9 million DKK lower than the current situation, effectively implying that the BESS is not able to justify its investment costs, even with revenue stacking. However, the comparative analysis identified intangible values in the form of increased flexibility and system utilization. Collectively, the proposed solution results in an optimal BESS that brings the MG closer to its envisioned goals while being close to financially viable. It was also found that either a decrease in investment costs of 12% or the addition of a 56 DKK/MWh subsidy could make the BESS financially viable.
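To make the NPV comparison concrete, the following is a minimal sketch of how such a breakeven check can be computed. All figures (discount rate, lifetime, yearly cash flow) are illustrative placeholders, not the inputs used in this study; Python is assumed.

```python
# Minimal net-present-value (NPV) sketch for a BESS investment.
# All numbers below are hypothetical placeholders, not the paper's inputs.

def npv(capex, annual_cashflows, discount_rate):
    """Discount a stream of yearly cash flows and subtract the upfront CAPEX."""
    return -capex + sum(
        cf / (1.0 + discount_rate) ** year
        for year, cf in enumerate(annual_cashflows, start=1)
    )

capex = 100e6            # hypothetical investment, DKK
cashflows = [9e6] * 15   # hypothetical yearly net revenue over a 15-year lifetime
rate = 0.04              # hypothetical discount rate

base = npv(capex, cashflows, rate)
reduced = npv(capex * (1 - 0.12), cashflows, rate)  # the 12% CAPEX decrease from the text
print(f"NPV at full CAPEX: {base / 1e6:.1f} million DKK")
print(f"NPV at -12% CAPEX: {reduced / 1e6:.1f} million DKK")
```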
The framework has proven to provide feasible results over a wide range of MG parameters, all with respect to the identified objectives. By using the framework, the user can therefore transparently analyze different system parameters to determine the optimal BESS solution for a specific situation. The framework can support the preliminary investigation phase when analyzing future battery energy storage system investments.
The electrical characteristics of the power system have not been accounted for, as this was deemed outside the scope. As part of future work, these could be included to give a better understanding of how a BESS affects the electrical characteristics, and whether they impose additional limitations. Future work could also deploy the framework in other regions and geographical locations to provide an in-depth view of how the revenue-stacking strategy performs in other ecospheres.
NECPS:
National Energy and Climate Plans
CHP:
Combined heat and power
DER:
Distributed energy resources
EST:
Energy storage technologies
BESS:
Battery energy storage system
BES:
Battery energy storage
MG:
Microgrid
LCOS:
Levelized cost of storage
CAPEX:
Capital expenditures
OPEX:
Operational expenditures
NPV:
Net present value
O&M:
Operation and maintenance
IRR:
Internal rate of return
NLC:
Net load curve
LDC:
Load duration curve
BMS:
Building management system
ICT:
Information and communication technology
TSO:
Transmission system operator
Ærø Kommune: Visionsplan for Ærøs energiforsyning (2018). https://www.aeroekommune.dk/borger/energi-natur-miljoe-klima/baeredygtig-energi/visionsplan. Accessed 1 Jan 2022
Allan G, Eromenko I, Gilmartin M, Kockar I, McGregor P (2015) The economics of distributed energy generation: A literature review. Renewable and Sustainable Energy Reviews 42:543–556. https://doi.org/10.1016/j.rser.2014.07.064. Accessed 1 Jan 2022
Alsaidan I, Khodaei A, Gao W (2018) A comprehensive battery energy storage optimal sizing model for microgrid applications. IEEE Trans Power Syst 33(4):3968–3980. https://doi.org/10.1109/TPWRS.2017.2769639
Cooper RG (1990) Stage-gate systems: a new tool for managing new products. Business Horizons 33(3):44–54. https://doi.org/10.1016/0007-6813(90)90040-I
Cooper RG (2008) Perspective: The Stage-Gate® idea-to-launch process-update, what's new, and NexGen systems. Journal of Product Innovation Management 25(3):213–232. https://doi.org/10.1111/j.1540-5885.2008.00296.x
Danish Energy Agency (2015) Strategic Energy Planning In Denmark. Technical report, http://www.energiplanfyn.dk/
Danish Energy Agency: Energistatistik 2018. Technical report (2019). https://ens.dk/sites/ens.dk/files/Analyser/energistatistik_2018.pdf. Accessed 1 Jan 2022
Danish Energy Agency and Energinet: Technology Data Energy storage. Technical report, DEA (2018). https://ens.dk/sites/ens.dk/files/Analyser/technology_data_catalogue_for_energy_storage.pdf. Accessed 1 Jan 2022
Danish Energy Agency: Analyseforudsætninger til Energinet 2020. Technical report, DEA (2020). https://ens.dk/sites/ens.dk/files/Analyser/analyseforudsaetninger_til_energinet_2020.pdf. Accessed 1 Jan 2022
EMD International: Setting up operation strategies in energyPRO. Technical report, EMD International (2016). https://www.emd.dk/files/energypro/HowToGuides/SettingupoperationstrategiesinenergyPRO.pdf. Accessed 1 Jan 2022
EMD International: Data Services. https://www.emd.dk/data-services/. Accessed 1 Jan 2022
European Commission: An EU-wide assessment of National Energy and Climate Plans Driving. Technical report (2020). https://ec.europa.eu/energy/topics/energy-strategy/national-energy-climate-plans_en. Accessed 1 Jan 2022
FORESIGHT: Electricity storage examined: Battery babble debunked. FORESIGHT: Climate and Energy Business (2017)
Gardner P, Jones F, Rowe M, Nouri A, van de Vegte H, Breisig V, Linden C, Pütz T (2016) E-storage: Shifting from cost to value Wind and solar applications. Technical report, World Energy Council . https://www.worldenergy.org/wp-content/uploads/2016/03/Resources-E-storage-report-2016.02.04.pdf. Accessed 1 Jan 2022
Hauschild MZ, Rosenbaum RK, Olsen SI (2017) Life Cycle Assessment: Theory and Practice. Springer, https://books.google.dk/books?id=S7szDwAAQBAJ
Hofstetter P (2000) Perspectives in life cycle impact assessment: A structured approach to combine models of the technosphere, ecosphere, and valuesphere. The International Journal of Life Cycle Assessment 5 . https://doi.org/10.1007/BF02978561
Huang S, Limousin E, Filonenko K, Veje C (2020) Towards a 100 modelling, optimization, and cost analysis. In: Yokoyama R, Amano Y (eds) 33rd International Conference on Efficiency, Cost, Optimization, Simulation and Environmental Impact of Energy Systems (ECOS 2020), vol 1, pp 308–319
IEA (International Energy Agency): World Energy Outlook 2018: Highlights. Technical report (2018). https://www.oecd-ilibrary.org/energy/world-energy-outlook-2018/executive-summary_weo-2018-2-en. Accessed 1 Jan 2022
IRENA: Renewable Capacity Statistics 2020. Technical report (2020). https://irena.org/-/media/Files/IRENA/Agency/Publication/2020/Mar/IRENA_RE_Capacity_Statistics_2020.pdf. Accessed 1 Jan 2022
International Renewable Energy Agency - IRENA: Future of Solar Photovoltaic vol. November, pp. 1–88 (2019). https://www.irena.org/-/media/Files/IRENA/Agency/Publication/2019/Oct/IRENA_Future_of_wind_2019.pdf. Accessed 1 Jan 2022
Kommune Æ (2020) Folketal, areal og beskatning
Mehigan L, Deane JP, Gallachóir BP, Bertsch V (2018) A review of the role of distributed generation (dg) in future electricity systems. Energy 163:822–836. https://doi.org/10.1016/j.energy.2018.08.022. Accessed 1 Jan 2022
Munuera L, Pavarini C (2020) Tracking Energy Integration. https://www.iea.org/reports/tracking-energy-integration. Accessed 02 Feb 2021
IRENA (2017) Electricity storage and renewables: costs and markets to 2030. Technical Report October, IRENA. www.irena.org. Accessed 1 Jan 2022
Energistyrelsen: Teknologikatalog for produktion af el og fjernvarme. Technical report, Energistyrelsen (2020). https://ens.dk/service/fremskrivninger-analyser-modeller/teknologikataloger/teknologikatalog-produktion-af-el-og. Accessed 1 Jan 2022
Energinet: Tariffer og gebyrer (2020). https://energinet.dk/El/Elmarkedet/Tariffer?fbclid=IwAR1sWxBC5HWNWijvPlVNJm6dHA7i7PR0E6rnzwuyLwPif7GDk26bDOhM6Pg. Accessed 1 Jan 2022
Energistyrelsen: Energi og CO2 regnskabet: Ærø Kommune (2018). https://sparenergi.dk/offentlig/vaerktoejer/energi-og-co2-regnskab/aeroe. Accessed 1 Jan 2022
European Commission: 2019 assessment of the progress made by Member States towards the national energy efficiency targets for 2020. Technical report, European Commission (2020). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0326&from=EN. Accessed 1 Jan 2022
Energinet: OM OS (2020). https://energinet.dk/Om-os. Accessed 1 Jan 2022
Energinet: Ancillary services to be delivered in Denmark Tender conditions. Technical Report December, Energinet (2019). https://en.energinet.dk/Electricity/Rules-and-Regulations#Market regulations. Accessed 1 Jan 2022
Santos A, Ma Z, Agergaard M, Rasmussen SF, Jørgensen BN (2020) Analysis of energy storage technologies for island microgrids: A case study of the Ærø island in denmark. In: 2020 IEEE Power Energy Society Innovative Smart Grid Technologies Conference (ISGT), pp. 1–5 . https://doi.org/10.1109/ISGT45199.2020.9087668
Sundseth K (2008) Natura 2000: Protecting Europe's biodiversity. Technical Report 9, European Commission
Taylor P, Bolton R, Stone D, Zhang X-P, Martin C, Upham P (2012) Pathways for energy storage in the UK. Technical report, The Centre for Low Carbon Futures, 1–56. Accessed 1 Jan 2022
Tian Y, Bera A, Benidris M, Mitra J (2018) Stacked revenue and technical benefits of a grid-connected energy storage system. IEEE Trans Indus Appl 54(4):3034–3043. https://doi.org/10.1109/TIA.2018.2825303
Truong CN, Schimpe M, Bürger U, Hesse HC, Jossen A (2018) Multi-use of stationary battery storage systems with blockchain based markets. Energy Procedia 155:3–16. https://doi.org/10.1016/j.egypro.2018.11.070. 12th International Renewable Energy Storage Conference, IRES 2018, 13–15 March 2018, Düsseldorf, Germany. Accessed 1 Jan 2022
The authors would like to acknowledge Per Nielsen, General Manager at EMD International A/S, for the support and help with data and software access. Our thanks also extend to the Center for Energy Informatics, University of Southern Denmark for the research facilities provided to conduct this work.
About this supplement
This article has been published as part of Energy Informatics Volume 5 Supplement 4, 2022: Proceedings of the Energy Informatics.Academy Conference 2022 (EI.A 2022). The full contents of the supplement are available online at https://energyinformatics.springeropen.com/articles/supplements/volume-5-supplement-4.
This work is supported by the "Cost-effective large-scale IoT solutions for energy efficient medium- and large-sized buildings" project, funded by the Danish Energy Agency under the Energy Technology Development and Demonstration Program, ID number: 64020-2108.
Center for Energy Informatics, University of Southern Denmark, Odense, Denmark
Troels S. Nielsen, Jens S. P. Thomsen, Athila Q. Santos & Jutte Kaad
Troels S. Nielsen
Jens S. P. Thomsen
Athila Q. Santos
Jutte Kaad
TSN: Writing, review and editing. JSPT: Writing, review and editing. AQS: Writing, review and editing. JK: Review. All authors read and approved the final manuscript.
Correspondence to Athila Q. Santos.
Nielsen, T.S., Thomsen, J.S.P., Santos, A.Q. et al. Framework for dimensioning battery energy storage systems with applied multi-tasking strategies in microgrids. Energy Inform 5 (Suppl 4), 52 (2022). https://doi.org/10.1186/s42162-022-00236-1
Multi-tasking strategies
Funnel framework
Economics
Experimental identification of unique angular dependent scattering behavior of nanoparticles
Guanchao Yin1,
Min Song1,2,
Waseem Raja1,
Patrick Andrae1,2 &
Martina Schmid ORCID: orcid.org/0000-0001-5103-0750 1,3
Nanoparticles exhibit unique light scattering properties and are applied in many research fields.
In this work, we perform angular resolved scattering measurements to study the scattering behaviour of random and periodic silver (Ag), and periodic polystyrene (PS) nanoparticles.
The random Ag nanoparticles, with a wide particle size distribution, are able to broadbandly scatter light into large angles. In contrast, both types of periodic nanoparticles are characterized by a strong scattering zone where scattering angles are increasing as the wavelength increases.
Angular resolved scattering measurements enable experimentally revealing the particular scattering properties of different nanostructures.
Light scattering is a fundamental property of particles and can be adjusted with great flexibility by finely tuning the particle geometries and surrounding media. This unique feature opens up numerous applications for nanoparticles in spectroscopy, sensors and photovoltaic devices [1,2,3]. For instance, large-angle scattering by nanoparticles can enhance the light propagation path beyond the physical thickness of a device to improve absorption, making it possible to reduce material usage and the resulting manufacturing cost [3,4,5,6,7]. As the scattering behaviour is highly wavelength-dependent, it needs to be identified for specific applications. Mostly, the scattering behaviour is either predicted by theoretical simulations or characterized experimentally by haze measurements [5,6,7]. However, the haze only gives the overall scattered fraction at each wavelength and is not sufficient to resolve angular scattering details. In this contribution, we apply angular resolved scattering (ARS) measurements [6,7,8], for which only a few examples exist so far [9], to characterize the wavelength-dependent scattering behaviour of particles.
The measurement is done with a UV/VIS setup (PerkinElmer Lambda 950 UV/VIS) and an additional ARS (Automated Reflectance/Transmittance Analyser (ARTA)) extension [10]. An illustrative sketch of ARTA is shown in Fig. 1a. It consists of a fixed sample holder placed in the middle of the moving detector with a detector-sample distance of 92.1 mm (\(R_{\mathrm{ARS}}\)). Unpolarised light comes from 180° and the scattered light is measured with an angular resolution of 2°. The detection is done by an integrating sphere with a slit opening of 6 mm width (w) and 17 mm height (l). Since the detector covers only one plane of the whole scattering space, the measured value at a certain angle has to be scaled to estimate the scattering into the full sphere. The weighting factor F is obtained as [7]:
$$ F=\frac{4\pi R_{\mathrm{ARS}}^2}{A_{\mathrm{D}}}\,\sin\left(\phi\right)\sin\left(\Delta \vartheta\right) $$
where \(A_{\mathrm{D}} = w \cdot l\) is the area of the detector slit and \(\phi\) is the measuring angle. \(\Delta \vartheta\) is related to the slit width and is obtained by \( \Delta \vartheta =\arctan \left(\frac{w}{2R_{\mathrm{ARS}}}\right) \).
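As a quick numerical check of the two equations above, the following sketch (Python assumed) evaluates F with the geometry quoted in the text (92.1 mm detector-sample distance, 6 mm × 17 mm slit):

```python
import math

# Weighting factor F that expands a single-plane ARS measurement to the
# full scattering sphere, using the geometry quoted in the text.
R_ARS = 92.1       # detector-sample distance, mm
w, l = 6.0, 17.0   # slit width and height, mm
A_D = w * l        # detector slit area, mm^2
dtheta = math.atan(w / (2 * R_ARS))   # angular half-width of the slit

def weighting_factor(phi_deg):
    phi = math.radians(phi_deg)
    return 4 * math.pi * R_ARS**2 / A_D * math.sin(phi) * math.sin(dtheta)

for phi in (30, 60, 90):
    print(f"phi = {phi:2d} deg -> F = {weighting_factor(phi):.1f}")
```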
a Illustrative sketch of the Automated Reflectance/Transmittance Analyser (ARTA) and (b) measured samples
We selected random and periodic Ag nanoparticles as well as closely packed polystyrene (PS) nanospheres, covering both metallic and dielectric materials. To avoid refraction at the substrate/air interface influencing the scattering angles when light leaves the substrate, a half-cylinder glass substrate is used; a sketch is shown in Fig. 1b. The investigated wavelength range lies between 400 nm and 800 nm, and the scanning angle is confined to 30° to 90° (see Fig. 1a), since transmission dominates over reflection in intensity for our samples, and direct transmission (scattering angles below 30°) is omitted.
Figure 2 shows the scanning electron microscopy (SEM) morphologies (left column) and the wavelength-dependent angular scattering behaviour (right column) of the random and periodic Ag nanoparticles and the PS spheres. The random Ag nanoparticles in Fig. 2a were fabricated by annealing a 50 nm thick Ag film for 20 min at a temperature of 500 °C in air. The nanoparticle radii range dominantly from 80 nm to 160 nm (see the size distribution in Fig. 3a) with an average spacing of 200 nm. The periodic Ag nanoparticles (Fig. 2b) were prepared using nanosphere lithography [11]: a 30 nm thick Ag film was evaporated onto a hexagonally closely packed monolayer of PS spheres with a radius of 450 nm; subsequently, the PS spheres were removed in an ultrasonic bath and triangular Ag nanoparticles remained; finally, spherical Ag nanoparticles formed after annealing at a temperature of 200 °C for 2 h. Due to the template structure of the PS nanospheres, the Ag nanoparticles exhibit a hexagonal order at a uniform radius of 50 nm. Figure 2c shows the closely packed PS spheres themselves, used for the formation of the periodic Ag particles in Fig. 2b; they constitute the dielectric nanoparticle sample.
Scanning electron microscopy (SEM) morphologies (left column) and wavelength dependent angular resolved scattering (ARS) behaviour (right column) of random (a,d) and periodic (b,e) Ag nanostructures and PS spheres (c,f)
a Size distribution (radius) of random Ag nanoparticles shown in Fig. 2 (a), and (b) calculated angular power distribution of light scattered by a Ag nanoparticle at air/glass interface as wavelength varies
As observed in Fig. 2d, the random Ag nanoparticles exhibit a strong scattering ability with a pronounced angular scattering range from 50° to 60°. The scattering is quite broadband and covers almost the whole investigated spectrum, which can be correlated to the broad size and shape distribution of the Ag nanoparticles. Further, as indicated in Fig. 2d, there exists a trend of a moderate increase of the scattering angles as the wavelength goes up. To explain the scattering behaviour of the random Ag nanoparticles, Fig. 3b simulates the angular power distribution of a Ag nanoparticle at an air/glass interface using the finite element method as implemented in the software COMSOL [12]. To adapt the simulation geometry to the experimental case, a spherical Ag nanoparticle of R = 140 nm radius was cropped off by 20 nm (C) at the substrate interface. Firstly, as shown in Fig. 3b, the large-angle scattering ability (angles beyond 30°) is demonstrated; additionally, the angle corresponding to the large-angle scattering peak increases as the wavelength increases. This simulated trend is in agreement with the experimental observation of Fig. 2d. In contrast, the periodic Ag nanoparticles (Fig. 2e) exhibit a distinctive scattering feature: a strong scattering zone where the scattering angles increase from 40° to 70° as the wavelength goes up from 400 nm to 700 nm. We observe a similar scattering feature in Fig. 2f for the closely packed PS nanospheres. Treating the periodic nanoparticle arrays as line diffraction gratings with line distance d and considering the refractive index n = 1.5 of the glass substrate, the zero-order diffraction angle α can be obtained from the equation [13]
$$ d \sin\alpha = \lambda/n $$
where λ is the wavelength of the incident light. The line distance d according to the shortest spacing is set to \(1.15R\), with R being the radius of a PS sphere, taking into account a finite spacing between the spheres of the order of 15%.
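For reference, the following sketch (Python assumed) evaluates this zero-order diffraction angle for the sample parameters given above (R = 450 nm, n = 1.5):

```python
import math

# Zero-order diffraction angle alpha from d*sin(alpha) = lambda/n,
# with d = 1.15*R as justified in the text.
n = 1.5       # refractive index of the glass substrate
R = 450.0     # PS sphere radius, nm
d = 1.15 * R  # effective line distance, nm

def diffraction_angle(wavelength_nm):
    s = wavelength_nm / (n * d)
    return math.degrees(math.asin(s)) if s <= 1.0 else float("nan")

for lam in range(400, 801, 100):
    print(f"lambda = {lam} nm -> alpha = {diffraction_angle(lam):.1f} deg")
```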
The diffraction angle curve (dashed line) is plotted as a function of wavelength in both Fig. 2e and f. The zero-order diffraction angle curve fits very well with the scattering feature of the closely packed PS spheres, suggesting that diffraction determines the strong scattering behaviour of the closely packed PS spheres. Remarkably, the periodic Ag nanoparticles follow the same trend of increasing scattering angles with wavelength, but shifted to even larger scattering angles. This behaviour can be correlated to the large-angle scattering ability of plasmonic nanoparticles shown in Fig. 3b, which is well known for individual metal nanoparticles and less pronounced for dielectric ones [14].
In this work, we prepared random and periodic Ag nanoparticles as well as closely packed PS spheres and studied their scattering behaviour using angular resolved scattering measurements. The different scattering properties of the three nanoparticle samples are revealed: random Ag nanoparticles have a broadband scattering ability with large scattering angles due to their wide particle size distribution, whereas both periodic nanoparticle types are characterized by a strong scattering zone where the scattering angles increase as the wavelength goes up, which for the closely packed PS spheres is explained by zero-order diffraction. Overall, angular resolved scattering measurements are shown to be a promising experimental characterization method for identifying the scattering properties of nanoparticles, and they can support the selection of nanoparticles for specific applications.
Wei, Y., Cao, C., Jin, R., Mirkin, C.A.: Science. 297, 1536 (2002)
Schmid, M., Manley, P., Song, M., Yin, G.: J. Mater. Res. 31, 3273 (2016)
Yin, G., Knight, M.W., Claire van Lare, M., Garcia, M.M.S., Polman, A., Schmid, M.: Adv. Opt. Mater. 5, 1600637 (2017)
Mellor, A., Hylton, N.P., Hauser, H., Thomas, T., Lee, K.H., Saleh, Y., Giannini, V., Braun, A., Loo, J., Vercruysse, D., Dorpe, P., Blasi, B., Maier, S.A., Ekins-Daukes, N.J.: IEEE J. Photovolt. 6, 1678 (2016)
Tan, H.R., Santbergen, R., Smets, A.H.M., Zeman, M.: Nano Lett. 12, 4070 (2012)
Lipovsek, B., Krc, J., Isabella, O., Zeman, M., Topic, M.: Phys. Status Solidi C. 3-4, 1041 (2010)
Dewald, W., Sittinger, V., Szyszka, B.: 8th International Conference on Coatings on Glass and Plastics, 415 (2011)
Schröder, S., Herffurth, T., Blaschke, H., Duparré, A.: Appl. Opt. 50, C164 (2011)
Wang, J.X., Nilsson, A.M., Fernandes, D.L.A., Niklasson, G.A.: J. Phys. Conf. Ser. 682, 012018 (2016)
www.perkinelmer.de
Haynes, C.L., Duyne, R.P.V.: J. Phys. Chem. B. 105, 5599 (2001)
www.comsol.com
Kinoshita, S., Yoshioka, S., Miyazaki, J.: Rep. Prog. Phys. 71, 076401 (2008)
Schmid, M., Andrae, P., Manley, P.: Nanoscale Res. Lett. 9, 50 (2014)
The authors acknowledge the funding from the Helmholtz-Association for Young Investigator groups within the Initiative and Networking fund (VH-NG-928). In addition, M. Schmid thanks Rudi Santbergen (TU Delft) and Wilma Dewald (formerly IST Braunschweig) for discussions.
Funding was obtained from the Helmholtz-Association for Young Investigator groups within the Initiative and Networking fund (VH-NG-928).
As given in the paper.
Nanooptische Konzepte für die PV, Helmholtz-Zentrum Berlin für Materialien und Energie GmbH, Hahn-Meitner Platz 1, 14109, Berlin, Germany
Guanchao Yin, Min Song, Waseem Raja, Patrick Andrae & Martina Schmid
Fachbereich Physik, Freie Universität Berlin, Arnimallee 14, 14195, Berlin, Germany
Min Song & Patrick Andrae
Fakultät für Physik, Universität Duisburg-Essen & CENIDE, Lotharstraße 1, 47057, Duisburg, Germany
Martina Schmid
Guanchao Yin
Min Song
Waseem Raja
Patrick Andrae
M. Schmid proposed the original idea, guided the research and contributed to the manuscript. G. Yin conducted the measurements, evaluated the data and wrote the main manuscript. M. Song prepared the nanostructures, P. Andrae designed and conducted initial measurements and W. Raja provided the simulation templates. All of the authors read and approved the final manuscript.
Correspondence to Martina Schmid.
Yin, G., Song, M., Raja, W. et al. Experimental identification of unique angular dependent scattering behavior of nanoparticles. J. Eur. Opt. Soc.-Rapid Publ. 13, 38 (2017). https://doi.org/10.1186/s41476-017-0066-4
Light scattering
Angular resolved measurements
Wide angle scattering
Grating behaviour
two pointers
D. Intervals of Intervals
Little D is a friend of Little C who loves intervals very much instead of the number "$$$3$$$".
Now he has $$$n$$$ intervals on the number axis, the $$$i$$$-th of which is $$$[a_i,b_i]$$$.
The $$$n$$$ intervals alone cannot satisfy him. He defines the value of an interval of intervals $$$[l,r]$$$ ($$$1 \leq l \leq r \leq n$$$, $$$l$$$ and $$$r$$$ are both integers) as the total length of the union of the intervals from the $$$l$$$-th to the $$$r$$$-th.
He wants to select exactly $$$k$$$ different intervals of intervals such that the sum of their values is maximal. Please help him calculate the maximal sum.
First line contains two integers $$$n$$$ and $$$k$$$ ($$$1 \leq n \leq 3 \cdot 10^5$$$, $$$1 \leq k \leq \min\{\frac{n(n+1)}{2},10^9\}$$$) — the number of intervals Little D has and the number of intervals of intervals he will select.
Each of the next $$$n$$$ lines contains two integers $$$a_i$$$ and $$$b_i$$$, the $$$i$$$-th line of the $$$n$$$ lines describing the $$$i$$$-th interval ($$$1 \leq a_i < b_i \leq 10^9$$$).
Print one integer — the maximal sum of values Little D can get.
For the first example, Little D will select $$$[1,2]$$$, the union of the first interval and the second interval is $$$[1,4]$$$, whose length is $$$3$$$.
For the second example, Little D will select $$$[1,2]$$$, $$$[2,3]$$$ and $$$[1,3]$$$; the answer is $$$5+6+4=15$$$.
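To make the definition of a value concrete, here is a brute-force sketch (Python assumed) that computes the value of a single interval of intervals by sorting and merging. The two intervals used below are hypothetical ones chosen to reproduce the stated union $$$[1,4]$$$ of length $$$3$$$, since the raw example input is not shown on this page; the sketch is far too slow for the full constraints and only illustrates the definition.

```python
# Value of one "interval of intervals" [l, r]: total length of the union
# of the l-th through r-th intervals, computed by sorting and merging.

def union_length(intervals, l, r):
    """intervals: list of (a, b); l, r are 1-based and inclusive."""
    total, cur_a, cur_b = 0, None, None
    for a, b in sorted(intervals[l - 1:r]):
        if cur_b is None or a > cur_b:    # disjoint from the running block
            if cur_b is not None:
                total += cur_b - cur_a
            cur_a, cur_b = a, b
        else:                             # overlapping: extend the block
            cur_b = max(cur_b, b)
    if cur_b is not None:
        total += cur_b - cur_a
    return total

# Hypothetical intervals consistent with the first example's explanation:
# their union is [1, 4], whose length is 3.
print(union_length([(1, 3), (2, 4)], 1, 2))   # -> 3
```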
Bao's Curriculum Vitae
Impedance matching for broadband PEH systems
Last updated on Apr 13, 2020 6 min read Technology
For piezoelectric energy harvesting (PEH) systems, a tremendous number of designs targeting self-powered sensors have appeared over the past decade. Typically, a weakly coupled PEH system is used to minimize the influence of the electromechanical coupling effect. However, it is worth noting that in real applications, miniaturized structures often imply a strong coupling intensity, which affects the performance of the power sources, such as mechanical oscillators or cantilever beams. Therefore, it is necessary to address the effect of different coupling intensities in order to carry out the impedance matching that maximizes the harvested power, as well as to obtain robust designs for various power sources.
Impedance Matching
Based on the previous study of the phase-variable technology, it has been shown that manipulating the switching phase is profitable when off-resonance cases occur. It is therefore meaningful to investigate its performance for impedance matching from a pragmatic perspective. With reference to past studies on the equivalent impedance of PEH systems, impedance expressions can be established in the mechanical domain and the electrical domain, corresponding to the mechanical structure and the energy harvesting circuit, respectively. From classical impedance matching theory, the maximum harvested power is achieved when the load and source impedances are complex conjugates of each other, which means that the real parts of the mechanical and electrical impedance are equal and the imaginary parts cancel each other at resonance. This method is commonly used in electronic analysis to maximize the energy transferred from source to load. It can be utilized in PEH systems to explore the power relationship and the power limit, which contributes to designing broadband, high-energy-capability PEH systems.
Figure 1. The illustration of impedance matching
For a certain coupling intensity, as shown in Fig. 1(a), it is enlightening to explore impedance matching under different circuit solutions. In the detailed view in Fig. 1(b), the mechanical impedance curve lies outside the attainable impedance range of conventional topologies such as SEH. Take P-SSHI as an example: as the load resistance grows, the first tunable parameter varies from 0 to π, which leads to a stronger equivalent damping effect, corresponding to the real part of the electrical impedance. However, confined to small load resistances, the electrical impedance cannot reach the mechanical impedance.
As the load resistance continues to increase along the one-dimensional black dashed line until ${Z'_e} = Z_m^*$ at the resonance frequency, the red circle in Fig. 1(c) represents the coupling point where the output power reaches its maximum. At this point, the real parts of the mechanical impedance and the electrical impedance equal each other, and the imaginary parts cancel each other. However, if the vibration frequency is not the resonance frequency but a lower or higher one, the real parts of the two impedances may still be matched, but the imaginary parts cannot be, because topologies such as P-SSHI, S-SSHI or SEH have only one tunable parameter, the rectified voltage, which confines the attainable impedance range to one-dimensional curves. Therefore, their off-resonance energy harvesting capability, in other words their broadband capability, is underperforming.
If we consider the general practical model, there is a second tunable parameter: the synchronized switching phase. With different switching phases, the conventional one-dimensional impedance curves become a two-dimensional impedance plane, which enables a much wider range for impedance matching. As shown in Fig. 1(d), if the system operates at a lower frequency, there is no way for conventional solutions to match the coupling point, but a lead phase brings a negative electrically induced stiffness to the system. Similarly, if the system operates at a higher frequency, a lag phase introduces a larger electrically induced stiffness, which tunes the resonance frequency from $\omega_r$ to $\omega_h$. With the second tunable parameter, the off-resonance energy harvesting capability is increased; the general practical model therefore enables not only in-resonance harvesting capability but also broadband capability in off-resonance cases.
Critical Coupling
Figure 2. Critical coupling
One important feature of impedance matching is the critical coupling of the selected circuit topology, i.e., the minimum coupling intensity needed to reach the maximum output power. As shown in Fig. 2, for PV-SSHI, the red line is tangent to the boundary of PV-SSHI at the coupling point. To determine the critical coupling intensity of PV-SSHI, the effective coupling intensity is first defined as $K_e^2$; then, with the impedance matching relationship, it can be derived that:
$$\begin{aligned}
\frac{2(1-\gamma)}{\tilde{\omega}\pi(1+\gamma)}\left(1+\cos 2\varphi\right) &= \frac{2\zeta_m}{K_c^2},\\
1+\sin 2\varphi &= \frac{\tilde{\omega}^2-1}{K_c^2}
\end{aligned}$$
With the critical coupling intensity $K_c^2$ derived from the above expressions for PV-SSHI, the minimum coupling intensity required for impedance matching can be identified for a given mechanical damping factor. The red line shown in Fig. 2(b) indicates the optimized switching phases under different mechanical damping factors. The critical coupling intensity of the conventional SSHI topologies is obtained at $\varphi=0$; however, compared with the vertical dashed gray line crossing $\varphi=0$, the optimized phase solutions are smaller than $0$, which means that with the phase-variable technology the minimum coupling intensity can be matched with a certain lead switching phase. Moreover, as the mechanical damping effect increases, a larger negative optimized switching phase is needed to match the minimum coupling intensity. As shown in Fig. 2(a), which marks the two critical coupling points for the conventional SSHI topologies and PV-SSHI, a weaker coupling intensity can be handled by the phase-variable energy harvesting model. To simplify the expression of $K_c^2$ for practical usage, it can be expanded as a Taylor series:
$$K_{c}^2 = \frac{2}{1 + \cos 2\varphi}\,c + \frac{2\left(1 + \sin 2\varphi\right)}{\left(1 + \cos 2\varphi\right)^2}\,c^2, \qquad c = \frac{\pi\left(1 + \gamma\right)}{2\left(1 - \gamma\right)}\,\zeta_m$$
From the expression for $K_c^2$ it can be seen that, for a fixed switching phase, the critical coupling intensity increases with the mechanical damping factor. Moreover, if the flipping factor approaches $-1$, the critical coupling intensity approaches $0$; then, no matter what circuit topology is used, the maximum output power can be reached, because the mechanical impedance curve always lies inside the attainable impedance range of the selected circuit. Furthermore, the critical coupling intensity can also be tuned by the switching phase $\varphi$: as mentioned before, there exists an optimized switching phase that handles the weakest coupling intensity, which means that the application range of the conventional topologies is expanded by the phase-variable technology.
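The Taylor-series expression above is easy to evaluate numerically. The following sketch (Python assumed; the flipping factor and damping value are sample numbers chosen only for illustration) scans the switching phase and locates the phase minimizing $K_c^2$, reproducing the observation that the optimum lies at a small lead (negative) phase:

```python
import math

# Evaluate the Taylor-series approximation of the critical coupling
# intensity K_c^2 as a function of the switching phase phi.
gamma = -0.7    # assumed flipping factor (illustrative value)
zeta_m = 0.02   # assumed mechanical damping factor (illustrative value)
c = math.pi * (1 + gamma) / (2 * (1 - gamma)) * zeta_m

def K_c2(phi):
    cos2, sin2 = math.cos(2 * phi), math.sin(2 * phi)
    return 2 * c / (1 + cos2) + 2 * (1 + sin2) * c**2 / (1 + cos2) ** 2

# Scan lead/lag phases and report the phase minimizing K_c^2.
phis = [math.radians(d) for d in range(-60, 61)]
best = min(phis, key=K_c2)
print(f"optimal switching phase ~ {math.degrees(best):+.0f} deg, "
      f"K_c^2 ~ {K_c2(best):.5f}")
```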
B. Zhao, J. Liang, "Impedance Modeling of a Bidirectional Energy Conversion Circuit towards Energy Harvesting and Vibration Exciting Purposes," IEEE/ASME Transactions on Mechatronics, to appear.
B. Zhao, J. Liang, and K. Zhao, "Phase-Variable Control of Parallel Synchronized Triple Bias-Flips Interface Circuit towards Broadband Piezoelectric Energy Harvesting," in 2018 IEEE International Symposium on Circuits and Systems (ISCAS), 2018, pp. 1–5.
Bao Zhao
Work hard, play hard!
What is the difference between QM and non-relativistic QFT
I have often heard that quantum mechanics can be considered to be 0-dimensional quantum field theory (QFT in 0 space dimensions and one dimension of time). I also know that there are classical field theories, such as fluid dynamics for example, that may, depending on the regime considered, be relativistic or not.
But what are non-relativistic quantum field theories (examples?), and how do they differ from (non-relativistic) quantum mechanics?
asked Oct 2, 2016 in Theoretical Physics by Dilaton (5,240 points) [ no revision ]
QM has finite degrees of freedom. Non-relativistic QFT has infinite degrees of freedom. QM can be treated as a one-dimensional non-relativistic QFT.
So far, the zero-dimensional QFT I've seen people discussing is perturbative Gaussian integrals. An example of a non-relativistic QFT is the Schrödinger field theory, which is introduced on its Wikipedia page: https://en.wikipedia.org/wiki/Schr%C3%B6dinger_field
The Hamiltonian of a QFT can be obtained by the so-called 2nd quantization method, starting from QM and sandwiching the first-quantized Hamiltonian operator between the field operator and its Hermitian conjugate.
The term 2nd quantization is misleading because everything has already been quantized. For historical reasons, people thought of it as if the "wavefunction" were quantized again, and so we now call it 2nd quantization.
answered Oct 2, 2016 by XIaoyiJing (50 points) [ revision history ]
edited Oct 2, 2016 by XIaoyiJing
Ok, do you have some concrete examples of non-relativistic QFTs? Thinking about it, I have a rather hard time coming up with a specific example ...
commented Oct 2, 2016 by Dilaton (5,240 points) [ no revision ]
@Dilaton You can find even more examples of non-relativistic QFT from many advanced quantum mechanics books and condensed matter field theory books.
1. Advanced Quantum Mechanics by Franz Schwabl
2. Quantum Field Theory of Point Particles and String by Hatfield B
3. Condensed Matte Field Theory by Altland Alexander Simons Ben.
As far as I can recall, 2nd quantization is roughly using ladder operators to generalize one-particle QM to many-particle QM. The Hilbert space of one-particle QM is replaced by the Fock space for an arbitrary number of particles.
commented Oct 2, 2016 by XIaoyiJing (50 points) [ no revision ]
When the 2nd quantization method used for non-relativistic QM is applied to a relativistic theory, it leads to an unacceptable quantum theory. The reason is that in a relativistic theory it is not easy to define a well-behaved localized operator $\hat{x}^{\mu}$, and so you cannot find the relativistic generalization of the completeness relation $1=\int dx\, |x\rangle\langle x|$. In relativistic quantum field theories, the momentum representation is not equivalent (in the sense of Fourier transforms) to the position representation. In fact, you cannot interpret a relativistic field $\phi(x)$ as a quantum mechanical wave. For a much clearer explanation, you may read some papers about the "induced representation" of the Poincaré group.
commented Oct 2, 2016 by XIaoyiJing (50 points) [ revision history ]
In QM, we start from promoting $x(\tau)$ and $p(\tau)$ to quantum operators $\hat{x}$ and $\hat{p}$, where $\tau$ is a parameter, which is usually called time.
When the classical theory is relativistic, you may think the possibility that you promote time $t(\tau)$ as an operator. But there is a theorem which I cannot completely recall at the moment saying that the quantum commutation relation $[x^{\mu},p^{\nu}]=i\hbar\eta^{\mu\nu}$ for Lorentzian signature gives you a quantum theory whose energy is not bounded from below. This is why "relativistic quantum mechanics" does not work.
Then there are two approaches. One is called string theory, in which the quantum operators $x^{\mu}$ are parametrized by two variables $\tau$ and $\sigma$.
Another approach is quantum field theory, in which the dynamical variables are fields $\phi$ depending on the 1+3 variables $t$ and $\vec{x}$. In this theory, the space coordinates continuously label the dynamical variables. You may think of the fields $\phi$ in QFT as the $x$ in QM: in QFT, $\phi(t,\vec{x})$ is labeled by the continuous indices $\vec{x}$; in QM, $x_{i}$ is labeled by $i$. You may alternatively think of $\phi$ in QFT as parametrized by the four parameters $t,x^{i}$, whereas in QM $x(\tau)$ is parametrized by the single variable $\tau$.
In its present formulation, your third sentence contradicts your first two sentences!
commented Oct 2, 2016 by Arnold Neumaier (13,989 points) [ no revision ]
@Arnold Neumaier Thanks a lot. I don't think that the two sentences "QM has finite degrees of freedom. Non-relativistic QFT has infinite degrees of freedom." are correct anymore. But I think QM is a 1-dim QFT. Taking path integral quantization as an example, QM is a 1-dim QFT since we sum over paths $x(t)$ parametrized by a single variable.
According to the customary terminology in nonrelativistic QFT, and consistent with the definition given in the question,
''0-dimensional quantum field theory (QFT in 0 space and one dimension of time)''
QM is a 0-dimensional QFT. You count the dimensions in the relativistic way, which is inappropriate for QM.
A traditional quantum mechanical system of $N$ particles has a finite number of $3N$ position coordinates as degrees of freedom. The Hamiltonian can be expressed for any $N$; hence one can form the corresponding system where $N$ is treated as uncertain. This is done by working in the corresponding Fock space, defined as the direct sum of all $N$-particle Hilbert spaces. The resulting system has now an infinite number of degrees of freedom. That this constitutes a nonrelativistic QFT becomes apparent when changing the notation to that of second quantization, with creation and annihilation operators that change the number of particles.
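A minimal numerical sketch of these creation and annihilation operators on a truncated Fock basis (Python with NumPy assumed; the truncation at finitely many number states is an assumption made purely so the operators become finite matrices):

```python
import numpy as np

# Truncated Fock-space sketch of second quantization: the bosonic
# annihilation operator a acts on number states |0>, ..., |N-1> via
# a|n> = sqrt(n)|n-1>.
N = 8
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)   # superdiagonal sqrt(n)
adag = a.conj().T

# Canonical commutator [a, a^dag] = 1 holds away from the truncation edge.
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # True

# A number-conserving Hamiltonian such as H = omega * a^dag a acts on the
# whole Fock space at once: the particle number is now an operator.
omega = 1.0
H = omega * adag @ a
print(np.round(np.diag(H), 3))   # energies 0, omega, 2*omega, ...
```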
Thus all statistical mechanics done in the grand canonical ensemble is nonrelativistic quantum field theory.
Other examples are given by the statistical mechanics of crystals, which are infinitely extended periodic systems. The periodicity, however, allows one to map the problem on the 3-dimensional torus, which is a compact manifold; so that the total number of particles on the torus is finite (but huge).
Both classes of examples together constitute the main examples of nonrelativistic quantum field theories.
In general, one considers the infinite number of degrees of freedom as the essence of a quantum field theory, but this is the case only when the underlying position space is non-compact. In the compact case, and in particular for a torus (e.g., a crystal) or a single point (= 0-dimensional field theory; e.g., ordinary quantum mechanics in the rest frame in a basis of harmonics), there are only finitely many degrees of freedom.
0-dimensional QFT is QM in the same sense as 0-dimensional classical field theory is described by ODEs rather than PDEs for higher dimensions.
The difference between relativistic and nonrelativistic is the same in QFT as in classical field theory - namely whether the theory is invariant under the Poincare group or the Galilei group.
answered Oct 2, 2016 by Arnold Neumaier (13,989 points) [ revision history ]
edited Oct 2, 2016 by Arnold Neumaier
QM is certainly not a $0$-dim QFT.
Starting from the partition function of a quantum field living on a connected manifold $M$,
$$\int D[\phi]e^{iS[\phi]},$$
where the field $\phi$ is regarded as a map $\phi$: $M\rightarrow\mathbb{R}$, if the manifold is $0$-dimensional, the isometry group of $M$ is trivial since $M$ is a single point.
Furthermore, this field cannot have a spin degree of freedom, i.e., it carries no spinorial indices. Lastly, there can be no spacetime derivative acting on $\phi$ in the action $S[\phi]$.
We then can conclude that the classical action $S[\phi]$ can only be a real-valued function $S(\phi)$, which can be Taylor expanded as
$$S(\phi)=\frac{m^2}{2}\phi^{2}+\frac{g}{3!}\phi^3+\cdots.$$
This partition function then is simply a perturbative Gaussian integral in calculus
$$\int_{\mathbb{R}}e^{i\left(\frac{m^2}{2}\phi^{2}+\frac{g}{3!}\phi^3+\cdots\right)}\,d\phi.$$
Then the $0$-dim QFT is simply a theory of perturbative Gaussian integrals in calculus.
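For a hands-on check of this statement, the sketch below compares the exact and perturbative evaluations of such a zero-dimensional partition function. A Euclidean weight $e^{-S}$ and a quartic interaction are assumed here purely so that the real integral converges; Python with SciPy is assumed.

```python
import math
from scipy import integrate

# Zero-dimensional "QFT" as an ordinary integral: compare the exact
# partition function Z with its perturbative (Gaussian-moment) expansion.
m, g = 1.0, 0.1

def S(phi):
    return 0.5 * m**2 * phi**2 + g * phi**4 / 24.0   # Euclidean action, quartic term

Z_exact, _ = integrate.quad(lambda p: math.exp(-S(p)), -20.0, 20.0)

def dfact(k):
    """(2k-1)!!, the odd double factorial entering Gaussian moments."""
    return math.prod(range(2 * k - 1, 0, -2)) if k else 1

def Z_pert(order):
    # Z ~ sqrt(2*pi) * sum_n (-g/4!)^n / n! * (4n-1)!! / m^(4n+1)
    total = 0.0
    for n in range(order + 1):
        total += (-g / 24.0) ** n / math.factorial(n) * dfact(2 * n) / m ** (4 * n + 1)
    return math.sqrt(2.0 * math.pi) * total

print(f"exact Z          = {Z_exact:.6f}")
for k in range(4):
    print(f"series, order {k} = {Z_pert(k):.6f}")
```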
If the manifold $M$ is $1$-dim, then by following a similar procedure, we can find the partition function given by the Feynman path-integral in QM.
answered Oct 4, 2016 by XIaoyiJing (50 points) [ no revision ]
What you describe is a field theory with a single field only and $-1$ space dimensions, so it doesn't even have time. With a zero-dimensional space, the fields are functions of time, the action may involve the first derivative with respect to time, and the path integral is over all histories (time-dependent fields). What one gets is precisely the same as an anharmonic quantum oscillator in the Heisenberg picture.
Moreover, you forgot that in QFT $\phi$ is a single label for all the fields, which are components of $\phi$. If you look at the 0-dimensional case of a QFT with $3N$ scalar fields invariant under an SO(3) acting on the components in sets of three, and rename the triples as $q_k~(k=1,\ldots,N)$, you get precisely the quantum mechanics of $N$ particles.
commented Oct 4, 2016 by Arnold Neumaier (13,989 points) [ revision history ]
Welcome to Mathematical Optimization Laboratory!
Optimization Algorithms
Fixed Point Algorithms
Optimization Problem and Its Applications
Optimization Algorithms for Smooth Convex Optimization
Centralized Optimization Algorithms
Decentralized Optimization Algorithms
Optimization Algorithms for Nonsmooth Convex Optimization
Optimization Algorithms for Smooth Nonconvex Optimization
Let us consider the following problem: \begin{align*} \text{Minimize } f(x):= \sum_{i\in \mathcal{I}} f^{(i)} (x) \text{ subject to } x\in C := \bigcap_{i\in \mathcal{I}} C^{(i)}, \end{align*} where $H$ is a real Hilbert space, $f^{(i)} \colon H \to \mathbb{R}$ $(i\in \mathcal{I} := \{1,2,\ldots,I\})$, and $C^{(i)}$ $(\subset H)$ $(i\in \mathcal{I})$ is closed and convex with $C \neq \emptyset$. In particular, it is assumed that $C^{(i)}$ can be expressed as the fixed point set of a nonexpansive mapping. This implies that the metric projection onto $C^{(i)}$ cannot be computed efficiently (e.g., $C^{(i)}$ is the intersection of many closed convex sets or the set of minimizers of a convex function).1) Please see the Fixed Point Algorithms page for the details of fixed point sets.
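To illustrate why such constraint sets pair naturally with fixed point methods, the sketch below (Python with NumPy assumed; the ball and half-space are arbitrary example sets) composes two metric projections: the composition is nonexpansive, and when the two sets intersect its fixed point set is exactly their intersection, so simply iterating it produces a feasible point.

```python
import numpy as np

# T = P_C1 o P_C2: composition of metric projections onto a ball C1 and a
# half-space C2 in R^2. T is nonexpansive and, when C1 and C2 intersect,
# Fix(T) = C1 ∩ C2, so the iteration x <- T(x) (alternating projections)
# converges to a point of the intersection.

def proj_ball(x, center, radius):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def proj_halfspace(x, a, b):                 # C2 = {y : <a, y> <= b}
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

center, radius = np.zeros(2), 1.0            # C1: unit ball
a, b = np.array([1.0, 1.0]), 1.0             # C2: x + y <= 1

x = np.array([3.0, -2.0])
for _ in range(100):
    x = proj_ball(proj_halfspace(x, a, b), center, radius)

print(x, np.linalg.norm(x) <= radius + 1e-9, a @ x <= b + 1e-9)
```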
Here, we divide the problem into three categories.
Smooth Convex Optimization Problem:
It assumes that $f^{(i)}$ $(i\in \mathcal{I})$ is smooth and convex. This class includes the signal recovery problem, the beamforming problem, the storage allocation problem, the optimal control problem, and the bandwidth allocation problem.
Nonsmooth Convex Optimization Problem:
It assumes that $f^{(i)}$ $(i\in \mathcal{I})$ is nonsmooth and convex. This class includes the signal recovery problem, ensemble learning, the minimal antenna-subset selection problem, and the bandwidth allocation problem.
Smooth Nonconvex Optimization Problem
It assumes that $f^{(i)}$ $(i\in \mathcal{I})$ is smooth and nonconvex. This class includes practical problems such as the power control problem and the bandwidth allocation problem.
We focus on the following algorithms for solving the above problems.
Centralized Optimization Algorithms:
It can be implemented under the assumption that the explicit forms of the $f^{(i)}$s and $C^{(i)}$s are known, i.e., $f:= \sum_{i\in \mathcal{I}} f^{(i)}$ and $C:= \bigcap_{i\in \mathcal{I}} C^{(i)}$ can be used directly.
Decentralized Optimization Algorithms:
It can be implemented through the cooperation of all users when each user $i$ $(i\in \mathcal{I})$ has its own private $f^{(i)}$ and $C^{(i)}$ and cannot use the private information of other users.
Please see the Publications page. You can get the pdf files of our papers.
Many algorithms have been presented for solving convex optimization problems (e.g., interior-point methods, projection methods). In 2001, Yamada proposed the hybrid steepest descent method for smooth convex optimization over the fixed point sets of nonexpansive mappings. We have presented algorithms based on the hybrid steepest descent method together with their convergence analyses; a minimal sketch of the method is given after the publication list below. The following are our previously reported results:
H. Iiduka: Acceleration Method for Convex Optimization over the Fixed Point Set of a Nonexpansive Mapping, Mathematical Programming, Vol. 149, No. 1, pp. 131-165, 2015.
S. Iemoto, K. Hishinuma, and H. Iiduka: Approximate Solutions to Variational Inequality over the Fixed Point Set of a Strongly Nonexpansive Mapping, Fixed Point Theory and Applications, 51, Vol. 2014, No. 1, 2014.
H. Iiduka and I. Yamada: Computational Method for Solving a Stochastic Linear-Quadratic Control Problem Given an Unsolvable Stochastic Algebraic Riccati Equation, SIAM Journal on Control and Optimization, Vol. 50, No. 4, pp. 2173-2192, 2012.
H. Iiduka: Fixed Point Optimization Algorithm and Its Application to Network Bandwidth Allocation, Journal of Computational and Applied Mathematics, Vol. 236, No. 7, pp. 1733-1742, 2012.
H. Iiduka: Iterative Algorithm for Solving Triple-hierarchical Constrained Optimization Problem, Journal of Optimization Theory and Applications, Vol. 148, No. 3, pp. 580-592, 2011.
H. Iiduka: Three-term Conjugate Gradient Method for the Convex Optimization Problem over the Fixed Point Set of a Nonexpansive Mapping, Applied Mathematics and Computation, Vol. 217, No. 13, pp. 6315-6327, 2011.
H. Iiduka and I. Yamada: A Use of Conjugate Gradient Direction for the Convex Optimization Problem over the Fixed Point Set of a Nonexpansive Mapping, SIAM Journal on Optimization, Vol. 19, No. 4, pp. 1881-1893, 2009.
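As referenced above, here is a minimal sketch of the hybrid steepest descent method (Python with NumPy assumed; the objective, the nonexpansive mapping, and the step sizes are illustrative choices consistent with the method's standard conditions $\lambda_n \to 0$, $\sum_n \lambda_n = \infty$):

```python
import numpy as np

# Hybrid steepest descent sketch: minimize the smooth convex
# f(x) = ||x - p||^2 / 2 over Fix(T), where T is the metric projection
# onto the unit ball (a nonexpansive mapping whose fixed point set is
# the ball itself). Iteration: x_{n+1} = T(x_n) - lambda_n * grad f(T(x_n)).

def T(x):                                   # projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

p = np.array([2.0, 1.0])
grad_f = lambda x: x - p

x = np.zeros(2)
for n in range(1, 20001):
    y = T(x)
    x = y - (1.0 / n) * grad_f(y)           # lambda_n = 1/n

print(T(x), p / np.linalg.norm(p))          # both ~ the minimizer on the ball
```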
The incremental subgradient method and the broadcast optimization method are useful for decentralized convex optimization. However, they can be applied only to the case where $C^{(i)} = C$ $(i\in \mathcal{I})$ and $C$ is simple in the sense that the projection onto $C$ can be computed easily (e.g., $C$ is a half-space). We have proposed decentralized optimization algorithms for solving convex optimization problems with more complicated $C^{(i)}$ (e.g., $C^{(i)}$ is the intersection of simple, closed convex sets), which include the bandwidth allocation problem and the storage allocation problem.
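For orientation, the following is a minimal sketch of one incremental pass of this kind (Python with NumPy assumed; the quadratic user objectives and the shared ball constraint are illustrative stand-ins for the private $f^{(i)}$ and $C^{(i)}$):

```python
import numpy as np

# Incremental (sub)gradient sketch: the estimate is passed from user to
# user; each user i applies a step of its private gradient and then its
# private projection. Here f_i(x) = ||x - t_i||^2 / 2 and every C_i is a
# ball of radius 2 -- illustrative choices only.

rng = np.random.default_rng(0)
targets = rng.normal(size=(5, 2))            # private data t_i of 5 users

def proj_ball(x, r=2.0):
    n = np.linalg.norm(x)
    return x if n <= r else r * x / n

x = np.zeros(2)
for k in range(1, 2001):
    step = 1.0 / k                           # diminishing step size
    for t in targets:                        # one incremental cycle
        x = proj_ball(x - step * (x - t))    # user's step + projection
print(x, targets.mean(axis=0))               # x ~ the average of the t_i
```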
H. Iiduka: Stochastic Fixed Point Optimization Algorithm for Classifier Ensemble, IEEE Transactions on Cybernetics (accepted)
H. Iiduka: Decentralized Hierarchical Constrained Convex Optimization, Optimization and Engineering (accepted)
H. Iiduka: Parallel Optimization Algorithm for Smooth Convex Optimization over Fixed Point Sets of Quasi-nonexpansive Mappings, Journal of the Operations Research of Japan, Vol. 58, No. 4, pp. 330-352, 2015.
H. Iiduka: Distributed Convex Optimization Algorithms and Their Application to Distributed Control in Peer-to-Peer Data Storage System, Journal of Nonlinear and Convex Analysis: Special Issue-Dedicated to Wataru Takahashi on the occasion of his 70th birth day, Vol. 16, No. 11, pp. 2159-2179, 2015.
H. Iiduka and K. Hishinuma: Acceleration Method Combining Broadcast and Incremental Distributed Optimization Algorithms, SIAM Journal on Optimization, Vol. 24, No. 4, pp. 1840-1863, 2014.
H. Iiduka: Fixed Point Optimization Algorithms for Distributed Optimization in Networked Systems, SIAM Journal on Optimization, Vol. 23, No. 1, pp. 1-26, 2013.
The proximal point algorithm and subgradient methods are useful for solving nonsmooth convex optimization problems. The following are results on algorithms based on the above methods.
K. Shimizu, K. Hishinuma, H. Iiduka: Parallel Computing Proximal Method for Nonsmooth Convex Optimization with Fixed Point Constraints of Quasi-nonexpansive Mappings, submitted
H. Oishi, Y. Kobayashi, H. Iiduka: Incremental Proximal Method for Nonsmooth Convex Optimization with Fixed Point Constraints of Quasi-nonexpansive Mappings, Linear and Nonlinear Analysis (accepted)
H. Iiduka: Distributed Optimization for Network Resource Allocation with Nonsmooth Utility Functions, IEEE Transactions on Control of Network Systems (accepted)
K. Hishinuma and H. Iiduka: Convergence Analysis of Incremental and Parallel Line Search Subgradient Methods in Hilbert Space, Journal of Nonlinear and Convex Analysis: Special Issue-Dedicated to Wataru Takahashi on the occasion of his 75th birth day, Vol. 20, No. 9, pp.1937-1947, 2019.
K. Hishinuma and H. Iiduka: Incremental and Parallel Machine Learning Algorithms with Automated Learning Rate Adjustments, Frontiers in Robotics and AI: Resolution of Limitations of Deep Learning to Develop New AI Paradigms, Vol. 6, Article 77, 2019.
H. Iiduka: Two Stochastic Optimization Algorithms for Convex Optimization with Fixed Point Constraints, Optimization Methods and Software, Vol. 34, No. 4, pp.731-757, 2019.
K. Sakurai, T. Jimba, and H. Iiduka: Iterative Methods for Parallel Convex Optimization with Fixed Point Constraints, Journal of Nonlinear and Variational Analysis, Vol. 3, No. 2, pp.115-126, 2019.
Y. Hayashi and H. Iiduka: Optimality and Convergence for Convex Ensemble Learning with Sparsity and Diversity Based on Fixed Point Optimization, Neurocomputing, Vol. 273, pp.367-372, 2018.
H. Iiduka: Almost Sure Convergence of Random Projected Proximal and Subgradient Algorithms for Distributed Nonsmooth Convex Optimization, Optimization, Vol. 66, No. 1, pp.35-59, 2017.
H. Iiduka: Incremental Subgradient Method for Nonsmooth Convex Optimization with Fixed Point Constraints, Optimization Methods and Software, Vol. 31, No. 5, pp.931-951, 2016.
H. Iiduka: Convergence Analysis of Iterative Methods for Nonsmooth Convex Optimization over Fixed Point Sets of Quasi-Nonexpansive Mappings, Mathematical Programming, Vol. 159, No. 1, pp. 509-538, 2016.
H. Iiduka: Proximal Point Algorithms for Nonsmooth Convex Optimization with Fixed Point Constraints, European Journal of Operational Research, Vol. 253, No. 2, pp. 503-513, 2016.
H. Iiduka: Parallel Computing Subgradient Method for Nonsmooth Convex Optimization over the Intersection of Fixed Point Sets of Nonexpansive Mappings, Fixed Point Theory and Applications, Vol. 2015, 72, 2015.
The following are our previously reported results.
H. Iiduka: Iterative Algorithm for Triple-Hierarchical Constrained Nonconvex Optimization Problem and Its Application to Network Bandwidth Allocation, SIAM Journal on Optimization, Vol. 22, No. 3, pp. 862-878, 2012.
H. Iiduka: Fixed Point Optimization Algorithm and Its Application to Power Control in CDMA Data Networks, Mathematical Programming, Vol. 133, No.1, pp. 227-242, 2012.
H. Iiduka and M. Uchida: Fixed Point Optimization Algorithms for Network Bandwidth Allocation Problems with Compoundable Constraints, IEEE Communications Letters, Vol. 15, No. 6, pp. 596-598, 2011.
H. Iiduka and I. Yamada: A Subgradient-type Method for the Equilibrium Problem over the Fixed Point Set and its Applications, Optimization, Vol. 58, No. 2, pp. 251-261, 2009.
K. Hishinuma and H. Iiduka: Fixed Point Quasiconvex Subgradient Method, European Journal of Operational Research, Vol. 282, No. 2, 428–437, 2020
H. Iiduka: Distributed Iterative Methods for Solving Nonmonotone Variational Inequality over the Intersection of Fixed Point Sets of Nonexpansive Mappings, Pacific Journal of Optimization, Vol. 10, No. 4, pp. 691-713, 2014.
The metric projection onto $C^{(i)}$, denoted by $P_{C^{(i)}}$, is defined for all $x\in H$ by $P_{C^{(i)}}(x) \in C^{(i)}$ and $\|x - P_{C^{(i)}}(x)\| = \inf_{y\in C^{(i)}} \|x-y\|$.
by Hideaki IIDUKA
March 2016, 8(1): 35-70. doi: 10.3934/jgm.2016.8.35
Lagrangian reduction of discrete mechanical systems by stages
Javier Fernández 1, Cora Tori 2 and Marcela Zuccalli 2
Instituto Balseiro, Universidad Nacional de Cuyo – C.N.E.A., Av. Bustillo 9500, San Carlos de Bariloche, R8402AGP
Departamento de Matemática, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 50 y 115, La Plata, Buenos Aires, 1900
Received February 2015 Revised November 2015 Published February 2016
In this work we introduce a category of discrete Lagrange--Poincaré systems $\mathfrak{L}\mathfrak{P}_d$ and study some of its properties. In particular, we show that the discrete mechanical systems and the discrete dynamical systems obtained by the Lagrangian reduction of symmetric discrete mechanical systems are objects in $\mathfrak{L}\mathfrak{P}_d$. We introduce a notion of symmetry group for objects of $\mathfrak{L}\mathfrak{P}_d$ as well as a reduction procedure that is closed in the category $\mathfrak{L}\mathfrak{P}_d$. Furthermore, under some conditions, we show that the reduction in two steps (first by a closed normal subgroup of the symmetry group and then by the residual symmetry group) is isomorphic in $\mathfrak{L}\mathfrak{P}_d$ to the reduction by the full symmetry group.
Keywords: Geometric mechanics, discrete mechanical systems, symmetry and reduction.
Mathematics Subject Classification: Primary: 37J15, 70G45; Secondary: 70G7.
Citation: Javier Fernández, Cora Tori, Marcela Zuccalli. Lagrangian reduction of discrete mechanical systems by stages. Journal of Geometric Mechanics, 2016, 8 (1) : 35-70. doi: 10.3934/jgm.2016.8.35
January 2018, 17(1): 143-161. doi: 10.3934/cpaa.2018009
Nonlinear Schrödinger equations with sum of periodic and vanishing potentials and sign-changing nonlinearities
Bartosz Bieganowski 1 and Jarosław Mederski 1,2,*
Faculty of Mathematics and Computer Science, Nicolaus Copernicus University, ul. Chopina 12/18, 87-100 Toruń, Poland
Institute of Mathematics, Polish Academy of Sciences, ul. Śniadeckich 8, 00-956 Warszawa, Poland
* Corresponding author
Received February 2017 Revised February 2017 Published September 2017
Fund Project: The second author was supported by the National Science Centre, Poland (Grant No. 2014/15/D/ST1/03638)
We look for ground state solutions to the following nonlinear Schrödinger equation
$$-\Delta u + V(x)u = f(x,u)-\Gamma(x)|u|^{q-2}u \quad \text{on } \mathbb{R}^N,$$
where $V=V_{per}+V_{loc}\in L^{\infty}(\mathbb{R}^N)$ is the sum of a periodic potential $V_{per}$ and a localized potential $V_{loc}$, $\Gamma\in L^{\infty}(\mathbb{R}^N)$ is periodic with $\Gamma(x)\geq 0$ for a.e. $x\in\mathbb{R}^N$, and $2\leq q<2^*$. We assume that $\inf\sigma(-\Delta+V)>0$, where $\sigma(-\Delta+V)$ stands for the spectrum of $-\Delta+V$, and that $f$ has subcritical growth, higher than that of $\Gamma(x)|u|^{q-2}u$; note that the nonlinearity $f(x,u)-\Gamma(x)|u|^{q-2}u$ may change sign. Although a Nehari-type monotonicity condition for the nonlinearity is not satisfied, we investigate the existence of ground state solutions, that is, minimizers on the Nehari manifold.
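For orientation, in the standard Nehari-manifold framework (not spelled out in the abstract, so the normalization below is our assumption), solutions correspond to critical points of the energy functional
$$J(u)=\frac{1}{2}\int_{\mathbb{R}^N}\left(|\nabla u|^2+V(x)u^2\right)dx-\int_{\mathbb{R}^N}F(x,u)\,dx+\frac{1}{q}\int_{\mathbb{R}^N}\Gamma(x)|u|^{q}\,dx,\qquad F(x,u)=\int_0^u f(x,s)\,ds,$$
and the Nehari manifold is $\mathcal{N}=\{u\in H^1(\mathbb{R}^N)\setminus\{0\} : J'(u)(u)=0\}$, which contains every nontrivial critical point; a ground state is a minimizer of $J$ on $\mathcal{N}$.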
Keywords: Photonic crystal, linear defect, gap soliton, ground state, variational methods, Nehari manifold, Schrödinger equation, periodic potential, localized potential.
Mathematics Subject Classification: Primary: 35Q60; Secondary: 35J20, 35Q55, 58E05, 35J47.
Citation: Bartosz Bieganowski, Jarosław Mederski. Nonlinear Schrödinger equations with sum of periodic and vanishing potentials and sign-changing nonlinearities. Communications on Pure & Applied Analysis, 2018, 17 (1) : 143-161. doi: 10.3934/cpaa.2018009
CRITICAL BINOMIAL IDEALS OF NORTHCOTT TYPE
Arithmetic rings and other special rings
Theory of modules and ideals
Rings and algebras with additional structure
P. A. GARCÍA‐SÁNCHEZ, D. LLENA, I. OJEDA
Journal: Journal of the Australian Mathematical Society, First View
Published online by Cambridge University Press: 16 November 2020, pp. 1-23
In this paper, we study a family of binomial ideals defining monomial curves in the $n$-dimensional affine space determined by $n$ hypersurfaces of the form $x_i^{c_i} - x_1^{u_{i1}} \cdots x_n^{u_{in}}$ in $\Bbbk[x_1, \ldots, x_n]$ with $u_{ii} = 0$, $i \in \{1, \ldots, n\}$. We prove that the monomial curves in that family are set-theoretic complete intersections. Moreover, if the monomial curve is irreducible, we compute some invariants, such as the genus, type and Frobenius number of the corresponding numerical semigroup. We also describe a method to produce set-theoretic complete intersection semigroup ideals of arbitrarily large height.
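The semigroup invariants named here have elementary definitions: the gaps of a numerical semigroup are the positive integers it misses, the genus is the number of gaps, and the Frobenius number is the largest gap. A brute-force Python sketch for a toy semigroup (the helper and the example $\langle 3,4,5\rangle$ are ours, not taken from the paper):

from math import gcd
from functools import reduce

def semigroup_invariants(gens, limit=1000):
    # Gaps, genus and Frobenius number of the numerical semigroup generated by gens.
    # Brute force: mark every integer up to `limit` that is a nonnegative integer
    # combination of the generators; since gcd(gens) = 1, all sufficiently large
    # integers are representable, so a generous limit suffices for small examples.
    assert reduce(gcd, gens) == 1, "a numerical semigroup needs gcd = 1"
    reachable = [True] + [False] * limit
    for n in range(1, limit + 1):
        reachable[n] = any(n >= g and reachable[n - g] for g in gens)
    gaps = [n for n in range(1, limit + 1) if not reachable[n]]
    return gaps, len(gaps), max(gaps)

print(semigroup_invariants([3, 4, 5]))  # ([1, 2], 2, 2): genus 2, Frobenius number 2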
Cocoa-rich chocolate and body composition in postmenopausal women: a randomised clinical trial
Irene A. Garcia-Yu, Luis Garcia-Ortiz, Manuel A. Gomez-Marcos, Emiliano Rodriguez-Sanchez, Cristina Lugones-Sanchez, Jose A. Maderuelo-Fernandez, Jose I. Recio-Rodriguez
Journal: British Journal of Nutrition, First View
Published online by Cambridge University Press: 04 August 2020, pp. 1-9
During menopause, women undergo a series of physiological changes that include a redistribution of fat tissue. This study was designed to investigate the effect of adding 10 g of cocoa-rich chocolate daily to the habitual diet of postmenopausal women on body composition. We conducted a 6-month, two-arm randomised, controlled trial. Postmenopausal women (57·2 (sd 3·6) years, n 132) were recruited in primary care clinics. Participants in the control group (CG) did not receive any intervention. Those in the intervention group (IG) received 10 g daily of 99 % cocoa chocolate in addition to their habitual diet for 6 months. This quantity comprises 247 kJ (59 kcal) and 65·4 mg of polyphenols. The primary outcomes were the between-group differences in body composition variables, measured by impedancemetry at the end of the study. The main effect of the intervention showed a favourable reduction in the IG with respect to the CG in body fat mass (–0·63 kg (95 % CI –1·15, –0·11), P = 0·019; Cohen's d = –0·450) and body fat percentage (–0·79 % (95 % CI –1·31, –0·26), P = 0·004; Cohen's d = –0·539). A non-significant decrease was also observed in BMI (–0·20 kg/m2 (95 % CI –0·44, 0·03), P = 0·092; Cohen's d = –0·345). Both the body fat mass and the body fat percentage showed a decrease in the IG for the three body segments analysed (trunk, arms and legs). Daily addition of 10 g of cocoa-rich chocolate to the habitual diet of postmenopausal women reduces their body fat mass and body fat percentage without modifying their weight.
Catalysis using metal–organic framework-derived nanocarbons: Recent trends
Oxana V. Kharissova, Boris I. Kharisov, Igor Efimovich Ulyand, Tomas Hernandez García
Journal: Journal of Materials Research / Volume 35 / Issue 16 / 28 August 2020
Published online by Cambridge University Press: 30 June 2020, pp. 2190-2207
Print publication: 28 August 2020
Recent trends in the area of catalytic applications of metal–organic framework (MOF)-derived nanocarbons are covered. These highly porous nanostructures, convenient for green chemistry processes, are generally formed by the direct carbonization of a variety of MOFs, mainly MOF-5, ZIF-8, ZIF-67, UiO-66-NH2 and MIL-101-NH2, at 700–1000 °C in argon or nitrogen flow. Differences between conventional porous carbons and MOF-derived carbons lie in pore volumes, surface area, and the presence of ad-atoms. The morphology of the MOF-derived nanocarbons can be adjusted, with uniform dopant distribution. The resulting nanocarbons are widely applied in heterogeneous catalysis and photocatalysis, and are very promising as electrocatalysts, having excellent performance in the oxygen evolution reaction, oxygen reduction reaction, and hydrogen evolution reaction. Catalytic applications for environmental purposes are also discussed. Good catalytic performance is related to highly dispersed heteroatoms, the density of catalytically active sites, controllable porosity, and high surface area. Opportunities for further research are indicated, in particular the creation of electrocatalysts stable at low pH and novel strategies for the preparation of 1–3D single-atom catalysts.
Global calculation of neoclassical impurity transport including the variation of electrostatic potential
Focus on Fusion
Keiji Fujita, S. Satake, R. Kanno, M. Nunami, M. Nakata, J. M. García-Regaña, J. L. Velasco, I. Calvo
Journal: Journal of Plasma Physics / Volume 86 / Issue 3 / June 2020
Published online by Cambridge University Press: 25 June 2020, 905860319
Recently, the validity range of the approximations commonly used in neoclassical calculation has been reconsidered. One of the primary motivations behind this trend is the observation of an impurity hole in LHD (Large Helical Device), i.e., the formation of an extremely hollow density profile of an impurity ion species, such as carbon $\text{C}^{6+}$, in the plasma core region where a negative radial electric field ($E_r$) is expected to exist. Recent studies have shown that the variation of electrostatic potential on the flux surface, $\Phi_1$, has a significant impact on neoclassical impurity transport. Nevertheless, the effect of $\Phi_1$ has been studied with radially local codes, and the necessity of global calculation has been suggested. Thus, we have extended a global neoclassical code, FORTEC-3D, to simulate impurity transport in an impurity hole plasma including $\Phi_1$ globally. Independently of the $\Phi_1$ effect, an electron root of the ambipolar condition for the impurity hole plasma has been found by global simulation. Hence, we have considered two different cases, with a positive (global) and a negative (local) solution of the ambipolar condition, respectively. Our result provides further support that $\Phi_1$ has a non-negligible impact on impurity transport. However, for the ion-root case, the radial $\text{C}^{6+}$ flux is driven further inwardly by $\Phi_1$. For the electron-root case, on the other hand, the radial $\text{C}^{6+}$ particle flux is outwardly enhanced by $\Phi_1$. These results indicate that how $\Phi_1$ affects the radial particle transport crucially depends on the profile of the ambipolar $E_r$, which is found to be susceptible to $\Phi_1$ itself and to the global effects.
Using dried orange pulp in the diet of dairy goats: effects on milk yield and composition and blood parameters of dams and growth performance and carcass quality of kids
J. L. Guzmán, A. Perez-Ecija, L. A. Zarazaga, A. I. Martín-García, A. Horcada, M. Delgado-Pertíñez
Journal: animal / Volume 14 / Issue 10 / October 2020
Published online by Cambridge University Press: 05 May 2020, pp. 2212-2220
Although dried orange pulp (DOP) may conveniently replace cereals in ruminant diets, few studies have considered similar diet substitution for goats. We hypothesised that DOP could replace cereal-based concentrate in goat diets without detrimental effects on growth performance and carcass quality of suckling kids and milk performance and blood biochemical parameters of dams in early lactation. We also hypothesised that DOP substitution may increase the levels of antioxidants, such as phenolic compounds and vitamin E, in milk and improve its total antioxidant capacity (TAC). Therefore, 44 primiparous Payoya dairy goats were allocated to three experimental groups, each fed a different diet: control (CD, n = 14) based on a commercial concentrate with alfalfa hay as forage; and DOP40 (n = 16) in which 40% and DOP80 (n = 14) in which 80% of the cereal in the concentrate were replaced by DOP. The experiment lasted from the final month of pregnancy to 55 days postpartum. The DOP diets did not affect suckling kids' carcass quality, but at 28 days, led to improvement in live weight (LW) and average daily gain (ADG) from birth, although no differences were found between DOP40 and DOP80 (for CD, DOP40 and DOP80, LW at 28 days was 8.00, 8.58 and 8.34 kg and ADG was 184, 199 and 195 g/day, respectively). Diet had no significant effect on milk yield (average daily milk yield and total yield at 55 days were 1.66 l/day and 90.6 l, respectively) and commercial and fatty acid composition. Nevertheless, α-tocopherol, total phenolic compound (TPC) and TAC concentration in milk increased with substitution of cereals by DOP (for CD, DOP40 and DOP80, concentration of α-tocopherol was 21.7, 32.8 and 42.3 μg/100 g, TPCs was 63.5, 84.1 and 102 mg gallic acid equivalents/l, and TAC was 6.63, 11.1 and 12.8 μmol Trolox equivalents/ml, respectively). Every plasma biochemistry parameter considered was within reference values for healthy goats; therefore, no pathological effect was detected for these variables due to dietary treatment. However, DOP diets caused a reduction in plasmatic creatine kinase and aspartate aminotransferase, implying reduced oxidative damage to muscles. In conclusion, DOP may be an interesting alternative to cereals in early lactation goat diets for increasing farmers' income and the healthy antioxidant capacity of milk.
PW01-35 - A Prospective Study of Mixed Bipolar Patients: Ten Years of Follow Up
A. Ugarte, J. García, S. Ruiz de Azúa, I. González, M. Sáenz, M. Gutierrez, C. Valcarcel, E. Zuhaitz, I. de la Rosa, R. Alonso, A. González-Pinto
Mixed bipolar patients are those who have co-existing depressive symptoms during mania. These patients are thought to have a worse course of illness.
The objective of this study was to compare the long-term outcomes of patients who had at least one mixed episode with those who experienced only pure manic episodes.
169 outpatients diagnosed with bipolar I disorder and treated for at least two years were included. 120 patients (71%) completed the follow-up over 10 years. Baseline demographic and clinical variables were included.
The patients with mixed episodes (37%) had a significantly younger mean age at onset compared with those with manic episodes (25.3 years vs. 30.8 years; p=0.025); they also had more previous mood-incongruent psychotic symptoms (χ2=6.77, p=0.034), more hospitalizations (OR=1.36, 95% CI=1.14-1.63; p<0.001), and more episodes (OR=1.21, 95% CI=1.10-1.31; p<0.001). There were no significant differences relating to depressive episodes, alcohol use, drug abuse, suicidal behaviour and suicide attempts.
Age at onset differed significantly between the mixed episode and pure mania groups, with mixed episode patients having a younger age of onset. This is interesting, as one of the major results of the study is that age at onset mediates some of the factors classically related to outcome in mixed episodes, such as alcohol abuse and suicide attempts. However, independently of age at onset, these patients represent an especially severe type of bipolar disorder.
P02-271 - Psychiatric Training as a "Rite of Passage": a Field Study in Spain
R. Calero-Fernández, E. Serrano, M. Magariños, J. Picazo, C. Peláez, L. Fernández, M.J. Martín, I. García, R. Carmona, L. Caballero-Martínez
"Rite of passage" is an etnographic concept developed by VanGennep that defines the vital transition of an individual between two different status. It is divided in three stages: separation, liminal/threshold and aggregation. Turner described the liminal phase, and the terms of "communitas" and "liminoid" (structure of a rite without religious/spiritual elements). One widely-known Rite of Passage is the initiation of the shamans.
To study the elements of a rite of passage present in Psychiatric Training.
• Field study (observational, descriptive, non-experimental).
• Preliminary sample = 10 trainees (5 men + 5 women) in their last year of Psychiatric Training.
• "Ad hoc" semi-structured interview (21 items subdivided into open questions). 10 interviews (average duration = 75 mins). Permanent register: digital recorder.
• Summary and analysis of the answers. Review of the literature.
- Psychiatric Training shared the elements and tri-phasic structure of Van Gennep's "rite of passage" concept.
- Trainees saw themselves as more empathic (7/10) and humanistic (8/10) than colleagues in other specialties. Stigma towards mental illness (8/10) and fear of suicide (9/10) were also considered distinctive of them.
- The collective behaved as a communitas (10/10).
- No spiritual elements (0/10): a liminoid process.
- Resemblances to the ancestral shamans' initiation: although bloody practices are over, suffering was still present (7/10), but was seen as necessary (6/10) and well tolerated (7/10).
- Trainees felt that they grew spiritually and mentally (7/10) during the training years.
Results suggest that Psychiatric Training has stable phenomena that:
• are compatible with the rite of passage schema;
• are considered exclusive to Psychiatry by trainees;
• have not been systematically studied as a whole; studying them could help to improve the training.
PW01-14 - Lithium Placental Passage and Perinatal Outcome: Clinical Management During Late Pregnancy
M.L. Imaz, M. Torra, C.C. Santos-Lozano, A. Torres, C. Marqueta, J.M. Hernández, J.M. Pérez, I. Teixido, R. Martín-Santos, L. García-Esteve
Although lithium has been used for the last 50 years as a maintenance treatment for bipolar disorder during pregnancy, there is limited information about perinatal clinical outcomes of fetal exposure to lithium.
1. To quantify the rate of lithium placental passage
2. To assess any association between plasma concentration of lithium at delivery and perinatal outcome.
Observational and prospective study. Subjects: women on maintenance treatment with lithium, attended during pregnancy at the Perinatal Psychiatry Programme of Hospital Clínic (Barcelona, Spain) between 2007 and 2009. Procedure: we assessed sociodemographic data; dose/day of lithium carbonate; doses of other drugs; plasma concentration of lithium in maternal blood intrapartum and in the umbilical cord; obstetrical maternal complications; gestational age at delivery; weight at delivery; Apgar scores; congenital malformations; hospital stays; and infant serum concentrations of thyroid-stimulating hormone.
Eight mother-child dyads. Mean age of the mothers (SD): 32.1 (4.7); 100% Caucasian and married. Mean dose of maternal lithium (SD): 675 mg (237.5 mg). Premature rupture of membranes (%): 25. Mean gestational age (in weeks) (SD): 39.9 (1). Birth weight (SD): 3625 g (451.2 g). Mean Apgar at 1 min (SD): 8.38 (1.1); mean Apgar at 5 min (SD): 9.75 (0.4). Loss of fetal intrapartum wellness (%): 12.5. Days of hospitalization, mean (SD): 9.5 (16.6). Lithium plasma concentration (mEq/L), mean (SD): maternal 0.45 (0.1), umbilical cord 0.33 (0.1), umbilical cord/maternal lithium ratio 0.93 (0.3); infant TSH (μU/mL), mean (SD): 4.9 (4.6).
Lithium placental passage was 0.93 (0.63-1.07). At umbilical cord lithium levels ≤ 0.60 mEq/L, we did not observe any preterm deliveries, low-birth-weight newborns, or neonatal complications.
P02-352 - Outpatient Group Psychotherapy: Impact on Number and Mean Duration of Re-Entries in Acute Inpatient Unit
E. Marin Diaz-Guardamino, E. Bravo Barba, L. Larrañaga Rementeria, I. Hervella Garces, J. Garcia Ormaza, R. Segarra Echevarria, I. Eguiluz Uruchurtu
Outpatient group psychotherapy began in our department in 2002 as a complement to the acute inpatient unit. Patients with heterogeneous diagnoses were included before or after a short-duration stay in the unit. Clinicians' impression was that re-entries to the acute unit were less frequent and shorter after group therapy.
The objective is to determine the real impact of group psychotherapy on the number and mean duration of these re-entries.
Data were collected for 156 patients over a period of two years. The number and mean duration of hospitalizations in the psychiatric acute unit were registered for each patient during the year before and the year after therapy, to be analyzed and compared.
Before attending group psychotherapy, 60.3% of the patients had been hospitalized in the acute unit (39.1% once, 12.8% twice, 6.4% three times, 1.9% four times). 65.4% had no re-entries in the following year; 71% of those who did had one re-entry. The mean number of entries per year in the acute unit before therapy was 0.92, while the mean after therapy was 0.52. The mean stay was 7.86 days before therapy and 4.62 days after. The before-after differences in means were statistically significant.
Group psychotherapy seems to have effects on number and duration of re-entries to the acute unit for most patients in the different diagnostic categories. These findings have important implications, as this form of therapy is cost-effective and available for a wide range of psychiatric patients.
P03-37 - Testing A New Technique for the Rehabilitation of Schizophrenia and Other Psychoses Based on Viewing Fiction Films
L. Caballero Martínez, M. Magariños, C. Pelaez, I. García del Castillo, R. Calero, I. Mateo, R. Diaz, E. Baca, C. Torre-Marin, M. Luz Chimeno de la Vega
Fiction films offer unexplored opportunities for the rehabilitation of schizophrenia and other psychoses. Schizophrenia produces deficits and distortions in the perception and comprehension of reality, which are also expressed in the perception and comprehension of films. After a year of an "ad hoc" experience, the following technique was developed:
1) Selecting a fiction film for its narrative, affective, cognitive and social cognitive content
2) Briefly presenting the film to a group of 8-16 patients with diverse psychoses.
3) Screening of the film to the patients and the therapeutic team.
4) Summarizing of the plot by a patient. Group correction of distortions and deficits caused by problems of attention and working memory, as well as positive, negative, affective and social cognitive symptoms (emotional perception, theory of mind, attributive style).
5) Selecting 1-2 sequences by each patient, and group commenting using the same technique.
6) Field recording of all the commentaries obtained.
7) Second screening of the film two days later, repeating points 2 to 6.
8) Comparing both field records.
An experimental study using this technique is presented. 8 patients with schizophrenia and other psychoses watched 4 fiction films ("The 39 Steps", "Charade", "M", "The General"). The differences found between the two viewings by two external evaluators (using CGI and analogical scales of the main variables) are presented and discussed. An evaluation of the perceived usefulness and satisfaction of the participants was included.
P03-56 - Physical Illness in Institutionalized Schizophrenic Patients
R. Gallardo, E. González Pablos, F.J. García Sánchez, C. Salgado Pascual, G. Hoyos Villagrá, I. Herreros Gilarte
To find out the frequency of medical conditions presented by a population of institutionalized chronic schizophrenic patients.
The target population is a total of 220 schizophrenic patients, 48 men and 172 women, diagnosed following the ICD-10 criteria; 76.8% of the patients had been institutionalized for at least 5 years. The average age was 64.64 years.
A specific survey applied by the group of investigators aimed to collect socio-demographic data and medical conditions, using the following psychometric scales: Cumulative Index of Illnesses (CII), Global Assessment Scale (GAS), and Clinical Global Impression (CGI).
Statistical analysis was performed with SPSS v 15.0, including descriptive statistics and correlation analysis.
Diabetes was found in 15% of cases, obesity in 31.7%, overweight in 39%, high blood pressure in 24.5%, high cholesterol serum levels in 21%, and high triglyceride serum levels in 8.7%. 26% of the patients were smokers.
The average number of categories on the CII scale was 4.84 and the average total score was 11.96.
Our patients are predominantly of advanced age, female, and long-term inpatients. The presence of comorbid physical illness is high. The relatively low number of smokers could be explained by the demographic characteristics of our sample.
P03-292 - Suicidal Behavior In Psychotic Patients
R. Gómez Martínez, M.D. Ortega García, C. Martinez Martínez, A. Rodríguez Campos, M.A. Alonso de la Torre, L. Gómez Martínez, A. Agúndez Modino, I. Alvarez Silva
Suicide is one of the most frequent causes of death. In 1993, Bleuler emphasized its importance: "Suicidal behavior is the most serious symptom of schizophrenia". Since then, various studies have confirmed the importance of suicide in schizophrenia, and today it is clear that its research and understanding is one of the great challenges of psychiatry.
- Establish the clinical and socio-demographic profile and risk factors of psychotic people with autolytic behaviors.
- Determine the frequency of suicides in psychotic disorders in our area of care.
Retrospective study (3 years of evolution) including psychotic patients (diagnosed according to DSM-IV-TR) admitted to the HCU of Valladolid. With data provided by hospital medical records, we analyzed socio-demographic and clinical variables. The study consists of two groups: a group of cases (those patients who showed suicidal behavior) and a control group (those without an autolytic gesture during the study period). Statistical evaluation was performed with SPSS.
- The sample includes 191 patients: 41 (21%) had attempted suicide.
- Of them: 73% were male; 88% single; 51% had basic studies; 61% were unemployed; 37% were aged 31-40; 54% had disease onset at ages 21-30; and 63.5% were schizophrenic.
- The statistical study shows that the suicidal patient profile is male (p = 0.039), diagnosed with schizophrenia (p = 0.033), with previous suicide attempts (p = 0.009) and a lack of social support (p = 0.007).
- 21% of hospitalized psychotic patients presented some autolytic attempt.
- The profile of the suicidal psychotic patient is male, single, aged 21-40, with primary education, unemployed, with a primary diagnosis of schizophrenia (particularly paranoid) of ten years' evolution, without acceptable social support, with a higher number of admissions than non-suicidal psychotic patients, and with a personal history of previous autolytic attempts.
P03-57 - Negative Symptoms and Functioning in First Episode Psychosis and Chronic Schizophrenia
J. García, R. Segarra, P. Sánchez, N. Ojeda, J. Peña, I. Eguiluz, M. Gutiérrez
Course and outcome in schizophrenia are heterogeneous. Numerous studies have shown an association between the presence of negative symptoms and psychosocial and occupational functioning of patients.
To analyse the prevalence of negative symptoms in the course of illness in first episode psychosis and chronic schizophrenia and to establish its relation with the functional outcome.
43 patients with a first-episode psychosis (FEP) from our area were compared with 43 chronic schizophrenic patients and 43 normal controls from a parallel area. They were matched one-to-one for age, gender and years of education. All subjects were compared in terms of psychopathology and functional outcome. Patients were examined with the Positive and Negative Syndrome Scale (PANSS) for clinical symptoms. Longitudinal functionality was prospectively assessed with the Clinical Global Impression (CGI) and Global Assessment of Functioning (GAF) rating scales.
We found significant differences between FEP and chronic patients in negative symptom severity (t = -4.97, p< 0.001) and global assessment of functioning (t = 7.58, p< 0.001). There was no statistically significant difference between the two groups in PANSS positive and general components or Clinical Global Impression. Negative symptom severity was associated with poorer GAF ratings in first episode psychosis and chronic schizophrenia.
Negative symptoms appear to be persistent. In our study, negative symptom severity was associated with social and functional impairment, defined as a Global Assessment of Functioning score of less than or equal to 60.
Randomised clinical trial comparing oral versus depot formulations of zuclopenthixol in patients with schizophrenia and previous violence
C. Arango, I. Bombín, T. González-Salvador, I. García-Cabeza, J. Bobes
Journal: European Psychiatry / Volume 21 / Issue 1 / January 2006
Published online by Cambridge University Press: 16 April 2020, pp. 34-40
The aim of this longitudinal study was to determine whether the depot formulation of an antipsychotic reduces violence in outpatients with schizophrenia as compared to oral administration of the same antipsychotic.
Forty-six previously violent patients with schizophrenia were randomised to receive treatment with oral or depot zuclopenthixol for 1 year. Clinicians interviewed patients at baseline and every month thereafter to assess treatment adherence. An interviewer blinded to treatment assignments interviewed an informant about any violent behaviour during the previous month.
Violence during the follow-up year was inversely related to treatment adherence, compliance, and reduction of positive symptoms. A lower frequency of violent acts was observed in the depot group. The level of insight at baseline was not significantly associated with violence recidivism. Regardless of the route of administration, treatment non-adherence was the best predictor of violence.
Some patients with schizophrenia and prior violent behaviour may benefit from the depot formulation of antipsychotic medication.
The prevalence of the metabolic syndrome in bipolar patients
M.P. Garcia-Portilla, P.A. Saiz, I. Menendez, P. Sierra, L. Livianos, A. Benabarre, J. Perez, A. Rodriguez, J. Valle, F. Sarramea, R. Fernandez-Villamor, J.M. Montes, M.J. Muñiz
Published online by Cambridge University Press: 16 April 2020, p. S252
Background and aims:
Two studies to date have been published regarding the prevalence of the metabolic syndrome in bipolar patients. The unadjusted prevalence rates reported were 30% and 32%. The aim of this study was to evaluate the prevalence of the metabolic syndrome in a group of 142 bipolar patients from Spain.
Bipolar patients (ICD-10 criteria) from 11 centres in Spain were assessed cross-sectionally for metabolic syndrome according to the NCEP ATP III criteria.
The mean age was 47.3 (SD 14.5), and 51.1% were male. On average, patients were receiving 2.8 (SD 1.3) drugs for the treatment of their bipolar disorder. Ninety-one percent were receiving mood stabilizers, 63.4% antipsychotics and 29.6% antidepressants. Eighty-seven percent of the antipsychotics prescribed were atypicals. The overall prevalence of the metabolic syndrome in our sample was 24.6%. Fifty-seven percent of the sample met the criterion for abdominal obesity, 37.4% the criterion for hypertriglyceridemia, 36.4% for low HDL-cholesterol, 25.2% for high blood pressure and 12.5% for high fasting glucose. No statistically significant difference was found between patients with and without the metabolic syndrome for gender, illness status (acute versus in remission), CGI-S-BP scores or number of medications used. Patients taking two mood stabilizers had significantly higher metabolic syndrome rates than patients taking one mood stabilizer and patients without mood stabilizer treatment (40% versus 17.8% and 11.1%, respectively; p = .02).
The prevalence of the metabolic syndrome in bipolar patients is high. It appears to be higher than that estimated for the Spanish general population.
Antipsychotic treatment and the need for hospitalization: advantages of long-acting neuroleptics
M. Perez Garcia, M. Paramo Fernandez, V. Prado Robles, J. Alonso San Gregorio, J. Perez Perez, I. Tortajada Bonasselt
At present, the need for antipsychotic treatment to improve the condition of people with psychotic disorders is unquestionable. Despite the current availability of highly effective drugs with few secondary effects, the main cause behind hospitalization is still the lack of compliance.
Analysis of the determining variables behind the need for hospitalization and the influence of the types of antipsychotic treatments.
Retrospective and follow-up analysis of psychotic patients hospitalized in the Psychiatric Ward of the Hospital de Conxo (1998-2005). Three groups of patients: Oral neuroleptics (170), Depot typical neuroleptics (238), and Long-Acting Risperidone (60); comparison based on treatment maintenance.
Males, day-to-day living with the family of origin, and single status are predominant in all three groups, although in a higher proportion in the Long-Acting Risperidone one (75, 71 and 85% respectively). Only 7% of the patients on Long-Acting Risperidone completed university studies; 62% were pensioners. The average duration of hospitalization periods is 21 days for patients on Long-Acting Risperidone, 23.3 days in the Oral group, and 29.5 days in the Depot group. The main cause behind re-hospitalization is the lack of compliance (68% in the Depot group), whilst after the introduction of Long-Acting Risperidone the non-compliance rate is 59%. If we compare the number of hospitalizations/year of the patients on Long-Acting Risperidone before and after its introduction, the rate is reduced significantly from 0.89 to 0.73.
Despite the fact that patients treated with Long-Acting Risperidone show a more seriously ill condition and less social capacity, they have less need for hospitalization than patients treated with Depot neuroleptics. Their median lengths of stay were shorter than those of patients in the other two groups, and they are re-hospitalized less often after the introduction of this treatment.
A Comparative Study of PSE-10/CATEGO-5 and OPCRIT-Checklist/Quick-Basic: Two Psychiatric Computer-Assisted Diagnostic Systems
A. Lazartigues, U. Barua, I. Garcia-Orad, Ph. Genest, J. Fermanian
Published online by Cambridge University Press: 16 April 2020, p. 168s
Are acute involuntary hospitalizations related to anxiety disorders?
I. De Vitton, H. Delavenne, F.D. Garcia
Published online by Cambridge University Press: 16 April 2020, p. 154
Acute involuntary hospitalization is perceived as a threatening event by most patients. Acute involuntary hospitalization of psychiatric patients is probably a major source of anxiety and may be related to anxiety disorders. Further knowledge of the anxiety disorders secondary to involuntary hospitalization may limit the traumatic experience and is, indeed, of considerable importance.
A bibliographic review of the existing literature was conducted using MEDLINE/PubMed (1969-2010). The following key words were used: "involuntary hospitalization"; "coercion"; "patient admission"; "stress disorder"; "anxiety"; and "posttraumatic stress disorder". All papers in English and French were considered in this review.
Although the impact of diverse stressful or traumatic events has been the subject of numerous publications, we found only one study in the literature dealing with PTSD symptoms related to acute involuntary hospitalization. In that article, involuntarily admitted patients were not more traumatized than voluntarily admitted ones. Coercive measures can be traumatizing, but forced medication, seclusion, or the application of any coercive measure was not significantly associated with trauma. Reviews on involuntary hospital admission demonstrated negative and positive consequences on various outcome domains. Findings highlight the predominantly negative impact of physical restraint on the person restrained and their family. These findings support minimal use of restraint in health care for a relatively vulnerable group of people. But coercion can also lead to positive outcomes.
Specific studies concerning the impact of involuntary hospitalization, coercive measures and forced treatment in causing anxiety disorders are still needed. Discussion about their methodology and ethical aspects remains necessary.
A new apathy scale for institutionalized dementia patients: validation of the APADEM-NH scale short-version
M.I. Ramos-García, L. Agüera-Ortiz, N. Gil-Ruiz, I. Cruz-Orduña, M. Valentí-Soler, R. Osorio-Suárez, B. Reneses-Prieto, P. Martinez-Martín, UIPA-CAFRS Study Group
To describe the validation process of the new apathy scale for institutionalized dementia patients (APADEM-NH).
100 elderly, institutionalized patients with a diagnosis of probable Alzheimer's disease (AD) (57%), possible AD (13%), AD with cerebrovascular disease (CVD) (17%), Lewy body dementia (11%) and Parkinson's disease associated with dementia (PDD) (2%). All stages of disease severity according to the Global Deterioration Scale (GDS) and Clinical Dementia Rating (CDR) were assessed. The Apathy Inventory (AI), Neuropsychiatric Inventory (NPI), Cornell Scale for Depression, and the tested scale were applied. Retest and inter-rater reliability were assessed in 50 patients. Feasibility and acceptability, reliability, validity, and measurement precision were analyzed.
The APADEM-NH final version consists of 26 items and 3 dimensions: Deficit of Thinking and Self-Generated Behaviors (DT), 13 items; Emotional Blunting (EB), 7 items; and Cognitive Inertia (CI), 6 items. Mean application time was 9.56 minutes, and 74% of applications were fully computable. All subscales showed floor and ceiling effects lower than 15%. Internal consistency was excellent for each dimension (Cronbach's α: DT = 0.88, EB = 0.83, CI = 0.88); test-retest reliability for the items was kW = 0.48-0.92; inter-rater reliability reached kW values of 0.84-1.00. The APADEM-NH total score showed a low/moderate correlation with apathy scales (Spearman ρ: AI = 0.33; NPI-Apathy = 0.31), no correlation with depression scales (NPI-Dementia = -0.003; Cornell = 0.10), and high internal validity (ρ = 0.69-0.80).
APADEM-NH is a brief, psychometrically acceptable, and valid scale to assess apathy in patients with mild to severe dementia, discerning between apathy and depression.
Functional Outcome in First-Episode Schizophrenia Receiving Assured Antipsychotic Medication: 52-week Prospective Study with Risperidone Long-Acting Injection
J. Garcia, R. Segarra, N. Ojeda, J. Peña, I. Eguiluz
It is well established that therapeutic compliance is a fundamental predictive factor in the outcome of first-episode psychosis. Risperidone long-acting injection has demonstrated high remission rates and improvements in treatment adherence.
1- To analyse the efficacy of risperidone long-acting injection vs oral atypical antipsychotics in first-episode psychosis.
2- To describe in both groups the evolution of clinical and cognitive symptoms, functional outcome, quality of life, insight and treatment adherence.
18 patients with a first-episode psychosis treated with long-acting risperidone were compared with 21 first-episode psychosis patients treated with oral atypical antipsychotic medication. They were matched one-to-one for age, gender and years of education. All subjects were compared in terms of psychopathology and functional outcome. Patients were examined with the Positive and Negative Syndrome Scale (PANSS) for clinical symptoms. Longitudinal functionality was prospectively assessed with the Clinical Global Impression (CGI) and Global Assessment of Functioning (GAF) rating scales.
We found significant differences between both groups in negative symptom severity and global assessment of functioning. There was no statistically significant difference between the two groups in PANSS positive and general components. Negative symptom severity was associated with poorer GAF ratings.
Our data suggest that risperidone long-acting injection assures treatment compliance and therefore could improve clinical and functional outcome. | CommonCrawl |
Why is $e$ close to $H_8$, closer to $H_8\left(1+\frac{1}{80^2}\right)$ and even closer to $\gamma+\log\left(\frac{17}{2}\right) +\frac{1}{10^3}$?
The eighth harmonic number happens to be close to $e$.
$$e\approx2.71(8)$$
$$H_8=\sum_{k=1}^8 \frac{1}{k}=\frac{761}{280}\approx2.71(7)$$
This leads to the almost-integer
$$\frac{e}{H_8}\approx1.0001562$$
Some improvement may be obtained as follows.
$$e=H_8\left(1+\frac{1}{a}\right)$$
$$a\approx6399.69\approx80^2$$
$$e\approx H_8\left(1+\frac{1}{80^2}\right)\approx 2.7182818(0)$$
(see http://mathworld.wolfram.com/eApproximations.html)
Equivalently $$ \frac{e}{H_8\left(1+\frac{1}{80^2}\right)} \approx 1.00000000751$$
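For readers who want to reproduce these numbers, here is a quick check (a minimal sketch in Python with mpmath; not part of the original post):

```python
from mpmath import mp, mpf, e, euler, log

mp.dps = 30  # work with 30 significant digits

# H_8 = 761/280, the eighth harmonic number
H8 = sum(mpf(1) / k for k in range(1, 9))

print(e / H8)                                           # ~1.0001562
print(e / (H8 * (1 + mpf(1) / 80**2)))                  # ~1.00000000751
print(e - (euler + log(mpf(17) / 2) + mpf(1) / 10**3))  # ~6.12e-11
```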
Q: How can this approximation be obtained from a series?
EDIT: After applying the approximation $$H_n\approx \log(2n+1)$$ (https://math.stackexchange.com/a/1602945/134791) to $$e \approx H_8$$
the following is obtained: $$ e - \gamma-\log\left(\frac{17}{2}\right) \approx 0.0010000000612416$$ $$ e \approx \gamma+\log\left(\frac{17}{2}\right) +\frac{1}{10^3} +6.12416\cdot 10^{-11}$$
Jaume Oliver Lafont
It's very likely just a numerical coincidence and not likely to be related to any known series. By playing around with numbers like $H_n$ (or $\sqrt{2}$, $\pi$, etc.) you are bound to eventually stumble upon identities like this. There is rarely any deep reason behind it.
– Winther
Why is $2\pi+e$ so close to $9$?
– Lucian
@Lucian "Because" we can get the close approximations $e\approx\frac{19}{7}$ and $\pi\approx \frac{22}{7}$ from their continued fractions, and $2\pi+e\approx 2\frac{22}{7}+\frac{19}{7}=\frac{44+19}{7}=\frac{63}{7}=9$ (although the integer approximations $e\approx\pi\approx 3$ would suffice). There is also a notable integral for $\pi-\frac{22}{7}$. Is there a similar one for $e-\frac{19}{7}$? That would allow writing $2\pi+e-9$ in closed form. If that is not an answer to "why", at least it would answer the question "how close is $2\pi+e$ to $9$?".
– Jaume Oliver Lafont
@JaumeOliverLafont: The fact that e is so close to $\dfrac{19}7$ is also an odd coincidence. Euler, its discoverer, passed away on September $18$, that month being, as its name hints, the seventh of the Roman calendar.
There are of course exceptions where such results have simple explanations. My favourite in that regard is $\frac{22}{7}-\pi > 0$, which follows from the nice identity $\frac{22}{7}-\pi = \int_0^1\frac{x^4(1-x)^4}{1+x^2}dx$. However such nice and simple results are not to be expected for every single almost identity, and this one seems pretty random. If a simple argument exists explaining it I'd love to see it, but I just don't think it's likely to exist.
Quesly Daniel obtains $$e\approx \frac{19}{7}$$ from $$\int_0^1 x^2(1-x)^2e^{-x}dx = 14-38e^{-1} \approx 0$$ (see https://www.researchgate.net/publication/269707353_Pancake_Functions)
Similarly, $$\int_0^1 x^2(1-x)^2e^{x}dx = 14e-38 \approx 0$$
The approximation may be refined using the expansion $$e^x=\sum_{k=0}^\infty \frac{x^k}{k!} = 1+x+\frac{x^2}{2}+\frac{x^3}{6}+...$$ so $$\frac{1}{14} \int_0^1 x^2(1-x)^2(e^x-1)dx =e-\frac{163}{60}\approx 0$$ gives the truncation of the series to six terms $$e\approx\frac{163}{60}=\sum_{k=0}^{5}\frac{1}{k!}$$ using the largest Heegner number $163$, and
$$\frac{1}{14} \int_0^1 x^2(1-x)^2(e^x-1-x)dx = e-\frac{761}{280}=e-H_8\approx 0$$
gives $$e\approx H_8$$
Similar integrals relate $e$ to its first four convergents $2$,$3$,$\frac{8}{3}$ and $\frac{11}{4}$.
$$\int_0^1 (1-x)e^x dx = e-2$$ $$\int_0^1 x(1-x)e^x dx = 3-e$$ $$\frac{1}{3}\int_0^1 x^2(1-x)e^x dx=e-\frac{8}{3}$$ $$\frac{1}{4}\int_0^1 x(1-x)^2e^x dx=\frac{11}{4}-e$$
These four formulas are particular cases of Lemma 1 by Henry Cohn in A Short Proof of the Simple Continued Fraction Expansion of e.
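These identities can be verified symbolically, e.g. with SymPy (an editor's sketch, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')

# Pancake-style integrals relating e to its rational approximations
I1 = sp.integrate(x**2 * (1 - x)**2 * sp.exp(x), (x, 0, 1))
print(sp.expand(I1))        # 14*E - 38

I2 = sp.integrate(x**2 * (1 - x)**2 * (sp.exp(x) - 1), (x, 0, 1))
print(sp.expand(I2 / 14))   # E - 163/60

I3 = sp.integrate(x**2 * (1 - x)**2 * (sp.exp(x) - 1 - x), (x, 0, 1))
print(sp.expand(I3 / 14))   # E - 761/280, i.e. e - H_8
```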
Pull Down Resistors
In my quest to understand electrical engineering, I have stumbled across this tutorial:
http://www.ladyada.net/learn/arduino/lesson5.html
I have understood the diagrams until I got to switches. I am not sure how switches work on the breadboard or the diagrams. This is the specific one I am thinking of (this is of a pull down resistor):
The implementation is:
Based on the diagram, what I think is happening is: power goes to the switch; if the button is up then the circuit is not completed. If the button is pressed then the current takes the path of least resistance to pin 2 because it has more pull (100 Ω < 10 kΩ).
The way it is described in the tutorial sounds like when the button is up, the circuit is still complete, but the 10 kΩ resistor pulls the power to ground. I am not sure how or why, if both the 10 kΩ and 100 Ω resistors are receiving equal current, the current would get pulled to ground through a higher resistance than the path open to pin 2.
Storm Kiernan
An aside: try to think of a circuit in terms of what the voltage will be at each point, rather than where the current flows. This helped my understanding when I was first learning EE. – geometrikal Aug 12 '12 at 21:32
I'm kind of disappointed in the quality of answers on this question. I'd suggest watching this video by AddOhms instead. I don't understand this concept enough to explain it, but none of the answers here at the time of writing are even talking about what causes the floating state, or how either pull-up or pull-down resolves the problem. – Evan Carroll Oct 16 '15 at 20:55
@EvanCarroll On the other hand, the question at the time of writing doesn't ask about those things that you're interested in. – Dmitry Grigoryev Aug 8 '18 at 12:11
Firstly, forget the 100 Ω resistor for now. It's not required for the working of the button; it's just there as protection in case you make a programming error.
If the button is pressed P2 will be directly connected to +5 V, so that will be seen as a high level, being "1".
If the button is released the +5 V doesn't count anymore, there's just the 10 kΩ between the port and ground.
A microcontroller's I/O pin is high impedance when used as an input, meaning only a small leakage current flows, usually much less than 1 µA, which is the maximum according to the datasheet. OK, let's say it's 1 µA. Then according to Ohm's Law this will cause a voltage drop of 1 µA \$\times\$ 10 kΩ = 10 mV across the resistor. So the input will be at 0.01 V. That's a low level, or a "0". A typical 5 V microcontroller will see any level lower than 1.5 V as low.
Now the 100 Ω resistor. If you accidentally made the pin an output and set it low, then pressing the button would cause a short circuit: the microcontroller sets 0 V on the pin, and the switch puts +5 V on the same pin. The microcontroller doesn't like that, and the IC may be damaged. In that case the 100 Ω resistor should limit the current to 50 mA. (Which is still a bit too much; a 1 kΩ resistor would be better.)
Since there won't flow current into an input pin (apart from the low leakage) there will hardly be any voltage drop across the resistor.
The 10 kΩ is a typical value for a pull-up or pull-down. A lower value will give you an even lower voltage drop, but 10 mV or 1 mV doesn't make much difference. There's something else, though: if the button is pressed there's 5 V across the resistor, so a current of 5 V / 10 kΩ = 500 µA will flow. That's low enough not to cause any problems, and you won't be keeping the button pressed for a long time anyway. But you may replace the button with a switch, which may be closed for a long time. Then if you had chosen a 1 kΩ pull-down you would have 5 mA through the resistor as long as the switch is closed, and that's a bit of a waste. 10 kΩ is a good value.
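These trade-offs are easy to tabulate (a minimal sketch assuming the 5 V supply and 1 µA worst-case leakage figures above; not from the original answer):

```python
V_SUPPLY = 5.0   # supply voltage, volts
I_LEAK = 1e-6    # worst-case input leakage, amps

for r_pulldown in (1e3, 10e3, 100e3):
    v_drop = I_LEAK * r_pulldown      # idle input voltage, button released
    i_waste = V_SUPPLY / r_pulldown   # current wasted while button is held
    print(f"{r_pulldown / 1e3:>5.0f} kΩ: idle input = {v_drop * 1e3:.1f} mV, "
          f"pressed current = {i_waste * 1e3:.2f} mA")
```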
Note that you can turn this upside down to get a pull-up resistor, and switch to ground when the button is pressed.
This will invert your logic: pressing the button will give you a "0" instead of a "1", but the working is the same: pressing the button will make the input 0 V, if you release the button the resistor will connect the input to the +5 V level (with a negligible voltage drop).
This is the way it's usually done, and microcontroller manufacturers take this into account: most microcontrollers have internal pull-up resistors, which you can activate or deactivate in software. If you use the internal pull-up you only need to connect the button to ground, that's all. (Some microcontrollers also have configurable pull-downs, but these are much less common.)
stevenvh
I don't think it's clear from this answer how the pull-down method solves the problem with the floating state. – Evan Carroll Oct 16 '15 at 20:49
Note that the switch is not a fancy device that takes power and creates some output signal -- instead, think of it as a wire that you're just adding or removing from the circuit by pushing the button.
If the switch is disconnected (not pressed), the only possible path for current is from P2 through both resistors to ground. Thus, the microcontroller will read a LOW.
If the switch is connected (pressed):
Current travels from the power supply through the switch
Some current travels through the 100 ohm resistor to P2. The microcontroller will read HIGH.
A small amount of current will flow through the 10 Kohm resistor to ground. This is basically wasted power.
Note that the 100 ohm resistor is just there to limit the maximum current going into P2. It's normally not included on a circuit like this, because the microcontroller's P2 input is already high-impedance and will not sink much current. However, including the 100 ohm resistor is useful in case your software has a bug or a logic error that causes it to try to use P2 as an output instead. In that case, if the microcontroller is trying to drive P2 low but the switch is shorted and connecting it to high, you'd possibly damage the microcontroller pin. To be safe, the 100 ohm resistor would limit the maximum current in that case.
Jim Paris
When you press the button you place a logic high level (+5 V) on the input. But if you omit the resistor and the button is released, then the input pin would just be floating, which in HCMOS means that the level is undefined. That's something you don't want, so you pull the input down to ground with the resistor. The resistor is required because otherwise pushing the button would cause a short-circuit.
The input is high impedance, meaning that there will hardly flow any current through it. Zero current through the resistor means zero voltage across it (Ohm's Law), so the 0 V on one side will also be 0 V (or very near) on the input pin.
This is one way to connect a button, but you can also swap resistor and button, so that the resistor goes to +5 V and the button to ground. The logic is then inversed: pushing the button will give a low level on the input pin. This is often done, though, because most microcontrollers have pull-up resistors built-in, so that you only need the button, the external resistor can then be omitted. Note that you may have to enable the internal pull-up.
See also this answer.
The 10 kΩ resistor is called a pull-down resistor because, when the "green" node (joining the 100 Ω and 10 kΩ resistors) is not connected to +5 V by the switch, that node is pulled to ground (assuming low current through that branch, obviously). When the switch is closed, that node gains a potential of +5 V.
This is used to control the inputs of logical ICs (AND gates, OR gates, etc), since these circuits will behave erratically if there is no determinate value on their inputs (a 0 or a 1 value). If you leave the input of a logical gate floating, the output cannot be reliably determined, thus it is advisable to always apply a determined input (a 0 or a 1, again) to the gate's input. In this case, P2 would be an input to a specific logical gate, and when the switch is open, it has an input value of 0 (GND); when the switch is closed, it has an input value of 1 (+5V).
Shamtam
current takes the path of least resistance
I'm not sure where this common misconception comes from, but it is indeed wrong, as it directly contradicts Ohm's law. Current takes all possible paths, in inverse proportion to their resistance. If you apply 5 V to a 10 kΩ resistor, 0.5 mA will flow through it, regardless of how many alternative paths (low-resistance or otherwise) you provide.
Incidentally, that path through the 100 Ω resistor is not necessarily "least resistance", since the resistor is not connected to ground. Typically, you would connect that resistor to an MCU input with >10 MΩ impedance, effectively making the 10 kΩ resistor the path of least resistance.
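A minimal sketch of that point (illustrative values only, not part of the original answer): each branch current follows from Ohm's law independently of the other branches.

```python
V = 5.0  # volts applied at the switch node

# Each path from the node to ground carries V/R, regardless of the other paths.
paths = {
    "10 kΩ pull-down": 10e3,
    "100 Ω + MCU input (~10 MΩ)": 100 + 10e6,
}
for name, r in paths.items():
    print(f"{name}: {V / r * 1e3:.4f} mA")
```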
Dmitry Grigoryev
The reason the pull-down resistor is required is that the microcontroller is a CMOS device and thus the input pin is ultimately the gate of a MOSFET.
If your pushbutton controlled a bulb or an LED or a relay you would not need a pull-down resistor because an open circuit would be "off". When the button was released the bulb would turn off because no current would flow.
If your device was a true TTL part like the original 7400 series logic chips you would not need the pulldown resistor because those inputs would be bipolar transistors and when the button was released no current would flow through the base-emitter junction and the input would be "off".
In contrast, the input of your microcontroller is a MOSFET gate which acts like a capacitor. When the gate voltage is high enough the input is "on". That happens when you push the button and current flows through the 100R resistor into the microcontroller. The gate charges up (very quickly) like a capacitor and the input becomes "on". Now what happens when you release the button? No more current flows. But what does that mean to the input? If there's no pull-down resistor the charge on the gate has nowhere to go. The voltage will just sit there near 5V and the input will still be "on". The pull-down resistor drains the gate charge so its voltage falls below the "on" level. That's what you want to ensure the digital input is considered "off".
You can experiment with this by hooking two buttons up to your input pin. Tie one to 5V and one to ground. When you push the 5V button the input will turn on. When you release it it will stay on until you push the one that is connected to GND.
Ben Jackson
In TTL it's indeed the base-emitter junction which will not conduct, but not in the way you might think: the input is the emitter of the input NPN transistor, and the transistor conducts if the input is made low. Floating is the same as high. – stevenvh Aug 13 '12 at 18:02
Probing Collective Effects in Hadronisation with the Extremes of the Underlying Event
COEPP-MN-16-6
by: Martin, Tim (Warwick U.) et al.
We define a new set of observables to probe the structure of the underlying event in hadron collisions. We use the conventional definition of the "transverse region" in jet events and, for a fixed window in jet $p_\perp$, propose to measure several discriminating quantities as a function of the level of activity in the transverse region. The measurement of these observables in LHC data would reveal whether, e.g., the properties of "low-UE" events are compatible with equivalent measurements in $e^+e^-$ collisions (jet universality), and whether the scaling behaviour towards "high-UE" events exhibits properties of non-trivial soft-QCD dynamics, such as colour reconnections or other collective phenomena. We illustrate at $\sqrt{s} = 13\,\mathrm{TeV}$ that significant discriminatory power is obtained in comparisons between MC models with varying treatments of collective effects, including Pythia 8, EPOS, and DIPSY.
Designs from maximal subgroups and conjugacy classes of Ree groups
Jamshid Moori 1, Bernardo G. Rodrigues 2, Amin Saeidi 1 and Seiran Zandi 2
School of Mathematical Sciences, North-West University, (Mafikeng) 2754, South Africa
School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4000, South Africa
* Corresponding author: Seiran Zandi
Received August 2018 Revised August 2019 Published November 2019
Fund Project: The first author acknowledges support of NRF and NWU (Mafikeng).
The second author acknowledges support of NRF through Grant Numbers 95725 and 106071.
The third author acknowledges support of NWU (Mafikeng) postdoctoral fellowship.
The fourth author acknowledges support of NRF postdoctoral fellowship through Grant Number 91495.
In this paper, using a method of construction of $1$-designs which are not necessarily symmetric, introduced by Key and Moori in [5], we determine a number of $1$-designs with interesting parameters from the maximal subgroups and the conjugacy classes of the small Ree groups $^2G_2(q)$. The designs we obtain are invariant under the action of the groups $^2G_2(q)$.
Keywords: Ree groups, design, conjugacy class, maximal subgroup, group action.
Mathematics Subject Classification: 20D05, 05E15, 05E20.
Citation: Jamshid Moori, Bernardo G. Rodrigues, Amin Saeidi, Seiran Zandi. Designs from maximal subgroups and conjugacy classes of Ree groups. Advances in Mathematics of Communications, doi: 10.3934/amc.2020033
[1] E. F. Assmus Jr. and J. D. Key, Designs and Their Codes, Cambridge Tracts in Mathematics, 103, Cambridge University Press, Cambridge, 1992. doi: 10.1017/CBO9781316529836.
[2] J. H. Conway, R. T. Curtis, S. P. Norton, R. A. Parker and R. A. Wilson, Atlas of Finite Groups: Maximal Subgroups and Ordinary Characters for Simple Groups, Oxford University Press, Eynsham, 1985.
[3] I. M. Isaacs, Character Theory of Finite Groups, Dover Publications, Inc., New York, 1994.
[4] J. D. Key and J. Moori, Codes, designs and graphs from the Janko groups J1 and J2, J. Combin. Math. Combin. Comput., 40 (2002), 143-159.
[5] J. D. Key and J. Moori, Designs from maximal subgroups and conjugacy classes of finite simple groups, J. Combin. Math. Combin. Comput., 99 (2016), 41-60.
[6] J. D. Key, J. Moori and B. G. Rodrigues, On some designs and codes from primitive representations of some finite simple groups, J. Combin. Math. Combin. Comput., 45 (2003), 3-19.
[7] O. H. King, The subgroup structure of finite classical groups in terms of geometric configurations, in Surveys in Combinatorics, London Math. Soc. Lecture Note Ser., 327, Cambridge Univ. Press, Cambridge, 2005, 26-56. doi: 10.1017/CBO9780511734885.003.
[8] V. M. Levchuk and Y. N. Nuzhin, The structure of Ree groups, Algebra i Logika, 24 (1985), 26-41.
[9] J. Moori, Finite groups, designs and codes, in Information Security, Coding Theory and Related Combinatorics, NATO Sci. Peace Secur. Ser. D Inf. Commun. Secur., 29, IOS, Amsterdam, 2011, 202-230.
[10] J. Moori and B. G. Rodrigues, A self-orthogonal doubly even code invariant under McL:2, J. Combin. Theory Ser. A, 110 (2005), 53-69. doi: 10.1016/j.jcta.2004.10.001.
[11] J. Moori and B. G. Rodrigues, Some designs and codes invariant under the simple group Co2, J. Algebra, 316 (2007), 649-661. doi: 10.1016/j.jalgebra.2007.02.004.
[12] J. Moori and B. G. Rodrigues, A self-orthogonal doubly-even code invariant under McL, Ars Combin., 91 (2009), 321-332.
[13] J. Moori and B. G. Rodrigues, On some designs and codes invariant under the Higman-Sims group, Util. Math., 86 (2011), 225-239.
[14] J. Moori, B. G. Rodrigues, A. Saeidi and S. Zandi, Some symmetric designs invariant under the small Ree groups, Comm. Algebra, 47 (2019), 2131-2148. doi: 10.1080/00927872.2018.1530245.
[15] J. Moori and A. Saeidi, Some designs and codes invariant under the Tits group, Adv. Math. Commun., 11 (2017), 77-82. doi: 10.3934/amc.2017003.
[16] J. Moori and A. Saeidi, Some designs invariant under the Suzuki groups, Util. Math., 109 (2018), 105-114.
[17] J. Moori and A. Saeidi, Constructing some designs invariant under PSL2(q), q even, Comm. Algebra, 46 (2018), 160-166. doi: 10.1080/00927872.2017.1316854.
[18] R. Ree, A family of simple groups associated with the simple Lie algebra of type (G2), Amer. J. Math., 83 (1961), 432-462. doi: 10.2307/2372888.
[19] D. O. Revin and E. P. Vdovin, On the number of classes of conjugate Hall subgroups in finite simple groups, J. Algebra, 324 (2010), 3614-3652. doi: 10.1016/j.jalgebra.2010.09.014.
[20] H. Ward, On Ree's series of simple groups, Trans. Amer. Math. Soc., 121 (1966), 62-89. doi: 10.2307/1994333.
[21] R. A. Wilson, The Finite Simple Groups, Graduate Texts in Mathematics, 251, Springer-Verlag London, Ltd., London, 2009. doi: 10.1007/978-1-84800-988-2.
[22] R. A. Wilson, Another new approach to the small Ree groups, Arch. Math. (Basel), 94 (2010), 501-510. doi: 10.1007/s00013-010-0130-4.
Table 1. Non-trivial designs from $G = Ree(q)$ using construction Method 2

| Max | $t = o(x)$ | $v = \lvert x^G \rvert$ | $k = \lvert M \cap x^G \rvert$ | $\lambda = \chi_{M_i}(x)$ |
| --- | --- | --- | --- | --- |
| $M_1$ | $t = 2$ | $q^2(q^2-q+1)$ | $q^2$ | $q+1$ |
| $M_1$ | $t = 3$ | $(q^3+1)(q-1)$ | $q-1$ | $1$ |
| $M_1$ | $t = 3$ | $\frac{q(q^3+1)(q-1)}{2}$ | $\frac{q(q-1)}{2}$ | $1$ |
| $M_1$ | $t = 9$ | $\frac{q^2(q^3+1)(q-1)}{3}$ | $\frac{q^2(q-1)}{3}$ | $1$ |
| $M_1$ | $t = 6$ | $\frac{q^2(q^3+1)(q-1)}{2}$ | $\frac{q^2(q-1)}{2}$ | $1$ |
| $M_1$ | $t \mid (q-1)$, $t \ne 2$ | $q^3(q^3+1)$ | $2q^3$ | $2$ |
| $M_2, M_3$ | $t = 2$ | $q^2(q^2-q+1)$ | $q^{\mp}$ | $\frac{q(q^2-1)}{6}$ |
| $M_2, M_3$ | $t = 3$ | $\frac{q(q^3+1)(q-1)}{2}$ | $q^{\mp}$ | $\frac{q^2}{3}$ |
| $M_2, M_3$ | $t = 6$ | $\frac{q^2(q^3+1)(q-1)}{2}$ | $q^{\mp}$ | $\frac{q}{3}$ |
| $M_2, M_3$ | $t \mid q^{\mp}$ | $q^3(q^2-1)q^{\pm}$ | $6$ | $1$ |
| $M_4$ | $t = 2$ | $q^2(q^2-q+1)$ | $q^2-q+1$ | $q^2-q+1$ |
| $M_4$ | $t = 3$ | $\frac{q(q^3+1)(q-1)}{2}$ | $\frac{q^2-1}{2}$ | $q$ |
| $M_4$ | $t = 6$ | $\frac{q^2(q^3+1)(q-1)}{2}$ | $\frac{q^2-1}{2}$ | $1$ |
| $M_4$ | $t \mid (q-1)$, $t \ne 2$ | $q^3(q^3+1)$ | $q(q+1)$ | $1$ |
| $M_4$ | $t \mid \frac{q+1}{2}$, $t \ne 2$ | $q^3(q^2-q+1)(q-1)$ | $3q(q-1)$ | $3$ |
| $M_5$ | $t = 2$ | $q^2(q^2-q+1)$ | $q+4$ | $\frac{q(q-1)(q+4)}{6}$ |
| $M_5$ | $t = 3$ | $\frac{q(q^3+1)(q-1)}{2}$ | $q+1$ | $\frac{q^2}{3}$ |
| $M_5$ | $t = 6$ | $\frac{q^2(q^3+1)(q-1)}{2}$ | $q+1$ | $\frac{q}{3}$ |
| $M_5$ | $t \mid \frac{q+1}{2}$, $t \ne 2$ | $q^3(q^2-q+1)(q-1)$ | $6$ | $1$ |
| $M_6$ | $t = 2$ | $q^2(q^2-q+1)$ | $q_0^2(q_0^2-q_0+1)$ | $\frac{q(q^2-1)}{q_0(q_0^2-1)}$ |
| $M_6$ | $t = 3$ | $(q^3+1)(q-1)$ | $(q_0^3+1)(q_0-1)$ | $\frac{q^3}{q_0^3}$ |
| $M_6$ | $t = 3$ | $\frac{q(q^3+1)(q-1)}{2}$ | $\frac{q_0(q_0^3+1)(q_0-1)}{2}$ | $\frac{q^2}{q_0^2}$ |
| $M_6$ | $t = 9$ | $\frac{q^2(q^3+1)(q-1)}{3}$ | $\frac{q_0^2(q_0^3+1)(q_0-1)}{3}$ | $\frac{q}{q_0}$ |
| $M_6$ | $t \mid (q_0-1)$, $t \ne 2$ | $q^3(q^3+1)$ | $q_0^3(q_0^3+1)$ | $\frac{q-1}{q_0-1}$ |
| $M_6$ | $t \mid \frac{q_0+1}{2}$, $t \ne 2$ | $q^3(q^2-q+1)(q-1)$ | $q_0^3(q_0^2-q_0+1)(q_0-1)$ | $\frac{q+1}{q_0+1}$ |
| $^*M_6$ | $t \mid q_0^{\pm}$ | $q^3(q^2-1)q^{\pm}$ | $q_0^3(q_0^2-1)q_0^{\pm}$ | $\frac{q^{\mp}}{q_0^{\mp}}$ |
| $^{**}M_6$ | $t \mid q_0^{\pm}$ | $q^3(q^3+1)$ | $q_0^3(q_0^2-1)q_0^{\pm}$ | $\frac{q-1}{q_0^{\mp}}$ |
BMC Biomedical Engineering
A portable assist-as-need upper-extremity hybrid exoskeleton for FES-induced muscle fatigue reduction in stroke rehabilitation
Ashley Stewart (ORCID: orcid.org/0000-0001-5350-2053) 1,
Christopher Pretty 1 &
Xiaoqi Chen 1
BMC Biomedical Engineering volume 1, Article number: 30 (2019)
Hybrid exoskeletons are a recent development that combine Functional Electrical Stimulation (FES) with actuators to improve both the mental and physical rehabilitation of stroke patients. Hybrid exoskeletons have been shown to be capable of reducing the weight of the actuator and improving movement precision compared to FES alone. However, little attention has been given to the ability of hybrid exoskeletons to reduce and manage FES-induced fatigue, or to adapting to user ability. This work details the construction and testing of a novel assist-as-need upper-extremity hybrid exoskeleton which uses model-based FES control to delay FES-induced muscle fatigue. The hybrid control is compared with FES-only control on a healthy subject.
The hybrid system produced 24° less average angle error and 13.2° less Root Mean Square Error than FES on its own, and showed a reduction in FES-induced fatigue.
As far as the authors are aware, this is the first study which provides evidence of the advantages of hybrid exoskeletons compared to the use of FES on its own with regards to the delay of FES-induced muscle fatigue.
Stroke is the second largest cause of disability worldwide after dementia [1]. Temporary hemiparesis is common among stroke survivors. Regaining strength and movement in the affected side takes time and can be improved with the use of rehabilitation therapy involving repetitive and function-specific tasks [2]. Muscle atrophy is another common issue that occurs after a stroke due to lack of use of the muscle. For each day a patient is in hospital lying in bed with minimal activity, approximately 13% of muscular strength is lost (L. Ellis, S. Jackson, C.-Y. Liu, P. Molloy and K. Paterson, Lower Limb Exoskeleton Final Report, unpublished). Electromechanically actuated exoskeletons offer huge advantages in their ability to repetitively and precisely provide assistance or resistance to a user. However, electromechanical actuators which provide the required forces are often heavy and have high power requirements, which limits portability. Furthermore, muscle atrophy can only be prevented by physically working the muscles, either through the patient's own volition or through the use of Functional Electrical Stimulation (FES).
FES is the application of high-frequency electrical pulses to the nerves or directly to the muscle belly in order to elicit contractions in the muscle. FES devices are typically lightweight, and FES is well suited to reducing muscle atrophy in patients with no or extremely limited movement. The trade-off is that precise control of FES is extremely difficult, and controlling specific, repetitive, and functional movement is not easily accomplished. Furthermore, extended use of FES is limited by the onset of muscle fatigue caused by the unnatural motor unit recruitment order [3]. The forces required for large movements, such as shoulder abduction, are too great to be provided by FES, which is much better suited to smaller movements such as finger extension [4, 5]. Some patients also find the use of FES painful.
Combining the use of FES and an electromechanical actuator within an exoskeleton can potentially overcome the limitations of each individual system. Despite the potential advantages of hybrid exoskeletons, only limited studies have so far been done on their effectiveness. A recent review of upper-extremity hybrid exoskeletons [6] highlighted the advantages hybrid exoskeletons (exoskeletons which combine FES with an actuator) have with regards to improving the precision of FES-induced movements. However, little attention has been given to the reduction and management of FES-induced fatigue. FES control systems used for upper-extremity hybrid exoskeletons simply ramp up stimulation intensity manually when fatigue is observed.
This work describes the design and testing of an assist-as-need upper-extremity hybrid exoskeleton which uses model-based control of FES with a focus on reducing FES-induced muscle fatigue. The control system is described in Section "Theory", and the results are presented in Section "Results". A discussion of the results is given in Section "Discussion". Conclusions are summarised in Section "Conclusion". The methods, the physical structure of the exoskeleton, and the sensing system are described in Section "Material and methods".
It is highly desirable in stroke rehabilitation robotics that a robot or exoskeleton be capable of providing assistance-as-needed. This way the patient is encouraged to make the effort to achieve movement rather than learning to rely on the robot to perform the movement [7,8,9]. Appropriately timed action is more important than strength for functional gains; however, repetitive practice which builds strength without specific functional application can still help to diminish impairment [10]. The ability of rehabilitation robots to adapt to different users, and even to the same user on different days or throughout the same session, is also highly important with regards to minimising the set-up time and cost of rehabilitation [9]. In general the robot should aid and encourage but not limit the movement of the patient [9, 11]. Above all else, the robot should pose no harm to the user or nearby individuals.
To implement the concept of assist-as-need there are two important features which are desired:
The assistance provided from the FES and motor should be the minimum which the patient requires to perform the movement at a given time.
The FES should perform the bulk of the movement which the patient is physically unable to perform. This ensures that most of the movement requires effort from the patient's muscles and thus improves muscular strength.
In the system proposed here the angle of the arm can be affected and controlled by three different inputs: volitional movement from the subject, FES-induced movement, and rotation of the motor. Any one of these on its own could potentially produce the desired angle. However, to achieve the two defined desires, there is a necessary hierarchy of control.
The control system may in general be clearly divided into at least two control systems, each related to a different output variable which can be considered independently (Fig. 1). There is one situation however, where this is not the case. This situation would occur if neither the FES nor the user were able to provide sufficient torque to produce the movement. In this type of situation the motor should provide positive active assistance for the user and the control for the motor would be based on the angle rather than the measured support. It is important to note that the assistance that the motor provides is purely for flexion. The motor may slow the rate at which the arm extends but it cannot pull the arm down faster than gravity.
Fig. 1 High Level Control of the Hybrid System
Section "Setup" will describe the system set-up process. Sections "Motor" to "FES Gain (k)" will present the individual control systems for each of the four variables; the arm angle, the % support, the desired % support, and the overall gain for the FES, respectively.
Previous work [12] investigated the performance of a new linear model for FES control. This model is described by Eq. 1.
$$ \Delta \uptheta =\mathrm{k}\left({\mathrm{v}}_{\mathrm{g}}\Delta \mathrm{v}+{\mathrm{pw}}_{\mathrm{g}}\Delta \mathrm{pw}+{\mathrm{f}}_{\mathrm{g}}\Delta \mathrm{f}\right) $$
θ is the elbow angle in degrees
v is the voltage in volts
pw is the pulse-width in microseconds (full pulse length = positive + negative portions)
f is the frequency in Hertz
k is the overall gain
vg= 14, voltage gain
pwg = 0.15, pulse-width gain
fg = 0.22, frequency gain
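For illustration, Eq. 1 translates directly into a short function (a minimal sketch, not the authors' implementation; the gains are the values listed above):

```python
V_GAIN, PW_GAIN, F_GAIN = 14, 0.15, 0.22  # vg, pwg, fg from Eq. 1

def predicted_angle_change(k, dv, dpw, df):
    """Predicted elbow-angle change (degrees) for changes in
    voltage (V), pulse-width (us) and frequency (Hz)."""
    return k * (V_GAIN * dv + PW_GAIN * dpw + F_GAIN * df)

print(predicted_angle_change(k=1.0, dv=0.5, dpw=20, df=0))  # 7.0 + 3.0 = 10.0 degrees
```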
The model performed well for different subjects, and only the threshold voltage and the overall gain needed to be found for each subject. As described in Section "Material and methods", knowledge of the user's arm weight is also required to measure the support percentage. Because this system is adaptive, it is possible to initially estimate the value of k conservatively (higher rather than lower, so that the system starts with a small stimulation intensity) and have the system recalculate k at run time. Thus, there are only two parameters which must be obtained during setup. These are obtained as follows:
The user is instructed to relax their arm so that the palm faces the user (parallel to the sagittal plane) with fingers pointed down. Once the user is relaxed the system is switched on.
At the beginning of setup the motor rotates the arm to 90°. Five measurements each are taken of the angle and torque. These readings are averaged and used to calculate the weight of the arm under the assumption that the arm and exoskeleton are a point mass at distance 0.13 m from the elbow. The motor then lowers the arm back to 0°.
The voltage threshold test is conducted. Stimulation is applied at a frequency of 30.5 Hz and a pulse-width of 200 μs. Voltage steps are applied in increments of 0.5 V, starting at 10 V. Each step is applied for a duration of 3 s and the peak arm response is recorded in degrees. When a step results in a peak arm angle of 20°, the voltage threshold test is complete and the input voltage is recorded and defined as the threshold voltage. In between each step, if a 20° angle has not been achieved, stimulation is turned off for a duration of 3 s before the next step is applied. This short rest prevents the arm getting used to the stimulation, which would affect the voltage threshold (more stimulation would be required to achieve a given angle).
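A minimal sketch of this ramp procedure (the `apply_stimulation`, `peak_angle`, and `rest` helpers are hypothetical stand-ins for the hardware interface, not the authors' code):

```python
def find_voltage_threshold(apply_stimulation, peak_angle, rest, v_max=40.0):
    """Ramp voltage in 0.5 V steps until a 3 s burst produces a 20-degree peak."""
    v = 10.0
    while v <= v_max:                    # safety cap (illustrative, not from the paper)
        apply_stimulation(voltage=v, pulse_width_us=200,
                          frequency_hz=30.5, seconds=3)
        if peak_angle() >= 20.0:
            return v                     # threshold voltage found
        rest(seconds=3)                  # stimulation off; avoids habituation
        v += 0.5                         # next 0.5 V step
    raise RuntimeError("no 20-degree response below v_max")
```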
The entire setup takes less than 6 min, including attachment of the exoskeleton and electrodes. Once the setup has completed, the control system runs on the right arm in response to a desired arm angle based on the position of the left arm. Control of this system is described in Sections "Motor" to "FES Gain (k)".
When the desired support is less than 100%, the motor speed is set using proportional-derivative (PD) control based on the support error. When the desired support is 100%, the motor speed is set using PD control based on the angle error, for errors larger than 5°. If the angle error is negative and greater than 20° in magnitude, the motor will lower the arm at the maximum speed. This last state ensures that the motor does not impede movement of the user. Due to the setup of the pulley and cable system, the motor cannot physically impede the user when raising the arm, but could potentially impede movement during lowering.
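A minimal sketch of this mode switching (the PD controller objects and the speed convention are illustrative placeholders, not the authors' tuned implementation):

```python
def motor_speed(desired_support, support_error, angle_error,
                pd_support, pd_angle, max_lower_speed=-1.0):
    """Select the motor control mode described above and return a speed command."""
    if desired_support < 100:
        return pd_support.update(support_error)   # track the support target
    if angle_error < -20:
        return max_lower_speed                    # get out of the user's way
    if abs(angle_error) > 5:
        return pd_angle.update(angle_error)       # actively assist toward the angle
    return 0.0                                    # inside the deadband: hold
```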
Control of FES is performed using the model described by Eq. 1. The FES parameter inputs are only updated if the desired angle has changed by more than 5° or if the time since the last update has exceeded 0.5 s. This gives the muscle time to respond to the stimulation. These values were experimentally found to be suitable while still allowing a faster response from the FES than from the motor. When the FES is updated, the left side of Eq. 1 is set equal to the angle error, and Eq. 1 is then used to calculate the required change in each input parameter so that each parameter contributes the same change in angle.
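Under this equal-contribution rule, each of the three terms in Eq. 1 supplies one third of the required angle change, so the parameter steps follow directly (a minimal sketch, not the authors' code):

```python
V_GAIN, PW_GAIN, F_GAIN = 14, 0.15, 0.22  # gains from Eq. 1

def fes_parameter_steps(angle_error, k):
    """Split the desired angle change equally across the three FES inputs."""
    share = angle_error / 3.0          # each term contributes share = k * gain * step
    dv = share / (k * V_GAIN)          # voltage step, volts
    dpw = share / (k * PW_GAIN)        # pulse-width step, microseconds
    df = share / (k * F_GAIN)          # frequency step, hertz
    return dv, dpw, df

print(fes_parameter_steps(angle_error=15.0, k=1.0))  # ~(0.357 V, 33.3 us, 22.7 Hz)
```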
Desired support
Control of the desired amount of support is performed based on the angle error over time. In general, if the error over time is consistently positive then the desired support should increase. If, on the other hand, the error is consistently negative, then support should decrease. The desired support should not respond too quickly to errors in the angle, and it should ignore any large short-term errors. Thus, rather than using the average error over time, a median filter of length 50 is used. A measurement of the angle error is taken every 0.5 s.
If the median error over the last 25 s is within 5°, no changes are made to the desired amount of support. If the median error is positive and larger than 5°, the desired support is increased. If the median error is negative and larger than 5° in magnitude, the desired support is decreased. The previous median error is also used to calculate how much the desired support should be changed. If there has been a change in both the desired support and the median error, then those values are used to calculate the new support using Eq. 2. If there has not been a change in the desired support or in the median angle error, then the desired support is changed by 1% for every 1° of median angle error, for median angle errors greater than 5°. Regardless of the median error, if all of the FES parameters used for the current step are applied at their maximum values, the desired support is increased by 20%. To prevent rapid changes, the maximum change in the desired support is limited to 20% every 0.5 s.
$$ S(t+0.5)=S(t)+ \frac{\tilde{e}_{\theta}(t+0.5)-\tilde{e}_{\theta}(t)}{\tilde{e}_{\theta}(t)-\tilde{e}_{\theta}(t-0.5)} \times \left[S(t)-S(t-0.5)\right] $$
S is the desired % support
\( \tilde{e}_{\theta} \) is the median angle error
t is the current time
\( \tilde{e}_{\theta}(t+0.5) \) is the desired median angle error at t + 0.5 s, which is set equal to 0°
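A minimal sketch of this update rule (an interpretation of the description above, with Eq. 2 rearranged as an increment; the 5° deadband and the 20% limits are as stated, everything else is illustrative):

```python
from statistics import median

def update_desired_support(errors, support_now, support_prev, median_prev, fes_maxed):
    """errors: the last 50 angle-error samples, one taken every 0.5 s."""
    med = median(errors)
    if fes_maxed:                                   # all FES inputs at their maximums
        step = 20.0
    elif abs(med) <= 5.0:                           # inside the 5-degree deadband
        step = 0.0
    elif support_now != support_prev and med != median_prev:
        # Eq. 2 as an increment, with the target error at t + 0.5 s set to 0
        step = (0.0 - med) / (med - median_prev) * (support_now - support_prev)
    else:
        step = med                                  # 1% of support per degree of error
    step = max(-20.0, min(20.0, step))              # at most 20% change per 0.5 s
    return min(100.0, max(0.0, support_now + step))
```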
FES gain (k)
Every 0.5 s, Eq. 1 is used to calculate the overall gain (k) from the measured right arm angle (minus the 20° threshold) and the FES parameters (minus their respective thresholds). If the right arm angle is greater than 20°, the input parameters are greater than their threshold values, and the resulting overall gain is therefore positive, then the calculated value is added to an array containing the last 50 calculations of the overall gain. Every 0.5 s, the median value is retrieved from the array and, after checking limits, is set as the new overall gain used to calculate future FES parameter step sizes for a given desired arm angle change. The change in the overall gain is limited to plus or minus 0.2 every 0.5 s.
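A minimal sketch of this estimator (hypothetical names; the 20° threshold, 50-sample median, and ±0.2 limit are as described above):

```python
from collections import deque
from statistics import median

V_GAIN, PW_GAIN, F_GAIN = 14, 0.15, 0.22  # gains from Eq. 1
samples = deque(maxlen=50)                # last 50 gain estimates

def update_overall_gain(k, angle, dv, dpw, df):
    """Invert Eq. 1 for k using measurements above the 20-degree threshold."""
    drive = V_GAIN * dv + PW_GAIN * dpw + F_GAIN * df
    if angle > 20 and drive > 0:
        samples.append((angle - 20) / drive)
    if samples:
        k_new = median(samples)
        k = max(k - 0.2, min(k + 0.2, k_new))  # limit change to +/-0.2 per 0.5 s
    return k
```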
Ten tests were conducted on the exoskeleton using a healthy 27-year-old female subject, with different initial values for the overall gain and desired assistance. Selected plots of the test results are displayed in Sections "Test 2 – 2 Minutes of Hybrid Control, k = 1, Assist = 0 %" to "Summary of Results for All Tests" and Figs. 2, 3, 4, 5, 6, 7, 8 and 9. Subsection "Summary of Results for All Tests" contains a summary of all the results. The test details are summarised in Table 1, and the results for each test are summarised in Table 2 in Subsection "Summary of Results for All Tests". Ethical approval for testing was granted by the University of Canterbury Human Ethics Committee.
Table 1 Initial Parameters, Control Scheme, and Test Length for Tests Conducted Using the Hybrid Exoskeleton on one Healthy Individual
Table 2 Summary of Exoskeleton Test Results, Initial Parameters, Control Scheme, and Test Length for Tests Conducted Using the Hybrid Exoskeleton on one Healthy Individual. RMSE = Root Mean Square Error
All tests, except Test 5, were conducted with the user providing no volitional input from their right arm. Test 5 involved the user moving both arms volitionally together in a mirroring pattern. Test 6 used FES only, with no assistance from the motor. Tests 9 and 10 used only the motor and no FES. Only Test 10 did not perform assist-as-need. The tests are listed in the order they were conducted, and only short rests (a few minutes) were taken between each test. All tests were conducted on the same day. A discussion of the tests and results is given in Section "Discussion", following the figures. Some mechanical issues occurred following Test 7, resulting in a longer rest time (about 30–60 min) prior to Test 8. Any effects this had are discussed in Section "Discussion".
The first figure in each test subsection displays the desired angle (the angle of the left arm, the input) and the measured angle (the angle of the right arm, the output) during the test. In cases where there is a second figure, it shows the change in the desired support and the change in the gain during the test in response to the assist-as-need control scheme.
Test 2 – 2 minutes of hybrid control, k = 1, Assist = 0%
Fig. 2 Right Arm Angle (Orange) and Left Arm Angle (Blue) during Test 2
Test 5 – 2 minutes of hybrid control with volitional movement, k = 1, Assist = 50%
Fig. 4 Variation in Overall Gain (Orange) and Desired Support (Blue) during Test 5
Test 6 – 6 minutes of FES control, k = 10, Assist = 0%
Test 7 – 6 minutes of hybrid control, k = 10, Assist = 0%
Test 10 – 2 minutes of motor control without assist-as-need, k = 1, Assist = 100%
Fig. 7 Right Arm Angle (Orange) and Left Arm Angle (Blue) during Test 10
Summary of results for all tests
Fig. 8 Root Mean Square Error (RMSE) for Each Test
Fig. 9 Change in Root Mean Square Error (RMSE) during Each Test
The results for the tests are given in Table 2 and Figs. 8 and 9 at the end of the previous section. It is important to note that the input reference angle trajectory was not the same for every test; thus these results should only be used to give a general, high-level performance comparison. It should also be noted from the angle comparison plots that the right-arm rest angle appeared to be slightly higher than that of the left arm. This is likely because the left arm was controlled volitionally the entire time, whereas the right arm was in a relaxed state. When the arm is relaxed it was observed to rarely rest exactly at 0°, but rather a little higher, with a slight bend at the elbow. Thus, the controlling left arm would be physically held at 0° and the right arm would settle slightly above 0° in response. This results in a slight shift in the median and average angle error towards the negative, and an increase in the magnitude of the Root Mean Square Error (RMSE).
This is likely why, at first glance, the volitional test (Test 5) and the motor-only test without assist-as-need (Test 10) appear to have larger average and median errors than the hybrid tests. It is expected that these two tests should produce the smallest errors. That said, from the plots for these two tests (Figs. 3 and 7) it can be seen that there is also some error at the peaks as well as at the troughs, so not all of the error can be attributed to the rest angle. Furthermore, when comparing the RMSE instead of the median and average errors, the volitional test (Test 5) does perform the best, as expected, with the lowest RMSE value. It is important to note that no time delay was considered when comparing the desired angle with the measured angle; thus the response time should cause a larger measured error for all tests compared with the volitional test, during which the movement was conducted simultaneously. It is worth noting that even volitional movement by a healthy subject, without a time delay, does not produce perfect tracking.
Overall, the first four hybrid control tests (Tests 1–4) performed similarly to the motor-only control (Test 10), with similar-sized error measurements across the board. There was some variation in performance among the hybrid control tests depending on the initial test parameters (k and desired assistance), variations in test time, and variations in fatigue, but no noticeable trend was observed and the differences were not large. The difference in RMSE between the best and worst of the first four hybrid control tests was 7.44 degrees.
There are three tests which stand out as having large errors: Tests 6, 8, and 9. Test 6 is the FES-only control test, and given that the difficulty of performing large movements with FES and FES-induced fatigue are well-known problems for FES, it is not surprising that Test 6 has the worst performance. The results for Tests 8 and 9 are less expected and will be discussed later in this section.
Due to all of these tests being conducted on the same day and one after another, it is expected that the arm will be more fatigued for the later tests. This is backed up by the general increase seen in the voltage threshold (vthresh) and the general decrease in the overall gain (k). It is for this reason that the 6 min FES-only test (Test 6) was conducted prior to the 6 min hybrid test (Test 7). If the hybrid control is able to reduce the impact of FES-induced fatigue, then it is expected that the performance of Test 7 should be better than that of Test 6; however, it is also expected that a good performance would be harder to achieve in the presence of more fatigue. Thus, given that Test 7 had better performance than Test 6 despite being performed after Test 6, there is much stronger support for the argument that the hybrid control does indeed reduce the impact of FES-induced fatigue. This is further backed up by the smaller final overall gain (k) for Test 6.
In general, the voltage threshold is expected to increase as more tests are conducted; however, small reductions in fatigue can be observed during the tests in response to brief rests. Holding the reference arm at 0° for a while results in an increased response to the FES for the next movement, with a larger response observed following longer rests (Fig. 5). However, due to some minor mechanical issues which took time to repair (as described in Section "Results"), the rest time following Test 7 was longer than a few minutes, allowing the arm more time to rest compared with the time between the other tests. Furthermore, the electrode position was not necessarily kept consistent, due to the removal of electrodes and reapplication of electrode gel following Test 7. This increase in rest time is the likely cause of the reduced voltage threshold seen for Test 8. Thus the values of the overall gain give a better comparison of the induced fatigue for each of the 6 min tests. It is also for these reasons that it is difficult to compare the results from Test 8 with the other FES tests, although it is still useful to observe the parameter and error changes within Test 8. The RMSE values for each minute within all tests are shown in Fig. 9.
The first, solid blue bar for each test in Fig. 9 gives the RMSE for the overall test, while each of the following shaded bars gives the RMSE for each progressive minute; i.e., the first shaded bar gives the RMSE for the first minute, the second shaded bar gives the RMSE for the second minute, and so on. For almost all tests the RMSE can be seen to improve from the first minute to the second, although for most tests this is only by a small amount, so it could simply be attributed to differences in the reference movement or other small random variations. The two exceptions are the two motor-only tests (Tests 9 and 10). The reasons for the results of Test 9 are discussed more thoroughly below. Test 10 does not perform assist-as-need, so it is not expected that the performance would increase during the test, and the change is only a few degrees, so it may be explained simply by differences in the reference movement or other small random variations. There is a larger decrease in the RMSE for some of the tests which start with larger estimates for the overall gain (Tests 6 and 8), which is likely due to the adaptive nature of the control system; i.e., as the value for the overall gain becomes more accurate, the RMSE becomes smaller. However, the improvement for Test 7, while consistent over the first few minutes, is not as significant, despite Test 7 also starting with an overall gain of 10, the same as Tests 6 and 8. Overall, it is expected that fatigue would cause an increase in RMSE over time in an FES system without adaptive parameters and assist-as-need. Indeed, during the last few minutes of the 6 min FES tests an increase in RMSE is observed, which may be attributed to fatigue. Given that the increase and variability are larger for the FES-only test (Test 6) than for the hybrid test (Test 7), this provides further evidence for the hypothesis that hybrid exoskeletons can offer performance improvements over FES-only systems with regards to precise control and fatigue reduction.
The Root Mean Square Error (RMSE) is the most commonly used measurement of performance for prosthesis and exoskeleton control systems [13]; however, given the early state of many of the upper-extremity hybrid exoskeletons described in previous works [6], very little statistical comparison can be made between the results from the hybrid exoskeleton described in this work and those described in [6]. Only one exoskeleton described in [6], the FES/Robot Hand, uses the RMSE as a measurement of performance [14, 15]. The tracking ability of the FES/Robot Hand was tested on 4 stroke subjects during 20 s long tests. The ability of the subjects to track without aid from the hybrid exoskeleton was compared to the tracking ability of the exoskeleton with different combinations of FES and motor support. The RMSE for the volitional movement was 10.9 degrees, while the best exoskeleton performance (with a 50/50 balance of motor and FES support) was 4.9 degrees, and the RMSE for FES only was about 8.5 degrees, resulting in an improvement in the RMSE of 6 degrees between no support and hybrid support, and an improvement of 3.6 degrees between FES only and the hybrid system.
It is important to note that the tracking tests for the FES/Robot Hand were performed on the index finger, which has a comparatively smaller range of motion than the elbow joint, so it is difficult to compare directly with this work. Furthermore, the tests performed in this work compared a healthy subject performing volitional movement with the same subject putting in no effort at all, with FES and with the hybrid system, whereas the tests conducted on the FES/Robot Hand compared stroke patients performing volitional movement with and without the help of the different exoskeleton systems [14, 15]. As this work tests the volitional movement of a healthy subject, it is not expected that the exoskeleton would produce a reduction in error here. Furthermore, complete relaxation of a subject's muscles is not always easy to achieve. In some cases a user may unintentionally fight or aid the FES. Thus, the focus of this work is to compare the performance of the FES on its own with that of the hybrid combination which, as given in Table 2, shows an improvement in RMSE of 13.2 degrees for a 6 min test (an RMSE of 42.9 for the FES system compared with an RMSE of 29.7 for the hybrid system). While it is difficult to compare values directly, both the results described in this work and the results described in [14, 15] demonstrate an improvement of the hybrid system over the use of FES on its own with regards to precision of movement.
One other exoskeleton described in [6], the Wearable Rehabilitation Robot [16], uses a similar type of performance measure. The Integral of the Square of the Error (ISE) is used to compare the performance of an exoskeleton with and without FES for movement of the shoulder and fingers. For these tests the power of the actuator was deliberately reduced below that which would normally be required. The performance of the system was found to be better when the FES was used in addition to the motor, providing evidence that hybrid exoskeletons can reduce the power requirements of the actuator. This cannot be seen in the results presented in the current work, as the power of the actuator was not limited in the same way.
A key novel contribution of the current work is to test whether the hybrid exoskeleton is able to reduce the level of FES-induced fatigue, as this has not been tested by the hybrid exoskeletons described in [6]. As already described, this can be tested by comparing the variations in the final value of the overall gain (k). The greater reduction of the overall gain during the FES test compared to that of the later performed hybrid test indicates that the hybrid system is able to reduce FES-induced fatigue.
Given that all of these tests were conducted on the same individual on the same day, it is expected that the final values for the overall gain would be of roughly similar magnitude, with some variation due to fatigue as described above. It is thus promising to see that despite the large differences in the initial value of the overall gain, the final values are all of similar magnitude to one another, with one exception. The volitional test (Test 5) did not cause variations in the overall gain. This can be attributed to a lack of errors during the test, which would normally cause the software to apply the FES; the value of the overall gain can only be updated if the FES has been applied a sufficient number of times. Given that Test 5 involved the subject moving both arms simultaneously, there was very little error and thus very little reason for the FES to be applied. This is not an issue from a control perspective: if the error were to increase, the FES would be applied and the overall gain would be calculated. Given that the user is very capable, it is not a problem that the overall gain has yet to be calculated, and from a patient monitoring perspective one can still observe that the FES input parameter values are small, the desired assistance has decreased and remains at 0% (Fig. 4), and yet the error is also small. This strongly indicates a user who is capable of performing the movement completely on their own. The change in these values over time will also provide an indication of the user's ability as the user becomes more fatigued and across several sessions. It is also promising to note that the exoskeleton does not appear to impede a user who is capable of fully performing the movement. Overall, the assist-as-need with regard to the overall gain (k) performs well.
The assist-as-need of the motor is not quite as smooth as that of the overall gain, which can be seen by comparing Test 9 and Test 10. The desired assistance is expected to fluctuate somewhat, given that short rests can improve the effectiveness of the FES parameters; however, the rate at which the desired assistance varies during these tests is faster and larger than is desirable. The rest angle of the right arm being greater than 0° may contribute to this, as may the slow lowering speed of the motor (although lowering the arm too quickly is generally undesirable). More likely, however, the assumption made in Section "Desired Support" with regard to Eq. 2 is a poor one. Eq. 2 relies on the assumption that if a previous change in support resulted in a given change in error, then applying that same change in support again would result in the same change in error. Based on the results, this is likely not the case. Other improvements could be made by using the median angle error over a longer time period in addition to making changes to Eq. 2.
So far the hybrid exoskeleton described in this work has only been tested on one healthy subject, a common limitation of current exoskeleton research in general. Furthermore, very few exoskeletons, and even fewer hybrid exoskeletons, have been tested on stroke patients, let alone on large numbers of them. Cost is one of the main barriers to widespread testing of exoskeleton devices. The cost to construct the exoskeleton described in this work is very low (a few hundred NZD), which may help to lessen this barrier in future.
Overall, the hybrid control and assist-as-need control methods perform well in comparison with complete volitional movement and non-hybrid control. In particular, the hybrid system shows improved performance with regard to FES-induced fatigue compared with using FES only, demonstrated by the larger change in overall gain (k) and the larger average and RMS errors for the FES-only control. As far as the authors are aware, this is the first upper-extremity hybrid exoskeleton which uses model-based FES control to perform assist-as-need.
The design of a voltage-controlled functional electrical stimulator has been described in other works [17]; this is the FES device used during the tests described in this work. It allows for control of a wide range of FES parameters. The electrodes used in this work are (50 mm × 50 mm) reusable e-textile electrodes, which have similar performance to, and lower resistance than, conventional hydrogel electrodes [18]. The exoskeleton in this work has been designed for the elbow joint.
For simplicity and portability, the Rhino Motion Controls High Torque Servo Motor (RMCS-2251) has been selected as the actuator for this exoskeleton. This motor is more than capable of providing all of the torque required for movement of the elbow joint [19]. A smaller and lighter motor could be used in its place in future. A portable and rechargeable Li-Po battery (Zippy, 4000 mAh, 11.1 V, Hardcase, 20C Series) was acquired to supply the system as a whole. It is combined with a 150 W adjustable boost circuit (purchased from prodctodc.com - Item ID #090438, set to 13.5 V for this system) and relay circuitry for added safety. The relay section of the circuit was constructed by one of the lab technicians. A 5 V regulator (L78S05CV) was used to step the 13.5 V down for the Arduino and motor. For the testing described in this work, a desktop DC power supply was used in place of a 3 V regulator for the input to the FES circuit, as a slightly more consistent supply could be achieved; a 3 V regulator could be used instead for portability. Ethical approval for testing of this device was granted by the University of Canterbury Human Ethics Committee. Sections "Exoskeleton Construction" and "Sensing" describe the construction of the exoskeleton and the sensing system respectively.
Exoskeleton construction
The powered exoskeleton was arbitrarily chosen to be designed for the right arm, and a second, non-powered, smaller exoskeleton was designed for the left arm to be used as the control input. Construction of both exoskeletons was based around Actobotics components sourced from Sparkfun [20]. The powered exoskeleton is shown in Fig. 10 and the unpowered exoskeleton in Fig. 11.
The Powered Exoskeleton (Right Arm)
Unpowered Exoskeleton (Left Arm)
A swivel hub allows free movement of the elbow joint for both exoskeletons. A pulley is affixed to one side of the swivel hub on the powered exoskeleton. A metal cable is wound around this pulley and runs up the inside of a protective plastic tube. At the other end of the plastic tube the cable is wound around a second pulley which is affixed to the shaft of the motor situated on the shoulder of the user.
To attach the motor to the user, a soft shoulder brace is worn. The exoskeleton is manually lined up with the user's arm and the motor is placed gently on the shoulder. Velcro straps are used to hold the motor and exoskeleton in place. The FES sleeve is placed on the user's arm and electrode gel (Spectra 360) is applied prior to attachment of the exoskeleton. Correct placement of the electrodes is also checked and any adjustments made prior to exoskeleton attachment. Figure 12 shows a user wearing the exoskeleton and FES sleeve.
A User Wearing the Hybrid Exoskeleton
The exoskeleton can be made shorter or longer in the shoulder-to-elbow section by unscrewing the upper metal rod and moving the screw up or down a hole. The entire attachment of the FES electrodes and exoskeleton to the user takes only 1–2 min and can be performed by the user themselves without the need for movement in the right arm. Despite the motor weighing only 180 g (and making up the bulk of the weight of the exoskeleton), the structure was still found to be heavy enough to cause mild discomfort during prolonged wearing (45 min) for a healthy subject. Future designs should consider methods to shift the weight of the motor to the centre of the user's back and away from the shoulder.
To measure the angle of the arm, the shaft of a potentiometer was attached to the pulley at the elbow joint using a set screw hub (Actobotics) and the body of the potentiometer was soldered to a small Vero board and affixed to a rod. The rod is attached to the upper portion of the exoskeleton as shown in (Fig. 13). Thus by measuring the potentiometer voltage the angle can be calculated without regular recalibration. The same method is used to measure the angle of the unpowered left-arm exoskeleton.
Elbow Joint with Potentiometer
To measure the force applied to and by the exoskeleton arm, a 10 kg straight bar load cell is used to connect the elbow section of the exoskeleton to the wrist section (Fig. 14). An HX711 load cell amplifier was used to interface between the load cell and the Arduino. The free body diagram of the exoskeleton is shown in Fig. 15.
Connection of the Load Cell
Free Body Diagram of the Exoskeleton Arm
The interaction forces between the user's forearm and the exoskeleton all occur at the wrist strap point (x), which is located 0.2 m from the elbow pivot joint. The support forces from the motor are applied very close to the elbow pivot joint. The total torque applied at the exoskeleton elbow is due to the torque produced by the interaction forces between the user and the exoskeleton, together with the support torque. The load sensor is located 0.13 m from the elbow pivot joint and measures the force perpendicular to the exoskeleton at this point. Thus the total torque at the exoskeleton elbow joint is the product of the force measured by the load sensor and the distance to the load sensor (0.13 m), as described by Eqs. 3 to 5. The force measured by the load sensor can be calculated from the load sensor reading using Eq. 6.
$$ {\tau}_{tot}={\tau}_{support}+{\tau}_{user} $$
$$ {\tau}_{user}=x\left({F}_{muscle}- mg\cos \left({\uptheta}_{\tau}\right)\right) $$
$$ {\tau}_{tot}=L{F}_{meas} $$
$$ {F}_{meas}= MR+c $$
τtot is the total torque around the exoskeleton elbow joint (anticlockwise).
τsupport is the torque produced by the motor and cable system (anticlockwise).
τuser is the torque produced by the user, includes volitional movement and effects from gravity (anticlockwise).
x is the distance from the elbow joint to the wrist strap (0.2 m).
Fmuscle is the force produced by volitional movement from the user measured in Newtons (perpendicular to the arm and upwards).
m is the combined mass of the arm and exoskeleton in kilograms (at the wrist strap).
g is acceleration due to gravity (9.8 m s−2) (perpendicular to Earth and downwards).
θ is the elbow angle in degrees. This is the angle which the arm makes measured from 0° (when the arm hangs straight and perpendicular to Earth) in an anticlockwise direction.
θτ is (90 – θ) for arm angles below 90° and (θ – 90) for arm angles above 90°.
L is the distance from the elbow joint to the load sensor (0.13 m).
Fmeas is the force measured by the load sensor in Newtons (perpendicular to the arm and upwards).
M is the gradient of the load sensor calibration (Eq. 6).
c is the offset of the load sensor calibration (Eq. 6).
R is the output of the load sensor amplifier (volts).
Thus the measured total torque is given by Eq. 7.
$$ {\tau}_{tot}=L\left( MR+c\right) $$
The exoskeleton arm was rotated using the motor and cable system and measurements of the load sensor were taken at 14 different angles. This was repeated with three different weights attached to the end of the exoskeleton arm: 100 g, 200 g, and 500 g. Using these measurements and the expected torque produced by each weight, the gradient and offset in Eq. 6 were calculated. The mass of the empty exoskeleton arm was calculated to be 0.1026 kg.
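The fitting procedure itself is not detailed in the text; an ordinary least-squares fit over the 14-angle, three-weight data set, as sketched below in Python, would recover the gradient M and offset c of Eq. 6. The function name and the use of np.polyfit are assumptions for illustration.

```python
import numpy as np

def fit_load_cell(readings_V, expected_torques_Nm, L=0.13):
    """Least-squares fit of Eq. 6, F_meas = M*R + c.

    readings_V: amplifier outputs R at each angle/weight combination.
    expected_torques_Nm: torques computed from the known test weights and
    angles; rearranging Eq. 5 gives the expected force F_meas = torque / L.
    """
    F = np.asarray(expected_torques_Nm, float) / L
    M, c = np.polyfit(np.asarray(readings_V, float), F, 1)
    return M, c
```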
Using these values, the expected torque produced by the mass of the exoskeleton can be calculated for a given angle using Eq. 4. This torque is defined as the set point for that angle. If the torque calculated from the load sensor reading (Eq. 7) is greater (more anticlockwise, i.e. upwards) than the expected torque (Eq. 4), then the difference is due to torque produced by the subject (either volitional or FES-induced), and the subject is supporting at least some of the weight of the exoskeleton arm in addition to their own. If the measured torque is equal to the set point, then the subject is supporting their own arm weight but not the weight of the exoskeleton; this point is therefore also called the 0% support point. If the measured torque is less than the set point, then the exoskeleton is providing support for the subject.
In order to calculate the percentage of support which the exoskeleton is providing, knowledge of the subject's arm weight is required. During setup, the system rotates the exoskeleton arm to 90° while the subject relaxes their arm. Several measurements are taken by the software and the results are averaged. From these measurements the arm mass of the subject can be calculated and the 100% support torque point is defined. Any torque measurement between this point and the set point means that the exoskeleton is providing a corresponding percentage of support; for example, halfway between the two values corresponds to 50% support.
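This interpolation is straightforward to express in code. The following is a minimal sketch assuming the set point and the 100% support torque have already been determined as described above; the function name and the example values are hypothetical.

```python
def percent_support(tau_meas, set_point, full_support):
    """Support level by linear interpolation between two reference torques.

    set_point: torque when the subject carries their own arm but not the
    exoskeleton (0% support). full_support: torque recorded at 90 degrees
    with the subject fully relaxed (100% support).
    """
    return 100.0 * (tau_meas - set_point) / (full_support - set_point)

# a measurement halfway between the set point and the 100% point gives ~50%
print(percent_support(-0.15, -0.10, -0.20))
```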
The datasets used and analysed during the current study are available from the corresponding author on reasonable request.
c: Offset of the load sense reading
f: Frequency in Hertz
FES: Functional Electrical Stimulation
ISE: Integral of the Square of the Error
k: Overall gain
M: Gradient of the load sense reading
PD: Proportional-Derivative
pw: Pulse-width in microseconds (full pulse length = positive + negative portions)
R: Output of the load sensor amplifier (volts)
RMSE: Root Mean Square Error
v: Voltage in volts
θ: Elbow angle in degrees
θτ: (90 − θ) for arm angles below 90° and (θ − 90) for arm angles above 90°
Fmeas: Force measured by the load sensor in Newtons (perpendicular to the arm and upwards)
Fmuscle: Force produced by volitional movement from the user measured in Newtons (perpendicular to the arm and upwards)
L: Distance from the elbow joint to the load sensor (0.13 m)
\( \overset{\sim }{e_{\theta }} \): Median angle error
\( {e}_{\theta } \): Angle error
fg: Frequency gain (equal to 0.22)
g: Acceleration due to gravity (9.8 m s−2) (perpendicular to Earth and downwards)
m: Combined mass of the arm and exoskeleton in kilograms (at the wrist strap)
pwg: Pulse-width gain (equal to 0.15)
vg: Voltage gain (equal to 14)
x: Distance from the elbow joint to the wrist strap (0.2 m)
τsupport: Torque produced by the motor and cable system (anticlockwise direction)
τtot: Total torque around the exoskeleton elbow joint (anticlockwise direction)
τuser: Torque produced by the user, including volitional muscular movement and the effects from gravity (anticlockwise direction)
World Heart Federation. Stroke. 2015. Available from: http://www.world-heart-federation.org/cardiovascular-health/stroke/. Accessed 31 July 2016.
Senelick RC. Technological advances in stroke rehabilitation—high tech marries high touch. US Neurology. 2010;6(2):102–4.
Doucet BM, Lam A, Griffin L. Neuromuscular electrical stimulation for skeletal muscle function. Yale J Biol Med. 2012;85(2):201.
Schill O, et al. OrthoJacket: an active FES-hybrid Orthosis for the paralysed upper extremity. Biomedizinische Technik/Biomedical Engineering. 2011;56(1):35–44.
Pylatiuk C, et al. Design of a Flexible Fluidic Actuation System for a Hybrid Elbow Orthosis. In: ICORR 2009. IEEE International Conference on Rehabilitation Robotics: IEEE; 2009. https://ieeexplore.ieee.org/document/5209540.
Stewart AM, et al. Review of Upper Limb Hybrid Exoskeletons. IFAC-PapersOnLine. 2017;50(1):15169–78.
Lu EC, et al. The development of an upper limb stroke rehabilitation robot: identification of clinical practices and design requirements through a survey of therapists. Disabil Rehabil Assist Technol. 2011;6(5):420–31.
Jarrassé N, et al. Robotic Exoskeletons: A Perspective for the Rehabilitation of Arm Coordination in Stroke Patients. Front Human Neurosci. 2014;8:947.
Maciejasz P, et al. A survey on robotic devices for upper limb rehabilitation. J Neuroeng Rehabil. 2014;11:3.
Patton J, Small SL, Zev Rymer W. Functional restoration for the stroke survivor: informing the efforts of engineers. Top Stroke Rehabil. 2008;15(6):521–41.
Loureiro RC, et al. Advances in upper limb stroke rehabilitation: a technology push. Med Biol Eng Computing. 2011;49(10):1103–18.
Stewart AM, Pretty CG, Chen X. An investigation into the effect of electrode type and stimulation parameters on FES-induced dynamic movement in the presence of muscle fatigue for a voltage-controlled stimulator. IFAC J Syst Control. 2019;8:100043.
Fougner A, et al. Control of upper limb prostheses: terminology and proportional myoelectric control—a review. IEEE Trans Neural Syst Rehabil Eng. 2012;20(5):663–77.
Rong W, et al. Combined Electromyography (EMG)-driven Robotic System with Functional Electrical Stimulation (FES) for Rehabilitation. In: 2012 38th Annual Northeast Bioengineering Conference (NEBEC): IEEE; 2012. https://ieeexplore.ieee.org/document/6207090.
Rong W, et al. Effects of electromyography-driven robot-aided hand training with neuromuscular electrical stimulation on hand control performance after chronic stroke. Disabil Rehabil Assist Technol. 2013;10(2):149–59.
Tu X, et al. Design of a Wearable Rehabilitation Robot Integrated with Functional Electrical Stimulation. In: 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob): IEEE; 2012. https://ieeexplore.ieee.org/abstract/document/6290720.
Stewart AM, Pretty CG, Chen X. Design and Testing of a Novel, Low-cost, Low-voltage, Functional Electrical Stimulator. In: 2016 12th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications (MESA): IEEE; 2016. https://ieeexplore.ieee.org/document/7587155.
Stewart AM, Pretty CG, Chen X. An evaluation of the effect of stimulation parameters and electrode type on bicep muscle response for a voltage-controlled functional electrical stimulator. IFAC-PapersOnLine. 2017;50(1):15109–14.
Perry JC, Rosen J, Burns S. Upper-limb powered exoskeleton design. IEEE/ASME Trans Mechatronics. 2007;12(4):408–17.
Sparkfun. Actobotics. 2018. Available from: https://www.sparkfun.com/pages/Actobotics. Accessed 31 Aug 2018.
The authors would like to thank technical staff at the University of Canterbury Mechanical Engineering Department for their help with circuit construction and component sourcing.
This research was supported and funded by the University of Canterbury through the College of Engineering Publishing Scholarship. The role of the University of Canterbury in this study was as reviewer of the early drafts of the paper.
Mechanical Engineering, University of Canterbury, 20 Kirkwood Ave, Upper Riccarton, Christchurch, 8041, New Zealand
Ashley Stewart, Christopher Pretty & Xiaoqi Chen
Christopher Pretty
Xiaoqi Chen
All authors contributed to the study concept and design. The construction of the exoskeleton, acquisition of data, analysis and interpretation of data, and writing of the draft of the manuscript was performed by AS. Critical revision of the manuscript was performed by XC and CP. All authors read and approved the final manuscript.
Correspondence to Ashley Stewart.
Ethical approval for testing described in this work was granted by the University of Canterbury Human Ethics Committee. Informed and written consent was obtained from all participants.
Consent for publication was obtained for all individuals' data included in this work.
Stewart, A., Pretty, C. & Chen, X. A portable assist-as-need upper-extremity hybrid exoskeleton for FES-induced muscle fatigue reduction in stroke rehabilitation. BMC biomed eng 1, 30 (2019). https://doi.org/10.1186/s42490-019-0028-6
Hybrid exoskeletons
Medical technologies, robotics and rehabilitation engineering
In-orbit operation of an atomic clock based on laser-cooled 87Rb atoms
Liang Liu1, De-Sheng Lü1, Wei-Biao Chen2, Tang Li1, Qiu-Zhi Qu1, Bin Wang1, Lin Li1, Wei Ren1, Zuo-Ren Dong2, Jian-Bo Zhao1, Wen-Bing Xia2, Xin Zhao1, Jing-Wei Ji1, Mei-Feng Ye1, Yan-Guang Sun2, Yuan-Yuan Yao1, Dan Song1, Zhao-Gang Liang1, Shan-Jiang Hu2, Dun-He Yu2, Xia Hou2, Wei Shi2, Hua-Guo Zang2, Jing-Feng Xiang1, Xiang-Kai Peng1 & Yu-Zhu Wang1
Nature Communications volume 9, Article number: 2760 (2018)
Atomic and molecular interactions with photons
Quantum metrology
Atomic clocks based on laser-cooled atoms are widely used as primary frequency standards. Deploying such cold atom clocks (CACs) in space is foreseen to have many applications. Here we present tests of a CAC operating in space. In orbital microgravity, the atoms are cooled, trapped, launched, and finally detected after being interrogated by a microwave field using the Ramsey method. Perturbing influences from the orbital environment on the atoms, such as varying magnetic fields and the passage of the spacecraft through Earth's radiation belt, are also controlled and mitigated. With appropriate parameter settings, closed-loop locking of the CAC is realized in orbit and an estimated short-term frequency stability close to 3.0 × 10−13τ−1/2 has been attained. The demonstration of long-term operation of a cold atom clock in orbit opens up possibilities for the application of space-based cold atom sensors.
Modern time keeping systems (TKS) on Earth and the global navigation satellite system (GNSS) rely heavily on atomic clocks. Traditional atomic clocks which use hot atoms, however, have almost reached their limits especially in regard to long-term stability. Laser cooling of atoms provides an approach to improve the performance of atomic clocks further1, particularly in applications that require precision time-keeping over long time scales. The atoms are first cooled by lasers, and then interrogated by a microwave field typically with the Ramsey method. The width of the central Ramsey fringe for a cold atom clock (CAC) is almost two orders of magnitude narrower than that for their hot atom counterparts. A variety of CACs have been demonstrated on the ground, notably atomic fountain clocks2,3,4 and optical frequency standards based on neutral atoms in a lattice or trapped ions5,6. Primary caesium fountain standards currently reach an uncertainty around 2 × 10−16, and the improved accuracy and stability of optical clocks motivates a future redefinition of the SI second7.
Currently, the best performing space atomic clocks, used in the GNSS, operate at a frequency stability of a few parts in 10¹⁵ per day8. Applying CACs in space is of great interest, not only for constructing the next-generation TKS and GNSS, but also for permitting deep space surveys and conducting more accurate tests of fundamental physics9,10,11,12,13. Moreover, other space applications of cold atom physics such as cold atom interferometry, optical clocks, and cold atom sensors also benefit from the techniques used in space CACs14.
Experiments related to cold atoms in microgravity have been successfully demonstrated in a drop tower, parabolic flights, and a sounding rocket15,16,17,18. These methods provide a microgravity environment ranging from several seconds (drop tower, parabolic flight) to several minutes (sounding rocket). Nevertheless, testing while in orbital operation is required to gauge the long-term operation of a space CAC. Several projects on space CACs, such as ACES, PARCS, and RACE, have been proposed in the last few decades19. For example, the ACES mission, which consists of a caesium CAC called PHARAO, a hydrogen maser, as well as a package for frequency comparisons and distribution, aims to search for drifts in fundamental constants and measure the gravitational red shift with improved precision20,21,22,23,24,25,26. The PHARAO clock is expected to operate in space with a frequency stability of 1.0 × 10−13τ−1/2 (τ is the average time in second) and an accuracy below 3 × 10−16 (ref. 24). Under the support of the China Manned Space Program (CMSP), we started a mission called Cold Atom Clock Experiment in Space (CACES) in 2011 with the goal of operating a rubidium CAC in space.
Operating a CAC in orbit presents great challenges. First, because of the limited resources on board a spacecraft, weight, volume, and power consumption must be greatly reduced compared with ground-based fountain clocks. Second, the CAC must pass the mechanical, thermal, and electromagnetic compatibility tests specified for space missions. Third, all operations of the CAC must be automated and all units must be maintained without any manual adjustment. Fourth, the CAC must be robust in the orbital environment against, for example, variations in Earth's magnetic field and impacts from high-energy particles. Finally, the CAC must be designed to work in microgravity.
The principle of our rubidium (87Rb) space CAC is shown in Fig. 1. Because 87Rb has a smaller collision shift than 133Cs, better long-term performance can be expected with more relaxed control of the atomic density27,28. The atoms are cooled and trapped in a magneto-optical trap (MOT); the cooled atoms are then launched using the moving molasses technique. Unlike on the ground, cold atoms launched in microgravity move in a straight line at constant velocity. After state selection, the cold atoms are interrogated by a microwave field and their state is detected through laser-excited atomic fluorescence.
Principle and structure of the space cold atom clock (CAC). The capture zone is a magneto-optical trap (MOT) with a folded beam design. The ring interrogation cavity is used for the microwave field to interrogate the cold atoms. In the detection zone, cold atoms in both hyperfine states are detected. The clock signal is obtained by feeding the error signal to the frequency of microwave source
The whole system consists of four units: physics package, optical bench29, microwave source, and control electronics. The main part of the physics package is a titanium alloy vacuum tube for which a vacuum is maintained to better than 1 × 10−7 Pa30. A ring cavity inside the vacuum is used for the Ramsey interrogation of the cold atoms31. Three layers of Mumetal are used to shield the magnetic field in the interrogation cavity, with only one layer shielding the capture zone. The magnetic field in the interrogation region is stabilized to better than 5 nT using a servo loop and a magnetic field coil, which compensate for field variations arising from the motion of the spacecraft in orbit32.
The MOT is formed using a pair of anti-Helmholtz coils and two laser beams with independently controlled frequencies. Each beam is folded to create the multiple trapping beams required33. The intensity of each trapping beam is about 3.8 mW cm−2 and the geometry is designed so that a frequency shift between the beams forms a moving molasses that launches atoms towards the interrogation region with a velocity
$$v = \left( {\omega _1 - \omega _2} \right){\mathrm{/}}k,$$
where ω1 and ω2 are the frequencies of the two laser beams, and k is the wave vector. Launch velocities ranging from 0.6 to 6.0 m s−1 are accessible. With this process, about 5 × 10⁷ cold atoms are launched from the MOT zone when the loading time is set to 1.0 s.
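Eq. (1) can be checked numerically. The sketch below (assuming the plain geometry of Eq. (1), i.e. ignoring any projection factor from the folded beam layout, and a nominal 87Rb D2 wavelength) gives the beam frequency difference required for the 1.1 m s−1 launch velocity used later for clock operation.

```python
import numpy as np

LAMBDA_D2 = 780.24e-9               # nominal 87Rb D2 wavelength (m), assumed
k = 2 * np.pi / LAMBDA_D2           # wave vector magnitude (rad/m)

def launch_velocity(f1_hz, f2_hz):
    """Moving-molasses launch velocity from Eq. (1), v = (w1 - w2)/k."""
    return 2 * np.pi * (f1_hz - f2_hz) / k

delta_f = 1.1 * k / (2 * np.pi)     # frequency difference for v = 1.1 m/s
print(f"{delta_f / 1e6:.2f} MHz")   # about 1.41 MHz
```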
State selection is accomplished with a combination of microwave excitation and laser pushing. After being cooled, the atoms are concentrated in the state \(\left| {F = 2} \right\rangle\) and evenly distributed among the five magnetic sub-states. The microwave power in the state selection cavity is adjusted such that the atoms in \(\left| {F = 2,m_F = 0} \right\rangle\) are pumped to \(\left| {F = 1,m_F = 0} \right\rangle\) with an efficiency of almost 100%; a laser beam then pushes away all the other atoms, in the states \(\left| {F = 2,m_F \ne 0} \right\rangle\). The remaining atoms are used for microwave interrogation.
The Ramsey interrogation of the space CAC employs two microwave interaction zones separated by a distance D: the atoms undergo two subsequent interactions of duration Δt with the interrogating field, separated by a free evolution time T = D/v for cold atoms moving with velocity v in the microgravity of space. The full-width-at-half-maximum (FWHM) Δ of the central Ramsey fringe is directly related to the velocity34
$$\Delta = v/(2D) \qquad (T \gg \Delta t,\ \Delta \omega \ll \Omega ),$$
where Δω is the microwave frequency detuning from resonance, and Ω the Rabi frequency. Taking into consideration the dead time of the clock cycle and the size limitation, D = 217 mm is used in our space CAC.
Our space CAC has been tested in the laboratory and all of its performance metrics meet the design specifications35. The setup passed all mechanical, thermal, and electromagnetic compatibility tests required by CMSP. It was launched into orbit on 15 September 2016 with the Chinese space laboratory Tiangong-2 and put into operation the following day. Since then, the space CAC has been working in orbit under the management of Tiangong-2. Under almost continuous operation in microgravity, our space CAC has already been tested for over 15 months in orbit.
Clock operation
Figure 2 presents typical results of time-of-flight (TOF) signals in the detection region. The cold atoms are launched by the moving molasses technique at the velocity specified by Eq. (1), and detected by laser-excited atomic fluorescence. The transition probability p is found from
$$p = \frac{{N_2}}{{N_1 + N_2}},$$
where N1 and N2 denote the number of cold atoms in the states \(\left| {F = 1} \right\rangle\) and \(\left| {F = 2} \right\rangle\), respectively. With this normalized detection, the influence of fluctuations in atom number on the clock measurements is greatly reduced.
Typical time-of-flight (TOF) signal. Flight time begins when the cold atomic cloud is launched. The red and black curves correspond to the fluorescence from states \(\left| {F = 1,m_F = 0} \right\rangle\) and \(\left| {F = 2,m_F = 0} \right\rangle\), respectively. The launch velocity is 4 m s−1
Figure 3a–d presents typical central Ramsey fringes of the space CAC. The interrogation cavity and microwave frequency are pre-tuned to the atomic transition frequency between the two ground states \(\left| {F = 1,m_F = 0} \right\rangle\) and \(\left| {F = 2,m_F = 0} \right\rangle\). The injected microwave power is optimized by measuring the transition probability p, Eq. (3). When the power is such that each passage through the microwave cavity applies a π/2 pulse, all of the population flips from \(\left| {F = 1,m_F = 0} \right\rangle\) to \(\left| {F = 2,m_F = 0} \right\rangle\) on resonance. Unlike atomic fountains under gravity on the ground, the FWHM of the central Ramsey fringe in microgravity, Eq. (2), is linearly related to the launch velocity, as shown in Fig. 3e.
Ramsey fringes with different launch velocities. Central Ramsey fringes of the space CAC with launch velocities of a 4.0 m s−1, b 1.0 m s−1, c 0.8 m s−1, and d 0.6 m s−1, corresponding to central-fringe FWHMs of 7.3, 1.8, 1.4, and 0.9 Hz, respectively. Red lines are sinusoidal fits. e Dependence of the FWHM of the central Ramsey fringe on flight velocity. The red line represents the calculated result; black circles represent measured results. Each error bar is the standard deviation calculated from eight measurements
To determine the parameter settings for clock operation, we measured the signal-to-noise ratio (SNR) for different launch velocities. Even though a lower velocity gives a narrower FWHM (Fig. 3e), it also leads to a lower SNR, mainly due to the loss of cold atoms originating from the expansion of the cold atom cloud. Combining the contributions of both SNR and FWHM, a launch velocity of 1.1 m s−1, which corresponds to an FWHM of 2.0 Hz, yields optimal performance. At this velocity, an SNR of 440 at the half-maximum point of the central Ramsey fringe is achieved (Fig. 4b). Figure 4a presents a plot of the population oscillation vs. microwave power at the resonance frequency, used to determine the power required for a π/2 transition in each interaction zone. The calculated results agree well with the measured data, except for the oscillations at higher microwave power, mainly due to the velocity distribution of the cold atomic cloud and the inhomogeneities in the microwave amplitude. We focused on the first peak, from which we obtained a microwave power of about −66.5 dBm for the required π/2 transitions; the corresponding Ramsey fringes are given in Fig. 4b.
Atomic population oscillations after microwave interrogation. a Population oscillations vs. microwave power and b Ramsey fringes in microgravity with a launch velocity of 1.1 m s−1, used in closed-loop operation. The black circles represent measured data and the red lines are calculated results
Using the Ramsey fringes with 2.0 Hz FWHM, closed-loop operation of the CAC has been performed by feeding the error signal to the direct digital synthesizer (DDS) of the microwave source. Figure 5 gives the error signal for the transient stage of the closed-loop operation. After a transient time of about 300 s, the error signal is stabilized around zero, which means the microwave frequency is tightly locked to the atomic resonance. This operation was performed continuously for around 1 month before being intentionally interrupted.
The error signal fed to the frequency of microwave source. Deviation of the microwave frequency from the atomic transition after locking the clock signal to the DDS of the microwave source. The servo loop is activated at time t = 0
To maintain continuous clock operation for long periods in orbit, several issues distinct from typical ground conditions are of concern. First, the environmental magnetic field varies periodically by about 80 μT due to the motion of the spacecraft in low Earth orbit (LEO). Figure 6 shows the variation of the magnetic field inside the outer magnetic shield, monitored by a fluxgate magnetometer mounted around the state-selection cavity. The roughly 90 min period corresponds to the orbital period of the spacecraft around Earth. If left uncontrolled, this varying magnetic field would degrade the magnetic uniformity inside the interrogation zone and lead to clock frequency variations arising from the Zeeman effect. In the CAC, a magnetic field compensation loop is used, and the parameter settings of the loop were specified from the magnetic field detected by the fluxgate in orbit. The magnetic field along the trajectory of the cold atoms inside the interrogation zone is measured by exciting the magnetic-sensitive transition of rubidium24, and a variation of 4.5 nT is obtained when the loop is active. Such fluctuations lead to a clock frequency instability of about 1.7 × 10−16 on the timescale of the field fluctuations.
Magnetic field inside the outer magnetic shield vs. time as the spacecraft orbits in LEO. Each peak corresponds to one orbit, and the dips shown in some peaks correspond to the magnetic field when the spacecraft moves through the South Atlantic Anomaly (SAA) region
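The structure of the compensation loop is not published; the sketch below shows one plausible realization as a simple proportional-integral servo that converts the fluxgate reading into a correction of the interrogation-coil current. The gains, time step, and function name are illustrative assumptions, not the flight settings.

```python
def servo_step(b_measured_nT, b_target_nT, state, kp=0.5, ki=0.05, dt=1.0):
    """One update of a PI loop driving the compensation-coil current.

    Returns a current correction (arbitrary units) from the fluxgate error;
    kp, ki, and dt are placeholder values.
    """
    error = b_target_nT - b_measured_nT
    state["integral"] += error * dt
    return kp * error + ki * state["integral"]

state = {"integral": 0.0}
# e.g. the fluxgate reads 40 nT above the ~100 nT clock field during an orbit
correction = servo_step(140.0, 100.0, state)
```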
Second, at LEO altitudes, an area of the Van Allen radiation belt known as the South Atlantic Anomaly (SAA)36 is observed in our in-orbit tests (Fig. 6). A spacecraft in LEO passing through that region is exposed to high-energy particles. Long-term observations indicate that this radiation has no significant influence on laser cooling. However, it does interfere with the photodetectors inside the detection zone and introduces random spikes on the detected TOF signals. There are two interference cases: spikes located on the TOF signal, and spikes near the TOF signal (see Fig. 7). When spikes lie on the TOF signal itself (Fig. 7a), the data are discarded. When spikes are near the TOF signal but well separated from it (Fig. 7b), a window function is used to filter the TOF signal out of the contaminated trace. All these processes are automatically managed by the software of the control units. Because the case in Fig. 7b accounts for the majority of events, the influence of the discarded data (about 0.05% of the total) on the clock operation is slight and can be ignored.
Interfered TOF signals in the SAA region. Interference spikes located a on the TOF signal and b near the TOF signal when the spacecraft moves in the SAA region. Time t = 0 is the start time of the detection process
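The spike handling can be summarized in a few lines of code. The sketch below is a minimal illustration, assuming spikes are identified by their fast sample-to-sample jumps; the actual flight software criteria are not published.

```python
import numpy as np

def clean_tof(t, signal, window, spike_threshold):
    """Handle radiation spikes on a TOF trace (cf. Fig. 7).

    window: (t_start, t_stop) bracketing the expected TOF peak. Returns the
    windowed trace, or None when a spike lies on the peak and the cycle
    must be discarded.
    """
    t, signal = np.asarray(t, float), np.asarray(signal, float)
    in_window = (t >= window[0]) & (t <= window[1])
    # spikes are far faster than the TOF peak: flag large sample-to-sample jumps
    spikes = np.abs(np.diff(signal, prepend=signal[0])) > spike_threshold
    if np.any(spikes & in_window):
        return None                             # case of Fig. 7a: discard
    return np.where(in_window, signal, 0.0)     # case of Fig. 7b: window out
```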
Third, during most of Tiangong-2's flight the microgravity level is stationary, with fluctuations of about a few parts in 10⁴ g, which makes no difference to the clock operation. However, when the spacecraft changes attitude, the microgravity level inside varies over a large range. In this instance the clock data are invalid and are automatically discarded by the software. Nonetheless, adjustments in spacecraft attitude are infrequent and predictable in flight, so their influence on the clock operation is controllable. Apart from these three influences, other effects such as fluctuations in ambient temperature and atmospheric pressure are also well managed.
As mentioned above, a longer interrogation time leads to a narrower linewidth but also to more loss of cold atoms, which reduces the SNR of the Ramsey fringes. Using the SNR of 440 and FWHM of 2.0 Hz, the short-term frequency stability at an averaging time τ is expected to be close to 3.0 × 10−13τ−1/2 with a clock period of 2.0 s37. There is no second clock on board and no frequency dissemination link to the ground, so we have no reference with which to evaluate the clock stability in orbit. However, we have verified the performance of the CAC in our laboratory by measuring its frequency stability against an H maser, as shown in Fig. 8. The calculated results estimated from the measured SNR and FWHM of the central Ramsey fringes are compared with the ground-based measurements (Fig. 8). The estimated instability in orbit is more than a factor of six lower than on Earth, which we attribute to the microgravity environment. Such ground-based measurements hint at the long-term performance of the space CAC. Currently, the frequency stability of the space CAC is mainly limited by the number of detected atoms. This limitation will be addressed in the next space CAC for the Chinese space station by increasing the number of cold atoms using a 2D MOT and higher power for the cooling lasers. As the collision shift is low, more 87Rb atoms can be used without degrading the accuracy of the CAC.
Frequency stability of the space clock. Black circles represent the measured total deviation against an H Maser on the ground; the black dashed line represents the predicted frequency stability calculated using the measured SNR and FWHM on the ground. The cold atomic cloud is launched downward with a velocity of 1 m s−1 on the ground. The red dashed line represents the predicted frequency stability in orbit. The error bars represent uncertainty in the total deviation estimator
We have realized laser cooling of 87Rb atoms and the launching of cold atoms at low velocity in microgravity, demonstrated long-term closed-loop operation, and estimated the performance of the CAC in orbit. Since launch, our space CAC has been working in orbit for more than 15 months, and its performance continues as designed. We anticipate our demonstration to be a starting point for further tests of space-based cold atom sensors, for example in the construction of the next-generation TKS and GNSS in deep space, as well as optical clocks, atom interferometers, and atomic gyroscopes. Furthermore, the experience from this mission, especially the data related to the orbital environment, will also benefit precision physics experiments such as the preparation of ultra-cold quantum gases.
Setup of the space CAC
The space CAC (Supplementary Figure 1) consists of four sub-systems: physics package, optical bench, microwave source, and control electronics.
The physics package is an ultra-high vacuum (UHV) tube surrounded by a three-layer magnetic shield, in which the rubidium atoms are cooled, state-prepared, interrogated by the microwave field, and detected. The UHV tube is closed by a rubidium base on one side and by two 2 l s−1 ion pumps on the other side. The rubidium source is covered by a thin film with tens of 0.1-mm-diameter holes to avoid leakage of liquid rubidium in microgravity. The atom flux is regulated by the temperature of the rubidium base. The rubidium vapour diffusing into the capture zone is captured and cooled by a MOT. Unlike the traditional configuration with six laser beams, the MOT we use is compact, with only two input laser beams, employing a folded-optical-path method. Its optical path and a photograph are shown in Supplementary Figure 2. Following the capture zone, a TE011 cylindrical microwave cavity with a cut-off aperture of 13 mm diameter is used as the state-selection cavity. To avoid rubidium migration from the capture region to the other zones, a graphite getter is inserted in the tube between the capture zone and the state-selection cavity.
The interrogation cavity of the CAC is designed as a rectangular waveguide cavity (Supplementary Figure 3), based on the U-type interrogation cavity. The microwave signal propagates symmetrically along the two guided wave zones (TE107 mode) and forms a standing wave field at each microwave interaction zone (TE201 mode). Therefore, the one-way-flight cold atoms interact with the interrogating microwave field twice along their trajectory. This cavity is made of titanium alloy coated with silver to a thickness of roughly 5 μm. The measured loaded quality factor is approximately 4200. To mitigate the influences of the fluctuations in environmental temperature on the cavity resonance frequency, an active thermal controller is used to stabilize the cavity temperature to about 25 °C with a stability of ±0.01 °C.
Finally, the cold atoms enter the detection zone and pass through four laser beams in succession. The laser beams are set perpendicular to the trajectory. The first beam is circularly polarized and forms a standing wave tuned to the transition \(\left| {5{}^2S_{1/2},F = 2} \right\rangle \to \left| {5{}^2P_{3/2},F = 3} \right\rangle\) to detect the population in state \(\left| {5{}^2S_{1/2},F = 2} \right\rangle\). Tuned to the same transition as the first, the second beam is a travelling wave that pushes away the atoms in state \(\left| {5{}^2S_{1/2},F = 2} \right\rangle\) after their detection. The third beam is tuned to the transition \(\left| {5{}^2S_{1/2},F = 1} \right\rangle \to \left| {5{}^2P_{3/2},F = 2} \right\rangle\) and pumps the atoms in state \(\left| {5{}^2S_{1/2},F = 1} \right\rangle\) to state \(\left| {5{}^2S_{1/2},F = 2} \right\rangle\). Finally, the fourth beam is set up like the first but detects the remaining atoms. As designed, the detection system (Supplementary Figure 4) is a compact integrated opto-mechanical system. All the optical components are glued in place to improve the thermal stability of the detection system, which is athermalized. The induced fluorescence is recorded by two 10 mm × 10 mm photodiodes with an efficiency of 2%, and the photocurrent of each photodiode is converted into voltage using amplifiers with a gain of 10⁷ V A−1. The photodetection has a noise level of less than \(3\,{{\upmu }}{{\mathrm V}}\,{\mathrm{Hz}}^{ - 1/2}\) over frequencies from 1 to 100 Hz, which can be ignored in the experiment. The two photodetection signals are digitized by the control electronics to calculate the transition probability.
The vacuum tube is built from four individual chambers, each connected to the others by flanges. Each flange is sealed by Delta seals and indium, whereas the optical windows are sealed by indium. To improve the vacuum level of the interrogation zone, two groups of four getters are distributed on the internal sides of the vacuum chamber at the two ends of the interrogation zone. All these methods allow the tube to maintain UHV for a long time even if the ion pumps do not work. The vacuum level of the CAC in orbit is less than 1 × 10−7 Pa.
The vacuum tube is surrounded by three cylindrical Mumetal shields (Supplementary Figure 5): inner, middle, and outer. The total magnetic attenuation of the three-layer shield is more than 36 dB, and the variation of the magnetic field inside the shield along the atom trajectory is less than 2 nT in a static geomagnetic field (Supplementary Figure 6). The interrogation zone is inside the inner shield; a solenoid coil surrounds the interrogation cavity to provide a clock magnetic field of about 100 nT. The middle shield encloses the vacuum tube from the state-selection cavity to the detection zone. One compensation coil is mounted close to the inner magnetic shield aperture to improve the magnetic homogeneity; a second coil is wound around the state-selection cavity to define the magnetic field for state selection. The outer shield encloses the whole tube, and a coil is mounted on the capture zone. Two magnetic fluxgates are placed inside the magnetic shield. The magnetic field inside the middle shield is detected by the fluxgate mounted near the detection zone and used as feedback to the current in the interrogation coil to compensate for fluctuations of the external magnetic field. The fluxgate mounted near the state-selection cavity is used for monitoring.
The optical bench is designed as a compact, robust optical system; its architecture is shown in Supplementary Figure 7. The laser sources we use are distributed Bragg reflector (DBR) diodes. Each laser diode (LD) is followed by a 40 dB isolator to eliminate feedback from the surfaces of the optical components. The frequency of cooling laser LD1 is locked to the saturated absorption crossover resonance of the transition \(\left| {5{}^2S_{1/2},F = 2} \right\rangle \to \left| {5{}^2P_{3/2},F = 2,3} \right\rangle\) of the 87Rb D2 line through an acousto-optic modulator. The whole frequency stabilization process is automated and managed by a microcontroller unit. The laser beam from LD1 is divided into four beams, which are coupled into different fibres and used as the two cooling laser beams, the state-selection laser beam, and the probing laser beam. The frequency of each beam is shifted by an acousto-optic modulator before the fibre coupler. To improve reliability, LD2 is used as a backup for cooling laser LD1. The repumping laser diode is frequency-locked to the saturated absorption crossover resonance of the transition \(\left| {5{}^2S_{1/2},F = 1} \right\rangle \to \left| {5{}^2P_{3/2},F = 1,2} \right\rangle\) of the 87Rb D2 line. Its beam passes through an acousto-optic modulator and is split into two beams: one is coupled into a fibre and acts as the repumping beam for the dual-level detection process, and the other is combined with the cooling laser beam for the laser cooling process.
The base of the optical bench is made of SiC-reinforced Al–Si alloy; its size is 300 mm × 290 mm × 30 mm. All optical components have been validated to comply with the space-application criteria, and their mounts were specially designed and manufactured. Additionally, the temperature of the optical bench is stabilized to about 25 °C, the optimum operating temperature of this system. Both active and passive methods are used for temperature stabilization: the active temperature control compensates for the thermal loss of the optical bench, whereas the passive temperature control dissipates excess heat from the hot components and maintains a maximum temperature gradient below 1.0 °C across the whole optical bench.
The microwave source is based on frequency multiplication of an ultra-low phase noise quartz oscillator (BVA8607, OSA) up to 6.834 GHz (Supplementary Figure 8). The mass is 5 kg and the power dissipation is 20 W. The BVA8607 oscillator output signal has excellent spectral purity but is fragile under the mechanical vibration of the rocket's launch phase. To circumvent this problem, the oscillator is placed in an aluminium enclosure with polyurethane hemisphere vibration isolators attached to each inner sidewall. With this method, the BVA8607 oscillator passed the space vibration qualification test.
The oscillator frequency (5 MHz) is multiplied to 100 MHz which is used to phase-lock a local 100 MHz quartz oscillator with a bandwidth of about 100 Hz. The purpose of this loop is to optimize the frequency noise spectrum of the 100 MHz signal at high Fourier frequencies. The 100 MHz signal is multiplied to 200 MHz and then split into two signals. One is used to drive a DDS and two frequency down-converter mixers; the other is multiplied to 7 GHz. The 7 GHz signal is mixed with a 7.034 GHz signal from a dielectric resonator oscillator (DRO) to produce a fractional frequency signal (about 34.68 MHz), which is compared with the output signal of the DDS to phase-lock the DRO. Finally, the output signal of the DRO is split into two signals and each signal is frequency down-converted to 6.834 GHz. One 6.834 GHz signal is fed to the state-selection cavity whereas the other is fed to the interrogation cavity. Each microwave signal can be switched off by more than 65 dB to avoid significant microwave leakage. In addition, the power of the microwave signals can be adjusted to a resolution of 0.25 dB by digital attenuators.
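The arithmetic of the synthesis chain can be verified directly from the quoted frequencies; in the check below, the ×70 multiplication from 100 MHz to 7 GHz is inferred from the text rather than stated in it.

```python
# Consistency check of the frequency-synthesis chain (all values in Hz)
f_rb   = 6.834682611e9        # 87Rb ground-state hyperfine frequency
f_100  = 100e6                # phase-locked local quartz oscillator
f_200  = 2 * f_100            # drives the DDS and the down-converter mixers
f_dro  = f_rb + f_200         # dielectric resonator oscillator, ~7.034 GHz
f_beat = f_dro - 70 * f_100   # beat against the multiplied 7 GHz signal
f_out  = f_dro - f_200        # down-converted signal fed to the cavities

print(f"DDS comparison frequency: {f_beat / 1e6:.2f} MHz")  # ~34.68 MHz
print(f"output to cavities: {f_out / 1e9:.6f} GHz")         # 6.834683 GHz
```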
The phase noise spectral density of the 6.834 GHz signal is dominated by that of the 5 MHz oscillator; the contribution of the other elements is 10 dB lower. Using the measured spectrum (Supplementary Figure 9), the frequency stability limit set by the microwave source4 is estimated to be 1 × 10−13τ−1/2, which is consistent with the mission requirements.
The control electronics consists of two parts: the microcontroller unit and the FPGA unit. The microcontroller unit manages the data communication and calculations whereas the FPGA unit controls the operation of the CAC. The microcontroller unit is connected to the FPGA unit by a RS422 line for data exchanges. The FPGA unit sends trigger signals to other sub-systems at different phases of the clock cycle, for instance the AOM frequency changes, the microwave source frequency or the microwave switch operations. The FPGA unit also receives digitized analogue signals, the main signal being the detection fluorescence signal, and implements servo loops such as temperature control and magnetic field compensation.
Experimental method in orbit
The timing sequence of the laser trapping and cooling is presented in Supplementary Figure 10. This process is composed of four stages: capture, pre-cooling, launching, and post-cooling. The power and frequency of the cooling beams are regulated by the AOMs. First, the rubidium atoms are captured in the MOT. After hundreds of milliseconds, the atoms are cooled to about 100 μK and the magnetic field of the MOT is switched off. A few milliseconds later, the cooling process changes to a pre-cooling stage lasting from 4 to 11 ms, depending on the launch velocity, in which the atoms are further cooled by polarization gradient cooling. Thereafter, the cold atoms are launched with the moving optical molasses. During the launching stage, the atoms are slightly heated. To cool the atoms in the moving frame, adiabatic cooling, by slowly ramping down the intensity of the cooling beams, together with polarization gradient cooling is used in the post-cooling stage. Finally, the atoms are cooled to several microkelvin at the end of the cooling phase.
If the cold atom cloud has a small density, its spatial distribution can be approximated as Gaussian. In this experiment, Gaussian fitting is used to deduce the number of detected atoms and their temperature. The number of cold atoms is obtained from
$$N = \frac{{{\int} {U_{{\mathrm{PD}}}\left( t \right){\mathrm {d}}t} }}{{\gamma _P \cdot \Delta t \cdot c \cdot \rho _{{\mathrm{PD}}} \cdot G \cdot E_{{\mathrm{Rb}}}}}$$
where \(\gamma _P\), \(\Delta t\), \(c\), UPD(t), \(\rho _{{\mathrm{PD}}}\), and G represent respectively the scattering rate of the atoms, the time taken for the atoms to cross the probing beam, the fluorescence collection efficiency of the detection system, the TOF signal lineshape in volts, the responsivity of the photodiode, and the gain of the current amplifier. \(E_{{\mathrm{Rb}}}\) is the D2 transition energy of 87Rb, which equals \(2.54 \times 10^{ - 19}{\mathrm{ J}}\). In this experiment, we have \(\gamma _p = 1.7 \times 10^7{\mathrm{ s}}^{ - 1}\), \(c = 2\%\), \(\rho _{{\mathrm {PD}}} = 0.5{\mathrm{ A W}}^{ - 1}\), and G = 1.1 × 10⁹ V A−1. By substituting these values into the equation above, the atomic number can be obtained.
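The atom-number evaluation can be written compactly; the sketch below uses the parameter values quoted above, with np.trapz standing in for the integral. Function and argument names are illustrative.

```python
import numpy as np

GAMMA_P = 1.7e7      # scattering rate (s^-1)
C_EFF   = 0.02       # fluorescence collection efficiency
RHO_PD  = 0.5        # photodiode responsivity (A W^-1)
GAIN    = 1.1e9      # current-amplifier gain (V A^-1)
E_RB    = 2.54e-19   # 87Rb D2 transition energy (J)

def atom_number(u_pd_volts, t_seconds, dt_cross):
    """Atom number from the integrated TOF signal (equation above).

    u_pd_volts: TOF samples (V); t_seconds: sample times (s);
    dt_cross: time for an atom to cross the probing beam (s).
    """
    integral = np.trapz(u_pd_volts, t_seconds)          # V s
    return integral / (GAMMA_P * dt_cross * C_EFF * RHO_PD * GAIN * E_RB)
```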
From the ground-based tests, the initial size of the cold atom cloud is approximately 1 mm, which is small compared with the size of the cloud detected in the detection zone. From the principle of equipartition of energy, its temperature is found from
$$T = \frac{m}{{k_{\mathrm {B}}}}\frac{{\sigma _t^2 \cdot v_0^2}}{{\Delta t^2}},$$
where m is the mass of a single rubidium atom, kB the Boltzmann constant, v0 the velocity of the atomic cloud while passing through the probing beam, Δt the time period between launch and detection, and σt the Gaussian radius of the TOF signal, which represents the size of the atomic cloud.
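Evaluating this expression is equally simple; the numbers in the example below are purely illustrative and chosen to land in the several-microkelvin range quoted for the post-cooling temperature.

```python
M_RB = 1.443e-25     # mass of one 87Rb atom (kg)
K_B  = 1.381e-23     # Boltzmann constant (J K^-1)

def cloud_temperature(sigma_t, v0, dt_flight):
    """Cloud temperature from the Gaussian TOF width (equation above)."""
    return (M_RB / K_B) * (sigma_t * v0) ** 2 / dt_flight ** 2

# an 8 ms Gaussian TOF width at v0 = 1.1 m/s after 0.5 s of flight: ~3 uK
T = cloud_temperature(8e-3, 1.1, 0.5)
```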
The data that support the findings of this study are available from the corresponding authors upon request.
Metcalf, H. J. & van der Straten, P. Laser Cooling and Trapping (Springer-Verlag, New York, 1999).
Wynands, R. & Weyers, S. Atomic fountain clocks. Metrologia 42, S64–S79 (2005).
Abgrall, M. et al. Atomic fountains and optical clocks at SYRTE: status and perspectives. C. R. Phys. 16, 461–470 (2015).
Heavner, T. P. First accuracy evaluation of NIST-F2. Metrologia 51, 174–182 (2014).
Ludlow, A. D., Boyd, M. M., Ye, J., Peik, E. & Schmidt, P. O. Optical atomic clocks. Rev. Mod. Phys. 87, 637–701 (2015).
Poli, N., Oates, C. W., Gill, P. & Tino, G. M. Optical atomic clocks. Riv. Del. Nuovo Cim. 36, 555–624 (2013).
Riehle, F. Towards a redefinition of the second based on optical atomic clocks. C. R. Phys. 16, 506–515 (2015).
Senior, K. L., Ray, J. R. & Beard, R. L. Characterization of periodic variations in the GPS satellite clock. GPS Solut. 12, 211–225 (2008).
Ely, T. A., Seubert, J. & Bell, J. Advancing navigation, timing, and science with the Deep Space Atomic Clock. In SpaceOps 2014 13th International Conference on Space Operations (American Institute of Aeronautics and Astronautics, 2014).
Reynaud, S., Salomon, C. & Wolf, P. Testing general relativity with atomic clocks. Space Sci. Rev. 148, 233–247 (2009).
Hawking, S. W. & Ellis, G. F. R. The Large Scale Structure of Space-Time (Cambridge University Press, Cambridge, 1973).
Derevianko, A. & Pospelov, M. Hunting for topological dark matter with atomic clocks. Nat. Phys. 10, 933–936 (2014).
Graham, P., Hogan, J. M., Kasevich, M. A. & Rajendran, S. New method for gravitational wave detection with atomic sensors. Phys. Rev. Lett. 110, 171102 (2013).
Tino, G. M. et al. Atom interferometers and optical atomic clocks: new quantum sensors for fundamental physics experiments in space. Nucl. Phys. B Proc. Suppl. 166, 159–165 (2007).
Stern, G. et al. Light-pulse atom interferometry in microgravity. Eur. Phys. J. D 53, 353–357 (2009).
van Zoes, T. et al. Bose-Einstein condensation in microgravity. Science 328, 1540–1543 (2010).
Müntinga, H. et al. Interferometry with Bose-Einstein condensates in microgravity. Phys. Rev. Lett. 110, 093602 (2013).
Kulas, S. et al. Miniaturized lab system for future cold atom experiments in microgravity. Microgravity Sci. Technol. 29, 37–48 (2017).
Lämmerzahl, C. et al. Experiments in fundamental physics scheduled and in development for the ISS. General Relativ. Gravit. 36, 615–649 (2004).
Article ADS MATH Google Scholar
Laurent, P. et al. A cold atom clock in absence of gravity. Eur. Phys. J. D 3, 201–204 (1998).
Bize, S. et al. Design of the cold atom PHARAO space clock and initial test results. J. Phys. B 38, S449–S468 (2005).
Laurent, P. et al. Design of the cold atom PHARAO space clock and initial test results. Appl. Phys. B 84, 683–690 (2006).
Cacciapuoti, L. & Salomon, C. Space clocks and fundamental tests: the ACES experiment. Eur. Phys. J. Spec. Top. 172, 57–68 (2009).
Laurent, P., Massonnet, D., Cacciapuoti, L. & Salomon, C. The ACES /PHARAO space mission. C. R. Phys. 16, 540–552 (2015).
Moric, I., De Graeve, C., Grosjean, O. & Laurent, P. Hysteresis prediction inside magnetic shields and application. Rev. Sci. Instrum. 85, 075117 (2014).
Peterman, P., Gibble, K., Laurent, P. & Salomon, C. Microwave lensing frequency shift of the PHARAO laser-cooled microgravity atomic clock. Metrologia 53, 899–907 (2016).
Fertig, C. & Gibble, K. Measurement and cancellation of the cold collision frequency shift in an 87 Rb fountain clock. Phys. Rev. Lett. 85, 1622–1625 (2000).
Sortais, Y. et al. Cold collision frequency shifts in a 87Rb atomic fountain. Phys. Rev. Lett. 85, 3117–3120 (2000).
Ren, W. et al. Highly reliable optical system for a rubidium space cold atom clock. Appl. Opt. 55, 3607–3614 (2016).
Ren, W. et al. Development of an ultra-high vacuum system for space cold atom clock. Vacuum 116, 54–59 (2015).
Ren, W., Gao, Y. C., Li, T., Lü, D. S. & Liu, L. Microwave interrogation cavity for the rubidium space cold atom clock. Chin. Phys. B 6, 060601 (2016).
Li, L. et al. Automatic compensation of magnetic field for a rubidium space cold atom clock. Chin. Phys. B 25, 073201 (2016).
Qu, Q. Z. et al. Integrated design of a compact magneto-optical trap for space applications. Chin. Opt. Lett. 13, 061405 (2015).
Ramsey, N. F. A molecular beam resonance method with separated oscillating fields. Phys. Rev. 78, 695–699 (1950).
Li, L. et al. Initial tests of a rubidium space cold atom clock. Chin. Phys. Lett. 33, 063201 (2016).
Johnston, A. H. Radiation effects in optoelectronic devices. IEEE Trans. Nucl. Sci. 60, 2054–2073 (2013).
Riehle, F. Frequency Standards: Basics and Applications (Wiley, Weinheim, 2004).
We would like to thank our colleagues at the Technology and Engineering Center for Space Utilization, CAS, especially Yidong Gu, Min Gao, Guangheng Zhao, Congming Lü, Hongen Zhong, and many others, for their many discussions and technical support. We also thank Wenrui Hu of the Institute of Mechanics, CAS, and Tianchu Li of the National Institute of Metrology for their long-term discussions on the space applications of the atomic clock. We also thank Shenggang Liu and Yuanci Gao of the University of Electronic Science and Technology of China for their work on the microwave cavity, and the Nanopa Vacuum Co. for their help with the vacuum tube. We thank K. Gibble, H. Metcalf, and Xinye Xu for their many suggestions on the manuscript. We acknowledge support from the China Manned Space Engineering Office, the Chinese Academy of Sciences, and the National Key R&D Program of China (Grant No. 2016YFA0301504).
Key Laboratory of Quantum Optics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, 201800, China
Liang Liu, De-Sheng Lü, Tang Li, Qiu-Zhi Qu, Bin Wang, Lin Li, Wei Ren, Jian-Bo Zhao, Xin Zhao, Jing-Wei Ji, Mei-Feng Ye, Yuan-Yuan Yao, Dan Song, Zhao-Gang Liang, Jing-Feng Xiang, Xiang-Kai Peng & Yu-Zhu Wang
Research Center of Space Laser Information Technology, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, 201800, China
Wei-Biao Chen, Zuo-Ren Dong, Wen-Bing Xia, Yan-Guang Sun, Shan-Jiang Hu, Dun-He Yu, Xia Hou, Wei Shi & Hua-Guo Zang
L. Liu and Y.-Z.W. conceived the research. L. Liu, D.-S.L., W.-B.C., T.L., Q.-Z.Q., B.W., L. Li, W.R., J.-B.Z., X.Z., J.-W.J., M.-F.Y., Y.-Y.Y., J.-F.X. and X.-K.P. designed the experiments. All authors designed and developed the setup. D.-S.L., J.-B.Z., W.R., and M.-F.Y. developed the physics package. W.-B.X., B.W., Z.-R.D., Y.-G.S., and Q.-Z.Q. developed the optical bench. T.L. and W.S. developed the microwave source. L. Li, X.Z., J.-W.J., S.-J.H., D.-H.Y., X.H., H.-G.Z., and Y.-Y.Y. developed the control electronics and software. W.R., J.-W.J., X.-K.P., J.-F.X., L. Liu, D.-S.L., T.L., Q.-Z.Q., B.W., and L. Li contributed to the data collection and discussed the results. D.S. and Z.-G.L. were responsible for project management. Y.-Z.W. supervised the whole project.
Correspondence to Liang Liu, De-Sheng Lü or Wei-Biao Chen.
Liu, L., Lü, DS., Chen, WB. et al. In-orbit operation of an atomic clock based on laser-cooled 87Rb atoms. Nat Commun 9, 2760 (2018). https://doi.org/10.1038/s41467-018-05219-z
Why ket and bra notation?
So, I've been trying to teach myself about quantum computing, and I found a great YouTube series called Quantum Computing for the Determined. However, why do we use ket/bra notation? Normal vector notation is much clearer (okay, clearer because I've spent a couple of weeks with it versus two days, but still). Is there any significance to this notation? I'm assuming it's used for a reason, but what is that reason? I guess I just don't understand why you'd use ket notation when you have perfectly good notation already.
quantum-mechanics soft-question hilbert-space notation
heather
$\begingroup$ I don't see how this question is about quantum computing -- the ket notation is much older. As you probably know, it originates from cutting a scalar product $\langle\cdot,\cdot\rangle$ in two halves. Whether it is better is probably mostly a matter of taste. Honestly, this seems more a history-of-science question. $\endgroup$ – Norbert Schuch Sep 25 '16 at 14:22
$\begingroup$ @NorbertSchuch, well, actually, I didn't know it originates from cutting a scalar product into two halves, though I do know it is older than quantum computing. I'm asking about the significance of this notation and what it is used for, especially in the context of quantum computing. I don't see how this is a "history of science" question. It seems more practical. $\endgroup$ – heather Sep 25 '16 at 14:24
$\begingroup$ You should translate $|0\rangle=\vec e_0$, $|1\rangle=\vec e_1$, etc., where $\vec e_i$ are the (canonical) basis vectors. It's really just a different way of writing vectors, with some small advantages/disadvantages in specific situations. (For one thing, it can be more compact as you omit the $\vec e$ and don't need to use subscripts.) $\endgroup$ – Norbert Schuch Sep 25 '16 at 15:45
$\begingroup$ It's important to note that notation for vectors isn't enough -- you need to notate covectors too. In fact, you need to be able to work with vectors, covectors and operators all in the same equation! You really want the algebra to follow the analogy with matrix algebra, with vectors acting like columns and covectors acting like rows. $\endgroup$ – user5174 Sep 26 '16 at 0:36
$\begingroup$ @heather Here's an example. Let's say you're working with the free particle in introductory quantum mechanics, where the "vector" $ \psi $ has infinitely many components. With traditional notation, you can't keep track of both (1) whether $ \psi $ belongs to the dual or regular space (in which case you have to write out the components to demonstrate whether they are in a column or row) and (2) all the elements (because there are infinitely many of them). Bra-ket is nicer there. $\endgroup$ – QuantumFool Sep 27 '16 at 1:05
Indeed, I agree with you: standard notation is, in my personal view, already sufficiently clear, and bra-ket notation should be used when it is really useful. A typical case in QM is when a state vector is determined by a set of quantum numbers like this $$\left|l m s \right\rangle$$ Another case concerns the use of the so-called occupation numbers $$\left|n_{k_1} n_{k_2}\right\rangle$$ in QFT. Also the qubit notation for states $\left|0\right\rangle$, $\left|1\right\rangle$ in quantum information theory is meaningful... Finally, the use of bra-ket notation permits one to denote orthogonal projectors onto subspaces in a very effective manner $$\sum_{|m|\leq l}\left|l m \right\rangle \left\langle l m\right|\:.$$
A reason for its, in my view, nowadays not completely justified use is historical and due to P.A.M. Dirac's famous textbook. In the 1930s, mathematical objects like Hilbert spaces, dual spaces, and self-adjoint operators were not very familiar mathematical tools to physicists. (The modern notion of Hilbert space was introduced in 1932 by J. von Neumann in his less famous textbook on the mathematical foundations of QM.) Dirac proposed a very nice notation which embodied a fundamental part of the formalism. However, it also has some drawbacks. In particular, manipulating non-self-adjoint operators, e.g., symmetries, turns out to be very cumbersome within the bra-ket formalism. If $A$ is self-adjoint, in $\left\langle \psi\right| A\left| \phi\right\rangle$ the operator can be viewed, indifferently, as acting on the left or on the right, preserving the final result. If the operator is not self-adjoint this is false.
I think bra-ket notation is a very useful tool, but it should be used "cum grano salis" in QM. In my view, $\left|\psi\right\rangle$, where $\psi$ is a quantum-mechanical wavefunction, may be a dangerous notation, especially for students, as it generates misleading questions like this: is $A\left|\psi\right\rangle = \left|A\psi \right\rangle$?
ADDENDUM. I understand that I interpreted the question into a broader view, regarding the use of bra-ket notation in QM rather than the restricted field of quantum information theory.
Valter Moretti
$\begingroup$ So there isn't really any significance to it, it is just like vector notation only more confusing? Or is there some uses for it that make it worth it? $\endgroup$ – heather Sep 25 '16 at 14:47
$\begingroup$ I started 20 years ago to deal with mathematics of Quantm Theories, I never found a cogent reason to always use bra ket notation. You can check my pse answers and you see that I have rarely used that notation...though I think it is very valuable in some cases. $\endgroup$ – Valter Moretti Sep 25 '16 at 14:53
$\begingroup$ What would be an example of those cases? $\endgroup$ – heather Sep 25 '16 at 14:57
$\begingroup$ If $A $ is self-adjoint, in $\langle \psi|A|\phi\rangle $ the operator can be viewed, indifferently, as acting on the left or on the right. If the operator is not self-adjoint this is false. In my view $|\psi\rangle $ where $\psi $ is a wavefunction, is a dangerous notation, especially for students, as it generates misleading questions like this, $A|\psi\rangle = |A\psi \rangle $? $\endgroup$ – Valter Moretti Sep 25 '16 at 15:07
$\begingroup$ IMO, the problem with $A$ not self-adjoint is not in the idea of $A$ acting on the left, but on the strange idea of writing $\langle A \psi | $ for the result of $\langle \psi | A$. (I also don't like $|A \psi \rangle$, but that's less weird since it at least gets the ordering of things right) $\endgroup$ – user5174 Sep 26 '16 at 0:41
What is "normal vector notation"? I've seen angle brackets with commas, parentheses, square brackets, $\hat{x}$, $\hat{i}$, column matrices, row matrices ... which of those is "normal", $(x|y)$, ...?
Bras and kets are just another notation, with the particular benefit that it distinguishes the vector space from its dual space.
edit after comment
Note that some of these are component notations, which do not work for quantum mechanics as the number of dimensions can be large or infinite.
garyp
$\begingroup$ Well, I've always seen vector notation as either $\begin{bmatrix}x\\y\end{bmatrix}$ or $(x, y)$. I personally think the first of these two is the best, as it is the most clear for calculations. Ket/bra notation seems very different from the notations seen in linear algebra books, such as these two. $\endgroup$ – heather Sep 25 '16 at 16:29
$\begingroup$ The two that you have seen are component notations, while bra and ket (and others) are a symbolic notation. Components do not work for quantum mechanics as the number of dimensions can be large or infinite. $\endgroup$ – garyp Sep 25 '16 at 16:54
$\begingroup$ In linear algebra (the math of vectors) and more importantly in multi-linear algebra (the math of tensors) I agree with @garyp in that there is no "normal" notation. The notation used should fit the problem domain. Sometimes, vectors in column form (or its dual in row form) are useful. But, in QM, Bra/Ket can be very useful. In General Relativity, a vector is most useful considered as a rank 1 tensor with distinctions for its dual (aka covector or one-form). The study of orthogonal functions brings about new ideas for vectors that don't fit the $(x,y)$ syntax. $\endgroup$ – K7PEH Sep 25 '16 at 17:35
I think there is a practical reason for ket notation in quantum computing, which is just that it minimises the use of subscripts, which can make things more readable sometimes.
If I have a single qubit, I can write its canonical basis vectors as $\mid 0 \rangle$ and $\mid 1 \rangle$ or as $\mathbf{e}_0$ and $\mathbf{e}_1$, it doesn't really make much difference. However, now suppose I have a system with four qubits. Now in "normal" vector notation the basis vectors would have to be something like $\mathbf{e}_{0000}$, $\mathbf{e}_{1011}$, etc. Having those long strings of digits typeset as tiny subscripts makes them kind of hard to read and doesn't look so great. With ket notation they're $\mid 0000\rangle$ and $\mid 1011\rangle$ etc., which improves this situation a bit. You could compare also $\mid\uparrow\rangle$, $\mid\to\rangle$, $\mid\uparrow\uparrow\downarrow\downarrow\rangle$, etc. with $\mathbf{e}_{\uparrow}$, $\mathbf{e}_{\to}$, $\mathbf{e}_{\uparrow\uparrow\downarrow\downarrow}\,\,$ for a similar issue.
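To make the correspondence concrete, here is a small numpy sketch (my own illustration, using the common convention that the bit string inside the ket is the binary index of the canonical basis vector):

```python
import numpy as np

def ket(bits: str) -> np.ndarray:
    """Canonical basis column vector for a ket label such as '1011'."""
    v = np.zeros((2 ** len(bits), 1))
    v[int(bits, 2)] = 1.0      # |1011> is e_11 in a 16-dimensional space
    return v

v = ket("1011")

# The same vector built as a tensor product of single-qubit kets:
e0 = np.array([[1.0], [0.0]])  # |0>
e1 = np.array([[0.0], [1.0]])  # |1>
w = np.kron(np.kron(np.kron(e1, e0), e1), e1)
print(np.allclose(v, w))       # True
```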
Nathaniel
First, this notation makes it very clear which objects are interpreted as elements of the primal space (kets) or elements of the dual space (bras).
The names "bra" and "ket" recall how the notation was formed: as the left and right halves of an inner product, the projection of the state $a$ along the measurement $\phi$ is the inner product(indicated by angle brackets) $\langle \phi , a \rangle$, which can be typographically broken into $\langle \phi \mid\,\mid a \rangle$, two objects which typographically hint vector-ness.
There is also a typewriter limitation that contributes to this notation (and rather too many notations that are just variants of two or more elements in a comma-separated list bounded by parentheses or square brackets: GCD, LCM, object generated by, meet, join, intervals with various endpoint conventions, sequences, object tuples, et al...). It is very time consuming to type a column vector on a typewriter. Typewriters do not have oversized parentheses for column vectors. This leads to strictly malformed constructions like "Let $A$ be a linear operator between $\Bbb{R}^2$ and $\Bbb{R}^2$, then $A \cdot (1,0)$ has ..." where a row vector is typed in a place that a column vector is required. In particular, this means that the most common form of vector in beginning linear algebra is hard to typeset and so was frequently typed incorrectly transposed.
Further, elements of the primal and dual spaces should be readily distinguished (to prevent unintentionally writing, e.g., $\sum_i \mid \mathrm{e}_i \rangle \mid \mathrm{e}_i \rangle$). However, the "obvious" solution is even harder to type: "$\sum_i \langle \mathrm{e}_i \mid \underline{\widehat{\mathrm{e}_i}}$" (and even with the full power of MathJax, as much time as I'm willing to spend on this necessarily has the primal vector pointing up instead of down).
Finally, the stuff one puts in a bra or ket is seldom a set of vector components. By the definitions that a mathematician uses, the components of a vector all come from the same field. This isn't going to work for states described by some continuous and some discrete variables, or by states with some variables in the primal space and some variables in the tangent space. (If we force this to work, we actually get direct sums of modules, not vector spaces.) So while we might like to put lists of state-describing numbers in a bra or a ket, the thing we get is not and cannot be a (formal) vector.
Eric Towers
$\begingroup$ IMO the whole column vector vs row vector issue is just an artifact of the over-reliance on matrices. I find nothing wrong with writing vectors $v \in \mathbb{R}^2 \equiv \mathbb{R}\times\mathbb{R}$ as tuples, i.e. $v = (v_0, v_1)$. If this leads to inconsistencies when it comes to matrix multiplication then that's an issue of the matrix notation, not the notation you use to represent vectors in some given space. And of course the component writings only work in the finite-dimensional case. Bra-ket notation avoids all these issues. $\endgroup$ – leftaroundabout Sep 26 '16 at 22:07
$\begingroup$ (Nevertheless: I rather prefer the maths convention of not using any special markup for vectors or dual vector at all; it should simply be declared in what space the quantity lives that some symbol refers to.) $\endgroup$ – leftaroundabout Sep 26 '16 at 22:09
$\begingroup$ @leftaroundabout : The idea of having no specific notation for primal versus dual vectors is ineffective in the quantum mechanical setting where the same symbols are used for an eigenstate and its dual. Consider $\langle n+1 \mid H \mid n+1 \rangle$. $\endgroup$ – Eric Towers Sep 27 '16 at 15:59
$\begingroup$ Well, that is mainly inefficient to write because you can't use $n+1$ as an identifier without wrapping in in a bra/ket. However, if you wrote it $\langle \psi_{n+1} | H | \psi_{n+1}\rangle$ then this could perfectly well be translated to an ordinary inner-product expression $\langle \psi_{n+1}, H\, \psi_{n+1}\rangle$. No bras or kets here. What does get a bit annoying is when you actually want to talk about dual vectors as such, without executing an inner product – e.g. $\sum_i |v_i\rangle\langle v_i|$ becomes $\sum_i v_i \langle v_i,\cdot\rangle$, which may not be very clear. $\endgroup$ – leftaroundabout Sep 27 '16 at 16:06
$\begingroup$ @leftaroundabout : ... but that notation requires an adjoint notation ... $\langle \psi_{n+1}^*, H \psi_{n+1} \rangle$, which, contrary to your prior comment, marks up the dual vectors. $\endgroup$ – Eric Towers Sep 28 '16 at 12:51
All the answers so far provide valid reasons for Dirac notation (bras and kets). However, the central reason why Dirac felt the need to introduce this notation seems to be missing from these answers.
When I specify a quantity as a vector, say $$ \mathbf{v}=[a, b, c, d, ...]^T $$ then in effect I have already decided what the basis is in terms of which the quantity is expressed. In other words, each entry represents the value (or coefficient) for that basis element.
When Dirac developed his notation, he realized that a quantum mechanical state contains the same information regardless of the basis in terms of which the state is expressed. So the notation is designed to represent this abstractness. The object $|\psi\rangle$ does not make any statement about the basis in terms of which it is expressed. If I want to consider it in terms of a particular basis (say the position basis) I would compute the contraction $$ \langle x|\psi\rangle = \psi(x) $$ and then I end up with the wavefunction in the position basis. I can equally well express it in the Fourier basis $$ \langle k|\psi\rangle = \psi(k) $$ and obtain the wavefunction in the Fourier domain. Both $\psi(x)$ and $\psi(k)$ contain the same information, since they are related by a Fourier transform. However, each represents a certain bias in the sense that they are cast in terms of a particular basis. The power of the Dirac notation is that it allows one to do calculations without having to introduce this particular bias. I think this is a capability that Dirac notation provides that is not available in ordinary vector notation.
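A discretized toy example (my own, with a unitary FFT standing in for the change of basis) shows that $\psi(x)$ and $\psi(k)$ are two faces of the same state:

```python
import numpy as np

# Toy state sampled on a grid: a Gaussian wave packet, i.e. <x|psi>.
x = np.linspace(-10, 10, 256)
psi_x = np.exp(-x**2) * np.exp(1j * 2.0 * x)
psi_x /= np.linalg.norm(psi_x)

# Change of basis to the (discrete) Fourier domain: <k|psi>.
psi_k = np.fft.fft(psi_x, norm="ortho")    # unitary DFT

# Two representations, same information: the norms agree.
print(np.linalg.norm(psi_x), np.linalg.norm(psi_k))   # both 1.0
```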
flippiefanus
$\begingroup$ This is not really something that's special about Dirac notation. Sure, component notation is inferior, but vectors as abstract quantities without a basis reference could also be written $\vec{\psi}$ or in fact simply $\psi$. $\endgroup$ – leftaroundabout Sep 27 '16 at 16:19
$\begingroup$ That would be confusing, because one can write the wavefunction also like that. $\endgroup$ – flippiefanus Sep 28 '16 at 4:15
$\begingroup$ That in itself is a bit of a questionable overloading of the $\psi$ symbol, but again it has little to do with bra-ket. Instead of $\psi(x) := \langle x|\psi\rangle$ you'd then write $\psi(x) := \langle e_x, \psi\rangle$. $\endgroup$ – leftaroundabout Sep 28 '16 at 7:54
$\begingroup$ Though I agree that the overloading becomes less problematic when using a ket symbol for the plain quantum state, because it's clear this does not mean the function. $\endgroup$ – leftaroundabout Sep 28 '16 at 8:04
The bra-ket notation is an extension of the dot product of "normal" vectors: $$ \vec{a} \cdot \vec{b} = \sum_i a_ib_i . $$ The dot product is generalized to the inner product $ \langle a, b \rangle $, which for functions is defined as $$ \langle a(x), b(x) \rangle = \int a(x)b(x)\, dx $$ in the simple case of 1-dim real-valued functions.
Well the big advantage of the bra-ket notation is that there is no need to specify the representation, i.e. the coordinate system until one wants to calculate something in a specific space.
Part of the appeal of the notation is the abstract representation-independence it encodes, together with its versatility in producing a specific representation (e.g. x, or p, or eigenfunction base) without much ado, or excessive reliance on the nature of the linear spaces involved.
It is pretty handy when, for example, evaluating equations like
$$ \langle \psi_0 | ( |\psi_0\rangle + |\psi_1\rangle) = \langle \psi_0 |\psi_0\rangle ,$$ where $ |\psi_i\rangle $ are some orthogonal states. It is fast evaluation without the need to specify the system of $|\psi\rangle$ - whether $|\psi_0\rangle = (1, 0) $ and $|\psi_1\rangle = (0, 1) $ or it is $ |\psi_0\rangle = (1, \pi/2) $ and $|\psi_1\rangle = (1, 0) $ in polar coordinates $r, \varphi$.
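For instance, a quick numeric check of the equation above, assuming an explicit orthonormal pair:

```python
import numpy as np

psi0 = np.array([1.0, 0.0])        # |psi_0>
psi1 = np.array([0.0, 1.0])        # |psi_1>, orthogonal to |psi_0>

lhs = np.vdot(psi0, psi0 + psi1)   # <psi_0|(|psi_0> + |psi_1>)
rhs = np.vdot(psi0, psi0)          # <psi_0|psi_0>
print(lhs == rhs)                  # True: the cross term vanishes
```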
I do see your point in saying "normal" vector notation is much clearer. That might be the case for simple vectors like the ones above, but it makes things hard to write when it comes to functions in multi- or even infinite-dimensional Hilbert spaces.
stevosn
The bra-ket notation comes from Dirac. Feynman gives a good explanation in his Lectures on Physics, vol. 3, p. 3-2. If you are familiar with conditional probability, the probability of seeing $b$ given that we have seen $a$ is written $$P(b|a)$$
In quantum mechanics the calculation of seeing $b$, if we have already seen $a$, is written in bracket notation: $$\langle b|a \rangle$$ which is the same idea, except that it is not a probability, but a complex number called a probability amplitude. In quantum mechanics we don't work with real numbers; the probability calculations give good predictions only when we work with complex numbers. At the end of calculations, the length of a complex number is squared to obtain the real-number probability we expect to observe: $$| \langle b|a \rangle |^2$$
Now we can talk about posterior conditions and prior conditions as a "bra" $\langle b|$ and a "ket" $|a \rangle$. Then if in place of a specific outcome $b$ we consider all possible outcomes, that is a vector, a "bra-vector". The space of prior values (or states) is a "ket-vector".
Mike Dunlavey
The preference for bracket notation might be related to how it yields an elegant classical interpretation of quantum measurement.
Consider a system described by the state $|\beta\rangle$. Then the average or expected value of an operator $\hat{A}$, which corresponds to the classical quantity, is simply the bracket of the operator, i.e., sandwiching the operator:
$$\langle \beta| \hat{A} |\beta\rangle$$
In the case of the hydrogen atom, for example, the bracket of the position operator, for an electron in an eigenstate $|\epsilon_{n}\rangle$, is zero. Classically, therefore, the electron sits at the nucleus, i.e., the origin:
$$\langle \epsilon_{n}| \hat{X} |\epsilon_{n}\rangle=0$$
This makes sense classically: it explains why no radiation is emitted while the system is in an energy eigenstate.
Here's an example. Let's say you're working with the free particle in introductory quantum mechanics, where the "vector" $ \vec{\psi} $ has infinitely many components (I know that sounds crazy if you don't have a lot of experience with quantum mechanics, but it's the case). With traditional notation, you can't keep track of whether $ \vec{\psi} $ belongs to the dual or regular space - whether $ \vec{\psi} $ is a row vector or a column vector, respectively. In standard notation you'd have to write out the components (infinitely many of them!) to demonstrate a row or a column.
Bra-ket notation is nicer there. The "bras" $ \left \langle \psi \right | $ are dual vectors to the "kets" $ \left | \psi \right \rangle $.
A more crazy and more useful interpretation is that bras are linear functions and kets are their arguments.
QuantumFool
Journal of the Brazilian Computer Society
A coherence analysis module for SciPo: providing suggestions for scientific abstracts written in Portuguese
Vinícius Mourão Alves de Souza & Valéria Delisandra Feltrim
Journal of the Brazilian Computer Society volume 19, pages 59–73 (2013)Cite this article
SciPo is a writing tool whose ultimate goal is to assist novice writers in producing scientific writing in Portuguese, focusing primarily on abstracts and introductions from computer science. In this paper, we describe the development of a coherence analysis module for SciPo, which aims to automatically detect semantic coherence aspects of abstracts and provide suggestions for improvement. At the core of this new module are classifiers that identify different semantic relationships among sentences within an abstract and hence indicate semantic aspects that add coherence to the abstract. Such classifiers are based on a set of features extracted automatically from the surface of the text and by Latent Semantic Analysis processing. All classifiers were evaluated intrinsically and their performance was higher than the baseline. We also resorted to actual users to evaluate our coherence analysis module, which has been incorporated into the SciPo system, and results demonstrate its potential to help users write scientific abstracts with a higher level of coherence.
Abstracts are said to be a key section in scientific manuscripts (papers, dissertations, theses, etc.). Along with the title, it is used by researchers to promote their work in a given scientific community. As Feltrim et al. [17] point out, a scientific abstract should be carefully tailored so as to be complete (in the sense of providing the necessary information), interesting and informative. It should allow readers to capture the main ideas of the research being described while, at the same time, convince him/her to read the full text.
Scientific texts tend to have a well-defined structure, which can be defined as Introduction–Development–Conclusion [39]. Swales [39] adds that the Development part may unfold in either Materials and Methods and Results or Materials and Methods, Results and Discussion. The same can be said about abstracts, which tend to present a typical structural organization, especially when we consider abstracts from the same knowledge domain. The well-defined nature of the rhetorical structure of abstracts has allowed researchers to propose structure models for abstracts [7, 16, 21, 43]. Although each model has its peculiarities, there is a clear consensus on the typical rhetorical components of such structure as well as on their order of appearance.
Based on models that take into account the rhetorical structure of abstracts, different computational tools have been developed over the past few years to assist authors in writing/revising scientific abstracts. Taking into consideration only systems that focus on the English language, it is worth mentioning AMADEUS (Amiable Article Development for User Support) [3], SciPo-Farmácia [2], Writer's Assistant [34, 35], Writing Environment [28], Composer [33, 36], Academic Writer [8], Abstract Helper [31, 32], and Mover [5]. All these systems target non-native novice writers and aim to help users write scientific texts in English. Within the specific context of Portuguese, we cite SciPo (Scientific Portuguese) [15, 17], a system designed to help novice writers, specially undergraduate and graduate students from the discipline of computer science, by providing support for the writing of introduction and abstract sections of theses and dissertations. To the best of our knowledge, SciPo is the only system of this nature which targets the Portuguese language specifically.
Among other functionalities, SciPo examines the rhetorical structure of abstracts and introductions submitted for analysis and provides both criticisms and suggestions for improvement. For abstracts, the system relies on the rhetorical structure proposed by Feltrim el al. [16], which comprises six rhetorical components arranged in the following order: Background, Gap, Purpose, Methodology, Result and Conclusion. It provides feedback indicating which parts of the abstract could be improved regarding its structure. However, no attention is paid to semantic aspects of the text, such as coherence, which are essential to the readability and interpretability of the abstract.
Coherence and cohesion are responsible for adding sense to a group of words or sentences. By coherence we refer to what makes a group of words or sentences semantically meaningful. We assume that coherence refers to the establishment of a logical connection between sentences within a text. Thus, it is a principle of interpretability related to a given communicational situation and to the ability of the reader to interpret the meaning of the text. Therefore, it is bounded to the text, even though it does not depend solely on it [23]. On the other hand, meaning can only be established if we use textual elements responsible for connecting words/sentences and hence provide cohesion to the text [12]. Coherence and cohesion are closely related and this is why they are usually treated together. Here, we focus on coherence specifically and treat it as a level of semantic relationship among specific text segments. In line with van Dijk and Kintsch [41], we refer to it as semantic coherence.
We have developed a coherence analysis module (CAM) to identify semantic coherence aspects in abstracts. Here, we examine coherence by focusing on two or more rhetorical components and determining the level of semantic similarity between them. Following Higgins and Burstein [19] and Higgins et al. [20], four types of relationships among rhetorical components are considered, which we have termed dimensions. These are: (1) Dimension Title: examines the semantic relationship between the Purpose sentence(s) and the title of the abstract; (2) Dimension Purpose: verifies the semantic relationship between the Purpose sentence(s) and those related to Methodology, Result and Conclusion; (3) Dimension Gap-Background: assesses the semantic relationship between Gap and Background sentences; and (4) Dimension Linearity-break: checks whether there is a break in the logical sense between adjacent sentences. Although aware that there are many aspects of a discourse that contribute to coherence, as pointed out by Foltz et al. [18], our main assumption is that a low level of semantic relationship may be interpreted as an indication of a coherence problem. Thus, this module can be used to complement SciPo's functionalities by providing users with suggestions related to semantic coherence.
To automate the analysis of each dimension, we have developed a number of text classifiers. Such classifiers are based on features that have been extracted automatically from the surface of the text and by Latent Semantic Analysis (LSA) [27] processing. With the exception of Linearity-break, all dimensions rely on the abstract's rhetorical structure, which is automatically detected according to SciPo's rhetorical structure model.
We believe our work brings innovative contributions regarding two aspects: (1) the nature of the corpus in question, since we are dealing with scientific abstracts written in Portuguese, and (2) the kind of application to which we apply automatic coherence analysis. As Burstein et al. [10] point out, there is a small body of work that has investigated the problem of identifying coherence in student writing. What is more, none has focused on scientific writing but instead on essays written by native/non-native English writers with different writing skills. In addition to the composition of the corpus, the context in which our approach applies is also different from most systems presented so far in the literature. To date coherence analysis has been applied mostly within the context of Automatic Essay Scoring [29]. Three scoring systems which consider aspects of coherence when grading essays are worth mentioning: Criterion [9, 10, 20], Intelligent Essay Assessor [26], and Intellimetric [14]. Unlike these systems, SciPo is a scientific writing support tool, which in other words means that we are not interested in assigning a score to the text. Our aim is instead to detect potential structure and coherence problems and give the writer constructive feedback.
This paper is organized as follows: in Sect. 2, we briefly describe the SciPo system, focusing on its main functionalities implemented so far. In Sect. 3, we detail our corpus and its annotation process as well as our proposed dimensions. In Sect. 4, we focus on the classifiers that comprise the proposed CAM and on the results of its intrinsic evaluation. Section 5 presents the CAM, how it is incorporated into the SciPo system, and the results of its evaluation by actual users. Last but not least, in Sect. 6, we draw some conclusions and offer some suggestions for further investigation.
The SciPo system
SciPo is a web-based system whose primary purpose is to assist novice writers, especially undergraduate and graduate students, in producing scientific writing in Brazilian Portuguese. It focuses mainly on the abstract and introduction sections of dissertations and theses from computer science and was designed to help users structure their texts and make adequate linguistic choices. SciPo allows its users to choose between two working modes:
A top-down process that starts from planning the rhetorical structure and then tackles the writing itself. This mode was inherited from the AMADEUS project [3];
A bottom-up process in which the system automatically detects and analyses the rhetorical structure of the text submitted.
In fact, these two modes are different starting points for the same cyclical process of refinement given that the rhetorical structure detected and assessed in (ii) can be improved using the resources available in (i).
The system contains four knowledge bases, namely: the Abstract Case Base, Introduction Case Base, Rules and Similarity Measures, and Critiquing Rules. The Abstract Case Base includes 52 examples of schematic structures taken from authentic abstracts as well as the description of the rhetorical components, strategies and lexical patterns for each case. Similarly, the Introduction Case Base contains 48 examples of schematic structures for introductions and the description of the rhetorical components, strategies and lexical patterns for each case. For both case bases, all information was manually annotated according to appropriate rhetorical structure models [4, 16]. The user can freely browse these databases and search for occurrences of any given rhetorical structure.
Table 1 shows the rhetorical structure model used for abstracts along with a brief description of the function served by each component. Figure 1 illustrates how an abstract may be annotated according to its rhetorical structure. All lexical patterns have been underlined for emphasis. Given that SciPo's corpus is in Portuguese, for convenience, the example in Fig. 1 has been translated into English.
Table 1 SciPo's rhetorical components for abstracts and their functions within the abstract
Example of an annotated abstract according to its rhetorical structure, with lexical patterns underlined (adapted from Feltrim et al. [17])
As for the third knowledge base, Rules and Similarity Measures, it consists of rules established according to similarities among lists (pattern matching) and to nearest-neighbor matching [24]. These rules are used to retrieve a given rhetorical structure, as requested by the writer.
Last but not least, the Critiquing Rules are based on prescriptive guidelines for good writing and on structural problems observed in the annotated corpus, as an attempt to anticipate and correct ill-formed structural patterns that the writer might construct. These rules cover two distinct types of problems: content deviations (absence of structural components) and order deviations (occurrence of a given structural component in relation to the overall structure). In short, this base consists of four classes of rules: critical comments on (1) the content and (2) the order, and suggestions for improving (3) the content and (4) the order.
A fifth element of SciPo's architecture is a text classifier which automatically detects the rhetorical structure of an abstract. Named AZPort, it is a Naive Bayesian classifier that implements the Argumentative Zoning approach proposed by Teufel and Moens [40], adapting it to the context of scientific abstracts written in Portuguese. Following the structural components of the rhetorical model proposed by Feltrim et al. [16], AZPort assigns one of the following labels to each input sentence: Background, Gap, Purpose, Methodology, Result, and Conclusion. Further details about AZPort can be found in [15] and [17]. Using AZPort, we are therefore able to incorporate the bottom-up process into SciPo. Figure 2 presents a simplified version of SciPo's architecture, showing how the bottom-up and top-down processes relate to each other and to the knowledge bases.
As shown in Fig. 2, once the user has decided on a given rhetorical structure, which may have been either automatically detected (bottom-up process) or explicitly constructed (top-down process), he/she receives some feedback from the system. This procedure is repeated as many times as necessary until an acceptable structure has been built. The user can then recover authentic examples from the corpus and use the lexical patterns provided in his/her writing.
Simplified version of SciPo's architecture [17]
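As a rough illustration of the general approach behind AZPort—a Naive Bayes classifier assigning rhetorical labels to sentences—consider the sketch below. It is not AZPort itself: AZPort works on Portuguese and uses richer features (e.g., lexical patterns and sentence position), whereas this toy version uses only bag-of-words counts over invented English sentences.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set: sentences labeled with rhetorical categories.
sentences = [
    "this work presents a tool to support scientific writing",
    "however existing tools do not address semantic coherence",
    "experiments show an accuracy well above the baseline",
    "we conclude that the module helps novice writers",
]
labels = ["Purpose", "Gap", "Result", "Conclusion"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, labels)
print(model.predict(["this work presents a coherence analysis module"]))
# -> ['Purpose']
```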
Corpus and annotation
To determine what kind of semantic relationships would have an impact on coherence in scientific texts written in Portuguese, we have compiled and annotated a corpus of 385 abstracts written in Portuguese by undergraduate students. Since significant differences can be found between European and Brazilian Portuguese in terms of vocabulary and syntactic constructions, we have opted to compile a corpus of abstracts of the latter variety. This is mainly because the SciPo system targets Brazilian students specifically.
All abstracts were extracted from monographs written as one of the requirements for being awarded a BS degree in computer science. These monographs date from 1999 to 2009 and come from different fields within computer science, such as database systems, artificial intelligence, software engineering, computer networks, digital systems, distributed systems, programming languages and image processing. They were collected at three Brazilian universities: the State University of Maringá, where we could collect the abstracts directly from their authors, the State University of Londrina, and the Federal University of Pelotas, where we collected the abstracts by accessing their digital libraries.
Once we had our abstract corpus ready, its annotation was processed in two stages: (1) rhetorical structure annotation, and (2) annotation of coherence-related aspects according to the proposed dimensions. Both annotation processes are described below.
Rhetorical structure annotation
The first stage of the annotation phase consisted of tagging the abstract title and the start and end of each sentence, and of classifying each sentence according to the six abovementioned rhetorical categories, following the components of Feltrim et al.'s structure model [16]. At this point, it is worth mentioning that a rhetorical component may be realized by one or more sentences. In the latter case, all the sentences are classified according to the category in question.
To annotate the rhetorical structure of each abstract, we have used the aforementioned AZPort classifier. As reported in [17], AZPort was trained and tested by applying 13-fold cross-validation to a set of 52 abstracts from the CorpusDT [16], which comprises 320 sentences. The system's accuracy rate was 74 %. The automatic annotation was also evaluated in relation to that by a human annotator, by calculating the Kappa coefficient \(K\) [37] as follows:
$$K=\frac{P(A)-P(E)}{1-P(E)}$$
where \(P(A)\) is pairwise agreement and \(P(E)\) is random agreement. A Kappa value may range from \(-1\) to 1. The former indicates maximal disagreement and the latter perfect agreement. A kappa value of 0 implies that agreement between annotators is no greater than would be expected by chance, given their label distributions.
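For illustration, the coefficient can be computed directly from two annotators' label sequences; the sketch below is a straightforward implementation of the equation above, applied to toy labels (not our actual annotation data).

```python
def kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same items."""
    n = len(labels_a)
    assert n == len(labels_b)
    p_a = sum(a == b for a, b in zip(labels_a, labels_b)) / n   # P(A)
    # P(E): chance agreement from each annotator's label distribution.
    cats = set(labels_a) | set(labels_b)
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats)
    return (p_a - p_e) / (1 - p_e)

# Toy example over the six rhetorical categories:
a = ["B", "B", "P", "M", "R", "C", "G", "P"]
b = ["B", "G", "P", "M", "R", "C", "G", "R"]
print(round(kappa(a, b), 2))   # 0.70
```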
The level of agreement between AZPort and the human annotator was \(K=0.65\). This kappa value is in fact fairly similar to that calculated when the same 52 abstracts were annotated by three human specialists \((K=0.69)\). Even so, AZPort's categorization was manually revised by one human annotator so as to correct potential errors and hence minimize the chances of any noise from the automatic annotation interfering with the coherence annotation.
A total of 2,293 sentences were automatically annotated and manually revised. Table 2 presents the frequencies of each rhetorical component in the annotated corpus. As can be seen, Purpose sentences are the most frequent, occurring in nearly all abstracts (97.40 %, 375 abstracts). It is followed by Background (68.05 %), Result (55.32 %), Gap (40.51 %), Methodology (37.66 %), and Conclusion (23.11 %).
Table 2 Frequency of rhetorical components in the corpus
The distribution of annotated sentences across the six rhetorical components is presented in Table 3. It can be seen that Background is the most frequent category when the number of sentences is considered (35.23 %, 808 sentences), followed by Result (19.67 %), Purpose (18.58 %), Methodology (11.90 %), Gap (9.38 %), and Conclusion (5.24 %). It is also worth noting that while Purpose sentences occur in most abstracts (97.4 %, as shown in Table 2), the number of Purpose sentences (426 sentences) is lower than the number of Background (808 sentences) and Result (451 sentences) sentences. This is explained by the fact that the 426 Purpose sentences are distributed across 375 abstracts, leading to an average of 1.13 Purpose sentences per abstract. By way of contrast, the 808 Background sentences are spread across 262 abstracts, leading to an average of 3.08 sentences per abstract. We believe that this higher number of Background sentences may be explained by the composition of the corpus. When it comes to monograph abstracts, there is no restriction on the maximum number of words and hence authors tend to write more sentences to contextualize their work. The same does not apply to abstracts from scientific papers, which tend to be limited in length, leading writers to focus on Purpose and Result.
Table 3 Distribution of sentences by rhetorical component
Annotation of coherence-related aspects
In the second stage of the annotation phase, we identified and annotated semantic relationships between specific rhetorical components of scientific abstracts, bearing in mind that the resulting information was intended to be used as a resource to generate useful feedback to SciPo users. For doing so, we have adapted the dimensions proposed by Higgins et al. [20] and proposed four kinds of semantic relationships between rhetorical components which, as mentioned earlier, have been termed dimensions. These are: (1) Dimension Title, (2) Dimension Purpose, (3) Dimension Gap-Background, and (4) Dimension Linearity-break.
To check reproducibility, we conducted annotation experiments with two human annotators who were familiar with the corpus domain and scientific writing. Here again, we used the Kappa coefficient to measure the level of agreement between them.
The four dimensions, originally proposed in [38], are described in the following sections. We also provide statistics on the annotated corpus as well as the Kappa values for the agreement experiments.
Dimension Title
In this dimension, we have examined whether each sentence of the abstract is semantically similar to the title. If the sentence was found to present a high semantic similarity to the title, it was labeled as high. Otherwise, it was labeled as low. Here, we have opted for a binary scale due to the subjective nature of the task. As previously mentioned, we have conducted an agreement experiment with two human annotators. They were initially asked to annotate a subset of ten abstracts. After this training phase, they annotated a subset of 40 abstracts that had been randomly selected from the corpus. This figure represents nearly 10 % of the overall number of abstracts in the corpus and comprises 209 sentences. The resulting Kappa for their level of agreement was approximately 0.6.
Out of a total of 2,293 sentences, 1,243 (54.20 %) were ranked as having high semantic similarity with the title and 1,050 (45.80 %) were ranked as low. Table 4 presents the distribution of sentences ranked as high and low according to their semantic similarity with the title across the six rhetorical categories.
Table 4 Semantic relationship between sentences from the abstract and the title sentence
As we can observe in Table 4, Purpose sentences tend to present a high level of semantic similarity to the title, since 83.33 % of such sentences were ranked as high. This figure is much higher than the average percentage of high sentences for all remaining components, which is 48.79 %. In fact, the title should indicate the main topic covered in a scientific text and the same is expected from the purpose of the abstract, even if in a concise form. We understand that lack of semantic relationship between purpose sentence(s) and the title may be interpreted as evidence for two possible situations: (1) the title is inappropriate for the abstract or (2) the abstract may have coherence problems.
On the other hand, Background sentences tend to have a low level of semantic similarity to the title. Over half of the overall number of sentences (54.95 %) was ranked as low. We ascribe that to the fact that Background sentences usually appear at the beginning of the abstract so as to place the research within a broader context. Thus, they may not be directly related to the main topic of the research being presented but, rather, address questions or state facts which will prepare the reader to understand the motivations behind the study being presented. We assume that a low level of semantic similarity between the title and Background sentences cannot be viewed as an indication of a coherence problem.
As for the remaining rhetorical categories (Gap, Methodology, Result, and Conclusion), their level of semantic similarity to the title is evenly balanced, with an average percentage of 50.5 % of low sentences and 49.5 % of high sentences over a total of 1,059 sentences. In this study, we find that the relationship between sentences within these categories and the title depends on aspects other than coherence, such as the very nature of the research being reported. This is mainly why we consider that lack of a strong relationship between Gap, Methodology, Result, and Conclusion sentences and the title cannot be interpreted as an indication of a coherence problem.
Dimension Purpose
For each abstract from the corpus, we have examined the semantic similarity between Purpose sentences and all other sentences of the abstract. If the sentence was found to be closely related to the Purpose component, it was labeled as high. Otherwise, it was labeled as low. The label N/A was assigned to sentences of abstracts which do not have Purpose sentences or to sentences classified as Purpose themselves. Like in the case of the dimension Title, we have resorted to the Kappa statistics to measure the agreement between two human annotators over a randomly selected subset of 167 sentences. The resulting value was approximately \(0.8\). The human agreement experiment for this dimension was carried out in the same way as that for the dimension Title.
Apart from 573 sentences labeled as N/A (426 Purpose sentences and 147 sentences distributed across all five categories other than Purpose), 1,720 sentences were labeled as high/low for this dimension. Within these, 1,016 (59.07 %) sentences were ranked as having high semantic similarity with the Purpose component and 704 (40.93 %) sentences were ranked as low. The distribution of high and low sentences across all six rhetorical categories is presented in Table 5.
Table 5 Semantic relationship between sentences from various categories with Purpose sentences
We find that Conclusion, Methodology, and Result sentences tend to present a high level of semantic similarity to the Purpose sentences, as shown by their percentage of sentences ranked as such: 72.55, 67.59, and 66.17 %, respectively. It is worth noting that these figures could be even higher since most of these sentences restate the content of the Purpose component. However, for doing so, writers may resort to anaphoric expressions. Since we rely solely on string matching to identify coreferential entities, and entity names introduced in the Purpose component may not always be explicitly reintroduced in Conclusion, Methodology, and Result components, it decreases the level of semantic similarity. Thus, for these specific cases, although we have found a close relationship between sentences from the abovementioned categories and the Purpose, we have labeled them as low.
Here again, the general nature of Background sentences can be said to account for the category having the highest percentage of low sentences (50.13 %). In fact, Background sentences tend to be closely related to Gap sentences, which in turn are strongly related to the Purpose component. A total of 62.01 % of the analyzed Gap sentences were labeled as high. So, we understand that the low level of semantic relationship between Background and Purpose sentences cannot be regarded as a potential coherence problem.
According to Higgins et al. [20], the semantic relationship among the various rhetorical components dictates the global coherence of the text. Thus, an abstract will not be easily readable and entirely understandable if some rhetorical components are not semantically related to each other. Taking into consideration the rhetorical structure model used for the annotation of our corpus, we expect the Purpose component to present a high level of semantic similarity to the Methodology, Result and Conclusion components. Conversely, absence of a close relationship between these components and the Purpose may be an indication of a coherence problem.
Dimension Gap-Background
Taking into consideration all Gap and Background sentences from the corpus, we have examined the semantic similarity between these categories within each abstract. Gap sentences were labeled as yes if they were found to be closely related to at least one sentence from the Background component. Otherwise, they were labeled as no.
With the exception of 32 sentences from abstracts which do not have Gap/Background categories, 183 sentences were considered for this dimension. Within these, 74.86 % (137 sentences) were labeled as yes and 25.14 % (46 sentences) were labeled as no. Like in the case of all other dimensions, we have used the Kappa statistics to measure the agreement between two human annotators over a randomly selected subset of 46 sentences from the corpus. The result was approximately \(0.7\). The human agreement experiment for this dimension was carried out in the same way as for the aforementioned dimensions.
As previously mentioned, Background sentences tend to be more closely related to Gap than to Purpose sentences. Thus, the Gap component is expected to have a high semantic relationship with at least one Background sentence. In our view, absence of relationship between these components can said to be an indication of a coherence problem.
Dimension Linearity-break
For this dimension, we have examined whether there is a linearity break in the logical sense between adjacent sentences, that is, whether the sentence in question is semantically related to its preceding and subsequent sentences. Unlike all other dimensions, Linearity-break does not depend on the rhetorical structure of the abstract. A human annotator was instructed to label sentences as yes whenever a logical connection between the sentence under analysis and its previous and/or its subsequent sentence was difficult to establish. Otherwise, the annotator was instructed to label sentences as no.
Out of 2,293 sentences, 6.67 % (153 sentences) were labeled as yes and 93.33 % (2,140 sentences) were labeled as no. Within the 153 sentences labeled as yes, 26.8 % (41 sentences) are Result sentences, which is the rhetorical category with the highest proportion of yes sentences. Gap is the rhetorical category with the lowest number of yes sentences, with only 4.57 % (7 sentences) labeled as yes.
These results indicate that, within the scope of our study, it is unusual to find sentences which are not related to their surrounding sentences. In addition, we also find that most sentences labeled as yes relate with some other part of the text which may not necessarily be their neighboring sentences. This brings extra complexity to the annotation and analysis of this dimension. As a matter of fact, this dimension indicates very local coherence issues which we believe to be frequent in texts with problems more serious than those observed in the texts analyzed here. For these reasons, we have decided to discard this dimension from the automatic CAM, which we describe in the following section.
Automatic detection of semantic coherence
As previously stated, the purpose of this study is to develop a complementary module for the SciPo system with a view to identifying aspects related to semantic coherence in scientific abstracts written in Portuguese. This new module is based on three out of the four dimensions presented in the previous section, namely: dimension Title, dimension Purpose, and dimension Gap-Background. For this new functionality to work, the system needs to automatically identify potential coherence problems so that appropriate suggestions can be selected and presented to the writer. Here, the automatic analysis of the aforementioned dimensions is accomplished by means of text classifiers, as we shall see next.
All text classifiers were induced by machine learning algorithms based on features extracted from the surface of the text and by LSA processing [27]. LSA is a well-known statistical method for the extraction and representation of knowledge from a corpus. Its basic idea is to create a semantic space in which terms are regarded as similar if they occur in the same context. The similarity between the concepts related to two words/sentences can be calculated as the cosine of the angle between the vectors that represent them. This is shown in the following equation [30]:
$$\begin{aligned} {\text{sim}}(X,Y)=\frac{\sum \nolimits _{i=1}^n X_iY_i}{\sqrt{\sum \nolimits _{i=1}^n X_i^2}\sqrt{\sum \nolimits _{i=1}^n Y_i^2}} \end{aligned}$$
where \(X=(x_1, x_2,\ldots , x_n)\) and \(Y=(y_1, y_2,\ldots , y_n)\) are vectors with n dimensions and represent the texts to be compared using the bag-of-words model. The similarity value lies in \([-1, 1]\), where \(-1\) is the lowest possible value of similarity and 1 is the highest. LSA results can be improved by pre-processing the corpus before similarity calculations are performed. In the specific case of our study, the pre-processing phase consisted mainly of case-folding, stemming and stopword removal.
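As an illustration of this pipeline, the sketch below builds a small LSA space with scikit-learn (a stand-in for the original tooling) and computes the cosine similarity between two sentences; the mini-corpus and the stopword list are placeholders, and stemming is omitted for brevity:

```python
# Minimal LSA sketch: case-folding, stopword removal, SVD-reduced
# term-sentence space, and cosine similarity between sentence vectors.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "este trabalho apresenta um metodo para analise de coerencia",
    "o metodo proposto analisa a coerencia de resumos cientificos",
    "resultados experimentais sao discutidos ao final",
]
stopwords = ["este", "um", "para", "de", "sao", "ao"]  # placeholder list

bow = CountVectorizer(stop_words=stopwords).fit_transform(sentences)  # case-folded
lsa = TruncatedSVD(n_components=2).fit_transform(bow)  # reduced semantic space

# Cosine similarity in the semantic space, a value in [-1, 1].
print(cosine_similarity(lsa[0:1], lsa[1:2])[0, 0])
```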
Since some of our features rely on the rhetorical structure of the abstract, we have used AZPort to automatically label sentences according to their rhetorical category. This automatic annotation was then manually validated so as to correct inadequate labeling and hence avoid noise in the processing of the coherence dimensions. In fact, AZPort is used in the prototype of the semantic CAM and the user can correct its predictions whenever he/she regards it as incorrect.
For inducing the classifiers, we have opted for Platt's [22] Sequential Minimal Optimization (SMO) algorithm. SMO is widely used for training support vector machines (SVM) [42], a machine learning method based on statistical learning theory in which an optimal hyperplane is created to distinguish categories. The method has been employed in several pattern recognition tasks such as text classification [1], spam detection [13], and coherence analysis [20]. This is why we assume SMO is suitable for the task we have proposed. Details on the training and testing of the SMO algorithm are presented in Sect. 4.2.
Extracted features
All sentences were automatically analyzed according to a set of 13 features. All features were automatically extracted and used to induce the classifiers. The complete set of features is:
Rhetorical category of the sentence under analysis. Possible values are B, G, P, M, R, or C, standing for Background, Gap, Purpose, Methodology, Result, and Conclusion, respectively;
Rhetorical category of the sentence that precedes the one under analysis. Possible values are B, G, P, M, R, C, or N/A. The N/A value is assigned when the sentence under analysis is the first one of the abstract;
Rhetorical category of the subsequent sentence. Possible values are B, G, P, M, R, C, or N/A. The N/A value is assigned when the sentence under analysis is the last one of the abstract;
Presence of words that may characterize an anaphora. Possible values are Yes or No, which are calculated on the basis of a list of Portuguese pronouns that can be used anaphorically, such as "ele/ela (he/she/it)", "deste/desta (of this)", "dele/dela (his/hers)", etc;
Position of the sentence within the abstract, estimated in relation to the beginning of the abstract. Possible values are integer numbers starting from 0 (zero);
Presence of words that may characterize some kind of transition. Possible values are Yes or No, which are established on the basis of a list of expressions such as "no entanto (however)", "embora (although)", etc;
Length of the sentence under analysis, measured in words. Possible values are integer numbers starting from 1 (one);
Length of the title, measured in words. Possible values are integer numbers starting from 1 (one);
Semantic similarity (LSA) score between the sentence under analysis and its preceding sentence. Possible values are real numbers between \(-1\) and 1. This feature is extracted only when feature number 2 is other than N/A;
Semantic similarity (LSA) score between the sentence under analysis and its subsequent sentence. Possible values are real numbers between \(-1\) and 1. This feature is extracted only when feature number 3 is other than N/A;
Semantic similarity (LSA) score between the sentence under analysis and the abstract title. Possible values are real numbers between \(-1\) and 1. This feature is extracted only when feature number 1 has the value P;
Semantic similarity (LSA) score between the sentence under analysis and the sentence(s) of the abstract classified as Purpose. Possible values are real numbers between \(-1\) and 1. This feature is extracted only when feature number 1 has one of the following values: R, M, or C;
Maximum semantic similarity (LSA) score between the Gap and the Background sentences of the abstract. Possible values are real numbers between \(-1\) and 1. As some abstracts may not include these categories, this feature is calculated only for abstracts with sentences from both B and G.
Features 1–8 rely on the abstract's rhetorical structure and other shallow measures. Features 9–13 are based on LSA processing. Features 1–10 make up our basic pool of features.
Feature 11 was added to the basic pool of features when inducing the classifier for dimension Title. For each Purpose sentence in an abstract, this classifier uses the extracted features to predict whether it is strongly/weakly related to the title (high/low categories).
Similarly, feature 12 was added to the basic pool of features when inducing the classifiers for dimension Purpose. Since the dimension Purpose applies to sentences from three different rhetorical categories—Result, Methodology and Conclusion—different experiments were carried out to induce each classifier. For each Result, Methodology, and Conclusion sentence in an abstract, these classifiers use the extracted features to predict whether it is strongly/weakly related to the Purpose sentence(s) (also high/low categories).
As mentioned above, feature 13 is extracted only for abstracts that include both Gap and Background sentences. Thus, in the induction of the classifier for dimension Gap-Background, we have used the basic pool of features plus feature 13. For each Gap sentence in an abstract, the classifier predicts whether it is associated with at least one Background sentence (yes/no categories).
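To make the feature set concrete, the sketch below assembles the 13 features for one sentence. All field names are hypothetical, and the pronoun/transition lists reuse only the examples quoted above:

```python
# Sketch of per-sentence feature assembly; `sent` and `abstract` are plain
# dicts with precomputed fields (rhetorical labels and LSA scores).
ANAPHORIC_WORDS = {"ele", "ela", "deste", "desta", "dele", "dela"}
TRANSITION_WORDS = {"no entanto", "embora"}

def has_any(text, vocabulary):
    t = text.lower()
    return any(w in t for w in vocabulary)

def extract_features(sent, abstract):
    f = {
        "category": sent["category"],                           # feature 1
        "prev_category": sent.get("prev_category", "N/A"),      # feature 2
        "next_category": sent.get("next_category", "N/A"),      # feature 3
        "has_anaphora": has_any(sent["text"], ANAPHORIC_WORDS), # feature 4
        "position": sent["index"],                              # feature 5
        "has_transition": has_any(sent["text"], TRANSITION_WORDS),  # feature 6
        "sent_length": len(sent["text"].split()),               # feature 7
        "title_length": len(abstract["title"].split()),         # feature 8
        "lsa_prev": sent.get("lsa_prev"),                       # feature 9
        "lsa_next": sent.get("lsa_next"),                       # feature 10
    }
    if sent["category"] == "P":
        f["lsa_title"] = sent["lsa_title"]                      # feature 11
    if sent["category"] in ("R", "M", "C"):
        f["lsa_purpose"] = sent["lsa_purpose"]                  # feature 12
    if "max_lsa_gap_background" in abstract:
        f["lsa_gap_background"] = abstract["max_lsa_gap_background"]  # feature 13
    return f
```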
We have conducted feature selection experiments for all aforementioned classifiers. The results as well as the intrinsic evaluation of each classifier are described below.
Feature selection and intrinsic evaluation of the classifiers
Using the annotation presented in Sect. 3 and the set of features extracted from the corpus, we have generated and evaluated five classifiers: one for dimension Title, three for dimension Purpose (namely, Purpose (M), Purpose (R), and Purpose (C)), and, finally, one classifier for dimension Gap-Background.
The feature selection experiments were carried out in the Weka learning environment [44], and so were the training and testing of all classifiers. For the feature selection experiments, we adopted the Wrapper method in conjunction with the Best-First search. The SMO algorithm with the PolyKernel kernel was used both to select features and to induce the classifiers. All classifiers were induced using tenfold stratified cross-validation, with the SMO parameter Filtertype set to the value "Standardize training data", which normalizes the numerical attributes to zero mean and unit variance.
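For readers who want to reproduce a comparable setup outside Weka, the sketch below approximates it in scikit-learn: sequential forward selection stands in for the Wrapper/Best-First search, the polynomial kernel with exponent 1 mirrors Weka's default PolyKernel, and the data are random placeholders:

```python
# Approximate scikit-learn analogue of the Weka SMO setup described above.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler     # "Standardize training data"
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # placeholder feature matrix
y = rng.integers(0, 2, size=200)      # placeholder high/low labels

svm = SVC(kernel="poly", degree=1)    # PolyKernel with default exponent 1
cv = StratifiedKFold(n_splits=10)     # tenfold stratified cross-validation

selector = SequentialFeatureSelector(svm, n_features_to_select=5, cv=cv)
model = make_pipeline(StandardScaler(), selector, svm)

print(f"mean accuracy: {cross_val_score(model, X, y, cv=cv).mean():.3f}")
```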
Table 6 presents the set of features with the best performance in the feature selection experiments, by classifier. Note that features extracted by LSA processing appear in all best-performing feature sets. Table 6 also shows the resulting Kappa values between the human annotation and the classifiers, as well as their corresponding accuracy values.
Table 6 Feature selection results, Kappa agreement between human annotation and classifiers and accuracy values by classifier
From the figures presented in Table 6, we can conclude that all classifiers showed satisfactory levels of Kappa, taking into consideration the subjective nature of the task. The dimension Title classifier had the best performance, with \(K=0.871\). The lowest value (\(K=0.679\)) was recorded for the dimension Gap-Background classifier; it is nevertheless regarded as a good level of agreement. We also find that all classifiers achieved high accuracy rates, with values between 86.17 and 96.48 %. However, raw accuracy does not take into account the distribution of successes and errors across the predicted classes, and hence a more detailed analysis of the classifiers' performance is required. Table 7 shows the performance of the classifiers in terms of Precision, Recall, \(F\)-Measure and Macro-\(F\).
Table 7 Performance in terms of Precision, Recall, \(F\)-measure and Macro-\(F\) by classifier
For comparison purposes, we also present the results of a simple baseline by classifier. It is calculated separately for each classifier by assigning the majority class as output (Table 8). In both Tables 7 and 8, high and low classes refer to the classifiers of dimensions Title, Purpose (M), Purpose (R), and Purpose (C), while classes yes and no refer to the classifier of dimension Gap-Background.
Table 8 Baseline performance in terms of Precision, Recall, \(F\)-measure and Macro-\(F\)
As can be seen in Tables 7 and 8, all classifiers outperform their corresponding baselines. This is particularly evident for the classifier for dimension Title, which also showed the best results for the measures presented in Table 6. We also find that the \(F\)-measure value for the classes high/yes is consistently higher than the values for the classes low/no. Although there is an imbalance in the corpus that may favor the classes high/yes, we believe the behavior of the classifiers can be explained by the lower level of ambiguity in the annotation of high/yes sentences in comparison with low/no sentences. In fact, for all dimensions, our human annotators have found it more difficult to rank sentences as being weakly related to others than to rank them as having a high relationship. They argued that low/no sentences seem to show a higher level of ambiguity than high/yes sentences.
In some specific cases, such as the Purpose (M) and Purpose (R) classifiers, this ambiguity can be justified by other factors. In the first case, the content of Methodology sentences introduces new terms concerning names of techniques and methods. Such terms tend to be proper names and lead the semantic similarity between these sentences and the Purpose, which is calculated by feature 12, to be classified as low. This contradicts the assessment of the human annotator whose analysis goes beyond the text surface. Similarly, the content of Result sentences usually introduces names of metrics for evaluating the results. These terms cause the semantic similarity between Result and Purpose sentences to be classified as low and here again contradict the human annotator, who has a deeper understanding of the sentences. Even so, the overall performance of the classifiers was reasonably good, with macro-\(F\) values between \(0.839\) and \(0.936\). Macro-\(F\) takes into account the \(F\)-measure values for both high/yes and low/no classes.
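Concretely, in the two-class settings used here, the macro-averaged score is just the arithmetic mean of the two per-class \(F\)-measures:
$$\text{Macro-}F=\frac{F_{\text{high/yes}}+F_{\text{low/no}}}{2}.$$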
To sum up, we conclude that the positive results obtained in the evaluation of all classifiers allow us to use them in the CAM proposed in this study, specifically to automatically detect semantic coherence aspects evaluated by dimensions Title, Purpose and Gap-Background. In the next section, we explain how the proposed dimensions are used to generate suggestions for improving coherence in scientific abstracts written in Portuguese.
The CAM
As reported in Sect. 2, SciPo assists novice writers in producing scientific abstracts in Portuguese by offering criticisms and/or suggestions regarding aspects of their rhetorical structure. Here, we intend to extend SciPo's functionalities so that it can also provide feedback on semantic coherence. Given the proposed coherence dimensions (Sect. 3), the developed classifiers and their good performance (Sect. 4), we have prototyped a coherence analysis module (henceforth, CAM) to be incorporated into the SciPo system. CAM identifies potential issues related to three out of the four coherence dimensions discussed earlier and selects the appropriate feedback from a collection of predefined coherence suggestions.
Figure 3 presents the new SciPo system architecture, including CAM, which is highlighted by the dashed rectangle. CAM comprises a base of coherence suggestions, a set of coherence classifiers (as previously mentioned, one for dimension Title, three for dimension Purpose, and one for dimension Gap-Background), and a coherence advisor, which selects the appropriate suggestion(s) based on potential coherence problems.
SciPo's architecture including CAM
The coherence analysis process starts by resorting to the AZPort classifier to automatically detect the abstract's rhetorical structure. If any structural problem is detected, the user gets criticisms and/or suggestions from SciPo so that he/she can correct it. Otherwise, the detected rhetorical structure as well as the text itself are passed on to CAM for LSA processing and feature extraction. Based on the extracted feature values, the five classifiers analyze each sentence of the abstract and their results are sent to the coherence advisor. In case of a potential coherence problem, the appropriate suggestion(s) will be selected by the advisor and presented to the user. The refinement cycle continues until either the system has no suggestions to offer or the user decides to stop the process.
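A minimal sketch of the advisor step is given below; the classifier objects and suggestion texts are hypothetical stand-ins for CAM's actual components (the real suggestions are those of Table 9):

```python
# Sketch of the coherence advisor: collect suggestions triggered by
# low/no predictions from the five dimension classifiers.
SUGGESTIONS = {
    "title": "Strengthen the link between the title and the purpose.",
    "purpose_m": "Relate the methodology more closely to the stated purpose.",
    "purpose_r": "Relate the results more closely to the stated purpose.",
    "purpose_c": "Relate the conclusions more closely to the stated purpose.",
    "gap_background": "Connect the gap to the background it stems from.",
}

def advise(sentences, classifiers):
    """Return the suggestions triggered by low/no predictions."""
    triggered = []
    for name, clf in classifiers.items():
        for sentence in sentences:
            if clf.applies_to(sentence) and clf.predict(sentence) in ("low", "no"):
                triggered.append(SUGGESTIONS[name])
                break  # one suggestion per dimension is enough
    return triggered
```

The refinement loop then simply re-runs `advise` on the rewritten abstract until it returns an empty list or the user stops.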
It is important to stress that the user can freely reject the coherence suggestions made by the system. In fact, semantic aspects are controversial by nature so we cannot rule out the possibility that user and system may disagree on the coherence problems identified. With this issue in mind, we have decided to leave the user free to accept or reject the suggestions offered by CAM. This freedom of choice given to the user had already been implemented in the SciPo system, and we have decided to maintain it in CAM.
Coherence suggestions
Coherence suggestions are presented to the user whenever one or more coherence classifiers return a low/no value. According to the classifier in question, the coherence advisor then selects the appropriate suggestions out of the set presented in Table 9. In this table, we show the five coherence suggestions elaborated according to the dimensions discussed earlier as well as a brief explanation that is presented to the user along with the suggestion. For convenience, all suggestions and their explanations in Table 9 have been translated into English, although in SciPo they are presented in Portuguese.
Table 9 Coherence suggestions and their explanations as presented by CAM
Evaluation by users
To evaluate CAM in its context of use, i.e., as part of the SciPo system, we have conducted an experiment with actual users to check how effective the coherence suggestions are in scientific abstract writing. The experiment was carried out with eight MSc students in computer science from the State University of Maringá. All students had written or were in the process of writing their master's dissertations in 2010 and hence had already finished their abstracts or at least had a draft of them.
All users were asked to use CAM in the refinement of their abstract/draft. Before doing so, they were introduced to the main purposes of CAM, along with a brief explanation of the components that make up the rhetorical structure of a scientific abstract. It is worth mentioning that users were not familiar with the concepts of rhetorical components and structure. After using CAM, all users were asked to complete a questionnaire reporting, among other points, their impressions of CAM, including ease of use, relevance of the presented suggestions, and changes in the coherence level of their abstracts after using CAM. In Fig. 4, we present a summary of the users' answers to the questionnaire.
Summary of the users' answers to the questionnaire applied after the experiment with CAM
As explained earlier, CAM uses the AZPort classifier to detect the abstract rhetorical structure. In this experiment, users were asked to correct AZPort output whenever they felt appropriate. Out of a total of 63 sentences classified by AZPort, 26 (41.3 %) were corrected.
During the experiment, six users were presented with one coherence suggestion for their abstracts: for three of them it referred to a low relationship between Title and Purpose, for two of them it concerned a low relationship between Purpose and Methodology, and in one case it concerned a low relationship between Purpose and Result. Two users were presented with two suggestions for their abstracts: for one of them, one suggestion referred to the low relationship between Title and Purpose and the other to the low relationship between Purpose and Methodology; for the other user, the suggestions concerned the low relationships between Purpose and Methodology and between Purpose and Result.
No suggestions were offered for the relationship between Gap and Background, nor for the relationship between Conclusion and Purpose. In addition to the fact that the analyzed abstracts did not present problems in relation to these components, we should also bear in mind that most abstracts did not include Gap and Conclusion sentences. Although we had previously explained the importance of all rhetorical components and users were given the chance to refine their abstracts' rhetorical structure using SciPo, some opted not to include Gap and Conclusion in their abstracts. We believe that this can be partially explained by the fact that some users had submitted a finished version of their abstract to the system, rather than a draft, and were reluctant to modify it.
All users accepted suggestions presented by CAM: six accepted all of the suggestions provided, and two accepted them partially. In the latter case, the system continued to present suggestions even after the rewriting and adaptation of the abstract, and the refinement process was ended by the user.
As regards the relevance of the coherence suggestions presented by CAM, five users considered them as relevant, two users found them very relevant, and one user regarded them as irrelevant (see Fig. 4a).
When comparing the initial abstract to its revised version, after the refinement process guided by the coherence suggestions, four users reported that the coherence level of the final abstract increased substantially, while three users considered that the coherence level of the final abstract increased reasonably. One user found that the adjustments in the abstract did not alter its level of coherence (see Fig. 4b).
Three users had some doubts about how to adjust/rewrite the abstract in accordance with the suggestions presented by CAM, and one user had many doubts. On the other hand, four users reported that they had no doubts about how to rewrite and adapt the abstract according to the coherence suggestions (see Fig. 4c). We believe that the reported difficulties in adjusting the abstracts are inherent to the writing process, since most users (six of them) ranked the information and suggestions provided as very informative, two users ranked them as fairly informative, and even those with difficulties in rewriting the abstract did not consider the suggestions uninformative (see Fig. 4d).
This study aimed to implement and evaluate a method for automatically detecting potential problems regarding semantic coherence in scientific abstracts written in Portuguese. The method is based on the evaluation of three coherence dimensions. More specifically, we have developed a complementary module to the SciPo system, namely CAM, which identifies potential coherence problems and generates appropriate suggestions for semantic aspects of the abstract section.
CAM is the prototype resulting from the induction and evaluation of several machine learning models (classifiers) applied to the proposed coherence dimensions, plus a base of suggestions. We initially proposed four dimensions: (1) Title, (2) Purpose, (3) Gap-Background, and (4) Linearity-break. However, due to the reduced number of examples with a linearity break in the annotation process, we could not induce a classifier for this dimension. In contrast, for the other three dimensions, the classifiers presented good results, which allowed us to use them to identify potential coherence problems and, as a result, implement them as part of CAM. Another important point to stress here is that CAM performs real-time text classification with no runtime efficiency issues. Even though the CAM classifiers depend on the AZPort system, feature extraction and text classification remain reasonably fast, and the slight delay in the process is hardly felt by the user.
Given the difficulties found in the induction of a classifier for dimension Linearity-break, in future studies we intend to explore the entity-based model proposed by Barzilay and Lapata [6] for extracting new features that may be helpful for this dimension. In addition, during the training and testing of the classifiers, we performed a feature selection phase using the SMO algorithm with a view to removing redundant features, thus improving the classifiers' performance. However, we believe that further study of the SMO parameters could optimize their values and consequently provide even better classification results. In the near future, we also plan to experiment with algorithms from other approaches, such as Bayesian models and decision trees.
Another point to be considered in the classifiers' performance is the imbalance of classes. In future investigations, it is our intention to explore techniques for artificially balancing classes, such as Undersampling, which eliminates instances of the majority class [25], and Oversampling, which replicates instances of the minority class [11]. The impact of these techniques is worth analyzing since the manual annotation of a larger corpus has a high cost.
Another issue to be addressed in future studies is the adequacy of the dimensions to other sections of a scientific work, such as introduction and conclusion. In addition to differences related to rhetorical structure, these sections are usually longer and present more variations than abstracts in terms of both structure and content.
Finally, we conclude that the evaluations of the classifiers, both intrinsic and as part of CAM, demonstrate the potential of the proposed dimensions to support the writing of scientific abstracts. The experiment with actual users, although preliminary, has shown that CAM can provide relevant suggestions and offer potentially useful guidance for writing abstracts with a high level of coherence.
The SciPo system as described in Sect. 2 is available at http://www.nilc.icmc.usp.br/~scipo.
Document Archiving and Indexing System of the Computer Science Department, State University of Londrina, available at: http://www2.dc.uel.br/nourau/.
Digital Collections of the Library of Science and Technology, Federal University of Pelotas, available at: http://www.ufpel.tche.br/prg/sisbi/bibct/acervodigital.html.
Aizawa A (2001) Linguistic techniques to improve the performance of automatic text categorization. In: Proceedings of the 6th natural language processing pacific rim symposium (NLPRS-01), Tokyo, Japan, pp 307–314
Aluísio S, Schuster E, Feltrim VD, Pessoa A, Oliveira O (2005) Evaluating scientific abstracts with a genre-specific rubric. In: Proceedings of the 12th international conference on artificial intelligence in education (AIED 2005). IOS Press, Amsterdam, The Netherlands, pp 738–740
Aluísio SM, Barcelos I, Sampaio J, Oliveira ON Jr (2001) How to learn the many unwritten "rules of the game" of the academic discourse: a hybrid approach based on critiques and cases. In: Proceedings of the IEEE international conference on advanced learning technologies (ICALT 2001), pp 257–260
Aluísio SM, Oliveira ON Jr (1996) A detailed schematic structure of research paper introductions: an application in support-writing tools. Procesamiento del Lenguaje Nat 19:141–147
Anthony L, Lashkia GV (2003) Mover: a machine learning tool to assist in the reading and writing of technical papers. IEEE Trans Prof Commun 46(3):185–193
Barzilay R, Lapata M (2008) Modeling local coherence: an entity-based approach. Comput Linguist 34(1):1–34
Booth WC, Colomb CG, Williams JM (2000) A arte da pesquisa. Martins Fontes, Brazil
Broady E, Shurville S (2000) Developing academic writer: designing a writing environment for novice academic writers. In: Broady E (ed) Second language writing in a computer environment. CiLT/AFLS, London, pp 131–152
Burstein J, Chodorow M, Leacock C (2003) Criterionsm online essay evaluation: an application for automated evaluation of student essays. In: Proceedings of the 15th annual conference on innovative applications of artificial intelligence, pp 3–10
Burstein J, Tetreault J, Andreyev S (2010) Using entity-based features to model coherence in student essays. In: Proceedings of the human language technologies: the 2010 annual conference of the North American chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Los Angeles, pp 681–684
Chawla NV, Bowyer KW, Hall LD, Kegelmeyer WP (2002) SMOTE: synthetic minority over-sampling technique. J Artif Intell Res 16:321–357
de Beaugrande RA, Dressler WU (1981) Introduction to text linguistics. Longman Publisher Group, New York
Drucker H, Wu D, Vapnik VN (1999) Support vector machines for spam categorization. IEEE Trans Neural Netw 10(5):1048–1054
Elliot S (2003) Intellimetric: from here to validity. In: Shermis MD, Burstein JC (eds) Automatic essay scoring: a cross-disciplinary perspective. Lawrence Erlbaum Associates, Hillsdale, pp 71–86
Feltrim VD (2004) Uma abordagem baseada em córpus e em sistemas de crítica para a construção de ambientes web de auxílio à escrita acadêmica em português. Ph.D. thesis, Instituto de Computação e Matemática Computacional - Universidade de São Paulo, São Carlos, SP
Feltrim VD, Aluísio SM, Nunes MGV (2003) Analysis of the rhetorical structure of computer science abstracts in Portuguese. In: Archer D, Rayson P, Wilson A, McEnery T (eds) Proceedings of Corpus Linguistics 2003, UCREL Technical Papers, vol 16, part 1, special issue, pp 212–218
Feltrim VD, Teufel S, Nunes MGV, Aluísio SM (2006) Argumentative zoning applied to critiquing novices' scientific abstracts. In: Shanahan JG, Qu Y, Wiebe J (eds) Computing attitude and affect in text: theory and applications. Dordrecht, The Netherlands, pp 233–246
Foltz PW, Kintsch W, Landauer TK (1998) The measurement of textual coherence with latent semantic analysis. Discourse Processes 25:285–307
Higgins D, Burstein J (2007) Sentence similarity measures for essay coherence. In: Proceedings of the 7th International workshop on computational semantics (IWCS-7), Tilburg, The Netherlands, pp 1–12
Higgins D, Burstein J, Marcu D, Gentile C (2004) Evaluating multiple aspects of coherence in student essays. In: Human language technologies: the 2004 annual conference of the North American chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Boston, pp 185–192
Huckin T, Olsen LA (1991) Technical writing and professional communication for non-native speakers. McGraw-Hill Humanities/Social Sciences/Languages, New York
Keerthi SS, Shevade SK, Bhattacharyya C, Murthy KRK (2001) Improvements to Platt's SMO algorithm for SVM classifier design. Neural Comput 13(3):637–649
Koch IV, Travaglia LC (2003) A coerência textual. Editora Contexto, São Paulo
Kriegsman M, Barletta R (1993) Building a case-based help desk application. IEEE Expert Intell Syst Appl 8(6):18–26
Kubat M, Matwin S (1997) Addressing the curse of imbalanced training sets: One-sided selection. In: Machine learning: proceedings of the 14th international conference (ICML'97). Morgan Kaufmann, Nashville, pp 179–186
Landauer T, Laham D, Foltz P (2003) Automated scoring and annotation of essays with the intelligent essay assessor. In: Shermis MD, Burstein JC (eds) Automated essay scoring: a cross-disciplinary perspective. Lawrence Erlbaum Associates, New Jersey, USA, pp 87–112
Landauer TK, Foltz PW, Laham D (1998) Introduction to latent semantic analysis. Discourse Process 25:259–284
Lansman M, Smith JB, Weber I (1993) Using the "writing environment" to study writer's strategies. Comput Compos 10(2):71–92
Lapata M, Barzilay R (2005) Automatic evaluation of text coherence: models and representations. In: Proceedings of the international joint conferences on artificial intelligence, pp 1085–1090
Ljungstrand P, Johansson H (1998) Intranet indexing using semantic document clustering. Master's thesis, Department of Informatics, Göteborg University, Göteborg, Sweden
Narita M (2000) Constructing a tagged EJ parallel corpus for assisting Japanese software engineers in writing English abstracts. In: Proceedings of the 2nd international conference on language resources and evaluation (LREC 2000), pp 1187–1191
Narita M (2000) Corpus-based English language assistant to Japanese software engineers. In: Proceedings of MT-2000 machine translation and multilingual applications in the new millennium, pp 24-1–24-8
Pemberton L, Shurville S, Hartley T (1996) Motivating the design of a computer assisted environment for writers in a second language. In: Diaz de Ilarraza Sanchez A, Fernandez de Castro I (eds) Computer aided learning and instruction in science and engineering. Lecture Notes in Computer Science, vol 1108. Springer, Berlin, pp 141–148
Sharples M, Goodlet J, Clutterbuck A (1994) A comparison of algorithms for hypertext notes network linearization. Int J Hum Comput Stud 40(4):727–752
Sharples M, Pemberton L (1992) Representing writing: external representations and the writing process. In: Holt PO, Williams N (eds) Computers and writing: state of the art. Intellect, Oxford, pp 319–336
Shurville S, Hartley AF, Pemberton L (1997) A development methodology for composer: a computer support tool for academic writing in a second language. In: Knorr D, Jakobs E-M (eds) Text production in electronic environments. Lang, Frankfurt, pp 171–182
Siegel S, Castellan NJ Jr (1988) Nonparametric statistics for the behavioral sciences, 2nd edn. McGraw-Hill, Berkeley
Souza VMA, Feltrim VD (2011) An analysis of textual coherence in academic abstracts written in portuguese. In: Proceedings of 6th corpus linguistics conference: CL 2011, Birmingham, UK
Swales JM (1990) Genre analysis: English in academic and research settings. Cambridge applied linguistics series, Cambridge University Press, Cambridge
Teufel S, Moens M (2002) Summarising scientific articles - experiments with relevance and rhetorical status. Comput Linguist 28(4):409–446
van Dijk TA, Kintsch W (1983) Strategies of discourse comprehension. Academic Press, New York
Vapnik VN (2000) The nature of statistical learning theory. Springer Verlag, New York
Weissberg R, Buker S (1990) Writing up research: experimental research report Writing for students of English. Prentice Hall Regents, Englewood Cliffs
Witten IH, Frank E (2005) Data mining: practical machine learning tools and techniques. Morgan Kaufmann, San Francisco
We would like to thank the Brazilian Funding Body CNPq (Process # 135669/2009-0) for partially funding this research. We are also grateful to Danilo Machado Jr. for his invaluable work in the corpus annotation.
Informatics Department, State University of Maringá, Av. Colombo, 5.790, 87020-900, Maringá, PR, Brazil
Vinícius Mourão Alves de Souza & Valéria Delisandra Feltrim
Vinícius Mourão Alves de Souza
Valéria Delisandra Feltrim
Correspondence to Valéria Delisandra Feltrim.
This is a revised and extended version of a previous paper that appeared at STIL 2011, the 8th Brazilian Symposium in Information and Human Language Technology.
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
de Souza, V.M.A., Feltrim, V.D. A coherence analysis module for SciPo: providing suggestions for scientific abstracts written in Portuguese. J Braz Comput Soc 19, 59–73 (2013). https://doi.org/10.1007/s13173-012-0079-1
Received: 14 January 2012
Accepted: 10 May 2012
Issue Date: March 2013
Scientific writing in Portuguese
Semantic coherence
Automatic analysis of coherence
Latent Semantic Analysis | CommonCrawl |
Proceedings of the Japan Academy, Series A, Mathematical Sciences, Volume 83, Number 3 (2007), 19–20.
On the first layer of anti-cyclotomic $\mathbf {Z}_{p}$-extension of imaginary quadratic fields
Jangheon Oh
In this paper, we give an explicit description of the first layer of anti-cyclotomic $\mathbf{Z}_{p}$-extension of imaginary quadratic fields.
First available in Project Euclid: 9 April 2007
Primary: 11R23: Iwasawa theory
Iwasawa theory; anti-cyclotomic extension; Kummer extension; Minkowski unit
Oh, Jangheon. On the first layer of anti-cyclotomic $\mathbf {Z}_{p}$-extension of imaginary quadratic fields. Proc. Japan Acad. Ser. A Math. Sci. 83 (2007), no. 3, 19--20. doi:10.3792/pjaa.83.19. https://projecteuclid.org/euclid.pja/1176126884
Factoring a trivariate polynomial
I would appreciate some help with factoring a trivariate polynomial.
The polynomial in question is
$$p(x,y,z)=a_1 x^7+a_2 x^5y+a_3 x^3y^2+a_4 xy^3+a_5 x^4z+a_6 x^2yz+a_7 y^2z+a_8 xz^2,$$
where the coefficients are integers. I would like to assign values to these coefficients such that $p$ can be factored. For example, one possible factorization might be
$$p(x,y,z)=(x^3+b_1 xy+b_2 z)(b_3 x^4+b_4 x^2y+b_5 xz+b_6 y^2).$$
One can expand this factorization, equate the coefficients of the expansion with those of the above equation, and hope that the system of equations can be solved for integer $b$'s.
How can one find other possible factorizations? Is there an easy way?
polynomials diophantine-equations factoring
TKN
Let us suppose first that all the coefficients in your polynomial $p$ are non-zero.
Consider the set of vectors of $\mathbb R^3$ given by the rows of the following matrix: $$ \left( \begin{array}{ccc} 7 & 0 & 0 \\ 5 & 1 & 0 \\ 3 & 2 & 0 \\ 1 & 3 & 0 \\ 4 & 0 & 1 \\ 2 & 1 & 1 \\ 0 & 2 & 1 \\ 1 & 0 & 2 \end{array} \right) $$ which comes from the exponents which appear in the monomials in you polynomial $p$. All of them satisfy the equation $$(\Pi)\qquad\qquad x+2y+3z=7,$$ and this tells us that the convex hull of those eight points is actually a polygon contained in a plane. To be able to make pictures more easily, I will project down to one of the coordinate planes: the projection $\phi:\mathbb R^3\to \mathbb R^2$ on the last two coordinates is injective and affine on that plane (and «it corresponds to looking at $\mathbb R^3$ from above»). The image of our points under this projections are, of course, the rows of $$ \left( \begin{array}{ccc} 0 & 0 \\ 1 & 0 \\ 2 & 0 \\ 3 & 0 \\ 0 & 1 \\ 1 & 1 \\ 2 & 1 \\ 0 & 2 \end{array} \right) $$ and the convex hull $P_p$ of these in the plane is the following polygon:
We can clearly do this construction from any polynomial $f$ whose exponent vectors lie on a plane parallel to $\Pi$ to obtain a polygon $P_f$.
Suppose now that $p=gh$ is a factorization of your polynomial. An easy consequence of the way polynomials are multiplied is the following:
Lemma. The exponent vectors of $g$ lie on a plane parallel to $\Pi$ and the same, of course, applies to $h$.
We therefore have two polygons $P_g$ and $P_h$. Now the key observation:
Lemma. We have $P_p=P_g+P_h$.
Here the sum is the so-called Minkowski sum of subsets of the plane.
This last lemma imposes very severe limitations on the shape of the polynomials $g$ and $h$ which can appear in a factorization of $p$.
Indeed, a little thinking will give you the list of all possible pairs of polygons $(A,B)$ whose vertices are points of the plane with non-negative integer coordinates and such that $P_p=A+B$. For each such pair, you get the form of the factors of a possible factorization of $p$.
Now, the factorization you proposed corresponds to the pair of polygons
and in fact this is the only decomposition of $P_p$ as a Minkowski sum (to see this, consider for example the possible heights and widths of the two summands, &c). It follows that assuming the coefficients of $x^7$, $xy^3$, $y^2z$ and $xz^2$ are non-zero, any possible factorization has the shape you found. If one of those coefficients vanishes, then the shape of $P_p$ would be different, and one has to consider more cases—notice that the vanishing of the coefficient of $x^3yz$ does not matter, if the other coefficients are non-zero, because it is in the interior of our polygon.
N.B.: It is easy to see that the polygons corresponding to the shapes of the two factors you found cannot be written as a Minkowski sum. From this it follows that (assuming their coefficients corresponding to the vertices of those polygons do not vanish—and in fact they can't if the same holds for $p$) the two factors you have found are irreducible.
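The exponent bookkeeping in this answer is easy to verify mechanically. A small sympy sketch, with all coefficients set to 1 as stand-ins for arbitrary nonzero integers:

```python
# Extract the exponent vectors of p, verify they lie on the plane
# x + 2y + 3z = 7, and project onto the last two coordinates.
from sympy import symbols, Poly

x, y, z = symbols("x y z")
p = Poly(
    x**7 + x**5*y + x**3*y**2 + x*y**3
    + x**4*z + x**2*y*z + y**2*z + x*z**2,
    x, y, z,
)

exponents = p.monoms()  # list of (i, j, k) exponent triples
assert all(i + 2*j + 3*k == 7 for i, j, k in exponents)

projected = [(j, k) for _, j, k in exponents]
print(projected)  # the convex hull of these points is the polygon P_p
```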
Mariano Suárez-Álvarez
$\begingroup$ There are several details missing before this can be considered an explanation... but hopefully the idea gets through :D $\endgroup$ – Mariano Suárez-Álvarez Feb 1 '12 at 7:45
$\begingroup$ A keyword to find information of this type of construction is Newton polytopes. $\endgroup$ – Mariano Suárez-Álvarez Feb 1 '12 at 7:48
Mechanism of Common-mode Noise Generation in Multi-conductor Transmission Lines
Souma Jinno ORCID: orcid.org/0000-0002-1533-58531,
Shuji Kitora1,
Hiroshi Toki1 &
Masayuki Abe ORCID: orcid.org/0000-0001-5619-39111
Scientific Reports volume 9, Article number: 15036 (2019)
The mechanism of common-mode noise generation in multi-conductor transmission lines is presented. Telegraphic equations, wave equations, and reflection coefficients in the normal and common modes are derived, which provide the mechanism of common-mode noise generation. In addition to coupling among transmission lines, the origin of the common-mode noise generation is elucidated by deriving the reflection coefficients in the normal and common modes.
In the fabrication of electric and electronic circuits, noise analysis is an important process to improve the performance of original designs. Electromagnetic (EM) noise among the transmission lines in the propagation of electric signals is a troublesome problem. Although multi-conductor transmission line (MTL) theory should enable the interference of signals among distributed lines to be described1,2,3,4,5,6,7,8,9,10, there remain difficulties in obtaining an intuitive interpretation of noise and in reducing the noise significantly.
From a physical point of view, electromagnetic noise originates from interactions between a circuit and its environment, such as the ground and the earth. However, the environment in an MTL system is usually represented theoretically as an ideal reference frame, which does not have any physical function. It is important to treat conductors in the environment explicitly for the quantitative understanding of electromagnetic noise. If we approximate the conductors in the environment as a transmission line, we are able to formulate a telegraph equation representing potentials and currents in MTLs from the Maxwell equations11. In a previous study, the concepts of the normal mode (NM) and the common mode (CM) were introduced in order to obtain an intuitive understanding of EM noise. The NM is usually treated as a circuit signal, and the CM is generated by interaction with the environment and causes various noises. Originally, the concepts of the NM and CM voltages were discussed by Maxwell in his book12. He pointed out that the observable quantity is the normal mode, which is the differential mode, and did not pay much attention to the common mode, even saying that the common mode had "no physical meaning". However, these two modes interact with each other, and electric signals are interchanged from one to the other13,14. This interaction would induce CM noise and even generate unwanted external radiation15,16,17,18,19.
One of the successful cases of noise reduction in actual applications is the power supply of accelerators for particle physics20,21. These power supplies have a symmetrical configuration of three transmission lines together with lumped circuits, which suppresses the interference of the normal and common modes in the power lines22. The symmetric three-line circuit was verified to be the best configuration for the reduction of noise, where the normal mode decouples from the common mode11. We formulated a three-line circuit for numerical calculations and showed its relation to the CM noise23; an electrically and geometrically symmetric circuit structure does not generate CM noise and does not couple with the normal mode24.
Previously, we showed numerically that the symmetric three-line circuit is the best configuration for noise reduction. However, it is desirable to understand why this configuration is the best and to know what is important in the actual choice of electric components. To this end, we would like to formulate the reflection coefficients of the three-line circuit for any electric components attached to it. In this way, we are able to quantify the amount of common-mode noise in three-line circuits.
In the present paper, we introduce an analytical approach that supports further understanding of the electromagnetic noise induced in three-conductor transmission-line circuits. In Section 2, we introduce a formulation in the NM and CM framework for all of the quantities describing the circuit. We apply the formula to three-line systems and define vectors and matrices in the NM and CM quantities, which provide an intuitive understanding of the CM noise generation at the MTL boundaries. Equations and parameters of the signal propagation are expressed in the NM and CM framework. We found that coupling between the NM and CM modes occurs not only along the MTL, but also at the edges of the MTL, and the coupling at the edges also depends on the lumped-parameter circuit configuration. In Section 3, we discuss the configurations of both the distributed- and lumped-parameter circuits for the reduction of the CM noise and calculate explicitly the amount of coupling for several circuit configurations using the formula developed in the present paper. Section 4 summarizes the present study.
Transmission-Line Theory in Terms of the Normal and Common Modes of a Three-Line Circuit
Telegraphic and wave equations
In order to formulate reflection coefficients in the NM and CM, as in our previous studies11, we use a three-line circuit to which lumped-parameter circuits are connected, as shown in Fig. 1. In addition to a conventional two-line circuit configuration (distributed lines 1−1′ and 2−2′ with lumped parts), another conductor line, 3−3′, is connected on the source side as the ground. The NM is expressed as a relative relationship between two lines of the circuit and is usually used in circuit theory as the "observable" difference mode:
$${V}_{{\rm{n}}}={U}_{1}-{U}_{2}$$
$${I}_{{\rm{n}}}=\frac{1}{2}({I}_{1}-{I}_{2})$$
Three-line circuit consisting of three lines, 1 − 1′, 2 − 2′, and 3 − 3′, with resistances R12 and R23 and a power supply denoted by a circled ± on the source side and resistance R′12 on the load side.
Since the CM is generated by the interaction between the circuit and the surrounding environment, line 3−3′ should be explicitly considered as a conductor influencing the signal lines. In the same way as the NM, the CM can be formulated by considering the difference mode between the circuit and the environment, where we express the circuit by the average potential and the sum of the currents of the two lines22:
$${V}_{{\rm{c}}}=\frac{1}{2}({U}_{1}+{U}_{2})-{U}_{3}$$
$${I}_{{\rm{c}}}=\frac{1}{2}({I}_{1}+{I}_{2}-{I}_{3})$$
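Written out, Eqs. (1)–(4) amount to a single linear change of variables from the line potentials and currents to the NM-CM quantities:
$$\begin{aligned}\left(\begin{array}{c}{V}_{{\rm{n}}}\\ {V}_{{\rm{c}}}\end{array}\right)=\left(\begin{array}{ccc}1 & -1 & 0\\ \frac{1}{2} & \frac{1}{2} & -1\end{array}\right)\left(\begin{array}{c}{U}_{1}\\ {U}_{2}\\ {U}_{3}\end{array}\right),\qquad \left(\begin{array}{c}{I}_{{\rm{n}}}\\ {I}_{{\rm{c}}}\end{array}\right)=\left(\begin{array}{ccc}\frac{1}{2} & -\frac{1}{2} & 0\\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2}\end{array}\right)\left(\begin{array}{c}{I}_{1}\\ {I}_{2}\\ {I}_{3}\end{array}\right).\end{aligned}$$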
The NM-CM expression of the potentials and currents under the TEM approximation leads to telegraphic equations for the NM-CM quantities11.
$$\frac{\partial {V}_{{\rm{n}}}(x,t)}{\partial t}=-{P}_{{\rm{nn}}}\frac{\partial {I}_{n}(x,t)}{\partial x}-{P}_{{\rm{nc}}}\frac{\partial {I}_{{\rm{c}}}(x,t)}{\partial x},$$
$$\frac{\partial {V}_{{\rm{c}}}(x,t)}{\partial t}=-{P}_{{\rm{cn}}}\frac{\partial {I}_{n}(x,t)}{\partial x}-{P}_{{\rm{cc}}}\frac{\partial {I}_{{\rm{c}}}(x,t)}{\partial x},$$
$$\frac{\partial {V}_{{\rm{n}}}(x,t)}{\partial x}=-{L}_{{\rm{nn}}}\frac{\partial {I}_{n}(x,t)}{\partial t}-{L}_{{\rm{nc}}}\frac{\partial {I}_{{\rm{c}}}(x,t)}{\partial t},$$
$$\frac{\partial {V}_{{\rm{c}}}(x,t)}{\partial x}=-{L}_{{\rm{cn}}}\frac{\partial {I}_{n}(x,t)}{\partial t}-{L}_{{\rm{cc}}}\frac{\partial {I}_{{\rm{c}}}(x,t)}{\partial t}.$$
Here, the three lines are assumed to be lossless and to lie in a uniform medium, where both the permittivity and the magnetic permeability are constant. Furthermore, we assume that there is no radiation from the circuit, I1 + I2 + I3 = 022. P and L represent the potential and inductance coefficients, respectively. We use P instead of the capacitance, which is conventionally used in the one-dimensional telegraphic equation, because we need to consider potentials induced by charge for further discussion25,26. The subscripts denote the diagonal and coupling terms of the normal and common modes11.
The telegraphic equations indicate that the potentials and currents propagate in the conductor lines accompanying the NM and CM couplings, unless the decoupling condition Pcn = Pnc = Lcn = Lnc = 0 is satisfied. Physically, the decoupling conditions are that lines 1−1′ and 2−2′ have the same sizes and are positioned symmetrically with respect to line 3−3′24.
Definitions of vectors and matrices in the NM-CM quantities
In order to understand the NM-CM coupling, we define the vectors and matrices in the NM-CM quantities.
$$\begin{array}{c}{\bf{V}}(x,t)=(\begin{array}{c}{V}_{{\rm{n}}}(x,t)\\ {V}_{{\rm{c}}}(x,t)\end{array}),\,{\bf{I}}(x,t)=(\begin{array}{c}{I}_{{\rm{n}}}(x,t)\\ {I}_{{\rm{c}}}(x,t)\end{array}),\\ {\bf{P}}=(\begin{array}{cc}{P}_{{\rm{n}}{\rm{n}}} & {P}_{{\rm{n}}{\rm{c}}}\\ {P}_{{\rm{c}}{\rm{n}}} & {P}_{{\rm{c}}{\rm{c}}}\end{array}),\,{\bf{L}}=(\begin{array}{cc}{L}_{{\rm{n}}{\rm{n}}} & {L}_{{\rm{n}}{\rm{c}}}\\ {L}_{{\rm{c}}{\rm{n}}} & {L}_{{\rm{c}}{\rm{c}}}\end{array}).\end{array}$$
Here, V and I are vectors of the NM and CM quantities, respectively. In addition, P and L are matrices of the potential and inductance coefficients, which were derived in a previous study11.
Definitions of the respective vectors and matrices provide wave equations in the NM-CM quantities.
$$\frac{{\partial }^{2}}{\partial {x}^{2}}{\bf{V}}(x,t)=\frac{1}{{c}^{2}}\frac{{\partial }^{2}}{\partial {t}^{2}}{\bf{V}}(x,t),$$
$$\frac{{\partial }^{2}}{\partial {x}^{2}}{\bf{I}}(x,t)=\frac{1}{{c}^{2}}\frac{{\partial }^{2}}{\partial {t}^{2}}{\bf{I}}(x,t\mathrm{).}$$
Here, c is the signal velocity in the MTL, and the coefficient matrices satisfy the relation \({\bf{L}}{{\bf{P}}}^{-1}={{\bf{P}}}^{-1}{\bf{L}}=\tfrac{1}{{c}^{2}}{\bf{1}}\), where \({\bf{1}}\) is an identity matrix22.
The reflection coefficients at the boundaries of the MTL depend both on the MTL configurations and on the elements of the lumped circuits connected to the MTL. Here, in order to derive the reflection coefficients in the NM-CM quantities, we study a case in which resistances and voltage sources are connected to the edges of the three-line circuit, as shown in Fig. 2. We derive the NM and CM equations of the lumped circuit, which forms the boundary of the MTL, by using Kirchhoff's equations and Ohm's law. First, the currents flowing at the boundary (I1(0, t), I2(0, t) and I3(0, t)) are expressed by the branch currents in the lumped circuit:
$${I}_{1}(0,t)=-{I}_{13}(t)-{I}_{12}(t)$$
$${I}_{2}(0,t)={I}_{12}(t)-{I}_{23}(t)$$
$${I}_{3}(0,t)={I}_{23}(t)+{I}_{13}(t)$$
Lumped-parameter circuit consisting of resistances and voltage sources connected to a three-line MTL. By substituting V13 = V23 = 0 and R13 → ∞, the circuit becomes equal to the source (left) side of Fig. 1. The right-hand boundary circuit of Fig. 1 is obtained by setting V12 = V13 = V23 = 0 and letting R13 → ∞ and R23 → ∞.
Next, the potentials at the boundary (U1(0, t), U2(0, t) and U3(0, t)) are also expressed by the lumped-circuit quantities, namely resistances, branch currents, and voltage sources:
$${U}_{1}(0,t)-{U}_{2}(0,t)={R}_{12}{I}_{12}(t)+{V}_{12}(t),$$
with analogous relations for the 1−3 and 2−3 branches.
We can now rewrite these boundary conditions in terms of the NM and CM voltages and currents expressed in Eqs (1, 2, 3, 4). We write the results in a matrix form as:
$${\bf{V}}(x=\mathrm{0,}\,t)=-\,{\bf{RI}}\mathrm{(0,}\,t)+{\bf{E}}(t\mathrm{).}$$
Here, R and E(t) are a resistance matrix and a voltage source vector in the NM and CM quantities, respectively. Then, R is given as
$${\bf{R}}=(\begin{array}{cc}{R}_{{\rm{n}}{\rm{n}}} & {R}_{{\rm{n}}{\rm{c}}}\\ {R}_{{\rm{c}}{\rm{n}}} & {R}_{{\rm{c}}{\rm{c}}}\end{array}),$$
$${R}_{{\rm{nn}}}=\frac{{R}_{12}({R}_{23}+{R}_{13})}{{R}_{12}+{R}_{23}+{R}_{13}},$$
$${R}_{{\rm{cc}}}=\frac{1}{4}\frac{4{R}_{23}{R}_{13}+{R}_{12}{R}_{23}+{R}_{13}{R}_{12}}{{R}_{12}+{R}_{23}+{R}_{13}},$$
$${R}_{{\rm{nc}}}={R}_{{\rm{cn}}}=\frac{1}{2}\frac{{R}_{12}({R}_{13}-{R}_{23})}{{R}_{12}+{R}_{23}+{R}_{13}}.$$
Here, E(t) is expressed as
$${\bf{E}}={\bf{R}}{{\bf{I}}}_{S},$$
$${{\bf{I}}}_{S}=(\begin{array}{c}{V}_{12}/{R}_{12}-{V}_{23}/2{R}_{23}+{V}_{13}/2{R}_{13}\\ {V}_{23}/{R}_{23}+{V}_{13}/{R}_{13}\end{array}).$$
As for the lumped-parameter part at x = L, the relations connecting the nodes can be transformed into the NM and CM quantities in a similar manner.
$${\bf{V}}(x=L,t)={\bf{RI}}(L,t)+{\bf{E}}(t).$$
Reflection coefficients
In a manner usually used in the recursive solution for the MTL7, we use boundary conditions that satisfy Eqs (10), (11), (18), and (25) at both edges (x = 0, L), and obtain the time-domain expression for V:
$$\begin{array}{rcl}{\bf{V}}\mathrm{(0,}\,t) & = & {{\bf{M}}}_{0}{{\bf{V}}}_{0}(t)\\ & & +\mathrm{(1}+{{\boldsymbol{\Gamma }}}_{0})[{{\bf{M}}}_{L}{{\bf{V}}}_{L}(t-{T}_{D})+{{\boldsymbol{\Gamma }}}_{L}{{\boldsymbol{\Gamma }}}_{0}{{\bf{M}}}_{L}{{\bf{V}}}_{L}(t-3{T}_{D})\\ & & \,\,\,\,\,\,+\cdots +{({{\boldsymbol{\Gamma }}}_{L}{{\boldsymbol{\Gamma }}}_{0})}^{N}{{\bf{M}}}_{L}{{\bf{V}}}_{L}(t-\mathrm{(2}N+\mathrm{1)}{T}_{D})]\\ & & +\mathrm{(1}+{{\boldsymbol{\Gamma }}}_{0}){{\boldsymbol{\Gamma }}}_{L}[{{\bf{M}}}_{0}{{\bf{V}}}_{0}(t-2{T}_{D})+{{\boldsymbol{\Gamma }}}_{0}{{\boldsymbol{\Gamma }}}_{L}{{\bf{M}}}_{0}{{\bf{V}}}_{0}(t-4{T}_{D})\\ & & \,\,\,\,\,\,+\cdots +{({{\boldsymbol{\Gamma }}}_{0}{{\boldsymbol{\Gamma }}}_{L})}^{N}{{\bf{M}}}_{S}{{\bf{V}}}_{0}(t-\mathrm{2(}N+\mathrm{1)}{T}_{D})]\end{array}$$
$$\begin{array}{rcl}{\bf{V}}(L,t) & = & {{\bf{M}}}_{L}{{\bf{V}}}_{L}(t)\\ & & +({\bf{1}}+{{\boldsymbol{\Gamma }}}_{L}){{\boldsymbol{\Gamma }}}_{0}[{{\bf{M}}}_{L}{{\bf{V}}}_{L}(t-2{T}_{D})+{{\boldsymbol{\Gamma }}}_{L}{{\boldsymbol{\Gamma }}}_{0}{{\bf{M}}}_{L}{{\bf{V}}}_{L}(t-4{T}_{D})+\cdots +{({{\boldsymbol{\Gamma }}}_{L}{{\boldsymbol{\Gamma }}}_{0})}^{N}{{\bf{M}}}_{L}{{\bf{V}}}_{L}(t-2(N+1){T}_{D})]\\ & & +({\bf{1}}+{{\boldsymbol{\Gamma }}}_{L})[{{\bf{M}}}_{0}{{\bf{V}}}_{0}(t-{T}_{D})+{{\boldsymbol{\Gamma }}}_{0}{{\boldsymbol{\Gamma }}}_{L}{{\bf{M}}}_{0}{{\bf{V}}}_{0}(t-3{T}_{D})+\cdots +{({{\boldsymbol{\Gamma }}}_{0}{{\boldsymbol{\Gamma }}}_{L})}^{N}{{\bf{M}}}_{0}{{\bf{V}}}_{0}(t-(2N+1){T}_{D})]\end{array}$$
Here, the subscripts 0 and L on vectors and matrices denote the positions x = 0 and x = L of the MTL, respectively. In addition, TD is the time for a signal to propagate from one end to the other. Although the amplitude of the reflected wave depends on the characteristic impedances and resistances, we introduce the matrices M and Γ in Eqs (26) and (27) in order to express the equations in the same form as the reflection formulae of ordinary transmission line theory7.
$${\bf{M}}={\bf{Z}}{({\bf{R}}+{\bf{Z}})}^{-1}$$
$${\boldsymbol{\Gamma }}=({\bf{R}}-{\bf{Z}})({\bf{R}}+{\bf{Z}}{)}^{-1}$$
Here, Z is the impedance matrix in the NM-CM framework and is given as
$${\bf{Z}}=(\begin{array}{cc}{Z}_{{\rm{n}}{\rm{n}}} & {Z}_{{\rm{n}}{\rm{c}}}\\ {Z}_{{\rm{c}}{\rm{n}}} & {Z}_{{\rm{c}}{\rm{c}}}\end{array})={\bf{P}}/c={\bf{L}}c.$$
Note that the introduction of these matrices allows us to discuss wave reflection in the NM-CM framework in the standard way. The formulations of these matrices are similar to those defined in standard MTL theory7,9: M and Γ correspond to the voltage division coefficient and the reflection coefficient, respectively. The matrix elements of the voltage division M and the reflection Γ are given as follows:
$${\bf{M}}=(\begin{array}{cc}{M}_{{\rm{n}}{\rm{n}}} & {M}_{{\rm{n}}{\rm{c}}}\\ {M}_{{\rm{c}}{\rm{n}}} & {M}_{{\rm{c}}{\rm{c}}}\end{array}),$$
$${\boldsymbol{\Gamma }}=(\begin{array}{cc}{\Gamma }_{{\rm{n}}{\rm{n}}} & {\Gamma }_{{\rm{n}}{\rm{c}}}\\ {\Gamma }_{{\rm{c}}{\rm{n}}} & {\Gamma }_{{\rm{c}}{\rm{c}}}\end{array}).$$
These elements are derived from Eqs (28) and (29):
$${M}_{{\rm{nn}}}=\frac{1}{A}[{Z}_{{\rm{nn}}}({R}_{{\rm{cc}}}+{Z}_{{\rm{cc}}})-{Z}_{{\rm{nc}}}({R}_{{\rm{cn}}}+{Z}_{{\rm{cn}}})],$$
$${M}_{{\rm{n}}{\rm{c}}}=\frac{1}{A}[{Z}_{{\rm{n}}{\rm{c}}}{R}_{{\rm{n}}{\rm{n}}}-{Z}_{{\rm{n}}{\rm{n}}}{R}_{{\rm{n}}{\rm{c}}}],$$
$${M}_{{\rm{cn}}}=\frac{1}{A}[{Z}_{{\rm{cn}}}{R}_{{\rm{cc}}}-{Z}_{{\rm{cc}}}{R}_{{\rm{cn}}}],$$
$${M}_{{\rm{cc}}}=\frac{1}{A}[\,-\,{Z}_{{\rm{cn}}}({R}_{{\rm{nc}}}+{Z}_{{\rm{nc}}})+{Z}_{{\rm{cc}}}({R}_{{\rm{nn}}}+{Z}_{{\rm{nn}}})],$$
$${\Gamma }_{{\rm{nn}}}=\frac{1}{A}[({R}_{{\rm{nn}}}-{Z}_{{\rm{nn}}})({R}_{{\rm{cc}}}+{Z}_{{\rm{cc}}})-({R}_{{\rm{nc}}}-{Z}_{{\rm{nc}}})({R}_{{\rm{cn}}}+{Z}_{{\rm{cn}}})],$$
$${\Gamma }_{{\rm{nc}}}=\frac{2}{A}[{Z}_{{\rm{nn}}}{R}_{{\rm{nc}}}-{Z}_{{\rm{nc}}}{R}_{{\rm{nn}}}],$$
$${\Gamma }_{{\rm{cn}}}=\frac{2}{A}[{Z}_{{\rm{cc}}}{R}_{{\rm{cn}}}-{Z}_{{\rm{cn}}}{R}_{{\rm{cc}}}],$$
$${\Gamma }_{{\rm{c}}{\rm{c}}}=\frac{1}{A}[\,-\,({R}_{{\rm{c}}{\rm{n}}}-{Z}_{{\rm{c}}{\rm{n}}})({R}_{{\rm{n}}{\rm{c}}}+{Z}_{{\rm{n}}{\rm{c}}})+({R}_{{\rm{c}}{\rm{c}}}-{Z}_{{\rm{c}}{\rm{c}}})({R}_{{\rm{n}}{\rm{n}}}+{Z}_{{\rm{n}}{\rm{n}}})],$$
$$A=({R}_{{\rm{nn}}}+{Z}_{{\rm{nn}}})({R}_{{\rm{cc}}}+{Z}_{{\rm{cc}}})-({R}_{{\rm{nc}}}+{Z}_{{\rm{nc}}})({R}_{{\rm{cn}}}+{Z}_{{\rm{cn}}}).$$
Both M and Γ depend on R, which is determined by the lumped-parameter circuit connected to the MTL at the boundary, and on Z, which is determined by the structure of the distributed-parameter circuit.
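These element-wise expressions can be cross-checked against the compact matrix forms M = Z(R + Z)^-1 and Γ = (R − Z)(R + Z)^-1. Below is a minimal R sketch of that check; the numerical entries of R and Z are assumptions for illustration only.

# Cross-check of the closed-form elements against the matrix definitions.
# The entries of R and Z below are assumed values, not taken from the paper.
Z <- matrix(c(350, 40, 40, 90), 2, 2)   # assumed NM-CM impedance matrix (ohms)
R <- matrix(c(48, 10, 10, 75), 2, 2)    # assumed NM-CM resistance matrix (ohms)
M <- Z %*% solve(R + Z)                 # Eq. (28)
G <- (R - Z) %*% solve(R + Z)           # Eq. (29)
A   <- (R[1,1]+Z[1,1])*(R[2,2]+Z[2,2]) - (R[1,2]+Z[1,2])*(R[2,1]+Z[2,1])
Gnn <- ((R[1,1]-Z[1,1])*(R[2,2]+Z[2,2]) - (R[1,2]-Z[1,2])*(R[2,1]+Z[2,1])) / A
stopifnot(abs(Gnn - G[1,1]) < 1e-12)    # element formula agrees with Eq. (29)

Setting the off-diagonal entries of both R and Z to zero makes the off-diagonal entries of M and Γ vanish, which is the decoupling condition exploited below.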
Discussion on the Reduction of Electromagnetic Noise
The derivation of the above formulae provides insight into CM noise generation in the MTL. There are two origins of CM noise generation. First, signal propagation induces CM noise in the MTL. The wave equations (Eqs (10) and (11)) show that the NM and CM currents and voltages appear uncoupled, even if the non-diagonal elements of P and L are nonzero. The telegraph equations (Eqs (5) through (8)), however, exhibit NM-CM coupling. Note that this coupling occurs between current and voltage in each mode. This means that, in CM noise generation, voltage-to-voltage or current-to-current coupling may not be strong; it is the voltage-to-current (or current-to-voltage) conversion that complicates the understanding of EM noise phenomena.
Second, CM noise generation occurs at the boundaries. The second term of Γnn indicates that NM-CM coupling also occurs at the boundaries. Common-mode noise is always generated when the signal arrives at the boundaries, unless the coupling terms are zero. Although the condition Rnc − Znc = 0 would decouple the two modes, in practice it is almost impossible to adjust Rnc and Znc to be equal so as to satisfy this condition.
One practical solution for NM-CM decoupling is to simultaneously satisfy Rnc = 0 and Znc = 0, which is satisfied by a symmetrical arrangement of the MTL and lumped elements, as shown in Fig. 3. Line 3−3′ is positioned symmetrically at the center between lines 1−1′ and 2−2′ (d13 = d12/2). The lumped-parameter parts should also be connected symmetrically to the MTL.
Three-line MTL circuit without NM and CM mode coupling.
The matrix elements of M and Γ at both ends can be obtained by setting V13 = V23 = 0 and R13 = R23 on the source side, and V12 = V13 = V23 = 0, R12, R13 → ∞ and R′12 to be the characteristic impedance between lines 1−1′ and 2−2′ on the load side.
$${{\bf{M}}}_{0}=(\begin{array}{cc}{Z}_{{\rm{n}}{\rm{n}}}/({R}_{{\rm{n}}{\rm{n}}}+{Z}_{{\rm{n}}{\rm{n}}}) & 0\\ 0 & 0\end{array}),$$
$${{\boldsymbol{\Gamma }}}_{0}=(\begin{array}{cc}({R}_{{\rm{n}}{\rm{n}}}-{Z}_{{\rm{n}}{\rm{n}}})/({R}_{{\rm{n}}{\rm{n}}}+{Z}_{{\rm{n}}{\rm{n}}}) & 0\\ 0 & ({R}_{{\rm{c}}{\rm{c}}}-{Z}_{{\rm{c}}{\rm{c}}})/({R}_{{\rm{c}}{\rm{c}}}+{Z}_{{\rm{c}}{\rm{c}}})\end{array}),$$
$${{\bf{M}}}_{L}=(\begin{array}{cc}{Z}_{{\rm{n}}{\rm{n}}}/({R}_{{\rm{n}}{\rm{n}}}+{Z}_{{\rm{n}}{\rm{n}}}) & 0\\ 0 & 0\end{array}),$$
$${{\boldsymbol{\Gamma }}}_{L}=(\begin{array}{cc}0 & 0\\ 0 & 1\end{array}).$$
Since the symmetrical arrangement of MTL and lumped elements is satisfied, the non-diagonal (coupling) elements of R and Z are zero on both sides, which also results in zero values in the non-diagonal elements of M and Γ.
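With these boundary matrices, the truncated time-domain series of Eqs (26) and (27) is straightforward to evaluate. The R sketch below does so for the decoupled configuration; the resistance, impedance, and delay values and the unit-step source waveform are assumptions for illustration, with the NM load taken as matched (R′nn = Znn) so that ΓL has the form given above.

# Truncated evaluation of the reflection series, Eq. (26), for the symmetric
# (decoupled) configuration. All numerical values are illustrative assumptions.
Rnn <- 50; Znn <- 350; Rcc <- 80; Zcc <- 90   # source-side values (assumed, ohms)
TD  <- 1e-8; N <- 20                          # one-way delay (s) and series cutoff
M0  <- matrix(c(Znn/(Rnn + Znn), 0, 0, 0), 2, 2)
ML  <- matrix(c(0.5, 0, 0, 0), 2, 2)          # matched NM load: Znn/(Znn + Znn)
G0  <- diag(c((Rnn - Znn)/(Rnn + Znn), (Rcc - Zcc)/(Rcc + Zcc)))
GL  <- diag(c(0, 1))                          # NM matched, CM open at x = L
V0fun <- function(t) c(ifelse(t >= 0, 1, 0), 0)   # unit-step NM source, no CM
VLfun <- function(t) c(0, 0)                      # no source at x = L
I2   <- diag(2)
mpow <- function(A, n) {                      # small helper: matrix power A^n
  P <- diag(nrow(A)); while (n > 0) { P <- P %*% A; n <- n - 1 }; P
}
V_at_0 <- function(t) {
  v <- M0 %*% V0fun(t)
  for (n in 0:N) {
    v <- v + (I2 + G0) %*% mpow(GL %*% G0, n) %*% ML %*% VLfun(t - (2*n + 1)*TD)
    v <- v + (I2 + G0) %*% GL %*% mpow(G0 %*% GL, n) %*% M0 %*% V0fun(t - 2*(n + 1)*TD)
  }
  v
}
V_at_0(5e-8)   # NM and CM components of V(0, t); the CM component stays zero

Because all four boundary matrices are diagonal and the assumed source carries no CM component, every term of the series leaves the CM entry at zero, which is the decoupling claim made above; with the NM load matched, the series in fact collapses to the incident term.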
We explicitly calculate the reflection coefficients Γ0 and ΓL using the formulae developed herein for various circuit configurations. The results are shown in Fig. 4, where the matrix elements of Γ0 (x = 0) and ΓL (x = L) are shown as the distance d13 between lines 1−1′ and 3−3′ is varied. As shown in Fig. 4(a,b) for the symmetric circuit of Fig. 3, the coupling elements of Γ0 and ΓL simultaneously become zero at d13 = 0.01 m, which is the central position between lines 1−1′ and 2−2′. This means that CM noise is not generated on either side in this case, because the coupling elements are simultaneously zero. On the other hand, as shown in Fig. 4(c,d) for the asymmetric circuit of Fig. 1, the coupling elements of Γ0 never become zero. This means that CM noise is always generated on the source side, because the coupling terms of the resistance matrix R0 always have finite values due to the asymmetric lumped-circuit connection with respect to the ground line 3−3′ on the source side, as shown in Fig. 1. Here, ΓL has the same behavior as for the circuit in Fig. 3 because both circuits have the same condition on the load side. Only Γcn depends on d13 and becomes zero at d13 = 0.01 m. On the other hand, Γnc is always zero because Rcc becomes infinite on the load side, since line 3−3′ is not connected to the other lines there. This means that only NM is converted to CM, and not vice versa, on the load side. The above results confirm that the symmetrical configuration of both the lumped- and distributed-parameter circuits is the only way to eliminate CM generation.
Matrix elements of Γ0 (x = 0) and ΓL (x = L) as functions of the distance d13 between lines 1 − 1′ and 3 − 3′. Two different circuits equivalent to symmetrical (Fig. 3) and asymmetrical (Fig. 1) circuits are used. The distance between lines 1 − 1′ and 2 − 2′ is d12 = 0.02 m in each circuit. The length and radius of the three lines are L = 2 m and 0.5 mm, respectively. The values of the resistances are R12 = 50 Ω, R23 = 5 Ω, R13 = R23 = 300 Ω, and R12′ = 349.715 Ω, which is equivalent to the characteristic impedance between lines 1 − 1′ and 2 − 2′(=Znn).
We studied the mechanism of electromagnetic noise generation in symmetric and asymmetric three-line circuits. Telegraph equations, wave equations, and reflection coefficients were derived in the framework of the normal and common modes. The formulation in terms of the normal and common modes provides a clear interpretation of CM noise. We showed that there are two origins of electromagnetic noise generation: (1) coupling during signal propagation in the MTL and (2) coupling at the boundaries of the MTL connected to the lumped-parameter elements.
Although we can discuss CM noise generation analytically in one-dimensional (multiconductor transmission line) problems, this becomes impossible in two or three dimensions. In those cases we are not able to define the common mode mathematically, because two- or three-dimensional conductors support wave propagation in two or three spatial directions, respectively. Hence, we are developing numerical methods by extending the MTL formulation23 to two- or three-dimensional circuits for noise analysis of a plane circuit27.
Bewley, L. Y. Traveling waves on transmission systems. Trans. Am. Inst. Electr. Eng. 50, 532–550 (1931).
Marx, K. D. Propagation Modes, equivalent circuits and characteristic terminations for multiconductor transmission lines with inhomogeneous dielectrics. IEEE Trans. Microwave Theory Tech. 21, 450–457 (1973).
Dommel, H. W. & Meyer, W. S. Computation of electromagnetic transients. Proc. IEEE 62, 983–993 (1974).
Kami, Y. & Sato, R. Circuit-concept approach to externally excited transmission lines. IEEE Trans. Electromagn. Compat. EMC-27, 177–183 (1985).
Djordjević, A. R., Sarkar, T. K. & Harrington, R. F. Analysis of lossy transmission lines with arbitrary nonlinear terminal networks. IEEE Trans. Microwave Theory Tech. 34, 660–666 (1986).
Djordjević, A. R., Sarkar, T. K. & Harrington, R. F. Time-domain response of multiconductor transmission lines. Proc. IEEE 75, 743–746 (1987).
Paul, C. R. Analysis of multiconductor transmission lines. (Wiley-IEEE Press, 1993).
Paul, C. R. Decoupling the multiconductor transmission line equations. IEEE Trans. Microw. Theory Tech. 44, 1429–1440 (1996).
Miano, G. & Maffucci, A. Transmission lines and lumped circuits. (Academic Press, 2001).
Chang, F. Y. Transient Analysis of Lossless Coupled Transmission Lines in a Nonhomogeneous Dielectric Medium. IEEE Trans. Microw. Theory Tech. 18, 616–626 (1970).
Toki, H. & Sato, K. Multiconductor Transmission-Line Theory with Electromagnetic Radiation. J. Phys. Soc. Japan 81, 014201 (2012).
Maxwell, J. C. A Treatise on Electricity and Magnetism. (Dover, 1876).
Kitora, S., Abe, M. & Toki, H. Electromagnetic noise in electric circuits: Ringing and resonance phenomena in the common mode. AIP Adv. 4, 117119 (2014).
Bockelman, D. E., Eisenstadt, W. R. & Member, S. Combined Differential and Common-Mode Scattering Parameters: Theory and Simulation. IEEE Trans. Microw. Theory Tech. 43, 1530–1539 (1995).
Paul, C. R. & Bush, D. R. Radiated emissions from common-mode currents. In Proc. IEEE Int. Symp. Electromagn. Compat. 1–7 (1987).
Paul, C. R. A comparison of the contributions of common-mode and differential-mode currents in radiated emissions. IEEE Trans. Electromagn. Compat. 31, 189–193 (1989).
Watanabe, T., Wada, O., Miyashita, T. & Koga, R. Common-Mode-Current Generation Caused by Difference of Unbalance of Transmission Lines on a Printed Circuit Board with Narrow Ground Pattern. IEICE Trans. Commun. E83-B, 593–598 (2000).
Wang, S. & Lee, F. C. Investigation of the transformation between differential-mode and common-mode noises in an EMI filter due to unbalance. IEEE Trans. Electromagn. Compat. 52, 578–587 (2010).
Su, C. & Hubing, T. H. Imbalance difference model for common-mode radiation from printed circuit boards. IEEE Trans. Electromagn. Compat. 53, 150–156 (2011).
Sato, K. & Toki, H. Synchrotron magnet power supply network with normal and common modes including noise filtering. Nucl. Instruments Methods Phys. Res. Sect. A Accel. Spectrometers, Detect. Assoc. Equip. 565, 351–357 (2006).
Kobayashi, H. Beam Commissioning of the J-Parc Main Ring*. In Beam Commissioning of the J-PARC Main Ring. Proceedings of PAC09, 1823–1827 (2008).
Toki, H. & Sato, K. Three conductor transmission line theory and origin of electromagnetic radiation and noise. J. Phys. Soc. Japan 78, 1–8 (2009).
Abe, M. & Toki, H. Theoretical Study of Lumped Parameter Circuits and Multiconductor Transmission Lines for Time-Domain Analysis of Electromagnetic Noise. Sci. Rep. 1–17 (2018).
Jinno, S., Toki, H. & Abe, M. Configuration of three distributed lines for reducing noise due to the coupling of the common and normal modes. Nucl. Instruments Methods Phys. Res. Sect. A Accel. Spectrometers, Detect. Assoc. Equip. 844, 19–23 (2017).
Pipes, L. A. X. Matrix theory of multiconductor transmission lines. London, Edinburgh, Dublin Philos. Mag. J. Sci. 24, 97–113 (1937).
Pipes, L. A. Steady-state analysis of multiconductor transmission lines. J. Appl. Phys. 12, 782–799 (1941).
Jinno, S., Kitora, S., Toki, H. & Abe, M. Study of Simultaneous Switching Noise in Two-Dimensional Transport Theory Including Radiation Effect. In Progress in Electromagnetics Research Symposium 2018-Augus, 642–647 (2018).
The present study was supported in part by Grants-in-Aid for Scientific Research (18K19023, 16H03872, 19J14259, 19H05789) from the Ministry of Education, Culture, Sports, Science, and Technology of Japan (MEXT) and by the Japan Science Society through a Sasakawa Scientific Research Grant (29-201). The present study was conducted in collaboration with the Nichicon Corporation. S. J. and S. K. acknowledge the support of the Faculty of Engineering Science of Osaka University.
Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka, 560-8531, Japan
Souma Jinno, Shuji Kitora, Hiroshi Toki & Masayuki Abe
Souma Jinno
Shuji Kitora
Hiroshi Toki
Masayuki Abe
S.J., S.K. and H.T. together derived the reflection coefficients of the normal and common mode voltages. S.J. calculated the reflection coefficients for many circuit configurations for the discussion on the reduction of electromagnetic noise. S.J. and M.A. wrote the main manuscript text together, and H.T. reviewed it.
Correspondence to Souma Jinno.
Jinno, S., Kitora, S., Toki, H. et al. Mechanism of Common-mode Noise Generation in Multi-conductor Transmission Lines. Sci Rep 9, 15036 (2019). https://doi.org/10.1038/s41598-019-51259-w
The epidemiology of porcine Taenia solium cysticercosis in communities of the Central Highlands in Vietnam
Dinh Ng-Nguyen1,2,
John Noh3,
Kathleen Breen4,
Mark Anthony Stevenson1,
Sukwan Handali3 &
Rebecca Justine Traub1
Parasites & Vectors volume 11, Article number: 360 (2018) Cite this article
Taenia solium cysticercosis, recognized as a neglected tropical disease by the WHO, is distributed mostly in developing countries of Latin America, sub-Saharan Africa and Asia. Pigs and humans act as intermediate hosts, acquiring T. solium cysticerci (larval stage) in their tissue, through the ingestion of T. solium eggs shed in the faeces of humans infected with adult tapeworms. The disease has a negative impact on rural economies due to losses in productivity arising from human disease, pork carcass condemnations and loss of market access. The aim of this study was to estimate the prevalence of T. solium cysticercosis in pigs in Dak Lak Province in the Central Highlands of Vietnam and to identify household level characteristics associated with T. solium porcine cysticercosis.
This was a cross-sectional study of household pigs in three districts of Dak Lak Province. A total of 408 households in six villages across the three districts were visited between June and October 2015. A questionnaire was administered to the head of each household, and within each household, serum samples were collected from three pigs. Serum samples were analyzed using the recombinant T24H antigen in an enzyme-linked immunoelectrotransfer blot (rT24H-EITB) assay and the lentil lectin purified glycoprotein (LLGP) EITB assay. A Bayesian mixed-effects logistic regression model was developed to identify management factors associated with the probability of a household having at least one cysticercosis-positive pig.
The prevalence of porcine T. solium cysticercosis in this study was low at 0.94 [95% confidence interval (CI) 0.51–1.68] cases per 100 pigs at risk, in agreement with other studies conducted throughout Vietnam. Scavenging of food and coprophagy were associated with T. solium cysticercosis [odds ratios 1.98 (95% CrI: 0.55–4.74) and 2.57 (95% CrI: 1.22–4.66), respectively].
This study shows that the seroprevalence of porcine cysticercosis in Dak Lak Province is as low as that reported in other studies conducted throughout Vietnam. Scavenging of food and coprophagy are modifiable factors, providing the opportunity to decrease the prevalence of porcine cysticercosis further in the province.
Taenia solium cysticercosis is recognized as a neglected tropical disease by the WHO [1]. It is distributed mostly in developing countries of Latin America, sub-Saharan Africa and Asia [2]. Pigs and humans act as intermediate hosts, acquiring T. solium cysticerci (larval stage) in their tissue, through the ingestion of T. solium eggs shed in the faeces of humans infected with adult tapeworms. Consumption of raw and/or undercooked pork with active T. solium cysticerci may result in T. solium taeniasis in humans. The presence of porcine cysticercosis impacts negatively on an economy due to costs arising from carcass condemnation and negative impacts on market access and trade of pork.
Although Vietnam is located in a region endemic for T. solium [3], data on porcine cysticercosis are limited. In addition, it is not clear whether porcine cysticercosis is endemic in the country. Between 1994 and 2005, the highest prevalence of porcine cysticercosis reported in Vietnam among four carcass-based studies was less than 1% [4]. These estimates were, for the most part, based on studies conducted in commercial slaughterhouses, and therefore do not reflect the prevalence of cysticercosis in pig populations in rural communities whose pigs are not processed in commercial slaughterhouses [5]. Of the relatively small number of studies that have quantified the prevalence of porcine cysticercosis in the country, most were conducted prior to 2003 with a focus on the north of Vietnam [4]. To the best of our knowledge, the only study of porcine cysticercosis in the south of Vietnam was carried out in 1994 [6].
Dak Lak Province is located in the Central Highlands in the south of Vietnam. A recent study in communities of the province demonstrated an apparent prevalence of T. solium taeniasis of 1.2% [7]. In most communities in the province, pigs are free-roaming and outdoor defaecation is common [8]. These characteristics are conducive to the infection and transmission of T. solium cysticercosis in pigs. The aim of this study was to estimate the prevalence of cysticercosis in pigs in Dak Lak and to identify household-level characteristics associated with T. solium porcine cysticercosis.
Study site and sampling
Fieldwork was conducted between June and October 2015 in Krong Nang, M'Drak and Buon Don districts in Dak Lak Province, Vietnam. These districts were chosen as the study sites based on their diverse geographical characteristics. The study sites have been described in detail elsewhere [7]. In brief, M'Drak is located in the east of Dak Lak Province at an average altitude of 400 m to 500 m and has a tropical monsoon climate typical of Vietnam's Central Coast. Krong Nang is situated in the north of the province at an altitude of 800 m. Buon Don is situated to the west of the province at an average elevation of 330 m and has a hot and dry climate. The standard of living in this area of Vietnam is generally poor. Open defaecation using outdoor pit latrines is common practice and livestock access to these latrine areas is usually unrestricted. The practice of non-confinement of pigs and cattle is common, with slaughter activities often carried out in backyards [4].
A sampling frame listing the names of all villages in the three study districts was obtained from the Sub-Department of Animal Health office. Villages eligible for sampling comprised those with more than 1000 pigs, as recorded by the Sub-Department of Animal Health within the Ministry of Agriculture. All eligible villages within each of the three study districts were assigned a number, and two numbers were chosen at random to select villages from each district for inclusion in the study. Within each selected village, a list of householder names was obtained from the respective village head person, and each householder name was then coded with a number (the number of households per village ranged from 200 to 300). A sheet of paper was drawn up into squares and each square numbered from 1 to 300. The squares were cut into pieces and placed face-down on a table. The village head was then asked to select between 100 and 140 squares. The number on each selected square identified a household to be sampled. All selected households were visited several days before sampling to obtain consent from participants. Within each district, blood samples were collected from pigs in each of the study households. At the time of each household visit, pig owners were asked to complete a questionnaire on the number and type of pigs kept and details of demography, husbandry practices, and diet (Table 1). All questionnaires were administered in local Vietnamese phraseology and their validity was pre-tested on 30 pig owners in another community in Dak Lak Province before application in the field survey. In addition, interviewers were trained before administering the questionnaires. Pigs were selected at random for blood sampling by a member of the research group. Pigs that were pregnant or ill, and pigs less than 2 months of age, were excluded from sampling. At the time of each household visit, 10 ml of blood was obtained from the cranial vena cava of each pig into plain blood collection tubes. The blood samples were allowed to clot at ambient temperature prior to centrifugation at 3200× g for 5 min to separate serum. Serum was dispensed into 1.5 ml aliquots and stored at -20 °C until use.
Table 1 Details of information about general and pig information
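The manual lottery described above (numbered squares drawn face-down) is equivalent to simple random sampling without replacement. A two-line R sketch, with the village size and the number of draws assumed for illustration:

set.seed(1)                                   # assumed seed, for reproducibility only
selected <- sort(sample(1:300, size = 120, replace = FALSE))  # household numbers to visit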
Sample size estimation
The aim of this study was to estimate the prevalence of porcine T. solium cysticercosis in Dak Lak Province. Based on a previous slaughterhouse-based survey by Van De et al. [9], the prevalence of porcine cysticercosis was assumed to be 10%. Assuming 95% certainty that this estimate was within 5% of the actual population prevalence (i.e. cysticercosis prevalence ranged from 5% to 15%) and ignoring the possibility that cysticercosis-positive pigs were clustered within households, we estimated that a total of 384 pigs were required to be sampled. We then assumed the average number of pigs eligible for random sampling per household was at least three and an intra-class correlation coefficient for T. solium cysticercosis of 0.07 [10], returning a design effect of 1.14. Our revised sample size, accounting for the likelihood that porcine cysticercosis clusters within households, was 384 × 1.14 = 438 for each of the three study districts.
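A minimal R sketch reproducing this arithmetic follows. Note that the quoted figure of 384 follows from the conservative variance assumption p = 0.5 rather than the stated 10% prevalence (which would give roughly 139 pigs); treating p = 0.5 as the value actually used is an inference on our part, made because it reproduces the quoted number.

z    <- qnorm(0.975)                     # 1.96 for 95% certainty
p    <- 0.5                              # conservative variance assumption (inferred)
d    <- 0.05                             # desired absolute precision
n0   <- round(z^2 * p * (1 - p) / d^2)   # 384
deff <- 1 + (3 - 1) * 0.07               # design effect: m = 3 pigs/household, ICC = 0.07
n    <- round(n0 * deff)                 # 438 per district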
Pig serum samples were analyzed using the recombinant T24H antigen in enzyme-linked immunoelectrotransfer blot (rT24H-EITB) assay as described by Noh et al. [11] and lentil lectin purified glycoproteins in EITB (LLGP-EITB) assay as previously described by Tsang et al. [12] and Gonzalez et al. [13]. Both the LLGP-EITB and rT24H-EITB assay are immunoblot methods. The LLGP-EITB detects antibodies to one or more of seven lentil lectin purified glycoproteins (LLGPs), namely GP50, GP42, GP24, GP21, GP18, GP14 and GP13 which are present in the soluble fraction of an extract of T. solium cysticerci [11]. Reaction to any of these 7 glycoprotein antigens is considered positive. The rT24H-EITB assay detects antibodies against rT24H antigen derived from 24- and 42-kDa glycoproteins of the LLGPs [14].
To ascertain the analytical specificity of the rT24H antigen, we subjected 29 cysticercosis-negative USA pig sera, 12 necropsy-confirmed T. solium-positive Peruvian pig sera and 4 necropsy-confirmed T. hydatigena-positive Vietnamese pig sera to the rT24H-EITB. All USA pig sera and Vietnamese T. hydatigena pig sera were negative on the rT24H-EITB, and all Peruvian T. solium-positive pig sera were positive on the rT24H-EITB. These preliminary results show that, under experimental conditions, the rT24H-EITB does not cross-react with pig sera positive for T. hydatigena.
Individual serum samples were screened in pools of four using the rT24H antigen in EITB assay format to detect the presence of antibodies against T. solium cysticerci. The rT24H antigen was utilized in the EITB assay as it offers the best overall diagnostic performance compared with other recombinant or synthetic antigens, based on our preliminary data and previous human-based studies [15, 16]. Individual serum samples from each positive pool were then re-screened using the rT24H and LLGP (native antigen of T. solium cysticerci) antigens. The LLGP-EITB has been used as the reference standard assay for serological diagnosis of T. solium cysticercosis in humans and pigs, with a specificity of 100% and a sensitivity of 98–100% [12, 13]. Jayashi et al. [17], when validating the LLGP-EITB for naturally acquired porcine cysticercosis, pointed out that the assay achieves an optimal sensitivity of 78% (95% CI: 52–94%) and specificity of 76% (95% CI: 66–85%) when it reacts to ≥ 3 of the 7 LLGP antigens. The diagnostic performance of the LLGP-EITB for porcine T. solium cysticercosis was evaluated in Peruvian pigs, and the cross-reaction of the assay with T. hydatigena is not known [13]. An individual serum sample was considered positive for T. solium antibodies if it was positive to both the rT24H and native LLGP antigens.
Risk factors for porcine T. solium cysticercosis in the communities of Dak Lak Province were identified using logistic regression. In this study, the outcome of interest was a dichotomous variable: households in which at least one pig was T. solium cysticercosis-positive were assigned a value of 1, and 0 otherwise. The association between each of a set of household-level candidate explanatory variables from the questionnaires and the outcome of interest was tested using unconditional odds ratios and the chi-square test. All explanatory variables associated with the outcome of being T. solium cysticercosis positive at an alpha level of < 0.20 using the chi-square test were selected for inclusion in the multivariable model.
A frequentist fixed-effects logistic regression model was developed in which the probability of a household having at least one cysticercosis-positive pig was parameterized as a function of the explanatory variables with significance of the chi-square test at P < 0.20, as described above. The significance of each explanatory variable in the model was tested using the chi-square test. Explanatory variables that were not statistically significant were removed from the model one at a time, beginning with the least significant, until the estimated regression coefficients for all the explanatory variables retained in the model were significant at an alpha level of < 0.05.
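A hedged R sketch of this two-stage selection is given below. Here hh is a hypothetical household-level data frame with a binary outcome pos; the candidate variable names are illustrative only and are not necessarily the variables recorded in the questionnaire.

candidates <- c("scavenge", "coprophagy", "pigsty", "latrine", "dog")  # illustrative names
pvals <- sapply(candidates, function(v)
  chisq.test(table(hh[[v]], hh$pos))$p.value)      # univariable chi-square screen
keep  <- candidates[pvals < 0.20]                  # retain at alpha < 0.20
m     <- glm(reformulate(keep, response = "pos"),  # fixed-effects logistic model
             family = binomial, data = hh)
drop1(m, test = "Chisq")   # guides stepwise removal until all terms have P < 0.05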
To account for the hierarchical structure of the data (households within villages), a village-level random effect term (Vi) was included in the model, as shown in Equation 1. District was not a significant predictor of household-level T. solium cysticercosis status at the alpha level of 0.05 and was therefore not considered further in the mixed-effects model.
$$\log \left[\frac{p_i}{1-p_i}\right] = \beta_0 + \beta_1 x_{1i} + \cdots + \beta_m x_{mi} + V_i + \varepsilon_i$$
Due to the low prevalence of T. solium cysticercosis (12 of 1281 pigs were positive), regression coefficients for the mixed-effects logistic regression model were estimated using a Bayesian approach implemented in JAGS [18, 19]. Flat (uninformed) prior distributions were assumed for the intercept β0 and each of the regression coefficients for the fixed effects β1 ⋯ βm. The village-level random effect term (Vi) was parameterized as having a normal distribution with mean 0 and precision (inverse variance) τ.
For each of the Bayesian regression analyses we ran the Markov chain Monte Carlo sampler for 40,000 iterations and discarded the first 1000 'burn-in' samples. Convergence was visually assessed by plotting cumulative path plots for each of the monitored parameters [20, 21] and quantified using the Raftery & Lewis convergence diagnostic [22, 23]. Parallel chains were run using diverse initial values to ensure that convergence was achieved to the same distribution [24]. Posterior sample sizes were determined by running sufficient iterations to ensure that the Monte Carlo standard error of the mean was at least one order of magnitude smaller than the posterior standard deviation for each parameter of interest.
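A minimal sketch of this model in JAGS, run through R2jags, is shown below. The data frame hh and its column names are hypothetical; the vague-normal priors stand in for the flat priors described above, and the gamma prior on τ is an assumption, since the text does not state one.

model_string <- "
model {
  for (i in 1:Nhh) {
    pos[i] ~ dbern(p[i])
    logit(p[i]) <- b0 + b1 * scavenge[i] + b2 * coprophagy[i] + V[village[i]]
  }
  for (j in 1:Nvill) { V[j] ~ dnorm(0, tau) }    # village-level random effect
  b0 ~ dnorm(0, 1.0E-6); b1 ~ dnorm(0, 1.0E-6); b2 ~ dnorm(0, 1.0E-6)
  tau ~ dgamma(0.001, 0.001)                     # assumed prior on the precision
  OR.scavenge   <- exp(b1)                       # adjusted odds ratios
  OR.coprophagy <- exp(b2)
}"
writeLines(model_string, mf <- tempfile(fileext = ".txt"))
library(R2jags)
fit <- jags(data = list(pos = hh$pos, scavenge = hh$scavenge,
                        coprophagy = hh$coprophagy,
                        village = hh$village,          # integer codes 1..Nvill assumed
                        Nhh = nrow(hh), Nvill = max(hh$village)),
            parameters.to.save = c("b0", "b1", "b2", "tau",
                                   "OR.scavenge", "OR.coprophagy"),
            model.file = mf, n.chains = 3,
            n.iter = 40000, n.burnin = 1000)
print(fit)   # posterior means; the 2.5% and 97.5% quantiles give the 95% CrIs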
The results of the final mixed-effects logistic regression model are reported in terms of adjusted odds ratios for each explanatory variable. Assuming a causal relationship between a given explanatory variable and porcine cysticercosis, an adjusted odds ratio [and its 95% credible interval (CrI)] of > 1 indicates that, after adjusting for other variables in the model, the explanatory variable increased the risk of a pig being cysticercosis positive. An adjusted odds ratio (and its 95% CrI) of < 1 indicates that exposure to the explanatory variable was protective, and an OR of 1 indicates that the variable was not associated with porcine cysticercosis risk.
Statistical analyses were performed using the packages R2jags [25] and coda [26] implemented in R version 3.3.0 [27].
General description of study population
A total of 1324 pig serum samples were collected in Krong Nang, M'Drak and Buon Don districts in Dak Lak Province. Of these, 1281 samples were eligible for further examination; 43 samples were excluded from analysis due to haemolysis or missing questionnaire information. All 1281 serum samples were screened in pools of 4 using the rT24H-EITB assay, and 10 pooled samples were identified as positive. Twelve individual samples from the 10 positive pools were positive for T. solium antibodies using the rT24H-EITB assay, and all 12 were also positive using the LLGP-EITB. The prevalence of T. solium cysticercosis in pigs in the study districts was 0.94 (95% CI: 0.51–1.68) per 100 pigs at risk. The 12 positive pigs belonged to 11 households in the three study districts. Of 203 households visited in M'Drak district, 9 (4.43%, 95% CI: 2.17–8.05%) possessed T. solium seropositive pigs. Among 70 visited households in Krong Nang district, two (2.8%, 95% CI: 0.49–10.8%) possessed T. solium cysticercosis-positive pigs, and no seropositive pigs were identified in the 135 households visited in Buon Don district.
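As a quick check on the headline estimate, an exact binomial interval can be computed in R; binom.test gives the Clopper-Pearson interval, which may differ slightly from the interval quoted above depending on the method the authors used.

bt <- binom.test(12, 1281)                             # 12 seropositive pigs of 1281 tested
100 * c(estimate = unname(bt$estimate), bt$conf.int)   # per 100 pigs at risk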
The hierarchical structure of the data in this study is shown in Table 2. The 1281 pigs were from 408 households and, within each household, an average of three pigs were sampled (minimum 1; maximum 24).
Table 2 Structure of the data from 1281 study pigs from six villages in M'Drak, Buon Don and Krong Nang districts
Of the 408 households, 266 (65%, 95% CI: 60–70%) used a pit latrine; however, most of these latrines were of temporary construction, which animals were able to access. A total of 35% (95% CI: 30–40%) of householders responded that their family members practiced outdoor defaecation; children typically defaecated around the main household building while adults defaecated some distance from the main household building, within the confines of the household property. Seventy-six percent (95% CI: 72–80%) of households had a pigsty and 58% (95% CI: 53–63%) confined their pigs at all times. Free-roaming pigs habitually ranged around the village seeking food and returned in the afternoon to their litters located under stilt housing. Approximately 7% (95% CI: 5–10%) of households used water sourced from lakes, streams or ponds for their pigs. The remaining households used either rainwater or water from wells or pipes. Dogs were kept in 63% (95% CI: 59–68%) of the households that owned pigs (Table 3).
Table 3 General description of household data
Among the 1281 pigs that were sampled, 41% (95% CI: 38–44%) were of local breed (Soc). A total of 27% (95% CI: 24–29%) of the sampled pigs were reported to regularly consume human faeces. A little over half of the pigs were routinely offered raw, unwashed vegetables (57%; 95% CI: 54–59%). Commercial and/or homemade bran was offered to 89% (95% CI: 86–91%) of pigs. The small proportion of pigs that were not supplied bran (11%, 95% CI: 9–12%) were scavenging for the most part (Table 4).
Table 4 General description of pig data
Risk factors for porcine T. solium cysticercosis
Of the data recorded using the questionnaire, we identified two factors associated with a pig's likelihood of being T. solium cysticercosis positive: (i) frequent coprophagy of human faeces; and (ii) scavenging for food. Estimated regression coefficients for the mixed-effects logistic regression model are provided in Table 5. After adjusting for the other explanatory variables in the model, the odds of a household where pigs routinely consumed human faeces being T. solium cysticercosis positive was 2.57 (95% CrI: 1.22–4.66) times that of a household where pigs did not consume human faeces. The odds ratio for a household where pigs routinely scavenged for food was 1.98 (95% CrI: 0.55–4.74).
Table 5 Risk factors associated with T. solium cysticercosis positive in pigs
In low-income rural communities of Vietnam, a substantial proportion of the human population practices open defaecation, and consumption of uncooked pork and beef is relatively common. Allowing pigs to roam freely is a common husbandry practice in the region [4, 5, 9, 28, 29]. Despite all relevant risk factors for cysticercosis being present in each of the communities in this study, the prevalence of T. solium cysticercosis was low at 0.94 (95% CI: 0.51–1.68) cases per 100 pigs at risk. A similarly low prevalence of cysticercosis has been observed not only in Dak Lak Province but in other regions of Vietnam [4, 6, 30]. In 1994, Huan, in a cross-sectional study of pigs submitted for slaughter from 12 provinces in the south of Vietnam, reported a prevalence of 0.90 (95% CI: 0.45–1.76) cases per 100 pigs at risk [6]. In three studies carried out in 10 provinces in the north of Vietnam between 1999 and 2003, the prevalence of cysticercosis ranged from 0 to 0.06 cases per 100 pigs at risk [31,32,33]. In the Province of Bac Ninh in the north of Vietnam, a known focus of T. solium in humans, carcass examination of 26 village pigs identified no cases of T. solium cysticercosis. Instead, 10 pigs were positive for T. hydatigena cysticerci [34]. These findings are in agreement with those of Conlan et al. [35], who reported a low prevalence of T. solium cysticercosis of 0.8% in village pigs in Laos together with a relatively high prevalence of T. hydatigena (22 cases per 100 pigs at risk) and a high prevalence of T. solium taeniasis. Conlan et al. [35] hypothesized that T. hydatigena is likely to cross-protect pigs against T. solium infection.
In Vietnam the prevalence of T. hydatigena cysticercosis in pigs has been reported to be high, ranging between 25–38% in the north, and the prevalence has been strongly correlated with the presence of T. hydatigena infection in dogs [36]. In addition, most of the households that owned pigs in the three study districts also kept dogs, and approximately one half of the households owning dogs reported that they had observed proglottids in their dog's faeces (Table 3). All of the 10 free-roaming village pigs that were backyard slaughtered had T. hydatigena cysticerci present in the mesentery, stomach, spleen, and liver (personal observation from fieldwork). It is our inference that the relatively high prevalence of T. hydatigena infection in dogs is likely to be a major source of T. hydatigena cysticercosis in pigs. If this is true, a cautious approach to cysticercosis control in pigs would be advised if T. hydatigena were to be targeted through pig confinement and canine deworming programs. Eliminating or reducing pig exposure to T. hydatigena would likely result in an increase in the observed prevalence of T. solium in non-compliant free-roaming pigs, posing an increased public health risk to the community.
Among the investigated households that owned pigs, a little under one-quarter did not have a pigsty and most of the pigs that were kept were of the local breed (Table 4). Local breeds are preferred due to their ability to thrive under harsh raising conditions and poor feeding [12]. Allowing pigs to free-roam for food was common practice (Table 3). The odds of a household where pigs routinely scavenged for food being seropositive for T. solium cysticercosis was 1.98 (95% CrI: 0.55–4.74) times greater than the odds of a household where pigs were fed commercial and/or homemade bran (Table 5). Similarly, the odds ratio for a household where pigs routinely consumed human faeces being seropositive was 2.57 (95% CrI: 1.22–4.66) times greater than the odds of a household where this practice was not habitual. It is known that risk factors for transmission and circulation of porcine cysticercosis are numerous and may vary in different settings. We found that allowing pigs to free-roam and allowing pigs to routinely consume human faeces were associated with T. solium exposure, consistent with other studies [37,38,39]. Research on the epidemiological characteristics of porcine cysticercosis in Peru [40], Mozambique [41] and Mexico [42] found that older pigs were more likely to show evidence of exposure to T. solium compared to younger pigs, an association not identified in this study. Similarly, while studies from Zambia [43], Mexico [44], and Tanzania [45] showed that the prevalence of T. solium exposure was higher in male pigs compared with females, this association was not identified in this study. In this study, both of the risk factors identified are modifiable, which means that there exists an opportunity to decrease the prevalence of porcine cysticercosis even further. We propose that a combination of intervention measures including education and public awareness campaigns, strategies to reduce coprophagy among pigs and enhanced meat inspection (particularly backyard slaughtered stock) are likely to have the greatest impact on porcine cysticercosis risk with positive secondary effects on human health [46]. For this to be successful there is a need for commitment and support from local and/or central veterinary and medical health authorities.
The prevalence of porcine T. solium cysticercosis in this study was low at 0.94 (95% CI: 0.51–1.68) cases per 100 pigs at risk, in agreement with other studies conducted throughout Vietnam. Scavenging of food and coprophagy were associated with T. solium cysticercosis risk. Both of these characteristics are modifiable providing the opportunity to decrease the prevalence of porcine cysticercosis even further.
EITB: enzyme-linked immunoelectrotransfer blot
LLGP: lentil lectin purified glycoprotein
rT24H: recombinant antigen T24H
FAO/WHO. Multicriterial-based Ranking for Risk Management of Food-borne Parasites. Microbiological Risk Assessment Series No 23. In: Rome: WHO Press; 2014.
Donadeu M, Lightowlers MW, Fahrion AS, Kessels J, Donadeu M, Lightowlers MW, et al. Taenia solium: WHO endemicity map update. Wkly Epidemiol Rec. 2016;49/50:595–9.
Aung AK, Spelman DW. Taenia solium taeniasis and cysticercosis in southeast Asia. Am J Trop Med Hyg. 2016;94:947–54.
Ng-Nguyen D, Stevenson MA, Traub RJ. A systematic review of taeniasis, cysticercosis and trichinellosis in Vietnam. Parasit Vectors. 2017;10:150.
Huong NT. Taeniasis and cysticercosis in a selected group of inhabitants from a mountainous province in North Vietnam. Antwerpen, Belgium: Prince Leopold Institute of Tropical Medicine; 2006.
Huan LV. Parasitic helminths in pigs in several southern provinces and preventative measures. National Institute of Veterinary Research; 1994. http://luanan.nlv.gov.vn/luanan?a=d&d=TTkFvmnEdmia1994.1.2&e=-%2D-%2D-%2D-vi-20%2D-1%2D-img-txIN-%2D-%2D-%2D-%23. Accessed 10 Nov 2015.
Ng-Nguyen D, Stevenson MA, Dorny P, Gabriël S, Van VT, Nguyen VT, et al. Comparison of a new multiplex real-time PCR with the Kato-Katz thick smear and copro-antigen ELISA for the detection and differentiation of Taenia spp. in human stools. PLoS Negl Trop Dis. 2017;11:e0005743.
Nguyen TH. Evaluation of market opportunities for producing local pigs in Dak Lak. Tay Nguyen J Sci. 2009;5:21–6.
Van De N, Le TH, Lien PTH, Eom KS. Current status of taeniasis and cysticercosis in Vietnam. Korean J Parasitol. 2014;52:125–9.
Asaava LL, Kitala PM, Gathura PB, Nanyingi MO, Muchemi G, Schelling E. A survey of bovine cysticercosis/human taeniosis in Northern Turkana District, Kenya. Prev Vet Med. 2009;89:197–204.
Noh J, Rodriguez S, Lee YM, Handali S, Gonzalez AE, Gilman RH, et al. Recombinant protein- and synthetic peptide-based immunoblot test for diagnosis of neurocysticercosis. J Clin Microbiol. 2014;52:1429–34.
Tsang VCW, Brand JA, Boyer AE. An enzyme-linked immunoelectrotransfer blot assay and glycoprotein antigens for diagnosing human cysticercosis (Taenia solium). J Infect Dis. 1989;159:50–9.
Gonzalez AE, Cama V, Gilman RH, Tsang VCW, Pilcher JB, Chavera A, et al. Prevalence and comparison of serologic assays, necropsy, and tongue examination for the diagnosis of porcine cysticercosis in Peru. Am J Trop Med Hyg. 1990;43:194–9.
Hancock K, Pattabhi S, Whitfield FW, Yushak ML, Lane WS, Garcia HH, et al. Characterization and cloning of T24, a Taenia solium antigen diagnostic for cysticercosis. Mol Biochem Parasitol. 2006;147:109–17.
Handali S, Klarman M, Gaspard AN, Noh J, Lee YM, Rodriguez S, et al. Multiantigen print immunoassay for comparison of diagnostic antigens for Taenia solium cysticercosis and taeniasis. Clin Vaccine Immunol. 2010;17:68–72.
Rodriguez S, Wilkins P, Dorny P. Immunological and molecular diagnosis of cysticercosis. Pathog Glob Health. 2012;106:286–98.
Jayashi CM, Gonzalez AE, Castillo Neyra R, Rodríguez S, García HH, Lightowlers MW. Validity of the enzyme-linked immunoelectrotransfer blot (EITB) for naturally acquired porcine cysticercosis. Vet Parasitol. 2014;199:42–9.
Kruschke JK. Review of doing Bayesian data analysis: a tutorial with R, JAGS, and Stan (second edition). Clin Neuropsychol. 2017;31:1268–70.
Plummer M. JAGS Version 3.4.0 user manual. 2013;0–41. http://www.stats.ox.ac.uk/~nicholls/MScMCMC15/jags_user_manual.pdf
Yu B, Mykland P. Looking at Markov samplers through cusum path plots: a simple diagnostic idea. Stat Comput. 1998;8:275–86.
Robert CP, Casella G. Monte Carlo Statistical Methods. New York, USA: Springer New York; 2004.
Raftery AE, Lewis SM. One long run with diagnostics: implementation strategies for Markov Chain Monte Carlo. Stat Sci. 1992;7:493–7.
Raftery AE, Lewis SM. How many iterations in the Gibbs sampler? In: Bernardo JM, Berger JO, Dawid AP, Smith AFM, editors. Bayesian Statistics 4. Oxford: Oxford University Press; 1992. p. 763–74.
Gelman A. Inference and monitoring convergence. In: Gilks W, Richardson S, Spiegelhalter D, editors. Markov Chain Monte Carlo Pract. London: Chapman & Hall; 1996. p. 131–43.
Su Y-S, Yajima M. Using R to Run "JAGS." 2015. https://cran.r-project.org/web/packages/R2jags/R2jags.pdf
Plummer M, Best N, Cowles K, Vines K, Sarkar D, Bates D, et al. Coda: Output Analysis and Diagnostics for MCMC. R News. 2016;6:7–11.
R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2017. http://www.r-project.org.
Anh Tuan P, Dung TTK, Nhi VA. Sero-epidemiological investigation of cysticercosis in the southern provinces. J Malar Parasite Dis Control. 2001;4:81–7.
Trung DD, Praet N, Cam TDT, Lam BVT, Manh HN, Gabriël S, et al. Assessing the burden of human cysticercosis in Vietnam. Trop Med Int Health. 2013;18:352–6.
Willingham AL, Van De N, Doanh NQ, Cong D, Dung TV, Dorny P, et al. Current status of cysticercosis in Vietnam. Southeast Asian J Trop Med Public Health. 2003;34:35–50.
Van KP, Luc P. Veterinary Parasitology. Hanoi: Hanoi Agricultural Publishing House; 1996.
Doanh NQ, Holland W, Vercruysse J, Van DN. Results of survey on cysticercosis pig in some northern provinces in Vietnam. J Malar Parasite Dis Control. 2002;6:76–82.
De N Van, Chau L Van, Son DT, Chuyen LT, Hop NT, Vien HV, et al. Taenia solium survey in Hanoi. J Malar Parasite Dis Control. 2004;6:93–99.
Doanh NQ, Kim NT, De NV, Lung NL. Result of survey on taeniasis and cysticercosis humans and pigs in Bac Ninh and Bac Kan provinces. Vet Sci Tech. 2002;9:46–9.
Conlan JV, Vongxay K, Khamlome B, Dorny P, Sripa B, Elliot A, et al. A cross-sectional study of Taenia solium in a multiple taeniid-endemic region reveals competition may be protective. Am J Trop Med Hyg. 2012;87:281–91.
Lan NTK, Quyen NT, Hoat PC. The correlation between the prevalence of the tapeworm Taenia hydatigena in dogs and their larvae Cysticercus tenuicollis in cattle and pigs - the effect of tapeworm treatment in dogs. Vet Sci Tech. 2011;18:60–5.
Ngowi HA, Kassuku AA, Maeda GEM, Boa ME, Carabin H, Willingham AL. Risk factors for the prevalence of porcine cysticercosis in Mbulu District, Tanzania. Vet Parasitol. 2004;120:275–83.
Sikasunge CS, Phiri IK, Phiri AM, Dorny P, Siziya S, Willingham AL. Risk factors associated with porcine cysticercosis in selected districts of Eastern and Southern provinces of Zambia. Vet Parasitol. 2007;143:59–66.
Pouedet MSR, Zoli AP, Nguekam VL, Assana E, Speybroeck N, et al. Epidemiological survey of swine cysticercosis in two rural communities of West-Cameroon. Vet Parasitol. 2002;106:45–54.
Lescano AG, García HH, Gilman RH, Guezala MC, Tsang VCW, Gavidia CM, et al. Swine cysticercosis hotspots surrounding Taenia solium tapeworm carriers. Am J Trop Med Hyg. 2007;76:376–83.
Pondja A, Neves L, Mlangwa J, Afonso S, Fafetine J, Willingham AL 3rd, et al. Prevalence and risk factors of porcine cysticercosis in Angonia District, Mozambique. PLoS Negl Trop Dis. 2010;4:e594.
Sarti Gutierrez E, Schantz PM, Aguilera J, Lopez A. Epidemiologic observations on porcine cysticercosis in a rural community of Michoacan State, Mexico. Vet Parasitol. 1992;41:195–201.
Sikasunge CS. The prevalence and transmission risk factors of porcine cysticercosis in eastern and southern provinces of Zambia. MSc Thesis: University of Zambia, Lusaka, Zambia; 2005.
García HH, Gilman RH, Gonzalez AE, Verastegui M, Rodriguez S, Gavidia C, et al. Hyperendemic human and porcine Taenia solium infection in Perú. Am J Trop Med Hyg. 2003;68:268–75.
Shonyela SM, Mkupasi EM, Sikalizyo SC, Kabemba EM, Ngowi HA, Phiri I. An epidemiological survey of porcine cysticercosis in Nyasa District, Ruvuma Region, Tanzania. Parasite Epidemiol Control. 2017;2:35–41.
Gabriël S, Dorny P, Mwape KE, Trevisan C, Braae UC, Magnussen P, et al. Control of Taenia solium taeniasis/cysticercosis: The best way forward for sub-Saharan Africa? Acta Trop. 2017;165:252–60.
We are grateful to the Institute of Biotechnology and Environment Tay Nguyen University for providing resources and facilities for the fieldwork. We thank to local veterinarian staff at M'Drak, Krong Nang and Buon Don district for assisting in sample collection. The authors are most thankful to Ms. Nguyen Thi Ngoc Hien, Ms. Long Khanh Linh, and Ms. Nguyen Thi Lan Huong, who assisted with laboratory work.
This research was self-funded by RJT. This work was done with partial support for travel from the Faculty of Veterinary and Agricultural Sciences, University of Melbourne, Australia. DNN received PhD scholarship from Australia Awards Scholarships, Department of Foreign Affairs and Trade, Australia Government. The materials for serological diagnosis were provided by the Division of Parasitic Diseases and Malaria, Centers for Disease Control and Prevention, Atlanta, Georgia, United States of America.
All relevant data are included within this published article.
Faculty of Veterinary and Agricultural Sciences, University of Melbourne, Parkville, Victoria, 3052, Australia
Dinh Ng-Nguyen, Mark Anthony Stevenson & Rebecca Justine Traub
Faculty of Animal Sciences and Veterinary Medicine, Tay Nguyen University, Buon Ma Thuot, Dak Lak, Vietnam
Dinh Ng-Nguyen
Division of Parasitic Diseases and Malaria, Centers for Disease Control and Prevention, Atlanta, Georgia, USA
John Noh & Sukwan Handali
Department of Livestock, Montana Veterinary Diagnostic Lab, Bozeman, Montana, USA
Kathleen Breen
John Noh
Mark Anthony Stevenson
Sukwan Handali
Rebecca Justine Traub
DNN designed the study, analyzed the data and wrote the manuscript; JN provided material, designed the study and edited the paper; KB assisted with laboratory work; MAS assisted with data analyses and edited the paper; SH provided material, designed the study and edited the paper; RJT provided material, supervised the study and edited the paper. All authors read and approved the final manuscript.
Correspondence to Dinh Ng-Nguyen.
This study was reviewed and approved by the Animal Ethics and scientific committee, Tay Nguyen University (reference number 50.KCNTY), and conducted under the supervision of the local center for animal health, Dak Lak, Vietnam. Verbal consent was obtained to participate in the study. The findings and conclusions in this report are those of the author(s) and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
Ng-Nguyen, D., Noh, J., Breen, K. et al. The epidemiology of porcine Taenia solium cysticercosis in communities of the Central Highlands in Vietnam. Parasites Vectors 11, 360 (2018). https://doi.org/10.1186/s13071-018-2945-y
Porcine cysticercosis | CommonCrawl |
Sharon M. Frechette
Associate Professor
[email protected]
Dept. of Mathematics & Computer Science
1 College Street
Haberlin 309
Ph.D., Dartmouth College, 1997.
Advisor: Thomas Shemanske.
Thesis: "Decomposition of Spaces of Half-Integral Weight Cusp Forms".
A.M., Dartmouth College, 1994.
B.A., Boston University, 1988.
Senior Thesis Advisor: Paul Blanchard.
TEACHING EXPERIENCE AND INTEREST
I have taught courses at all levels of Holy Cross' undergraduate curriculum, including Calculus, Principles of Real Analysis, Algebraic Structures, Linear Algebra, Modern Algebra, Number Theory, Cryptography, and Combinatorics.
I have also developed specialty courses for the College's Montserrat Program, a first-year seminar program with an emphasis on writing and discussion, and the College Honors Program. Previous seminars have included some flavor of Mathematics of Art and Architecture. This year, I'm teaching a year-long sequence in Montserrat, on the history and mathematics of cryptology:
Ciphers and Heroes, Fall 2014:
How are secret codes constructed? What weaknesses allow many of them to be cracked by clever analysts? Welcome to cryptology, the scientific study of encoding and decoding secret messages. We will explore many cryptosystems, investigating their strengths and weaknesses, and surveying their historical developments, setbacks, and implications. This semester we focus on cryptosystems such as the shift ciphers used by Caesar, the Vigenere cipher used during the Victorian era, and most thrillingly, the ENIGMA cipher used during World War II. Along with the mathematics of these ciphers, we will discover fascinating facts about their creators and the clever analysts who crack the codes, including the Polish and British heroes who cracked the seemingly unbreakable ENIGMA.
Privacy in the Digital Age, Spring 2015:
How does Amazon.com keep your credit card information secure when you order online? What weaknesses can hackers exploit, in their quest to steal your identity online? Secure electronic communication is vital to today's society, and modern cryptosystems are at the heart of this enterprise. Most of these systems are based on the mathematics of elementary number theory, and the stunning development of public key cryptography, a revolutionary concept born in the computer revolution of the 1970s. This semester we focus on these modern cryptosystems, the visionaries who created them, and the advances in computing that have made them secure.
Number Theory: Specifically modular forms, multiple Dirichlet series, special values of L-functions, and finite-field hypergeometric functions as related to traces of Hecke operators.
"Gaussian Hypergeometric Functions and Rational Points on Calabi-Yau $n$-folds,'' (with Matthew Papanikolas, Jonathan Root, and Valentina Vega), in preparation.
"Shalika Germs for $\mathfrak{sl}_n$ and $\mathfrak{sp}_{2n}$ are Motivic," (with Julia Gordon and Lance Robson), accepted for publication, Proceedings of the Women in Numbers--Europe Conference, October 2013, CIRM. [arxiv version]
"Newly Irreducible Iterates in Families of Quadratic Polynomials," (with Katherine Chamberlin, Emma Colbert, Patrick Hefferman, Rafe Jones, and Sarah Orchard), Involve, 5 (no. 4), (2012), 481--495. [Preprint version]
" A Crystal Definition for Symplectic Multiple Dirichlet Series," (with Jennifer Beineke and Ben Brubaker), Multiple Dirichlet series, L-functions and automorphic forms, 37--63, Progr. Math., 300, Birkhauser/Springer, New York, 2012. [Preprint version]
" Weyl Group Multiple Dirichlet Series for Type $C$," (with Jennifer Beineke and Ben Brubaker), Pacific J. Math., 254 (1), (2011), 11--46. [Preprint version]
" An Interdisciplinary Course: 'On Beauty: Perspective, Proportion, and Rationalism in Western Culture,' (with Alison Fleming and Sarah Luria), refereed Conference Proceedings for Renaissance Banff: Mathematical Connections in Art, Music, and Science (July 2005), accepted for publication, 4 pages.
"Determinants Associated to Zeta Matrices of Posets," (with Cristina Ballantine and John Little), Linear Algebra Appl., 411C (2005), 364-370. [Preprint version]
"Nonvanishing Twists of $GL(2)$ Automorphic $L$-functions," (with Alina Bucur, Ben Brubaker, Gautam Chinta, and Jeffrey Hoffstein), Internat. Math. Res. Notices, 78 (2004), 4211-4239. [Preprint version]
"The Combinatorics of Traces of Hecke Operators," (with Ken Ono and Matthew Papanikolas), Proc. of the National Academy of Sciences, USA, 101 (2004), 17016-17020. [Preprint version]
"Gaussian Hypergeometric Functions and Traces of Hecke Operators," (with Ken Ono and Matthew Papanikolas), Internat. Math. Res. Notices, 60 (2004), 3233-3262. [Preprint version]
"A Classical Characterization of Newforms with Equivalent Eigenforms in S_{k + 1/2}(4N,\chi)," J. London Math. Soc., (3) 68 (2003), 563--578. [Preprint version]
"Nonvanishing of Special Values of $L$-functions for Quadratic Twists of Newforms," preprint.
"Hecke Structure of Spaces of Half-Integral Weight Cusp Forms," Nagoya Math. J., 159 (2000), 53--85. [Preprint version]
I was one of the co-organizers for the 18th Annual Workshop on Automorphic Forms and Related Topics, which was held March 21-24, 2004 at the University of California, Santa Barbara.
Created by Sharon M. Frechette on Jan. 24, 2004
Last modified by Sharon M. Frechette on October 14, 2013 | CommonCrawl |
August 7-11 Columbus, Ohio
Center for Cosmology and AstroParticle Physics
GC Excess
Multi-Messenger
Radio UHECR
DM Complementarity
ν/CR Systematics
Coffee and Pastries in the Physics Research Building Lobby
Meet in the lobby of the Physics Research Building at the Ohio State Main Campus. Coffee and pastries will be provided for all participants.
Discussion in Smith Laboratory 1009
The morning discussion section will take place in Smith Laboratory 1009, which is next door to the Physics Research Building.
Lunch break on your own. There are a number of nice restaurants near the Ohio State campus.
The afternoon discussion section will take place in Smith Laboratory 1009, which is next door to the Physics Research Building.
Discussion in Physics Research Building 4138
The morning discussion section will take place in Physics Research Building 4138.
The afternoon discussion section will take place in Physics Research Building 4138.
Discussion in the Smith Seminar Room
The morning discussion section will take place in the Smith Seminar Room in the Physics Research Building.
The afternoon discussion section will take place in the Smith Seminar Room in the Physics Research Building.
Discussion in Physics Research Building M2015
The morning discussion section will take place in Physics Research Building M2015.
The afternoon discussion section will take place in Physics Research Building M2015.
Discussion in the Price Place
The morning discussion section will take place in the Price Place in the Physics Research Building.
The afternoon discussion section will take place in the Price Place in the Physics Research Building.
Gamma-Rays/Galactic
Neutrinos/Dark Matter
Cosmic Rays
Davidson Theater
John Beacom
Challenges of particle physics theory
Nima Arkani-Hamed
Galactic cosmic ray sources: a multimessenger view
Julia Tjus
In this talk, the current state-of-the-art on our knowledge and ignorance of galactic cosmic ray sources will be presented. In particular, cosmic ray observables from MeV to (super-)PeV energies will be discussed. This concerns a (potentially) direct view of the sources via ionization signatures, neutrinos and gamma-rays, as well as those pieces of information provided by cosmic rays themselves, i.e. spectrum, composition and anisotropy. The latter relies heavily on the transport properties through the magnetized plasma of the interstellar medium, which will be a special focus of the talk. The different methods for Galactic propagation will be reviewed in the context of the interpretation of the data, discussing in particular the role of the diffusion tensor and advection and their contributions to a possible Galactic wind.
Coffee Break in the Davidson Lobby
Indirect Searches for Dark Matter
Tracy Slatyer
I will review the current status of indirect dark matter searches, and discuss possible future directions.
PandaX Dark Matter Search
Xiangdong Ji
Shanghai Jiao Tong University/University of Maryland
In this talk, I will describe the status of and plans for the PandaX dark matter search at the Jinping Underground Lab in China.
Hidden-sector dark matter
Kathryn Zurek
LBNL
We discuss the paradigm of dark matter from a hidden sector, and observational implications for colliders and direct detection experiments.
Diverse Galactic Rotation Curves and Self-Interacting Dark Matter
Hai-Bo Yu
The rotation curves of spiral galaxies exhibit a diversity that has been difficult to understand in the cold dark matter (CDM) paradigm. In this talk, I will show that the self-interacting dark matter (SIDM) model provides excellent fits to the rotation curves of a sample of galaxies with asymptotic velocities in the 25 to 300 km/s range that exemplify the full range of diversity. We only assume the halo concentration-mass relation predicted by the CDM model and a fixed value of the self-interaction cross section. The impact of the baryons on the SIDM halo profile and the scatter from the assembly history of halos as encoded in the concentration-mass relation can explain the diverse rotation curves of spiral galaxies. I will also discuss other smoking-gun signatures of SIDM in astrophysical observations.
Can Self-interacting Dark Matter Solve the 'Diversity Problem' in Galactic Rotation Curves?
Anna Kwa
While the $\Lambda$CDM paradigm has been extremely successful in matching observations of dark matter structure at large scales, several discrepancies between observations of dark matter structures at smaller scales (galactic and below) and $\Lambda$CDM's predictions have motivated particle physicists to consider self-interacting dark matter (SIDM) as a possible solution. In this talk I will focus on how SIDM may solve the so-called 'diversity problem', in which the wide range of dark matter density profiles inferred from the rotation curves of nearby galaxies is difficult to reproduce through baryonic feedback in $\Lambda$CDM. SIDM allows for a wide range of scatter in the core sizes of dark matter halos in the following ways: (1) collisional scatterings between dark matter particles thermalize the inner regions of the dark matter halo and let its density profile adjust in response to the baryonic gravitational potential; and (2) scatter in the concentration-mass relation leads to scatter in the halo core size at fixed halo mass. Taking these two effects into account, we fit the SIDM scattering cross section and galaxy parameters for nearby galaxies in the SPARC sample. We find a clear preference for the SPARC rotation curves to be fit with SIDM cross sections >$\mathcal{O}(0.1)$ cm$^2$/g.
Constraining Self-Interacting Dark Matter through Equal Mass Galaxy Cluster Mergers
Stacy Kim
While the LCDM model has been wildly successful at explaining structure on large scales, it fails to do so on small scales---dark matter halos of scales comparable to that of galaxy clusters and smaller are more cored and less numerous than LCDM predicts. One potential solution challenges the canonical assumption that dark matter is collisionless and instead assumes that it is self-interacting. The most stringent upper limits on the dark matter self-interaction cross section have come from observations of merging galaxy clusters. Self-interactions cause the merging dark matter halos to evolve differently from the galaxies, which are effectively collisionless. It has been hypothesized that this leads to a spatial offset between the peaks in the dark matter and galaxy distributions. We show that in equal mass mergers, offsets matching those observed do not develop except under a narrow range of merger conditions that promote extreme dark mass loss during collision. Furthermore, offset formation cannot be described by a drag force nor by tail formation alone, as has previously been claimed. Self-interactions have a significant influence on other aspects of merger evolution, which can be exploited to derive stronger constraints on the self-interaction cross section. In particular, we expect a large fraction of BCGs to be miscentered by order 100s of kpc with cross sections greater than 1 cm^2/g; the lack of such large miscenterings implies a cross section no larger than 0.1 cm^2/g.
SIMPs with Vector Mediators
Alexander Natale
Korea Institute for Advanced Study
A general mechanism for thermal production of dark matter (DM) via 3-to-2 scatterings, or other higher-order interactions, allows for sub-GeV dark matter and strong self-interactions that meet existing constraints but have the potential to explain mysteries with cold DM and structure formation. In such models, so-called Strongly Interacting Massive Particles (SIMPs), a correct thermal average is important. This SIMP mechanism can exist in models with multiple scalars or in a strongly coupled gauge theory where the Wess-Zumino-Witten term generates the 3-to-2 interaction. In particular, a two-scalar model with a residual $Z_5$ discrete symmetry and a model with a dark QCD sector can produce parameter spaces where the SIMP paradigm is realized. In both models, the importance of vector mediators in the SIMP mechanism, and how these vector mediators affect the thermal average, is discussed.
Enabling Forbidden Dark Matter
Hongwan Liu
The thermal relic density of dark matter is conventionally set by two-body annihilations. We point out that in many simple models, 3→2 annihilations can play an important role in determining the relic density over a broad range of model parameters. This occurs when the two-body annihilation is kinematically forbidden, but the 3→2 process is allowed; we call this scenario "Not-Forbidden Dark Matter". We illustrate this mechanism for a vector portal dark matter model, showing that for a dark matter mass of mχ ∼ MeV - 10 GeV, 3→2 processes not only lead to the observed relic density, but also imply a self-interaction cross section that can solve the cusp/core problem. This can be accomplished while remaining consistent with stringent CMB constraints on light dark matter, and can potentially be discovered at future direct detection experiments.
Chiral Effective Theory of Dark Matter Direct Detection
Joachim Brod
The existence of dark matter is one of the few solid hints for physics beyond the standard model. If dark matter has indeed particle nature, then direct detection via scattering on atomic nuclei is one of the most promising discovery channels. In order to connect this nonrelativistic process with astrophysical and collider searches, as well as UV model building, a consistent setup of effective field theories for the different energy scales is necessary. I will present our work on the explicit connection between these energy scales, from the UV down to the nuclear scale. I will, in particular, discuss previously neglected chiral effects that can change the cross section by more than an order of magnitude.
Coffee Break in the Oak Room
Measurement of low energy ionization signals from Compton scattering in a CCD dark matter detector
Karthik Ramanathan
An important source of background in direct searches for low-mass dark matter particles are the energy deposits by small-angle scattering of environmental γ rays. We report detailed measurements of low-energy spectra from Compton scattering of γ rays in the bulk silicon of a charge-coupled device (CCD). Electron recoils produced by γ rays from 57Co and 241Am radioactive sources are measured between 60 eV and 4 keV. The observed spectra agree qualitatively with theoretical predictions, and characteristic spectral features associated with the atomic structure of the silicon target are accurately measured for the first time. A theoretically-motivated parametrization of the data that describes the Compton spectrum at low energies for any incident γ-ray flux is derived. The result is directly applicable to background estimations for low-mass dark matter direct-detection experiments based on silicon detectors, in particular for the DAMIC experiment down to its current energy threshold.
The LDMX Experiment
Joshua Hiltbrand
The Light Dark Matter eXperiment (LDMX) proposes a high-statistics search for low-mass dark matter at a new experimental facility, Dark Sector Experiments at LCLS-II (DASEL), at SLAC. LDMX employs the missing momentum technique, where electrons scattering in a thin target can produce dark matter particles via "dark bremsstrahlung" that are not observed in the detector. To identify these rare signal events, LDMX individually tags incoming beam-energy electrons, unambiguously associates them with low-energy, moderate transverse-momentum recoils of the incoming electron, and establishes the absence of any additional forward-recoiling charged particles or neutral hadrons. LDMX will employ low-mass tracking to tag incoming beam-energy electrons with high purity and cleanly reconstruct recoils. A high-speed, granular calorimeter with MIP sensitivity is used to reject the high rate of bremsstrahlung background at trigger level while working in tandem with a hadronic calorimeter to veto rare photonuclear reactions. Ultimately, LDMX aims to probe thermal dark matter over most of the viable sub-GeV mass range to a decisive level of sensitivity. This talk will summarize the current status of the LDMX design and performance studies and progress in developing the DASEL beamline.
The DAMIC experiment at SNOLAB
Charge-coupled devices (CCDs) are excellent particle detectors with the ability to probe a wide range of low-mass dark matter candidates. Initially developed for use in astronomy, CCDs have low per-pixel noise and excellent spatial resolution, giving them unique background discrimination and low (<100eV) energy thresholds. I will present the status of the DAMIC100 experiment, an ongoing direct dark matter search consisting of an array of 16 megapixel CCDs operated in the low radioactivity environment of the SNOLAB underground laboratory.
Recent Results and Current Status of SuperCDMS
Tsuguo Aramaki
SuperCDMS (Cryogenic Dark Matter Search) has been one of the leading direct dark matter search experiments using low-temperature semiconductor detectors. The recoil energy induced by dark matter scattering inside the detector is measured using phonon (lattice vibration) and ionization signals. CDMSlite (low-ionization threshold experiment) within SuperCDMS Soudan has the best dark matter-nucleon scattering cross section limits in the world for low-mass dark matter particles with masses between 2 and 5 GeV/c^2. With unique discovery potential for low-mass dark matter, complementary to higher-mass dark matter searches with other experiments, SuperCDMS plays a vital role in the search for dark matter. We are now moving forward with SuperCDMS SNOLAB, a DOE/NSF/CFI-funded direct detection dark matter search experiment. In this talk, I will present the recent results from SuperCDMS Soudan as well as an overview and the current status of SuperCDMS SNOLAB.
MiniBooNE Dark Matter Search
Ranjan Dharmapalan
The MiniBooNE experiment at Fermilab performed the first dedicated search for accelerator proton beam produced dark matter. By steering the 8 GeV beam into an iron beam dump, the neutrino production from charged meson decay was suppressed while the photon production from neutral mesons remained unchanged. According to hidden-sector vector portal models, the Standard Model photons kinetically mix with dark photons that decay into dark matter and travel towards the MiniBooNE detector. The experiment looked for dark matter particles scattering elastically off of nucleons in the detector medium and set new limits for the existence of sub-GeV dark matter within a vector portal model. In this talk, the experimental setup, analysis methods and ongoing analyses will be discussed. The results from MiniBooNE show that Fermilab could be at the forefront of searches for sub-GeV dark matter.
Scintillating Bubble Chamber for Dark Matter and CEνNS Detection
Jianjie Zhang
Scintillating bubble chambers have been demonstrated to give clean separation between electron recoils and nuclear recoils down to a thermodynamic "Seitz" threshold of 2 keV with a prototype liquid xenon chamber developed at Northwestern University, as the former only produce scintillation light while the latter produce both scintillation light and bubble nucleation. This clean separation is expected to extend down to the thermal stability limit of the target fluid, enabling the realization of WIMP or CEνNS detectors with sub-keV threshold. The demonstrated behavior for liquid xenon is expected to hold for other noble liquids such as argon, which expands the physics reach of the new technology. The prototype chamber is instrumented with a CCD camera for near-IR bubble imaging, a solar-blind PMT to detect 175-nm xenon scintillation light, and piezoelectric acoustic transducers to detect the ultrasonic emission from a growing bubble.
Recent results from the MAJORANA DEMONSTRATOR
Jordan Myslik
The MAJORANA DEMONSTRATOR is an experiment constructed to search for neutrinoless double-beta decays in germanium-76 and to demonstrate the feasibility to deploy a large-scale experiment in a phased and modular fashion. It consists of two modular arrays of natural and 76Ge-enriched germanium detectors totalling 44.1 kg, located at the 4850' level of the Sanford Underground Research Facility in Lead, South Dakota, USA. While crucial for its neutrinoless double-beta decay search, the ultra-low backgrounds and excellent energy resolution of the MAJORANA DEMONSTRATOR also allow it to probe additional physics beyond the Standard Model. This includes searches for dark matter and solar axions. This talk will discuss the results to date from the neutrinoless double-beta decay and beyond the Standard Model searches, as well as the future prospects of the MAJORANA DEMONSTRATOR.
Very High Energy Astrophysics with VERITAS
Gareth Hughes
Harvard-Smithsonian CFA
For more than a decade VERITAS, an imaging atmospheric-Cherenkov telescope array, has been probing the Northern very-high-energy (VHE; >100 GeV) gamma-ray sky. Located in Southern Arizona, VERITAS consists of four 12-m diameter reflectors and is one of the world's most sensitive detectors of gamma rays between 85 GeV and 30 TeV. Over 50 galactic and extra-galactic sources have been detected at these energies, many in conjunction with multi-wavelength and multi-messenger partners. Areas of investigation include the acceleration and propagation of cosmic rays in both galactic and extra-galactic sources, fundamental physics topics including the study of dark matter candidates, and an active multi-messenger follow-up program for triggers received from electromagnetic, neutrino, and gravitational-wave partners.
Highlights from MAGIC
Konstancja Satalecka
DESY Zeuthen, Germany
For more than a decade the MAGIC Collaboration has been delivering outstanding results in the field of very-high-energy gamma-ray physics. The system of two 17-m telescopes is one of the best performing instruments in its class, especially at low energies, which are crucial for observations of e.g. high-redshift sources, pulsars and GRBs. This talk will discuss recent key results from Galactic and extragalactic observation campaigns, including our fundamental physics (e.g. dark matter, Lorentz Invariance Violation) and cosmic ray (e.g. earth-skimming tau-neutrinos) programs. The basic instrumental features and challenges will also be presented. Finally, a perspective on the future of the experiment will be given.
An energy cutoff in the TeV gamma ray spectrum of the SNR Cassiopeia A
Emma De Ona Wilhelmi
CSIC-IEEC
It is widely believed that Galactic Cosmic Rays (CR) are accelerated in Supernova Remnants (SNRs) through the process of diffusive shock acceleration. In this scenario, particles should be accelerated up to energies around 1 PeV (the so-called 'Knee') and emit gamma rays. To test this hypothesis, precise measurements of the gamma-ray spectra of young SNRs at TeV energies are needed. Among the already known SNRs, Cassiopeia A (Cas A) appears as one of the best candidates for such studies. It is relatively young (about 300 years) and it has been largely studied in radio and X-ray bands, which constrains essential parameters for testing emission models, such as the magnetic field. We will present the results of a multi-year campaign on Cas A with the MAGIC Imaging Atmospheric Cherenkov Telescopes, for a total of 158 hours of good-quality data. We obtained a spectrum of the source from 100 GeV to 10 TeV and fit it assuming it follows a power-law distribution, with and without an exponential cut-off. We found, for the first time at Very High Energies (VHE, E > 100 GeV), observational evidence for the presence of a cut-off in the spectrum of Cas A. Assuming that the TeV gamma rays are produced by hadronic processes and that there is no significant cosmic ray diffusion, this indicates that Cas A is not a PeVatron (PeV accelerator) at its present age.
A First Look at the Very Highest-Energy Gamma-Ray Sky from HAWC
Kelly Malone
TeV observations of gamma-ray sources are very important probes of cosmic-ray accelerators, as leptonic and hadronic spectra differ in this energy range. The High Altitude Water Cherenkov (HAWC) Observatory, located in Puebla, Mexico, is capable of detecting air showers initiated by gamma rays in the multi-TeV energy range. The upper end of this range is previously unexplored. The detector consists of 300 water Cherenkov tanks located at an altitude of 4100 m, each instrumented with 4 PMTs. Because its instantaneous field of view is ~2 sr and it has a duty cycle > 95%, the array is well-suited to performing all-sky surveys. I will present a method to reconstruct the energy of the primary gamma rays on an event-by-event basis by measuring the charge density as a function of distance to the air shower axis. This greatly improves the dynamic range compared to the current method used by HAWC, which assigns a mean energy value for all events of a given shower size. I will use the method to show the latest HAWC observations of gamma-ray sources above 50 TeV, which are among the highest-energy gamma rays ever studied.
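To make the charge-density method concrete, the sketch below fits a toy lateral charge distribution with scipy. The functional form, parameter values, and noise level are all invented for illustration; this is not HAWC's actual reconstruction.

import numpy as np
from scipy.optimize import curve_fit

# Toy event-by-event energy estimator: fit the lateral charge distribution
# of one shower and use its normalization as an energy proxy. Everything
# here (functional form, numbers, noise) is assumed for illustration only.
def log_ldf(r, log_norm, slope):
    """log10 of charge density at distance r [m] from the shower axis."""
    return log_norm + slope * np.log10(r / 40.0)   # normalized at 40 m

rng = np.random.default_rng(0)
r = rng.uniform(20.0, 200.0, size=40)              # sampled tank distances [m]
log_q = log_ldf(r, 3.0, -2.5) + rng.normal(0.0, 0.2, size=r.size)

popt, pcov = curve_fit(log_ldf, r, log_q, p0=[2.0, -2.0])
print(f"charge density at 40 m: 10^{popt[0]:.2f}, slope: {popt[1]:.2f}")
# In a real estimator, simulations would supply the map from the fitted
# normalization at a reference distance to the primary gamma-ray energy.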
Search for PeV Photons with IceTop and IceCube
Zachary Griffith
IceCube Collaboration, University of Wisconsin - Madison
We present results of a search for galactic PeV gamma rays with the IceCube observatory, presently the most sensitive facility for PeV gamma-ray sources in the Southern Hemisphere. This includes a search for point sources over IceCube's field of view, as well as tests for correlations with TeV sources detected by H.E.S.S. and neutrino events from IceCube's high-energy starting event sample, with the goal of constraining the Galactic component of the astrophysical neutrino flux observed by IceCube. In addition, we search for correlations of PeV gamma rays with the Galactic plane, using the pion decay component of the Fermi-LAT diffuse emission model as a spatial template. As the cosmic rays producing such gamma rays are necessarily an order of magnitude greater in energy, this result provides a new constraint on the galactic source contribution to the cosmic ray flux above the "knee".
Supernova Remnant Studies with Fermi LAT
John Hewitt
Supernova remnants (SNRs) have been studied at GeV energies using the Fermi Large Area Telescope (LAT) for nearly a decade. The detection of the pion bump in four SNRs demonstrates that these are sources of cosmic ray protons. However, the detailed physics of particle acceleration (or re-acceleration) and diffusion remain undetermined. Determining the Galactic cosmic ray contribution from SNRs requires both a larger gamma-ray sample of Galactic SNRs and detailed spectral and spatial studies at GeV and TeV energies. The recently released Pass 8 data significantly improve the sensitivity and angular resolution of GeV studies of SNRs. A complete search for extended sources located along the Galactic plane at energies above 10 GeV has detected 46 extended sources, 16 of which are newly identified and likely to be either SNRs or pulsar wind nebulae. Joint studies with observatories at TeV energies (HAWC, MAGIC and VERITAS) characterize the spectra of 16 new unassociated HAWC sources at TeV energies and spatially resolve shock-cloud interaction regions of the well-studied SNR IC 443. Such multi-instrument studies promise to uncover the origins of SNRs as cosmic accelerators.
Multi-year observations of the Galactic Center region with the MAGIC telescopes
Ievgen Vovk
Max-Planck-Institut für Physik
The Galactic Center region is one of the primary targets for observations with the current generation of gamma-ray telescopes. This attention is primarily due to the presence of a black hole of 4 million solar masses, which provides a rare opportunity to study the interaction of a super-massive black hole with surrounding matter at a relatively close distance. Recently, interest in this region has increased thanks to a series of exciting discoveries: the large, extended bubbles detected with Fermi/LAT, the envisioned burst of high-energy emission due to the passage of the G2 gas cloud, the likely PeVatron nature of the primary source unveiled with H.E.S.S., and the discovery of a new source in the region, reported by the major Cherenkov telescopes MAGIC, H.E.S.S. and VERITAS. All these underline the complex physics of the region, revealed by deep gamma-ray observations. In this talk I will present the results of the multi-year observational program of the Galactic Center region with the MAGIC telescopes, conducted at large zenith angles. I will discuss in detail the morphology of this region and compare it with the predictions of several different models.
Evidence Against a Dark Matter Explanation of the Galactic Center Excess
Oscar Macias
An anomalous, apparently diffuse, gamma-ray signal not readily attributable to known Galactic sources has been found in Fermi space telescope data covering the central ~10 degrees of the Galaxy. This "Galactic Center Gamma-Ray Excess" (GCE) signal has a spectral peak at ~2 GeV and reaches its maximum intensity at the Galactic Center (GC), from where it falls off as a radial power law ~r^{-2.4}. Given its morphological and spectral characteristics, the GCE is ascribable to self-annihilation of dark matter particles governed by a Navarro-Frenk-White-like density profile. However, it could also be composed of many dim, unresolved point sources, for which millisecond pulsars (MSPs) or pulsars would be natural candidates given their GeV-peaked spectra. Statistical evidence that many sub-threshold point sources contribute up to 100% of the GCE signal has recently been claimed. We have developed a novel analysis that exploits hydrodynamical modelling to better register the position of gamma-ray-emitting gas in the Inner Galaxy. Our improved analysis reveals that the excess gamma rays are spatially correlated with both the X-shaped stellar over-density in the Galactic bulge and the nuclear stellar bulge. Given these correlations, we argue that the excess is not a dark matter phenomenon but rather associated with the stellar population of the X-shaped bulge and the nuclear bulge.
Characterizing the population of pulsars in the Galactic bulge with the Fermi Large Area Telescope
Mattia Di Mauro
Several groups have demonstrated the existence of an excess in the gamma-ray emission around the Galactic Center (GC) with respect to the predictions from a variety of Galactic Interstellar Emission Models (GIEMs) and point source catalogs. The origin of this excess, peaked at a few GeV, is still under debate. A possible interpretation is that it comes from a population of unresolved Millisecond Pulsars (MSPs) in the Galactic bulge. We investigate the detection of point sources in the GC region using new tools which the Fermi-LAT Collaboration is developing in the context of searches for Dark Matter (DM) signals. These new tools perform very fast scans, iteratively testing for additional point sources at each of the pixels of the region of interest. We also show how to discriminate between point sources and structural residuals from the GIEM. We apply these methods to the GC region considering different GIEMs and testing the DM and MSP interpretations of the GC excess. Our analysis is capable of finding the characteristics of this putative population of MSPs in the bulge of our Galaxy. Additionally, we create a list of promising MSP candidates that could represent the brightest sources of a bulge MSP population.
The Galactic-Center excess and 511 keV Bulge emission: two sides of the same coin?
Richard Bartels
A clear excess at ~2 GeV, known as the Galactic-Center Excess (GCE), has been detected in the Galactic Bulge region by the Fermi telescope. In addition, the Galactic Bulge is characterised by the annihilation of positrons resulting in a 511 keV line. Both signals look morphologically similar, but so far a detailed comparison has been lacking from the literature. We model the GCE using the new gamma-ray modelling code SkyFact and compare the results to spatial models of the 511 keV excess. We find that the GCE and 511 keV excess are compatible with identical source distributions and speculate about potential common origins in terms of population synthesis. In addition, we discuss future directions that can potentially test whether the GCE and 511 keV signal are connected.
Observing supernova neutrinos to late times
Shirley Li
The next Galactic supernova (SN) will probably occur while current or next-generation neutrino experiments are online. It is crucial to have a correct understanding of the basic characteristics of the expected neutrino signals. The nominal expectation for the duration of the neutrino signal is ~10 s; this expectation has guided both theoretical and experimental effort. We simulate SN neutrino emission at late times and predict the detected neutrino signals in large neutrino experiments. We find that neutrino signals from a SN should be detectable out to ~1 min. We will discuss how this will change future theoretical and experimental effort in SN studies.
Neutrino flavour conversions near the supernova core
Francesco Capozzi
Supernova neutrinos can experience "fast" self-induced flavor conversions almost immediately above the core, with important implications for the explosion mechanism and nucleosynthesis. Very recently, a novel method has been proposed to investigate these phenomena, in terms of the dispersion relation for the complex frequency and wave number (ω, k) of disturbances in the mean field of the νe-νx flavour coherence. I discuss a systematic approach to such instabilities, originally developed in the context of plasma physics. Instabilities are typically seen to emerge for complex ω, and can be further characterized as convective (moving away faster than they spread) and absolute (growing locally), depending on k-dependent features. The analytical classification of both unstable and stable modes leads not only to qualitative insights about their features but also to quantitative predictions about the growth rates of instabilities.
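For orientation, the classification referred to in this abstract follows the standard plasma-physics (Briggs) analysis, stated loosely here under the usual conventions (this is background, not material from the talk): small perturbations of the flavour coherence are written as plane waves,
\[
\delta\rho \;\propto\; e^{\,i(kx - \omega t)}, \qquad D(\omega, k) = 0,
\]
and a mode with real $k$ and $\mathrm{Im}\,\omega(k) > 0$ grows. The instability is absolute (growing at a fixed location) if there is a pinch point $k_0$ with $d\omega/dk|_{k_0} = 0$ and $\mathrm{Im}\,\omega(k_0) > 0$, and convective (growing while propagating away) otherwise.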
Neutrino astronomy: Measuring the size of the Sun's core
I introduce the idea of using neutrinos as probes for measuring the size of the solar core. I review previous work showing that neutrinos from galactic supernovae, detected in water Cherenkov experiments such as Super Kamiokande, can be used to locate their sources. Using these ideas I discuss my recent work in Phys. Rev. Lett. 117 (2016) 211101 on the prospects for measuring the size of the solar core using $^8$B neutrinos, for Super Kamiokande and future experiments such as Hyper Kamiokande. I show, using a maximum likelihood analysis, that it is possible to locate neutrino emission within the solar core with approximately 4 years of data from an experiment like Hyper Kamiokande.
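As a toy version of this idea (not the analysis of the cited paper), one can model each reconstructed event direction as the true direction smeared by the detector resolution, with the source extent adding in quadrature, and extract the extent by maximum likelihood. All numbers below are assumptions for illustration.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
sigma_det = 1.0   # detector angular resolution [deg], assumed
sigma_src = 0.3   # true source extent [deg], assumed
n_events = 50_000
theta = rng.normal(0.0, np.hypot(sigma_src, sigma_det), size=n_events)

def neg_log_like(s_src):
    # Gaussian likelihood with total width sqrt(s_src^2 + sigma_det^2)
    s_tot = np.hypot(s_src, sigma_det)
    return 0.5 * np.sum((theta / s_tot) ** 2) + n_events * np.log(s_tot)

res = minimize_scalar(neg_log_like, bounds=(1e-3, 2.0), method="bounded")
print(f"maximum-likelihood source extent: {res.x:.3f} deg (true: {sigma_src})")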
Muon-induced spallation backgrounds in DUNE
Guanying Zhu
Galactic supernovae are rare, just a few per century, so it is important to be prepared. If we are, then the long-baseline detector DUNE could detect thousands of events, compared to the tens from SN 1987A. An important question is backgrounds from muon-induced spallation reactions. We simulate particle energy-loss processes in liquid argon, and compare relevant isotope yields with those in the water-Cherenkov detector SuperK. Our approach will help optimize the design of DUNE and further benefit the study of supernova neutrinos.
Showering Muons in Super Kamiokande
Scott Locke
Super-Kamiokande (SK), the world's largest underground water Cherenkov detector, observes about 2 muons a second passing through it at a depth of 1 km. A fraction of these muons shower, and sometimes create radioactive isotopes (spallation). Those isotopes live anywhere from microseconds to several seconds, forming a dominant background to neutrino searches above 6 MeV and below 20 MeV. Detection of Cherenkov light from the showers points to the location of potential spallation products. Spallation is predominantly produced by neutrons and pions interacting with oxygen in the water. Therefore the detection of neutrons produced by muons serves both as an effective tag as well as an independent position measurement of spallation production. Recently, these neutrons were successfully detected in SK. The development of this technique may prove critical for future water Cherenkov detectors with less overburden, such as Hyper Kamiokande. The addition of water soluble gadolinium salt will improve the neutron detection efficiency and muon time correlation.
Astrophysics with the NOvA neutrino experiment
Matthew Strait
NOvA is a long-baseline neutrino oscillation experiment with the primary goals of discovering CP violation in the neutrino sector, determining the neutrino mass hierarchy and constraining the mixing angle $\theta_{23}$. NOvA also has a rich program of cosmic ray and astrophysical measurements. We will set competitive limits on the flux of magnetic monopoles as well as for neutrinos resulting from dark matter annihilation in the Sun. Both the NOvA near and far detectors are capable supernova observatories. The NOvA near detector has confirmed a puzzling reversal, first seen by MINOS, of the usual seasonal trend of cosmic rays underground in the case of multiple muons. Several other astrophysical topics will also be discussed.
Flavoured Dark Matter in Dark Minimal Flavour Violation
Simon Kast
Karlsruhe Institute for Technology
We study simplified models of flavoured dark matter in the framework of Dark Minimal Flavour Violation. In this setup the coupling of the dark matter flavour triplet to SM quark triplets constitutes the only new source of flavour and CP violation. The parameter space of the model is restricted by LHC searches with missing energy final states, by neutral meson mixing data, by the observed dark matter relic abundance, and by the absence of a signal in direct detection experiments. We consider all of these constraints in turn, studying their implications for the allowed parameter space. Especially interesting is the combination of all constraints, revealing a non-trivial interplay. Large parts of the parameter space are excluded, most significantly in light of future bounds from upcoming experiments.
Enhancing Dark Matter Annihilation Rates with Dark Bremsstrahlung
Nicole Bell
Many dark matter interaction types lead to annihilation processes which suffer from $p$-wave suppression or helicity suppression, rendering them sub-dominant to unsuppressed $s$-wave processes. We demonstrate that the natural inclusion of dark initial state radiation can open an unsuppressed $s$-wave annihilation channel, and thus provide the dominant dark matter annihilation process for particular interaction types. We illustrate this effect with the bremsstrahlung of a dark pseudoscalar or vector boson from fermionic dark matter, $\overline{\chi}\chi\rightarrow \overline{f}f\phi$ or $\overline{f}fZ'$. The dark initial state radiation process, despite having a 3-body final state, proceeds at the same order in the new physics scale $\Lambda$ as the annihilation to the 2-body final state $\overline{\chi}\chi\rightarrow \overline{f}f$. This is lower order in $\Lambda$ than the well-studied lifting of helicity suppression via Standard Model final state radiation, or virtual internal bremsstrahlung. This dark bremsstrahlung process should influence LHC and indirect detection searches for dark matter.
Dark Forces in the Sky: Signals from Z' and the Dark Higgs
We consider the indirect detection signals for models containing a fermionic DM candidate, a dark gauge boson, and a dark Higgs field. Compared with a model containing only a dark matter candidate and vector mediator, the addition of the scalar provides a mass generation mechanism for the dark sector particles which, in some cases, is required in order to avoid unitarity violation at high energies. We demonstrate that the dark matter interaction types, and hence the annihilation processes relevant for relic density and indirect detection, are strongly dictated by the mass generation mechanism chosen for the dark sector particles, and the requirement of gauge invariance. We outline important phenomenology of such two-mediator models, which is missed in the usual single-mediator simplified model approach. In particular, the inclusion of the two mediators opens up a new, dominant, s-wave annihilation channel that does not arise when a single mediator is considered in isolation.
Dark Matter Models for the Galactic Center Excess
Miguel Escudero
IFIC-University of Valencia
The origin of the Galactic Center Gamma-Ray excess still remains unclear. Astrophysical interpretations have been proposed, but these explanations require either a significant degree of tuning or a large population of millisecond pulsars whose properties differ substantially from those observed in globular clusters or near the Milky Way. If the dark matter annihilation interpretation is assumed, one should expect additional signatures at colliders and at direct detection experiments. In this talk I will present the current constraints on dark matter models that are able to successfully explain the Galactic Center excess.
Sharp spectral features from light dark matter decay via gravity portals
Sebastian Ingenhutt
TUM & MPP, Munich
So far, all evidence for the existence of dark matter is based on its gravitational interactions with the observable sector, and its precise particle nature remains mysterious. However, even if dark matter is stable against decay in flat spacetime, as commonly assumed in the literature, the presence of nonminimal couplings to gravity of the dark matter field can spoil this stability in curved spacetime, with potentially remarkable phenomenological implications. More specifically, a scalar dark matter candidate with a mass in the MeV-GeV region, destabilized through a linear coupling to the Ricci scalar, can decay into electron-positron pairs and photons. This has implications for both the thermal history of the Universe and the present-day gamma-ray spectrum observed at Earth. Observations of the cosmic microwave background by the Planck satellite and of the extragalactic isotropic gamma-ray background by COMPTEL, EGRET and Fermi LAT can be used to constrain the size of the nonminimal coupling parameter.
Cooling sterile neutrino dark matter
Stefan Vogl
Max-Planck-Institut für Kernphysik
Sterile neutrinos produced through resonant or non-resonant oscillations are a well motivated dark matter candidate, but recent constraints from observations have ruled out most of the parameter space. Based on general considerations we find a thermalization mechanism which can increase the yield after resonant and non-resonant production. At the same time, it alleviates the growing tensions with structure formation and X-ray observations and even revives simple non-resonant production as a viable way to produce sterile neutrino dark matter. We investigate the parameters required for the realization of the thermalization mechanism in a representative model and find that a simple estimate based on energy- and entropy conservation describes the mechanism well.
Phase Method for the evaluation of Sommerfeld Enhancement
Hiren Patel
In this talk, I will present the application of Calogero's Variable Phase method for the determination of Sommerfeld Enhancement factors relevant for dark matter cross section calculations. In contrast to directly solving the radial Schrödinger equation, the Variable Phase Method offers a rapid and stable evaluation, even for multichannel systems. Time permitting, I will outline strategies for obtaining new analytic results by looking for asymptotic approximations to the variable phase equations.
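For reference, the textbook baseline that the variable phase method is compared against: directly integrate the s-wave radial Schrödinger equation for an attractive Yukawa potential and read the enhancement off the asymptotic amplitude. The dimensionless conventions below ($x = \alpha m_\chi r$, $\epsilon_v = v/\alpha$ with $v$ the velocity of each particle in the CM frame, $\epsilon_\phi = m_\phi/(\alpha m_\chi)$) follow common usage and are assumptions here, not taken from the talk.

import numpy as np
from scipy.integrate import solve_ivp

def sommerfeld_s_wave(eps_v, eps_phi, x_max=5000.0):
    """S-wave Sommerfeld factor by direct integration (baseline method)."""
    def rhs(x, y):
        chi, dchi = y
        # chi'' = -(eps_v^2 + exp(-eps_phi * x) / x) * chi
        return [dchi, -(eps_v**2 + np.exp(-eps_phi * x) / x) * chi]

    x0 = 1e-8
    sol = solve_ivp(rhs, (x0, x_max), [x0, 1.0], rtol=1e-10, atol=1e-12)
    chi, dchi = sol.y[0, -1], sol.y[1, -1]
    amp_sq = chi**2 + (dchi / eps_v) ** 2  # |A|^2 for chi -> A sin(eps_v x + d)
    return 1.0 / (amp_sq * eps_v**2)       # normalized so a free wave gives S = 1

# Rough cross-check against the analytic Coulomb limit (eps_phi -> 0),
# S = (pi/eps_v) / (1 - exp(-pi/eps_v)); percent-level agreement expected.
ev = 0.2
print(sommerfeld_s_wave(ev, 1e-5))
print((np.pi / ev) / (1.0 - np.exp(-np.pi / ev)))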
Au Genesis From Cogenesis: Heavy Asymmetric Dark Matter Makes Gold
Joseph Andrew Bramante
Perimeter Institute
The early universe could feature multiple reheating events, leading to jumps in the visible sector entropy density that dilute both particle asymmetries and the number density of frozen-out states. In fact, late-time entropy jumps are usually required in models of Affleck-Dine baryogenesis, which typically produces an initial particle-antiparticle asymmetry that is much too large. An important consequence of late-time dilution is that a smaller dark matter annihilation cross section is needed to obtain the observed dark matter relic density. For cosmologies with high-scale baryogenesis, followed by radiation-dominated dark matter freeze-out, the perturbative unitarity mass bound on thermal relic dark matter is relaxed to thousands of PeV. An extensive direct detection search program is necessary to uncover such dark matter. Intriguingly, within the dilute thermal relic framework, PeV mass asymmetric dark matter could be responsible for the production of heavy r-process elements, including gold.
High-Energy Gamma-Rays from the Milky Way: Three-Dimensional Spatial Models for the Cosmic-Ray and Radiation Field Densities
Troy Porter
High-energy gamma rays of interstellar origin are produced by the interaction of cosmic-ray (CR) particles with the diffuse gas and radiation fields in the Galaxy. The main features of this emission are well understood and are reproduced by existing CR propagation models employing 2D Galactocentric cylindrically symmetric geometry. However, the high-quality data from instruments like the Fermi Large Area Telescope reveal significant deviations from the model predictions on scales of a few to tens of degrees, indicating the need to include the details of the Galactic spiral structure and thus requiring 3D spatial modelling. In this contribution the high-energy interstellar emissions from the Galaxy are calculated using the newly released version of the GALPROP code (v55) employing 3D spatial models for the CR source and interstellar radiation field (ISRF) densities. The interstellar emission models that include arms and bulges for the CR source and ISRF densities provide plausible physical interpretations for features found in the residual maps from high-energy gamma-ray data analysis. The 3D models for CR and ISRF densities provide a more realistic basis that can be used for refined interpretation of the non-thermal interstellar emissions from the Galaxy.
Propagation of TeV-energy cosmic rays using stochastic differential equations in CRPropa3.1
Lukas Merten
Ruhr-University Bochum
The propagation of charged cosmic rays through the Galactic environment influences all aspects of the observation at Earth. Energy spectrum, composition and anisotropy are changed due to deflections in magnetic fields and interactions with the interstellar medium. Today the transport is simulated with different methods, based either on the solution of a transport equation (multi-particle picture) or on the solution of an equation of motion (single-particle picture). We developed a new module for the publicly available propagation software CRPropa 3.1, in which we implemented an algorithm to solve the transport equation using stochastic differential equations. This technique allows us to use a diffusion tensor which is anisotropic with respect to an arbitrary magnetic background field, such as the well-known JF12 field. In this contribution, we present first studies on the influence of anisotropic diffusion along the magnetic field lines on cosmic ray outflows and compare our results to observations.
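The core of the stochastic technique can be illustrated in a few lines: the diffusive part of the transport equation maps onto the stochastic process $dx = \sqrt{2\kappa}\,dW$, with separate parallel and perpendicular coefficients defined relative to the local field direction. The sketch below uses a uniform field and Euler-Maruyama steps; CRPropa's actual implementation (JF12 field geometry, adaptive stepping, etc.) is far more involved, and all numbers here are illustrative.

import numpy as np

rng = np.random.default_rng(2)
n_particles = 10_000
kappa_par, kappa_perp = 1.0, 0.1   # illustrative diffusion coefficients
dt, n_steps = 0.01, 1000

pos = np.zeros((n_particles, 3))
b_hat = np.array([0.0, 0.0, 1.0])  # uniform background field direction

for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), size=(n_particles, 3))
    par = dw @ b_hat                      # Wiener increment along the field
    perp = dw - np.outer(par, b_hat)      # increments across the field
    pos += (np.sqrt(2 * kappa_par) * np.outer(par, b_hat)
            + np.sqrt(2 * kappa_perp) * perp)

# Consistency check: <z^2> = 2*kappa_par*t and <x^2> = 2*kappa_perp*t.
t = dt * n_steps
print(pos[:, 2].var() / (2 * t), pos[:, 0].var() / (2 * t))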
Multi-wavelength Signatures of Cosmic Rays in the Milky Way
Elena Orlando
Cosmic rays propagate in the Milky Way and interact with the interstellar medium and magnetic fields. These interactions produce emissions that span the electromagnetic spectrum and are an invaluable tool for understanding the intensities and spectra of cosmic rays in different regions of the Milky Way. Hence observations of these emissions complement information from cosmic ray measurements. We present updates on the study of cosmic ray properties obtained by combining multi-wavelength observations of this interstellar emission with the latest accurate direct CR measurements.
Solar Modulation studies with the Alpha Magnetic Spectrometer
Veronica Bindi
University of Hawaii, Manoa
The Alpha Magnetic Spectrometer (AMS), on the International Space Station (ISS) since May 2011, has acquired the largest number of particles ever measured in space by a single experiment, performing the most precise measurement of galactic cosmic rays (GCR) to date. AMS has measured the detailed time variation of the fluxes of multiple particle species during its first years of operations, covering the ascending phase of solar cycle 24 and the reversal of the Sun's magnetic field polarity (from negative, A < 0, to positive, A > 0). For all particles, the high-energy spectrum remains stable versus time, while the low-energy range is strongly modulated by the solar activity. In addition, AMS measured several Forbush decreases (FD) and solar energetic particles (SEP) associated with short-term solar activity.
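For orientation, the low-energy solar modulation described here is often summarized by the force-field approximation, a standard one-parameter description (not necessarily the parameterization used in this analysis), which relates the flux at Earth to the local interstellar spectrum through a time-dependent potential $\phi(t)$:
\[
J(E, t) \;=\; J_{\rm LIS}(E + \Phi)\,
\frac{E\,(E + 2 m_p c^2)}{(E + \Phi)\,(E + \Phi + 2 m_p c^2)},
\qquad \Phi = \frac{|Z|\, e\, \phi(t)}{A},
\]
with $E$ the kinetic energy per nucleon. High energies ($E \gg \Phi$) are left essentially unchanged, while the flux is suppressed at low energies, in line with the behavior reported above.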
Cosmic ray propagation around the Sun
Mike Kroll
The Sun shadow can be measured with the IceCube detector and varies in depth corresponding to the magnetic field. Hence, it offers a possibility to understand cosmic ray propagation in the magnetic field of the Sun, for which sufficiently good modelling is necessary. We investigate the field with its temporal variations in strength and orientation. In times of low solar activity, the field can be approximated by a dipole structure. During higher activity, however, the field becomes increasingly inhomogeneous, especially in regions near the solar surface. These regions are spatially constrained and can reach magnetic field strengths of up to 50 Gauss. In this work, we simulate protons with energies up to $E_{p,\rm max} = 40$ TeV. This energy is the median energy of those cosmic rays that are used in IceCube's Sun shadow analysis. IceCube's data allow us to determine the Sun shadow at different times in the solar cycle and to compare the results to our simulation. We obtain solar magnetic field data within the PFSS model from the GONG data archive.
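A minimal single-particle counterpart to the simulations described above is a relativistic Boris pusher in a static dipole field. All numbers below (dipole moment, starting point, energy, step size) are illustrative Earth-like values chosen so the code runs quickly; the solar PFSS field used in the talk is far more structured.

import numpy as np

Q, M, C = 1.602e-19, 1.673e-27, 2.998e8   # proton charge [C], mass [kg], c [m/s]

def dipole_b(x, m_dip=np.array([0.0, 0.0, 8.0e22])):
    """Field [T] of a point dipole m_dip [A m^2] at position x [m]."""
    mu0_4pi = 1.0e-7
    r = np.linalg.norm(x)
    return mu0_4pi * (3.0 * x * np.dot(m_dip, x) / r**5 - m_dip / r**3)

def boris_step(x, u, dt):
    """Advance position x and u = gamma*v by one step (no electric field)."""
    gamma = np.sqrt(1.0 + np.dot(u, u) / C**2)
    t = (Q * dt / (2.0 * M * gamma)) * dipole_b(x)
    u_prime = u + np.cross(u, t)                    # half rotation
    u_new = u + np.cross(u_prime, 2.0 * t / (1.0 + np.dot(t, t)))
    gamma_new = np.sqrt(1.0 + np.dot(u_new, u_new) / C**2)
    return x + (u_new / gamma_new) * dt, u_new

# A 10 MeV proton gyrating in the equatorial plane at four Earth radii.
gamma0 = 1.0 + 10e6 * Q / (M * C**2)     # kinetic energy -> Lorentz factor
speed = C * np.sqrt(1.0 - 1.0 / gamma0**2)
x = np.array([4 * 6.371e6, 0.0, 0.0])
u = gamma0 * speed * np.array([0.0, 1.0, 0.0])
for _ in range(20000):
    x, u = boris_step(x, u, dt=1e-3)
print(x)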
Observation of the Moon and Sun with HAWC
Mehr Un Nisa
The Sun and Moon produce deep deficits in the nearly isotropic flux of TeV cosmic rays measured at Earth. Observations of these cosmic-ray deficits, or "shadows," can provide unique measurements of the solar and Galactic environment. For example, the displacement of the shadow of the Moon in the geomagnetic field allows for charge discrimination of high-energy Galactic cosmic rays. The Sun shadow varies strongly with the solar cycle, and multi-year measurements enable precise tests of coronal magnetic field models. Moreover, the Sun may also be a TeV gamma ray source due to interactions of Galactic cosmic rays in its photosphere. The High Altitude Water Cherenkov (HAWC) Observatory, a wide field-of-view detector of TeV cosmic rays and gamma rays, performs unbiased high-statistics measurements of the Sun and Moon each day. Using measurements of the Moon shadow with two years of data from the complete HAWC array, we will present strong limits on the flux of antiprotons above 1 TeV. We will also present the first upper limits on the flux of gamma rays above 1 TeV from the solar disk.
Latest results from AMS on the Space Station
Melanie Heil
Massachusetts Inst. of Technology
The Alpha Magnetic Spectrometer (AMS) is a multi-purpose magnetic spectrometer measuring cosmic rays up to TeV energies on the International Space Station (ISS) since 2011. Its precision, large acceptance, and ability to identify particle types over a wide energy range during its long-duration mission in space make it unique in astroparticle physics. To date AMS has collected over 100 billion charged cosmic ray events. The latest AMS results will be presented.
Antiproton Flux and Antiproton-to-Proton Flux Ratio in Primary Cosmic Rays Measured with AMS on the Space Station
Andreas Bachlechner
Rheinisch-Westfaelische Tech. Hoch.
Precision measurements by AMS of the antiproton flux and the antiproton-to-proton flux ratio in primary cosmic rays in the absolute rigidity range from 1 to 450 GV are presented, based on $3.49 \times 10^5$ antiproton events and $2.42 \times 10^9$ proton events. At $\sim 20$ GV the antiproton-to-proton flux ratio reaches a maximum. Unexpectedly, above 60 GV the antiproton spectral index is consistent with the proton spectral index, and the antiproton-to-proton flux ratio shows no rigidity dependence in the rigidity range from $\sim 60$ to $\sim 500$ GV. This observation requires a new explanation of the origin of cosmic ray antiprotons.
Precision Measurement of the Combined Electron and Positron Flux in Primary Cosmic Rays with AMS on the ISS
Matteo Duranti
Universita e INFN, Perugia
We present the latest measurement of the combined electron and positron flux in cosmic rays based on the analysis of all the AMS data collected during more than 5 years of operations. The multiple redundant identification of electrons and positrons, and the match between the energy measured by the 17-radiation-length calorimeter and the momentum measured by the tracker in the magnetic field, enable us to select a clean electron and positron sample up to the highest energies. The extensive calibration of the detector in the test beam at CERN verifies the energy scale and the proton rejection power. These latest results, based on twice the statistics of our previous publication, disagree with the results of other experiments, especially at high energies. Our results in the region from 30 to 1000 GeV can be described accurately by a single power-law dependence.
Cosmic-Ray Lithium Production at a Type Ia Supernova Following a Nova Eruption
Norita Kawanaka
Recent direct measurements of cosmic-ray (CR) light nuclei (protons, helium, and lithium) by AMS-02 have shown that the flux of each element has an unexpectedly hard component above $\sim 300~{\rm GeV}$, and that the spectral indices of those components are almost the same. This implies that there are some primary sources that produce CR lithium nuclei, which have been believed to be produced via spallation of heavier nuclei in the ISM (secondary origin). We propose a nearby Type Ia supernova following a nova eruption on a white dwarf as the origin of CR Li.
Measurement of anisotropies in cosmic ray arrival directions with the Alpha Magnetic Spectrometer on the International Space Station
Jorge Casaus
Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas
An analysis of anisotropies in the arrival directions of galactic protons, electrons and positrons has been performed by AMS on the International Space Station. Absolute anisotropy measurements for protons, electrons and positrons, together with the results of the anisotropy analysis of the electron-to-proton, positron-to-proton, and positron-to-electron ratios, will be presented.
Properties of Elementary Particle Flux Ratios in Primary Cosmic Rays
Yuan-Hann Chang
National Central University TW
We report the measurements of the fluxes of elementary particles: electrons, positrons, protons, and antiprotons, in the cosmic rays by the AMS experiment. The measured spectra show distinctive features that cannot be explained by ordinary cosmic ray models. In particular, in spite of the different production and propagation properties of protons, antiprotons and positrons, the antiproton-to-proton and positron-to-proton flux ratios are rigidity independent above 60 GV, while the electron flux shows completely different rigidity dependence. To explain these unexpected features, new understandings of elementary particles in the cosmic rays are needed.
Cosmic ray driven winds in the Galactic environment and the cosmic ray spectrum
Sarah Recchia
University Paris 7 – APC
Cosmic Rays escaping the Galaxy exert a force on the interstellar medium directed away from the Galactic disc. If this force is larger than the gravitational pull due to the mass embedded in the Galaxy, then galactic winds may be launched. Such outflows may have important implications for the history of star formation of the host galaxy, and in turn affect in a crucial way the transport of cosmic rays, both due to advection with the wind and to the excitation of waves by the same cosmic rays, through streaming instability. The possibility to launch cosmic ray induced winds and the properties of such winds depend on environmental conditions, such as the density and temperature of the plasma at the base of the wind and the gravitational potential, especially the one contributed by the dark matter halo. In this paper we make a critical assessment of the possibility to launch cosmic ray induced winds for a Milky-Way-like galaxy and how the properties of the wind depend upon the conditions at the base of the wind. Special attention is devoted to the implications of different conditions for wind launching on the spectrum of cosmic rays observed at different locations in the disc of the galaxy. We also comment on how cosmic ray induced winds compare with recent observations of Oxygen absorption lines in quasar spectra and emission lines from blank-sky, as measured by XMM-Newton/EPIC-MOS.
Observing the Polarization of the Cosmic Microwave Background with SPIDER
Alexandra Rahlin
Fermi National Accelerator Laboratory
SPIDER is a balloon-borne telescope designed to characterize the linear polarization of the cosmic microwave background at degree angular scales, and in particular to place constraints on the $B$-mode angular power spectrum arising from primordial gravitational waves. For the inaugural flight in January 2015, SPIDER observed approximately 12% of the sky with nearly 2000 detectors at frequencies of 95 GHz and 150 GHz in order to characterize the CMB power spectrum over the range $30 < \ell < 300$. In combination with Planck data at higher frequencies, the relatively large sky coverage over multiple frequency bands also enables characterization of foreground contributions from Galactic dust. A second flight in December 2018 will include several high-frequency instruments to further constrain Galactic foregrounds in our region of the sky. In this talk, we present preliminary analysis of data from the first flight, as well as science prospects for the second flight.
A New Limit on CMB Circular Polarization from SPIDER
Johanna Nagy
I will present a new upper limit on CMB circular polarization from the 2015 flight of SPIDER, a balloon-borne telescope designed to search for B-mode linear polarization from cosmic inflation. Although the level of circular polarization in the CMB is predicted to be very small, experimental limits provide a valuable test of the underlying models. By exploiting the non-zero circular-to-linear polarization coupling of the half-wave plate polarization modulators, data from SPIDER's 2015 Antarctic flight provide a constraint on Stokes V at 95 and 150 GHz over the multipole range 33 < $\ell$ < 307. No other limits exist over this full range of angular scales, and SPIDER improves upon the previous limit by several orders of magnitude. As linear CMB polarization experiments become increasingly sensitive, similar techniques can be applied to obtain even stronger constraints on circular polarization.
SPT-3G: A new instrument on the South Pole Telescope
Daniel Dutcher
Kavli Institute for Cosmological Physics & Department of Physics, University of Chicago
The South Pole Telescope is a 10-meter diameter telescope located at the NSF Amundsen-Scott South Pole Station in Antarctica, designed for high-precision measurements of the temperature anisotropy and polarization properties of the cosmic microwave background. The third-generation camera on the telescope, SPT-3G, was deployed in the 2016-2017 austral summer season and represents a significant technological upgrade over previous instruments. The secondary optics, receiver cryostat, readout electronics, and detectors have all been redesigned and replaced with significantly improved versions. The SPT-3G focal plane consists of over 2700 trichroic, dual-polarization pixels with observing bands centered at 95, 150, and 220 GHz for a total of over 15,000 detectors on-sky. The higher detector count and larger focal plane footprint will yield a ~20x faster mapping speed and ~5x lower noise compared to the previous camera, SPTpol. The increased sensitivity and resolution of SPT-3G will yield high signal-to-noise maps of lensing B-modes, and, when combined with other experiments, could constrain the sum of the neutrino masses to within 0.06 eV, directly probing the neutrino mass hierarchy. I will discuss the technology of the upgraded instrument, its installation onto the telescope, and what we've learned in the few months since deployment.
Measuring the Cosmic Microwave Background B-mode Polarization with the POLARBEAR Experiment
Neil Goeckner-Wald
The odd-parity (B-mode) polarization anisotropy of the Cosmic Microwave Background (CMB) provides a unique window into the history and contents of the universe. At sub-degree scales this polarization is primarily created by the gravitational lensing of the CMB due to intervening large scale structure, while at degree scales B-mode polarization can indicate the presence of primordial gravitational waves predicted by the theory of inflation. We present an update on the analysis of data taken by the POLARBEAR experiment. We show the B-mode power spectrum inferred from two seasons of data taken between 2012 and 2014 on an effective sky area of 25 square degrees over a range of multipole moments $500 \leq \ell \leq 2100$. The measured amplitude of lensing B-modes after subtraction of galactic foregrounds is found to be $A_L = 0.60 ^{+0.26} _{-0.24} ({\rm stat}) ^{+0.00} _{-0.04}({\rm inst}) \pm 0.14 ({\rm foreground}) \pm 0.04 ({\rm multi})$ where $A_L = 1$ corresponds to $\Lambda CDM$ cosmology. In mid-2014 POLARBEAR deployed a continuously rotating half-wave plate polarization modulator and began scanning a 700 effective square degree patch to measure degree angular scale B-mode polarization. We present the status of this analysis and outline considerations for future experiments designed to operate in this configuration, including the Simons Array and Simons Observatory.
High Accuracy Calibration Sources for Cosmic Microwave Background Polarimeters: POLOCALC
Federico Nati
I will describe a novel method to measure the absolute orientation of the polarization plane of the CMB with arcsecond accuracy, which will enable unprecedented measurements for cosmology and fundamental physics. Existing and planned CMB polarization instruments looking for primordial B-mode signals need an independent, experimental method for systematics control on the absolute polarization orientation. The lack of such a method limits the accuracy of the detection of inflationary gravitational waves, the constraining power on the neutrino sector through measurements of gravitational lensing of the CMB, the possibility of detecting cosmic birefringence, and the ability to measure primordial magnetic fields. Sky signals used for calibration and direct measurements of the detector orientation cannot provide an accuracy better than 1 deg. Self-calibration methods provide better accuracy, but may be affected by foreground signals and rely heavily on model assumptions. The POLarization Orientation CALibrator for Cosmology, POLOCALC, will dramatically improve instrumental accuracy by means of an artificial calibration source flying on balloons and aerial drones. A balloon-borne calibrator will provide a far-field source for larger telescopes, while a drone will be used for tests and smaller polarimeters. POLOCALC will also allow a unique method to measure the telescopes' polarized beams. It will use microwave emitters between 40 and 150 GHz coupled to precise polarizing filters. The orientation of the source polarization plane will be registered to sky coordinates by star cameras and gyroscopes with arcsecond accuracy. Any CMB experiment observing our calibrator will be able to measure the polarization angle in absolute sky coordinates.
Status and Recent Results from the BICEP Suite of Experiments at the South Pole
Abigail Vieregg
Increasingly precise maps of the polarization of the CMB are a unique and powerful tool for understanding new physics, including inflation, the superluminal expansion of the universe during the first moments after the Big Bang. I will discuss constraints on inflation, set using the BICEP series of experiments at the South Pole (BICEP2, The Keck Array, and BICEP3). I will then discuss projections for the future of the BICEP program, including BICEP Array.
Cosmology with the HETDEX survey
Donghui Jeong
HETDEX (Hobby-Eberly Telescope Dark Energy eXperiment) is a galaxy survey targeting Lyman-alpha emitters (LAEs) at high redshifts ($1.9 < z < 3.5$). Starting from late 2017, the survey will observe about a million LAEs over ~400 sq. degrees, which corresponds to ~10 Gpc^3 in volume. The main science goal of HETDEX is to measure the angular diameter distance and the Hubble expansion rate at high redshifts (z~2.5 and z~3) to percent-level accuracy, so that we can measure the dark energy density at those redshifts with better than 3-sigma significance. In this talk, I will introduce the HETDEX survey, summarize the survey design and observing strategy, and present some results from the commissioning data.
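For context, surveys of this type use the baryon acoustic scale as a standard ruler: transverse clustering constrains the combination $D_A(z)/r_d$ while line-of-sight clustering constrains $H(z)\,r_d$, where $r_d$ is the comoving sound horizon at the drag epoch, which is how a single survey delivers both the angular diameter distance and the expansion rate.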
CMB Foregrounds: Problems, Parameterizations, and Progress
James Colin Hill
The next frontiers in cosmic microwave background (CMB) science include a detailed mapping of the CMB polarization field, with goals of detecting the inflationary B-mode signal and constructing high-fidelity maps of the matter distribution via CMB lensing reconstruction, as well as a first detection of CMB spectral distortions. At these levels of precision (~nK), Galactic and extragalactic foregrounds may be the ultimate limiting factor in deriving cosmological constraints. In this context, I will discuss recent work focused on extending CMB foreground parameterizations in a systematic, flexible way, with applications to both polarization and spectral distortion measurements. I will apply this methodology to spectral distortion detection forecasts for the Primordial Inflation Explorer, showing that high-significance measurements of the Compton-y and relativistic thermal Sunyaev-Zel'dovich signals can be expected. I will conclude with a discussion of foregrounds in CMB lensing measurements, focusing on the kinematic Sunyaev-Zel'dovich effect, as well as recent progress in B-mode delensing, in which the CMB lensing signal itself represents a foreground for inflationary B-mode constraints.
CMB Polarization B-mode Delensing with SPTpol and Herschel
W.L. Kimmy Wu
Inflation generically predicts a background of primordial gravitational waves, which generate a primordial B-mode component in the polarization of the cosmic microwave background (CMB). The measurement of such a B-mode signature would lend significant support to the paradigm of inflation and be important for development of quantum gravity theories. Observed B modes also contain a component from the gravitational lensing of primordial E modes, which can obscure the measurement of the primordial B modes. If the amplitude of primordial B modes is sufficiently small, the lensing component will need to be cleaned using a process called 'delensing.' Delensing has been studied theoretically and with simulations but has not been demonstrated with data until recently. I will present delensing of a measurement of the CMB B-mode power spectrum from SPTpol using data from Herschel as a tracer of the lensing potential. The measured B-mode power is reduced by 28 percent on sub-degree scales, in agreement with predictions from simulations, and the null hypothesis of no delensing is ruled out at 6.9 sigma. Furthermore, we develop and use a suite of realistic simulations to investigate and validate the delensing process. This work represents a crucial step on the road to detecting primordial gravitational waves.
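In the idealized limit of template delensing, a lensing tracer with cross-correlation coefficient $\rho_\ell$ to the true lensing potential removes a fraction $\rho_\ell^2$ of the lensing $B$-mode power, leaving a residual $1 - \rho_\ell^2$; the observed 28 percent reduction thus corresponds to an effective correlation $\rho \approx \sqrt{0.28} \approx 0.53$ between the Herschel tracer and the true lensing field.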
Cosmology with the Lyman-alpha Forest
David Weinberg
The Lyman-alpha forest provides a powerful probe of cosmic structure at z = 2-4, with physics that is relatively straightforward. I will discuss current constraints on dark energy from baryon acoustic oscillation measurements in the 3-d Lya forest and on neutrino masses from the 1-d Lya forest power spectrum, with measurements coming from the Baryon Oscillation Spectroscopic Survey (BOSS). I will discuss prospects and challenges ahead, with emphasis on accurate modeling and anticipated measurements from DESI.
Cosmological Results from BOSS and eBOSS
I will present results from the completed Sloan Digital Sky Survey (SDSS)-III Baryon Oscillation Spectroscopic Survey (BOSS) and initial results from the SDSS-IV extended BOSS (eBOSS). These experiments have obtained spectroscopic redshifts for nearly 2 million galaxies and quasars, allowing the creation of three-dimensional maps spanning most of the history of the Universe. In addition to other key science results, I will explain how the baryon acoustic oscillation (BAO) feature can be located in these maps and used as a "standard ruler" to allow distance measurements and powerful tests of dark energy.
Emulating galaxy clustering and galaxy-galaxy lensing into the deeply nonlinear regime
Ben Wibking
We model galaxy-galaxy lensing and clustering into nonlinear scales with a suite of N-body simulations, and we show that significantly tighter cosmological parameter constraints are possible within the ΛCDM parameter space and an HOD galaxy-biasing model when small scales are included. To include possible assembly bias effects, we introduce a two-halo environmental density dependence parameter into our model and show that fully-marginalized cosmological constraints should improve by more than a factor of two using scales $0.5 < r_p < 30$ Mpc h$^{-1}$ compared to using only scales $> 5$ Mpc h$^{-1}$. We forecast that combining clustering information from the BOSS LOWZ sample with galaxy-galaxy lensing from SDSS imaging can constrain the combined cosmological parameter $\sigma_8 \Omega_M^{0.3}$ to 2.4 per cent, and full-depth DES imaging may improve this constraint to 1 per cent (assuming 10 galaxies per square arcminute).
Extremely Efficient Cosmological Perturbation Theory with FAST-PT
Xiao Fang
Cosmological perturbation theory is a powerful tool to model observations of large-scale structure in the weakly non-linear regime. However, even at next-to-leading order, it results in computationally expensive mode-coupling integrals. In this talk, I will focus on the physics of our extremely efficient algorithm, FAST-PT. I will show how the algorithm can be applied to calculate 1-loop power spectra for several cosmological observables, including the matter density, galaxy bias, galaxy intrinsic alignments, the Ostriker-Vishniac effect, the secondary CMB polarization due to baryon flows, and redshift-space distortions. Our public code is written in Python and is easy to use and adapt to additional applications.
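The key to the speed-up is a standard identity: once each mode-coupling integral is recast as a convolution on a logarithmic $k$ grid (after decomposing $P(k)$ into power-law components via an FFT), it can be evaluated with fast Fourier transforms in $O(N \log N)$ operations rather than the $O(N^2)$ cost of direct quadrature. The following minimal numpy sketch illustrates only that underlying convolution identity; the arrays are hypothetical stand-ins, and this is not the FAST-PT source code.

import numpy as np

# Toy sketch of the FFT-convolution identity that FAST-PT exploits:
# mode-coupling integrals become convolutions on a log-k grid, and a
# convolution via FFTs costs O(N log N) instead of O(N^2).

N = 2048
rng = np.random.default_rng(0)
f = rng.normal(size=N)                  # stand-in for P(k) on a log-k grid
g = np.exp(-np.linspace(0.0, 8.0, N))   # stand-in for a mode-coupling kernel

direct = np.convolve(f, g)              # direct linear convolution, O(N^2)

# Zero-pad to length 2N so the circular (FFT) convolution reproduces the
# linear one, then multiply in Fourier space and transform back.
F = np.fft.rfft(f, 2 * N)
G = np.fft.rfft(g, 2 * N)
fast = np.fft.irfft(F * G, 2 * N)[:2 * N - 1]

assert np.allclose(direct, fast)        # same answer, far fewer operations

In FAST-PT itself the per-power-law mode-coupling integrals are handled analytically, and the FFT accelerates the sum over components; this sketch shows only why the FFT step is cheap.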
Neutrinos
Cosmology/Particle
Neutrinos in the Era of Multimessenger Astronomy
Francis Halzen
We will review the status of the observations of cosmic neutrinos and the model-independent constraints on the properties of the sources where they originate. We will emphasize the multimessenger relations connecting neutrino, gamma-ray, and cosmic-ray observations, and conclude that neutrinos are ubiquitous in the nonthermal universe, suggesting a more significant role than previously anticipated. We will discuss the prospect of observing individual point sources, as well as the "guaranteed" flux of cosmogenic neutrinos.
Radio Detection of the Highest Energy Neutrinos
Searches for ultra-high energy neutrinos ($E>10^{17}$ eV) probe the nature of the highest energy universe in a unique way and test our understanding of particle physics at energies much greater than those achievable at particle colliders. I will discuss the range of strategies used to search for the highest energy neutrinos via radio emission from neutrino-induced showers, and the current status of measurements. The future of high energy neutrino detection lies with ground-based radio arrays, which would represent an enormous leap in sensitivity and may be able to push the energy threshold for radio detection down to overlap with the energy range probed by IceCube.
Neutrino Astronomy - Fast Forward
Marek Kowalski
HU Berlin and DESY
I will review some of the open questions in high-energy neutrino astronomy raised by the observations of IceCube in concert with cosmic-ray and gamma-ray observatories, and how they can be addressed through a new generation of neutrino observatories.
Very high-energy emission from transients
Raffaella Margutti
I briefly review the status of high-energy emission from astronomical transients, concentrating on current efforts to detect high-energy emission from strongly interacting shocks of young stellar explosions (both ordinary and superluminous). In particular, I will present the case of the search for high-energy emission from the remarkable SN 2014C, which evolved from a normal Type I core-collapse SN into a strongly interacting SN of Type II, violating the traditional classification scheme of Type I vs. Type II stellar explosions.
Connections to Compact Dark Matter
Marc Kamionkowski
The elusive nature of dark matter calls for new ideas. An old but largely overlooked possibility is compact dark matter—perhaps primordial black holes—with masses comparable to the masses of stars. Null microlensing searches rule out fairly robustly masses below ten solar masses. Constraints to higher masses are, however, a bit trickier but have been the subject of considerable recent study. I will review the motivation for this exploration, current constraints (and their uncertainties), and some possible future probes. The work discussed makes connections with gravitational-wave astrophysics, high-energy astrophysics, stellar dynamics, the cosmic microwave background, and cosmology at much earlier and much later times.
Testing Cold Dark Matter with Strong Gravitational Lensing of AGN
Anna Nierenberg
Strong gravitational lensing provides a means of measuring the halo mass function down to mass regimes below those in which baryons are reliable tracers of structure. In this low-mass regime (M_vir<10^9 M_sun), the microscopic characteristics of dark matter affect the predicted abundance of dark matter halos. Strong gravitational lensing has been limited by the small number of systems which can be used to detect dark matter substructure. I will discuss the narrow-line lensing technique, which enables a significant increase in the number of systems which can be used to measure the subhalo mass function, and the projected constraint on Cold vs. Warm Dark Matter with just the current sample of known strong gravitational lenses.
Searching for dark disks using Gaia
Katelin Schutz
Astrometry in the Gaia era will give an unprecedented amount of information about the full 6D phase space distribution of the local galactic halo. In this talk, I will use an analysis of vertical motions to show how the Gaia data will be sensitive to the presence of structures including dark disks (either from novel dark matter microphysics or from baryonic dragging).
Dwarf Galaxy Population of a Nearby Star-Forming Galaxy and Implications for Dark Matter
We present our first results from a deep LBT survey of dwarf satellites of nearby star-forming galaxies outside the Local Group. We present our candidates and report the number and distribution of satellites for our first system. Our survey is sensitive deep into the ultra-faint dwarf and ultra-diffuse galaxy regime. We discuss the implications of these new observations for the dark matter halo distribution function and dark matter models.
Searching for ultra-faint galaxies in three years of data from the Dark Energy Survey
Keith Bechtol
LSST
Deep optical imaging surveys have revealed a population of extremely low luminosity and dark matter dominated galaxies orbiting the Milky Way. The total number of Milky Way satellite galaxies and the demographics of this population are still largely unknown, in part, because of complex selection effects that limit our ability to detect the lowest surface brightness galaxies. The Dark Energy Survey (DES) has now finished a complete reprocessing of data from the first three observing seasons, yielding a dataset with substantially improved depth, homogeneity, and photometric precision. We will describe progress towards a more robust statistical search for Milky Way satellite galaxies in DES data with the ultimate goal of constraining the luminosity function of the faintest galaxies as a test of galaxy formation and dark matter physics.
Black Mergers, Quiet Kilonovae, and r-Process Afterglow Donuts From Dark Matter
Yu-Dai Tsai
We identify new astrophysical signatures of NS-imploding DM, which could decisively test these hypotheses in the next few years. First, NS-imploding DM forms ≪10^{-10} solar mass black holes inside NSs, thereby converting NSs into ∼1.5 solar mass BHs. This decreases the number of NS mergers seen by LIGO/VIRGO (LV) and associated merger kilonovae seen by telescopes like DES, BlackGEM, and ZTF, instead producing a population of "black mergers" containing ∼1.5 solar mass black holes. Second, DM-induced NS implosions create a new kind of kilonova that lacks a detectable accompanying gravitational signal. Using DES data and the Milky Way's r-process abundance, we set bounds on these DM-initiated "quiet kilonovae." Third, the spatial distribution of merger kilonovae, quiet kilonovae, and fast radio bursts in galaxies can be used to detect dark matter. NS-imploding DM destroys most NSs at the centers of mature disc galaxies, so NS merger kilonovae would appear mostly in a donut at large radii. We find that as few as ten NS merger kilonova events, located to ∼1 kpc precision, could validate or exclude DM-induced NS implosions at 2σ confidence, exploring DM-nucleon cross-sections over an order of magnitude below current limits. Similarly, NS-imploding dark matter as the source of fast radio bursts can be tested at 2σ confidence once 20 bursts are located in host galaxies by radio arrays like CHIME and HIRAX. URL: https://arxiv.org/abs/1706.00001
Dark Fires in the Sky: Model-Independent Dark Matter Detection via Kinetic Heating of Neutron Stars
Nirmal Raj
A largely model-independent probe of dark matter-nucleon interactions is proposed. Accelerated by gravity to relativistic speeds, local dark matter scattering against old neutron stars deposits kinetic energy that heats them to infrared blackbody temperatures. The resulting radiation could be detected by next-generation telescopes such as James Webb, the Thirty Meter Telescope, and the European Extremely Large Telescope. While underground direct detection searches have little or no sensitivity to dark matter with sub-GeV masses, higher-than-weak-scale masses, scattering below neutrino floors, spin-dependent scattering well below nuclear cross-sections, pseudoscalar-mediated scattering, and inelastic scattering for inter-state transitions exceeding O(100 keV), dark kinetic heating of neutron stars advances these frontiers by orders of magnitude and should vastly complement these searches. Popular dark matter candidates previously thought challenging to probe, such as thermal Higgsinos, may be discovered.
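Schematically, and only as an order-of-magnitude sketch under simple assumptions (not the full calculation presented in the talk): dark matter falling onto a neutron star of mass $M$ and radius $R$ arrives at the escape speed $v_{\rm esc} = \sqrt{2GM/R} \approx 0.6c$, so a capture rate $\dot{M}$ deposits energy at a rate $L \simeq \dot{M}(\gamma_{\rm esc} - 1)c^2$; re-emitted as a blackbody, this gives $T \simeq [\dot{M}(\gamma_{\rm esc} - 1)c^2 / (4\pi\sigma R^2)]^{1/4}$, which falls in the infrared for typical local dark matter densities.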
LUX, and the Combatting of the WIMP Lamppost Effect
Matthew Szydagis
New results from the Large Underground Xenon (LUX) detector, a 100-kg-scale, two-phase xenon direct dark matter search experiment, will be shared. Dark matter, the missing ~25% of the mass-energy content of the universe, is sought in new ways: effective field theory operators extend the search to higher-mass Weakly Interacting Massive Particles (WIMPs) and to spin-dependent interactions, while electron rather than nuclear recoils are used to seek axions. In addition, two-neutrino double electron capture of 124Xe will be explored. Lastly, both old and new calibrations and position and energy reconstruction techniques will be reviewed, in the context of the new background and signal models being developed by LUX, which will be expanded to higher energies and will include pulse-shape discrimination.
The Next-Generation Dark Matter Project, LUX-ZEPLIN
University at Albany, SUNY
Xenon-based dark matter experiments have been leading the field of direct detection for a decade now, as realized most recently by the PandaX, LUX, and XENON1T results, which set increasingly stringent limits on WIMP scattering. The near-future construction of LZ, the 10-ton-scale, next-generation successor to LUX and ZEPLIN, will be discussed here. We plan to achieve a baseline sensitivity of 2.3 x 10^-48 cm^2 for a WIMP of 40 GeV/c^2 rest mass, with a 5.6-ton fiducial mass in a two-phase xenon time-projection chamber. LZ has recently passed its final CD-2/3 approvals from the DOE and has unveiled its design details, background estimates, and projected sensitivities for different types of dark matter in its Technical Design Report. These will all be presented.
First result of XENON1T
Lin Qing
Understanding the properties of the dark matter particle is a fundamental problem in particle physics and cosmology. The search for dark matter particles scattering off nuclear targets using ultra-low-background detectors is one of the most promising technologies for deciphering the nature of dark matter. The XENON1T experiment, a dual-phase detector with ~2.0 tons of xenon running at the Gran Sasso Laboratory in Italy, is designed to lead the field of dark matter direct detection. Since November 2016, the XENON1T detector has been continuously taking data, with a background rate more than one order of magnitude lower than that of any current-generation dark matter search experiment. In this talk, I will present the first dark matter search results from XENON1T. Details about the XENON1T detector as well as the data analysis techniques will also be covered.
Dark matter search results from the PandaX-II experiment
Yong Yang
Shanghai Jiao Tong University CN
The PandaX project consists of a series of xenon-based experiments located at the China Jinping Underground Laboratory. The current experiment, PandaX-II, is a direct dark matter search experiment with a 500 kg-scale liquid xenon dual-phase time projection chamber. PandaX-II started physics data taking in 2016. In this talk we report the latest results and the current status of the PandaX-II experiment.
Direct dark matter search with the CRESST-III experiment
Andrea Munster
Technical University of Munich TUM
The CRESST (Cryogenic Rare Event Search with Superconducting Thermometers) experiment aims at the direct detection of dark matter particles via their elastic scattering off nuclei. The target material consists of scintillating CaWO$_4$ single crystals operated as cryogenic detectors at a temperature of ~10 mK. For several years, these crystals have been successfully produced within the collaboration at the Technical University of Munich (TUM), and a significant improvement in radiopurity has been achieved. In CRESST-II Phase 2, an extended physics run between 2013 and 2015, the experiment demonstrated its leading sensitivity in the field of direct searches for dark matter masses below ~1.7 GeV/c$^2$. A further detector optimization for the search for low-mass dark matter particles was performed for CRESST-III, whose Phase 1 started taking data in summer 2016. In this contribution the performance of the CRESST-III detectors as well as preliminary results will be presented. Requirements and perspectives for the upcoming CRESST-III Phase 2, in particular with respect to radiopurity, will be discussed.
Dark matter searches with the PICO bubble chambers
Scott Fallows
The PICO collaboration uses superheated fluid detectors to attempt to directly detect interactions between dark matter particles and ordinary matter. These detectors can be operated in conditions under which they are insensitive to gamma and beta radiation, typically the dominant backgrounds for direct dark matter searches. The PICO-60 bubble chamber is located 2 km underground at SNOLAB in Sudbury, Ontario, where neutron backgrounds from cosmic rays are strongly suppressed. These backgrounds are further suppressed by a water tank surrounding the chamber, and by the selection and clean handling of very radiopure components. Piezoacoustic transducers detect the sound of bubble nucleation, which can be used to distinguish between nuclear recoils and U/Th chain alpha decays, the predominant background in superheated fluid dark matter searches. During its first physics run the PICO-60 C$_3$F$_8$ bubble chamber was operated at a thermodynamic energy threshold of 3.3 keV, acquiring a background-free WIMP-search exposure of 1167 kg-days. A similar exposure was then acquired at 2.4 keV and is currently under analysis. Its successor, PICO-40L, will begin commissioning in the latter part of 2017. This detector has an inverted vertical orientation, intended to eliminate potential backgrounds caused by water droplets, particulates, and surface tension effects in previous chambers. It is intended to act as a prototype and proof-of-principle for the proposed ton-scale bubble chamber PICO-500.
Search for hidden-photon Dark Matter with FUNK
Ralph Engel
KIT - Karlsruhe Institute of Technology
Many extensions of the Standard Model of particle physics predict a hidden sector with at least one new U(1) symmetry, giving rise to hidden photons. If produced non-thermally in the early universe, these hidden photons are candidate particles for cold dark matter. Hidden photons are expected to mix kinetically with regular photons. If hidden photons pass through a conducting surface, a tiny electromagnetic signal is produced. Due to the kinematics of the process, these photons are emitted almost perpendicular to the surface. The corresponding photon frequency is given by the mass of the hidden photon. In this contribution we present results of a search for hidden photons in the mass range from 2 to 8 eV using a spherical metallic mirror of 14 m^2 area. We will also discuss future dark matter searches in the eV and sub-eV range using different detectors for electromagnetic radiation.
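For orientation, the emitted photon energy equals the hidden-photon rest-mass energy, so the 2-8 eV search window corresponds to wavelengths $\lambda = hc/(m_{\gamma'}c^2) \approx 620$-$155$ nm, i.e. ordinary optical to ultraviolet light, which is why the search can be carried out with a metallic mirror and a conventional photon detector.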
The Rise of the Leptons: Pulsar Emission Dominates the TeV Gamma-Ray Sky
Tim Linden
Recent HAWC observations have found extended TeV emission coincident with the Geminga and Monogem pulsars. In this talk, I will show that these detections have significant implications for our understanding of pulsar emission. The isotropic nature of this emission provides a new avenue for detecting nearby pulsars with radio beams that are not oriented towards Earth. Additionally, I will show that the total emission from all unresolved pulsars produces the majority of the TeV gamma-ray flux observed from the Milky Way.
Are Starburst Galaxies Proton Calorimeters?
Xilu Wang
Several starburst galaxies have been observed in the GeV and TeV bands; in this regime, gamma-rays are mainly produced by cosmic-ray interactions with the interstellar medium ($p_{\rm cr}p_{\rm ism} \to \pi^{0} \to \gamma\gamma$). Furthermore, the dense environments of starbursts may act as proton "calorimeters" where collisions dominate losses, so that a substantial fraction of cosmic-ray energy input is emitted in gamma rays. Here we build a one-zone, "thick-target" model implementing calorimetry and placing a firm upper bound on gamma-ray emission from cosmic-ray interactions. The model assumes that cosmic rays are accelerated by supernovae, and all suffer nuclear interactions rather than escape. Our model has only two free parameters: the cosmic-ray proton acceleration energy per supernova $\epsilon_{\rm cr}$, and the proton injection spectral index $s$. We calculate the pionic gamma-ray emission from 10 MeV to 10 TeV, and derive thick-target parameters for six galaxies with *Fermi*, *H.E.S.S.*, and/or *VERITAS* data. Our model provides good fits for M82 and NGC 253, and yields $\epsilon_{\rm cr}$ and $s$ values suggesting that supernova cosmic-ray acceleration is similar in starbursts and in our Galaxy. We find that these starbursts are indeed nearly if not fully proton calorimeters. For NGC 4945 and NGC 1068, the models are consistent with calorimetry but are less well-constrained due to the lack of TeV data. However, the Circinus galaxy and the ultraluminous infrared galaxy Arp 220 exceed our pionic upper limit; possible explanations will be discussed.
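The calorimetric bound can be written schematically (a hedged sketch of the scaling, with $R_{\rm SN}$ denoting the supernova rate and $d$ the distance): if essentially all proton energy goes into pions and roughly one third of that emerges in $\pi^0 \to \gamma\gamma$ photons, then $F_\gamma \lesssim \frac{1}{3}\,\epsilon_{\rm cr}\,R_{\rm SN}/(4\pi d^2)$, which is why sources exceeding this limit, as Circinus and Arp 220 appear to, require an additional explanation.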
The Multiwavelength Properties of Arp 220
Tova Yoast-Hull
Canadian Institute for Theoretical Astrophysics
When analyzed together, radio and gamma-ray observations make a very powerful tool for studying and diagnosing extragalactic cosmic-ray populations. The recent gamma-ray detection of the ultra-luminous galaxy Arp 220 is well above past predictions, indicating a very large cosmic-ray population. Whether star formation or an active galactic nucleus is the source of the additional cosmic rays, there is a clear excess of gamma-ray emission in comparison to the observed radio flux. Here, we analyze the amount of energy necessary to power the observed gamma-ray flux and compare it with traditional tracers of the star-formation rate. We also explore possible mechanisms for lowering the corresponding radio flux and check for consistency with observed properties of the interstellar medium.
A hadronic origin of the high-energy gamma rays from LMC
Qingwen Tang
Nanchang University & CCAPP
It has been suggested that the high-energy gamma-ray emission ($>$100 MeV) of nearby star-forming galaxies may be produced predominantly by cosmic rays colliding with the interstellar medium, through neutral pion decay. The pion-decay mechanism predicts a unique spectral signature in the gamma-ray spectrum, characterized by a fast-rising spectrum and a spectral break below a few hundred MeV. We report here evidence for a spectral break around 500 MeV in the disk emission of the Large Magellanic Cloud (LMC), found in an analysis of the gamma-ray data extending down to 60 MeV observed by the {\it Fermi} Large Area Telescope. The break is well consistent with the pion-decay model for the gamma-ray emission, although leptonic models, such as electron bremsstrahlung emission, cannot be ruled out completely.
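The expected signature follows from decay kinematics: each photon from $\pi^0$ decay carries $E_\gamma = m_{\pi^0} c^2/2 \approx 67.5$ MeV in the pion rest frame, and for any isotropic pion momentum distribution the resulting photon spectrum is symmetric about this energy in $\log E_\gamma$, forcing a steep rise and a break at low energies, consistent with the feature reported here.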
Connecting Superluminous Supernovae, Gamma-ray Bursts, and Fast Radio Bursts to the Birth of Millisecond Magnetars
Ben Margalit
The repeating fast radio burst (FRB) 121102 was recently localized to a star-forming region in a dwarf host galaxy remarkably similar to those of superluminous supernovae (SLSNe) and long gamma-ray bursts (GRB), both of which were previously proposed to be powered by the birth of a millisecond magnetar. We demonstrate how a single magnetar engine can power both a SLSN and GRB, depending on the engine lifetime and the misalignment angle between the magnetar's rotation and magnetic axes. We also show that the production of a successful relativistic jet may be ubiquitous in engine-powered SLSNe, and describe several observational tests of this connection, including orphan radio afterglows and early optical signatures in the SLSN light curve. Finally, we describe recent results on the time-dependent ionization structure of the expanding supernova ejecta on timescales of decades following the explosion, due to photo-ionization by the nascent 'magnetar wind nebula'. We precisely quantify when a fast radio burst can escape through the ejecta; create probability distributions of the local dispersion measure; and make predictions for the synchrotron radio emission from the magnetar wind nebula (analogous to the quiescent radio counterpart to FRB 121102).
RoboPol: Revealing the connection between gamma-ray activity in blazars and optopolarimetric rotations
Ioannis Myserlis
Max Planck Institute for Radio Astronomy
Optical synchrotron emission from blazars is significantly polarized, and the polarization probes the magnetic field structure in the jet. Rotations of the polarization angle in blazars reveal important information about the evolution of disturbances responsible for blazar flares. Early results indicated that such rotations might be coincident with unusual gamma-ray activity of such sources. The RoboPol program for the polarimetric monitoring of statistically complete samples of blazars was developed in 2013 to systematically study this class of events and their possible connection with gamma-ray flares. RoboPol uses an innovative polarimeter installed at the 1.3m telescope of the University of Crete, and it is a collaboration between the University of Crete, Caltech, the Max Planck Institute for Radio Astronomy, the Inter-University Centre for Astronomy and Astrophysics in India, and the Nicolaus Copernicus University in Poland. I will review the results of the 4-year aggressive, high-cadence monitoring of gamma-ray-loud blazars with RoboPol, including the classification of the optopolarimetric properties of gamma-ray-loud blazars, the statistical properties of polarization rotations, and their now-confirmed relation to gamma-ray activity in blazar jets.
Gamma-ray and optical polarimetric monitoring of GeV bright blazars
Ryosuke Itoh
Tokyo Institute of Technology
Blazars are thought to possess a relativistic jet that points toward the Earth, and the effect of relativistic beaming enhances their apparent brightness. Although numerous measurements have been performed, the mechanisms behind jet variability, creation, and composition are still debated. We performed simultaneous gamma-ray and optical photopolarimetry observations of 45 blazars with Fermi/LAT and the Kanata telescope between 2008 July and 2014 December to investigate the mechanisms of variability and to search for a basic relation between the several subclasses of blazars. We found that the correlation between gamma-ray and optical flux might be related to the gamma-ray luminosity, and that the maximum polarization degree might be related to the gamma-ray luminosity or to the ratio of gamma-ray to optical flux. These results imply that low gamma-ray luminosity blazars emit from multiple regions.
Blazar Radio and Optical Survey (BROS): A New Catalog of Blazar Candidates
Yasuyuki Tanaka
By using deep radio source catalogs currently available, we present a new blazar candidate catalog, BROS, which includes 56314 sources located at declination $\delta > -40^{\circ}$ and outside the Galactic plane ($|b| > 10^{\circ}$). We selected flat-spectrum radio sources with $\alpha > -0.5$ (where $\alpha$ is defined by $F_{\nu} \propto \nu^{\alpha}$) from the 0.15 GHz TGSS and 1.4 GHz NVSS catalogs. We then identified their optical counterparts by cross-matching with the Pan-STARRS1 data. Color-color and color-magnitude plots for the selected flat-spectrum radio sources clearly show two populations: a "quasar-like" and an "elliptical-galaxy-like" sequence. We emphasize that the latter population emerges here for the first time; it is missed by the previous CRATES catalog because of that catalog's higher radio flux threshold. We find that the color-magnitude relation of nearby bright elliptical galaxies up to z=0.3 follows the "elliptical-galaxy-like" sequence. The index of the logN-logS distribution for this sample is $1.44\pm0.06$, supporting the "nearby" interpretation because the measurement is consistent with the value for a static Euclidean universe. The BROS catalog is useful for searching for electromagnetic counterparts of ultra-high-energy cosmic rays as well as the PeV neutrinos recently detected by IceCube, making it a powerful catalog in the era of multi-messenger astronomy. We also emphasize that BROS includes nearby ($z \leq 0.3$) BL Lac objects, a fraction of which should be TeV emitters detectable by the future Cherenkov Telescope Array. We will make this catalog available once it is published.
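The Euclidean benchmark follows from a uniform source density: the number of sources within distance $d$ scales as $N \propto d^3$ while flux scales as $S \propto d^{-2}$, giving $N(>S) \propto S^{-3/2}$; the measured index of $1.44 \pm 0.06$ is therefore consistent with a nearby, non-evolving population.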
Challenges in reconciling observations and theory of the brightest high-energy flare ever of 3C 279
Eugenio Bottacini
University of Padova
Recent high-energy missions have made it possible to keep watch over quasars in flaring states, which provide deep insights into the engine powered by supermassive black holes. However, catching a quasar in a very bright flaring state is not easy and requires long surveys; the observation of such flaring events therefore represents a goldmine for theoretical studies. Such a flaring event was captured by the INTEGRAL mission in June 2015 while it was performing its deepest extragalactic survey to date, catching the prominent quasar 3C 279 in its brightest flare ever recorded at gamma-ray energies. The flare was simultaneously recorded by the Fermi-LAT and Swift missions, and by observations ranging from the UV through the optical to the near-IR bands. The resulting snapshot of the broad spectral energy distribution of the flare has been modeled in the context of one-zone leptonic and lepto-hadronic radiation-transfer models, constraining the individual emission components. I will discuss the results and the challenges faced in trying to reconcile these observations with theory. Implications for the detection of VHE gamma rays from such a flare by atmospheric Cherenkov telescopes will also be discussed.
Flat-Spectrum Radio-Quasars with H.E.S.S. II
Matteo Cerruti
CNRS, LPNHE
With the installation of a fifth 28-m diameter telescope in the center of the array, the H.E.S.S. telescope array is now in its phase II, characterized by a low energy threshold below 100 GeV. The low-energy window is particularly appealing for extragalactic gamma-ray astronomy, because it allows the study of more distant sources, as well as sources characterized by softer spectra. In particular, flat-spectrum radio-quasars (FSRQs), which dominate the Fermi-LAT extragalactic sky but are rarer in the very-high-energy (VHE) gamma-ray band due to their low-frequency SED peak, are among the most interesting targets for H.E.S.S. II observations. In this contribution I will review some recent results from H.E.S.S. II observations on FSRQs, including the discovery of VHE emission from PKS 0736+017, the detection of 3C 279 during the 2015 outburst, and the detection of PKS 1510-089 during the 2016 outburst.
High-Energy Gamma Rays and Neutrinos from Nearby Radio Galaxies
Carlos Blanco
University of Chicago, Department of Physics
Radio galaxies are the most likely class of sources for the diffuse flux of high-energy neutrinos reported by the IceCube Collaboration, as suggested by multi-messenger data. Here, the gamma-ray spectrum from four nearby radio galaxies (Centaurus A, PKS 0625-35, NGC 1275, and IC 310) is analyzed in order to constrain the spectral shape and intensity of their respective injected emission. Our analysis handles gamma-ray propagation through galactic and extragalactic environments, accounting for the effects of electromagnetic cascades. Assuming interactions of cosmic-ray protons with gas are the origin of this gamma-ray emission, we calculate the resulting neutrino flux predicted from each of these sources. While the predicted neutrino fluxes are consistent with constraints published by the IceCube and ANTARES Collaborations, they consistently fall within an order of magnitude below the current point source sensitivity. The prospects appear very encouraging for the future detection of neutrino emission from the nearest radio galaxies.
The Compton Spectrometer and Imager: Results from the 2016 Super-Pressure Balloon Campaign
John Tomsick
The Compton Spectrometer and Imager (COSI) is a 0.2-5 MeV Compton telescope capable of imaging, spectroscopy, and polarimetry of astrophysical sources. Such capabilities are made possible by COSI's twelve germanium cross-strip detectors, which provide high efficiency, high-resolution spectroscopy, and precise 3D positioning of photon interactions. In May 2016, COSI took flight from Wanaka, New Zealand on a NASA super-pressure balloon. For 46 days, COSI floated at a nominal altitude of 33.5 km, continually telemetering science data in real time. The payload made a safe landing in Peru, and the hard drives containing the full raw data set were recovered. Analysis efforts have resulted in detections of various sources such as the Crab Nebula, Cyg X-1, Cen A, Galactic Center e+e- annihilation, and the long-duration gamma-ray burst GRB 160530A. In this presentation, I will provide an overview of our main results, which include the measured polarization of GRB 160530A and our image of the Galactic Center at 511 keV. Additionally, I will summarize results pertaining to our detections of the Crab Nebula, Cyg X-1, and Cen A.
Spectral and Temporal Behaviour of Mrk 501 in Gamma Rays
Nachiketa Chakraborty
Max-Planck-Institut fuer Kernphysik
The blazar Mrk 501 is a well-known BL Lac-type object whose very-high-energy photons interact with the extragalactic background light (EBL) despite its modest redshift, and it is highly variable across wavelengths, down to timescales of a few minutes at TeV energies. This makes it an excellent laboratory for studying particle acceleration and radiative emission processes in jets through the spectral and temporal properties of the observed emission. It also allows us to constrain the EBL and Lorentz invariance violation (LIV). H.E.S.S. observed Mrk 501 during some of its active states at the highest energies in 2014, triggered by FACT, which continuously monitors the source and profiles its long-term TeV behaviour, as Fermi-LAT does at GeV energies. Here, we present the temporal and spectral behaviour of Mrk 501 at gamma-ray energies. We compute the gamma-ray power spectral density as well as the energy spectrum for the highest TeV flux state observed by H.E.S.S. and FACT in June 2014, which shows rapid variability, and contrast it with the long-term average behaviour. We also derive strong constraints on the LIV scale via the non-detection of EBL opacity modifications and from time-of-flight studies of the H.E.S.S. flare data.
Prompt neutrinos from heavy flavors
Mary Hall Reno
Neutrinos from charmed hadrons produced by cosmic-ray interactions with air nuclei are the main background to high-energy astrophysical neutrino flux measurements. Recent evaluations of the prompt neutrino flux from charm will be reviewed, including approaches using next-to-leading-order QCD, the dipole model, and $k_T$ factorization. Nuclear corrections and the impact of multi-component models of the incident cosmic-ray flux will be discussed.
New Measurement of Atmospheric Neutrino Oscillations with IceCube
Tyce DeYoung
The DeepCore infill array of the IceCube Neutrino Observatory enables observations of atmospheric neutrinos with energies as low as 5 GeV. Using a set of 40,000 neutrino events with energies ranging from 5.6 - 56 GeV recorded during three years of DeepCore operation, we measure the atmospheric oscillation parameters $\theta_{23}$ and $\Delta m^2_{32}$ with precision competitive with long-baseline neutrino experiments, by observing distortions in the neutrino energy-zenith angle distribution. Our measurements are consistent with those made at lower energies, and prefer a value of $\theta_{23}$ close to maximal.
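The measurement principle is the two-flavor survival probability $P(\nu_\mu \to \nu_\mu) \simeq 1 - \sin^2 2\theta_{23}\,\sin^2\!\big(1.27\,\Delta m^2_{32}[{\rm eV}^2]\,L[{\rm km}]/E[{\rm GeV}]\big)$: for neutrinos crossing the Earth's diameter ($L \approx 1.3 \times 10^4$ km) with $\Delta m^2_{32} \approx 2.5 \times 10^{-3}\,{\rm eV}^2$, the first oscillation maximum falls near $E \approx 25$ GeV, squarely within DeepCore's energy range.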
Solar Atmospheric Neutrinos: A New Neutrino Floor for Dark Matter Searches
Kenny Chun Yu Ng
Weizmann Institute of Science
As is well known, dark matter direct detection experiments will ultimately be limited by a "neutrino floor," due to the scattering of nuclei by MeV neutrinos from, e.g., nuclear fusion in the Sun. Here we point out the existence of a new "neutrino floor" that will similarly limit indirect detection with the Sun, due to high-energy neutrinos from cosmic-ray interactions with the solar atmosphere. We have two key findings. First, solar atmospheric neutrinos ≲1 TeV cause a sensitivity floor for standard WIMP scenarios, for which higher-energy neutrinos are absorbed in the Sun. This floor will be reached once the present sensitivity is improved by just one order of magnitude. Second, for neutrinos ≳1 TeV, which can be isolated by muon energy loss rate, solar atmospheric neutrinos should soon be detectable in IceCube. Discovery will help probe the complicated effects of solar magnetic fields on cosmic rays. These events will also be backgrounds to WIMP scenarios with long-lived mediators, for which higher-energy neutrinos can escape from the Sun.
Search for Solar Atmospheric Neutrinos with IceCube
Carsten Rott
Sungkyunkwan University
High-energy neutrinos are expected to be produced in cosmic-ray interactions with the solar atmosphere. The resulting neutrino flux is expected to offer insights into cosmic-ray transport in the inner solar system and into solar magnetic fields. Beyond the high theoretical interest in solar atmospheric neutrinos, an observed signal would constitute the first high-energy neutrino point source and would be valuable for calibration. Preliminary selection criteria and optimization studies will be discussed. We present sensitivities and the prospects for observing the solar atmospheric neutrino signal with IceCube data. The interplay with ongoing dark matter searches from the Sun will be discussed.
High energy neutrinos from the Sun
Carlos Arguelles
In this talk I will discuss the production of high-energy neutrinos from interactions of cosmic rays with the solar atmosphere. The production of solar atmospheric neutrinos has been previously considered in the literature both as a potential source of high-energy neutrinos and as an irreducible background for dark matter searches. In our new calculation we estimate the uncertainties that arise from the solar atmosphere and hadronic interaction models. We further improve on previous calculations by considering neutrino oscillations in the propagation of neutrinos through the Sun. We predict that current event selections should observe ~1 event per year in detectors such as IceCube or the proposed Mediterranean neutrino observatory, KM3NeT. Finally, we put this rate in the context of indirect dark matter searches from the Sun by calculating the high-energy solar neutrino floor, which is analogous to the low-energy solar neutrino floor in dark matter direct detection experiments.
Measurement of neutrino events above 1 TeV with contained vertices in IceCube
Nancy Wandkowsky
IceCube is a cubic-kilometer-scale detector in the deep Antarctic ice. Its 5160 deployed digital optical modules enable the unambiguous detection of astrophysical neutrinos using events starting inside the detector with deposited energies above 60 TeV. Lowering the energy threshold down to 1 TeV, while maintaining a >90% neutrino-pure sample, greatly increases statistics. We will present the latest results from this data sample, containing more than 7000 all-sky neutrino-induced cascades and tracks. The data set allows a more precise measurement of the astrophysical spectrum, down to the atmospherically dominated region. Astrophysical models involving a second power-law component will be tested. Furthermore, we will present improved limits on the contribution of neutrinos from charmed mesons to the atmospheric flux. Having a significant contribution of both tracks and cascades in the same data sample allows us to constrain the neutrino flavor space.
High Energy Astrophysical Neutrino Flux Measurement Using Neutrino-induced Cascades Observed in 4 Years of IceCube Data
Hans Niederhausen
We report a new measurement of the diffuse flux of high-energy extraterrestrial neutrinos from the whole sky with energies of O(1 TeV) and above, which is predominantly sensitive to electron and tau flavors. We analyzed 4 years of IceCube data recorded from 2012 to 2015, focusing on neutrino-induced cascades. Cascades provide good energy resolution and have a lower atmospheric neutrino background contribution than muon neutrinos. A new event selection has been developed combining straight cuts with gradient-boosted multi-class decision trees to isolate cascades with increased efficiency over previous methods, resulting in the largest cascade sample obtained by IceCube to date. Our methods achieve a neutrino purity of better than 90%. At energies above 20 TeV the contribution of muon neutrinos to the total number of expected astrophysical neutrinos in this cascade sample is estimated to be 10%. At these energies the extraterrestrial component dominates the observed spectrum and is well described by a single, unbroken power law. We will discuss preliminary fit results and study the possibility of a spectral hardening at the upper end of the spectrum by allowing a second power-law component to enter our flux model.
Searches for astrophysical sources of neutrinos using cascade events in IceCube
Mike Richman
The IceCube neutrino observatory has observed a flux of high-energy astrophysical neutrinos using both track events from muon neutrino interactions and cascade events from interactions of all neutrino flavors. Searches for astrophysical neutrino sources have focused on track events due to the significantly better angular resolution of track reconstructions. To date, no such sources have been confirmed. In this talk we turn our attention to complementary and statistically-independent source searches using cascade events with deposited energies as small as 1 TeV. Compared to the classic approach using tracks, the cascade channel offers improved sensitivity to sources in the southern sky, especially if the emission is spatially extended or follows a soft energy spectrum. We will show results from a first search using 263 cascades collected from May 2010 to May 2012, as well as projected sensitivity estimates for an upcoming analysis of six years of data.
A new search for multi-flavour PeV neutrinos with IceCube
Chiba University
PeV neutrinos detected by the IceCube observatory are the highest-energy extraterrestrial elementary particles ever seen on Earth. Better knowledge of PeV neutrinos, such as the observation of a spectral cut-off, would help us understand the possible connection to the sources of ultra-high-energy cosmic rays. A new selection has been developed for PeV neutrinos which are not selected by the existing high-energy searches. The new channel has been optimised for partially-contained cascades generated via the Glashow resonance. It has then been combined with samples of high-energy starting events and extremely-high-energy tracks to determine the characteristics of the high-energy end of the diffuse spectrum. In this talk, results on the cut-off energy will be shown and constraints on scenarios which predict cosmogenic PeV neutrinos will be discussed.
Improving the angular resolution in IceCube cascade reconstruction
Tianlu Yuan
UW Madison
Neutrino interactions occurring in IceCube require accurate reconstruction techniques to quantify the neutrino's energy and arrival direction. At the highest energies, the angular resolution of IceCube is limited primarily by ice property uncertainties. Previous studies have shown that a perfect knowledge of the ice may improve cascade angular resolutions by a factor of two or more. We present a new method for evaluating the effect of ice model uncertainties and explore several channels by which the reconstructed angular resolution may be improved.
A new IceCube starting track event selection and realtime stream
Sarah Mancina
University of Wisconsin -- Madison
IceCube analyses which look for an astrophysical neutrino signal in the southern sky face a large background of atmospheric neutrinos and muons created in cosmic-ray air showers. It was found earlier that rejecting events that deposit energy in the outer region of the detector reduces not only the muon background, but also the atmospheric neutrino background in the southern sky, due to the atmospheric self-veto effect. However, using outer-layer fiducial cuts reduces the size of the detector and leads to a selection optimized for cascades. In this event sample, we select muon tracks which have a starting vertex contained inside the detector. Using the improved directional reconstruction from muons, the selection determines a veto region behind the starting vertex for each event and calculates the likelihood of not observing a hit on the IceCube digital optical modules (DOMs) in the veto region. These cuts give a selection with high astrophysical neutrino purity above 10 TeV in the southern sky. We will present our most recent results from our neutrino point source and diffuse flux searches and provide a first look at the realtime event stream derived from the selection.
IceCube Search for Galactic Neutrino Sources using the HAWC 2HWC Catalog
Joshua Wood
University of Wisconsin, Madison
We present prospects for IceCube to detect neutrino emission from the Galactic TeV gamma-ray sources outlined in the HAWC Observatory's recently published 2HWC catalog. We do this by evaluating the sensitivity of two analyses using IceCube data. The first is a stacked analysis of promising point sources from the catalog, chosen based on their high TeV photon fluxes and lack of association with known pulsar wind nebulae. Here we assume the highest-energy photons originate from the decay of neutral pions produced by hadronic interactions at each source. The second is a template analysis using the full Galactic plane morphology measured by HAWC. This morphology should trace neutrino emission if pion decay predominantly occurs in the environment surrounding identified HAWC sources.
Unexpectedly bright: high energy emission from mixed morphology SNRs
Katie Auchettl
Supernova remnants (SNRs) are the long-lived structures that result from the explosive end of a massive star. The expanding shock front produced by the supernova explosion heats stellar ejecta and swept-up ISM to X-ray-emitting temperatures, and SNRs are sites in which populations of relativistic particles can be efficiently accelerated up to the knee of the cosmic-ray spectrum. For SNRs born in regions of high density, the interaction between the SNR and the dense molecular material has a profound effect on the morphology and emission properties of these objects. Until recently, most studies have focused on individual sources; in this talk, however, I will focus on the importance of systematically studying the properties of these objects. In particular, I will highlight current investigations into the high-energy emission of these remnants using X-ray and gamma-ray satellites, and discuss the insights these studies have provided with regard to the properties of their progenitors and surrounding environments, as well as their ability to accelerate particles.
Complexities of a Mid-Life Crush: A Study of the Pulsar Wind Nebula Vela X
Patrick Slane
The Vela supernova remnant is a canonical example of a middle-aged composite system in which the SNR reverse shock has disrupted the central pulsar wind nebula, Vela X. Due to a non-uniform ambient medium, the shock has propagated asymmetrically, crushing the northern part of the PWN. The result is a complex structure characterized by nonthermal X-rays from the pulsar wind, thermal X-rays from ejecta mixed into the nebula, and gamma-ray emission in both the GeV and TeV bands, for which the morphology shows striking differences. Here we report on an XMM Large Project to study Vela X. We study variations in the spectral index of the nonthermal X-ray emission, along with the distribution and thermal properties of the shocked ejecta, and correlate these with the gamma-ray properties of the PWN. We evaluate these properties using hydrodynamical simulations in the context of the evolution of PWNe in composite SNRs, with a view to the ultimate fate of the relativistic particles produced in these systems.
Studies of pulsar wind nebulae in TeV gamma-rays with H.E.S.S.
Yves Gallant
LUPM, CNRS/IN2P3, U. de Montpellier
The survey of the Galactic plane in TeV gamma-rays by H.E.S.S. allows a systematic study of the population of pulsar wind nebulae (PWNe) in this energy domain. We find a mild trend of decreasing TeV luminosity with age, or decreasing spin-down power, as well as a trend of increasing size with age. Older TeV PWNe are generally displaced from the pulsar position, with offsets larger than can plausibly be explained by pulsar proper motion, which could be due to PWN interaction with the reverse shock in an asymmetric environment. The observed gamma-ray spectra can be ascribed to inverse Compton scattering of ambient photons, in which the Galactic far-infrared background often predominates; this may explain why luminous TeV PWNe are more readily detected in the inner spiral arms than in the outer Galaxy. We also present a more detailed morphological study of the TeV emission from the PWN in the composite supernova remnant MSH 15-52. We compare its gamma-ray morphology with that in synchrotron emission, obtained from archival X-ray observations, and discuss the implications for the magnetic field in the nebula. We also discuss potential extended gamma-ray emission beyond the X-ray PWN. Such an extended morphological component could come from electrons and positrons which have escaped the PWN, which would have implications for scenarios of PWNe as sources of leptonic cosmic rays.
New insights on particle acceleration at non-relativistic shocks
Damiano Caprioli
I present the results of large kinetic simulations of particle acceleration at non-relativistic collisionless shocks, which allow an ab-initio investigation of diffusive shock acceleration (DSA) at the blast waves of supernova remnants, the most prominent sources of Galactic cosmic rays (GCRs). Ion acceleration efficiency and magnetic field amplification are obtained as a function of the shock properties and compared with theoretical predictions and multi-wavelength observations of individual remnants. In particular, I will focus on two new results: 1) the origin of the chemical enhancement of heavy elements observed in GCRs as naturally due to DSA, and 2) the re-acceleration of energetic particle "seeds" and its phenomenological implications.
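For reference, the test-particle benchmark these simulations are measured against: DSA at a shock with compression ratio $r$ yields $f(p) \propto p^{-3r/(r-1)}$, so a strong unmodified shock ($r = 4$) gives $f(p) \propto p^{-4}$, equivalent to $N(E) \propto E^{-2}$ for relativistic particles; efficient ion acceleration and amplified magnetic fields modify this classic result, and the kinetic simulations quantify by how much.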
Fermi Acceleration in Relativistic Shocks
Donald Ellison
I will discuss cosmic ray production at relativistic shocks. I will emphasize the differences expected for relativistic shocks compared to non-relativistic ones and examine possible applications such as relativistic supernovae and gamma ray bursts.
Particle acceleration at pulsar-wind termination shocks
Gwenael Giacinti
MPIK Heidelberg
The X-ray emission from pulsar wind nebulae arises from particles accelerated at the shock that terminates the relativistic, strongly magnetized pulsar wind. However, conventional theories of particle acceleration break down at this shock, because the combination of low particle density and strong magnetic field places it outside the domain of validity of MHD. We first discuss how particles are, nevertheless, injected into a first-order-Fermi-like process and accelerated. We then study their acceleration in the equatorial region of the termination shock. To do so, we integrate the individual trajectories of electrons and positrons, in the test-particle limit, in the background electromagnetic fields present in that region. We find that the spectrum of accelerated electrons (or positrons, depending on the polarity) is significantly harder than $E^{-2.2}$. We calculate the resulting synchrotron spectrum, and compare our results with Chandra observations of the X-ray spectrum of the Crab nebula.
Pulsars with MAGIC
Jezabel Rodriguez Garcia
Max-Planck-Institut fuer Physik
MAGIC is a stereoscopic system of two imaging atmospheric Cherenkov telescopes located at the Roque de los Muchachos Observatory in La Palma (Spain), sensitive to gamma rays from a few tens of GeV to tens of TeV. Pulsar physics is one of the important topics in the MAGIC scientific program. In 2008, MAGIC detected for the first time VHE gamma-ray emission above 25 GeV from the Crab pulsar, and ever since, Crab observations with MAGIC have been providing important results for the understanding of pulsar physics. In this talk, we will present the recent scientific results derived from observations of the Crab pulsar, describe the technical innovations of MAGIC for the study of pulsars at a few tens of GeV, and give an outlook on future pulsar observations at very-high-energy gamma rays and their relevance to understanding these extreme celestial objects.
The Sun as a new laboratory for cosmic rays, gamma rays, neutrinos, and dark matter
Bei Zhou
CCAPP, OSU
The Sun must shine brightly in GeV–TeV gamma rays and neutrinos. These particles are produced by the interactions of cosmic rays with solar matter and radiation. Additional fluxes may be caused by the annihilation of dark matter in the solar core, perhaps with the final-state particles produced outside the Sun through the decay of metastable mediators. Importantly, a new generation of experiments is reaching the sensitivity required to detect the Sun at high energies. In gamma rays, the Sun has been detected in the GeV range by Fermi and will soon be studied in the TeV range by ARGO-YBJ, HAWC, and LHAASO. In neutrinos, IceCube is nearing the sensitivity required to detect the Sun at TeV energies. I will detail what these observations can teach us about cosmic rays in the inner solar system, solar magnetic fields, and dark matter. This talk will highlight work from our group, including arXiv:1508.06276, arXiv:1612.02420, arXiv:1703.04629, and arXiv:1703.10280, as well as the rapid growth in interest from other groups.
VERITAS Observations of Galactic Binary Systems
Payel Kar
University of Utah, VERITAS Collaboration
Only a handful of High Mass X-ray Binaries (HMXBs) in our galaxy are known emitters of TeV gamma rays. The variable VHE emission from these sources is generally attributed to modulation by their orbital periods, but the particle acceleration and gamma-ray production processes in these HMXBs are not well understood. In its 10 years of operation, VERITAS has observed two of these TeV-emitting HMXBs, LS I +61 303 and HESS J0632+057, for more than 450 hours, conducting many multi-wavelength campaigns. The results from recent observations, long-term monitoring, and multi-wavelength studies with X-ray (Swift-XRT) and GeV (Fermi-LAT) data for LS I +61 303 and HESS J0632+057 are discussed. Besides these two TeV binaries, an outline of the VERITAS binary discovery program is presented, with particular emphasis on PSR J2032+4127, the long-period (45-50 years) binary in a highly eccentric orbit with the Be star MT91 213. This system could be the origin of the very-high-energy emission from the unidentified source TeV J2032+4130. We present the status of observations of PSR J2032+4127, preliminary results, and the plan for continued monitoring through periastron in 2017.
Finale of a Quartet: Hints on Supernova Formation
Supernovae (SNe) and their remnants are important cosmic ray sources. However, the origin of one major type of SNe, Type Ia, is still not well understood. The two most popular hypotheses are the single-degenerate scenario, where one white dwarf (WD) accretes matter from its giant companion until the Chandrasekhar limit is reached, and the double-degenerate scenario, where two WDs merge and explode. We focus on the second scenario. It has long been realized that binary WD systems normally take an extremely long time to merge via gravitational waves, and it is still unclear whether WD mergers can fully account for the observed SN Ia rate. Recent effort has been devoted to the effects of introducing a distant tertiary to the binary system. The standard Kozai-Lidov mechanism can drive the binary WDs to high eccentricities, which could lead to direct collisions or much more efficient energy dissipation. Alternatively, we investigate the long-term evolution of hierarchical quadruple systems, i.e. a WD binary with a binary companion, which are basically unexplored, yet should be numerous. We explore their interesting dynamics and find that the fraction of systems reaching high eccentricities is greatly enhanced, hinting at a higher WD merger rate than predicted from triple systems with the same set of secular and non-secular effects considered. Considering the population of quadruple stellar systems, the quadruple scenario might contribute significantly to the overall rate of SNe Ia.
Massive neutrinos in cosmology
Massimiliano Lattanzi
Cosmological observations represent a powerful tool to constrain neutrino physics, complementary to laboratory experiments. In particular, observations of the cosmic microwave background (CMB) have the potential to constrain the properties of relic neutrinos, as well as of additional light relic particles in the Universe. I will present current constraints on neutrino properties, focusing on their mass and effective number, from the most recent Planck data, possibly in combination with other cosmological probes, especially galaxy surveys. I will also discuss prospects from future experiments, both from the ground and from space.
Interacting neutrinos in cosmology: exact description and constraints
Isabel M. Oldengott
Bielefeld University
We study and constrain the impact of non-standard neutrino interactions on the CMB angular power spectrum. Starting from the collisional Boltzmann equation, we derive the Boltzmann hierarchy for neutrinos including interactions with a massive scalar particle. In contrast to the Boltzmann hierarchy for photons, our interacting neutrino Boltzmann hierarchy is momentum dependent, which reflects non-negligible energy transfer in the considered neutrino interactions. We implement this Boltzmann hierarchy into the Boltzmann solver CLASS and compare our results with known approximations in the literature. We thereby find a very good agreement between our exact approach and the relaxation time approximation (RTA). The popular $\left( c_{\text{eff}}^2,c_{\text{vis}}^2 \right)$-parametrization however does not reproduce the correct signal in the CMB angular power spectrum. Using the RTA, we furthermore derive constraints on the effective coupling constant $G_{\text{eff}}$ from currently available cosmological data. Our results reveal a bimodal posterior distribution, where one mode represents the standard $\Lambda$CDM limit, and the other a scenario of neutrinos self-interacting with $G_{\rm eff} \simeq 3 \times 10^9 \, G_{\rm F}$ (where $G_{\text{F}}$ is the Fermi coupling).
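Schematically (our notation, not quoted from the paper), the relaxation time approximation replaces the full collision integral on the right-hand side of the Boltzmann equation by a single damping term,

$$ C[f_\nu] \simeq -a\,\Gamma\,\big(f_\nu - f_\nu^{\rm eq}\big), \qquad \Gamma \sim G_{\text{eff}}^2\, T_\nu^5, $$

where the $G_{\text{eff}}^2 T_\nu^5$ scaling of the interaction rate is the usual dimensional estimate for a four-fermion (Fermi-like) effective interaction.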
Constraints on secret interactions among sterile neutrinos from Planck 2015 data
Francesco Forastieri
University of Ferrara
Light sterile neutrinos with eV mass have been suggested by different anomalies observed in short-baseline neutrino experiments. These particles would have been produced in the early Universe, increasing the relativistic energy density via the effective number of relativistic species ($N_{\rm{eff}}$). This results in a conflict with existing cosmological bounds on the primordial radiation density and neutrino mass. In order to alleviate these discrepancies, basically by avoiding the thermalization of eV sterile neutrinos in the early Universe, secret interactions in the sterile sector mediated by a massive vector boson ($M_X < M_W$) have been proposed. Secret interactions generate a large matter potential in the sterile neutrino sector, reducing the effective mixing angle and suppressing the active-sterile oscillations. In particular, if the interactions are mediated by a gauge boson with $M_X < 10$ MeV, sterile neutrino production is suppressed for $T > 0.1$ eV; this behaviour seems to ameliorate the conflict between the cosmological constraints and laboratory experiments. Observations of the cosmic microwave background (CMB) radiation represent a powerful and unique tool to test this model in more detail. In this presentation I will show the results of a dedicated study, presenting constraints on the strength of the secret interaction and the corresponding mass bounds using the latest CMB Planck 2015 data. Finally, I will discuss the status of the sterile neutrino secret interaction scenario after this work.
Neutrinos with CMB and CMB lensing
Alexander Van Engelen
A number of ground-based CMB surveys currently in the planning stages will provide significant new information on physics beyond the standard model. Measurements of the lensing of the CMB are expected to provide evidence for neutrino mass even in the case of the minimal mass implied by neutrino oscillation experiments. Meanwhile, measurements of the phase of the acoustic oscillations in the primordial plasma will provide constraints on particles that were in equilibrium with the standard model prior to decoupling. After reviewing the prospects for these measurements, I will discuss some of the technical challenges involved.
Cosmological searches for a non-cold dark matter component
The standard $\Lambda$CDM cosmological model has successfully explained large-scale cosmological observations. However, there are some discrepancies between $\Lambda$CDM predictions and measurements at small scales. Even though these discrepancies could be due to unaccounted-for effects in weak lensing analyses and/or numerical simulations, in this talk I will explore the possibility of extending the standard cosmological model with an additional, subdominant, non-cold dark matter component. In particular, I will show the impact of such a scenario on various cosmological probes, including the CMB (Planck), weak lensing survey measurements (KiDS), and the number of satellite galaxies in the Milky Way.
New Direct Detection Signals from Dark Matter Bound States
Yue Zhang
I will discuss new signatures in direct detection experiments that arise if part or all of the dark matter in nature is in the form of bound states. Bound-state formation can make sub-GeV dark matter candidates much more visible at dark matter and neutrino detectors.
Capture and Decay of Electroweak WIMPonium
Patrick Fitzpatrick
Center For Theoretical Physics, Massachusetts Institute of Technology
The spectrum of Weakly-Interacting-Massive-Particle (WIMP) dark matter generically possesses bound states when the WIMP mass becomes sufficiently large relative to the mass of the electroweak gauge bosons. The presence of these bound states enhances the annihilation rate via resonances in the Sommerfeld enhancement, but they can also be produced directly with the emission of a low-energy photon. I will present a calculation of the rate for SU(2)-triplet dark matter (the wino) to bind into WIMPonium -- which is possible via single-photon emission for wino masses above 5 TeV for relative velocity $v < \mathcal{O}(10^{-2})$ -- and the subsequent decays of these bound states. I will also present results with applications beyond the wino case, e.g. for dark matter inhabiting a general nonabelian dark sector.
The LHC Dark Matter Working Group
Antonio Boveia
The LHC Dark Matter Working Group (LHC DM WG) brings together theorists and experimentalists to provide the benchmark models, interpretation, and characterisation needed for a robust and broad set of searches for dark matter at the LHC. I will discuss the work of the LHC DM WG and its predecessor, the ATLAS/CMS Dark Matter Forum: the interaction between theory and experiment, the types of signals being considered for LHC Run-2 searches, the evolving interpretation of the LHC results alongside direct and indirect detection, and the future.
Search for long-lived particles at CMS
Brian Francis
New particles with long lifetimes are introduced by many extensions of the standard model and would produce striking, non-conventional signatures in collider experiments, such as long-lived charged particles, highly displaced jets, and particles that come to rest within the detector and later decay. We present the results of several recent searches for long-lived particles with the CMS experiment in Run II of the LHC.
Search for low mass dijet resonances at CMS
Javier Mauricio Duarte
Fermi National Accelerator Lab.
We present several complementary searches for low-mass dijet resonances using a 35.9 inverse femtobarn data set of proton-proton collisions at 13 TeV collected with the CMS experiment at the LHC in 2016. One search uses the CMS scouting data stream concept to record larger data rates than otherwise possible. Another search uses an initial-state-radiation jet to overcome trigger thresholds and search for boosted dijet resonances whose decay products are merged into a single jet. Novel jet substructure techniques are used to avoid sculpting the jet mass distribution, and the dominant background is estimated from data. Both searches are interpreted in the context of leptophobic vector resonances and simplified models of dark matter with a leptophobic mediator. This approach has also been extended to the search for boosted Higgs bosons decaying to bottom quark-antiquark pairs.
Rare, Exotic, and Invisible Higgs Decays at CMS
Ben Kreis
We present the latest results in the search for rare, exotic, and invisible Higgs boson decays in proton-proton collision events collected with the CMS detector at the LHC. The rich experimental program we describe, which includes searches for lepton flavor violation and decays to dark matter and light scalars, provides a wide ranging probe for physics beyond the standard model.
Global Fits of the MSSM with GAMBIT
Jonathan Cornell
The wide range of probes of physics beyond the standard model (BSM) leads to the need for tools that consistently combine experimental results to make the most robust possible statements about the validity of new physics theories and the preferred regions of their parameter space. In this talk, I will introduce a new publicly released code for such studies: GAMBIT, the Global and Modular BSM Inference Tool. GAMBIT is a flexible and extensible framework for global fits of essentially any BSM theory. The code currently incorporates constraints from the dark matter relic density, direct and indirect dark matter searches, limits on production of new particles from the LHC and LEP, complete flavor constraints from LHCb, LHC Higgs production and decay measurements, and various electroweak precision observables. I will discuss the code's capabilities and results of scans of the parameter space of the Minimal Supersymmetric Standard Model.
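The essence of any global fit is a joint likelihood over heterogeneous observables, profiled or sampled over the model's parameter space. A minimal sketch follows; the two "observables", their predictions, central values, and errors are all hypothetical toys, and this is in no way GAMBIT's actual interface:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical two-parameter model with two toy observables.
    def predict_relic_density(theta):
        return 0.12 * np.exp(-(theta[0] - 1.0) ** 2)

    def predict_signal_strength(theta):
        return theta[1] ** 2

    def log_like(theta):
        # Joint likelihood = product of independent experiment likelihoods.
        lnL = -0.5 * ((predict_relic_density(theta) - 0.1199) / 0.0027) ** 2
        lnL += -0.5 * ((predict_signal_strength(theta) - 1.0) / 0.1) ** 2
        return lnL

    # A scanner (here simple profiling; real global fits use many samplers).
    result = minimize(lambda th: -log_like(th), x0=np.array([0.5, 0.5]))
    print("best-fit parameters:", result.x)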
Collider and Indirect Signatures of Non-Minimal Dark Matter Sectors
Linda Carpenter
Recent Fermi-LAT observations of dwarf spheroidal galaxies in the Milky Way have placed strong limits on the gamma-ray flux from dark matter annihilation. In order to produce the strongest limit on the dark matter annihilation cross-section, the observations of each dwarf galaxy have typically been "stacked" in a joint-likelihood analysis, utilizing optical observations to constrain the dark matter density profile in each dwarf. These limits have typically been computed only for single annihilation final states, such as $b\bar{b}$ or $\tau^+\tau^-$. In this work, we generalize this approach by producing an independent joint-likelihood analysis to set constraints on models where the dark matter particle annihilates to multiple final-state fermions. We interpret these results in the context of the most popular simplified models, including those with s- and t-channel dark matter annihilation through scalar and vector mediators. We present our results as constraints on the minimum dark matter mass and the mediator sector parameters. Additionally, we compare our simplified-model results to those of effective field theory contact interactions in the high-mass limit.
Victoria Kaspi
I review our present observational understanding of the mysterious new phenomenon of Fast Radio Bursts -- short (few ms) bursts of radio waves arriving from apparently cosmological distances -- as well as models for what these sources may be. I also describe the CHIME telescope, currently being built in Canada, and how it will impact this interesting puzzle.
The new era of galactic cosmic rays
A new era in galactic cosmic-ray physics has started with the precise and continuous observations from space experiments such as PAMELA and AMS-02. Their invaluable results are rewriting the theory of acceleration and propagation of cosmic rays, both at high energies, where several new features have been measured, challenging the accuracy of theoretical models, and at low energies, in the region affected by solar modulation. These precise measurements are improving our knowledge of galactic cosmic rays, allowing detailed studies of acceleration, propagation, and composition as has never been done before. They will also serve as a high-precision baseline for distinguishing the background from the signal of possible exotic sources. In this review, the status of the latest measurements of galactic cosmic rays is presented, together with the current open questions and future prospects.
Ultra-High-Energy Cosmic Rays
In this talk I will review the status and prospects of understanding the physics of ultra-high-energy cosmic rays. Focusing on the progress made thanks to data from the Pierre Auger Observatory and the Telescope Array, observations are discussed in the context of their implications for various source scenarios, and the remaining uncertainties are highlighted. The talk concludes with a summary of ongoing efforts to upgrade existing cosmic-ray detectors and to develop new ones with even larger apertures.
Cosmological magnetic fields and particle acceleration in the laboratory
Gianluca Gregori
Magnetic fields are ubiquitous in the Universe. The energy density of these fields is typically comparable to the energy density of the fluid motions of the plasma in which they are embedded. Magnetic fields are also essential for the production of high energy cosmic rays. The standard theoretical model for the origin of these strong magnetic fields is through the amplification of tiny seed fields via turbulent dynamo to the level consistent with current observations. Here we demonstrate, using laser-produced colliding plasma flows, that turbulence is indeed capable of rapidly amplifying seed fields to equipartition with the turbulent fluid motions. These results support the notion that turbulent dynamo is responsible for the observed present-day magnetization of the Universe. We also show that such turbulent and magnetized plasmas can drive plasma instabilities that energize electrons above the thermal background, thus providing a possible injection mechanism for cosmic ray acceleration. We conclude by discussing future experiments at the National Ignition Facility laser to study second order Fermi acceleration.
Cosmic-ray and gamma-ray anomalies and their interpretations
Daniele Gaggero
GRAPPA, University of Amsterdam
I will review several interesting anomalies in cosmic-ray (CR) and gamma-ray data and discuss possible interpretations, focusing on what they can reveal about the nature of CR sources and the physics of CR transport in the Galaxy.
Dark Energy Survey Year 1 Cosmology Results
Elisabeth Krause
The first year of Dark Energy Survey observations imaged 1321 square degrees of the Southern sky in griz. We present measurements of galaxy clustering and weak gravitational lensing from this data set, and cosmological parameters inferred from these two-point correlation functions in a blind analysis.
Improving the search for extragalactic dark matter using N-body simulations
Nicholas Rodd
Extragalactic galaxies and galaxy clusters are expected to be some of the brightest sources of dark matter annihilation on the sky. Further, catalogs such as the 2MASS survey tell us where thousands of these objects are. The challenge, however, is that catalogs only detail a subset of the baryonic properties of these galaxies. In this talk I will outline how to map from a catalog of galaxies to a map of the extragalactic dark matter distribution on the sky. I will show how the biases and systematics of the method can be understood in the context of an N-body simulation, and demonstrate that the projected sensitivity of this method at Fermi-LAT could produce limits comparable with those coming from the Milky Way dwarfs.
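As a rough illustration of the catalog-to-J-factor step (an NFW halo with no substructure boost; the helper and all numbers below are illustrative, not the pipeline of this talk):

    import numpy as np

    RHO_CRIT = 136.0  # critical density in Msun / kpc^3 (approx., h ~ 0.7)

    def j_factor_nfw(m200, c200, dist_kpc):
        # Integrated J-factor, int rho^2 dV / (4 pi d^2), for an NFW halo.
        # m200 in Msun, distance in kpc; returns Msun^2 / kpc^5.
        r200 = (3.0 * m200 / (4.0 * np.pi * 200.0 * RHO_CRIT)) ** (1.0 / 3.0)
        rs = r200 / c200
        mu = np.log(1.0 + c200) - c200 / (1.0 + c200)
        rho_s = m200 / (4.0 * np.pi * rs ** 3 * mu)
        # Analytic volume integral of rho^2 for NFW out to r200.
        int_rho2 = (4.0 * np.pi / 3.0) * rho_s ** 2 * rs ** 3 \
                   * (1.0 - (1.0 + c200) ** -3)
        return int_rho2 / (4.0 * np.pi * dist_kpc ** 2)

    # Toy "catalog": (M200 [Msun], concentration, distance [kpc]).
    catalog = [(1e12, 10.0, 1.0e4), (1e14, 6.0, 5.0e4)]
    for m, c, d in catalog:
        print(f"M200={m:.1e} Msun -> J = {j_factor_nfw(m, c, d):.2e} Msun^2/kpc^5")

In practice the halo mass and concentration must themselves be inferred from the catalog's baryonic observables, which is where the biases studied with the N-body simulation enter.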
A search for dark matter annihilation in nearby galaxy groups
Siddharth Mishra Sharma
We perform a search for dark matter (DM) annihilation in nearby galaxies using 413 weeks of publicly-available Fermi Pass 8 gamma-ray data, utilizing a novel method that takes advantage of recently-developed galaxy group catalogs based on the 2MASS Redshift Survey. Having validated our method using N-body simulations, we construct nearly all-sky maps of an expected DM annihilation signal in the local (z < 0.03) universe and look for this structure in the Fermi data, probing theoretically well-motivated regions of parameter space for conservative assumptions about substructure enhancement. I will present the results of our analysis, discussing the effect of modeling uncertainty and implications for the DM interpretation of the Galactic Center excess.
Searching for Dark Matter Annihilation in Milky Way Satellite Galaxies
Alex Drlica-Wagner (Fermilab), Keith Bechtol (LSST), Andrea Albert (Los Alamos National Lab)
Milky Way dwarf spheroidal satellite galaxies are the most dark-matter-dominated galaxies known. Due to their proximity, high dark matter content, and lack of astrophysical backgrounds, dwarf spheroidal galaxies are one of the most promising targets for the indirect detection of dark matter annihilation via gamma rays. Indeed, Fermi-LAT observations of previously known dwarf galaxies have robustly constrained the dark matter annihilation cross section to be less than the generic thermal relic cross section for dark matter particles with mass < 100 GeV. Recently, large optical surveys, such as the Dark Energy Survey and Pan-STARRS, have nearly doubled the known population of confirmed and candidate dwarf galaxies. We will present an updated gamma-ray analysis combining previously known and recently discovered dwarf galaxies, and discuss how current and future optical surveys will improve the sensitivity of gamma-ray searches for dark matter annihilation in dwarf galaxies.
Sommerfeld-Enhanced J-Factors For Dwarf Spheroidal Galaxies
Kimberly Boddy
For models in which dark matter annihilation is Sommerfeld-enhanced, the annihilation cross section increases at low relative velocities. Dwarf spheroidal galaxies (dSphs) have low characteristic dark matter particle velocities and are thus ideal candidates to study such models. We model the dark matter phase space of dSphs as isotropic and spherically-symmetric and determine the $J$-factors for several of the most important targets for indirect dark matter searches. For Navarro-Frenk-White density profiles, we quantify the scatter in the $J$-factor arising from the astrophysical uncertainty in the dark matter potential. We show that, in Sommerfeld-enhanced models, the ordering of the most promising dSphs may be different relative to the standard case of velocity-independent cross sections. This result can have important implications for derived upper limits on the annihilation cross section, or on possible signals, from dSphs.
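Schematically (our shorthand for the standard construction, not notation quoted from the talk), the velocity-weighted J-factor folds the enhancement $S$ into the usual line-of-sight integral:

$$ J_S = \int_{\rm l.o.s.} dl \int d^3v_1\, d^3v_2\; f(\vec{r}, \vec{v}_1)\, f(\vec{r}, \vec{v}_2)\; S\big(|\vec{v}_1 - \vec{v}_2|\big), $$

which reduces to the familiar $\int \rho^2\, dl$ when $S \to 1$, since $\int d^3v\, f(\vec{r}, \vec{v}) = \rho(\vec{r})$.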
Investigating what the Milky Way's dwarfs have to tell us about the Galactic Center extended excess
Ryan Keeley
UC Irvine
The Milky Way's Galactic Center may harbor the signal of annihilating dark matter in a gamma-ray excess, though dwarf galaxies remain dark in the commensurate emission expected from them. We incorporate Milky Way dark matter halo profile uncertainties, as well as an accounting of diffuse gamma-ray emission uncertainties, in dark matter annihilation models for the Galactic Center Extended gamma-ray excess (GCE) detected by the Fermi Gamma-ray Space Telescope. The allowed ranges of particle annihilation rate and mass expand when these unknowns are included. However, two of the most precise empirical determinations of the Milky Way halo's local density and density profile leave the signal region in considerable tension with dark matter annihilation searches from combined dwarf galaxy analyses for single-channel dark matter annihilation models. Accordingly, we quantify this tension in a joint likelihood analysis. We also show that astrophysical models and a representative self-interacting dark matter model avoid the tension between the GCE signal and the lack of a signal from the dwarfs. Since these arguments disfavor the interpretation of the GCE as prompt annihilation of dark matter, we set limits on the cross section for that process.
Searching for dark matter annihilation in Galactic substructure with photon statistics
Laura Chang
We propose a novel method to search for signatures of dark matter annihilation in Galactic substructure using gamma-ray data from the $\it Fermi$ Large Area Telescope. The method takes advantage of the fundamentally different photon-count statistics that describe dark matter annihilation from a population of subhalos versus from the smooth Milky Way halo. In addition, it exploits differences in the spatial distribution of subhalos and other astrophysical populations to improve the sensitivity to the substructure signature. We apply this analysis method to simulated $\it Fermi$ data and derive the projected sensitivity to dark matter annihilation in substructure. We can probe theoretically well-motivated regions of parameter space, providing a complementary method to existing dark matter searches.
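A toy Monte Carlo (not the likelihood analysis of this talk) illustrates why the two scenarios are distinguishable even at fixed mean intensity: a subhalo sky produces heavier-tailed pixel counts than a smooth halo.

    import numpy as np

    rng = np.random.default_rng(0)
    n_pix = 100_000
    mean_counts = 2.0  # same mean photon intensity in both skies

    # Smooth halo: pure Poisson counts in every pixel.
    smooth = rng.poisson(mean_counts, n_pix)

    # Subhalo sky: rare bright sources -> compound Poisson, same mean.
    f_src = 1e-3                    # fraction of pixels hosting a subhalo
    src_flux = mean_counts / f_src  # mean counts in subhalo-hosting pixels
    has_src = rng.random(n_pix) < f_src
    clumpy = rng.poisson(np.where(has_src, src_flux, 0.0))

    for name, sky in [("smooth", smooth), ("clumpy", clumpy)]:
        print(f"{name}: mean={sky.mean():.2f}  var={sky.var():.1f}  "
              f"P(n>=10)={np.mean(sky >= 10):.4f}")

Both skies have the same mean, but the clumpy sky's variance and bright-pixel probability are orders of magnitude larger; the photon-count likelihood exploits exactly this difference.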
LD2DM scenario as an explanation of the AMS-02 positron excess
Jatan Buch
In this talk, we discuss a scenario called late-decaying two-component dark matter (LD2DM), in which the entire DM consists of two semi-degenerate species. Within this framework, the heavier species is produced as a thermal relic in the early universe and decays to the lighter species over cosmological timescales. Consequently, the lighter species becomes the DM that populates the universe today. We show that annihilation of the lighter DM species with an enhanced cross-section, produced via such a non-thermal mechanism, can explain the observed excess in the AMS-02 positron flux while avoiding CMB constraints from the recombination epoch. We demonstrate that the scenario is robust against constraints from structure formation and CMB limits on late-time energy deposition during the cosmic dark ages. We explore possible cosmological and particle physics signatures in a toy model that realizes this scenario.
Zero-Range Effective Field Theory for Resonant Wino Dark Matter
Evan Johnson
The most dramatic "Sommerfeld enhancements" of neutral-wino-pair annihilation occur when the wino mass is near a critical value where there is a zero-energy S-wave resonance at the neutral-wino-pair threshold. Near a critical mass, low-energy winos can be described by a zero-range effective field theory in which the winos interact nonperturbatively through a contact interaction. The effective field theory is controlled by a renormalization-group fixed point at which the neutral and charged winos are degenerate in mass and their scattering length is infinite. The parameters of the zero-range effective field theory can be determined by matching wino-wino scattering amplitudes calculated by solving the Schrödinger equation for winos interacting through a potential due to the exchange of weak gauge bosons. If the wino mass is larger than the critical value, the resonance is a wino-pair bound state. The power of the zero-range effective field theory is illustrated by calculating the rate for formation of the bound state in the collision of two neutral winos through the emission of two soft photons.
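For orientation, in the single-channel zero-range (universal) limit the S-wave amplitude and cross section are fixed entirely by the scattering length $a$:

$$ f(k) = \frac{1}{-1/a - ik}, \qquad \sigma(k) = \frac{4\pi}{k^2 + 1/a^2}, $$

so $\sigma$ saturates the unitarity bound as $|a| \to \infty$. The wino case adds coupled neutral and charged channels, but it inherits this resonant structure.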
Thermal Dark Matter Below an MeV
Asher Berlin
In this talk, I will discuss a class of models in which thermal dark matter is lighter than an MeV. If dark matter thermalizes with the Standard Model below the temperature of neutrino-photon decoupling, constraints from measurements of the effective number of neutrino species are alleviated. This framework motivates new experiments in the direct search for sub-MeV thermal dark matter and light force carriers.
Indirect Detection of Neutrino Portal Dark Matter
Barmak Shams Es Haghi
We investigate the feasibility of the indirect detection of dark matter in a simple model using the neutrino portal. The model is very economical, with right-handed neutrinos generating neutrino masses through the Type-I seesaw mechanism and simultaneously mediating interactions with dark matter. Given the small neutrino Yukawa couplings expected in a Type-I seesaw, direct detection and accelerator probes of dark matter in this scenario are challenging. However, dark matter can efficiently annihilate to right-handed neutrinos, which then decay via active-sterile mixing through the weak interactions, leading to a variety of indirect astronomical signatures. We derive the existing constraints on this scenario from Planck cosmic microwave background measurements, Fermi gamma-ray observations of dwarf spheroidal galaxies and the Galactic Center, and AMS-02 antiproton observations, and also discuss the future prospects of Fermi and the Cherenkov Telescope Array. Thermal annihilation rates are already being probed for dark matter lighter than about 50 GeV, and this reach can be extended to dark matter masses of 100 GeV and beyond in the future. This scenario can also provide a dark matter interpretation of the Fermi Galactic Center gamma-ray excess, and we confront this interpretation with other indirect constraints. Finally, we discuss some of the exciting implications of extensions of the minimal model with large neutrino Yukawa couplings and Higgs portal couplings.
EBL constraints using TeV blazars observed with the MAGIC telescopes
Gaia Vanzo
Instituto de Astrofísica de Canarias
Recent results on the Extragalactic Background Light (EBL) intensity obtained from a combined likelihood analysis of blazar spectra detected by the MAGIC telescopes are reported. The EBL is the diffuse optical-infrared background light accumulated during galaxy evolution, emitted directly and/or reprocessed by dust, which provides unique information about the history of galaxy formation. The low-energy photons of the EBL may interact with very-high-energy (VHE, E > 100 GeV) photons from blazars, a subclass of Active Galactic Nuclei whose relativistic jets point towards Earth. This interaction between the EBL and the gamma-ray photons leaves an energy-dependent imprint of the EBL on the VHE gamma-ray spectra of the blazars. Therefore, the study of their spectra can be used to constrain the EBL density at different wavelengths and its evolution in time. The MAGIC telescopes are a stereoscopic system of two Imaging Atmospheric Cherenkov Telescopes of 17 m diameter each, situated in La Palma, Spain, and sensitive to gamma-ray photons with energies above about 50 GeV. In the last few years the MAGIC telescopes obtained accurate measurements of the spectra of 12 blazars in the redshift range from z=0.03 to z=0.944, totalling over 300 hours of observation. This allows us to improve upon previous EBL constraints by MAGIC, with results compatible with the state-of-the-art EBL models.
Determining the Intergalactic Photon Densities from Deep Galaxy Surveys and the Gamma-ray Opacity of the Universe
Floyd Stecker
We have calculated the extragalactic IR-UV photon density as a function of redshift, and the resulting IR-UV spectrum of the extragalactic background light. Our empirical approach is based on local-to-deep galaxy survey data obtained in different wavelength bands using many space-based telescopes. This approach allowed us, for the first time, to obtain a completely model-independent determination of extragalactic photon densities, and also to quantify their uncertainties. Using our photon density results, we were able to place 68% confidence upper and lower limits on the opacity of the universe to gamma rays as a function of energy and redshift. We compare our results with Fermi analyses of the spectra of extragalactic gamma-ray sources.
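Schematically (the standard pair-production optical depth, in our notation), the opacity follows by folding the derived photon densities $n(\epsilon, z)$ with the $\gamma\gamma \to e^+e^-$ cross section along the line of sight:

$$ \tau_{\gamma\gamma}(E, z_s) = \int_0^{z_s} dz\, \frac{dl}{dz} \int_{-1}^{1} d\mu\, \frac{1-\mu}{2} \int d\epsilon\; n(\epsilon, z)\, \sigma_{\gamma\gamma}\big(E(1+z), \epsilon, \mu\big), $$

where $\mu$ is the cosine of the interaction angle; the uncertainty band on $n(\epsilon, z)$ propagates directly into the quoted 68% limits on $\tau_{\gamma\gamma}$.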
Time dependence of AGN pair echo, and halo emission as a probe of weak extragalactic magnetic fields
Foteini Oikonomou
Gamma rays with energy exceeding 100 GeV emitted by extragalactic sources initiate cascades in the intergalactic medium. The angular and temporal distributions of the cascade photons that arrive at the Earth depend on the strength and configuration of extragalactic magnetic fields along the line of sight. For weak enough fields, extended emission around the source (a halo) is expected to be detectable, and the characteristics (angular size, energy dependence, and shape) of this emission are a sensitive probe of EGMF strength and correlation length. We model the expected spectra and angular profiles of blazars and misaligned active galactic nuclei (radio galaxies) in a wide range of the parameter space of extragalactic magnetic field strength and correlation length that is unconstrained by existing measurements. Our calculations focus on the time dependence of such halo emission, which is discussed for the first time in this work. We present the competitive bounds on (or measurement of) the extragalactic magnetic field strength and correlation length that are implied by the absence (or detection) of such extended emission in stacked searches of GeV halo emission from blazars and radio galaxies.
Searches for Angular Extension in High Latitude Fermi-LAT Sources
Manuel Meyer
We report on the Fermi High-Latitude Extended Source Catalog (FHES), a systematic search for spatial extension of gamma-ray point sources detected with the Fermi Large Area Telescope (LAT) at Galactic latitudes |b| > 5 degrees. Point sources listed in the 3FGL and 3FHL catalogs are used for this search. While the majority of high-latitude LAT sources are extragalactic blazars that appear point-like within the LAT angular resolution, there are several physics scenarios that predict the existence of populations of spatially extended sources. If dark matter consists of weakly interacting massive particles, the annihilation or decay of these particles in subhalos of the Milky Way would appear as a population of unassociated gamma-ray sources with finite angular extent. Gamma-ray emission from blazars could also be extended (so-called pair halos) due to the deflection of electron-positron pairs in the intergalactic magnetic field (IGMF). The pairs are produced in the absorption of gamma rays in the intergalactic medium and subsequently up-scatter photons of background radiation fields to gamma-ray energies. The measurement of pair halos would constrain the strength and coherence length scale of the IGMF. We report on new extended source candidates and their associations found in the FHES as well as limits on the IGMF based on the non-observation of the cascade.
Probing extragalactic magnetic fields with the gamma-ray spectrum of PG 1553+113
Matthias Lorentz
IRFU - CEA Saclay
The very high energy ($E > 100 $ GeV) gamma-ray flux from extragalactic sources is attenuated due to $e^+e^-$ pair production on the extragalactic background light (EBL). This attenuation process can lead to the development of electromagnetic cascades from the inverse-Compton scattering of background photons by the produced $e^+e^-$ pairs. The cascade secondary gamma-ray emission is reprocessed at lower energies and its spectral and temporal behavior depends on the properties of the extragalactic magnetic field (EGMF) along the line of sight due to the deflections of the charged component of the cascade. The temporal and spectral energy distribution structure of the gamma-ray emission from blazars can then be used to probe the EGMF. In this contribution the gamma-ray blazar PG 1553+113 is identified as a promising target to search for secondary emission, based on H.E.S.S. and Fermi-LAT energy spectra. Monte-Carlo simulations of cascades using the public code ELMAG are performed to derive EGMF constraints under different assumptions on the intrinsic spectrum and the time span of the gamma-ray activity of the source. Prospects to take advantage of the quasi-periodic modulation seen in the activity of PG 1553+113 to look for secondary gamma-ray emission are also presented.
Probing faint gamma-ray point sources in the inner Milky Way using PCAT
Tansu Daylan
Poisson regression of the Fermi-LAT data in the inner Milky Way reveals an extended gamma-ray excess. An important question is whether the signal is coming from a collection of unresolved point sources, possibly old recycled pulsars, or constitutes a truly diffuse emission component. Previous analyses have relied on non-Poissonian template fits or wavelet decomposition of the Fermi-LAT data, which find evidence for a population of faint point sources just below the 3FGL flux limit. In order to test this hypothesis, we use a Bayesian, transdimensional, and hierarchical inference framework, PCAT (Probabilistic Cataloger), by sampling from the posterior catalog space of faint point sources consistent with the observed gamma-ray emission in the inner Milky Way. By marginalizing over faint point sources, we constrain their spatial and spectral distributions. We then compare the performance of probabilistic cataloging with that of fluctuation analysis when inferring unresolved point sources in the low signal-to-noise limit.
SkyFACT: a novel analysis of Fermi diffuse gamma-ray data
Francesca Calore
LAPTh, CNRS
All models for Galactic diffuse gamma-ray emission share one property: they formally give a remarkably bad fit to the data. A large number of statistically significant residuals remain, making it very challenging to discriminate genuine features in the data from analysis artefacts. We present SkyFACT (Sky Factorization with Adaptive Constrained Templates) [1], a new approach for studying, modeling, and decomposing diffuse gamma-ray emission. In contrast to previous approaches, we can account for fine-grained variations related to uncertainties in gas tracers and small-scale variations in the cosmic-ray density, which are missed in predictions from cosmic-ray propagation codes, by introducing (and handling) ~100,000 nuisance parameters. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. We apply the method to the gamma-ray emission from the inner Galaxy, as observed by the Fermi Large Area Telescope. We define a simple reference model that removes most of the residual emission from the inner Galaxy and characterize the extended emission components: the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. [1] E. Storm, C. Weniger and F. Calore, arXiv:1705.04065 [astro-ph.HE], submitted to JCAP.
Measuring High-Energy Gamma-Ray Spectra with HAWC
Samuel Marinelli
The High-Altitude Water-Cherenkov (HAWC) experiment is a TeV gamma-ray observatory located at 4100 m above sea level on the Sierra Negra mountain in Puebla, Mexico. Each of the detector's 300 water-filled tanks is instrumented with four photomultiplier tubes that detect the Cherenkov radiation produced by charged particles created in extensive air showers. With an instantaneous field of view of 2 sr and a duty cycle exceeding 95%, HAWC is a powerful survey instrument sensitive to pulsar wind nebulae, supernova remnants, active galactic nuclei, and other gamma-ray sources. The mechanisms of particle acceleration at these sites can be probed by measuring their emitted photon energy spectra. To this end, we have developed an event-by-event method for reconstructing the energies of HAWC gamma-ray events using an artificial neural network. We will show that this new technique greatly improves HAWC's energy resolution and enables it to precisely resolve energies as high as 100 TeV in Monte Carlo simulations. We will also present progress towards measuring high-energy spectra with the new energy-estimation method.
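As a schematic of event-by-event energy regression with a neural network (the shower features, their correlations, and the training data below are entirely toy constructs, not HAWC's reconstruction variables):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    n_events = 20_000

    # Toy shower features loosely correlated with the true energy.
    log_e_true = rng.uniform(0.0, 2.0, n_events)            # log10(E / TeV)
    features = np.column_stack([
        log_e_true + rng.normal(0.0, 0.15, n_events),       # log total charge
        0.5 + 0.2 * log_e_true + rng.normal(0.0, 0.05, n_events),  # hit fraction
        rng.uniform(0.0, 45.0, n_events),                   # zenith angle [deg]
    ])

    # Train a small network to regress log-energy from the features.
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
    net.fit(features, log_e_true)

    pred = net.predict(features)
    print("log10(E) resolution (RMS):", np.std(pred - log_e_true))

Regressing the logarithm of the energy is the usual choice when the target spans several decades, as it makes the resolution roughly uniform across the dynamic range.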
A Hadronic Model of the Fermi Bubbles: Galactic Cosmic-Rays in a Breeze
We present a self-consistent model of the Fermi Bubbles, described as a decelerating outflow of gas and non-thermal particles produced within the Galactic center region on a $\sim 100$ Myr timescale. Motivated by observations, we use an outflow with velocity of order 100 km/s, which is slower than the velocities used in models describing the Bubbles as a more recent outburst. We take into account cosmic-ray (CR) energy losses due to proton-proton interactions, and calculate the resulting gamma-ray emission. Our model can reproduce both the spatial morphology and the spectra of the Bubbles on a range of different scales. We find that CRs diffusing and advecting within a breeze outflow result in an approximately flat surface-brightness profile of the gamma-ray emission, as observed by the Fermi satellite. Finally, we apply similar outflow profiles to larger Galactocentric radii, and investigate their effects on the CR spectrum and the boron-to-carbon ratio. Hardenings can appear in the spectrum, even in cases with equal CR diffusion coefficients in the disk and halo. It is postulated that this hardening effect may relate to the observed hardening feature in the CR spectrum at a rigidity of $\sim 200$ GV.
Radio and Gamma-Ray Constraints on the Wind, Magnetic Field, and Cosmic Rays along the minor axis of the starburst M82
Benjamin Buckman
Cosmic rays can be probed via their non-thermal emission in the radio and gamma-ray bands. One-zone models of cosmic rays have been used to match the integrated emission of starburst galaxies. We construct multi-dimensional models of the local starburst M82 using the cosmic-ray propagation code GALPROP. Using the integrated gamma-ray and radio spectra, along with the vertical distribution of radio emission along the minor axis, we constrain the gas density, magnetic field strength, and cosmic-ray population. We show that the wind velocity and diffusion coefficient can be constrained by the morphology of the radio halo. We discuss the interplay between gas density, magnetic field, and outflow velocity, and how they affect the emission. We comment on the energetics of the cosmic-ray species in the system. We provide direct constraints on the dynamical importance of cosmic rays in driving the outflow of the galaxy.
Astronomy's "Next Big Thing:" What can we expect from direct gravitational-wave observations?
Szabolcs Marka
Advanced LIGO's ongoing observation runs provided humanity with the first direct detection of gravitational waves, just in time for the 100th anniversary of Einstein's prediction. Beyond the discovery, there is a growing focus on incorporating gravitational waves as a new window on questions from violent transients to cosmology. I will discuss some aspects of (i) the instrumental breakthroughs that enabled the unprecedented sensitivity reached by Advanced LIGO and (ii) the key scientific directions in which gravitational wave searches can be utilized, directly as well as in the context of multimessenger astronomy.
A "nu" look at gravitational waves: The black hole birth rate from neutrinos combined with the merger rate from LIGO
I present recent work (arXiv:1704.05073) on projections for measuring the black hole birth rate from the diffuse supernova neutrino background (DSNB) with future neutrino experiments, and on the information that can be gained by combining this with the merger rate from LIGO. The DSNB originates from neutrinos emitted by all the supernovae in the Universe and is expected to be made up of two components: neutrinos from neutron-star-forming supernovae, and a subdominant component at higher energies from black-hole-forming "unnovae". We perform a Markov Chain Monte Carlo analysis of simulated DSNB data in an experiment similar to Hyper-Kamiokande, focusing on this second component. Since the only evidence for unnovae comes from simulations of collapsing stars, we choose two sets of priors: one where the unnovae are well understood and one where their neutrino emission is poorly known. By combining the black hole birth rate from the DSNB with projected measurements of the black hole merger rate from LIGO, we show that the fraction $\epsilon$ of black holes which lead to binary mergers observed today could be constrained to be within the range $8 \times 10^{-4} \leq \epsilon \leq 5 \times 10^{-3}$ after ten years of running an experiment like Hyper-Kamiokande.
Gravitational waves from bubble dynamics: Beyond the Envelope
Masahiro Takimoto
We study gravitational wave (GW) production from bubble dynamics during a cosmic first-order phase transition by using the method of relating the GW spectrum to the two-point correlation function of the energy-momentum tensor $\langle T(x)\, T(y) \rangle$. We adopt the thin-wall approximation but not the envelope approximation, and take the (long-lasting) non-envelope parts into account by assuming free propagation after collision. We first write down the analytic expressions for the spectrum, and then evaluate them with numerical methods. As a result, the growth and saturation of the spectrum are observed as a function of the duration time of the non-envelope parts. It is found that the IR region of the spectrum shows a significant enhancement compared to the one with the envelope approximation, growing from $f^3$ to $f^1$ in the long-lasting limit. In addition, we find saturation in the spectrum in the same limit, indicating a decrease in the correlation of the energy-momentum tensor at late times. Our results are relevant to GW production from bubble collisions and sound waves.
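The relation used here can be written schematically (up to prefactors we omit) as the spectrum being a weighted double time integral of the unequal-time correlator of the transverse-traceless anisotropic stress:

$$ \Omega_{\rm GW}(k) \;\propto\; k^3 \int dt_1 \int dt_2\; \cos\!\big(k(t_1 - t_2)\big)\; \Pi(k; t_1, t_2), \qquad \Pi \sim \langle T^{\rm TT}(t_1, \vec{k})\, T^{\rm TT\,*}(t_2, \vec{k}) \rangle, $$

so the growth and saturation of the spectrum track how long the correlation encoded in $\Pi$ persists after the collisions.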
Neutron star mergers and multi-messenger astronomy
Daniel Siegel
With the discovery of binary black hole mergers by LIGO, the era of gravitational wave (GW) astronomy and multi-messenger astronomy including GWs has begun. As the advanced LIGO and Virgo detectors approach design sensitivity in the next few years, exciting discoveries are expected to be made, including neutron star mergers, which are among the most promising GW events for multi-messenger astronomy. In this talk, I will present recent results from general-relativistic magnetohydrodynamic simulations of the merger and long-term post-merger evolution, and discuss how multi-messenger observations of NS mergers may hold the key to understanding and solving several long-standing problems in astrophysics; these include the origin and generation of GWs and the accompanying electromagnetic transients across the electromagnetic spectrum, including short gamma-ray bursts and kilonovae, the properties of nuclear matter at high densities, and the origin of the heavy elements in the universe.
The Present and Future of Real-Time Alerts from AMON
James Delaunay
The Astrophysical Multimessenger Observatory Network (AMON) will connect observatories from around the world, enabling real-time coincidence searches across all four messengers (neutrinos, cosmic rays, gamma rays, and gravitational waves) and rapid follow-up observations of the resulting alerts. AMON's first real-time alerts were commissioned in 2016 with "pass-through" notices of IceCube likely-cosmic (HESE- and EHE-type) neutrino events, leading to multiple follow-up campaigns that have been reported through the GCN circulars. Looking ahead, AMON's first bona fide multimessenger real-time alerts are planned to be high-energy neutrino + gamma-ray ("nu + gamma") alerts resulting from coincidences of IceCube neutrinos with Swift, Fermi, or High-Altitude Water Cherenkov (HAWC) gamma-ray transients or subthreshold signals. The talk will summarize key properties of the current alert streams and preview the expected properties of upcoming nu + gamma AMON alerts.
Search for High-energy Neutrino Emission from Fast Radio Bursts
Donglian Xu
Fast radio bursts (FRBs) are non-periodic millisecond radio outbursts that are thought to be of astrophysical origin. Since the first FRB was discovered by the Parkes Radio Telescope in 2007, a total of 23 FRBs with unique locations (FRB 121102 has repeated dozens of times) have been observed with multiple radio telescopes. Although the nature of FRBs is still largely unknown, their high dispersion measures indicate that they most likely originate from extragalactic sources. A large multitude of models have been proposed to explain the FRB phenomenon, most of which involve strong magnetic fields and are of leptonic nature. Currently there are no concrete models predicting high-energy neutrinos from FRBs, but in principle a strongly magnetized environment such as that of a magnetar could produce short radio bursts through the volatility of its magnetic fields while also hosting hadronic processes. We will present the results of a recent search for high-energy neutrinos coincident spatially and temporally with FRBs in 6 years of IceCube data.
An FRB in a bottle?: A multi-wavelength approach
Kazumi Kashiyama
Recently the repeating fast radio burst (FRB) 121102 has been confirmed to be an extragalactic event, and a persistent radio counterpart has been identified. One of the leading models is the young neutron star model, in which a pulsar (or magnetar) of age $\lesssim 100$ yr, surrounded by a wind nebula and supernova remnant, is considered, and connections between FRBs and luminous pulsar-driven supernovae are predicted. I will discuss multi-messenger/multi-wavelength approaches to testing this scenario.
Seeking the sources of high-energy neutrinos with Swift
Azadeh Keivani
IceCube has reported the discovery of the first high-energy astrophysical neutrino candidates; however, the nature of the sources responsible for these neutrinos, potentially also the sources of the highest-energy cosmic rays, is still unknown, and no high-confidence electromagnetic counterparts to any of the neutrino events have yet been detected. If the sources producing these highest-energy cosmic neutrinos are transient, they may be identifiable in rapid-response observations with Swift. The possibility of discovering multimessenger transient sources in this fashion is one of the main motivations for the Astrophysical Multimessenger Observatory Network (AMON). I will present the results of the first Swift satellite follow-up campaigns seeking to identify transient or variable X-ray or UV/optical sources that might be associated with the high-energy neutrinos detected by the IceCube Neutrino Observatory. Real-time public alerts providing the coordinates and arrival times of high-energy neutrino events have been provided by IceCube and distributed via AMON and the Gamma-ray Coordinates Network (GCN) since April 2016.
Precision Measurement of 3He-to-4He ratio in Cosmic Rays with the Alpha Magnetic Spectrometer on the International Space Station
Francesca Giovacchini
CIEMAT (Spain)
Knowledge of the energy dependence of the 3He-to-4He flux ratio (3He/4He) is important in understanding the propagation of cosmic rays. As 3He is assumed to be produced by interactions of heavier nuclei with the interstellar matter, the 3He/4He ratio is a powerful tool for determining the amount of interstellar material traversed by cosmic rays. AMS results are based on 9 million 3He events and 56 million 4He events collected in the first five years of operation onboard the ISS. The precise measurement of the 3He/4He ratio from 0.7 GeV/n to 10 GeV/n is presented for the first time. The AMS results are unique and distinct from all the previous data.
Evidence for Stochastic Acceleration of Secondary Antiprotons by Supernova Remnants
Ilias Cholis
Supernova remnants are known to be the main sources of galactic cosmic rays. They could also be a possible explanation for the rise of the positron fraction, if secondary positrons are produced and then accelerated around the supernova shock front. Yet, if secondary positrons are stochastically accelerated in such shocks, other secondary cosmic-ray species will also be accelerated. Using recent measurements of the cosmic-ray antiproton and proton fluxes in the energy range of 1–1000 GeV, we study the contribution to the antiproton-to-proton ratio from stochastically accelerated secondary antiprotons in SNRs. We consider several well-motivated models for cosmic-ray propagation in the interstellar medium and marginalize our results over the uncertainties related to the antiproton production cross section and the time-, charge-, and energy-dependent effects of solar modulation. We find that the increase in the antiproton-to-proton ratio at energies above 100 GeV cannot be accounted for within the context of conventional cosmic-ray propagation models, and that there is statistical evidence for an additional component that could be provided by the stochastically accelerated secondary antiprotons. Under the same conditions in SNRs, we show that accelerated secondary positrons can account for a significant fraction of the observed positron flux excess.
Impact of Cosmic Ray Transport on Galactic Winds
Ryan Farber
The role of cosmic rays generated by supernovae and young stars has very recently begun to receive significant attention in studies of galaxy formation and evolution due to the realization that cosmic rays can efficiently accelerate galactic winds. Microscopic cosmic ray transport processes are fundamental for determining the efficiency of cosmic ray wind driving. Previous studies focused on modeling of cosmic ray transport either via a constant diffusion coefficient or via streaming proportional to the Alfven speed. However, in predominantly neutral gas, cosmic rays can propagate faster than in the ionized medium and the effective transport can be substantially larger; i.e., cosmic rays can decouple from the gas. We perform three-dimensional magnetohydrodynamical simulations of patches of galactic disks including the effects of cosmic rays. Our simulations include the decoupling of cosmic rays from the neutral interstellar medium. We find that, compared to the ordinary diffusive cosmic ray transport case, accounting for the decoupling leads to significantly different wind properties such as the gas density and temperature, significantly broader spatial distribution of cosmic rays, and larger wind speed. These results have implications for X-ray, gamma-ray and radio emission, and for the magnetization and pollution of the circumgalactic medium by cosmic rays.
Secondary production of cosmic ray anti-deuteron and anti-Helium3
Ryosuke Sato
We evaluate the flux of cosmic-ray anti-deuterons and anti-Helium3 from secondary astrophysical production. The production cross section in proton-proton collisions is one of the most important input parameters for determining the secondary cosmic-ray flux. However, the production cross section of composite (anti-)nuclei is very small, and the cross section data from collider experiments are quite limited. For this reason, proton-heavy-ion and heavy-ion-heavy-ion collision data are often used in the literature to determine anti-nuclei production cross sections. In heavy-ion collisions, however, the physical volume of the hadron emission region is larger than in proton-proton collisions, and in nuclear physics it is known that the production rate of composite nuclei obeys a scaling law with the volume of the emission region. This point has so far been neglected in calculations of the cosmic-ray flux. We apply this scaling law to calculate the anti-deuteron and anti-Helium3 fluxes. Our result is larger than previous works by 1-2 orders of magnitude. In particular, the secondary anti-Helium3 flux could be within the reach of a five-year exposure of AMS-02.
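Schematically, in the coalescence picture the anti-deuteron yield is quadratic in the anti-nucleon yields, with a coalescence factor $B_2$ that scales inversely with the emission volume (the scaling exploited in this work):

$$ \gamma_{\bar d}\, \frac{d^3 N_{\bar d}}{dk_{\bar d}^3} = B_2 \left( \gamma_{\bar p}\, \frac{d^3 N_{\bar p}}{dk_{\bar p}^3} \right) \left( \gamma_{\bar n}\, \frac{d^3 N_{\bar n}}{dk_{\bar n}^3} \right), \qquad B_2 \propto \frac{1}{V}, $$

so the smaller emission volume of proton-proton collisions implies a larger $B_2$, and hence a larger secondary flux, than heavy-ion data alone would suggest.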
Nearby Pulsars and The Cosmic Ray Positron Excess
Dan Hooper
Measurements of the Geminga and B0656+14 pulsars by HAWC and Milagro indicate that these objects generate significant fluxes of very-high-energy electrons. From the very-high-energy gamma-ray intensity and spectrum of these pulsars, one can calculate their expected contributions to the local cosmic-ray positron spectrum. From these considerations, we find that pulsars produce a flux of high-energy positrons that is similar in spectrum and magnitude to the positron fraction measured by PAMELA and AMS-02. In light of this result, we conclude that it is very likely that pulsars provide the dominant contribution to the long-perplexing cosmic-ray positron excess. I will also discuss the implications of these results for pulsars in the Galactic Center region.
CALET: Summary of the First Two Years on Orbit
Yoichi Asaoka
Waseda University JP
The CALorimetric Electron Telescope (CALET) space experiment, which has been developed by Japan in collaboration with Italy and the United States, is a high-energy astroparticle physics mission installed on the International Space Station (ISS). The primary goals of the CALET mission include investigating possible nearby sources of high-energy electrons, studying the details of galactic particle propagation and searching for dark matter signatures. During a two-year mission, extendable to five years, the CALET experiment will measure the flux of cosmic-ray electrons (including positrons) to 20 TeV, gamma rays to 10 TeV and nuclei with Z=1 to 40 up to 1,000 TeV. The instrument consists of two layers of segmented plastic scintillators for cosmic-ray charge identification (CHD), a 3-radiation-length-thick tungsten–scintillating-fiber imaging calorimeter (IMC), and a 27-radiation-length-thick lead-tungstate calorimeter (TASC). CALET has sufficient depth, imaging capabilities and excellent energy resolution to allow for a clear separation between hadrons and electrons and between charged particles and gamma rays. The instrument was launched on August 19, 2015 to the ISS. Since the start of operation, continuous observations have been carried out successfully. We will present a summary of the CALET observations: 1) the electron energy spectrum, 2) proton and nuclei spectra, 3) gamma-ray observations, together with results of the on-orbit performance study. We will also present the results of observations of the electromagnetic counterparts to LIGO-Virgo gravitational wave events.
Recent results from the Pierre Auger Observatory
Radomir Smida
Karlsruhe Institute of Technology
The objectives of the Pierre Auger Observatory are to probe the origin and characteristics of cosmic rays above $10^{17}$ eV and to study the interactions of these, the most energetic particles observed in nature. The Observatory design features an array of water Cherenkov stations deployed over a surface of $3000$ km$^2$ overlooked by fluorescence telescopes. This design and a sophisticated data analysis pipeline provide us with a large set of high quality data, which has led to major breakthroughs in the last decade. The Observatory has recorded data from an exposure exceeding $60,000$ km$^2$ sr yr since its beginning in 2004. The latest results together with systematic uncertainties are discussed in this talk. A major upgrade, known as AugerPrime, with an emphasis on improved mass composition determination using the surface detector is also presented.
Recent results from the Telescope Array Experiment
Tareq Abuzayyad
The Telescope Array (TA) measures the properties of ultra-high-energy cosmic ray (UHECR) induced extensive air showers. TA employs a hybrid detector comprised of a large surface array of scintillator detectors overlooked by three fluorescence telescope stations. The TA Low Energy extension (TALE) detector has operated as a monocular Cherenkov/fluorescence detector for nearly three years, and has just been complemented by a closely spaced surface array to commence data taking in hybrid mode. The TAx4 upgrade is underway and aims to, as the name suggests, quadruple the size of the surface array to improve statistics at the highest energies (post-GZK events). In this talk we will describe the experiment and its various upgrades, and we will summarize the latest results on the energy spectrum, composition, and anisotropy of UHECRs, obtained with nine years of observation. The energy spectrum measured by the TALE FD, extending from a low energy of 4 PeV to a high of a few EeV, will be presented in some detail.
The cosmic-ray flux spectrum above 300 PeV from the Pierre Auger Observatory
Alan Coleman
The flux of cosmic rays observed at the Pierre Auger Observatory spans almost three decades in energy. This energy range is covered by combining the measurements from the nested 1500 m and 750 m surface detector arrays. The energy scale relies on the nearly calorimetric energy measurements performed with Auger's fluorescence detectors. With a total exposure of about 52,000 km^2 sr yr, the observatory has accumulated large statistics, which allows for a precise measurement of the flux spectrum above 300 PeV. A spectral shape, motivated by potential cosmic-ray source models, is used to identify the spectral features, including the ankle and the suppression at the highest energies. The shape of the spectrum is discussed along with its implications.
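For illustration, a smoothly broken power law of the type commonly used in such fits (schematic form; the exact parameterization is defined in the Auger analyses) is $J(E) \propto E^{-\gamma_1}$ for $E < E_{\rm ankle}$ and $J(E) \propto E^{-\gamma_2}\,[1+(E/E_s)^{\Delta\gamma}]^{-1}$ above it, with $E_{\rm ankle} \approx 5\times10^{18}$ eV and $E_s$ the suppression energy; the fitted parameters then characterize the ankle and the high-energy flux suppression.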
First results from the full-scale prototype for the Fluorescence detector Array of Single-pixel Telescopes
The Fluorescence detector Array of Single-pixel Telescopes (FAST) is a design concept for the next generation of Ultra-High-Energy Cosmic Ray (UHECR) observatories, addressing the requirements for a large-area, low-cost detector suitable for measuring the properties of the highest energy cosmic rays. In the FAST design, a large field of view is covered by a few pixels at the focal plane of an optical apparatus. Motivated by the successful detection of UHECRs using a prototype comprised of a single 200 mm PMT and a 1 square meter Fresnel lens system, we have developed a new full-scale prototype consisting of four 200 mm PMTs at the focus of a 1.6 m segmented mirror. In October 2016 we installed the full-scale prototype at the Telescope Array site in central Utah, USA, and began steady data acquisition. We report on first results of the full-scale FAST prototype, including measurements of artificial light sources, distant ultraviolet lasers, and UHECRs.
Sensitivity of cosmological data to the neutrino mass hierarchy
Martina Gerbino
University of Rome, Sapienza
Present measurements are not able to firmly single out nature's choice for the neutrino mass hierarchy. Consequently, in the absence of a robust measurement of the neutrino mass ordering, a desirable bound on the neutrino mass would be one which relies on the least informative possible assumption about the hierarchical distribution of the total mass among the three eigenstates. We will discuss a novel method to quantify the sensitivity of cosmological data to the neutrino mass hierarchy in the context of Bayesian analysis.
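For orientation (standard numbers, not specific to this talk): the measured splittings $\Delta m^2_{21} \simeq 7.5\times10^{-5}$ eV$^2$ and $|\Delta m^2_{31}| \simeq 2.5\times10^{-3}$ eV$^2$ imply minimal total masses of $\sum m_\nu \gtrsim 0.06$ eV for the normal ordering and $\gtrsim 0.10$ eV for the inverted ordering, which is why cosmological bounds approaching these values begin to carry information on the hierarchy.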
Cosmological Observables Beyond Linear Gravity
Tom Giblin
In recent years, advances in numerical methods have allowed us to calculate precision observables with fewer assumptions. Here I will discuss two of these observables, Hubble diagrams and the weak lensing convergence power spectrum, and comment on how inhomogeneities, entering at high order, affect cosmological measurements.
Primordial non-Gaussianity and statistical anisotropy
Saroj Adhikari
I will describe how statistical anisotropies, such as dipole modulations of the cosmic microwave background temperature and polarization fluctuations, are more likely if the primordial fluctuations are non-Gaussian. I will then discuss the implications of this effect for observations in the cosmic microwave background temperature and polarization anisotropies, and how such observations can be used to constrain the level of non-Gaussianities in the primordial fluctuations. In particular, I will focus on how the addition of statistical anisotropy information from E-mode polarization can help tighten current primordial non-Gaussianity constraints.
CLUMPY: A public code for γ-ray and ν signals from Dark Matter structures
Moritz Hutten
DESY/Humboldt-Universität Berlin
In this talk we will present the latest development of the CLUMPY code. The first version aimed at the calculation of the astrophysical J-factors from dark matter annihilation/decay in any galaxy or galaxy cluster dark matter halo, including substructures. While refining several aspects of the first version (halo-to-halo concentration scatter, multi-level boost factors, and triaxiality), the second release additionally provides i) a full refactoring of the I/O, ii) skymaps for γ-ray and ν fluxes from generic annihilation/decay spectra and the associated angular power spectrum, and iii) a Jeans analysis module to obtain dark matter density profiles and J-factors from kinematic data in relaxed spherical systems (e.g., dwarf spheroidal galaxies). After presenting some examples of these functionalities, we will also discuss the ongoing development of a third release that will include the overall extragalactic γ-ray flux from cosmic dark matter structure.
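Schematically, the astrophysical factors computed by such codes are line-of-sight integrals over the dark matter density: for annihilation, $J = \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} \rho^2(\ell)\, d\ell$, and for decay, $D = \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} \rho(\ell)\, d\ell$ (notation schematic; see the CLUMPY documentation for the exact conventions).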
Dark Matter Imprints of Heavy Long-Lived Particles
Bibhushan Shakya
Cincinnati/Michigan
Particles present in the early Universe can leave observable imprints if they affect dark matter properties after dark matter has gone out of equilibrium with the thermal bath. We will investigate such possibilities and their associated observable signatures in several well-motivated dark matter frameworks.
Searching for the Darkest Galaxies: Covering the Entire Southern Sky with DECam
Keith Bechtol
LSST
Milky Way satellite dwarf galaxies are among the oldest, smallest, and most dark matter dominated galaxies in the known Universe. The study of these tiny galaxies can help shed light on the nature of dark matter and the mysteries of galaxy formation. Over the last two years, efforts using the Dark Energy Camera (DECam) have nearly doubled the known population of Milky Way satellite galaxies. However, to date, only a fraction of the southern sky has been uniformly imaged by DECam. We will present results from two new surveys, the Magellanic Satellites Survey (MagLiteS) and the Blanco Imaging of the Southern Sky (BLISS) survey, which are using DECam to image the southern sky to unprecedented depths.
Science with the Hyper Suprime-Cam Survey
Rachel Mandelbaum
Hyper Suprime-Cam (HSC) is an imaging camera mounted at the Prime Focus of the Subaru 8.2-m telescope operated by the National Astronomical Observatory of Japan on the summit of Maunakea in Hawaii. A consortium of astronomers from Japan, Taiwan and Princeton University is carrying out a three-layer, 300-night, multiband survey from 2014-2019 with this instrument. In this talk, I will focus on the HSC survey Wide Layer, which will cover 1400 square degrees in five broad bands (grizy), to a 5 sigma point-source depth of r~26. We have covered 240 square degrees of the Wide Layer in all five bands, and the median seeing in the i band is 0.60 arcseconds. This powerful combination of depth and image quality makes the HSC survey unique compared to other ongoing imaging surveys. I will describe the HSC survey dataset and the completed and ongoing science analyses of the survey Wide Layer, including galaxy studies and strong and weak gravitational lensing, with an emphasis on weak lensing. I will demonstrate the level of systematics control, the potential for competitive cosmology constraints, and some early results, and describe some lessons learned that will be of use for other ongoing and future lensing surveys.
Maximizing Information-Extraction from Next-Generation Surveys
Zachary Slepian
I will present a suite of new algorithms for measuring higher-point statistics from large-scale structure surveys. I will begin with a transformatively fast algorithm that enables computation of the isotropic 3-point correlation function scaling as the number of galaxies squared. This algorithm was applied to BOSS data to make the first high-significance detection of the Baryon Acoustic Oscillations as well as to constrain novel forms of biasing. I will then present a generalization of the algorithm allowing computation of the anisotropic 3PCF. I'll close by showing how the fundamental kernel of these algorithms enables measurement of N-point functions for any desired N, scaling as the square of the number of galaxies.
Failures of homogeneous and isotropic cosmologies in Extended Quasi-Dilaton Massive Gravity
We analyze the Extended Quasi-Dilaton Massive Gravity model around a Friedmann-Lemaitre-Robertson-Walker cosmological background. We present a careful stability analysis of asymptotic fixed points. We find that the traditional fixed point cannot be approached dynamically, except from a perfectly fine-tuned initial condition involving both the quasi-dilaton and the Hubble parameter. A less-well examined fixed-point solution, where the time derivative of the 0-th Stuckelberg field vanishes $\dot\phi^0=0$, encounters no such difficulty, and the fixed point is an attractor in some finite region of initial conditions. We examine the question of the presence of a Boulware-Deser ghost in the theory. We show that the additional constraint which generically allows for the elimination of the Boulware-Deser mode is $\textit{only}$ present under special initial conditions. We find that the only possibility corresponds to the traditional fixed point, and the initial conditions are the same fine-tuned conditions that allow the fixed point to be approached dynamically. Statement of Acknowledgement: This presentation was made possible, in part, through financial support from the School of Graduate Studies at Case Western Reserve University.
Gamma Rays/Galactic
Cosmic Rays/Extragalactic
High-energy X-rays
Fiona Harrison
High-energy emission from compact astrophysical objects
Compact astrophysical sources represent the most extreme and powerful end-points of the lives of massive stars. They power relativistic, magnetized plasma outflows which interact with the ambient medium, leading to a large variety of phenomena observable in the high- and very-high-energy regime. In particular, the complex pulsar / pulsar wind nebula / supernova remnant system provides an optimal scenario for studying fundamental questions of plasma-magnetic field interactions, covering a wide range of size and acceleration scales, from a few kilometres to more than 20 pc in some cases, and from a few thousand kilometres per second to relativistic velocities. We will review the most recent experimental results concerning these objects and the implications for our current knowledge of the physical processes behind the observed radiation.
The Multi-TeV Northern sky seen with HAWC
Miguel Mostafa
I will present the most recent results from two years of HAWC data.
The Search for Dark Matter in the Gamma-Ray Sky
Mariangela Lisanti
The annihilation of dark matter can lead to observable signatures in high-energy gamma rays. I will review the current status of such dark matter searches with data from the Fermi Large Area Telescope. In particular, I will discuss searches within the Milky Way and Local Group, and present results from a new study that uses galaxy surveys to improve sensitivity to signals of extragalactic dark matter.
Recent Updates on the 3.5 keV Line
Esra Bulbul
A well-motivated warm dark matter candidate, the sterile neutrino, can radiatively decay and emit X-rays detectable in observations of large dark matter aggregations such as galaxies and clusters of galaxies. I will review current and past efforts to search for decaying dark matter in galaxy clusters and galaxies, with a special focus on the 3.5 keV line. Additionally, I will summarize how the recent constraints can be improved using future X-ray observations.
Resolving the High-Energy Universe Using Strong Gravitational Lensing
Anna Barnacka
Extragalactic jets are the largest particle accelerators in the universe, producing radiation ranging from radio wavelengths up to very-high-energy gamma rays. The spatial origin of the gamma-ray radiation from these sources cannot be resolved due to the poor angular resolution of the detectors. We propose to investigate gravitationally lensed blazars. Cosmic lenses magnify the emission and produce time delays between mirage images. These time delays depend on the position of the emitting regions in the source plane. We combine the precisely measured time delays at gamma rays, the well-resolved positions of radio images, a model of the lens, and the Hubble constant to elucidate the origin of gamma-ray flares from the bright blazar B2 0218+35. With this approach, we achieve 1 milliarcsecond spatial resolution of the source at gamma-ray energies. We find that the gamma-ray flares do not originate from the radio core, as commonly assumed.
Empirical Determination of Dark Matter Velocity Distribution Using Metal Poor Stars
Lina Necib
In this talk, I will show that metal-poor halo stars have kinematics similar to those of dark matter in the solar neighborhood, using the hydrodynamic zoom-in simulation Eris of the Milky Way. Based on this expectation, I extract the first empirically determined dark matter velocity distribution using the velocity dispersions of the halo stars as measured by the Sloan Digital Sky Survey, and show that with this newly found velocity distribution, the direct detection limits on dark matter scattering off nuclei are loosened by almost an order of magnitude at low dark matter masses.
On the applicability of Eddington's inversion methods to direct dark matter searches
Martin Stref
Montpellier University
Predictions for direct dark matter searches rely on knowledge of the local speed distribution of dark matter particles. This distribution can be derived within a dynamically constrained Milky Way mass model using the Eddington formalism or some extended versions of it. This method, however, can lead to inconsistent or unphysical solutions, depending on the details of the mass model. I will discuss the limitations of the method and its applicability to predictions in direct detection. I will also discuss how it may, or may not, capture the actual dynamics of dark matter by comparing with cosmological simulation results.
Astrophysical distribution of dark matter and direct detection implications
Nassim Bozorgnia
The interpretation of dark matter direct detection results is complicated due to the unknown distribution of dark matter in our local neighborhood. Astrophysical uncertainties in the dark matter distribution are a major barrier preventing a precise determination of the properties of the dark matter particle. High resolution cosmological simulations of galaxy formation including baryons have recently become possible, and provide important information on the properties of the dark matter halo. I will discuss the local dark matter density and velocity distribution of Milky Way-like galaxies obtained from recent hydrodynamical simulations. In particular I will discuss the effect of baryons on the dark matter velocity distribution, the prevalence of dark disks, and implications for dark matter direct detection.
Reverse Direct Detection: Cosmic Ray Tests of Light Dark Matter Elastic Scattering
Christopher Cappiello
Many dark matter studies have considered indirect detection (χχ → ff), direct detection (χf → χf), and collider searches (ff → χχ). We propose a new strategy to probe the dark matter elastic scattering cross section by considering cosmic-ray propagation in the galactic dark matter halo. We find that cosmic rays can lose a significant fraction of their energy through scattering with dark matter (fχ → fχ). Using existing cosmic-ray data and a simple cosmic-ray propagation model, we study the qualitative effects of dark matter scattering on cosmic-ray propagation and obtain new constraints on the elastic cross sections of light dark matter (keV–GeV), a regime that is difficult for traditional direct detection experiments to probe.
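Schematically (our notation, not necessarily that of the talk), the size of the effect is controlled by the dark matter column traversed by a cosmic ray, $\tau \sim (\sigma_{\chi p}/m_\chi) \int \rho_\chi\, d\ell$: at fixed cross section the number density, and hence the scattering probability, grows as $1/m_\chi$, which is why the light (keV–GeV) regime is where this probe is most powerful.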
Determinations of Properties of Low-Mass WIMPs from Direct Dark Matter Detection Experiments with Non-negligible Threshold Energy
Chung-Lin Shan
Xinjiang Astronomical Observatory, Chinese Academy of Sciences
In this talk, we discuss the effects of a non-negligible threshold energy on our model-independent methods for reconstructing WIMP properties directly from recoil energies measured in direct Dark Matter detection experiments. Our expressions for reconstructing the mass and the (ratios between the) spin-independent and spin-dependent WIMP-nucleon couplings have been modified accordingly. We will focus on low-mass (m_chi <~ 50 GeV) WIMPs and present some (preliminary) numerical results obtained from Monte-Carlo simulations.
Hunting for WIMPs, how low should we go?
Aaron Pierce
We discuss direct detection of WIMP dark matter in two benchmark cases: a Majorana fermion that primarily interacts via the Z-boson, and a Majorana fermion whose relic density is primarily set via co-annihilations with colored partners. We discuss the Z-mediated case with reference to a simple UV completion, the singlet-doublet model. We discuss the co-annihilation case with reference to stop co-annihilation in the Minimal Supersymmetric Standard Model. We find that Z-mediated dark matter is likely to be largely probed by future experiments, but co-annihilating dark matter may present a formidable challenge.
ABRACADABRA: A novel approach to detecting axion dark matter
Benjamin Safdi
ABRACADABRA is a proposed experiment to search for ultralight (10^-14 - 10^-6 eV) axion dark matter. When ultralight axion dark matter encounters a static magnetic field, it sources an effective electric current that follows the magnetic field lines and oscillates at the axion Compton frequency. In the presence of axion dark matter, a large toroidal magnet will act like an oscillating current ring, whose induced magnetic flux can be measured by an external pickup loop inductively coupled to a SQUID magnetometer. The readout circuit can be broadband or resonant and both are considered. ABRACADABRA is fielding a 10-cm prototype in 2017 with the intention of scaling to a 1m^3 experiment. The long term goal is to probe QCD axions near the GUT-scale. In this talk I will review the design, sources of noise, and sensitivity of the experiment.
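Schematically (natural units; standard axion electrodynamics rather than a statement specific to this talk), the axion field $a(t) \simeq (\sqrt{2\rho_{\rm DM}}/m_a)\sin(m_a t)$ modifies Ampère's law to $\nabla\times\vec{B} = \partial\vec{E}/\partial t + g_{a\gamma\gamma}\sqrt{2\rho_{\rm DM}}\cos(m_a t)\,\vec{B}_0$ in a static applied field $\vec{B}_0$, so the effective current follows the field lines and oscillates at the axion Compton frequency, as described above.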
The Axion Dark Matter Experiment
William Wester
The Axion Dark Matter Experiment, ADMX, is taking data with sensitivity to a possible dark matter candidate that would also solve the strong-CP problem of QCD. The experiment, its status, and preliminary results will be presented, along with a path to cover much of the highly motivated parameter space.
Axion and ALP Dark Matter Search with the International Axion Observatory (IAXO) and mini-IAXO
Julia Katharina Vogel
Lawrence Livermore Nat. Laboratory
The nature of dark matter (DM) remains one of the fundamental questions in cosmology. Axions are one of the current leading candidates for the hypothetical, non-baryonic DM. Especially in light of the LHC slowly closing in on WIMP searches, axions and axion-like particles (ALPs) provide a viable alternative approach to solving the dark matter problem. What makes them very appealing is that they were initially introduced to solve a long-standing QCD problem in the Standard Model of particle physics. Helioscopes search for axions produced in the core of the Sun via the Primakoff effect. The International Axion Observatory (IAXO) is a next-generation axion helioscope aiming at a sensitivity to the axion-photon coupling 1 - 1.5 orders of magnitude beyond the currently most sensitive axion helioscope (CAST). IAXO will be able to challenge the stringent bounds from SN1987A and test the axion interpretation of anomalous white-dwarf cooling. Beyond standard axions, this new experiment will be able to search for a large variety of ALPs and other novel excitations at the low-energy frontier of elementary particle physics. Mini-IAXO is proposed as a small pilot experiment increasing the sensitivity to axion-photon couplings down to a few $10^{-11}$ GeV$^{-1}$. This contribution will introduce the IAXO and mini-IAXO experiments and outline the expected science reach. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
The Axion Resonant InterAction Detection Experiment (ARIADNE)
Justin Shortino
The Axion Resonant InterAction Detection Experiment (ARIADNE) will search for evidence of the QCD axion using nuclear magnetic resonance, looking for a short-range spin-dependent interaction in the sub-millimeter range which would result from the axion. ARIADNE features spin-polarized $^3$He interacting with a rotating unpolarized tungsten mass as a probe for this interaction. We will outline the concept of the measurement and describe the status of the R&D progress to date.
A Fresh Look at Axions: Using NuSTAR Solar Observations in Search for Dark Matter
Jaime Ruz Armendariz
While the discovery of the Higgs boson at the LHC experimentally confirms the widely successful Standard Model (SM) of particle physics, the theory still falls short of explaining several fundamental features of our Universe. A major shortcoming is the SM's silence on the nature of Dark Matter (DM). Currently, axions and WIMPs are the leading DM candidates, with axions simultaneously addressing an additional weakness of the SM, i.e. its inability to explain why strong interactions do not violate charge-parity symmetry as expected from theory. Non-QCD axions, on the other hand, appear naturally in extensions of the SM, e.g. string theory. If axions exist, they will be created in great numbers in the solar core by the Primakoff effect, via the interaction of a photon from the core's radiation field with a virtual photon in a nucleus. By the inverse mechanism, one can generate an X-ray flux beyond the solar core. Extensive ground-based searches, notably the CAST experiment at CERN and the proposed next-generation helioscope IAXO, use laboratory magnets for the reverse conversion. We employ a novel approach using solar observations of NASA's hard X-ray astrophysics mission NuSTAR (Nuclear Spectroscopic Telescope Array) to search for the same process via magnetic fields in the solar corona, which, although weaker than those of laboratory magnets, are much more extensive in scale. We will report on the latest results of our research. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Emission of Photons and Relativistic Axions from Axion Stars
Abhishek Mohapatra
The number of nonrelativistic axions can be changed by inelastic reactions that produce relativistic axions or photons. Any even number of nonrelativistic axions can scatter inelastically into two relativistic axions. Any odd number of axions can annihilate into two photons. This reaction produces a monochromatic radio-frequency signal at an odd-integer harmonic of the fundamental frequency set by the axion mass. The loss rates of axions from axion stars through these inelastic reactions are calculated using the framework of a nonrelativistic effective field theory. Odd-integer harmonics of a fundamental radio-frequency signal provide a unique signature for collapsing axion stars or any dense configuration of axions.
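In outline: if an odd number $n$ of nonrelativistic axions annihilates into two photons, each photon carries $E_\gamma \approx n\,m_a/2$, so the lines sit at $\nu_n \approx n\,m_a c^2/2h$ with $n = 1, 3, 5, \dots$, i.e. odd harmonics of the fundamental frequency $m_a c^2/2h$ set by the axion mass.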
Signatures of dark-matter sub-structure in axion direct detection experiments
Joshua Foster
ABRACADABRA-10cm is a new experiment which seeks to detect axion dark matter through its interactions with the electromagnetic field. The experiment, which is planned to start collecting data this year, will probe unstudied regions of axion parameter space and lay the groundwork for future, larger-scale efforts. I will discuss the results of numerical and analytical work towards understanding the signatures of axion dark matter substructure in the experiment. In particular, I will focus on the effects of dark matter streams, especially as informed by cosmological N-body simulation data.
Cherenkov Telescope Array: The Next Generation Gamma-ray Observatory
A. Nepomuk Otte
The Cherenkov Telescope Array (CTA) will be a new observatory to study very-high-energy gamma-ray sources. It is designed to achieve an order of magnitude improvement in sensitivity in the 20 GeV to 300 TeV energy band compared to currently operating instruments: VERITAS, MAGIC, and H.E.S.S. CTA will probe known sources with unprecedented sensitivity, angular resolution, and spectral coverage, while also detecting hundreds of new sources. This presentation describes the science drivers for CTA and the status of the project with an emphasis on the planned US contribution.
Overview and current status of the ALPACA experiment
Takashi Sako
Yachay Tech University
The ALPACA (Andes Large area PArticle detector for Cosmic ray physics and Astronomy) experiment is aimed at observing cosmic gamma rays above 10 TeV in the southern sky with a wide field of view and high sensitivity. We will construct an 83,000 m^2 surface air-shower array and a 5,400 m^2 underground muon detector array on a highland (Chacaltaya Hill) at an altitude of 4,740 m, halfway up Mt. Chacaltaya on the outskirts of La Paz, Bolivia. Prior to the construction of the full ALPACA array, we are planning to build a 1/10-scale prototype air-shower array. In this talk, the overview and current status of the ALPACA experiment will be reported.
All-Sky Medium Energy Gamma-ray Observatory (AMEGO) - A discovery mission for the MeV gamma-ray band
Jeremy Perkins
NASA/GSFC
The MeV domain is one of the most underexplored windows on the Universe. From astrophysical jets and extreme physics of compact objects to a large population of unidentified objects, fundamental astrophysics questions can be addressed by a mission that opens a window into the MeV range. AMEGO is a wide-field gamma-ray telescope with sensitivity from ~200 keV to >10 GeV. AMEGO provides three new capabilities in MeV astrophysics: sensitive continuum spectral studies, polarization measurements, and nuclear line spectroscopy. AMEGO will consist of four hardware subsystems: a double-sided silicon strip tracker with analog readout, a segmented CZT calorimeter, a segmented CsI calorimeter and a plastic scintillator anticoincidence detector, and will operate primarily in an all-sky survey mode. In this presentation we will describe the AMEGO mission concept and scientific performance.
Ex Luna Scientia: The Lunar Occultation Explorer (LOX)
University of Alabama in Huntsville
The Lunar Occultation Explorer (LOX) is a paradigm shift - a next-generation mission concept that will provide new capabilities in time-domain nuclear astrophysics and establish the Moon as a platform for nuclear astrophysics. Currently under review by NASA's Explorer Program, LOX's performance requirements are driven by focused science goals designed to resolve the enigma of Type-Ia supernovae (SNe Ia) and their role in galactic evolution and cosmology. LOX will survey and continuously monitor the Cosmos in the MeV regime (0.1-10 MeV), a unique capability that supports both the primary science goals as well as multi-messenger detection and monitoring campaigns, by leveraging the Lunar Occultation Technique (LOT). Key benefits of the LOX/LOT approach include maximizing the ratio of sensitive-to-total deployed mass, low implementation risk, and demonstrated operational simplicity that leverages extensive experience with planetary orbital geochemistry investigations. LOX will also deliver a time-domain survey of the nuclear cosmos. Proof-of-principle efforts have validated all aspects of the mission using previously deployed lunar science assets, including the first high-energy gamma-ray source detected at the Moon. LOX mission design, performance, and science will be presented.
Fermi's view of the gamma-ray sky above 10 GeV
Pablo Saz Parkinson
The University of Hong Kong
We present some of the main results from the Third Catalog of Hard Fermi-LAT Sources (3FHL). This catalog, based on the first 7 years of LAT data using the Pass 8 event-level analysis, contains 1556 sources characterized in the 10 GeV--2 TeV energy range. The sensitivity and angular resolution are improved by factors of 3 and 2 relative to the previous LAT catalog in this energy range (1FHL). Most 3FHL sources (79%) are extragalactic, while 9% are Galactic and 12% are unassociated (or associated with a source of unknown nature). The catalog includes 214 new gamma-ray sources. The 3FHL catalog provides an excellent opportunity to relate observations from space to those accessible from the ground (e.g. HESS, MAGIC, VERITAS, HAWC), including in the near future with the Cherenkov Telescope Array.
Galactic Sources with HAWC
Petra Huentemeyer
Michigan Tech
The High-Altitude Water Cherenkov (HAWC) TeV Gamma-Ray Observatory in Mexico is a wide-field-of-view telescope with a nearly 100 % duty cycle. It has been taking data with a complete detector configuration since spring 2015 and is particularly well suited to measure very energetic transient and extended gamma-ray emission in our Galaxy. In my presentation, I will give an overview of the most recent HAWC results relating to Galactic sources. This will include highlights from the 2HWC catalog, measurements of PWNe in the general Geminga region, and observations of binary systems and extended structures such as the Northern Fermi Bubble, molecular clouds, or diffuse emission at TeV energies.
VERITAS and Fermi-LAT observations of TeV gamma-ray sources from the second HAWC catalog
Nahee Park
The HAWC (High Altitude Water Cherenkov) observatory recently published their second source catalog with 39 very high energy gamma-ray sources based on 507 days of exposure time. We studied thirteen HAWC sources without known counterparts with VERITAS and Fermi-LAT data. VERITAS, an array of four imaging atmospheric Cherenkov telescopes observing gamma rays with energies higher than 85 GeV, can provide a more detailed image of the source with much shorter exposure time and with better angular resolution. With Fermi-LAT data, we searched for the counterparts at lower energies (E>10 GeV). VERITAS found weak gamma-ray emission in the region of PWN DA495 coinciding with 2HWC J1953+294 in this follow-up study. We will present results focusing on the PWN DA495 region and the SNR G54.1+0.3 region where Fermi-LAT detected a GeV counterpart of SNR G54.1+0.3, a known TeV source detected by both VERITAS and HAWC.
Follow-up VERITAS and NuSTAR observations of Galactic HAWC gamma-ray sources
Michelle Hui
NASA/MSFC
The 2HWC gamma-ray catalog was recently released using 17 months of data from the HAWC observatory, a TeV surveying instrument located in Mexico. A total of 39 sources were detected, of which ~40% are not near known TeV sources and over half do not have clear association with known astrophysical sources. Many are extended and encompass multiple known X-ray and gamma-ray sources. In an effort to identify counterparts that could explain the complex morphology seen by HAWC and understand the emission mechanisms, follow-up observations by VERITAS in TeV have confirmed some detections. We obtained additional follow-up time for two Galactic HAWC sources in hard X-rays by NuSTAR. These are the initial observations of the HAWC-VERITAS-NuSTAR Galactic Survey Legacy program.
A novel maximum likelihood method for VERITAS analysis
Tom Brantseg
Gamma-ray astronomy provides a powerful way to study particle acceleration and diffusion within high-energy astrophysical phenomena such as supernova remnants and pulsar wind nebulae. Constructing a coherent physical picture of these sources requires the ability to detect extended regions of gamma-ray emission, the ability to analyze small-scale spatial variation within these regions, and the ability to synthesize data from multiple observatories across multiple wavebands. Air Cherenkov telescopes provide fine angular resolution (<1 degree), but their limited fields of view typically make detection of extended sources challenging. Maximum likelihood methods are well suited to simultaneous analysis of multiple fields with overlapping sources, and to combining data from multiple gamma-ray observatories. These methods allow for estimation of the cosmic-ray background for air Cherenkov observations of sources as large as the entire telescope field of view. We report here on the development of a maximum likelihood approach for the air Cherenkov observatory VERITAS and discuss potential applications of this method.
Observation of the Cygnus Region using HAWC data
Binita Hona
Michigan Technological University (for the HAWC Collaboration)
The Cygnus region consists of multiple gamma-ray source types such as pulsar wind nebulae (PWN), supernova remnants, binary systems, and star clusters. Several gamma-ray instruments have observed gamma-ray sources in this region. For instance, Fermi-LAT found gamma-ray emission at GeV energies due to a Cocoon of freshly accelerated cosmic rays, which is co-located with a known PWN seen by several TeV gamma-ray observatories. TeV J2032+4130 is likely powered by the pulsar PSR J2032+4127 based on the multi-wavelength observation and asymmetric morphology reported by VERITAS. The High Altitude Water Cherenkov (HAWC) observatory has reported five sources within the Cygnus region, three of which lie in the vicinity of the cocoon region reported by Fermi-LAT. This presentation will discuss the analysis of data collected with the HAWC instrument to provide a deeper understanding of the Cygnus cocoon. The study of HAWC data will provide more information regarding the morphology, emission origin, and the correlation with the GeV emission.
Searching for Gamma-Ray Signal from Giant Molecular Clouds with HAWC
Hugo Ayala
Giant Molecular Clouds (GMCs) are large reservoirs of gas and dust in the Galaxy, which makes them ideal sites for the production of gamma-ray emission from the interaction of cosmic rays with the ambient gas. This gamma-ray emission is part of the galactic diffuse gamma-ray emission, which is useful for tracing the propagation and distribution of cosmic rays throughout our Galaxy. The search for gamma-ray emission from GMCs may allow us to probe the flux of cosmic rays in distant galactic regions and compare it with the local cosmic-ray flux measured at Earth. The High Altitude Water Cherenkov (HAWC) Observatory is located at 4100 m above sea level in Mexico. It is designed to measure high-energy gamma rays between 300 GeV and 100 TeV. HAWC possesses a large field of view of 2 sr and good sensitivity to spatially extended sources, which currently makes it the ground-based observatory best suited to detect extended regions. HAWC data are used to search for gamma-ray emission from the Aquila Rift, Hercules, and Taurus GMCs. Preliminary results will be presented.
Primordial Black Holes and Gamma Ray Emission near pulsars with HAWC
James Thomas Linnemann
The High Altitude Water Cherenkov (HAWC) gamma-ray observatory is a continuously operated, wide field-of-view (FOV) observatory sensitive to 100 GeV - 100 TeV gamma rays and cosmic rays. HAWC has been making observations since summer 2012 and officially commenced data-taking operations with the full detector in March 2015. With an FOV of 2 steradians, HAWC observes 2/3 of the sky in 24 hours. HAWC is sensitive to transients and also to galactic steady sources, including both point-like and more diffuse emission. HAWC can be used to search for astrophysical signatures of dark matter (DM) and primordial black holes (PBHs). I will present HAWC's latest results on searches for evaporating PBHs, which would appear as transients in our archived data, but with energy and time signatures distinct from GRBs. HAWC's measurement of the spatial distribution of TeV gamma-ray emission from the region surrounding nearby pulsars is also relevant to the interpretation of the excess of positrons observed at Earth. I will also present our measurements of TeV gamma-ray emission near Geminga and the Monogem pulsar.
High-energy neutrino interactions: first cross section measurements at TeV and above
Mauricio Bustamante
Neutrino interactions, though feeble, are tremendously important in particle physics and astrophysics. Still, at neutrino energies above ~350 GeV there has been, up to now, no direct experimental information on neutrino interactions; calculations rely on extrapolations from lower energies. Now, for the first time, we can measure the neutrino-nucleon cross section at the TeV scale and above, thanks to the recent discovery, by IceCube, of high-energy astrophysical neutrinos. We will show new cross section measurements extracted from the 4-year sample of IceCube High Energy Starting Event showers between 20 TeV and 2 PeV. The measurements agree with standard cross-section calculations and constrain new physics beyond the Standard Model at these energies.
New physics searches with TeV-scale neutrinos
IceCube's observation of high-energy astrophysical neutrinos has opened the astrophysical neutrino window. As we accumulate statistics, IceCube not only starts characterizing the astrophysical neutrino component, but also makes improved measurements of the highest-energy atmospheric neutrinos. In this talk I will discuss how we can use both high-energy atmospheric neutrinos and astrophysical neutrinos as probes of new physics.
Imaging Galactic Dark Matter with IceCube High-Energy Cosmic Neutrinos
Ali Kheirandish
The origin of the observed extraterrestrial neutrinos is still unknown, and their arrival directions are compatible with an isotropic distribution. This observation, together with dedicated studies of Galactic plane correlations, suggests a predominantly extragalactic origin. Dark matter-neutrino interactions, which have been extensively studied in cosmology, would thus lead to a slight suppression of the flux at energies below a PeV and a deficit of events in the direction of the Galactic center, which would be seen by IceCube. I will present results of a recent analysis using the four-year high-energy starting event dataset to constrain the strength of dark matter-neutrino interactions, and show that in spite of low statistics IceCube can probe regions of the parameter space inaccessible to current cosmological methods.
Searching for Planck Scale Physics with High Energy Neutrinos
Some Planck-scale physics and quantum gravity models predict a slight violation of Lorentz invariance (LIV) at high energies. High-energy cosmic neutrino observations can be used to test for such LIV. Operators in an effective field theory (EFT) can be used to describe the effects of LIV, including the kinematically allowed energy losses of possible superluminal neutrinos. These losses can be caused by both vacuum pair emission (VPE) and neutrino splitting. Assuming a reasonable distribution of extragalactic neutrino sources, we determined the resulting after-loss neutrino spectra using Monte Carlo propagation calculations. We then compared them with the neutrino spectrum observed by IceCube to determine the implications for Planck-scale physics. If the drop-off in the observed IceCube neutrino flux above 2 PeV is caused by LIV, a potentially significant pileup would be produced just below the drop-off energy in the case of CPT-even operator dominance. However, such a clear drop-off would not be observed if a CPT-odd, CPT-violating term dominates.
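As a rough guide (for an energy-independent velocity excess $\delta \equiv v_\nu - 1$, simpler than the EFT operators considered here), vacuum pair emission $\nu \to \nu\, e^+ e^-$ becomes kinematically allowed above the Cohen-Glashow threshold $E_{\rm th} \simeq 2 m_e/\sqrt{\delta}$; a drop-off at $E \sim 2$ PeV would thus correspond to $\delta \sim (2 m_e/E)^2 \approx 3\times10^{-19}$.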
Constraining the Flavor Structure of Lorentz Violation Hamiltonian with the Measurement of Astrophysical Neutrino Flavor Compositions
Wei-Hao Lai
National Chiao-Tung University
We study Lorentz violation effects on the flavor transitions of high-energy astrophysical neutrinos. It is shown that the appearance of a Lorentz-violating Hamiltonian can drastically change the flavor transition probabilities of astrophysical neutrinos. Predictions of Lorentz violation effects on the flavor compositions of astrophysical neutrinos arriving at Earth are compared with the IceCube flavor composition measurement, which analyzes astrophysical neutrino events in the energy range between $25~{\rm TeV}$ and $2.8~{\rm PeV}$. Such a comparison indicates that the future IceCube-Gen2 will be able to place stringent constraints on the Lorentz-violating Hamiltonian in the neutrino sector. We work out these expected constraints for different flavor structures of the Lorentz-violating Hamiltonian. In some cases these expected constraints can improve upon the current constraints obtained from other types of experiments by more than two orders of magnitude.
Hidden neutrino interactions with dark energy: Effects on oscillation probabilities and tests with high-energy neutrinos
Niki Klop
If dark energy is some kind of scalar field rather than a cosmological constant and can interact with the neutrino sector, it might cause CPT/Lorentz-violating effects and also modify the neutrino oscillation phenomenology. The effects will be insignificantly small compared to the ordinary oscillation effect at low energies, but might become visible at very high energies, since the terms in the transition probability induced by interactions with dark energy do not depend on the energy, while the ordinary component decreases as 1/E. If such dark energy effects were found, it would be a strong indication that the nature of dark energy is different from a cosmological constant. We investigate the effect of such a dark energy interaction in the three-neutrino scheme and use IceCube data to put constraints on the new oscillation parameters that emerge from this interaction.
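The energy scaling described above can be made explicit in a two-flavor sketch (our illustration): with a constant term $g$ added to the vacuum Hamiltonian, the oscillation phase is roughly $\Delta\phi \sim (\Delta m^2/2E + g)\,L$, so the standard contribution dies off as $1/E$ while the dark-energy-induced term survives, making the highest-energy astrophysical neutrinos the most sensitive probe.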
Interpretations of Astrophysical Neutrino Data
Thomas Weiler
Vanderbilt Univ
Neutrino spectral indices for Galactic vs. extragalactic sources, and the potential use of Glashow events, are analyzed.
Multi-PeV Signals from a New Astrophysical Neutrino Flux Beyond the Glashow Resonance
Ranjan Laha
The IceCube neutrino discovery was punctuated by three showers with $E_\nu$ ~ 1-2 PeV. Interest is intense in possible fluxes at higher energies, though a marked lack of $E_\nu$ ~ 6 PeV Glashow resonance events implies a spectrum that is soft and/or cutoff below ~few PeV. However, IceCube recently reported a through-going track event depositing 2.6 $\pm$ 0.3 PeV. A muon depositing so much energy can imply $E_{\nu_\mu} \gtrsim$ 10 PeV. We show that extending the soft $E_\nu^{-2.6}$ spectral fit from TeV-PeV data is unlikely to yield such an event. Alternatively, a tau can deposit this much energy, though requiring $E_{\nu_\tau}$ ~10x higher. We find that either scenario hints at a new flux, with the hierarchy of $\nu_\mu$ and $\nu_\tau$ energies suggesting a window into astrophysical neutrinos at $E_\nu$ ~ 100 PeV if a tau. We address implications, including for ultrahigh-energy cosmic-ray and neutrino origins.
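For reference, the Glashow resonance invoked here is $\bar\nu_e\, e^- \to W^-$ on electrons at rest, resonant at $E_\nu = m_W^2/2m_e \approx 6.3$ PeV, which is why the absence of events near 6 PeV constrains the spectrum below a few PeV.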
How bright can the brightest neutrino source be?
Shin'Ichiro Ando
After the discovery of extraterrestrial high-energy neutrinos, the next major goal of neutrino telescopes will be identifying the astrophysical objects that produce them. The flux of the brightest source, Fmax, however, cannot be probed by studying the diffuse neutrino intensity. We aim at constraining Fmax by adopting a broken power-law flux distribution, a hypothesis supported by the observed properties of generic astrophysical sources. The first estimate of Fmax comes from the fact that we can only observe one universe, and hence the expected number of sources above Fmax cannot be too small compared with one. For abundant source classes such as starburst galaxies, this one-source constraint yields a value of Fmax that is an order of magnitude lower than the current upper limits from point-source searches. We then derive upper limits on Fmax by assuming that the angular power spectrum is still consistent with neutrino shot noise. We find that the limits obtained with upgoing muon neutrinos in IceCube can already be quite competitive, especially for rare but bright source populations such as blazars. The limits will improve nearly quadratically with exposure, and will therefore be even more powerful for the next generation of neutrino telescopes.
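The 'one-source' argument can be summarized as requiring that the expected number of sources brighter than $F_{\rm max}$, $N(>F_{\rm max}) = \int_{F_{\rm max}}^{\infty} (dN/dF)\, dF$, not be much smaller than unity, since one such source is, by construction, present in our single observable universe (schematic form; the analysis uses a broken power law for $dN/dF$).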
High-Energy Neutrinos from Supernovae
Kohta Murase
Neutrinos from supernovae (SNe) are crucial probes of the explosive phenomena at the deaths of massive stars and of neutrino physics. High-energy neutrinos are produced through hadronic processes by cosmic rays, which can be accelerated during the interaction between the SN ejecta and the circumstellar material (CSM). We investigate high-energy neutrino emission from Galactic SNe. Recent observations of extragalactic SNe have revealed that a dense CSM is commonly expelled by the progenitor star. We show that IceCube/KM3NeT can detect about 10-1000 events from Type II-P/II-L SNe at a distance of 10 kpc. A successful detection will give us a multi-energy neutrino view of SN physics and new opportunities to study neutrino properties, as well as clues to the origin of cosmic rays. GeV-TeV neutrinos may also be seen by KM3NeT, Hyper-Kamiokande, and PINGU.
Diffuse and point-source neutrino emission from Type IIn supernovae
Maria Petropoulou
Type IIn supernovae (SNe) explode in dense circumstellar media that have been modified by the SNe progenitors at their last evolutionary stages. The interaction of the freely expanding SN ejecta with the circumstellar medium gives rise to a shock wave propagating in the dense SN environment, which may accelerate protons to multi-PeV energies. Inelastic proton-proton collisions between the shock-accelerated protons and those of the circumstellar medium can lead to multi-messenger signatures. I will present our results on the diffuse neutrino emission from SNe IIn in comparison to IceCube observations. In particular, SNe IIn could be the dominant component of the diffuse astrophysical flux only if 4 per cent of all core-collapse SNe were of this type and 30 per cent of the shock energy was channeled into accelerated protons. Even more stringent constraints on the acceleration efficiency can be placed by the identification of a single SN IIn as a neutrino point source with IceCube using up-going muon neutrinos.
Difficulties of Star-forming Galaxies as the Source of IceCube Neutrinos
Takahiro Sudoh
Star-forming and starburst galaxies are among the candidate sources of the high-energy neutrino flux detected by the IceCube experiment. Previous studies mainly used simple correlations between gamma-ray and infrared luminosities and assumed a common value of the gamma-ray spectral index for all starburst galaxies, though it should depend on the properties of individual galaxies. In this work, we present a new theoretical prediction of the gamma-ray and neutrino flux from star-forming galaxies using a semi-analytical model of cosmological galaxy formation, which quantitatively reproduces many observations of local and high-redshift galaxies, such as luminosity functions and the cosmic star-formation history. We construct realistic models of gamma-ray and neutrino emission from individual galaxies at various redshifts from their properties, taking into account cosmic-ray production, propagation, and interaction inside them, and assuming that the energy densities of cosmic rays and magnetic fields are in equilibrium in each galaxy. We calibrate our model using data on local galaxies detected by Fermi-LAT to make reliable calculations. Our baseline model, which is in remarkable agreement with gamma-ray data of local star-forming galaxies, predicts that less than 10% of the IceCube data can be explained even with the most optimistic parameters. Therefore, other sources are required to explain the IceCube neutrinos.
The Galactic Contribution to IceCube's Astrophysical Neutrino Flux
Peter Denton
Niels Bohr International Academy
High energy neutrinos have been detected by IceCube, but their origin remains a mystery. Determining the sources of this flux is a crucial first step towards multi-messenger studies. In this work we systematically compare two classes of sources with the data: galactic and extragalactic. We build a likelihood function on an event by event basis including energy, event topology, absorption, and direction information. We present the probability that each high energy event with deposited energy $E_{\rm dep}>60$ TeV in the HESE sample is galactic, extragalactic, or background. The galactic fraction of the astrophysical flux has a best fit value of $0.07^{+0.09}_{-0.06}$ and zero galactic flux is allowed at $1.2\sigma$.
Radio detection of cosmic rays - achievements and opportunities
Tim Huege
Over the last 15 years, we have achieved a detailed understanding of the physics of radio emission from extensive air showers, and have consequently succeeded in developing sophisticated detection schemes and analysis approaches. In particular, we have demonstrated that the important air-shower parameters arrival direction, particle energy and depth of shower maximum can be reconstructed reliably from radio measurements, with both precision and accuracy comparable with those of other detection techniques. In this talk I will review the achievements of the radio detection technique and discuss the potential for future application in existing and new experiments for cosmic-ray detection.
Results of the TREND experiment
Olivier Martineau
We demonstrate here the ability of TREND, a self-triggered radio array, to autonomously detect and identify air showers induced by cosmic rays. TREND (Tianshan Radio Experiment for Neutrino Detection) is an array of 50 dipole antennas, deployed over a total area of 1.5 km² on the site of the 21CMA radio interferometer in the radio-quiet Tianshan mountains (China), and running between 2009 and 2014. The TREND DAQ system was designed to allow for a trigger rate of up to 200 Hz per antenna, based on a very basic signal-over-threshold trigger condition. The reconstruction and discrimination of air showers from the ultra-dominant background noise is then performed through offline treatment. We present here, for the first time, the complete analysis of the TREND data. We first detail the background-rejection algorithm, which allowed us to select about 500 air-shower radio candidates from the ~10^9 radio pulses recorded with the TREND array. We then show that the distribution of the arrival directions of these 500 candidates is compatible with what is expected for air showers. We finally compute the TREND air-shower detection efficiency, thanks to an end-to-end simulation chain which is detailed here. Given the fairly basic TREND acquisition chain, these results are extremely encouraging for future experiments using radio to detect air showers, such as the Giant Radio Array for Neutrino Detection.
Observations of the Very Local Interstellar Medium with the IBEX and Voyager Spacecraft
Eric Zirnstein
The Interstellar Boundary Explorer (IBEX) is an Earth-orbiting spacecraft equipped with two single-pixel cameras that detect neutral atoms produced by the interaction of the solar wind (SW) with the very local interstellar medium (VLISM), as well as neutral atoms flowing in from the VLISM itself. After its launch in 2008, IBEX discovered the unexpected existence of the "ribbon," a nearly circular arc of enhanced hydrogen ENA flux at ~keV energies. The enhanced ribbon fluxes are believed to originate from look directions perpendicular to the local interstellar magnetic field draped around the heliosphere. A comparative analysis of ribbon flux simulations with IBEX data derived a "pristine" interstellar field strength of ~3 μG just beyond the influence of the heliosphere, directed towards ~(26°, 50°) in galactic longitude/latitude. IBEX observations complement the only in situ observations of the VLISM made by the Voyager 1 spacecraft. Since crossing the heliopause in August 2012, Voyager 1 has been measuring the VLISM plasma properties, including the galactic cosmic ray flux, (indirectly) the compressed interstellar plasma density, as well as the interstellar magnetic field draped around the heliosphere. This talk will review key IBEX and Voyager observations that inform us of the VLISM environment, in particular the local interstellar magnetic field, which is important for understanding the galactic cosmic ray fluxes observed at Earth.
Completing and Improving the TeV Cosmic-ray Sky with HAWC
Daniel Fiorino
University of Maryland College Park
In 2015, the HAWC Observatory was completed and began operation as the most sensitive TeV cosmic-ray detector in the Northern Hemisphere. Since that time, we have recorded over 1 trillion cosmic-ray air showers, designed a likelihood-based cosmic-ray energy reconstruction, and implemented a new minimally-biased method for reconstructing all-sky anisotropy. These three advances in statistics, energy resolution, and signal recovery allow us to better disentangle the properties of the TeV cosmic-ray anisotropy from detector effects. This has led to a combined anisotropy sky map with IceCube in the Southern Hemisphere. Although the nature of this anisotropy has been explored and modeled, the exact realization of the anisotropy could hold clues important to describing local accelerators of observed cosmic rays and the local magnetic fields through which they propagate. We will share our results for both HAWC and the combined HAWC-IceCube anisotropy sky maps.
A search for cosmic-ray proton anisotropy with the Fermi Large Area Telescope
Justin Vandenbroucke
Although cosmic rays are nearly isotropic, ground-based arrays sensitive to TeV cosmic rays have measured a small anisotropy in right ascension. Understanding the morphology and energy dependence of this anisotropy can yield insight into cosmic-ray sources and propagation in the local magnetic field. The Fermi Large Area Telescope (LAT) is optimized for gamma-ray measurements, but it records cosmic-ray protons at an even higher rate. We present a Fermi LAT search for cosmic-ray proton anisotropy at energies ~100 GeV and greater. The energy range is complementary to ground-based measurements. Moreover, while ground-based instruments cover only part of the sky and most are only sensitive to the right ascension component of the anisotropy, the LAT is sensitive to the full sky and to all orientations of anisotropy.
Symmetric Achromatic Variability: A New and Totally Unexpected Phenomenon
Anthony Readhead
We have discovered a rare new form of long-term radio variability in the light-curves of active galaxies (AG) (arXiv:1702.06582, arXiv:1702.05519) --- Symmetric Achromatic Variability (SAV) --- a pair of opposed and strongly skewed peaks in the radio flux density observed over a broad frequency range. We propose that SAV arises through gravitational milli-lensing when relativistically moving features in AG jets cross gravitational lensing caustics created by 10^3-10^6 solar mass subhalo condensates or black holes located within intervening galaxies. The lower end of this mass range has been inaccessible to previous gravitational lensing techniques. This new interpretation of some AG variability can easily be tested, and, if it passes these tests, it will enable a new and powerful probe of the cosmological matter distribution on these intermediate mass scales, as well as provide, for the first time, micro-arcsecond resolution of the nuclei of AG --- a factor of 30--100 greater resolution than is possible with ground-based millimeter VLBI.
F-GAMMA program: high-cadence, multi-wavelength radio monitoring as a probe of the physical conditions and variability processes in AGN jets
Max-Planck-Institut für Radioastronomie
The jets of active galactic nuclei (AGN) are among the most powerful systems in the Universe. Their emission spans an extremely wide energy range, from radio to gamma-rays or even TeV energies, and often shows pronounced variability on timescales anywhere between a few years and several minutes. High-cadence, multi-band monitoring programs are therefore essential for investigating their physical conditions and variability processes. F-GAMMA (Fermi-GST AGN Multi-frequency Monitoring Alliance) was a program for monitoring the broad-band radio emission of about 90 Fermi-GST AGN, 25 of which have also been detected at TeV energies. The sources were observed from 2007 to 2015 at 12 radio frequencies between 2.6 GHz and 345 GHz with a mean cadence of 1-1.3 months. Both flux-density and (linear and circular) polarization variability were monitored. Here we present a compilation of science highlights from the F-GAMMA program, which include various multi-band correlation and population studies (e.g. gamma-ray loudness versus radio variability, radio versus gamma-ray fluxes, variability of FSRQs and BL Lacs), the localization of the gamma-ray emission site in AGN jets, the calculation of their Doppler factors using their multi-wavelength variability, as well as a unification scheme for their broad-band spectral variability patterns, which show extremely diverse behavior.
Identifying Short, Extreme Blazar Flares with the HAWC Real-Time Flare Monitor
Thomas Weisgarber
We present a search for hour-scale very high energy (VHE) flares from 187 blazars monitored by the HAWC observatory. With a wide field of view of ~2 sr and sensitivity to energies above a few hundred GeV, HAWC functions as a survey instrument and facilitates searches for rapid variability in the VHE band. The currently operational HAWC real-time flare monitor takes advantage of this capability by issuing alerts within minutes of the identification of flaring activity. In this presentation, we describe the real-time flare monitor and report on the detection of several rapid flares in over 2 years of data collected between November 2014 and February 2017. We interpret these observations as an unbiased constraint on the rate of extreme blazar flares. We also summarize the prospects for future multiwavelength studies of extreme flares detected by the real-time flare monitor to provide clues into the mechanisms powering the blazar jets and probe the particles and fields in intergalactic space.
The AMEGO View of MeV AGNs
Tonia Venters
Extensive observations by Fermi, AGILE, and TeV telescopes have opened a new window into the high-energy physical processes of AGNs and raised questions about the physics of their jets, their formation and cosmological evolution, and their impact on their environments and the growth of structure in the Universe. Multiwavelength observations in X-rays and at GeV and TeV energies point to a large class of blazars whose peak output is in the poorly explored MeV band. With unprecedented sensitivity between 200 keV and 10 GeV, the All-Sky Medium Energy Gamma-ray Observatory (AMEGO) will fill in the MeV gap in blazar spectral energy distributions, providing crucial clues about their emission mechanisms in this regime. Also, with its wide field of view, AMEGO will survey the entire sky every 3 hours, allowing it to monitor variations in blazar light curves on short and long timescales that arise from changes in their jets. Furthermore, multiwavelength observations of the blazar population indicate that the sub-population of MeV blazars are among the most distant and luminous AGNs; thus, AMEGO observations of MeV blazars will allow it to probe the growth of supermassive black holes at earlier epochs than allowed by other types of AGNs.
Multi-Wavelength Correlations and AGN as Possible Particle Accelerators
John Gallagher
Robust connections exist between various energy regions in the spectra of nearby galaxies. The flux ratios from widely separated spectral regions are often remarkably constant while originating via very different processes with varying efficiencies. Although the radio-far infrared (FIR) correlation is best known, consistent flux ratio relationships also are found between gamma-rays and the FIR, as well as between gamma-ray and radio fluxes. These relationships are understood in cases where the underlying linkage involves related power sources, such as massive stars. However, some systems containing substantial AGNs as well as starbursts still fall close to the standard flux ratio correlations. In this talk, I will explore some of the astronomical issues playing into the interpretation of these correlations and also briefly touch on the range of cosmic power sources that may produce these patterns.
Astrophysics with Novel Observables
Despite intensive research, some fundamental properties of the most luminous particle accelerators and transients, such as AGNs and GRBs, remain unknown. The location and mechanisms of particle acceleration, the connection between flaring and quiescent states, and leptonic vs. hadronic emission are open questions. The complexity of the environments and processes makes it hard to disentangle different scenarios. This suggests complementing conventional observables, like the broadband spectrum, with novel statistical observables such as the power spectral density (PSD) and the flux probability distribution function (PDF) extracted from lightcurves, together with high- and low-energy polarisation. While the PSD encodes the temporal structure of the dynamical, particle-acceleration and radiation processes, as well as the observing cadence, the PDF encodes the fundamental form of the emission processes (additive vs. multiplicative). Polarised emission provides constraints on region geometry, magnetic fields, and scattering processes that are independent of those from the above observables. These observables and related methods are also relevant for population studies: for example, the time-series methods used to compute the PSD can also be used to estimate transient detection probability and the resultant changes in the source flux distribution, $\frac{dN}{dF}$. A detailed theoretical framework capable of predicting these statistical observables from first principles in these sources is still in its infancy. Finally, they are complementary, and potentially crucial, crosschecks to neutrino and gravitational wave observations in the multi-messenger era. In this presentation, I demonstrate the potential of such novel observables and related methods to probe the physics of individual particle accelerators and of the population at large, applying them to prominent blazars such as Mrk 421, BL Lac, and PKS 2155.
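As a minimal sketch of how the two statistical observables named above can be estimated from an unevenly sampled lightcurve (synthetic data; a real analysis must additionally handle gaps, red-noise leakage, and measurement errors):

import numpy as np
from astropy.timeseries import LombScargle

# Synthetic unevenly sampled lightcurve: times t (days) and fluxes f.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 100.0, 500))
f = 1.0 + 0.3 * np.sin(2 * np.pi * t / 10.0) + rng.normal(0.0, 0.1, t.size)

# PSD: temporal structure of the variability (Lomb-Scargle periodogram).
freq, psd = LombScargle(t, f).autopower(normalization='psd')

# PDF: fundamental form of the emission process (additive vs. multiplicative
# processes produce roughly Gaussian vs. lognormal flux distributions).
pdf, bin_edges = np.histogram(f, bins=30, density=True)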
The magnetic reconnection model for blazar emission
Dimitrios Giannios
The recent observations of powerful, minute-timescale TeV flares from several blazars pose serious challenges to theoretical models of blazar emission. In this talk, I will discuss the magnetic reconnection model for blazar flaring. I argue that radiation emitted from the reconnection layers can account for the observed "envelope" of ~day-long blazar activity as well as for the fastest observed flares. Moreover, I will show that the reconnection model predicts emission regions characterized by rough equipartition between radiating particles and magnetic fields, in agreement with observations. Finally, I will show examples of lightcurves and spectra calculated directly with first-principles particle-in-cell simulations of the magnetic reconnection layer.
Blazar Halo Morphology as a Probe of Helical Intergalactic Magnetic Fields
Andrew Long
In models of early universe cosmology, primordial magnetic fields with helicity can be created during cosmological inflation, and they may play a role in the generation of the matter / antimatter asymmetry of the universe. Such a primordial magnetic field will persist in the universe today as an intergalactic magnetic field, and the discovery of this cosmological relic will open a new window onto the early universe. In this talk I will discuss a new probe of helical intergalactic magnetic fields through TeV blazar halo morphology. The emission of TeV gamma rays from blazars at cosmological distances will induce an electromagnetic cascade when the TeV gamma rays are incident upon starlight and produce electron-positron pairs. These charged leptons are deflected by the presence of an intergalactic magnetic field, which forms a halo of GeV cascade gamma rays around the blazar. In this talk I will discuss how the halo can acquire a parity-violating shape if the intergalactic magnetic field is helical.
Weak lensing science with the WFIRST high-latitude survey
The WFIRST high-latitude survey (HLS) will provide an exciting dataset for constraining dark energy through a variety of measurement methods. In this talk, I will describe the current plans for the WFIRST HLS and the potential for competitive constraints on dark energy with weak lensing. I will also discuss the potential synergies with other surveys during the same time-frame, including the opportunities provided by joint analysis with LSST, Euclid, and CMB-S4. Finally, I will present the results of ongoing efforts to understand the impact of near-infrared detector systematics on weak lensing measurements, and to place requirements on the hardware to ensure that the scientific goals of the survey can be met.
Probing Dark Energy with the Canadian Hydrogen Intensity Mapping Experiment (CHIME)
Richard Shaw
CHIME will use the 21cm emission line of neutral hydrogen to map large-scale structure between redshifts of 0.8 and 2.5. By measuring Baryon Acoustic Oscillations (BAO) we will place constraints on the dark energy equation of state as it begins to dominate the expansion of the Universe, particularly at redshifts poorly probed by current BAO surveys. In this talk I will introduce CHIME, a transit radio interferometer designed specifically for this purpose. I will discuss its goals and describe the powerful new analysis techniques we have developed to confront the many challenges of such observations, in particular removal of astrophysical foregrounds which are six orders of magnitude larger than the 21cm signal. A smaller 40m x 37m pathfinder telescope is currently operating at the DRAO in Penticton, BC, and the full-sized 80m x 100m instrument will be completed this year. I will report on current progress, and the lessons already learned.
Cosmological Tests with Type Ia Supernovae
Daniel Shafer
Type Ia supernovae (SNe Ia) provided the first direct evidence for the accelerated expansion of the universe, leading to the now-standard Lambda-CDM model featuring dark energy. Beyond direct dark energy measurements, these accurate standard candles can be employed in a variety of ways to test the Lambda-CDM model. I will show how an analysis of the peculiar velocities of SNe Ia constitutes a powerful test of the cosmological model at the lowest redshifts. I will also illustrate that, while SNe Ia fundamentally measure distance, careful treatment can yield unbiased measurements of the relative expansion rate, facilitating fast subsequent cosmological inference and complementarity with similar measurements from other probes of geometry. I will present the highest-redshift SN Ia measurement of expansion from SNe observed via the HST CANDELS & CLASH programs.
Sample variance in the local measurements of the Hubble constant
Hao-Yi Wu
The Hubble constant H0 — the expansion rate of the Universe today — has recently been measured to percent-level precision, but two of the key results are in tension. Local measurements using distance ladders indicate H0 ~ 73 km/s/Mpc, while global measurements using the cosmic microwave background indicate H0 ~ 67 km/s/Mpc. In this talk, I will first review the methods and results of both local and global measurements. I will then present our efforts to use simulations to quantify the sample variance in local measurements of H0. Taking into account the inhomogeneous selection of Type Ia supernovae, we find that this tension cannot be alleviated by sample variance or local density fluctuations. I will conclude with other possible causes of this tension.
Testable Baryogenesis and Leptonic CP Violation in Seesaw Models
Jordi Salvado
Instituto de Física Corpuscular, University of Valencia
I will revisit the production of baryon asymmetries in the minimal type I seesaw model with two heavy Majorana singlets in the GeV range. Besides tree-level top scattering, we include scattering processes on gauge bosons as well as $1\rightarrow 2$ Higgs decay and inverse decay processes, which can contribute significantly to the wash-out effect. I will show that the region of parameter space that can account for the right baryon asymmetry overlaps considerably with the sensitivity regions of the future SHiP experiment and the FCC. Finally, I will show the implications for determining leptonic CP violation, and the prediction of the baryon asymmetry that would follow from a hypothetical positive measurement at SHiP.
Leptogenesis from a First-Order Lepton-Number Breaking Phase Transition
In this talk I will discuss a model in which the matter/antimatter asymmetry of the universe is generated during a first-order cosmological phase transition associated with the spontaneous breaking of lepton number, which gives rise to the Majorana mass for heavy sterile neutrinos. The dynamics leading to lepton-number generation, namely CP-violating scattering at a bubble wall, are reminiscent of electroweak baryogenesis. However, the degrees of freedom (sterile neutrinos) and energy scale are typically associated with thermal leptogenesis. The model predicts a stochastic background of gravitational waves (as in EW baryogenesis), neutrinoless double beta decay (as in thermal leptogenesis), as well as a light pseudo-Goldstone Majoron, which may play the role of dark matter.
The MATHUSLA detector: exploring the Lifetime Frontier and Cosmic Ray Physics
David Curtin
I will introduce the MATHUSLA proposal (Massive Timing Hodoscope for Ultra-Stable neutraL pArticles) for a ~200 m × 200 m tracker above ATLAS or CMS at the HL-LHC. Its primary purpose is the search for exotic long-lived particles with lifetimes up to the BBN bound of ~0.1 seconds, where it would extend LHC sensitivity by orders of magnitude. In addition, the design and position of MATHUSLA close to the LHC main detectors may enable it to perform unique cosmic ray measurements. I will present some possible aspects of this cosmic ray physics program, while also soliciting input from the broader community.
A Radiative Neutrino Mass Model with SIMP Dark Matter
Takashi Toma
We propose the first viable radiative seesaw model in which the neutrino masses are induced radiatively via a two-loop Feynman diagram involving Strongly Interacting Massive Particles (SIMPs). The stability of SIMP dark matter (DM) is ensured by a Z5 discrete symmetry, under which the DM annihilation rate is dominated by $3 \rightarrow 2$ self-annihilation processes. The right amount of thermal relic abundance can be obtained with perturbative couplings in the resonant SIMP scenario, while the astrophysical bounds inferred from the Bullet cluster and spherical halo shapes can be satisfied. We show that SIMP DM is able to maintain kinetic equilibrium with the thermal plasma until the freeze-out temperature via the Yukawa interactions associated with neutrino mass generation.
Searching for Sterile Neutrinos at J-PARC with JSNS$^2$
Johnathon Jordan
The J-PARC Sterile Neutrino Search at the J-PARC Spallation Neutron Source (JSNS$^2$) will search for neutrino oscillations with $\Delta m^2 \sim$ 1 eV$^2$ at the J-PARC Material and Life Science Experimental Facility (MLF). The experiment will perform a search for $\bar{\nu}_\mu \rightarrow \bar{\nu}_e$ oscillations over a 24 m baseline using muon decay at rest neutrinos originating from 3 GeV proton interactions with a mercury target. Using two tanks of Gd-doped liquid scintillator with a total fiducial mass of 50 tons, JSNS$^2$ will exploit the unique signature of inverse beta decay (prompt positron signal, delayed gammas from neutron capture) to look for $\bar{\nu}_e$ appearance. Additionally, JSNS$^2$ will do novel cross section measurements using 236 MeV muon neutrinos from kaon decay at rest (KDAR).
Probing Fuzzy Dark Matter in Neutrino Experiments
Vedran Brdar
JGU Mainz
In this talk we will present novel ways in which neutrino oscillation experiments can probe dark matter. In particular, we focus on interactions between neutrinos and ultra-light ("fuzzy") dark matter particles with masses of order $10^{-22}$ eV. It has been shown previously that such dark matter candidates are phenomenologically successful and might help ameliorate the tension between predicted and observed small-scale structures in the Universe. We will show that coherent forward scattering of neutrinos on fuzzy dark matter particles can significantly alter neutrino oscillation probabilities, leading to effects that could be observable in current and future experiments. We present new limits on fuzzy dark matter (both scalar and vector) interacting with neutrinos, using data from long-baseline accelerator experiments as well as solar neutrino data.
Understanding neutron yield from neutrino interactions with ANNIE
Emrah Tiras
Neutron tagging is a promising experimental technique for separating signal from background in a wide variety of astroparticle measurements. The Accelerator Neutrino Neutron Interaction Experiment (ANNIE), located along the Booster Neutrino Beam at Fermilab, has the goal of measuring the final-state neutron multiplicity of charged-current neutrino-nucleus interactions in gadolinium-loaded water. ANNIE is currently running in Phase I and will be upgraded to Phase II in the summer of 2017 with the installation of Large Area Picosecond Photodetectors (LAPPDs) in the detector. LAPPDs are a novel photodetector technology with single-photoelectron time resolutions below 100 picoseconds and spatial imaging capabilities at the single-centimeter level. They will play a crucial role in separating charged-current quasi-elastic (CCQE) events from inelastic multi-track charged-current interactions. In this talk, we discuss the current status and future plans of the experiment.
Deep Learning and MicroBooNE to Investigate the MiniBooNE Excess
Jarrett Moon
MicroBooNE is a liquid argon TPC neutrino experiment based at Fermilab and situated on the Booster Neutrino Beam. MicroBooNE's primary aim is to investigate the excess of electron-neutrino-like events seen by the MiniBooNE experiment, which is potential evidence for physics beyond the Standard Model, such as sterile neutrinos. This talk will discuss a search for low-energy electron neutrino interactions within the MicroBooNE detector. This analysis features a hybrid approach combining traditional reconstruction methods with a novel application of convolutional neural networks (CNNs), a deep learning algorithm highly adept at pattern recognition. This talk will describe the identification of events and the ways in which the CNNs are used. It will also outline how we address the issues that arise when applying CNNs trained on simulated data to data from the detector.
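For readers unfamiliar with the technique, here is a minimal sketch of a CNN image classifier of the kind described above, written in PyTorch; the architecture, image size, and class count are illustrative stand-ins, not the MicroBooNE network:

import torch
import torch.nn as nn

class EventCNN(nn.Module):
    """Toy two-class classifier for 64x64 single-channel detector images."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):  # x: (batch, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))

logits = EventCNN()(torch.randn(4, 1, 64, 64))  # class scores for 4 mock events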
Extragalactic/Cosmic Rays
TeV frontier in particle astrophysics
Hitoshi Murayama
Gamma-Ray and Radio Emission from Star-Forming Galaxies
Todd Thompson
I will review the physics of the far-infrared--radio correlation of star-forming galaxies and its implications for the GeV and TeV gamma-ray emission from normal galaxies, dense starbursts, and ultra-luminous galaxies. I will connect with predictions for the extra-galactic diffuse gamma-ray and high-energy neutrino backgrounds, and I will discuss implications for the physics of galactic winds.
Follow the roars and chirps: characterizing compact object mergers with gravitational wave, electromagnetic and multi-messenger radiation
Samaya Nissanke
To Follow
Small but mighty: Dark matter substructures
Francis-Yan Cyr-Racine
The fundamental properties of dark matter, such as its mass, self-interaction, and coupling to other particles, can have a major impact on the evolution of cosmological density fluctuations on small length scales. Strong gravitational lenses have long been recognized as powerful tools to study the dark matter distribution on these small subgalactic scales. In this talk, we discuss how gravitationally lensed quasars and extended lensed arcs could be used to probe non-minimal dark matter models. We comment on the possibilities enabled by precise astrometry, deep imaging, and time delays to extract information about mass substructures inside lens galaxies. To this end, we introduce a new lensing statistic that allows for a robust diagnostic of the presence of perturbations caused by substructures. We determine which properties of mass substructures are most readily constrained by lensing data and forecast the constraining power of current and future observations.
Measurements of the Local Value of the Hubble Constant
Daniel Scolnic
I will review the local measurement by the SH0ES team of the current expansion rate (H0) of the universe from HST observations of Cepheid variables in host galaxies of Type Ia Supernovae. This measurement is a significant improvement over past measurements and reduces many of their systematic uncertainties. I will discuss the tension of our measurements with the inferred value of H0 from CMB measurements. I will also go over what improvements to expect in the next two years.
Observations of Supernova Remnants and Pulsar Wind Nebulae with VERITAS
Amanda Weinstein
The gamma-ray emission that arises from charged particle interactions with ambient photons and interstellar material provides insight into the nature and mechanism of charged particle (cosmic ray) acceleration taking place within the phenomena left behind by the death of massive stars: i.e. supernova remnants (SNRs) and pulsar wind nebulae (PWNe). The very-high-energy (VHE) gamma-ray observatory VERITAS has undertaken observations of a number of different SNRs and PWNe, with the twin goals of constraining particle acceleration models via measurements of the broadband energy spectrum and of mapping particle diffusion within and around these objects. We will provide an overview of recent results from this program of observations.
DAMPE and its first year in orbit
Stephan Zimmer
Université de Genève, Switzerland
The DArk Matter Particle Explorer (DAMPE), a space mission within the strategic framework of the Chinese Academy of Sciences resulting from a collaboration of Chinese, Italian, and Swiss institutions, is a new addition to the growing number of particle detectors in space. It was successfully launched in December 2015 and has been in nominal science operations since shortly after launch. Borrowing technologies from its predecessors, such as AMS and Fermi-LAT, it features a powerful segmented electromagnetic calorimeter which, thanks to its 31 radiation lengths, enables the study of charged cosmic rays at energies up to 100 TeV and gamma rays up to 10 TeV. The calorimeter is complemented by a silicon-tungsten tracker-converter, which yields an angular resolution comparable to that of current space-borne pair-conversion gamma-ray detectors. In addition, the detector features a top anti-coincidence shield made of segmented plastic scintillators and a boron-doped plastic scintillator at the bottom of the instrument to detect delayed neutrons arising from cosmic ray protons showering in the calorimeter. In this contribution I will present an overview of the mission and summarize the latest results in the domain of charged cosmic rays, gamma rays and heavy ions, obtained using 1 year of orbit data.
Full Sky Indirect Detection: Challenges and Strategies
Sheldon Campbell
One of the best current constraints for indirect detection of dark matter at the 1-100 GeV mass scale is the Fermi-LAT stacking analysis of satellite dwarf galaxies of the Milky Way. This constraint is based on observations in a very small fraction of the sky, whereas undetectable, dense dark matter structures are predicted to be distributed throughout the Milky Way halo. I will describe strategies that open up searches for dark matter signatures to the whole sky. These methods improve sensitivity to cold thermal relics, as well as other dark matter scenarios.
Searching for Dark Matter in M31 with HAWC
Andrea Albert
Los Alamos National Lab
There is overwhelming evidence that non-baryonic dark matter constitutes ~85% of the mass in the Universe. Many promising dark matter candidates, like Weakly Interacting Massive Particles (WIMPs), are predicted to produce Standard Model particles like gamma rays via annihilation or decay. These gamma-rays would be observed by ground-based arrays like the High Altitude Water Cherenkov (HAWC) Observatory. With its wide field of view and constant monitoring, HAWC is well-suited to search for dark matter in extended targets like M31. We will present results from our search for a signal from dark matter annihilation or decay in M31 using 760 days of data from HAWC. A detection of dark matter through cosmic messengers would not only confirm the existence of dark matter through a non-gravitational force, but also indicate the existence of physics beyond the Standard Model.
Dark Matter Searches using Dwarf Galaxies with HAWC
Tolga Yapici
The High Altitude Water Cherenkov (HAWC) gamma-ray observatory, located in the State of Puebla, Mexico, at an altitude of 4100 m, is a wide field-of-view observatory sensitive to 0.5-100 TeV gamma rays and cosmic rays. The HAWC observatory performed an indirect search for dark matter via GeV-TeV photons resulting from dark matter annihilation and decay, considering various sources, including 15 dwarf spheroidal galaxies (dSphs) and 31 dwarf irregular galaxies (dIrrs), as well as combined limits for the dSphs and dIrrs. We searched for dark matter annihilation and decay at dark matter masses above 1 TeV. We have not detected a statistically significant excess from these sources; we will therefore present the calculated limits on the annihilation cross-section and decay lifetime.
Galactic Dark Matter substructure searches with the Cherenkov Telescope Array (CTA)
In the current understanding of structure formation in the Universe, the Milky Way is embedded in a clumpy halo of dark matter (DM). Regions of higher DM density are expected to present an enhanced rate of annihilation into gamma-rays with respect to the smooth halo regions. These point-like gamma-ray fluxes can possibly be detected by gamma-ray observatories on Earth, like the forthcoming Cherenkov Telescope Array (CTA). In this talk, I will present the expected gamma-ray fluxes from DM annihilation in Galactic subhalos together with a rigorous assessment of modeling and statistical uncertainties. I will then discuss the sensitivity of the CTA instrument to detect the brightest Galactic DM density clump in the projected extragalactic sky survey. I will also show how a CTA extragalactic survey dataset can be used to search for DM substructures as anisotropies in the angular power spectrum of the data.
Multi-wavelength Approach to Indirect Dark Matter Searches with RX-DMFIT
Alex McDaniel
Well-motivated dark matter particle models predict self-annihilating dark matter to yield Standard Model particles that can potentially be detected through astrophysical observations of systems such as dwarf galaxies, normal galaxies, and galaxy clusters. The potential emission from the charged-particle byproducts of dark matter annihilation includes radio emission due to synchrotron radiation as well as X-rays from inverse Compton scattering of CMB and starlight photons. These secondary emissions provide a method of probing the nature of dark matter that is complementary to previous gamma-ray searches and can place competitive constraints on dark matter properties. To facilitate multi-wavelength dark matter searches we have developed RX-DMFIT (Radio and X-ray - DMFIT), a tool for calculating the expected radio and X-ray signals from dark matter annihilation. In this talk I will present RX-DMFIT and discuss the relevant particle and astrophysical components of the multi-wavelength approach, including diffusion effects, radiative energy-loss processes, and magnetic field modeling.
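The diffusion and energy-loss modeling mentioned above is conventionally encoded in a steady-state diffusion-loss equation for the electrons and positrons produced by annihilation; a standard form (not necessarily the exact notation used by RX-DMFIT) is

$$ \nabla\cdot\left[D(E,\mathbf{r})\,\nabla n_e\right] + \frac{\partial}{\partial E}\left[b(E,\mathbf{r})\,n_e\right] + Q(E,\mathbf{r}) = 0, $$

where $n_e(E,\mathbf{r})$ is the equilibrium e± spectrum, $D$ the diffusion coefficient, $b$ the total energy-loss rate (synchrotron, inverse Compton, Coulomb, bremsstrahlung), and $Q \propto \langle\sigma v\rangle \rho_{\rm DM}^2$ the annihilation source term. The radio and X-ray signals then follow by convolving $n_e$ with the synchrotron and inverse-Compton emission kernels.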
The GAPS Experiment for Cosmic-ray Antinuclei Signatures of Dark Matter
Kerstin Perez
The GAPS Experiment is the first experiment optimized specifically for low-energy antideuteron and antiproton cosmic-ray signatures. Low-energy antideuterons have been recognized as an extraordinarily low-background signature of new physics, and low-energy antiprotons are probes of both light dark matter and cosmic-ray propagation models. Together, these signatures offer a potential breakthrough in unexplored dark matter parameter space, providing complementary coverage with direct detection, collider, and other indirect searches. The GAPS program is designed to utilize long-duration balloon flights from Antarctica, and is currently scheduled by NASA for its first Antarctic flight in late 2020. The experiment uses a novel detection technique, based on exotic atom capture and decay, to be sensitive to antinuclei in an unprecedented low energy range (<0.25 GeV/n). The heart of GAPS will be 10 planes of lithium-drifted Si (Si(Li)) detectors, surrounded on all sides by a plastic scintillator time-of-flight. In this contribution, I will present the design, status, and discovery potential of the GAPS scientific program.
General Constraints on Decaying Dark Matter from the Cosmic Microwave Background
Chih-Liang Wu
Energy injection from dark matter (DM) between recombination and reionization could affect the ionization and thermal history of the universe, leaving a distinctive imprint on the cosmic microwave background (CMB). Therefore, precise measurements of the temperature and polarization anisotropies of the CMB provide a powerful tool by which to constrain DM. In this talk, I will characterize the possible CMB signatures via principal component analysis (PCA) and set constraints on the DM lifetime and decaying fraction. I will show that in many cases, a single number can be used to parameterize the effect of DM on the CMB. This result yields a simple prescription for detectability and an easy way to set model-independent bounds, which I have validated using Markov chain Monte Carlo methods applied to the Planck likelihood.
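As a minimal sketch of the principal component analysis step described above (synthetic stand-in spectra, not actual Planck data or energy-injection models): rows of the matrix X below are perturbations to the CMB anisotropy spectra for different decay models, and the leading singular vector plays the role of the dominant principal component.

import numpy as np

rng = np.random.default_rng(3)
base = np.sin(np.linspace(0.0, 3.0 * np.pi, 200))   # one dominant spectral shape
X = np.outer(rng.normal(1.0, 0.2, 50), base)        # 50 models x 200 multipole bins
X += rng.normal(0.0, 0.02, X.shape)                 # small model-to-model scatter

X -= X.mean(axis=0)                                  # center before PCA
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)
print(explained[:3])  # a single dominant component mirrors the "single number"
                      # parameterization described in the abstract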
Primordial Black Holes as Dark Matter: Constraints from the Milky Way
Emma Storm
While constraints on primordial black holes as dark matter are strong over a wide mass range, a narrow window in the stellar-mass range remains relatively unconstrained. The recent discoveries of gravitational waves from merging black holes in roughly the 10-30 solar mass range have re-ignited the discussion of whether primordial black holes in this mass range could be a candidate for dark matter. If such sources exist in our Galaxy, they should produce observational signatures in the form of broadband radiation due to the accretion of gas onto the black holes. I will discuss a novel approach to constraining primordial black holes as dark matter using X-ray and radio observations of the Milky Way bulge, where the densities of both gas and dark matter are highest, including a realistic treatment of accretion processes based on observational evidence for inefficient accretion in our Galaxy today. The resulting constraints essentially rule out primordial black holes as a dark matter candidate in this unconstrained stellar-mass regime and are complementary to searches in the early universe and other constraints from the dynamics of local systems. I will also comment on the potential of future surveys with SKA to further constrain this dark matter candidate.
Supernova 1987A Constraints on Low-Mass Hidden Sectors
Jae Hyeok Chang
YITP, Stony Brook
Supernova 1987A provides strong constraints on hidden-sector particles with masses below ~100 MeV. If such particles are produced in sufficient quantity, they reduce the cooling time of the supernova, in conflict with observations. We consider the resulting constraints on dark photons, milli-charged particles, and sub-GeV dark matter coupled to dark photons. For the first time, we include the effects of finite temperature and density on the kinetic-mixing parameter, ε, in this environment. Furthermore, we estimate the systematic uncertainties on the cooling bounds by deriving constraints assuming one analytic and four different simulated temperature and density profiles of the proto-neutron star. Our constraints exclude novel parameter space for sub-GeV dark matter and, for dark photons, differ significantly from previous work in the literature.
Dark matter velocity spectroscopy
Dark matter decays or annihilations that produce line-like spectra may be smoking-gun signals. However, even such distinctive signatures can be mimicked by astrophysical or instrumental causes. We show that velocity spectroscopy, the measurement of energy shifts induced by the relative motion of source and observer, can separate these three causes (dark matter, astrophysical, and instrumental) with minimal theoretical uncertainties. The principal obstacle has been energy resolution, but upcoming experiments will reach the required 0.1% level. We demonstrate this technique using existing technologies.
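A back-of-the-envelope check of the quoted 0.1% requirement, taking the Sun's orbital speed about the Galactic center, $v \approx 230$ km/s, as the characteristic source-observer relative velocity (an illustrative assumption; the abstract does not quote a velocity): the non-relativistic Doppler shift of a line is

$$ \frac{\Delta E}{E} \simeq \frac{v}{c} \approx \frac{230\ \mathrm{km/s}}{3\times 10^{5}\ \mathrm{km/s}} \approx 8\times 10^{-4}, $$

so an energy resolution at the 0.1% level is just sufficient to resolve the line shift as the signal direction sweeps across the sky.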
AMEGO: Dark Matter Prospects
The era of precision cosmology has revealed that ~80% of the matter in the universe is dark matter. Two leading candidates, motivated by both particle physics and astrophysics, are Weakly Interacting Massive Particles (WIMPs) and axionlike particles (ALPs), both of which have distinct gamma-ray signatures. The Fermi Large Area Telescope (Fermi-LAT) Collaboration continues to search for WIMP and ALP signatures spanning the 50 MeV to >300 GeV energy range in dwarf spheroidal galaxies, galaxy clusters, pulsars, the Galactic center, and a variety of other astrophysical targets. Thus far, Fermi-LAT has not conclusively detected a dark matter signature. There is, however, an intriguing excess of gamma rays associated with the Galactic center (the GCE). The poorer angular resolution of the LAT at lower energies makes source selection challenging, and the true nature of the detected signal remains unknown. Identifying whether the GCE is a dark matter signature, a population of astrophysical point sources, or a combination of the two requires higher-resolution observations. ALP searches would also greatly benefit from increased angular and energy resolution at lower energies. To address these needs, we are developing AMEGO, the All-sky Medium Energy Gamma-ray Observatory, whose projected energy and angular resolution will increase sensitivity by a factor of 20-50 over previous instruments. This will allow us to explore new areas of WIMP and ALP parameter space and provide unprecedented access to the particle nature of dark matter. I will present an overview of the AMEGO dark matter search strategy as well as sensitivity projections.
Impact of Galactic subhalos on indirect dark matter searches with cosmic-ray antiprotons
The AMS-02 experiment has recently released a new measurement of the cosmic-ray antiproton spectrum. Assuming that cold dark matter (CDM) is made of self-annihilating particles, the AMS-02 data can be used to constrain the annihilation cross section. It is known however that CDM structures itself on scales much smaller than typical galaxies. This structuring translates into a very large population of subhalos which must impact predictions for indirect searches. I will present a dynamically constrained and consistent semi-analytic model of Galactic subhalos (based on arXiv:1610.02233) and discuss its impact on current constraints (or hot spots) inferred from the AMS-02 antiproton data.
Searching for optical counterparts for IceCube neutrinos using the Dark Energy Camera
The IceCube neutrino observatory routinely detects astrophysical neutrinos at TeV to PeV energies, but the origin of this signal is still unknown. To facilitate the identification of electromagnetic counterparts via time-domain searches, IceCube has begun issuing realtime public alerts for the highest confidence and best-localized neutrino events (median angular uncertainty < 1.0 deg). During the same period, the Dark Energy Survey (DES) has developed a Target-of-Opportunity program to search for optical transients associated with gravitational wave events using the Dark Energy Camera (DECam) on the Blanco telescope. During the 2017-2018 observing season for DES, we will expand this program to search for explosive optical transients, such as core-collapse supernovae, coincident with several IceCube alerts. The large aperture (4 m), wide field of view (3 deg^2), and southern location (latitude -30 deg) of Blanco/DECam complements the existing follow-up program for IceCube events. By targeting several neutrino events per year, beginning in 2017, we can reasonably expect to identify the first individual TeV-PeV neutrino source, or else place meaningful constraints on the internal physical processes occurring within core-collapse supernovae and other explosive optical transients.
Time-domain astronomy with the Fermi GBM
The Fermi Gamma-ray Burst Monitor (GBM) is an all-sky monitoring instrument sensitive to energies from 8 keV to 40 MeV. Over the past 8 years of operation, the GBM has detected over 240 gamma-ray bursts per year and provided timely GCN notices with localization to few-degree accuracy for follow-up observations. In addition to GRBs, Galactic transients, solar flares, and terrestrial gamma-ray flashes have also been observed. In recent years we have also been searching the continuous GBM data for electromagnetic counterparts to astrophysical neutrinos and gravitational wave events, as these are believed to be associated with gamma-ray bursts. With continuous data downlink every few hours and a temporal resolution of 2 microseconds, GBM is well suited for observing transients and supporting EM followup in the era of multi-messenger astronomy. I will discuss the details of our searches and summarize their current status.
Searching for Counterparts to Cosmic Neutrinos Using the Fermi LAT Satellite
Colin Turley
We present the results of an archival coincidence analysis between public gamma-ray data from the Fermi LAT satellite and public neutrino data from the IceCube neutrino observatory during its 40-string and 59-string observing runs. The analysis has the potential to detect either a statistical excess of correlated neutrino + gamma-emitting sources or alternatively, one or more rare, high-multiplicity events such as gamma-ray burst + neutrino coincidences. This work is an example of the multimessenger studies currently being performed by the Astrophysical Multimessenger Observatory Network (AMON). We will present the relevant datasets, the statistical approach, and the results of the analysis.
The MAGIC multi-messenger program
Many observational facilities have recently entered their operational phase or will approach their design sensitivity in the near future, allowing us to observe the universe with very high energy photons, cosmic rays, neutrinos and gravitational waves. The MAGIC observatory, a system of two Imaging Atmospheric Cherenkov Telescopes located on the Canary Island of La Palma, takes an active role in many multi-messenger activities thanks to its low energy threshold (~50 GeV) and fast slewing capability. For many years MAGIC has been involved in several multi-instrument programs, mostly connected to transient phenomena such as Gamma Ray Bursts, Fast Radio Bursts, and follow-up of gravitational wave and neutrino alerts. In this talk I will present the MAGIC telescopes' strategies for multi-messenger follow-up and observations of transient sources, and their recent results.
VHE Gamma Rays and Multi-Messenger Astrophysics: VERITAS Status and Strategies for CTA
Brian Humensky
The direct detection, for the first time, of gravitational wave (GW) transients by Advanced LIGO has motivated searches for their electromagnetic counterparts at all wavelengths. Neutrino astronomy is an emerging area of study in high-energy astrophysics, and astrophysical neutrinos are natural cousins of very high energy (VHE; E > 100 GeV) gamma rays. The VERITAS gamma-ray observatory has an active program of follow-up observations in the directions of potentially astrophysical high-energy neutrinos detected by IceCube, including prompt alerts, as well as in the direction of GW transients. The next-generation gamma-ray observatory Cherenkov Telescope Array (CTA) has similar plans. Since both neutrinos and gamma rays are produced in hadronic interactions, a joint study of both channels could reinforce the hadronic origin of the gamma rays, revealing high-power cosmic-ray accelerators and probing their properties. The directions of GW transients are uncertain at the level of tens to hundreds of square degrees, but the wide field of view provided by gamma-ray observatories (3.5° for VERITAS and ≥ 4.5° for CTA) can rapidly scan large regions; detections can yield improved localizations as well as insights on the astrophysics of the GW transient events. We present recent results from the VERITAS follow-up program and strategies for CTA.
Neutrinos from TeV blazars
The current generation of Cherenkov telescopes, together with Fermi-LAT, has greatly improved our knowledge of blazar physics, providing precise measurements of their gamma-ray emission. The modeling of multi-wavelength spectral energy distributions of blazars has proven to be a unique tool to constrain and refine blazar emission models, and thus the physics of outflows from supermassive black holes. However, the long-standing question of leptonic vs. hadronic models remains open. Neutrinos would be a smoking gun for hadronic processes in blazars. In this contribution I will discuss blazar hadronic models and their associated neutrino emission, comparing it to current and future neutrino observatories.
Cosmic rays and neutrinos from blazars with nuclei injection
Xavier Rodrigues
DESY Zeuthen
We test the hypothesis that blazars are sources of Ultra-High-Energy Cosmic Rays (UHECRs), considering the acceleration of isotopes heavier than hydrogen. We perform numerical simulations of CR interactions using the NeuCosmA code. The injected isotope may efficiently disintegrate at high energies, producing a population of lighter secondaries. We study the ejected CR composition and neutrino spectra for different blazar classes: Flat-Spectrum Radio Quasars (FSRQs) and BL Lacs. We conclude that the former contribute significantly to the diffuse neutrino flux, whose maximal energy depends on the injected composition. BL Lacs, on the other hand, are found to dominate the UHECR flux. We show that blazars are able to power the UHECRs without violating the most recent IceCube limits on blazar neutrinos.
Exploring the connection between Gamma-Ray Bursts, Ultra-High Energy Cosmic Ray Nuclei and neutrinos
Daniel Biehl
Neutrino stacking analyses constrain the paradigm that Gamma-Ray Bursts (GRBs) are the sources of the Ultra-High Energy Cosmic Rays (UHECRs). The majority of previous studies focused on a pure proton composition of UHECRs; however, recent measurements by the Pierre Auger Observatory indicate a trend towards a mixed UHECR composition. Here, we present a combined source-propagation model for neutrino and cosmic-ray emission by GRBs with the injection of nuclei, where we take into account that a nuclear cascade from photo-disintegration can fully develop in the source. Our main objective is to test whether recent results from the IceCube and Pierre Auger Observatory can be accommodated with the paradigm that GRBs can be the sources of UHECRs. We demonstrate that the expected prompt neutrino flux weakly depends on the injected composition, which translates into strong constraints on UHECR models even in the case of nuclei. While the UHECR spectrum and composition measured by the Pierre Auger Observatory can be self-consistently reproduced over an energy range even covering the ankle, the IceCube bounds from the GRB stacking are already in tension with this hypothesis. If, however, only the UHECRs beyond the ankle come from GRBs, future neutrino data will be needed to further constrain this hypothesis.
Black hole jets in clusters of galaxies as common origins of high-energy cosmic particles
Ke Fang
It has been a mystery that, with ten orders of magnitude difference in energy, high-energy neutrinos, ultrahigh-energy cosmic rays, and sub-TeV gamma rays all present comparable energy injection rates, hinting at an unknown common origin. Here we show that black hole jets embedded in clusters of galaxies may work as sources of all three messengers. By simulating particle propagation in the magnetized intracluster medium (ICM), we show that the highest-energy particles leave the source rectilinearly and contribute to the observed cosmic rays above 0.1 EeV, the intermediate-energy cosmic rays interact with the ICM gas and produce secondary neutrinos and gamma rays, and the lowest-energy cosmic rays are cooled by the expansion of the radio lobes inflated by the jets. The energy output required to explain the measurements of all three messengers is consistent with observations and theoretical predictions of black hole jets in clusters.
The All-Sky Automated Survey for Supernovae (ASAS-SN)
Christopher Kochanek
Dept. of Astronomy, The Ohio State University
I will summarize the All-Sky Automated Survey for Supernovae (ASAS-SN), the first astronomical survey to observe the entire visible sky for bright optical transients on a nightly basis.
Deep Sea Neutrino Telescopes: Latest Results from ANTARES and Perspectives for KM3NeT
Daan Van Eijk
WIPAC
The ANTARES neutrino telescope has operated in the Mediterranean deep sea for roughly ten years. Its goal is to search for astrophysical neutrinos, both as a diffuse flux and originating from possible point sources. ANTARES is complementary to other neutrino observatories such as IceCube because of its good angular resolution and distinctive sky coverage. The ANTARES science program also includes indirect dark matter searches and participation in the so-called multi-messenger approach for transient sources. KM3NeT is the successor of ANTARES. Currently under construction, this neutrino telescope uses a single detector technology at two separate geographical sites, featuring different detector geometries. The respective science programs are dubbed KM3NeT ARCA and ORCA. KM3NeT ARCA is situated off the Sicilian coast in Italy and aims to detect multi-TeV astrophysical neutrinos. Due to an excellent angular resolution for all neutrino flavors, KM3NeT is able to study the point of origin of astrophysical neutrinos with unprecedented precision. In doing so, KM3NeT can independently confirm the IceCube astrophysical flux within one year of data taking. KM3NeT ORCA, on the other hand, will study atmospheric neutrino oscillations in the energy range below 100 GeV. Located off the southern French coast, it aims at a determination of the neutrino mass hierarchy with a significance of 3 sigma in three years of data taking. This presentation gives an overview of the latest ANTARES results. In addition, the current status of KM3NeT as well as its physics potential is discussed.
Status of IceCube-Gen2
The IceCube Neutrino Observatory has detected the first high-energy neutrinos of astrophysical origin, characterized its diffuse flux, and performed point-source searches throughout the sky. Now, a next-generation, in-ice Cherenkov telescope is being designed with increased sensitivities to high-energy neutrinos. IceCube-Gen2 will encompass about 8 cubic kilometers of ice at the South Pole. Additional envisioned components such as a large surface array, a dense infill array, and a complementary radio array would improve sensitivities across a wide energy band. Further, several studies to upgrade and optimize the optical module are ongoing and show promise to cost-effectively increase the photosensitive area. We will summarize these developments and focus on the projected sensitivities for high-energy neutrinos with IceCube-Gen2.
The ANITA Experiment After Four Flights
Cosmin Deaconu
The ANtarctic Impulsive Transient Antenna (ANITA) is a long-duration balloon experiment with an interferometric radio payload. ANITA scans Antarctic ice for Askaryan radio emission from interactions of extremely-high-energy (>1 EeV) cosmogenic neutrinos. ANITA is also sensitive to geomagnetic radio emission from extensive air showers (EAS) initiated both by ultra-high-energy cosmic rays and by tau leptons generated by Earth-skimming tau neutrinos. The fourth flight of ANITA was successfully completed in December 2016. After an overview of the instrument and analysis methods, this talk will highlight key results and ongoing analyses from the four flights of ANITA. Improvements for future flights will also be briefly discussed.
Low-power radio frequency amplification module with dynamic tunable notch filters for the Antarctic Impulsive Transient Antenna (ANITA)
Oindree Banerjee
The Antarctic Impulsive Transient Antenna (ANITA) is a NASA long-duration balloon experiment with the primary goal of detecting ultra-high-energy ($>10^{18}\,\mbox{eV}$) neutrinos via the Askaryan Effect. The fourth ANITA mission, ANITA-IV, recently flew from Dec 2 to Dec 29, 2016. The most significant change in signal processing in ANITA-IV from previous flights was the inclusion of the Tunable Universal Filter Frontend (TUFF) boards. The TUFF boards had a three-fold purpose: 1) second-stage amplification by 45 dB to help boost the $\sim\,\mu\mbox{V-level}$ radio frequency (RF) signals to $\sim$ mV-level for digitization, 2) mitigation of narrow-band, anthropogenic noise with tunable, switchable RLC notch filters and 3) supplying power via bias tees to the first-stage, antenna-mounted amplifiers. In this talk, we outline the design and performance of the TUFF boards during the ANITA-IV flight.
The Fourth Flight of the ANtarctic Impulsive Transient Antenna (ANITA)
Andrew Ludwig
ANITA is a NASA balloon-borne radio (200-1200 MHz) telescope with the primary goal of detecting coherent radio emission from ultra-high-energy (UHE) neutrinos. The instrument is also sensitive to radio impulses produced by cosmic-ray-induced extensive air showers. ANITA-IV launched on Dec 2, 2016 and landed on Dec 29, 2016, after 28 days aloft. This talk will present the ANITA-IV instrument, flight operation, calibration, and the status of ongoing data analysis.
The Askaryan Radio Array: Current Status and Future Plans
The Askaryan Radio Array (ARA) is a gigaton, ultra-high energy (>10 PeV) radio neutrino detector under construction at South Pole; it searches for the characteristic radio Cherenkov pulses that are produced by neutrino interactions in the dense polar ice. The array has deployed three of the proposed ~37 stations so far, at depths up to 200m. In this talk, we will summarize the current status of the experiment's neutrino searches and hardware developments. We will also discuss the future of the array, including the planned deployment of an additional two upgraded stations in austral summer 2017, one of which is equipped with phased-array triggering capabilities.
Towards Space Probes of Astrophysical and Cosmogenic Neutrinos
Angela V. Olinto
The Probe Of Extreme Multi-Messenger Astrophysics (POEMMA) mission is being designed to establish charged-particle astronomy with ultra-high-energy cosmic rays (UHECRs) and to observe astrophysical and cosmogenic neutrinos, using both fluorescence and Cherenkov emission from extensive air showers (EAS). The POEMMA design combines the concept developed for the Orbiting Wide-field Light-collectors (OWL) mission and the experience with the Extreme Universe Space Observatory (EUSO) fluorescence detection camera, developed for the Japanese Experiment Module (JEM-EUSO) and recently flown on EUSO-SPB1 by a NASA Super Pressure Balloon (SPB) from Wanaka, New Zealand, with the recently proposed CHerenkov from Astrophysical Neutrinos Telescope (CHANT) concept, to form a multi-messenger probe of the most extreme environments in the Universe. The fluorescence and Cherenkov study of EASs from space will yield an orders-of-magnitude increase in the statistics of observed UHECRs at the highest energies, together with the observation of the astrophysical and cosmogenic neutrino flux for a range of UHECR models. These observations should solve the long-standing puzzle of the origin of the highest-energy particles ever observed, providing a new window onto the most energetic environments and events in the Universe, as well as studies of particle interactions well beyond accelerator energies.
Progress in In Situ UHE Neutrino Detectors: Joint Studies on Simulation and Ice
Carl Pfendner
Two ultrahigh-energy ($>10^{17}$ eV) neutrino detectors are being deployed in Antarctica: the Askaryan Radio Array (ARA) and the Antarctic Ross Ice-Shelf Antenna Neutrino Array (ARIANNA). As the experiments differ in both the design and the surrounding ice, we describe the progress of a joint effort to understand the importance of these differences and we demonstrate convergent results in their simulated detector responses.
The ARIANNA Neutrino Detector
Christopher Persichilli
The ARIANNA experiment is designed to observe cosmogenic neutrinos with energies in excess of 10^16 eV. The design envisions a grid of over 1000 independent radio detector stations, using high-gain log-periodic dipole antennas just below the surface to measure the characteristic Askaryan radio pulses from particle cascades generated in the ice by these neutrinos. Spaced a kilometer apart, this array would effectively survey nearly 1000 cubic kilometers of Antarctic ice. A pilot array has been operating on the Ross Ice Shelf since December 2014. We will report on the most recent results concerning hardware performance, the search for neutrinos, detection of the cosmic-ray background, signal propagation in the ice, and the future potential of a large array.
GRAND - the Giant Radio Array for Neutrino Detection
Anne Zilles
Institut d'Astrophysique de Paris
The Giant Radio Array for Neutrino Detection (GRAND) aims at detecting ultra-high-energy extraterrestrial neutrinos via the extensive air showers induced by the decay of tau leptons created in the interaction of neutrinos under the Earth's surface. Consisting of an array of $\sim 10^5$ radio antennas deployed over $\sim 2\cdot 10^5\,\mbox{km}^2$, GRAND plans to reach, for the first time, a sensitivity of $\sim 10^{-10}\,\mbox{GeV cm}^{-2} \mbox{s}^{-1} \mbox{sr}^{-1}$ above $5\cdot10^{17}\,\mbox{eV}$ and a sub-degree angular resolution, beyond the reach of other planned detectors. In this talk, we will show preliminary designs and simulation results, plans for the ongoing, staged approach to construction, and the rich research program made possible by the proposed sensitivity and angular resolution.
Trinity: An experiment to detect cosmogenic neutrinos with the Earth skimming technique
Predictions of the flux of cosmogenic neutrinos at $10^{9}$ GeV are robust and depend solely on the composition of the primary flux of cosmic rays above $10^{10}$ GeV. Pushing the experimental sensitivity down to the predicted flux levels is a challenge, and the hunt to detect the first cosmogenic neutrino is ongoing. A major obstacle for experiments is to achieve a large enough acceptance while keeping costs reasonable. We have performed a conceptual design study of a dedicated array of Cherenkov telescopes that uses the Earth skimming technique to detect taus, which are produced when tau neutrinos convert in the Earth's crust and then emerge from the ground. Our study shows that one can build an experiment based on small Cherenkov telescopes which reaches a sensitivity of $2\cdot10^{-9}$ GeV cm$^{-2}$ s$^{-1}$ sr$^{-1}$ at $10^9$\,GeV for a total cost envelope of \$4M. The projected sensitivity is competitive with other proposed neutrino experiments in that energy range and outperforms them in terms of cost. In this talk we present details of our design study and discuss the proposed array of Cherenkov telescopes, which we named Trinity.
Radio Detection of Neutrino-Induced Tau Lepton Air Showers at Altitude
Stephanie Wissel
Cosmogenic neutrinos produced by cosmic rays during propagation are expected to arrive at Earth in roughly equal ratios of electron, muon, and tau neutrinos. Due to the cyclic regeneration of tau neutrinos and tau leptons, radio-based experiments are sensitive to the air showers produced by tau leptons emerging from the interaction of Earth-skimming tau neutrinos. We present a study of the sensitivity and optimization of radio detectors at altitude to tau-lepton showers and discuss prospects for future mountain-top or balloon-borne instruments.
Cosmological electromagnetic cascades as probe of the Universe
Thomas Fitoussi
IRAP Toulouse - France
Very high energy gamma-rays produced by extragalactic sources are absorbed in the intergalactic medium. High energy photons interact with low energy photons from the extragalactic background light (UV to IR), producing electron-positron pairs. The newly created leptons scatter CMB photons up to gamma-ray energies. Spectral properties, halo extension and time delay due to the cascade strongly depend on the source and intergalactic medium properties. In particular, the development of such a cascade is crucial to probe the extragalactic magnetic field (EGMF), which cannot be probed by other means. We have developed a new Monte Carlo code to simulate the cascade physics. After a short presentation of the code, I will review how the search for cascade signatures can be used to derive constraints on the extragalactic medium. To conclude, I will discuss the cascade contribution to the extragalactic gamma-ray background derived from recent Fermi data.
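As a worked number behind the UV-to-IR statement above (my illustration, not part of the abstract): pair production on a background photon of energy $\epsilon$ requires roughly

$E_\gamma\, \epsilon \gtrsim (m_e c^2)^2 \quad\Rightarrow\quad \epsilon \gtrsim \frac{(0.511\ \mathrm{MeV})^2}{1\ \mathrm{TeV}} \approx 0.26\ \mathrm{eV},$

so TeV gamma-rays are absorbed mainly on infrared background light, while lower-energy gamma-rays require the denser optical/UV part of the background.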
AMEGO as supernova alarms: alert, probe and diagnosis of Type Ia explosions
A Type Ia supernova (SNIa) could go entirely unnoticed in the Milky Way and nearby starburst galaxies, due to the large optical and near-IR extinction in the dusty environment, low radio and X-ray luminosities, and a weak neutrino signal. But the recent SN2014J confirms that Type Ia supernovae emit γ-ray lines from $^{56}$Ni→$^{56}$Co→$^{56}$Fe radioactive decay, spanning 158 keV to 2.6 MeV. The Galaxy and nearby starbursts are optically thin to γ-rays, so the supernova line flux will suffer negligible extinction. The All-Sky Medium Energy Gamma-ray Observatory (AMEGO) will monitor the entire sky every 3 hours from ~200 keV to >10 GeV. Most of the SNIa gamma-ray lines are squarely within the AMEGO energy range. Thus AMEGO will be an ideal SNIa monitor and early warning system. We will show that the supernova signal is expected to emerge as distinct from the AMEGO background within days after the explosion in the SN2014J shell model. Early-stage observations of SNIa will allow us to explore the progenitor types and the nucleosynthesis of SNIa. Moreover, with its excellent line sensitivity, AMEGO will be able to detect SNIa at a rate of a few events per year and will obtain enough gamma-ray observations over the mission lifetime (~10 SNIa) to sample the SNIa population. The high SNIa detection
Gamma-rays and Neutrinos from Efficient Cosmic-Ray Acceleration in Young Supernovae
Vikram Dwarkadas
Univ of Chicago
It is widely accepted that supernova (SN) shocks can accelerate particles to very high energies, although the maximum energies are still unclear. These accelerated particles can interact with other particles to produce gamma-ray emission. Details of the process are not well characterized, including the dynamics and kinematics of the SN shock wave, the nature and magnitude of the magnetic field, and the details of the particle acceleration process. The properties of the SN shock itself are regulated by the surrounding medium, which in a massive star is formed by mass-loss from the pre-SN progenitor during its lifetime. Thus the spectra of accelerated particles, and the resultant gamma-ray emission, depend on the evolution of the SN progenitor before it explodes. Herein we explore detailed aspects of SN evolution, particle acceleration, and the non-thermal emission, for young SNe right after outburst. We use these calculations to predict and constrain the detectability of young SNe of various types, via their hadronic signatures, namely gamma-ray emission from pp interactions, and synchrotron emission from secondary leptons. Our calculations also allow us to constrain the resulting TeV neutrino flux. After outlining the general considerations, we will provide a quantitative example in the form of the well-studied radio SN 1993J, for which we will calculate the gamma-ray and neutrino flux. We will also comment on the horizon of detectability of 1993J-like SNe with the upcoming Cherenkov Telescope Array (CTA).
Evaluating the Gamma-Ray Emission from SN 1987A, the Closest Visible Supernova to us in the last 300 years
Using a simplified model for the hadronic emission from young supernova remnants (SNRs), we derive an expression to calculate the hadronic luminosity with time, depending on the supernova (SN) ejecta density profile and the density structure of the surrounding medium. We then use this to estimate the gamma-ray emission from SN 1987A, the nearest visible supernova to us in over 300 years. The SN is surrounded by a three-ringed wind-blown structure that encloses a dense and complex surrounding medium. We present a hydrodynamic model of the medium surrounding SN 1987A, and the evolution of the SN shock wave within this medium. We demonstrate that our model is able to reproduce the time-evolution of the X-ray emission from SN 1987A, including detailed X-ray spectra. We then use this same hydrodynamic model to compute the gamma-ray emission from SN 1987A, and compare to observational constraints. Finally, we use recent observations of SN 1987A to predict its gamma-ray detectability in the future. (I have also submitted an abstract to the Extragalactic sources (incl. transients) track, with various co-authors).
Cosmic-Ray Mass Spectrometry and Starburst Galaxies
Even if ultrahigh-energy (E > 10^10 GeV) cosmic rays (UHECRs) are heavy nuclei (with nuclear charge Z), as indicated by existing data, cosmic rays at the highest energies are still expected to point back to the nearest extragalactic sources (at distance D), because the bending of a cosmic ray scales as BZD/E (B is the extragalactic magnetic field). In addition, the acceleration capability of the sources grows linearly in Z, while the energy loss per distance traveled decreases with increasing Z and nucleon number A. Each of these facts favors heavy nuclei as the primaries of UHECRs. A one-dimensional analysis may miss the relative importance of the phenomena depending on these variables (D, B, E, Z, A, and direction). A multi-dimensional cross-correlation (MDCC) of the individual emission spectra with nearby putative sources is needed. I will use MDCC to study the hypothesis that the primaries are heavy nuclei subject to GZK photo-disintegration, and that metal-rich starburst galaxies are by far the most plausible candidate sources: combining the 3.9-sigma significance of starburst-galaxy sources from Auger data with the significance of the hotspot derived from Telescope Array data, we arrive at a 5-sigma significance that starburst galaxies are the origin sites for UHECRs. Also, starburst galaxies possess a large density of supernovae and therefore of pulsars, and so can accelerate heavy nuclei to the hard spectrum that is needed to accommodate the Auger observations. At face value, this result provides an important step in resolving the more than 100-year-old mystery of the origin of the highest-energy cosmic rays.
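As a rough numerical companion to the BZD/E scaling above (my illustration; the field strength, distance, and energy below are assumed values, not taken from the abstract), the Larmor radius r_L = E/(ZecB) sets the deflection scale:

# Small-angle deflection estimate for a UHECR in a coherent
# extragalactic field: theta ~ D / r_L, with r_L = E / (Z e c B).
# All numbers below are illustrative assumptions.
c = 2.998e8            # speed of light, m/s
Mpc = 3.086e22         # metres per megaparsec

def larmor_radius_mpc(E_eV, Z, B_nG):
    """Larmor radius in Mpc; the factors of e cancel when E is in eV."""
    B_T = B_nG * 1e-13                 # 1 nanogauss = 1e-13 tesla
    return E_eV / (Z * c * B_T) / Mpc  # r = E / (Z e c B), e's cancelled

# proton vs. iron at 100 EeV in a 1 nG field, source at D = 3 Mpc
D = 3.0
for Z, label in ((1, "proton"), (26, "iron")):
    r_L = larmor_radius_mpc(1e20, Z, 1.0)
    theta_deg = (D / r_L) * 57.3       # small-angle estimate, degrees
    print("%s: r_L ~ %.0f Mpc, deflection ~ %.0f deg" % (label, r_L, theta_deg))

The iron case violates the small-angle assumption, which is the point: heavy nuclei only point back to nearby sources at the very highest rigidities.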
On the Anisotropy of the Arrival Directions of Galactic Cosmic Rays
Markus Ahlers
Niels Bohr International Academy, Niels Bohr Institute
The arrival directions of multi-TeV cosmic rays show significant anisotropies at large and small angular scales. I will argue that these features can be understood from standard cosmic ray diffusion. It is well-known that a large-scale dipole anisotropy is expected from a cosmic ray density gradient following the distribution of Galactic sources. However, the observed anisotropy depends on cosmic ray propagation in our local magnetic environment. The observed dipole amplitude and phase are a result of anisotropic diffusion along the local ordered magnetic field. The small-scale structures, on the other hand, are expected to arise from cosmic ray scattering in local magnetic turbulence.
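For reference (my addition, stating the standard diffusive result this argument relies on), the dipole predicted by diffusion theory is

$\boldsymbol{\delta} = \frac{3\mathbf{K}}{c}\cdot\frac{\nabla n}{n},$

where $\mathbf{K}$ is the (generally anisotropic) diffusion tensor and $n$ the cosmic-ray density; with anisotropic diffusion, the projection of the density gradient onto the local ordered field direction sets both the observed amplitude and phase.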
TeV-PeV Cosmic-Ray Anisotropy and Local Interstellar Turbulence
The shape of the large-scale cosmic-ray (CR) anisotropy depends on (and therefore contains information on) the local interstellar turbulence within ~ 10 pc from Earth. We calculate the TeV-PeV CR anisotropies predicted for a range of Goldreich-Sridhar (GS) and isotropic models of interstellar turbulence, and compare them with IceTop and IceCube data. The narrow deficits in the 400TeV and 2PeV data sets of IceTop can be fitted with a GS model that contains a smooth deficit of parallel-propagating waves and a broad resonance function, although some other models cannot, as yet, be ruled out. In particular, isotropic fast magnetosonic wave turbulence can match the observations at high energy, but cannot accommodate an energy dependence in the shape of the CR anisotropy. We discuss the impact of possible anisotropies in the power-spectrum of fast modes. Our findings suggest that data on the large-scale CR anisotropy could provide a new probe of the properties of the local turbulence. Finally, we compare our constraints with those (on bigger scales) from Planck's dust-polarization measurements.
High-energy cosmic ray nuclei from tidal disruption events: origin, survival, and implications
B. Theodore Zhang
Penn State University, Peking University
Tidal disruption events (TDEs) by supermassive or intermediate-mass black holes have been suggested as candidate sources of ultrahigh-energy cosmic rays (UHECRs) and high-energy neutrinos. Motivated by the recent measurements from the Pierre Auger Observatory, which indicate a metal-rich cosmic-ray composition at ultrahigh energies, we investigate the fate of UHECR nuclei loaded in TDE jets. First, we consider the production and survival of UHECR nuclei at internal shocks, external forward and reverse shocks, and nonrelativistic winds. Based on the observations of Swift J1644+57, we show that UHECR nuclei can survive in external reverse and forward shocks, and in disk winds. On the other hand, UHECR nuclei are significantly disintegrated in internal shocks, although they could survive in low-luminosity TDE jets. Assuming that UHECR nuclei can survive, we consider the implications of different composition models of TDEs. Tidal disruptions of main-sequence stars or carbon-oxygen white dwarfs have difficulty reproducing the observed composition or spectrum. The observed mean depth of the shower maximum and its deviation could be explained by oxygen-neon-magnesium white dwarfs, but these may be too rare to be the sources of UHECRs.
Ultra-high-energy cosmic-ray hot spots?
Daniel Pfeffer
There is tentative evidence for an ultra-high-energy cosmic-ray (UHECR) "hot spot" coming from the direction of Centaurus A, and also evidence for another hot spot that might be associated with the M81 group. Although the evidence is not firmly established, it is not at all unreasonable, given the energetics and the physical conditions in Cen A and in various galaxies in the M81 group, that these signals might be real. In this talk I will discuss how additional evidence to establish or disprove the existence of a hot spot can be obtained with some simple modeling of the energy and angular distribution expected from a UHECR source. I discuss the improvements that can potentially be made with current data and also forecast how well the parameters of a hot-spot model may be constrained with future measurements.
Impact of the Turbulent Galactic Magnetic Field on the Arrival Distributions of Ultra-High Energy Cosmic Rays
Michael Sutherland
The various coherent and turbulent components of the Galactic magnetic field have sufficiently high field strengths to alter the arrival distributions of cosmic rays, including those at ultra-high energies. I will highlight the results of a study considering a realistic GMF model including a persistent turbulent component, and rigidities $R \equiv E / Z \geq 10^{18}$ V, low enough to describe Fe nuclei at energies above 50 EeV. In addition to the large-scale Jansson-Farrar coherent field, our study investigates multiple realizations of the turbulent field; we vary the coherence length to determine its effect on the arrival distributions. For each rigidity and field model, the UHECR arrival direction distribution can be determined for an arbitrary source direction by inverting the trajectories of more than $5 \times 10^{7}$ isotropically-distributed anti-CRs of the given rigidity, which we backtrack using the public code CRT. Aspects of the arrival direction distributions are examined for dependencies on rigidity and on the properties of the turbulent field realizations. Except at high rigidity, the pattern of multiple images is very complex and depends strongly on the coherence length and source direction.
PHAESTOS: Using Galactic Magnetic Tomography to Backtrack UHECRs through the Milky Way
Vasiliki Pavlidou
The sources of the highest-energy particles in the Universe remain an unresolved mystery. The reason is that charged-particle astronomy is severely complicated by magnetic deflections, which, for sources in the local Universe, are dominated by the effect of the Galactic magnetic field. I will discuss the PHAESTOS project - a radically new approach to identifying individual sources of UHECRs: constructing a 3-dimensional map of the Galactic magnetic field through optopolarimetric magnetic tomography, and backtracking the paths that UHECRs traverse through the Galaxy before reaching us, to improve the agreement between their (corrected) arrival directions and the locations of their sources on the sky. Effectively, this technique aims to improve the charged-particle point-spread function by a factor of several, boosting the sensitivity to individual sources by a similar factor. This approach is becoming possible for the first time thanks to two experimental breakthroughs: the unparalleled wealth of stellar distances that the Gaia mission is in the process of providing; and recent advances in optopolarimetry of point sources that make systematic large-area surveys of stars possible, such as the upcoming PASIPHAE survey. The combination of Gaia and PASIPHAE data enables the construction, for the first time, of a tomographic map of the Galactic magnetic field, paving the way to ultra-high-energy cosmic-ray astronomy.
Ultra-high-energy cosmic rays from local radio galaxies
Björn Eichmann
Local radio galaxies (RGs) like Centaurus A are intensively discussed as the source of the observed cosmic rays above 3 EeV (UHECRs). In this talk a first systematic study is presented in which all observational features of the UHECRs, i.e. the energy spectrum, the chemical composition and the arrival directions, are used to place strong constraints on the UHECR contribution from the local RGs (up to a distance of about 120 Mpc from the Earth). Here, the radio luminosity of the RGs is linked to the UHECR luminosity to take the different states of the individual sources into account. Further, we also discuss the necessary contribution of non-local sources. The propagation of the UHECR candidates is performed with the publicly available code CRPropa3, using an extragalactic magnetic field (EGMF) model from cosmological MHD simulations.
Ultrahigh-Energy Cosmic-Ray Nuclei from Radio Galaxies: Recycling Galactic Cosmic Rays through Shear Acceleration
Shigeo Kimura
We propose an astrophysical scenario for ultrahigh-energy cosmic-ray production, in which galactic cosmic rays are reaccelerated by kiloparsec-scale jets in active galactic nuclei. We perform Monte Carlo simulations of transrelativistic shear acceleration dedicated to a jet-cocoon system of active galactic nuclei. A certain fraction of galactic cosmic rays in a halo is entrained, and sufficiently high-energy particles can be injected to the reacceleration process and further accelerated up to a few EeV for protons and around 100 EeV for irons. We show that the shear reacceleration mechanism leads to a hard spectrum of escaping cosmic rays, and the supersolar abundance of ultrahigh-energy nuclei is achieved due to injections at TeV-PeV energies. As a result, the highest-energy spectrum and mass composition can be reasonably explained without contradictions with the anisotropy data.
The milliQan Experiment
In this talk, I will present the status and plans of a new dedicated experiment called milliQan that we propose to install at LHC Point 5. It is designed to be sensitive to particles produced in pp collisions that have EM charges ranging from 0.001 e to 0.1 e, as can arise in a variety of beyond-the-standard model scenarios.
Strongly Interacting Dark Matter at Fixed-Target Experiments
One interesting class of models involves dark matter as the lightest state of a strongly interacting hidden sector, similar to the pions of QCD. In this talk, I will examine the possibility that the lightest vector resonances of the hidden sector are nearby in mass and accessible within the current operating energy of fixed-target experiments. These states significantly modify processes in the early universe and give rise to striking signals at low-energy accelerators, involving missing energy and displaced pairs of leptons.
ν Hopes for New Physics: Probing Weakly Coupled States at Neutrino Detectors
In this talk we discuss the sensitivity of probing light bosons in the Borexino-SOX experiment, and the possibility of detecting heavy leptons in the SHiP and DUNE experiments. Bringing an external radioactive source close to a large underground detector can significantly advance sensitivity not only to sterile neutrinos but also to "dark" gauge bosons and scalars. Here we address in detail the sensitivity reach of the Borexino-SOX configuration, which will see a powerful (a few PBq) 144Ce−144Pr source installed next to the Borexino detector, to light scalar particles coupled to the SM fermions. The mass reach of this configuration is limited by the energy release in the radioactive γ-cascade, which in this particular case is 2.2 MeV. Within that reach, one year of operations will achieve an unprecedented sensitivity to the coupling constants of such scalars, reaching down to g∼10^{−7} levels and probing significant parts of parameter space not excluded by either beam dump constraints or astrophysical bounds. Should the current proton charge radius discrepancy be caused by the exchange of a MeV-mass scalar, then the simplest models will be decisively probed in this setup. We also update the beam dump constraints on light scalars and vectors, and in particular rule out dark photons with mass below 1 MeV and coupling constants ϵ≥10^{−5}. We then move on to briefly discuss the possibility of utilizing the SHiP and DUNE experiments to probe long-lived heavy leptons.
GPS Timing Synchronization: Characterization and Spatial Correlation
Rob Halliday
Case Western Reserve University, Dept. of Physics
Nanosecond precision timing synchronization via the Global Positioning System has become a common technique for a variety of particle physics and astrophysics experiments including, for example, large arrays of detectors for cosmic ray air showers. By using the common time-standard in GPS, time synchronization can be achieved at low cost, even over large areas in remote locations. However, in principle, synchronization accuracy is limited by atmospheric effects, especially over large distances. Here we present a new measurement of the accuracy of GPS timing synchronization, particularly as a function of distance between two receivers.
Measuring gravitational effects on antimatter in space
Giovanni Maria Piacentino
University of Molise, INFN - National Institute for Nuclear Physics
A direct measurement of the gravitational acceleration of antimatter has never been performed to date. Recently, such an experiment has been proposed, using antihydrogen with an atom interferometer, and antihydrogen confinement has been achieved at CERN. As an alternative, we propose an experimental test of the gravitational interaction with antimatter by measuring the branching fraction of the CP-violating decay of the KL in space. We show that at the altitude of the International Space Station, gravitational effects may change the level of CP violation such that a 5-sigma discrimination may be obtained by collecting the KL produced by the cosmic proton flux within a few years.
Light scalar at the high energy and intensity frontiers
Yongchao Zhang
In the minimal left-right symmetric model, which can accommodate the tiny neutrino masses via a TeV-scale seesaw mechanism, the neutral scalar from the right-handed symmetry-breaking sector could be much lighter than the electroweak scale. Constrained by meson oscillation and decay data, such a light particle is necessarily long-lived and decays predominantly into two photons, mediated by the heavy $W_R$ boson. It could be searched for at the LHC and in intensity-frontier experiments via (displaced) photon signals, if its mass is of order the GeV scale. This provides a unique test of TeV-scale left-right models and the seesaw mechanism.
Overview of dark matter & exotics searches and prospects at the ATLAS experiment
Despite the success of the Standard Model, crowned by the recent discovery of the Higgs boson, the large excess of dark matter in the Universe remains one of the outstanding questions in science. This excess cannot be explained by Standard Model particles. A compelling hypothesis is that dark matter is composed of particles that can be produced at the LHC, called Weakly Interacting Massive Particles (WIMPs). This talk presents a number of ATLAS searches for WIMP dark matter, outlining the main theoretical benchmarks and issues in terms of complementarity with direct and indirect detection experiments, and presents the prospects for dark matter searches at future LHC runs.
Searches for dark matter beyond mono-jets at the ATLAS experiment
Rui Wang
Dark matter can be sought in complementary experiments: direct detection, indirect detection and colliders all contribute to a comprehensive set of searches for weakly interacting massive particles (WIMPs). This talk presents the searches for dark matter by the ATLAS experiment in the context of this complementarity, using models that include a mediator particle between the SM and DM.
Searches for supersymmetric dark matter candidates and long-lived particles with the ATLAS experiment
Edoardo Maria Farina
Universita e INFN, Pavia
This talk will cover searches for neutralinos and electroweakinos.
Searches for BSM Higgs bosons at ATLAS, including rare and invisible decays
Gustavo Otero Y Garzon
Several theories beyond the Standard Model predict the existence of high mass neutral or charged Higgs particles or BSM decay modes of the 125 GeV Higgs boson. In this presentation, the latest ATLAS results on searches for these particles will be discussed.
IAS Princeton
Julia Becker Tjus
Ruhr University Bochum
Emma de Oña Wilhelmi
Johns Hopkins University
Stanford / SLAC
Northwestern U.
Miguel Mostafá
University of Geneva
TeVPA will take place in two excellent venues, both located in downtown Columbus. They are within easy walking distance from each other, as well as from the conference hotel. The area is also within walking distance of multiple alternative hotels, restaurants, as well as from the Short North, the liveliest part of town. TeVPA will feature extended two-hour lunch breaks, providing participants with plenty of time to experience Columbus while moving between conference venues.
Davidson Theatre
Morning plenary sessions will be held in the Davidson Theatre, a 903-seat facility located in the Vern Riffe Center for Government and the Arts. Named after Jo Ann Davidson, the first female speaker of Ohio's House of Representatives, the theatre is located directly across from the Ohio Statehouse and opened in 1989.
Coffee and snacks will be provided each morning, during a break in the plenary sessions. The theatre is equipped with wireless internet, which will be accessible to all attendees throughout the conference.
Parking near the theatre is available at an extra cost. The nearest parking garages are the Columbus Commons Parking Garage on 3rd Street and Rich Street, or the Statehouse Parking Garage on 3rd Street. More information can be found here.
The Columbus Athenaeum
Afternoon parallel sessions will be held at the nearby Columbus Athenaeum, a renovated Masonic Temple built in 1896 in the heart of downtown Columbus. Since its restoration in 1996, the Athenaeum has stood as a premier location for conferences, weddings, and theatrical performances.
Parallel sessions will be held in the centrally located Small Theater, Corinthian, Macedonian, Spartan, and Athenian Rooms. Coffee and light snacks will be provided to all conference attendees in the mid-afternoon in the Oak Room. We have arranged to have a family room available during the parallel sessions. The Athenaeum is equipped with wireless internet throughout, which will be accessible to all attendees.
Parking near the Athenaeum is available at an additional cost. The closest parking facility is the 4th Street & Elm Garage. More details can be found here.
Columbus is a major US city, with a population of nearly 850,000 and a metro area exceeding 2 million residents. It is part government-center -- established as the state capital of Ohio in 1838. It is part college-town -- home to the largest state-school in the United States, The Ohio State University. It is home to a wide variety of attractions -- including a major zoo, art museums, historical theatres, and a number of sports teams.
Columbus is easily accessible by car, located within one day's drive of half the US population. For international attendees, Columbus has an international airport (CMH), which is only a 20-minute drive from downtown.
The summer weather in Columbus is pleasant and the culture is friendly. Accommodations and dining options are plentiful and highly affordable. Columbus is notable for being one of the most LGBT-friendly cities in the US.
Below, we have selected some of our personal highlights around Columbus, including our favorite places to eat, drink, and visit. Almost all of these venues are either within walking distance, or have accessible mass-transit options. Of course, these are our personal opinions -- for more complete information we recommend that you Experience Columbus
The Short North
For dining options, we highly recommend the Short North, which is located about a half mile north of the conference hotel along High Street. High Street is the primary artery of Columbus, connecting its most important shopping, business, and artistic neighborhoods. In addition to fine dining, the Short North also hosts numerous bars, art galleries, and specialty shops. Jeni's Splendid Ice Creams is a beloved local brand and ice-cream shop, which all participants should visit on at least one occasion.
The Short North is easily accessible from the conference hotel and both conference venues. A free bus -- known as the CBUS -- operates along High Street every 15 minutes from 07:00 - 21:00 on Monday-Thursday, 07:00 - 24:00 on Friday, 09:00 - 24:00 on Saturday, and 10:30 - 18:00 on Sunday. The CBUS route extends from German Village -- south of Downtown -- to the Northern portion of the Short North. Details of the route can be found here. In addition to the free CBUS, the public bus system (COTA) has routes through the downtown area, most at a reasonable price of $2 for a one-way journey. More information on the Columbus bus system can be found here.
In addition to the Short North, we recommend a number of historic Columbus communities. Highlights include: German Village -- located just south of downtown and easily accessible via the CBUS -- and Franklinton, located directly across the Scioto River and hosting an up-and-coming art community.
We have assembled a number of dining options, both within the Short North as well as in other historic Columbus neighborhoods. We have personally visited and approved of these dining options. More details can be found here.
The Ohio State University is located approximately 2.5 miles (4 km) north of downtown, and is home to 58,322 students on the main campus, and over 100,000 students including the branch campuses. The "Oval" is pictured on the right, and serves as a central meeting place for student life.
Research relevant to TeVPA is split between the Department of Physics and the Department of Astronomy. The Physics Department is located in the recently constructed Physics Research Building, and currently hosts 56 faculty members, along with numerous post-doctoral fellows, graduate students and undergraduates. The Department of Astronomy is located in the nearby McPherson Laboratory and is home to an additional 18 faculty along with post-doctoral fellows, graduate students and undergraduates.
The Center for Cosmology and AstroParticle Physics, hosted within the Department of Physics, aims to build upon the unique environment between these departments, in order to facilitate collaborative research at the interface of cosmology, astrophysics, and high-energy physics. CCAPP currently includes 25 faculty, 10 postdoctoral fellows, along with a number of graduate students and scholars.
Columbus is home to a flourishing art scene, including a number of historical theatres, such as the 3000-seat Ohio Theatre shown on the left. These theaters, run by the non-profit CAPA organization, also include the Davidson Theatre, where the plenary sessions for TeVPA 2017 will be held. A complete schedule of events associated with CAPA can be found here.
In addition to live performances, attendees may also be interested in the Columbus Crew, a major-league soccer team. In particular, the Columbus Crew will be hosting the Chicago Fire on August 12 at the Mapfre stadium, approximately 3 miles (5 km) from downtown. The Columbus Clippers, a minor-league baseball affiliate of the Cleveland Indians, will be hosting the Toledo Mud Hens on August 6 at Huntington Park, which is within walking distance of the conference hotel.
Finally we note some cultural events, such as the Festival Latino, which will take place the weekend of August 12 and August 13, and is easily accessible from downtown Columbus.
Niels Bohr Institute
Amy Connolly
Sergio Palomares-Ruiz
KICP
Marco Ajello
LANL
Yoshiyuki Inoue
Eric Dahl
Ben Safdi
LUPM
Galactic sources
Hong Kong University
CfA Harvard
Jay Gallagher
Extragalactic sources
Tuguldur Sukhbold
Pavel Fileviez Perez
Tongyan Lin
Haibo Yu
Dragan Huterer
Gordan Krnjaic
Co-chair
Jim Beatty
Annika Peter
Felix Aharonian (MPI Heidelberg)
Laura Baudis (U. Zurich)
John Beacom (OSU)
Lars Bergström (Stockholm U.)
Gianfranco Bertone (GRAPPA)
Elliott Bloom (Stanford)
Marco Cirelli (LPTHE, Paris)
Joakim Edsjö (Stockholm U.)
Jonathan Feng (UCI)
Gian Giudice (CERN)
Sunil Gupta (TIFR)
Francis Halzen (WIPAC)
Dan Hooper (Fermilab)
Olga Mena (IFIC)
Subir Sarkar (Oxford, NBI)
Tim Tait (UCI)
Masahiro Teshima (ICRR)
Zhang XinMin (IHEP)
Center for Cosmology and AstroParticle Physics (CCAPP)
Adapted from the code of conduct of the JINA Center for the Evolution of the Elements
TeVPA 2017 is a community event intended for networking and collaboration as well as learning. We value the participation of every attendee and want all attendees to have an enjoyable and productive experience. Accordingly, all attendees are expected to show respect and courtesy to other attendees throughout the workshop and to abide by the following Code of Conduct. Any participant who wishes to report a violation of this policy is encouraged to speak to any of the conference organizers. Thank you for helping make this a welcoming, friendly event for all.
TeVPA is committed to providing a harassment-free environment for everyone, regardless of gender, sexual orientation, disability, physical appearance, body size, race, nationality, or religion.
Harassment includes offensive verbal comments or jokes related to gender, sexual orientation, disability, physical appearance, body size, race, religion, sexual images in public spaces, deliberate intimidation, stalking, following, harassing photography or recording, sustained disruption of talks or other events, inappropriate physical contact, and unwelcome sexual attention. All communication should be appropriate for a professional audience including people of many different backgrounds. Be kind to others. Do not insult or put down other attendees. Behave professionally. Participants asked to stop any harassing behavior are expected to comply immediately. Attendees violating these rules may be asked to leave the event at the sole discretion of the conference organizers.
The AAS, APS, and SKA Collaboration have produced detailed statements laying out the standards of our community as pertains to bullying and harassment of others. We will follow these guidelines in all conference activities. Full statements can be found on the AAS, APS, and SKA websites.
Understanding Phase-Locked Loops
An in-depth look at a fascinating technique
— P. Lutus —
Copyright © 2011, P. Lutus
Introduction | Detailed Description | Software PLL I | Software PLL II | Software PLL III | Application Example | Reader Feedback | References
Figure 1: Printing a weather chart while underway,
somewhere in the Pacific (1988)
Those who regularly read my technical articles may recognize the first theme I want to address — the rapid convergence of mathematics and everyday reality. In the pre-computer days of electronic technology, one would use mathematics to plan a project, then build and test a system that represented an approximate embodiment of the original math. For example, in the 1970s when I designed Space Shuttle electronics, the final flight hardware always reflected the intent of the original mathematics, but the distance between theory and practice was often large.
Now that computer methods have taken hold, in radio receiver and signal processor designs we're witnessing a rapid convergence of principles and practice. One reason is the gradual replacement of analog with digital circuits, another factor is the degree to which microprocessors now create in software what had once required explicit, single-purpose circuits. Because of the low cost and high speed of modern microprocessors, it no longer makes sense to consider most analog designs, and it's becoming practical to write signal processing code directly in software.
One of my recent projects is a software-based receiver/processor of slow-scan video images of nautical weather charts transmitted over the shortwave bands (a project named JWX). I have always needed weather charts, dating back to my around-the-world sail (1988-1991) and extending to the present, for my Alaska boat expeditions. In years past I used a demodulator box to decode shortwave weatherfax transmissions and print the result on paper (see Figure 1), but I recently realized that, because of great speed improvements, a modern laptop with a sound card should be able to perform the entire task in software, thus eliminating a single-purpose black box.
Figure 2: Phase-locked loop
In the weatherfax project, one of the key design issues was to convert a range of audio tones into a video signal, essentially FM detection. During a lengthy design and testing phase I evaluated most known methods for FM demodulation, beginning with a crude method that counted clock cycles between zero crossings, then a system of bandpass filters, and finally I designed a phase-locked loop detector. The phase-locked loop approach turned out to be vastly superior to the other methods, to the degree that I want to describe the method in detail, so others won't pass up this terrific approach. I'll have more to say about the JWX project at the end of this article, but first let's discuss phase-locked loops.
At its most basic, a phase-locked loop (hereafter PLL) compares the frequency of a local reference oscillator to that of a received signal, and uses a feedback scheme to lock the local oscillator's frequency to the incoming signal (see Figure 2).
At this point, one application for a PLL should be obvious — the oscillator control signal's amplitude is proportional to the difference between the incoming signal's frequency and the local oscillator's free-running frequency, therefore it represents demodulated FM. But unlike typical FM detectors and with reasonable care in PLL design, the oscillator control signal can be a near-perfect duplicate of the original modulating signal, suitable for high-fidelity music, scientific telemetry, video, and other demanding requirements.
PLLs are very good FM detectors, but by changing feedback parameters, they can also detect very weak signals buried in noise — indeed a PLL is the preferred approach for detecting weak signals, as from a deep space probe. For this application the loop low-pass filter (Figure 2, green) is adjusted to allow through only a very small bandwidth (to reject noise) and the reference oscillator is tuned to the expected signal frequency. In this configuration and given adequate integration time, a PLL can detect and track a signal 40 db below accompanying noise.
I caution my readers that a PLL can't fill both the above roles at once. It can demodulate FM signals with very high accuracy and reliability, or it can detect signals buried in noise, but it can't do both in a single configuration, because the two tasks require very different setups and assumptions. In its role as an FM detector, a PLL doesn't reject noise very efficiently, and as a weak-signal detector, it can't decode FM very efficiently. The reason should be obvious — to detect FM modulation of a given bandwidth, the PLL's feedback loop low-pass filter (Figure 2, green) must be opened up enough to allow the modulation's bandwidth to pass unimpeded, but this causes the PLL to become more susceptible to noise.
This section explains each block of the PLL diagram shown in Figure 2.
See Figure 2, blue square. It turns out that, because of how the phase detector works, the incoming signal level is critical to a PLL's performance, and in some cases one may want to consider an automatic gain control scheme. If the incoming signal drops below a certain level, the PLL will go out of lock, even in the absence of noise. If the incoming signal rises above a certain level, the possibility exists that some combinations of low-pass filter and free-running frequency settings will cause the PLL to become unstable (more on this below). The reason for these effects is that the overall feedback loop gain is proportional to the incoming signal level. Consequently, some designs may require some kind of level control, and rigorous testing under a wide range of input conditions is very desirable.
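One possible answer to the level-control problem just described is a simple normalization stage; the following is my own sketch, assuming a slow running-RMS estimate (it is not part of any PLL design discussed here):

from math import sqrt

class SimpleAGC:
    """Normalize a sample stream to roughly unit peak amplitude
    using a slow running mean-square estimate."""
    def __init__(self, rate=0.001, floor=1e-6):
        self.rate = rate    # adaptation rate, fraction per sample
        self.floor = floor  # avoids division by zero on silence
        self.ms = 1.0       # running mean-square estimate
    def __call__(self, x):
        self.ms += self.rate * (x * x - self.ms)
        # a unit-amplitude sine has RMS 1/sqrt(2); rescale to peak ~ 1
        return x / max(sqrt(self.ms) * sqrt(2), self.floor)

A stage like this, placed ahead of the phase detector, keeps the overall loop gain roughly independent of the incoming signal level.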
Phase detector
Figure 3: Phase detector analysis
See Figure 2, red circle. The most common kind of PLL phase detector is very simple — it multiplies the incoming signal and the reference oscillator signal together. For two sinewaves with frequencies a and b, the product contains only the sum and difference frequencies (see Figure 3):

$\sin(2 \pi a t) \times \sin(2 \pi b t) = \frac{1}{2}\left[\cos\left(2 \pi (a-b)\, t\right) - \cos\left(2 \pi (a+b)\, t\right)\right]$
While the PLL is not in lock, in most cases it's trivial to discriminate between the a+b and a-b components (see Figure 3). Once the PLL goes into lock, once the signal and reference frequencies are the same, the low-pass filter (to be discussed next) is expected to discriminate between twice the signal/reference frequency and the expected range of modulation frequencies, which at that point will appear as amplitude variations.
I want to emphasize about the somewhat confusing Figure 3 that neither a nor b appear in the phase detector output, only a+b and a-b (only the red lines). This in essence means that, while the PLL is in lock, the low-pass filter only needs to discriminate between $2 \times a$ and the desired range of modulation frequencies.
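The claim that only the sum and difference frequencies survive is easy to verify numerically; here is a short sketch in the style of this article's other examples (it assumes NumPy is available, and the frequencies are arbitrary choices):

from numpy import arange, cos, pi
from numpy.fft import rfft

sample_rate = 8000.0
t = arange(int(sample_rate)) / sample_rate   # one second of samples
sig = cos(2 * pi * 2000 * t)                 # incoming signal, 2000 Hz
ref = cos(2 * pi * 1950 * t)                 # reference oscillator, 1950 Hz
product = sig * ref                          # phase detector output

spectrum = abs(rfft(product))                # bin k corresponds to k Hz here
peaks = sorted(sorted(range(len(spectrum)), key=lambda k: spectrum[k])[-2:])
print(peaks)                                 # [50, 3950]: only a-b and a+b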
Remember also that the modulation amplitude level present at the phase detector output represents a phase difference between the incoming and reference signals. In most cases, once that phase difference exceeds ± 90°, the PLL will go out of lock.
Loop low-pass filter
Figure 4: Low-pass filter response/phase shift
for a typical biquadratic filter
(vertical scale is both response %
and phase shift in degrees)
See Figure 2, green square. The loop low-pass filter discriminates between twice the signal/reference frequency and the desired modulation frequencies, but it can also be used to limit the PLL lock range and susceptibility to noise. It is not uncommon that the optimal filter setting for the PLL feedback loop, and for the output signal, are different, so in many cases it is desirable to have two low-pass filters, one to set the PLL feedback loop dynamics, another to treat the output signal (Figure 2, yellow square).
A typical low-pass filter introduces some phase shift, and this has implications for feedback loop stability. In Figure 4 we see that, for a typical biquadratic low-pass filter, the phase shift at the cutoff frequency (the -3 db point) is 90°. Remember about this result that, if the PLL feedback loop gain exceeds unity at frequencies above the cutoff frequency (greater than 90° phase shift), the loop will become unstable. Read this article for more about biquadratic filters.
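For reference, a biquadratic low-pass stage needs only a few lines of code. Below is a minimal direct-form sketch using the widely published Audio-EQ-Cookbook low-pass coefficients (an assumption on my part, since this article's own filter module, linked in the next section, may be organized differently):

from math import cos, pi, sin

class BiquadLowpass:
    """Direct Form 1 biquadratic low-pass (RBJ cookbook coefficients)."""
    def __init__(self, cutoff, sample_rate, q=0.707):
        w0 = 2 * pi * cutoff / sample_rate
        alpha = sin(w0) / (2 * q)
        a0 = 1 + alpha                       # normalization factor
        self.b0 = (1 - cos(w0)) / 2 / a0
        self.b1 = (1 - cos(w0)) / a0
        self.b2 = self.b0
        self.a1 = -2 * cos(w0) / a0
        self.a2 = (1 - alpha) / a0
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0
    def __call__(self, x):
        y = (self.b0 * x + self.b1 * self.x1 + self.b2 * self.x2
             - self.a1 * self.y1 - self.a2 * self.y2)
        self.x1, self.x2 = x, self.x1
        self.y1, self.y2 = y, self.y1
        return y

An instance of such a class can be called exactly like the loop_lowpass() and output_lowpass() functions that appear in the excerpts below.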
Frequency-modulated reference oscillator
See Figure 2, purple square. The implementation details of the reference oscillator can make or break a PLL design, but because of the flexibility of the software approach, there is little basis for comparing pure software-based and old-style analog and digital PLL designs.
The reference oscillator is essentially a frequency-modulated signal generator whose frequency is controlled by a feedback signal emanating from the phase detector / low-pass filter. So we should first study how one frequency-modulates a signal. Here is a classic FM generator equation:
(1) $\displaystyle fm(t) = \cos\left(2 \pi\ f_c\ (t + \int m(t)\ dt) \right)$
t = time, seconds
fm(t) = frequency-modulated signal
fc = center frequency
m(t) = time-varying modulation signal, -1 <= m(t) <= 1
Figure 5: The result for the adjacent code
Don't be intimidated by the integral sign in equation (1) above — a practical embodiment is quite simple. Below is a complete Python program listing that solves the above equation — it generates an FM signal and plots the result (click here for plain-text source):
from pylab import *
sample_rate = 1000.0
cf = 16 # carrier frequency
mf = 2 # modulation frequency
mod_index = .5 # modulation index
fm_int = 0 # FM integral
dt = []
dfm = []
for n in range(int(sample_rate)):
    t = n / sample_rate # time seconds
    mod = cos(2 * pi * mf * t) # modulation
    fm_int += mod * mod_index / sample_rate # modulation integral
    fm = cos(2 * pi * cf * (t + fm_int)) # generate FM signal
    dt.append(t)
    dfm.append(fm)
plot(dt,dfm)
ylim(-1.2,1.2)
gcf().set_size_inches(4,3)
savefig('simple_fm_generator.png')
show()
In just a few lines, this listing produces what would be the most difficult part of a conventional (non-software) PLL design — a well-defined, controllable reference oscillator with few practical limitations. It also shows the advantage of the software design approach over the analog and integrated-circuit approaches — using the older methods, achieving the above result would lie somewhere between difficult and impossible.
Also, because it's the model for all subsequent programs in this article, note the details of the program listing. The basic idea is that a counter represents small slices of time and governs the conversion process. This arrangement is fundamental to modern signal processing, where (in a practical software radio) an analog-to-digital (A/D) converter is the signal source, delivering signal samples on a regular time schedule, and the "sample rate" referenced in the above listing is the A/D converter's clock rate.
Output low-pass filter
See Figure 2, yellow square. As mentioned earlier, it is often desirable to treat the feedback loop dynamics and the output differently. For example, one may wish to adjust the loop low-pass filter (Figure 2, green box) to allow the PLL to aggressively track a fast-changing video signal, but filter the output signal using a different profile, one that might prevent the PLL from successfully tracking the input signal if there was just one filter. The output low-pass filter isn't essential to the operation of the PLL, but it often produces a more acceptable result.
Software PLL I
Figure 6: PLL medium sweep-frequency test
In this section we will evaluate a typical software PLL with medium bandwidth, using an example written in Python.
Click here for a plain-text Python source file for this section's program.
Click here for a required additional module used to create the biquadratic filters.
This example generates a frequency-sweep test signal, mixes in some noise for realism, then applies the test signal to a phase-locked loop. (The horizontal scale of Figure 6 is the frequency of the test signal.) With respect to Figure 6:
The blue trace is the PLL transfer function, the PLL loop control signal that represents the demodulated FM signal. Note that within the lock range (from 1950 to 2050 Hz) it is perfectly straight, showing that it essentially duplicates the original modulation.
The green trace is a quadrature reference (a derivative of the PLL reference oscillator shifted by 90°) multiplied by the input signal. Unlike the PLL loop control signal, the quadrature result is at a maximum at the free-running frequency (2000 Hz). In this example, the quadrature result is used to detect the signal's presence and the lock state of the PLL.
The red trace is a logical level derived from the quadrature reference. This signal is used to unambiguously indicate the PLL lock state.
When the incoming signal's frequency equals the PLL free-running frequency, the PLL will be in lock, and there will be a 90° difference between the signal and the PLL reference oscillator. This in turn means the loop control average level is zero. If we need to detect the signal's presence and level, we need to produce a derivative of the PLL reference oscillator, shifted by 90° (a "quadrature reference") and perform another phase detection with this derived reference. The quadrature reference can be acquired with a difference equation:
$r_q = (r[n] - r[n-1]) \frac{\text{sample rate}}{2 \pi f}$
rq = Quadrature reference
r[n] = Present reference oscillator sample
r[n - 1] = Prior sample
f = Reference oscillator free-running frequency
This isn't the only way to get a quadrature reference — in a case where there are no speed or resource limitations, one can simply generate two parallel trigonometric reference signals (sin() and cos()) instead of one. That approach is easier to create and understand, but it requires more computer power.
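In terms of the variable names used in the excerpt below, that two-reference variant might look like this (a sketch of the idea, not this article's code):

ref_i = cos(2 * pi * pll_cf * (t + pll_integral))   # in-phase reference
ref_q = -sin(2 * pi * pll_cf * (t + pll_integral))  # quadrature reference, 90 degrees ahead
pll_loop_control = test_sig * ref_i * pll_loop_gain # phase detector, as before
pll_lock = lock_lowpass(-ref_q * test_sig)          # lock sensor, no difference equation needed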
Notice that the green trace in Figure 6 has a dome shape within the lock region. This results from the fact that the phase relationship between the signal and quadrature reference is -90° at the left border of the lock region, 0° at 2000 Hz, and +90° at the right border of the lock region.
Here is a short excerpt from the full source listing, showing the details of the PLL and quadrature code:
pll_loop_control = test_sig * ref_sig * pll_loop_gain # phase detector
pll_loop_control = loop_lowpass(pll_loop_control) # loop low-pass filter
output = output_lowpass(pll_loop_control) # output low-pass filter
pll_integral += pll_loop_control / sample_rate # FM integral
ref_sig = cos(2 * pi * pll_cf * (t + pll_integral)) # reference signal
quad_ref = (ref_sig-old_ref) * sample_rate / (2 * pi * pll_cf) # quadrature reference
old_ref = ref_sig
pll_lock = lock_lowpass(-quad_ref * test_sig) # lock sensor
logic_lock = (0,1)[pll_lock > 0.1] # logical lock
In this excerpt, the "output" variable produces the blue trace in Figure 6, "pll_lock" produces the green trace, and "logic_lock" produces the red trace.
This example has shown a typical PLL application — a relatively narrow, well-defined lock range, a low-pass filter set up to minimize the effect of noise, and an essentially perfect linear response in the lock range.
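For completeness, the sweep test behind Figure 6 needs only a slowly ramped test frequency; here is a condensed sketch of the idea (the linked program has the full details, and the variable names here are my own):

from math import cos, pi
from random import random

sample_rate = 8000.0
noise_level = 0.1
f0, f1 = 1800.0, 2200.0                        # sweep limits, Hz
duration = 10.0                                # sweep time, seconds
sweep_integral = 0.0
for n in range(int(sample_rate * duration)):
    t = n / sample_rate
    f = f0 + (f1 - f0) * t / duration          # instantaneous frequency
    sweep_integral += f / sample_rate          # running phase integral
    test_sig = cos(2 * pi * sweep_integral) + noise_level * (random() * 2 - 1)
    # ... feed test_sig to the PLL code shown in the excerpt above ...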
Software PLL II
Figure 7: PLL wide sweep-frequency test
In this section we will evaluate a software PLL with very wide bandwidth.
This example is meant to show that a PLL can be configured to have very wide lock bandwidth, if the communications channel has little noise and the signal has consistent amplitude. Figure 7 shows the transfer function for this example. The test signal (frequency on the horizontal axis) is swept from 0 Hz to 4000 Hz, and the PLL remains locked to the input signal in the range 100 to 4000 Hz.
Notice in Figure 7 that the PLL loop control signal is perfectly linear over the entire lock range. This is expected in a PLL, but it may come as a surprise to those who have wrestled with other kinds of FM detectors.
An examination of the Python source reveals some surprising things about this example: there is no loop low-pass filter, only an output low-pass filter, and the loop gain is set to 8.0. One might expect such a high loop gain to cause instability, but the absence of a loop filter and its associated phase shift prevents this.
This example represents one extreme of possible PLL configurations, meant to maximize lock bandwidth. It also shows how little code is required to create a PLL — in this case, only four lines.
Software PLL III
Figure 8: PLL -40db S/N signal-detection test
In this section we will evaluate a software PLL with very narrow bandwidth, meant to recover a signal buried in noise.
This represents the opposite extreme of the prior example. In this example we intentionally use a very narrow acceptance bandwidth and monitor the quadrature detector's output, using a test signal with a -40 db (amplitude) signal/noise ratio. Expressed another way, in this test the noise level is 100 times the signal level.
This experiment is meant to show that a PLL can reliably detect weak signals in the presence of high noise levels. The tradeoff is that the bandwidth is restricted and the required detection time can be long. If the detected signal carries information, the data rate must be very low to avoid being overwhelmed by noise.
To get a sense of the difficulty involved in successfully detecting a signal with a -40 db signal/noise ratio, go to my signal generator page (create a separate browser tab if you can) and make the following settings:
Signal: sine, 1000 Hz, level 1%.
Modulation: disabled.
Noise: level 10%.
Notice about the signal generator that you can position your mouse cursor over the controls and spin your mouse wheel to change settings — there's no need to type in numbers. And it may be necessary to increase your computer sound system's volume level to hear the generated signal.
At a signal/noise ratio of 1/10 (-20 db), a person with normal hearing can just make out the signal in the noise. Now leave the signal level at 1% and increase the noise level — increase it to 20%, then 30%. At 30% with great care, a person with excellent hearing can just make out the 1000 Hz signal. Now consider that this section's PLL example can reliably detect the signal with a 1/100 signal/noise ratio (i.e. -40db). This is why a PLL is the preferred way to detect weak signals.
In this test program, we have set a very low PLL loop filter cutoff frequency of 0.06 Hz and a loop gain of 0.00003. Remember about PLL loop dynamics that, to assure stability, a relatively low filter cutoff frequency must usually be accompanied by low loop gain.
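For orientation, even a first-order low-pass shows how such a low cutoff translates into integration time (this is my illustration; the linked example presumably uses the same biquadratic filter module as the earlier programs):

from math import exp, pi

class OnePoleLowpass:
    """First-order IIR low-pass: y += a * (x - y)."""
    def __init__(self, cutoff, sample_rate):
        # coefficient derived from the -3 db cutoff frequency
        self.a = 1.0 - exp(-2 * pi * cutoff / sample_rate)
        self.y = 0.0
    def __call__(self, x):
        self.y += self.a * (x - self.y)
        return self.y

# at 8000 samples/second, a 0.06 Hz cutoff gives a ~ 4.7e-5,
# i.e. a time constant of roughly 2.7 seconds of integration
lp = OnePoleLowpass(0.06, 8000.0)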
Figure 9: Biquadratic bandpass filter
detector transfer function
Here are more details about the JWX weatherfax decoder project I mentioned earlier, during which I found that a PLL was a much better FM demodulator than its alternatives. Immediately prior to considering a PLL, I had designed a system relying on biquadratic bandpass filters that did a reasonable job of demodulating the FM data from shortwave signals (see Figure 9). But there were some unsolved problems:
An FM demodulator system using bandpass filters tends to be sensitive to amplitude as well as frequency — this is a drawback in the presence of noise.
Because of the specifics of video detection, it's desirable to have a consistent relationship between the input signal's frequency and the output levels, regardless of the signal's amplitude. This was difficult to achieve using bandpass filters.
It would have been nice to be able to change the frequency of the detector without having to reoptimize the filter transfer function, but this turned out to be difficult.
Figure 10: Locally-generated video test image,
demodulated by JWX after conversion to a PLL system
(Click image for full size)
The PLL approach resolved all these issues, indeed once I became familiar with the methods and a handful of design principles, the system began to produce results I had not expected to achieve (see Figure 10), and that, strictly speaking, are more than adequate for shortwave weatherfax service.
Integer math
For the past few days, I've been looking at your discussions on software PLLs. You've done a great job by simplifying a concept that would normally be out-of-reach of average software hackers like myself. Thanks for putting in the time to write it all up so clearly!

You're welcome — I'm glad the article is clear enough to help.

I'm hoping to implement a software PLL based on your presentation. However, my project will need to use fixed point math running on an ARM microcontroller. I can translate your python into C++ fairly directly, but I'm scratching my head on what to do with "t" (as in time).
Your PLL examples run over a finite interval of t=0 to t=duration*sample_rate, with t increasing by 1/sample_rate for each iteration. However, in an actual implementation, one would like the PLL to run forever, which means the code needs to be massaged to avoid overflowing the numeric representation of t.
The line in question is this one: [snip code example]
I haven't coded this yet, but I think it will work. It seems like a hack though, so I thought I'd ask if you can suggest an alternative.

Years ago, in the late 1970s, I wrote a lot of code for the earliest Apple II, before it had any floating-point ability (picture of my Apple II). I wrote graphics programs, music programs, all sorts of things. And all of them used trigonometric functions. Remember for what follows that my Apple II had four kilobytes of RAM. No, you didn't misread that — four kilobytes total system RAM.
To perform trig calculations I created a lookup table of sines from 0 to 90 degrees packed into 65 bytes (a deliberately small range to save memory space). Then I accepted an argument between 0 and 256 (256 represented 360 degrees) and placed my output in the correct quadrant with some simple code (see attachment).
As far as time is concerned, I thought in binary. A counter never represented units of 1/1000 of a second; it was always 1/256th or 1/65536th of a second, meaning my counters always represented time as 2^-n seconds, for some arbitrary n. That way, when the counters wrapped around, nothing happened — things kept working.
I have attached a simple Python script that creates, and then uses, a small sine lookup table to generate trig values. It's set up so that its input and output arguments fit into a single signed or unsigned byte (0-256 translates to 0-360 degrees, and the output lies between -127 and 127). Here's a plot of two cycles of its output (arguments between 0 and 512):
[Figure: integer_trig.png, two cycles of the integer sine output]
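Since the attachment itself isn't reproduced here, the following is a minimal reconstruction of the scheme described above (a 65-entry quarter-wave table, 256 steps per circle, outputs in -127..127); treat it as an illustration rather than the original script:

```python
import math

# 65 entries cover 0..90 degrees inclusive: entry k = sin(k * 90/64 degrees)
SINE_TABLE = [round(127 * math.sin(math.pi / 2 * k / 64)) for k in range(65)]

def isin(a):
    """Integer sine: a is an angle in 1/256ths of a circle; result -127..127."""
    a &= 0xFF                        # wrap the argument onto one full circle
    quadrant, index = divmod(a, 64)  # pick the quadrant and offset within it
    if quadrant == 0:
        return SINE_TABLE[index]
    if quadrant == 1:
        return SINE_TABLE[64 - index]
    if quadrant == 2:
        return -SINE_TABLE[index]
    return -SINE_TABLE[64 - index]

# two cycles, arguments 0..511, as in the plot above
values = [isin(a) for a in range(512)]
```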
Bottom line: if you think in binary, so that your counters always represent power-of-two fractions of a second, everything keeps working after they wrap around.
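Applied to the original question about t, the same idea suggests accumulating phase rather than time in a fixed-width integer and letting it wrap. The sketch below is one way to do that; it is my construction (the rates are arbitrary), not the questioner's code:

```python
FRAC_BITS = 32
MASK = (1 << FRAC_BITS) - 1          # 32-bit wraparound mask

def make_nco(freq_hz, sample_rate):
    """Numerically-controlled oscillator phase as a fraction of one cycle."""
    step = round(freq_hz / sample_rate * (1 << FRAC_BITS)) & MASK
    phase = 0
    def next_phase():
        nonlocal phase
        phase = (phase + step) & MASK    # wraparound is harmless by design
        return phase
    return next_phase

nco = make_nco(1900.0, 9600)             # example rates only
index = nco() >> (FRAC_BITS - 8)         # top 8 bits index a 256-step circle
```

Because one full cycle corresponds exactly to the counter's modulus, the wrap at 2^32 lands on the same phase the arithmetic expects, so nothing ever overflows in a harmful way.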
I have to say that, when I first wrote programs, the activity was much more like writing for a microcontroller than it is now — everything was integer math with severely limited numerical ranges. And one needed to constantly think in binary.
Phase-locked loop (Wikipedia)
Digital Biquad Filter (Wikipedia)
BiQuadDesigner (a biquadratic filter design tool)
JWX, a weatherfax receiver/demodulator
JSigGen, an online audio signal generator
Comparisons between different techniques for measuring mass segregation
Parker, RJ and Goodwin, SP (2015) Comparisons between different techniques for measuring mass segregation. Monthly Notices of the Royal Astronomical Society, 449 (4). pp. 3381-3392. ISSN 0035-8711
Publisher URL: http://dx.doi.org/10.1093/mnras/stv539
We examine the performance of four different methods which are used to measure mass segregation in star-forming regions: the radial variation of the mass function $\mathcal{M}_{\rm MF}$; the minimum spanning tree-based $\Lambda_{\rm MSR}$ method; the local surface density $\Sigma_{\rm LDR}$ method; and the $\Omega_{\rm GSR}$ technique, which isolates groups of stars and determines whether the most massive star in each group is more centrally concentrated than the average star. All four methods have been proposed in the literature as techniques for quantifying mass segregation, yet they routinely produce contradictory results as they do not all measure the same thing. We apply each method to synthetic star-forming regions to determine when and why they have shortcomings. When a star-forming region is smooth and centrally concentrated, all four methods correctly identify mass segregation when it is present. However, if the region is spatially substructured, the $\Omega_{\rm GSR}$ method fails because it arbitrarily defines groups in the hierarchical distribution, and usually discards positional information for many of the most massive stars in the region. We also show that the $\Lambda_{\rm MSR}$ and $\Sigma_{\rm LDR}$ methods can sometimes produce apparently contradictory results, because they use different definitions of mass segregation. We conclude that only $\Lambda_{\rm MSR}$ measures mass segregation in the classical sense (without the need for defining the centre of the region), although $\Sigma_{\rm LDR}$ does place limits on the amount of previous dynamical evolution in a star-forming region.
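As a concrete illustration of one of these techniques, here is a small sketch (my construction, not the authors' code) of the $\Lambda_{\rm MSR}$ idea: compare the minimum spanning tree (MST) length of the $N$ most massive stars against that of randomly drawn $N$-star subsets:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_length(xy):
    """Total edge length of the Euclidean MST of the points in xy."""
    return minimum_spanning_tree(squareform(pdist(xy))).sum()

def lambda_msr(xy, masses, n=10, trials=500, seed=0):
    """<MST length of random n-subsets> / MST length of the n most massive
    stars; values significantly above 1 indicate mass segregation."""
    rng = np.random.default_rng(seed)
    l_massive = mst_length(xy[np.argsort(masses)[-n:]])
    l_random = np.mean([mst_length(xy[rng.choice(len(xy), n, replace=False)])
                        for _ in range(trials)])
    return l_random / l_massive
```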
This is a pre-copyedited, author-produced PDF of an article accepted for publication in Monthly Notices of the Royal Astronomical Society following peer review. The version of record, MNRAS (June 01, 2015) 449 (4): 3381-3392, is available online at: http://dx.doi.org/10.1093/mnras/stv539
astro-ph.GA; astro-ph.SR
Astrophysics Research Institute
https://researchonline.ljmu.ac.uk/id/eprint/766
February 2011, 5(1): 93-108. doi: 10.3934/amc.2011.5.93
Codes from incidence matrices and line graphs of Paley graphs
Dina Ghinelli (1) and Jennifer D. Key (2)
(1) Dipartimento di Matematica, Università di Roma 'La Sapienza', I-00185 Rome, Italy
(2) School of Mathematical Sciences, University of KwaZulu-Natal, Pietermaritzburg 3209, South Africa
Received September 2010; published February 2011
We examine the $p$-ary codes from incidence matrices of Paley graphs $P(q)$, where $q \equiv 1 \pmod 4$ is a prime power, and show that the codes are $[\frac{q(q-1)}{4}, q-1, \frac{q-1}{2}]_2$ or $[\frac{q(q-1)}{4}, q, \frac{q-1}{2}]_p$ for $p$ odd. By finding PD-sets we show that for $q > 9$ the $p$-ary codes, for any $p$, can be used for permutation decoding for full error-correction. The binary code from the line graph of $P(q)$ is shown to be the same as the binary code from an incidence matrix for $P(q)$.
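As a quick illustration of the underlying graph (a sketch of mine, not from the paper; for prime powers one would work over $\mathrm{GF}(q)$ rather than the integers mod $q$):

```python
def paley_graph(q):
    """Edge set of the Paley graph P(q) for a prime q with q % 4 == 1:
    vertices 0..q-1, with i ~ j iff i - j is a nonzero square mod q."""
    squares = {(x * x) % q for x in range(1, q)}
    return {(i, j) for i in range(q) for j in range(i + 1, q)
            if (i - j) % q in squares}

edges = paley_graph(13)
assert len(edges) == 13 * 12 // 4   # q(q-1)/4 edges, the code length above
```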
Keywords: codes, permutation decoding, Paley graphs.
Mathematics Subject Classification: Primary: 05C45, 05B05; Secondary: 94B0.
Citation: Dina Ghinelli, Jennifer D. Key. Codes from incidence matrices and line graphs of Paley graphs. Advances in Mathematics of Communications, 2011, 5 (1) : 93-108. doi: 10.3934/amc.2011.5.93
Impact of community-based health insurance on health services utilisation among vulnerable households in Amhara region, Ethiopia
Essa Chanie Mussa (1,2), Tia Palermo (3), Gustavo Angeles (4), Martha Kibur (5), Frank Otchere (1) & the Amhara ISNP Evaluation Team
Ethiopia piloted community-based health insurance in 2011, and as of 2019, the programme was operating in 770 districts nationwide, covering approximately 7 million households. Enrolment in participating districts reached 50%, holding promise to achieve the goal of Universal Health Coverage in the country. Despite the government's efforts to expand community-based health insurance to all districts, evidence is lacking on how enrolment in the programme nudges health seeking behaviour among the most vulnerable rural households. This study aims to examine the effect of community-based health insurance enrolment among the most vulnerable and extremely poor households participating in Ethiopia's Productive Safety Net Programme on the utilisation of healthcare services in the Amhara region.
Data for this study came from Amhara pilot integrated safety net programme baseline survey in Ethiopia and were collected between December 2018 and February 2019 from 5,398 households. We used propensity score matching method to estimate the impacts of enrolment in community-based health insurance on outpatient, maternal, and child preventive and curative healthcare services utilisation.
Results show that membership in community-based health insurance increases the probabilities of visiting health facilities for curative care in the past month by 8.2 percentage points (95% CI 5.3 to 11.1), seeking care from a health professional by 8.4 percentage points (95% CI 5.5 to 11.3), and visiting a health facility to seek any medical assistance for illness and check-ups in the past 12 months by 13.9 percentage points (95% CI 10.5 to 17.4). Insurance also increases the annual household per capita health facility visits by 0.84 (95% CI 0.64 to 1.04). However, we find no significant effects of community-based health insurance membership on utilisation of maternal and child healthcare services.
Findings that community-based health insurance increased outpatient services utilisation implies that it could also contribute towards universal health coverage and health equity in rural and informal sectors. The absence of significant effects on maternal and child healthcare services may be due to the free availability of such services for everyone at the public health facilities, regardless of insurance membership. Outpatient services use among insured households is still not universal, and understanding of the barriers to use, including supply-side constraints, will help improve universal health coverage.
United Nations (UN) member countries have committed to achieving universal health care coverage (UHC) by 2030 under the Sustainable Development Goals (SDGs). This target, under SDG 3 (Ensure healthy lives and promote wellbeing for all at all ages), is motivated by recognition of the need for and access to quality essential healthcare services, medicines, and vaccines for all people and to facilitate financial risk protection [1]. However, past studies show gaps towards this goal. For example, in 2013, the median proportion of births attended by a skilled health worker across 75 low and middle-income countries (LMICs) was only 62% [2] and, in 2013 about 400 million people globally lacked access to at least one of the essential healthcare services including family planning, receiving at least four antenatal care (ANC) visits, receiving 3 doses of diphtheria, tetanus, and pertussis (DTP) vaccine, antiretroviral therapy, tuberculosis treatment and children sleeping under insecticide-treated bed nets (ITBNs) [3]. Further, during 2014–2020, only 41% and 52% of children in western and central Africa and eastern and southern Africa, respectively, with symptoms of acute respiratory infection were taken to a health facility [4]. To address these and related gaps, many low-income countries are giving increasing attention to health insurance in efforts to increase health care utilisation and the attainment of UHC [5,6,7]. In this regard, Community-Based Health Insurance (CBHI) has been used as one of the important tools to expand access to healthcare services by the poorest and most vulnerable groups [8, 9].
As part of the country's efforts to strengthen the supply and increase the demand for health services, Ethiopia implemented a series of health sector development plans in the past two decades, including the last five-year Health Sector Development Programme (HSDP) (2010/11 – 2014/15) [10] and the first Health Sector Transformation Plan (HSTP I) (2015/16—2019/20) [11]. The government of Ethiopia also offers all the services delivered at the health posts as well as maternal (including family planning, antenatal care, delivery, and postnatal care) and child-related health services such as child immunizations delivered at all public health facilities free of charge regardless of socio-economic status [12, 13]. The country also invested heavily in expanding health facilities and development of health professionals to deliver quality healthcare services [14, 15]. The government of Ethiopia also provides a healthcare fee waiver for about 2 million individuals (approximately 10% of the population living below the national poverty line) annually to get healthcare services at no cost [11]. Partly to gradually and systematically replace the healthcare fee waiver scheme [16], the country has also been implementing CBHI since 2011 to further expand access to essential health care services and increase health seeking behaviour of individuals while protecting households against catastrophic health expenditures. Accordingly, some improvements have been registered. For example, the proportion of fully immunised under-one children increased from 24% in 2011 to 39% in 2016 [17] and delivery at a health facility increased from 26% in 2016 to 48% in 2019 [18]. Despite these achievements, critical gaps remain. To mention a few, in 2016, only 31% of under-five children with symptoms of acute respiratory illness (ARI) and 35% of children with a fever sought services from a health care facility or provider [17]. Further, the most recent demographic and health survey (DHS) also reported that only 43% of women had at least four ANC visits during their previous pregnancy and 34% of women received post-natal care (PNC) within two days period [18]. In addition, although the per capita outpatient department visits increased from 0.3 in 2013/14 [14] to 0.9 in 2019, it is still far below the WHO recommendation of 2.5 per capita annual visits [19].
Against the backdrop of the government's commitments but substantial gaps in some key health outcome indicators, this study examines whether CBHI increases health services utilisation among some of the most vulnerable rural households who participate in the government poverty-targeted social protection programme - the Productive Safety Net Programme (PSNP). Previous studies from LMICs on the impacts of CBHI on health services utilisation find mixed evidence. Various studies show that CBHI enrolment is linked to increased preventive healthcare utilisation such as sleeping under insecticide-treated bed nets or vaccination for children [20, 21], utilisation of outpatients health care [22,23,24,25,26], use of some maternal health care services [27], and better self-reported health and higher perceived quality of services [23,24,25]. In contrast to this, some other studies reported insignificant effects of CBHI membership on the utilisation of inpatient health services [20, 24, 26].
In Ethiopia in particular, studies to date have been conducted on the impacts of CBHI on health service utilisation using data collected during CBHI's pilot phase [24, 25] and using small-scale household surveys [28, 29]. Past studies generally show that CBHI enrolment is likely to increase healthcare utilisation and decrease costs per visit, and is associated with better self-reported health and higher perceived quality of services [23,24,25]. More specifically, Demissie and Negeri [28] find that membership in CBHI is associated with a three-fold increase in the utilisation of outpatient healthcare services in southern Ethiopia. Elsewhere in Ethiopia, Tilahun et al. [29] find that membership in CBHI increases healthcare utilisation by approximately 25.2 percentage points. A study by Atnafu et al. [29] similarly shows that households enrolled in CBHI were more likely to use healthcare services than households that were not enrolled. Mebratie et al. [25] find that utilisation of outpatient services in public health facilities increases by 30–41% and the frequency of visits by 45–64% due to membership in CBHI. Similarly, Shigute et al. [30] reported that CBHI raises the probability of using modern healthcare services (visiting a modern health care facility for outpatient care services) and the number of visits to modern health facilities among adult members in the aggregated sample. They find a larger impact of CBHI on outpatient health services utilisation for the PSNP sub-sample compared to the pooled sample. Utilisation of child curative care services for an illness in the past 4 weeks also increased due to enrolment in CBHI [31].
Existing studies in Ethiopia have focused on the general population, regardless of households' participation in other social protection programmes such as Ethiopia's flagship social protection program, the PSNP, while in the current study, we focus particularly on PSNP-participating households. Thus, CBHI enrolled households in our study could be members of two large-scale social protection programmes: CBHI and PSNP, thereby giving important policy insights on how membership in CBHI and PSNP affects households' health seeking behaviour. In this regard, we aim to contribute to the scant literature by focusing entirely on PSNP beneficiary households. From a policy perspective, our study gives new evidence to better understand how the integration of social assistance programmes can affect the utilisation of health services among the most vulnerable population. Further, while previous studies examined limited healthcare utilisation indicators, mainly outpatient visits and inpatient services, our study considered several healthcare services, categorized under outpatient, maternal, and child preventive and curative services.
In 2011, the government of Ethiopia piloted CBHI in 13 rural districts (covering about 1.6 million people) targeting rural households and people working in the informal sector. This was scaled up to 161 districts after three years of piloting [32]. As of 2019, the programme covered 7 million households residing in 770 districts throughout the country (i.e., 75% district coverage nationwide). CBHI is currently operating in all regions and Addis Ababa except Somali, Gambella, and Dire Dawa. In the programme districts, 50% of eligible households are currently enrolled and the programme has an 82% renewal rate [33]. Nevertheless, the national level enrolment is still below the target set by the government: 80% of household enrolment and 80% coverage of districts by 2020 [11].
Enrolment in CBHI is voluntary. The programme uses the core principles of risk-sharing, community-based decision-making, and community support. Enrolment is conducted at the household level, and all rural households in the district, excluding those formally employed, can join the programme.
The CBHI is a yearly contractual agreement with advance premium payments by the members, and all renewals and new member registrations are conducted for a period of up to three months every year. Currently, the programme has two member types – self-paying and indigent members. The regional and district governments jointly fund the enrolment premiums of indigent households such as the permanent direct support (PDS) clients in the productive safety net programme. For paying members, annual premiums are set based on household sizes. In 2019, the premiums were ETB 240 (USD 8.6) for 1 to 5 member households, ETB 290 (USD 10.4) for 6–7 member households, and ETB 340 (USD 12.2) for households with 8 or more members.
The benefit package of CBHI programme includes all outpatient and inpatient services available in health centres, treatment for cancer, dialysis and organ transplant for renal failure, treatment of major trauma, intensive care unit, hip and knee replacement, and major burns [32]. Services sought at primary, general, and referral public hospitals are also covered following appropriate referral procedures [34]. All services must be sought from public healthcare facilities with contractual agreements with the district CBHI office. CBHI does not cover costs related to tooth implantation, eyeglasses for ophthalmic cases, cosmetic procedures [32], aesthetic surgery, infertility treatment, and organ transplants (except renal, heart, and bone marrow) [34].
Ethiopia's government also enacted its flagship poverty-targeted social protection programme, the rural Productive Safety Net Programme (PSNP), in 2005. About 85% of the programme beneficiaries are required to work on labour-intensive Public Works (PW) for payments, while the other 15%, the Permanent Direct Support (PDS) clients who lack the labour to participate in public works, receive unconditional cash and/or food transfers [35]. To integrate various social protection programmes, the government endorsed its National Social Protection Policy (NSPP) in 2014 and launched the National Social Protection Strategy (NSPS) in 2016. However, Hirvonen et al. [36] found limited linkages between these large-scale social protection programmes. The Integrated Safety Net Programme (ISNP) is designed to address this gap. This pilot project, with technical support from the United Nations Children's Fund (UNICEF) Ethiopia country office (ECO), aimed to reinforce the linkages between the PSNP and CBHI and leverage the impacts of PSNP to reduce poverty and improve the multidimensional well-being of PSNP-participating households. The efforts to integrate the social protection programmes also assume that increasing coverage of CBHI among PSNP-participating households will increase their health services utilisation and improve health outcomes. The ISNP was launched in 2019 [37].
Study setting
This study used cross-sectional data from the ISNP impact evaluation baseline survey in Amhara region, Ethiopia [38]. The ISNP evaluation is being carried out in 4 rural districts of Amhara region, namely, Libo Kemkem and Dewa Chefa as treatment districts and Ebinat and Artuma Fursi as comparison districts. Households in treatment districts receive additional ('plus') interventions on top of PSNP cash transfers, including facilitation of CBHI enrolment, nutrition information through behavioural change communication (BCC) sessions, and case management through social workers and community care coalitions, while those in comparison districts do not get these plus components. While the treatment districts were selected purposively based on the availability of CBHI in the district, UNICEF ECO nutrition interventions and linkages to other UNICEF interventions, and districts' accessibility and practicality for UNICEF ECO support, comparison districts were selected based on their similarities with treatment districts in socio-demographic, health service supply, programme organization, culture/ethnicity, and ecological characteristics. Thus, the treatment and their respective comparison districts are geographically close and similar culturally and economically. The trial was registered on November 5, 2018, in the Pan African Clinical Trial Registry with trial registration ID PACTR201902876946874. More information about the overall ISNP evaluation and interventions can be found in the online Additional file 1: Appendix 1. However, in the current study, we do not examine programme impacts of the ISNP; rather, we use the baseline data to examine the effects of CBHI on health services utilisation.
Study design and participants
The ISNP evaluation employed a mixed-method study approach. However, this study used the quantitative data generated through household, community, and health facility surveys. Households eligible for the survey include all PSNP-participating rural households in the four districts. The sample size was determined using power calculation based on estimates of baseline means and the expected impacts of indicators. The indicators included individuals' health services utilisations during the last month, visiting or consultation of a health service provider in the last 4 months, enrolment in CBHI, child nutrition and preventive health indicators, and mothers receiving antenatal care from a skilled provider during the last pregnancy. For each indicator, the sample size was calculated to detect a desired change of delta (δ) with minimum power of 80% under the assumption of simple random sampling and zero non-response rate. Accordingly, a target sample size of 5,400 households was decided, of which 5,398 were interviewed.
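As an illustration of this kind of calculation, the sketch below implements a generic two-sample comparison of proportions under simple random sampling; the baseline values shown are hypothetical, not the indicator means actually used for the survey:

```python
import math
from statistics import NormalDist

def n_per_arm(p0, delta, alpha=0.05, power=0.80):
    """Households per arm to detect a change from p0 to p0 + delta with a
    two-sided test at level alpha, assuming simple random sampling and
    zero non-response, as described above."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p1 = p0 + delta
    pbar = (p0 + p1) / 2
    n = ((z_a * math.sqrt(2 * pbar * (1 - pbar))
          + z_b * math.sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2) / delta ** 2
    return math.ceil(n)

n_per_arm(0.50, 0.10)   # detecting a 10-point rise from 50% needs ~388 per arm
```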
The household questionnaire was designed to capture a broad range of information both at the individual and household levels such as demographics, educational attainment, health status and utilisation, PSNP participation, asset ownership, food security, and dwelling characteristics. Questionnaire items were drawn from previously implemented questionnaires and validated measures, including from the Transfer Project and other surveys implemented in Ethiopia and Eastern Africa (see Online Additional file 1: Appendix 6 for details about the variables) [39]. Some sections draw directly from other standard surveys such as the Multiple Indicator Cluster Survey (MICS), and instruments were tested in Ethiopia during piloting of the questionnaire at data collection trainings and then adapted as needed. A proxy female respondent from each household (priority was given to the main woman of the household or caregivers of children) was interviewed. Enumerators used electronic tablets installed with programmed survey (Survey Solutions) tools to input data and interviews were administered face-to-face in local languages (Amharic in Libo Kemkem and Ebinat districts and Afan Oromo in Dewa Chefa and Artuma Fursi districts). Baseline data collection was conducted between December 2018 and February 2019.
For the community surveys (one per kebele (village) – the lowest administrative level in Ethiopia), community leaders and knowledgeable individuals in each sector were interviewed. Health care workers or facility administrators were interviewed for the health facility surveys on the facility characteristics/ infrastructure, personnel, and supplies. Data were also collected from official logbooks in all government health care facilities in study communities.
The CBHI enrolment is the treatment variable. It is defined as holding a currently valid or renewed CBHI card, which is determined at the household level (i.e., once a household enrols all members of that household are automatically enrolled, except for additional fees required for adult children). Households were coded 1 if they were currently enrolled in CBHI and 0 if they were not enrolled.
Outcomes of interest included primary preventive health services: whether a child received all vaccinations (BCG, three doses of polio vaccine, three doses of pentavalent vaccine, and measles); whether the mother received at least four ANC visits and any PNC visits in the past 12 months; children sleeping under long-lasting ITBNs; delivery at a health facility; births attended by skilled professionals; and children given deworming in the past 6 months. Child curative services considered in the study included health facility visits to seek treatment for child illness in the last month and any health facility visits for children in the last 12 months. We also considered outpatient services used by members, including any facility visits for curative care for illness in the past month and whether they sought curative care from health professionals. Data were collected on members' facility visits to seek medical assistance for illness and check-ups in the past 12 months and, if yes, the number of visits to a health facility for illness by all members of the household in the past 12 months. We excluded behaviours related to seeking medications over the counter and alternative care services from our analyses. Since CBHI enrolment is at the household level, we aggregated all outcomes at the household level. Accordingly, for outcomes observed at the individual level (adult members and under-five children), we consider the household as a service user if at least one member utilised the service.
Covariate selection for the propensity score matching analysis was guided by the principles that: 1) omission of important variables could seriously increase bias in estimates [40, 41], 2) only those variables that simultaneously influence participation decision and the outcome should be included [42], and 3) selected covariates should not be affected by participation decision, that is variables should either be time-invariant or measured before participation took place [42]. Accordingly, we used previous studies, economic theory, and study context to select covariates. Household head-related factors included sex, age, current marital status, disability, and literacy status. The household-level factors were wealth status, number of household members by age, access to improved water during winter (the dry season in Ethiopia), whether the household worried about food in the last 4 weeks, number of food insecurity months in the last 12 months, having outstanding debt, drought in the last 12 months, total annual income received from PSNP, number of ill household members in the last month, and indices on perceptions and understandings about CBHI generated using Factor Analysis (see Online Additional file 1: Appendix 6 for details). The study also controlled for community and health facility-related characteristics including distance from the village to the nearest health centre (kilometres), distance from the village to the nearest health facility with a doctor (kilometres), whether the nearest health facility admits people covered with CBHI, number of years the village has been in PSNP, and village distance from district capital (kilometres). Estimations also included district fixed effects.
Ethical considerations
The study was approved by the Amhara Public Health Institute (APHI) Research Ethics Review Committee (Reference Number HRTT—03/192/2018). Enumerators received instruction during data collection training about ethical data collection, informed consent, and referral services and procedures. Informed consent was obtained from all survey participants to use their anonymised information.
This study does not involve patients. An inception workshop was conducted to select treatment districts using several social and economic indicators. Findings from the baseline data collection were disseminated in a consultative workshop conducted in August 2019 with the Amhara region and district administrators, Amhara Public Health Institute (APHI) experts, UNICEF Ethiopia staff, and stakeholders from district Bureau of Health (BoH), Bureau of Labour and Social Affairs (BoLSA) and Bureau of Women and Children Affairs (BoWCA), and district CBHI and PSNP coordinators.
Data processing and analysis
We first describe the characteristics of the target population by applying the sampling weights in the descriptive analyses. Individual-level data were aggregated at the household level. All data processing and analyses were conducted using STATA software version 15.1.
To examine the impacts of CBHI enrolment on utilisation of healthcare services, we used propensity score matching (PSM) [43, 44], to account for selection into CBHI based on observable covariates, and then estimate the effect of enrolment on outpatient, maternal, and child healthcare services utilisation.
PSM allows us to construct a comparison group comprising PSNP-participating households that did not join the CBHI programme (non-treated) but that have the same probability of participating in CBHI as their enrolled counterparts (treated), based on observable characteristics. The attainment of PSM's fundamental assumptions (the conditional independence assumption (CIA), or unconfoundedness, and common support) is key to reducing bias arising from observed differences between groups. Accordingly, for the CIA to be met, the factors associated with CBHI enrolment among PSNP households and those affecting the outcomes related to CBHI must be observed, i.e., selection is based solely on observable characteristics. Further, the common support or overlap assumption requires that households with the same characteristics (X) have a positive probability of being in both arms, with a probability of participation strictly between 0 and 1, such that $0 < P(T = 1 | X) < 1$ [42, 45].
We first calculate the average treatment effect (ATE) at the population level, the difference in expected outcomes between the treated and non-treated groups, $E[Y_i^T - Y_i^C]$ [46]. Next, following Smith and Todd [44] and Caliendo and Kopeinig [42], and given the above assumptions, we estimate the average treatment effect on the treated (ATT) as follows.
$$\mathrm{ATT}_{\mathrm{PSM}} = E_{P(X)\mid T=1}\left\{ E\left[Y^{T}\mid T=1, P(X)\right] - E\left[Y^{C}\mid T=0, P(X)\right] \right\}$$
where ATT is the average treatment effect on the treated for outcome Y (mean difference in outcomes between groups over the common support weighted by propensity scores), and T and C denote CBHI enrolled and non-enrolled households. P(X) is the probability of CBHI enrolment given the set of observable covariates X. Both ATE and ATT are calculated using Stata's Treatment Effects command.
In PSM analyses, we employed a nearest neighbour algorithm with replacement (described in more detail in Additional file 1: Appendix 2). We further performed a sensitivity analysis to examine the robustness of estimates to hidden bias, described in more detail in Additional file 1: Appendix 2.
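To make the estimator concrete, the sketch below shows one-nearest-neighbour propensity score matching with replacement in Python; it is an illustration under simplifying assumptions (logistic propensity model, matching on the score alone, no calliper), not the Stata procedure that produced the paper's estimates, and the column names are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def att_nn_psm(df, x_cols, treat="cbhi", outcome="y"):
    """ATT via 1-nearest-neighbour propensity score matching with replacement.
    df: pandas DataFrame with binary treatment, outcome, and covariates."""
    # 1. estimate propensity scores P(T = 1 | X) with a logistic model
    model = LogisticRegression(max_iter=1000).fit(df[x_cols], df[treat])
    pscore = model.predict_proba(df[x_cols])[:, 1]

    mask = (df[treat] == 1).to_numpy()
    treated_y = df.loc[mask, outcome].to_numpy()
    control_y = df.loc[~mask, outcome].to_numpy()

    # 2. match each treated household to the closest control by score
    gaps = np.abs(pscore[mask][:, None] - pscore[~mask][None, :])
    nearest = gaps.argmin(axis=1)      # with replacement: controls may repeat

    # 3. ATT = mean outcome difference over matched pairs
    return (treated_y - control_y[nearest]).mean()
```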
Characteristics of sample households by CBHI enrolment status
Table 1 presents weighted descriptive statistics of covariates by CBHI enrolment status. Households enrolled with CBHI constituted 64.5% of our sample (n = 5,398). The average composition of households was as follows: 2.24 adults (aged 15–64 years), 1.89 children (aged 0–14 years), and 0.331 elderly members (aged 65 and above). The data also show that 11.9% of household heads were literate, 53.6% were currently married, and 44% were female, with an average age of about 53 years. We also find that 53.3% of households get water from improved sources during winter; one in every five households never worried about food in the last 4 weeks, but households reported experiencing about 3 months of food insecurity in the past 12 months. Further, 17.7% and 17.5% of households have outstanding debts and experienced drought/shortage of rainfall in the last 12 months, respectively. Households have generally high expectations about the role of CBHI in making healthcare more affordable and easier to seek. In addition, while households have appropriate information on how CBHI works in some key respects, we find that only one-third and close to two-thirds know that CBHI-enrolled households need to pay some costs in advance and that CBHI covers medical costs related to pregnancy, respectively. Regarding community characteristics, the study communities are located on average 8 km from the nearest health centre and 24 km from the nearest health facility with a doctor, and almost half of all the villages are located between 21 and 40 km from the district capital.
Table 1 Sample characteristics for the pooled and insurance-disaggregated households
Bivariate tests show that insured and non-insured households were not statistically different concerning their food insecurity experience in the last 4 weeks, the number of food insecurity months in the last 12 months, knowledge if the premium is not repaid if no medical services were sought, distance to the nearest health centre and the nearest health facility with a doctor, whether the nearest health facility admits people covered with CBHI, and the number of years the villages have been in PSNP.
Insured and non-insured groups were significantly different concerning household size by age, head characteristics (literacy, disability, current marital status, sex, and age), and household profiles (sources of water during winter, having outstanding loans, shock in the past 12 months, income from PSNP, number of ill members in the past month, and perceptions about CBHI benefits). Households in both arms also differ in their understanding of whether CBHI covers medical costs related to pregnancy, whether CBHI fully covers certain drugs or surgery, whether enrolled members should not pay part of the cost if services are covered by CBHI, and whether insured households need to pay some costs in advance.
Description of outcome variables
We present descriptive statistics of outcome variables by insurance status in Table 2. Members in insured households were more likely to use outpatient healthcare services (sought care for illness last month, members sought care from health professionals, health facility visits for any medical assistance in the past 12 months, and the total number of health facility visits by all members in the past 12 months) compared to those in non-insured households.
Table 2 Health services utilisation characteristics by CBHI enrolment
On the other hand, we find no significant differences between the two groups related to institutional delivery, whether the birth was assisted by a skilled provider, and whether the mother received four or more ANC visits from a skilled provider during the most recent pregnancy. However, we find that mothers of under-five children in insured households were more likely to receive ANC from a skilled provider. Further, children from CBHI-enrolled households were more likely to be taken to a health facility for PNC in the past 12 months, to have received curative care for illness in the last month, to have received deworming in the last 6 months, to have slept under an insecticide-treated bed net the previous night, to have received all vaccinations, and to have been taken to a health facility in the last 12 months for any health care services.
Impacts of CBHI enrolment on healthcare services utilisation
Detailed information on the matching algorithm is presented in online Additional file 1: Appendix 2, and predictors of CBHI enrolment in Appendix 3. Information on the quality of the propensity score matching, including propensity score and covariate balancing, is presented in Appendix 4, and the sensitivity analysis and robustness checks in Appendix 5.
Table 3 presents the results on the treatment effects. The average treatment effects (ATE) show that CBHI enrolment was associated with an increase in the probability of household members visiting health facilities for curative care in the last month by 8.2 percentage points. Enrolment in CBHI also leads to an increase in the probability of seeking care from a health professional in the last month by 8.4 percentage points. Looking at outpatient health services utilisation in the past 12 months, we observe that the probability that members visited a health facility to seek any medical assistance for illness or check-ups in the past 12 months increases by 13.9 percentage points and the number of health facility visits per household increases by 0.84 as a result of CBHI enrolment. There were no impacts of CBHI enrolment on antenatal care, postnatal care, skilled delivery, and child preventative or curative care services.
Table 3 Treatment effects of CBHI enrolment on health service utilisation
The average treatment effect on the treated (ATT) estimates further strengthen the findings from the ATE estimates, indicating that CBHI has resulted in greater utilisation of outpatient healthcare services among insured households. As expected, ATT estimates are larger than ATE estimates, except for health facility visits in the past 12 months, where the ATT is slightly smaller than the ATE. Among insured households, CBHI enrolment increased the likelihood of seeking healthcare services for illness in the previous month by 9.1 percentage points, of seeking healthcare from a professional care provider by 9.1 percentage points, and of members visiting health facilities over the past 12 months by 13.8 percentage points. Furthermore, regarding the intensity of healthcare visits, we find that CBHI enrolment among insured households increased the mean number of health facility visits per household by 0.93. Also, we find no significant impacts of CBHI participation on ANC, institutional delivery, PNC, or child preventive and curative health services utilisation among treated households (Table 3).
In our sensitivity analyses (Additional file 1: Appendix 5), we do not find that CBHI enrolment is determined by unobservable characteristics. Moreover, in a robustness check of our PSM findings using a more restricted calliper width and increasing the number of untreated subjects to be matched, we show that the findings described above are robust (Additional file 1: Appendix 5).
Statement of principal findings
Household enrolment in CBHI was 64.5%. The study finds that enrolment in CBHI increases the likelihood of visiting health facilities for curative care in the past month by 8.2 percentage points, of seeking care from a health professional by 8.4 percentage points, and of visiting a health facility for any medical assistance for illness or check-ups in the past 12 months by 13.9 percentage points, and raises the number of health facility visits per household by 0.84. However, we did not find statistically significant impacts of CBHI enrolment on antenatal care, skilled delivery, postnatal care, or child preventive and curative services.
Strengths and weaknesses of the study
The study adds insights into the effects of CBHI enrolment on health services utilisation patterns among extremely poor and most vulnerable households targeted by Ethiopia's cash transfer programme (the PSNP). In addition, the study also provides early and cross-sectional evidence on the potential effects of integrating the two largest social protection programmes on health service utilisations among extremely poor households.
The study has some weaknesses worth mentioning. We used cross-sectional data generated from only one region of the country (out of ten regions and two city administrations) and exclusively from PSNP-participating households. Thus, findings cannot be generalized to the whole country, which also includes non-PSNP and better-off households. In addition, estimates may be biased by self-selection into voluntary CBHI enrolment based on unobservable characteristics and by omitted variables. We also lack information on the temporal ordering of CBHI enrolment and the outcome variables. These issues could have been better addressed using longitudinal data. Further, the study assumes that CBHI benefit packages and refunding requirements and procedures, which may influence enrolment decisions and service utilisation, are similar across districts.
Comparison with other studies and possible explanations
Our finding that CBHI enrolment led to greater outpatient services use is supported by previous studies in Ethiopia on the general population, including Shigute et al. [30], Mebratie et al. [24], Mebratie et al. [25], and Tilahun et al. [29]. Shigute et al. [30] analysed the effects of CBHI alone and combined with PSNP on modern healthcare utilisation for outpatient care and on modern health facility visits in Ethiopia, using three rounds of individual-level panel data. In line with our study findings, they reported that CBHI raises the probability of using modern healthcare services (visiting a modern health care facility for outpatient care services) by 2.3 percentage points and the number of visits to modern health facilities by 0.07 among adult members in the pooled sample. For the PSNP-only sub-sample, they find that CBHI increases utilisation of modern outpatient healthcare services by 4 percentage points. Although this is larger than in the pooled sample of adult members, both estimates are far smaller than the impacts in our study. In contrast to our study, they find no significant impact of CBHI enrolment on the number of modern health facility visits among the PSNP-participating sub-sample. Another study, by Mebratie et al. [25], showed that enrolment in the pilot CBHI scheme increased utilisation of outpatient healthcare services at public health facilities by 30–41 percentage points and the number of public health facility visits by 0.05–0.07 over the past 2 months.
Some important differences between our study and these two studies may have resulted in the variations in the estimated impacts. First, the two existing studies used a more general population (among which a sub-sample of households were participating in PSNP), while our sample is comprised entirely of PSNP-participating households. Second, while data used for both previous studies came from the CBHI pilot phase, our data came from a recent large-scale baseline survey conducted in rural Amhara after CBHI had been scaled up nationally. Thus, our findings have greater generalizability concerning CBHI impacts among vulnerable groups targeted by the PSNP. Third, the recall periods differ: we asked about the use of outpatient services for illness by any household member over the previous month, but used a 12-month window for the total number of health facility visits by all household members for outpatient care. The earlier studies used a two-month recall, which may miss some of the facility visits made over a year, although it is less prone to recall bias. However, both studies used panel data while our study relied on cross-sectional data. Tilahun et al. [29], using cross-sectional data from one district in Amhara region, also find that membership in mutual health insurance increases the likelihood of using healthcare by 25.2 percentage points. However, it is not clear whether the study households were also participating in PSNP.
Our null findings related to maternal and child healthcare services utilisation and health insurance enrolment are consistent with studies from other settings, including a null finding for receiving four or more ANC visits in Rwanda [27]. On the other hand, Fernandes et al. [47] find that insured women were less likely to use skilled birth attendance during delivery in Jordan. However, our results contrast with maternal and child health-related findings reported elsewhere: health insurance increases the likelihood of receiving at least four ANC visits in Jordan [47] and in Ghana and Indonesia [27], and increased the probability of health facility-based delivery in Ghana, Rwanda, and Indonesia [27], Tanzania [48], and Egypt [49]. In Ethiopia, Atnafu and Gebremedhin [31] find that the CBHI programme has a positive effect on the use of curative healthcare services for children in households where at least one child experienced illness in the past 4 weeks. We posit that the non-significant effects of CBHI enrolment on maternal and child healthcare services could be due to the free provision of such services at public health facilities in the country. These facilities are also the sole health service providers for CBHI-insured households. This means that both insured and non-insured households have equal access to all maternal and child healthcare services at public health facilities. In addition to the free availability of several maternal and child healthcare services at the health posts, information and sensitization efforts such as the behavioural change communication (BCC) sessions targeting all PSNP households may have resulted in better awareness and knowledge about the importance and availability of maternal and child preventive and curative services in nearby health posts among insured and non-insured households alike. Health posts are the first point of contact with public health facilities for households in rural villages.
Possible mechanisms
The study also provides explanations of two potential causal pathways between enrolment in CBHI and outpatient health services utilisation. With no appropriate financial mechanisms, healthcare seeking in poorly functioning health systems is associated with a risk of catastrophic expenditures [50]. Borde et al. [51] find that in Ethiopia the average direct out-of-pocket healthcare expenditures were USD 32 per month, the average indirect out-of-pocket healthcare expenditures were USD 15 per month and the average catastrophic healthcare expenditure at 10% of threshold was 40%. Accordingly, consistent with past related studies [52,53,54,55], our first hypothesised pathway is that CBHI may have reduced the high out-of-pocket health spending, thereby encouraging utilisation of healthcare services among the PSNP-participating households. In this regard, an evaluation of the pilot CBHI in Ethiopia also finds that 37% of CBHI members joined the programme to primarily reduce out-of-pocket expenditure when seeking health care, and 35% joined CBHI to seek healthcare more frequently [56].
We also hypothesised that CBHI's role to empower women could be another pathway linking CBHI enrolment and enhanced utilisation of outpatient healthcare services. In Ethiopia, men are considered to be the primary breadwinners and have the decision-making power in all household financial matters including spending on healthcare. This means that in uninsured households and whenever service seekers have to pay service fees upfront, some members, including women, may not be able to get or delay getting the treatment due to a lack of financial autonomy. In this regard, earlier evidence showed that CBHI empowered women – enabled them to seek essential health care whenever needed without requesting money and permission from male heads of household [57]. In support of this evidence, a recent study by Messner et al. [58] also finds that women in CBHI insured households are more likely to seek treatment for themselves or their children without financial support from a male head. For example, one of the study respondents in Messner et al. [58] study stated that:
"Unfortunately majority of us, women, don't have income of our own. We rely on our husband's money in order to pay for the medical bill. But if we have this card, we don't have to ask our husbands for money whenever we are sick. In addition, our husbands may not be at home when we get sick. Hence, having this card will allow us to go to the health centre without waiting on our husbands." — Married woman, age 24–45, Tigray.
While there could be more mechanisms, more rigorous studies are needed to fully understand the causal mechanisms between CBHI enrolment and improved outpatient health service utilisations.
Policy implications
The government of Ethiopia implemented several policy measures to enhance households' protection against financial risks in accessing essential health services and to improve health service utilisation. Community-based health insurance is one of these measures. The current health sector transformation plan aims to accelerate the progress towards full coverage of essential health services and protecting people from financial hardship, including those in currently underserved populations [59]. Achieving UHC entails the achievement of all components of UHC (availability of all essential health services at each service delivery with an acceptable level of quality, effective coverage of essential health services, and ensuring financial risk protection) to all population subgroups. Our findings suggest that enrolment in CBHI is one of the promising strategies towards UHC and plays a vital role to help vulnerable and PSNP-participating households to access some of the available essential outpatient services. However, further evidence is still needed in other dimensions of UHC such as on the availability of all essential health services at all public health facilities, mainly at primary health facilities, quality of care, and individual level coverage. Moreover, since enrolment into Ethiopia's CBHI is voluntary, the poorest non-PSNP households may still be excluded from the programme. A recent study in one of the study districts also finds that extremely poor and most vulnerable households to extreme poverty who are not receiving conditional cash transfers are less likely to join CBHI [60]. To place CBHI better as a tool towards UHC, the government may implement more measures such as universal eligibility for insurance with a substantial premium subsidy and the universal individual-level exemption for some vulnerable groups such as pregnant women from paying premiums [61].
Our study also provides some evidence on the role of CBHI in ensuring equity in healthcare in the informal sector. Although equity should be examined at both the CBHI enrolment and service utilisation stages [62], the improved utilisation of outpatient healthcare services by insured vulnerable households suggests that CBHI also contributes partly towards equity in healthcare among some of the most vulnerable groups in the country. Past studies suggest that enrolment among poor and marginalised households can also be enhanced through improving roads and public transport systems [62] and through premium subsidies and fee waivers [63, 64].
Given that the per capita health facility visit rate in Ethiopia (0.9 visits annually in 2019) has so far been far below the World Health Organization (WHO) recommended level of 2.5 visits per capita per year [65], the significant effect of CBHI enrolment on health facility visits indicates that the programme has promising potential to help Ethiopia reach the WHO-recommended level. More importantly, the significant impact of CBHI enrolment on health facility visits among the most vulnerable households underscores the critical role of CBHI in ensuring health equity in the country and in leaving no one behind.
Future research areas
Existing literature shows that health insurance programmes, including CBHI, are prone to moral hazard, which occurs when enrolment in health insurance is followed by increased healthcare consumption and reduced preventive measures [66,67,68,69]. However, this is not always the case, since some additional utilisation reflects preferred and needed healthcare [70]; future studies may therefore investigate whether this problem exists among the insured in Ethiopia's CBHI programme. Past studies in related settings argued that, given low availability and utilisation of healthcare services and high unmet demand, improvements in health service utilisation among such populations may reflect the impact of health insurance rather than moral hazard [21]. Future studies may also explore whether adverse selection exists in CBHI enrolment, which in turn could affect health service utilisation decisions.
Access to information about CBHI benefit packages (entitlements) and the health services available at different facilities could also influence CBHI enrolment decisions as well as health service utilisation. Future studies may investigate how exposure to various information and awareness sessions, such as behavioural change communications (BCCs) and information campaigns, affects health service utilisation by PSNP-participating households. Other potential areas of investigation include how perceived or actual institutional arrangements, such as refunding and referral systems between health facilities, affect health service utilisation. Evidence on individual-level health service utilisation among insured households can also shed light on intra-household gender and power dynamics in the use of specific health services under the CBHI card. Finally, future studies may also explore the pro-poorness of CBHI among PSNP-participating households in terms of their health outcomes.
This study evaluated the impacts of enrolment in CBHI on the utilisation of outpatient, maternal, and child preventive and curative healthcare services among the most vulnerable rural households in Ethiopia. Approximately two-thirds of the sample households are insured under CBHI. We find that enrolment in CBHI was positively associated with greater use of outpatient healthcare services, including visiting health facilities for curative care in the past month, seeking care from a health professional, visiting a health facility for any medical assistance for illness and check-ups in the past 12 months, and the number of health facility visits per household. However, the study finds no significant impacts of CBHI membership on maternal and child healthcare services. The study provides insights into the role of CBHI among safety net programme beneficiaries in achieving UHC and health equity and increasing per capita annual health facility visits. The evidence can contribute to policymaking aimed at integrating the two largest social protection programmes in the country (CBHI and PSNP) and mitigating the adverse impacts of multidimensional poverty.
The datasets generated and/or analysed during the current study are not publicly available because the data are part of an ongoing study that is not yet completed. They are expected to be made available no sooner than one year after the final impact evaluation report is published, pending UNICEF and Government approval, and may then be obtained from the corresponding author on reasonable request.
APHI: Amhara Public Health Institute
ANC: Antenatal Care
ARI: Acute Respiratory Illness
ATE: Average Treatment Effect
ATT: Average Treatment Effect on the Treated
BCC: Behavioural Change Communications
BoH: Bureau of Health
BoLSA: Bureau of Labour and Social Affairs
BoWCA: Bureau of Women and Children Affairs
CBHI: Community-Based Health Insurance
CIA: Conditional Independence Assumption
DHS: Demographic and Health Survey
DTP: Diphtheria, Tetanus and Pertussis
ECO: Ethiopia Country Office
ETB: Ethiopian Birr
HSTP: Health Sector Transformation Plan
HSDP: Health Sector Development Programme
ISNP: Integrated Safety Net Programme
ITBN: Insecticide-Treated Bednet
LMIC: Lower- and Middle-Income Countries
MICS: Multiple Indicator Cluster Survey
MoARD: Ministry of Agriculture and Rural Development
MoLSA: Ministry of Labour and Social Affairs
NSPP: National Social Protection Policy
NSPS: National Social Protection Strategy
ORT: Oral Rehydration Therapy
PDS: Permanent Direct Support
PHC: Primary Health Care
PNC: Postnatal Care
PSM: Propensity Score Matching
PSNP: Productive Safety Net Programme
SDGs: Sustainable Development Goals
SPAP: Social Protection Action Plan
UN: United Nations
UNICEF: United Nations Children's Fund
United Nations. The sustainable development goals report 2016. New York: United Nations, Department of Economic and Social Affairs; 2016.
WHO, UNICEF. Accountability for maternal, newborn & child survival: the 2013 update. Geneva: World Health Organization and UNICEF; 2013.
World Health Organization, World Bank. Tracking universal health coverage: first global monitoring report. Geneva: World Health Organization; 2015.
UNICEF. Pneumonia: UNICEF global databases, 2021. 2021. Available from: https://data.unicef.org/topic/child-health/pneumonia/.
Lavers T. Towards universal health coverage in Ethiopia's 'developmental state'? The political drivers of health insurance. Soc Sci Med. 2019;228:60–7.
Tao W, Zeng Z, Dang H, Li P, Chuong L, Yue D, et al. Towards universal health coverage: achievements and challenges of 10 years of healthcare reform in China. BMJ Glob Health. 2020;5(3):e002087.
Wagstaff A, Neelsen S. A comprehensive assessment of universal health coverage in 111 countries: a retrospective observational study. Lancet Glob Health. 2020;8(1):e39-49.
Donfouet HPP, Mahieu PA. Community-based health insurance and social capital: a review. Health Econ Rev. 2012;2(1):5.
Thomas TK. Role of health insurance in enabling universal health coverage in India: a critical review. Health Serv Manage Res. 2016;29(4):99–106.
MoH. Health Sector Development Programme IV (HSDP IV): 2010/11–2014/15. Addis Ababa: Federal Democratic Republic of Ethiopia, Ministry of Health; 2010.
MoH. Health Sector Transformation Plan (HSTP): 2015/16 - 2019/20 (2008–2012 EFY). Addis Ababa: Ethiopia Federal Ministry of Health; 2015.
MoH. National Reproductive Health Strategy, 2016–2020. Addis Ababa: Federal Democratic Republic of Ethiopia, Ministry of Health; 2016.
MoH. Reproductive, maternal, neonatal and child health program overview and pharmaceuticals management training for pharmacy professionals: participant manual. Addis Ababa: Federal Democratic Republic of Ethiopia, Ministry of Health; 2018.
Alebachew A, Yusuf Y, Mann C, Berman P, FMOH. Ethiopia's progress in health financing and the contribution of the 1998 health care and financing strategy in Ethiopia. Resource tracking and management project. Boston and Addis Ababa: Harvard T.H. Chan School of Public Health; Breakthrough International Consultancy, PLC; and Ethiopian Federal Ministry of Health; 2015.
MoH. National Human Resources for Health Strategic Plan for Ethiopia 2016–2025. Addis Ababa: Federal Democratic Republic of Ethiopia, Ministry of Health; 2016.
Endale K, Pick A, Woldehanna T. Financing social protection in Ethiopia: a long-term perspective. OECD Development Policy Papers 15. OECD Publishing; 2019.
Central Statistical Agency (CSA) [Ethiopia], ICF. Ethiopia demographic and Health Survey 2016. Addis Ababa and Rockville: CSA and ICF; 2016.
Ethiopian Public Health Institute (EPHI) [Ethiopia] and ICF. Ethiopia Mini Demographic and Health Survey 2019: final report. Rockville: EPHI and ICF; 2021.
MoH. Essential health services package of Ethiopia. Addis Ababa: Federal Democratic Republic of Ethiopia, Ministry of Health; 2019.
Guindon GE. The impact of health insurance on health services utilization and health outcomes in Vietnam. HEPL. 2014;9(4):359–82.
Nshakira-Rukundo E, Mussa EC, Nshakira N, Gerber N, von Braun J. Impact of community-based health insurance on utilisation of preventive health services in rural Uganda: a propensity score matching approach. Int J Health Econ Manag. 2021;21(2):203–27.
Alkenbrack S, Lindelow M. The impact of community-based health insurance on utilization and out-of-pocket expenditures in Lao People's Democratic Republic: impact evaluation of health insurance. Health Econ. 2015;24(4):379–99.
Atnafu DD, Tilahun H, Alemu YM. Community-based health insurance and healthcare service utilisation, North-West, Ethiopia: a comparative, cross-sectional study. BMJ Open. 2018;8(8):e019613.
Mebratie AD, Sparrow R, Yilma Z, Abebaw D, Alemu G, Bedi A. Impact of Ethiopian pilot community-based health insurance scheme on health-care utilisation: a household panel data analysis. Lancet. 2013;381:S92.
Mebratie AD, Sparrow R, Yilma Z, Abebaw D, Alemu G, Bedi AS. The impact of Ethiopia's pilot community based health insurance scheme on healthcare utilization and cost of care. Soc Sci Med. 2019;220:112–9.
Thuong NTT, Huy TQ, Tai DA, Kien TN. Impact of health insurance on health care utilisation and out-of-pocket health expenditure in Vietnam. Biomed Res Int. 2020;26(2020):1–16.
Wang W, Temsah G, Mallick L. The impact of health insurance on maternal health care utilization: evidence from Ghana, Indonesia and Rwanda. Health Policy Plan. 2017;32(3):366–75.
Demissie B, Negeri K. Effect of community-based health insurance on utilization of outpatient health care services in Southern Ethiopia: a comparative cross-sectional study. RMHP. 2020;13:141–53.
Tilahun H, Atnafu DD, Asrade G, Minyihun A, Alemu YM. Factors for healthcare utilization and effect of mutual health insurance on healthcare utilization in rural communities of South Achefer Woreda, North West, Ethiopia. Health Econ Rev. 2018;8(1):15.
Shigute Z, Strupat C, Burchi F, Alemu G, Bedi AS. Linking social protection schemes: the joint effects of a public works and a health insurance program in Ethiopia. 2019.
Atnafu A, Gebremedhin T. Community-based health insurance enrollment and child health service utilization in Northwest Ethiopia: a cross-sectional case comparison study. CEOR. 2020;12:435–44.
Ethiopian Health Insurance Agency. Evaluation of community-based health insurance pilot schemes in Ethiopia: final report. Addis Ababa: Federal Democratic Republic of Ethiopia, Ethiopian Health Insurance Agency; 2015.
Ethiopian Health Insurance Agency. CBHI members' registration and contribution, 2011–2020 GC. Addis Ababa: Ethiopian Health Insurance Agency; 2020.
Yilma Z, Mebratie AD, Sparrow R, Dekker M, Alemu G, Bedi AS. Health risk and insurance: impact of Ethiopia's community based health insurance on household economic welfare. World Bank Econ Rev. 2015;29:S164–73.
Hoddinott J, Seyoum Taffesse A. Social protection in Ethiopia. In: Cheru F, Cramer C, Oqubay A, editors. The Oxford Handbook of the Ethiopian Economy. Oxford University Press; 2019. p. 411–27. Available from: http://oxfordhandbooks.com/view/10.1093/oxfordhb/9780198814986.001.0001/oxfordhb-9780198814986-e-22. Cited 2020 Jul 23.
Hirvonen K, Bossuyt A, Pigois R. Evidence from the productive safety net programme in Ethiopia: complementarities between social protection and health policies. Dev Policy Rev. 2021;39(4):532–47.
Mussa EC, Otchere F, Vinci V, Reshad A, Palermo T. Linking poverty-targeted social protection and community based health insurance in Ethiopia: enrolment, linkages, and gaps. Soc Sci Med. 2021;286:114312.
ISNP Evaluation Team. Impact evaluation of the integrated safety net programme in the Amhara Region of Ethiopia. final baseline report. Florence: UNICEF Office of Research – Innocenti; 2020.
The Transfer Project. Transfer project instruments. Chapel Hill: Carolina Population Center; 2022. Available from: https://transfer.cpc.unc.edu/instruments/.
Dehejia RH, Wahba S. Causal effects in nonexperimental studies: reevaluating the evaluation of training programs. J Am Stat Assoc. 1999;94(448):1053–62.
Heckman JJ, Ichimura H, Todd PE. Matching as an econometric evaluation estimator: evidence from evaluating a job training programme. Rev Econ Stud. 1997;64(4):605–54.
Caliendo M, Kopeinig S. Some practical guidance for the implementation of propensity score matching. J Econ Surv. 2008;22(1):31–72.
Abadie A, Imbens GW. Matching on the estimated propensity score. Econometrica. 2016;84(2):781–807.
Smith J, Todd P. Does matching overcome LaLonde's critique of nonexperimental estimators? J Econom. 2005;125(1–2):305–53.
Heinrich C, Maffioli A, Vazquez G. A primer for applying propensity-score matching. Inter-American Development Bank; 2010. Available from: https://publications.iadb.org/handle/11319/1681?scope=123456789/11&thumbnail=false&rpp=5&page=1&group_by=none&etal=0&filtertype_0=author&filter_0=Heinrich%252C+Carolyn&filter_relational_operator_0=equals. Cited 2017 Jun 23.
Imbens GW. Nonparametric estimation of average treatment effects under exogeneity: a review. Rev Econ Stat. 2004;86(1):4–29.
Fernandes P, Odusina EK, Ahinkorah BO, Kota K, Yaya S. Health insurance coverage and maternal healthcare services utilization in Jordan: evidence from the 2017–18 Jordan demographic and health survey. Arch Public Health. 2021;79(1):81.
Kibusi SM, Sunguya BF, Kimunai E, Hines CS. Health insurance is important in improving maternal health service utilization in Tanzania—analysis of the 2011/2012 Tanzania HIV/AIDS and malaria indicator survey. BMC Health Serv Res. 2018;18(1):112.
Rashad AS, Sharaf MF, Mansour EI. Does public health insurance increase maternal health care utilization in Egypt? J Int Dev. 2019;31(6):516–20.
Kruk ME, Gage AD, Arsenault C, Jordan K, Leslie HH, Roder-DeWan S, et al. High-quality health systems in the sustainable development goals era: time for a revolution. Lancet Glob Health. 2018;6(11):e1196–252.
Borde MT, Kabthymer RH, Shaka MF, Abate SM. The burden of household out-of-pocket healthcare expenditures in Ethiopia: a systematic review and meta-analysis. Int J Equity Health. 2022;21(1):14.
Wang Y, Jiang Y, Li Y, Wang X, Ma C, Ma S. Health insurance utilization and its impact: observations from the middle-aged and elderly in China. PLoS ONE. 2013;8(12):e80978.
Woldemichael A, Gurara D, Shimeles A. The impact of community based health insurance schemes on out-of-pocket healthcare spending: evidence from Rwanda. International Monetary Fund; 2019. IMF Working Paper No.: WP/19/38.
Khan JAM, Ahmed S, Sultana M, Sarker AR, Chakrovorty S, Rahman MH, et al. The effect of a community-based health insurance on the out-of-pocket payments for utilizing medically trained providers in Bangladesh. Int Health. 2020;12(4):287–98.
Shimeles A. Community based health insurance schemes in Africa: the case of Rwanda. Tunis: African Development Bank (AfDB); 2010. Working Papers Series: 120.
EHIA. Evaluation of community-based health insurance pilot schemes in Ethiopia: final report. Addis Ababa: Federal Democratic Republic of Ethiopia, Ethiopian Health Insurance Agency; 2015.
USAID. How Ethiopia is empowering women through community-based health insurance. 2016.
Messner L, Tadesse HA, Godbole-Chaudhuri P, Smith D, Santillán D. Women's economic empowerment and community-based health insurance: lessons from Ethiopia. Rockville: EnCompass, LLC; 2019. p. 5.
MoH. Health Sector Transformation Plan II (HSTP II): 2020/21–2024/25 (2013 EFY–2017 EFY). Addis Ababa: Federal Democratic Republic of Ethiopia, Ministry of Health; 2021.
Mussa EC, Agegnehu D, Nshakira-Rukundo E. Impact of conditional cash transfers on enrolment in community-based health insurance among female-headed households in south Gondar zone, Amhara region, Ethiopia. SSM Population Health. 2022;17:101030.
Watson J, Yazbeck AS, Hartel L. Making health insurance pro-poor: lessons from 20 developing countries. Health Syst Reform. 2021;7(2):e1917092.
Parmar D, De Allegri M, Savadogo G, Sauerborn R. Do community-based health insurance schemes fulfil the promise of equity? A study from Burkina Faso. Health Policy Plan. 2014;29(1):76–84.
Palermo TM, Valli E, Ángeles-Tagliaferro G, de Milliano M, Adamba C, Spadafora TR, et al. Impact evaluation of a social protection programme paired with fee waivers on enrolment in Ghana's National Health Insurance Scheme. BMJ Open. 2019;9(11):e028726.
Zaidi S, Saligram P, Ahmed S, Sonderp E, Sheikh K. Expanding access to healthcare in South Asia. BMJ. 2017;357:j1645.
MoH. Ethiopia health accounts 2016/17. Addis Ababa: Federal Ministry of Health (MOH); 2019.
Ahuja R, Jütting J. Design of incentives in community based health insurance schemes. New Delhi: Indian Council for Research on International Economic Relations; 2003. Working Paper No.: 95.
Sapelli C, Vial B. Self-selection and moral hazard in Chilean health insurance. J Health Econ. 2003;22(3):459–76.
Ayana ID. Investigation of Moral Hazard Deportments in community-based health insurance in Guto Gida District, Western Ethiopia: a qualitative study. CEOR. 2020;12:733–46.
Yilma Z, van Kempen L, de Hoop T. A perverse 'net' effect? Health insurance and ex-ante moral hazard in Ghana. Soc Sci Med. 2012;75(1):138–47.
Grignon M. Access and Health Insurance. In: Encyclopedia of Health Economics. Elsevier; 2014. p. 13–8. Available from: https://linkinghub.elsevier.com/retrieve/pii/B9780123756787009238. Cited 2022 May 6.
Liu T, Tsang W, Xie Y, Tian K, Huang F, Chen Y, et al. Preferences for artificial intelligence clinicians before and during the COVID-19 pandemic: discrete choice experiment and propensity score matching study. J Med Internet Res. 2021;23(3):e26997.
Austin PC. Statistical criteria for selecting the optimal number of untreated subjects matched to each treated subject when using many-to-one matching on the propensity score. Am J Epidemiol. 2010;172(9):1092–7.
Austin PC. Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity-score matched samples. Stat Med. 2009;28(25):3083–107.
Austin PC. Optimal caliper widths for propensity-score matching when estimating differences in means and differences in proportions in observational studies. Pharmaceut Statist. 2011;10(2):150–61.
Becker SO, Caliendo M. Sensitivity analysis for average treatment effects. Stata J. 2007;7(1):71–83.
Austin PC. Some methods of propensity-score matching had superior performance to others: results of an empirical investigation and Monte Carlo simulations. Biom J. 2009;51(1):171–84.
Ali MS, Prieto-Alhambra D, Lopes LC, Ramos D, Bispo N, Ichihara MY, et al. Propensity score methods in health technology assessment: principles, extended applications, and recent advances. Front Pharmacol. 2019;10:973.
Ali MS, Groenwold RH, Klungel OH. Best (but oft-forgotten) practices: propensity score methods in clinical nutrition research. Am J Clin Nutr. 2016;104(2):247–58.
Rubin DB. Using propensity scores to help design observational studies: application to the tobacco litigation. Health Serv Outcomes Res Method. 2001;2(3):169–88.
The evaluation team would like to acknowledge the support of Martin Romero Martinez in power and sample size calculations. Our sincere appreciation goes to the supervisors and interviewers and, most importantly, the households for their cooperation and generous time in responding to the survey.
Gustavo Angeles1,4, Maja Gavrilovic1, Essa Chanie Mussa1,2, Frank Otchere1,*, Elsa Valli1, Jennifer Waidler1, Tia Palermo3,*, Sarah Quiñones3, Ana Gabriela Guerrero Serdan5, Vincenzo Vinci5, Lisa-Marie Ouedraogo5, Getachew Berhanu Kebede5, Getinet Tadele6,*, Sewareg Adamu6, Teketel Abebe6, Yenenesh Tadesse6, Feredu Nega6, Mesay Kebede6, Fekadu Muluye6, Alene Matsentu6 & Daniel Aklilu6
1UNICEF Office of Research—Innocenti, Via degli Alfani 58, 50121 Florence, Italy
2Department of Agricultural Economics, University of Gondar, Gondar, Ethiopia
3Department of Epidemiology and Environmental Health, University at Buffalo (State University of New York), Buffalo, NY 14214-8001 USA
4Department of Maternal and Child Health, UNC Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC. 27516 USA
5UNICEF Country Office of Ethiopia, Box 1169 Addis Ababa, Ethiopia
6Frontieri (Formerly BDS Centre for Development Research), Namibia St., Addis Ababa, Ethiopia
* co-Principal Investigator.
Funding for this study has been provided by the Swedish International Development Cooperation Agency (Sida). The funding body had no role in the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript.
UNICEF Office of Research—Innocenti, Via Degli Alfani 58, Florence, 50121, Italy
Essa Chanie Mussa & Frank Otchere
Department of Agricultural Economics, University of Gondar, Gondar, Ethiopia
Essa Chanie Mussa
Department of Epidemiology and Environmental Health, University at Buffalo (State University of New York), Buffalo, NY, 14214-8001, USA
Tia Palermo
Department of Maternal and Child Health, UNC Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27516, USA
Gustavo Angeles
UNICEF Country Office of Ethiopia, Box 1169, Addis Ababa, Ethiopia
Martha Kibur
ECM, TP, and FO conceptualized the topic for this manuscript and were responsible for the research design. ECM, TP, GA, and FO planned and conducted the statistical analysis. ECM and TP wrote the first draft of the manuscript. ECM, TP, GA, MK, and FO contributed to the interpretation of analyses and revised the manuscript. Members of the ISNP evaluation team further contributed to the study design and data collection. All authors read and approved the final manuscript.
Correspondence to Essa Chanie Mussa.
The study was approved by the Amhara Public Health Institute (APHI) Research Ethics Review Committee (Reference Number HRTT—03/192/2018). The study was registered on 19/11/2018 in the Registry for International Development Impact Evaluations (RIDIE) with study registration ID RIDIE-STUDY-ID-5bf27eb0404a0. Informed consent was obtained from all survey participants aged 18 years and above for interviews and for the use of their anonymised information. For children below the age of 18 years, we obtained the consent of their parents or legal guardians to conduct interviews with them. All methods were performed in accordance with the relevant guidelines and regulations.
Additional file 1: Appendix 1. Impact evaluation design details. Appendix 2. Methods: details on matching algorithm and sensitivity analysis. Appendix 3. Predictors of enrolment into CBHI. Appendix 4. Propensity score matching (PSM) results. Appendix 5. Sensitivity analysis and robustness check. Appendix 6. Variable types and measurements [71,72,73,74,75,76,77,78,79].
Mussa, E.C., Palermo, T., Angeles, G. et al. Impact of community-based health insurance on health services utilisation among vulnerable households in Amhara region, Ethiopia. BMC Health Serv Res 23, 55 (2023). https://doi.org/10.1186/s12913-023-09024-3
Intergenerational effects on individual charitable donation: an innovative study on philanthropy in China
Yongjiao Yang (ORCID: orcid.org/0000-0001-6638-4439)1,
Yuting Shi2 &
Dong Zhang3
The Journal of Chinese Sociology volume 7, Article number: 22 (2020)
The family, as the basic unit of Chinese society, serves as the micro foundation of individual charitable behavior. This study examines the intergenerational effects on individual charitable donations in China in light of China's unique social structure, traditional culture, and philanthropic history. The study identifies the mutual influence of children's and parents' charitable donations through both downward and upward intergenerational transmission. The effect of upward transmission is stronger than that of downward inheritance, especially among families with children born in the 1980s and 1990s. The findings reflect the "family-oriented" culture of Chinese society and highlight the necessity and urgency of developing a charitable donation theory rooted in Chinese experiences.
For a theory of Chinese charitable donation
Charitable donation refers to voluntarily and gratuitously donating tangible or intangible assets to charitable organizations or beneficiaries. Chinese and international sociologists have conducted decades of research on individual charitable donation, exploring its connotation, occurrence, types, participation, and sustainability. Donations are influenced by both internal and external factors. Internal factors refer mainly to donation motivations arising from needs, including self-interest, altruism, and reciprocity. Studies of self-interest motivation focus primarily on obtaining wealth and social reputation; studies of altruistic motivation focus on altruistic obligation and social responsibility; and studies of reciprocal motivation proceed from the perspective that both the donor and the beneficiary benefit from donation behavior (Vesterlund 2006).
External factors are manifested in many aspects, including institutional, organizational, and environmental factors. From the institutional perspective, a tax exemption policy promotes Pareto improvement, advances the total utility of donation, and plays an incentive role (Hochman and Rodgers 1977). From the organizational perspective, charitable donation is encouraged by factors such as the capability and credibility of charity organizations and high-quality charitable activities and projects (Zhao 2011). From the social environment perspective, charitable donation is embedded in the social network and is significantly affected by social capital, mobilization, and peer pressure (Meer 2011; Hu and Shen 2013; Bi et al. 2010). Moreover, individual characteristics, such as socioeconomic status, gender, and education level, are frequently discussed as factors for donation in academia (Bénabou and Tirole 2006; Bekkers and De Graaf 2006; Andreoni and Vesterlund 2001).
However, the role and function of the family in individual charitable donation are rarely discussed. This research orientation is related to the basic structure of Western society. According to Fei (1998), in the "group pattern" (tuanti geju) of modern Western society, the relationships among people are similar to "a bundle of firewood"; in each group, even if there are differences between "subgroups" in terms of grouping and hierarchy, the relationships between the internal members are equal, and the boundaries between units in the group structure are clear (Ma 2007). This characteristic of the social structure and its operating rules are closely related to religion (Christianity). In Western society, religion and God are the symbols of groups. Two vital concepts are derived from these symbols: each individual is equal before God, and God is fair to each individual. The relationship between groups and individuals symbolizes the relationship between God and believers, in which God is a judge with rewards and punishments, a guard of justice, and an omnipotent protector (Fei 1998). As one of the group units, the family is no more important than other group units, such as the state or the church. According to Liang (2005), when Western society turned from Christianity to large group life, the family became less important, and the clan became more divisive. The cultural tradition of philanthropy that originated and developed in this social structure reflects the deep-rooted concepts of individual orientation, mutual-aid culture, and religion-oriented philanthropy (Han and Zheng 2014). As a result, Western academic research on charitable behavior often focuses on individual characteristics, charitable organizations, and religious impacts and seldom adopts the family perspective.
In China, research on individual charitable donation began only recently, and most relevant studies apply theories developed from the Western experience. According to Nan Lin (1986), a common problem prevails in Chinese academia: the "unreflective transplantation" of Western theories to analyze the Chinese experience. Scholars do not substantially adjust the Western research framework to the particularities of Chinese contexts and seldom develop new theoretical explanations based on the differences between the Chinese experience and Western theories. From the perspective of epistemology or methodology, the question of "the validity of knowledge in crossing contexts" (Wang 2017:18) inevitably arises when the concepts and perspectives of Western sociological theories are used to examine reality in the Chinese social context. To overcome this "non-acclimatization" between the theoretical perspective and the research object or application context, we must adopt a "localization of sociology" approach. In this study, the Chinese social structure and cultural characteristics, which differ from those of Western society, must be considered when analyzing the main factors or driving forces of Chinese people's individual charitable behavior in order to develop a theory of Chinese charitable donation. Through examining the influence of the institutional environment on individual giving in China, studies have identified the "Chinese characteristics" of philanthropy (Bi et al. 2010). By exploring individual charitable donations from the family perspective, this study further promotes the construction and development of a philanthropy theory grounded in the Chinese experience.
"Familism" and philanthropy in China
The social structure significantly influences people's behavior. In the study of individual charitable donation, the social structure in which the behavior is embedded cannot be ignored. The formation of the Chinese social structure has been substantially influenced by traditional Confucian culture. The kernel of Confucian ethics is to regard the family as the perfect embodiment of social relations and the responsibility to the family as superior to all other responsibilities, such as responsibilities and obligations to the emperor, the God(s), or any other secular or religious authority (Fukuyama 2001). This emphasis on "familism" is reflected in the following: first, the premise for individuals to enter social life is participation in the moral life of the family, which in turn forms their moral standards; second, individuals' significance and value in social life must be realized through their contributions to the family (Wen 2014). Scrutinizing China's familism from a historical perspective, it is reasonable to conclude that the long-lasting family institution is a basic defense system that helps individuals fight against a risky and fickle environment. The many small-scale family businesses in Chinese society also demonstrate Chinese people's dependence on their families and family members.
In line with the tradition of Confucian culture and its moral system, China's social structure has been identified as a "differential mode of association" (cha xu ge ju), which differs from the "group pattern" of Western society (Fei 1998). The connotation of the "differential mode of association" can be summarized as behavioral interaction patterns among social members being differentiated by the distance and proximity of the relationship. A network of social relations is formed sequentially by close and distant relationships centered on the individual (Zhang 2010). This social network model emphasizes the significance of consanguinity, the relativity of public-private and group-self relationships, the self-centered ethical values, and the etiquette order of "using traditional interpersonal relations and ethics to maintain social order" (Yan 2006). In contrast to Western society, which is dominated by "groups," in China the "family" (which includes the family and the clan) plays an important role next to "the individual" in the structure of the "differential mode of association" (Yu 2003; Ma 2007) and, according to various scholars, even surpasses the meaning of the individual (Chen 2011). Yan (2006) suggested that the two characters "cha" (distance) and "xu" (rank) in the "differential mode of association" reflect the "human relations" (ren lun) emphasized by Confucianism, an ethical and social order system established prior to the existence of the individual. In the practice of human relations in Confucianism, the family is the major arena. The filial piety of "father and son" in the five ethics (wu lun) constitutes the starting point of these practices (Shen 2007). Whether in the "five ethics" or the "ten ethics," the relationships between people differ according to distance and hierarchy.
Although contemporary Chinese society has undergone tremendous changes since Xiaotong Fei proposed the "differential mode of association," traditional Confucian culture still imperceptibly influences all aspects of social life. Despite 40 years of reform and opening-up and social transformation, the inertia of China's long history is too powerful to be neglected (Ma 2007). Familial metaphors, such as "within the four seas all men are brothers" and "one family in the world," once served as grand political goals in Confucianism and reflected the concept of "familism" (Shen 2007). Furthermore, although China has been committed to building the family as a "social unit" suitable for industrialized and market-oriented systems, the traditional functions of families and their influence on family members remain strong, since the social welfare systems that would support such "modern" families are not well established (Yao 2012).
The logic of "familism" has substantially influenced philanthropy in China. First, from the perspective of the intrinsic value of individual philanthropy, "benevolence" based on "familism" involves "the difference of love," which is distinct from the value of "universal love" emphasized by the Western Christian tradition (Huang 2008). Therefore, the cultural tradition of philanthropy demonstrates the characteristics of family (clan) orientation, official leadership, and the relative underdevelopment of grassroots charity (Han and Zheng 2014). Influenced by "familism," China's philanthropy is characterized by closeness; namely, the donors and the beneficiaries tend to be familiar with each other. The narrow scope of the beneficiaries is an embodiment of the acquaintance culture, which is inconsistent with the spirit of "stranger ethics" of modern philanthropy, with its characteristics of sociality, openness, and universality (Yang 2009). This substantially dampens the general public's enthusiasm for participating in charitable donation, as demonstrated by China's international ranking of charitable giving. According to the World Giving Index 2015, China's individual giving level ranks second-lowest among the 145 countries and regions surveyed; it ranks 136th on donating money and 144th on volunteering time (Charity Aid Foundation 2015).
Although "familism" in Chinese society has hindered the development of modern philanthropy, the above perspective on the role of the family in philanthropy is merely one side of the coin. The structural characteristics of Chinese society make it difficult for individuals to act free of the influence of family, and the roles of the family in the performance of charitable behavior and the cultivation of charitable culture can be multifaceted. From the perspective of the spread and diffusion of charitable behaviors and ideas, individuals' dependence on the family means that their decision-making regarding charitable behavior is constantly influenced by their families. According to Weber (1995), Chinese people's trust is a special type of trust: they only trust people with whom they have a personal relationship (kinship or quasi-kinship) rather than strangers. The family is the primary group in which members share close relationships and frequent interactions, and the behavioral impact is immediate. Therefore, owing to China's social structure, the cultivation of philanthropy depends heavily on family relationships, and charitable behavior depends substantially on the behaviors and attitudes of other family members. Compared with external stimulation, mobilization from within the family and the transmission and diffusion of charitable values inside the family may be more effective long-term strategies for promoting the development of philanthropy in China.
In China, the family has an aggregating effect on individual philanthropy, since the family can effectively pool the scattered philanthropic resources of its members and participate in philanthropy as a unit. Although in both Chinese and Western societies the family is the initial place for cultivating children's altruistic spirit and social responsibility and the "incubator" of charitable awareness and behavior, in China a charitable donation is often initiated by the family as an economic entity, and donation decisions are made primarily by the husband and the wife together (Zhu and Liu 2017). In this sense, interactions between family members regarding their philanthropic behaviors and ideas are common. Given the characteristics of China's social structure, an incentive strategy in which philanthropy is centered on the family and radiates outward would be effective. To improve the effect of this incentive strategy in China, we must better understand the mutual influence of family members' charitable behaviors in order to harness the endogenous motivation of individual charitable behavior and explore the positive social function of "familism."
Based on the above analysis, we argue that the characteristics of China's social structure and the role of the family in encouraging individual donation require more emphasis and proactive exploration when discussing individual charitable donations. The transmission of philanthropic culture among family members may have a remarkable effect on individual charitable donations. Since the reform and opening-up, the environment of philanthropy in China has been steadily improving. Examining the mechanism of the family's influence on individual philanthropy and clarifying the special significance of the family in the construction of philanthropy theory and in stimulating the internal motivation for charitable behavior in China are of substantial importance. This study discusses the influence of the family on individual charitable donations from the perspective of family relationships. There are three basic relationships in Chinese families: conjugal, sibling, and parental. Among them, the vertical parent-child relationship is the most important, representing the main aspect of family relationships and occupying a more important position than the horizontal husband-wife and sibling relationships (Pan and Lin 1992). Even with the trend toward small and nuclear families, the conjugal relationship in contemporary Chinese families still does not displace the parent-child relationship from its central position in the family (Ma et al. 2011). Therefore, this study explores the influence of intergenerational relationships on individual charitable donations.
Literature review: intergenerational effect on charitable donation
Previous research and limitations
Although the value of "familism" has a profound impact on Chinese philanthropy, the research on the impact of the family on individual philanthropy in China lags behind that in the West due to the late start of Chinese philanthropy research. Previous studies are based mainly on the Western social structure and social environment.
According to relevant studies, family philanthropy has a downward intergenerational transmission effect; namely, parents' charitable behavior, donation preferences, and participation in voluntary service positively influence their children (Lily Family School of Philanthropy 2016). Hodgkinson and Weitzman's (1992) study demonstrates that in American society, if parents participate in voluntary work, their children are twice as likely to participate in voluntary work. In addition, Wilhelm et al. (2008) used the Panel Study of Income Dynamics (PSID) data in the US to explore whether adult children inherit their parents' generosity. The study shows a strong positive correlation between parents and their adult children in religious donation participation, and a positive but weaker correlation in secular donation.
In academia, downward intergenerational transmission is typically defined as the phenomenon in which parents' abilities, concepts, behaviors, and social status are transferred to their children (Chi and Xin 2013). Intergenerational transmission embodies the intergenerational relationship, which can encompass not only the relationships between parents and children and between grandparents and grandchildren in the family but also the relationships between the old, middle-aged, and young generations in society (Deng and Xu 2001). Although these studies find that charitable behavior can be passed from generation to generation, the transmission mechanism requires further explanation. From the perspective of sociology, the downward intergenerational transmission of charitable donation can be explored via two approaches.
The first approach emphasizes that parents influence children's donation behavior through the role model effect (Eisenberg and Fabes 1998). This explanation focuses on the process of socialization in which children can learn charitable behavior through "social learning" (Moen et al. 1997). Although socialization occurs in schools, churches, and among peers, families are often regarded as the most important entity for the socialization of values (Bengtson 1975). Role models influence generosity and altruism (Mustillo et al. 2004). As role models, parents can encourage their children's prosocial behavior through verbal persuasion, other-oriented guidance, and personality approval based on empathy (Eisenberg and Fabes 1998).
The second approach emphasizes the transmission of social and economic resources related to charitable donation. According to the theory of the intergenerational transmission of social status, the motivation for charitable participation is randomly distributed, while the capacity for charitable participation is not. Parents play a critical role in providing resources for charitable participation (Featherman and Hauser 1978). Even if parents do not transfer values and beliefs to their children, they provide their children the possibility of obtaining social, cultural, and economic resources and status in a broader social structure (Moen et al. 1997). If parents transfer resources to the next generation, children are more capable of involvement in activities that require these resources, such as charitable donation (Acock 1984; Glass et al. 1986). This also increases children's opportunities to participate in charitable donation and helping others (Eisenberg 1990).
Although previous studies have explored the influence of the intergenerational family relationship on individual charitable behavior, there are two main limitations.
First, most studies on the intergenerational transmission of charitable behavior are conducted in Western countries. Relevant studies in the Chinese context occupy a marginal position, and survey data and quantitative analyses are not abundant. In Chinese society, the core position of the family and the interaction patterns among family members under the influence of Confucian culture may produce a more complex intergenerational impact on charitable donation: influenced by the Confucian concept of "benevolent father, filial son," individuals' thoughts and behaviors are substantially inherited; however, in Chinese society, charitable behavior may also reflect a reverse intergenerational effect; namely, the younger generation may profoundly influence the older generation. This pattern bears distinctly Chinese characteristics. With the transformation from a traditional society to a modern one, the renewal cycle of knowledge and technology is continually shortened, and the phenomenon of "cultural feedback" or "reverse socialization" is striking. The authority of the elders in the family has been challenged, while the position and decision-making power of children in the family have increased (Zhou 2000). As the transformation of China's modern philanthropy accelerated in the 21st century, it is the post-80s and younger generations who have been most affected; hence, the younger generation could influence the older generation, and this merits further discussion. An in-depth discussion of this phenomenon would not only help enrich the relevant theories of philanthropy research but also have substantial theoretical and practical significance for expanding studies on China's social change.
Second, previous studies focus mainly on parents' influence on the donation behavior of under-age children and neglect the impact of intergenerational transmission in adulthood. Behind this research tendency is an implicit assumption about Western family relationships, namely, that adult children enjoy relatively high degrees of independence and self-sufficiency and that interactions among family members are more an exchange than an exercise of authority. However, in Chinese families, the power structure is influenced by Confucian culture and ethics. Although the intergenerational relationship is becoming more equal and democratic with the changes of the era, the reciprocal culture of filial piety is deeply rooted in individuals' subjective consciousness. It provides individuals in a risk society with a sense of cognitive security, belonging, and purpose of existence (Shi 2016).
Moreover, in contemporary China, marriage often closely connects the two families. After the marriage of adult children, the two families have a close relationship and rely on each other in child-rearing and economic life. The 2010 census data show that China's contemporary family composition is characterized by a decreased number of nuclear families and an increased number of stem families, and the number of multi-marriage families remains stable (Wang 2013). After marriage, adult children are influenced by their parents-in-law in terms of ideas and behaviors, and the adult children influence the parents-in-law in turn. Therefore, the inheritance and reverse influence of adult children's charitable behavior in the family merits further discussion.
Cultural feedback and charitable donation
In the patterns of "inheritance" and "socialization," the direction of cultural transmission was unidirectional for most of the history of human civilization: from the parents' generation to the children's generation in the family, and from top to bottom in society. The concept of "father is the model of the son" in these two patterns is regarded as the basic rule of cultural heritage in all civilized societies (Zhou 2000). However, in modern times, especially since the "information technology revolution" at the end of the twentieth century and the beginning of the twenty-first century, the drastic transformation of human society and the rapid development of civilization have produced a fractured cultural phenomenon, namely, a "generation gap" between the two generations living under the same roof, which has even evolved into intergenerational conflict.
In contrast to the traditional, unidirectional "parent-child inheritance" mode, in modern society (especially after the Second World War), the parents' generation gradually loses its absolute power of socialization, while the children obtain unprecedented "feedback" ability (Zhou 2000). In Western society, Mead first summarized the process of "reverse socialization" from the younger generation to the older generation through the concept of "prefigurative culture" (Mead 1987). In China's transformation, especially during the 40 years of reform and opening-up, the speed and intensity of social and cultural change at the material level are unprecedented. In this era of substantial change, the young Chinese generation plays the role of a "tide player," passing knowledge and culture back to their living predecessors and thus becoming the "initiator" of the new cultural transmission mode (Zhou 2000). In response to this phenomenon, Zhou (1988:23) proposed the concept of "cultural feedback," defined as "the process of extensive cultural absorption of the older generation from the younger generation in the era of rapid cultural change."
Does the transmission of modern philanthropic culture in contemporary Chinese society reflect the characteristics of "cultural feedback"? We argue that charitable behavior can not only be "passed on" to children but also "fed back" to parents, which is often realized through "reverse socialization," the transfer of culture and ideas from the younger generation to the older generation. With society's transformation from traditional to modern, the renewal cycle of knowledge and technology has shortened, and social change has intensified. The young generation exhibits a high degree of adaptability to modern society, and their ability to accept and master the knowledge of modern society is far superior to that of their parents, who have become the recipients of education. The absolute authority of parents in traditional families has been severely challenged (Tian 2009). In the family life and cultural inheritance between parents and children, the differences in the two generations' ability to accept and adapt to new things render the phenomenon of cultural feedback more distinct (Zhou 2000). Meanwhile, under the influence of family planning and other policies, China's family structure has been miniaturized, and the downward parent-child relationship has received unprecedented attention (Yang 2011), which also increases the possibility of charitable behavior being fed back upward. Children's rights of speech and decision-making in the family have been strengthened, laying the foundation for influencing parents' charitable attitudes and behaviors.
Moreover, the intergenerational feedback of charitable behavior is affected by changes in the philanthropic environment. Since the founding of the People's Republic of China, there have been three milestones of national philanthropy: the revival of China's philanthropy, which began in 1981; the rise of philanthropy, which began in 2008; and the legalization of China's philanthropy, which began in 2016. As a virtue, charity has undergone three thousand years of inheritance and development. After the founding of the People's Republic of China, modern philanthropy was interrupted (1949–1981); after the reform and opening-up, philanthropy in China was revived (1981–1994); and after 14 years of development (1994–2008), starting in 2008, under the influence of the Olympic Games and the Wenchuan earthquake, "nationwide philanthropy" flourished, opening the path of transformation from modern philanthropy to contemporary philanthropy (Zhou and Lin 2014). In 2016, the implementation of the Charity Law symbolized China's philanthropy entering the era of legalization.
The philanthropic environment in China began to improve in the 1980s and advanced significantly in 2008. The environment for participation in philanthropy has been optimized, opportunities are increasing, and the protection of rights and interests has been strengthened. Influenced by these factors, young and capable individuals are more likely to participate in charitable donation. Among them, the post-80s and post-90s generations are the groups most positively affected and are more likely to participate in charitable donation, since the favorable philanthropic environment has helped them form an awareness of philanthropy and cultivate different attitudes toward it from childhood. Meanwhile, the post-60s and post-70s generations, which experienced the stagnation of philanthropy in China, are easily influenced in their philanthropic ideas and attitudes by their more open-minded and independent children, resulting in reverse influence.
In summary, we argue that there is an intergenerational effect in individual charitable donation. Within the family, parents and children influence each other's philanthropic consciousness and charitable behaviors through different channels (see Fig. 1). The parents' downward transmission to the children and the upward feedback from the children to the parents are intertwined, leading to a two-way impact.
Fig. 1 Diagram of the intergenerational influence on charitable donation
Therefore, to further the construction of a theory of philanthropy rooted in China, we should attach importance to the vital role of the family as a donor in promoting philanthropy in China, extend the main research object in the field from the "individual" to the "individual in the family," regard the whole family as the object of mobilization, and fully consider the characteristics and interests of the family. To address the limitations of previous studies, this study examines the two-way intergenerational effect on charitable donation in the Chinese context based on the 2014 China Labor-force Dynamic Survey, and identifies the characteristics of intergenerational inheritance and feedback of charitable donation in contemporary Chinese society.
The data used in this study are drawn from the 2014 China Labor-force Dynamic Survey (CLDS2014), conducted by Sun Yat-Sen University. The random samples of CLDS cover 29 provinces and regions (excluding Hong Kong, Macao, Taiwan, Tibet, and Hainan). The survey adopts a multistage, stratified, probability-proportionate-to-size (PPS) cluster sampling method. CLDS2014 collected a sample of 14,226 families and 23,594 individuals. The respondents to the CLDS family questionnaire were all living with their families, which satisfies the requirements of this study. In a family, members who live together are more likely to have intensive interaction and communication, and their charitable behavior is more likely to have immediate, long-term, and stable effects on other members. Family members who do not live together are less likely to affect one another, which is not conducive to observing the intergenerational effect of charitable behavior. In addition, as a type of behavior, charitable donation is more alterable than social and economic status and may be difficult to transmit among family members with limited interaction and spatial segregation. We observe only family members who are over 18 years old, owing to juveniles' limited abilities and opportunities for participating in charitable donation and their generally weak position in the power structure of the family. This facilitates the analysis of the intergenerational impact on charitable behavior and the observation of the role of intergenerational transmission in children's charitable behaviors in adulthood.
To analyze the interaction of charitable behavior between parents and children, we must first identify one-to-one parent-child relationships. Two major national surveys related to this study's topic have been conducted: the China Family Panel Studies (CFPS) and the China Labor-force Dynamic Survey (CLDS). Although the CFPS surveyed families' overall donations and the relationships between family members, it did not collect data on individual members' donations. The donation behaviors collected in the CLDS can be linked to specific family members, but the survey records only the relationships between the questionnaire respondents and their family members, so the exact relationship between any two other members cannot be read directly from the data. However, all family members' relationships can be captured indirectly through the respondents' relationships with each member. This method is not flawless, since relationships among members are difficult to identify accurately when a family is very large. This study generates a sample of parent-child intergenerational relationships in the family according to the following steps.
The first step is to regard the family questionnaire respondents as the parents and their children (including sons, daughters, daughters-in-law, and sons-in-law) as the child generation, generating the "parent-children" data (father_son_1). The second step is to locate, based on father_son_1, the spouses of the family questionnaire respondents; since the respondents' children are also the children of their spouses, the data father_son_2 is generated with the spouse as the parent and the same children as the child generation. The third step is to locate the parents (including the fathers, mothers, fathers-in-law, and mothers-in-law) of the family questionnaire respondents; the data father_son_3 is generated with the respondents as children and their parents as the parent generation. The fourth step is to locate, based on father_son_3, the IDs of the respondents' spouses and incorporate them into father_son_3; since the respondents' parents are also the parents of their spouses, the data father_son_4 is generated with the spouses as children and the same parents as the parent generation. The fifth step is to combine father_son_1, father_son_2, father_son_3, and father_son_4 into the relational data father_son. Finally, using the respondents' IDs from the individual questionnaires, the parents' IDs, the children's IDs, demographic variables, and individual-level donation data are merged. Pairs in which both the parent and the child responded to the individual questionnaire are retained, while pairs in which the child is younger than 18 years are deleted. All pair data are then combined with the family data to include relevant variables at the family level. After this process, 6120 parent-child pairs are obtained.
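As a concrete illustration of these five steps, the following is a minimal pandas sketch. All file and column names (resp_id, member_id, relation, age) are hypothetical placeholders rather than actual CLDS2014 variable names, and the snippet assumes each member_id is unique; it sketches the pairing logic, not the authors' actual processing code.

```python
import pandas as pd

# One row per family member listed by a questionnaire respondent
# (hypothetical file and column names).
members = pd.read_csv("clds2014_family_members.csv")

CHILD_RELS = {"son", "daughter", "son-in-law", "daughter-in-law"}
PARENT_RELS = {"father", "mother", "father-in-law", "mother-in-law"}

# Step 1: respondent as parent, listed children as the child generation.
fs1 = (members[members["relation"].isin(CHILD_RELS)]
       .rename(columns={"resp_id": "parent_id", "member_id": "child_id"})
       [["parent_id", "child_id"]])

# Step 2: the respondent's spouse is a parent of the same children.
spouses = members.loc[members["relation"] == "spouse", ["resp_id", "member_id"]]
fs2 = (fs1.merge(spouses, left_on="parent_id", right_on="resp_id")
       [["member_id", "child_id"]]
       .rename(columns={"member_id": "parent_id"}))

# Step 3: respondent as child of the listed parents (incl. parents-in-law).
fs3 = (members[members["relation"].isin(PARENT_RELS)]
       .rename(columns={"member_id": "parent_id", "resp_id": "child_id"})
       [["parent_id", "child_id"]])

# Step 4: the respondent's spouse is a child of the same parents.
fs4 = (fs3.merge(spouses, left_on="child_id", right_on="resp_id")
       [["parent_id", "member_id"]]
       .rename(columns={"member_id": "child_id"}))

# Step 5: pool the four files, keep adult children, drop duplicate pairs.
pairs = pd.concat([fs1, fs2, fs3, fs4], ignore_index=True).drop_duplicates()
age = members.drop_duplicates("member_id").set_index("member_id")["age"]
pairs = pairs[pairs["child_id"].map(age) >= 18]
```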
Variables and analysis method
As emphasized above, this study focuses on charitable donation. The dependent variables are "whether the parents participated in charitable donation (last year)" and "whether the children participated in charitable donation (last year)." Only children and parents who live together are considered. The parents include mothers, fathers, and parents-in-law; the children include sons, daughters, sons-in-law, and daughters-in-law.
This study also includes common factors that affect charitable donation as control variables. Previous studies show that (1) the higher an individual's income, the larger the donation resources and the more inclined the individual is to donate (Liu and Lu 2013); (2) the more educated individuals are, the more likely they are to participate in charitable donation (Bekkers and De Graaf 2006); (3) women are more inclined to participate in charitable donation (Andreoni and Vesterlund 2001); (4) age tends to be positively related to the capacity and inclination for charitable donation (Wang and Graddy 2008); (5) religious people are more interested in charitable activities (Bekkers and Wiepking 2010); (6) danwei have special effects on donation, since individuals in danwei are often mobilized or even obliged to donate (Bi et al. 2010); (7) mobilization by Party organizations is often an important factor in individual donation, and the participation rate of Party members in charitable donation is often higher (Hu and Shen 2013); (8) compared with parents of multiple children, parents of an only child are more likely to have donation resources; and (9) family characteristics, such as the family's support for, participation in, and preferences regarding charitable activities, also shape family members' donation behaviors. Therefore, the control variables in this study are (1) whether the individual is a danwei member; (2) the logarithm of annual income; (3) age; (4) gender; (5) years of education; (6) whether the individual is a religious believer; (7) membership in the Communist Party or another political party; (8) number of children; and (9) whether the individual has donated in the name of the family. Table 1 presents the descriptive statistics of these variables.
Table 1 Descriptive analysis of variables (N = 6120)
Since there is a potential reciprocal relationship between parents' donations and children's donations, this study uses a non-recursive model to analyze the intergenerational effects of donation (Paxton et al. 2016:2). Within this model, we use a two-stage binary logit procedure for estimation.
$$ y_1=\ln\left(\frac{P(y_1=1)}{1-P(y_1=1)}\right)=\beta_1 y_2+\boldsymbol{\gamma}\boldsymbol{X}+\varepsilon_1 $$

$$ y_2=\ln\left(\frac{P(y_2=1)}{1-P(y_2=1)}\right)=\beta_2 y_1+\boldsymbol{\theta}\boldsymbol{X}+\varepsilon_2 $$
In these formulae, y1 is the log odds of the parents' donation and y2 is the log odds of the children's donation, so the model is linear in the independent variables on the log-odds scale. Exponentiating a coefficient gives the multiplicative change in the odds of the dependent variable—the odds ratio. In the non-recursive model with binary dependent variables used in this study, we first run a model estimating the parents' donation y1 with the children's donation y2 as an independent variable. Then, when running the second binary logit model for the children's donation, the fitted value from the first model—the endogenous independent variable y1 (parents' donation)—is used, yielding the final parameters of the complete model (Shah et al. 2002). Figure 2 presents the non-recursive model (the error terms ε1 and ε2 are not shown). Most of the independent variables in this model are characteristics of parents and children, which are exogenous to the dependent variables. Their inclusion is based on careful consideration of relevant theories and literature, which facilitates the estimation and identification of this non-recursive model (Paxton et al. 2016:24). The statistical analysis software used in this study is R.
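Although the paper reports that the analyses were run in R, the two-stage logic can be sketched compactly in Python with statsmodels, as below. The DataFrame `pairs`, the 0/1 indicators `donate_p` (parent) and `donate_c` (child), and the control-variable names are hypothetical, and for brevity the sketch uses one shared set of controls, whereas the actual model distinguishes parents' and children's characteristics; it is a simplified reconstruction, not the authors' code.

```python
import statsmodels.api as sm

# Hypothetical control variables shared by both equations.
controls = ["danwei", "income_log", "age", "female", "edu_years",
            "religious", "party", "n_children", "family_donation"]
X = sm.add_constant(pairs[controls])

# Stage 1: parents' equation, with the child's donation on the right-hand side.
stage1 = sm.Logit(pairs["donate_p"],
                  X.assign(donate_c=pairs["donate_c"])).fit()

# Stage 2: children's equation, replacing the endogenous parents' donation
# with its fitted probability from stage 1.
donate_p_hat = stage1.predict(X.assign(donate_c=pairs["donate_c"]))
stage2 = sm.Logit(pairs["donate_c"],
                  X.assign(donate_p_hat=donate_p_hat)).fit()

print(stage1.summary())   # effect of children's donation on parents'
print(stage2.summary())   # effect of (fitted) parents' donation on children's
```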
Fig. 2 Diagram of the non-recursive model. Note: "_p" denotes parents, "_c" denotes children, "family d" denotes donation in the name of the family, "edu" denotes education, and "party m" denotes party membership
Model estimation
The non-recursive model results in Fig. 3 demonstrate that, in families with children aged 18 and above, the parent and child generations influence each other's donation participation. Moreover, the influence of children on their parents is significantly stronger than that of parents on their children. The model fits the data satisfactorily (see Table 2). Parents' donation participation is affected by Party membership, danwei membership, education level, religious beliefs, age, and gender, and it is significantly influenced by the children's donation participation (p < 0.001). Party members and danwei members are more inclined to donate, reflecting the institutional characteristics of philanthropy in China, while the children's influence on their parents' donation participation embodies the characteristics of China's social structure. Although the children's donation participation is also affected by the parents' participation, the effect is not as strong as those of Party membership, danwei membership, and education level; children's participation is further affected by religious beliefs. These factors affect both children's and parents' donations.
Fig. 3 Mutual influence of donation between parents and adult children (N = 6120). Note: "_p" denotes parents, "_c" denotes children, "family d" denotes donation in the name of the family, "edu" denotes education, and "party m" denotes party membership
Table 2 Goodness-of-fit indices of the models
For families with children aged 18–34, the influence of the children on their parents' donation behavior remains very strong and statistically significant (p < 0.001) (see Fig. 4), whereas the influence of the parents' donation participation on the children's becomes insignificant. The variables that significantly affect children's donation participation are Party membership, danwei membership, and education level. Parents' donation participation is influenced not only by the children's participation but also by Party membership, danwei membership, education level, religious beliefs, and gender. The model again fits the data satisfactorily (see Table 2). These results demonstrate that, in terms of donation participation, the influence of children on their parents is stronger than that of parents on their children, and this asymmetry is larger in families with children born after the 1980s.
Fig. 4 Mutual influence of donation between parents and children aged 18–34 (N = 5079). Note: "_p" denotes parents, "_c" denotes children, "family d" denotes donation in the name of the family, "edu" denotes education, and "party m" denotes party membership
Robustness test
Since relationship pairs within the same family may be similar, the results above might be biased. Therefore, this study constructs a new sample by randomly selecting only one relationship pair from each family with multiple pairs and re-examines the robustness of the results presented above. The sample is generated as follows (see the sketch after this paragraph). First, the sample is divided into 10 data sets according to the number of relationship pairs per family (from a minimum of one parent-child pair to a maximum of ten pairs per family). Second, the relationship pairs within each family are sorted and assigned sequence numbers 1, 2, 3, ..., n (where n is the number of pairs in the family). Third, a column of random numbers taking values 1, 2, 3, ..., n is generated in each data set. Fourth, in each data set, the cases whose sequence number equals the random number are selected to form a new sample, yielding ten new samples (families with a single relationship pair require no random selection). Fifth, the ten new samples are combined. In the final sample, each family contributes exactly one parent-child pair, and 2730 pairs are obtained.
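Under the assumption that the pair data carry a family_id column (a hypothetical name), the end result of this procedure—one randomly chosen pair per family—can be reproduced with a short pandas sketch that compresses the five bookkeeping steps above into a single shuffle-and-take operation.

```python
# Shuffle the pairs, then keep the first (i.e., a random) pair per family.
robust = (pairs.sample(frac=1, random_state=42)
               .groupby("family_id")
               .head(1))
print(len(robust))   # one parent-child pair per family
```

The fixed random_state makes the draw reproducible; rerunning with different seeds offers a simple check that the estimates do not hinge on one particular draw.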
The results of the robustness test are presented in Fig. 5. In the random sample drawn from families with multiple parent-child pairs, the parents' and children's donation participation still affect each other, and the influence of the children on the parents remains significantly stronger than that of the parents on the children, consistent with the results obtained from the overall sample. Moreover, the model fits the data satisfactorily (see Table 2). This supports the robustness of the analysis and the reliability of the conclusions.
Fig. 5 Robustness test of the mutual influence of donation between parents and adult children (N = 2730). Note: "_p" denotes parents, "_c" denotes children, "family d" denotes donation in the name of the family, "edu" denotes education, and "party m" denotes party membership
Conclusions and discussion
The reasons for individual participation in charitable donation have received substantial attention in China in recent years. However, there has been little discussion of the influence of the family on charitable behavior. The main reason is that most studies undertaken in the Western context suppose that individual charitable donation is influenced primarily by individual factors, social organizations, and religious culture, while most domestic studies remain at the primary stage of applying the Western theoretical framework to the Chinese local context. Given these limitations, this study examines the intergenerational effect on charitable donation from the perspective of the vertical family relationship, based on the social structure, traditional culture, and philanthropic environment of China.
First, this study finds that the parents' participation in charitable donation has a significant positive impact on the children's participation. As discussed above, the downward "inheritance" of charitable behavior proceeds through the process of socialization and the transfer of social and economic resources within the family. This is in line with previous research findings in the Western context and indicates that intergenerational transmission of philanthropy within the family occurs in both China and Western countries. The continuity of family charitable behavior depends partly on the elders' role modeling and their material and spiritual support. This study further shows that the impact of parents' charitable donation on children persists into adulthood: in addition to minor children learning their parents' philanthropic attitudes and behaviors in the process of socialization, adult children's charitable behavior remains influenced by their parents'. This reflects the traditional culture of "familism" in Chinese society. In contrast to the "emotion-oriented" intergenerational relationships in Western developed countries, Chinese children still depend functionally on their parents in adulthood (Shi 2016). This highlights the special significance of the family to Chinese people: the family and its members exert an important influence at every stage of individual life.
More importantly, in Chinese families, the intergenerational influence of charitable donation is bidirectional: it includes not only downward "inheritance" from the parents but also upward influence from the children. This corresponds to the phenomenon of "cultural feedback." According to Zhou (2015:110), "cultural feedback has become an unprecedented experience of reshaping intergenerational relations for almost all Chinese families since the reform and opening-up in 1978." The reform caused the whole society to change rapidly within a very short period, leading to substantial discrepancies in material and spiritual life between the generations born before and after 1978. Hence, the "subversion" of the traditional parent-child relationship in China is more thorough than in other countries, and it causes "cultural feedback" to occur on a large scale in contemporary Chinese society (Zhou 2011). The phenomenon of "intergenerational feedback" in charitable donation shows that the intergenerational relationship is an important carrier of the spread of charitable behavior in the family; the analysis of family charitable behavior cannot ignore the special intergenerational relationships of Chinese society. To effectively cultivate a family culture of philanthropy and promote social participation in philanthropic activities, it is necessary to identify the mechanism by which charitable behavior is transferred within the family and to utilize both the downward force of "inheritance" and the upward force of "feedback."
Based on Zhou's concept of "intergenerational feedback," we further compare the force of feedback with the force of inheritance in charitable donation. The results demonstrate that for charitable donation, especially in families with children born in the 1980s or 1990s, the upward influence of children is stronger than the downward influence of parents. This is a more profound reflection of Chinese characteristics. First, it is related to the development of the philanthropic environment in China, which began to improve in the 1980s. The scientific and technological revolution and the popularity of computers and mass media have rendered the modern concept of philanthropy profoundly influential among the post-1980s and post-1990s generations, while the post-1960s and post-1970s generations, who experienced the stagnation of China's philanthropy, have lost their advantage and authority in the discourse of modern philanthropy. Second, the intergenerational relationship shaped by China's family planning policy is also an important source of children's discourse power. The family planning policy increased the number of one-child families. In such families, children are so precious that the elders devote their emotion, energy, and resources to the only child and pay more attention to the offspring (Hao and Feng 1998). As a direct result, the offspring gradually replace their parents as the dominant party in intergenerational interaction (Cai 2015), which strengthens the effect of reverse socialization. Moreover, under a policy that advocates "controlling the quantity" and "improving the quality" of child rearing, children are valued as the hope of the family, while the elderly are often regarded as a burden. Even when children maintain the virtue of filial piety, when resources are limited, "respect for the elderly" gives way to "care for the youth." The intergenerational relationship has gradually changed from the "care-support" feedback mode (Fei 1983) to the "relay-feedback" mode (Yang 2017). Accordingly, the family intergenerational relationship has shifted from respecting the elderly to caring for the youth: the upward orientation toward parents has gradually weakened in family life, while the downward orientation toward children plays an increasingly important role, and this trend is also embodied in the intergenerational effect of charitable donation. This gives young people, who have absorbed an advanced philanthropic culture and its ideas, substantial advantages in shaping the spirit of modern philanthropy in China.
Implications and prospects
To develop a theory of Chinese philanthropy, we must consider the "differential pattern" social structure with Confucianism as its kernel. Previous academic discussions have regarded Chinese familism as an obstacle to modern philanthropy, but we believe that the family has strong potential for inspiring individual charitable behavior. In China, the family has always been the basic unit of social life. Although substantial changes have occurred in China's social, economic, and cultural environments over thousands of years, the family retains its significant influence on various aspects of social life and on the development of philanthropy. Confucianism emphasizes inheritance: the transmission of philanthropic culture from parents to children, and from the older generation to the younger, is the transmission of the traditional Chinese concept of "benevolence." In addition, because of the "differential pattern" social structure, the roles of the family and family members in Chinese individual life are too significant to be supplanted by other social relations. Therefore, the attitudes and behaviors of family members play an irreplaceable role in the decision-making and diffusion of individual charitable behavior.
Hence, the family is an important social institution through which the tradition of philanthropy spreads, and the intergenerational relationship is an important carrier of the inheritance and feedback of philanthropic culture. In the social context of China, therefore, the charitable behavior of family members cannot be ignored when investigating the causes of individual participation in charitable donation. Scholars have concluded that traditional Chinese philanthropic practice has always been characterized by a "strong state and weak society," and that the concept and scope of individual charitable activities are limited and markedly particularistic (Han and Zheng 2014). Philanthropy in the modern sense emphasizes non-differential order, universality, and public welfare. This is a new concept for a transforming Chinese society and a challenge for the development of Chinese philanthropy.
If an individual's charitable behavior is regarded as a micro-level manifestation of the philanthropic culture of the whole society, then the strong "intergenerational feedback" effect of charitable behavior demonstrates that the traditional top-down mode of transmitting philanthropic culture is changing into a bidirectional, or even bottom-up, mode in modern society. In a transforming Chinese society, children have obtained greater discourse power in shaping family philanthropic culture. Against the background of growing interaction between Chinese and Western philanthropic cultures and the younger generation's greater exposure to modern philanthropic concepts, transmitting the modern spirit and practice of charity through "cultural feedback" would substantially help China complete the modernization of philanthropy. This is not only a result of the reform and opening-up but also its social concomitant over the past 40 years. "Cultural feedback" can promote symbiosis and harmony among generations through mutual tolerance of multigenerational cultures and their differences amid dramatic social transformation (Zhou 2015). The feedback effect of philanthropy can facilitate the joint growth of two generations, promote the communication and integration of traditional and modern philanthropic culture, and advance a new prosperity of Chinese philanthropy.
These results have critical implications for the mobilization of charitable donation in China. To encourage individual charitable behavior, the family should be placed at the core of the incentive system. The transmission of charitable behavior through intergenerational relationships should be emphasized so as to utilize the family's incentive role, and the primary direction and conditions of that transmission should be taken into account to improve the effectiveness and refinement of mobilization strategies. The upward intergenerational transmission effect of charitable donation is stronger than the downward effect. This not only supports the core position of downward intergenerational relations in Chinese families but also reflects the profound influence of the rise and development of philanthropy on the children, which produces their powerful "cultural feedback" effect. Therefore, in developing philanthropy, we should strengthen the mobilization of children within the family and draw on young people's capacity to absorb new knowledge and their facility with electronic devices and new technology. We should encourage young people to help the elders in their families comprehend the concepts of philanthropy and learn to use charity tools, thereby spreading the spirit of philanthropy.
Regarding the negative impact of familism on philanthropy in China, some scholars suggest that the development of philanthropy must rely on administrative power and the joint efforts of enterprises and individuals, which do not conform to the logic of "familism" (Gao and Wang 2011). This study examines the family's role in the development of China's philanthropy from a new perspective. The unique social structure is a vital element in the development of philanthropy in China. Since the influence of "familism" culture on the public is deeply rooted, a strategy that simply avoids the influence of the family and designs incentives for charitable participation on non-family logic will not be effective in the long term. Instead, we must deeply and comprehensively examine the significance of the family for philanthropy and proactively explore its role in encouraging individual charitable behavior. As this study suggests, the transmission of charitable behavior and culture among family members can effectively promote individual charitable donation, the spread of family charitable behavior, and the cultivation of a philanthropic culture in the broader society.
In addition, the results of this study emphasize the significance of charity in maintaining family relations. Just as "cultural feedback" has facilitated the construction of a convenient and clear bridge for generations to move toward symbiosis and harmony, the feedback effect of charitable behavior can also promote intergenerational harmony. Charity spreads the spirit of love and altruism. "Knowing the good" and "practicing the good" in philanthropic participation not only contribute to social harmony but also help reduce family conflicts. Moreover, new forms of charitable participation that involve families, such as the promotion of "parental philanthropy," have caused "individualized" charitable participation to develop into "family-oriented" charitable participation and have changed the "on-site participation" mode into the "interactive experience" mode. Charity not only provides a platform for children to participate in family interaction but also offers opportunities for parents to conduct family education and cultivate close parent-child relationships. It is reasonable to suggest that the intergenerational relationship of family members is not only the premise but also the result of family philanthropy.
Therefore, as important agents in shaping charity, charitable organizations should focus on the intergenerational family relationship and work to embed the philanthropic spirit into family culture. The credibility, organizational capacity, mobilization capacity, and communication skills of charitable organizations affect the scale and composition of participants and hence the development potential of family philanthropy. An unsuitable development strategy will result in the loss of donors and the stagnation of philanthropy, whereas organizations that successfully engage in family education and parent-child interaction can increase donors' loyalty over the long term. The mutual reinforcement of philanthropic culture and family involvement can promote the prosperity of philanthropy. For charitable organizations, the family's intergenerational relationship is therefore critical for understanding the status of, and possible changes in, charitable behavior. Family culture and intergenerational relations in Chinese society have their own characteristics, and they serve as important codes for interpreting the complexity and diversity of philanthropy in China.
This exploratory study takes the intergenerational effect on charitable donation as a breakthrough point for promoting indigenous research and theoretical innovation in the study of philanthropy in Chinese society. Based on a large-scale national survey in China, it also generated, for the first time, family-member pair data associated with charitable behavior, thereby providing material for further research.
However, this study has several limitations. First, it examines only whether individuals participate in charitable donation and does not consider the heterogeneity of donation behavior, e.g., the amount and type of donation. This is mainly because the large-scale national survey does not include detailed data on individual donations, and several other surveys on the amounts of individual charitable donations suffer from substantial missing data, which would compromise the validity of the analysis. This study therefore could not subdivide charitable donations; nevertheless, its objective has been achieved. Second, this study considers only family members who live together and does not cover non-coresident family members. Future studies should examine the intergenerational effect on charitable donation including non-coresident family members, based on more comprehensive data. Finally, owing to the limitations of the data, although this study outlines the paths of intergenerational influence on donation based on the literature, it estimates only the intergenerational effect and not the mechanism behind it. More targeted and detailed data could fill this gap in the future.
Future studies can further explore gender roles in the intergenerational transmission of charitable donation, for example by comparing the roles and influence of fathers and mothers and by refining the dynamic mechanism of socialization with respect to charitable behavior. Similarly, we suggest comparing the influence of sons and daughters on the intergenerational feedback of charitable donation under various marital conditions, thereby examining marriage effects and gender roles in cultural feedback. Finally, given the drastic social transformation and the varying stages of development of Chinese philanthropy, the influence of the family on individual charitable behavior varies across periods and regions, with the most substantial differences between urban and rural areas and among generations. Further exploration of these variations would enrich the relevant theories of philanthropy research and would be of substantial theoretical and practical significance for research on China's social transformation.
This study is based on publicly available data from the China Labor-force Dynamic Survey (CLDS 2014).
Acock, A. 1984. Parents and their children: The study of intergenerational influence. Sociology and Social Research 68 (2): 151–171.
Andreoni, J., and L. Vesterlund. 2001. Which is the fair sex? Gender differences in altruism. Quarterly Journal of Economics 116 (1): 293–312.
Bekkers, R., and N. De Graaf. 2006. Education and prosocial behavior. Working paper. Utrecht: Department of Sociology / ICS Utrecht University.
Bekkers, R., and P. Wiepking. 2010. A literature review of empirical studies of philanthropy. Nonprofit and Voluntary Sector Quarterly 40 (5): 924–973.
Bénabou, R., and J. Tirole. 2006. Incentives and prosocial behavior. American Economic Review 96 (5): 1652–1678.
Bengtson, V.L. 1975. Generation and family effects in value socialization. American Sociological Review 40 (3): 358–371.
Bi, X., J. Jin, M. Ma, and J. He. 2010. The reach of Danwei mobilization: An analysis on urban residents' charitable giving to Project Hope. Sociological Studies 6: 149–177.
Cai, J. 2015. The origin, theme and development trend of intergenerational relations: A literature review. China Youth Study 11: 38–42.
Charities Aid Foundation. 2015. World giving index 2015: A global view of giving trends. Kent: Charities Aid Foundation.
Chen, G. 2011. Summary of Chinese legal history. Beijing: The Commercial Press.
Chi, L., and Z. Xin. 2013. Intermediary mechanism of trust intergenerational transmission: A conceptual model. Journal of Capital Normal University (Social Sciences Edition) 1: 140–147.
Deng, Z., and R. Xu. 2001. Family sociology. Beijing: China Social Sciences Press.
Eisenberg, N. 1990. Prosocial development in early and mid-adolescence. In From childhood to adolescence: A transitional period, ed. R. Montemayor, Gerald R. Adams, and Thomas P. Gullota, 240–268. Newbury Park: Sage.
Eisenberg, N., and R.A. Fabes. 1998. Prosocial development, consistency and development of prosocial dispositions. In Handbook of child psychology: Social, emotional, and personality development (Vol.3), ed. N. Eisenberg and W. Damon, 701–778. New York: Wiley.
Featherman, D.L., and R.M. Hauser. 1978. Opportunity and change. New York: Academic.
Fei, X. 1983. Support for the elderly in the change of family structure: On the changes of Chinese family structure. Journal of Peking University (Humanities and Social Sciences) 3: 7–16.
Fei, X. 1998. From the soil, fertility system. Beijing: Peking University Press.
Fukuyama, F. 2001. Trust: The social virtues and the creation of prosperity, Peng Zhihua translated. Hainan: Hainan Press.
Gao, Z., and J. Wang. 2011. Family-oriented concept: The bottleneck of Chinese public welfare. Journal of Social Work 8: 85–87.
Glass, J., V.L. Bengtson, and C. Dunham. 1986. Attitude similarity in three-generation families: Socialization, status inheritance, or reciprocal influence? American Sociological Review 51 (5): 685–698.
Han, L., and G. Zheng. 2014. A comparative study of Chinese and Western charitable culture resources. Journal of Nanchang University (Humanities and Social Sciences) 1: 104–109.
Hao, Y., and X. Feng. 1998. Investigation and analysis on the characteristics of only child families in urban middle schools. Fujian Tribune (A Economics & Sociology Monthly) 3: 52–54.
Hochman, H.M., and J.D. Rodgers. 1977. The optimal tax treatment of charitable contributions. National Tax Journal 30 (1): 1–17.
Hodgkinson, V.A., and M.S. Weitzman. 1992. Giving and volunteering in the United States: Findings from a National Survey. Washington, DC: The Independent Sector.
Huang, J. 2008. The origin comparison and implication of charity culture between China and the west. Tianfu New Idea 3: 107–110.
Hu, R., and S. Shen. 2013. Social capital and individual donation in rural China. Journal of Public Administration 5: 60–75.
Liang, S. 2005. The substance of Chinese culture. Shanghai: Shanghai Renmin Press.
Lilly Family School of Philanthropy. 2016. A tradition of giving: New research on giving and volunteering within families. Indianapolis: Indiana University–Purdue University Indianapolis (IUPUI).
Lin, N. 1986. The next step in the Sinicization of sociology. Sociological Studies 1: 89–96.
Liu, F., and W. Lu. 2013. Influence of social economical status upon charitable donation behavior. Journal of Beijing Normal University (Social Sciences) 3: 113–120.
Ma, R. 2007. The differential mode of association: Understanding of traditional Chinese social structure and the behaviors of the Chinese people. Journal of Peking University (Philosophy and Social Sciences) 2: 131–142.
Ma, C., J. Shi, Y. Li, Z. Wang, and C. Tang. 2011. Family change in urban areas of China: Main trends and latest findings. Sociological Studies 2: 182–216.
Mead, M. 1987. Culture and commitment: A study of the generation gap, Xiaohong Zhou and Yi Zhou translated. Shijiazhuang: Hebei Renmin Press.
Meer, J. 2011. Brother, can you spare a dime? Peer pressure in charitable solicitation. Journal of Public Economics 95 (7-8): 363–371.
Moen, P., M.A. Erickson, and D. Dempster-McClain. 1997. Their mother's daughters? The intergenerational transmission of gender attitudes in a world of changing roles. Journal of Marriage and the Family 59 (2): 281–293.
Mustillo, S., J. Wilson, and S.M. Lynch. 2004. Legacy volunteering: A test of two theories of intergenerational transmission. Journal of Marriage and the Family 66 (2): 530–541.
Pan, Y., and N. Lin. 1992. Vertical family relationship in China and its impact on society. Sociological Studies 6: 73–80.
Paxton, P.M., J. Hipp, and S. Marquart-Pyatt. 2016. Nonrecursive models: Endogeneity, reciprocal relationships, and feedback loops, Fan Xinguang translated. Shanghai: Truth & Wisdom Press.
Shah, D.V., M. Schmierbach, J. Hawkins, R. Espino, and J. Donavan. 2002. Nonrecursive models of internet use and community engagement: Questioning whether time spent online erodes social capital. Journalism and Mass Communication Quarterly 79 (4): 964–987.
Shen, Y. 2007. Guanxi, Renqing and Mianzi: The everyday practice of Ren, Yi, and Li. Open Times 4: 90–106.
Shi, J. 2016. The evolvement of family intergenerational relationship in transition: Mechanism, logic and tension. Sociological Studies 6: 191–213.
Tian, C. 2009. A review of studies on family intergenerational relations. Tian Fu New Idea 1: 108–112.
Vesterlund, L. 2006. Why do people give? In Nonprofit sector: A research handbook, ed. R. Steinberg and W.W. Powell, 2nd ed. New Haven: Yale Press.
Wang, L., and E. Graddy. 2008. Social capital, volunteering and charitable giving. Voluntas: International Journal of Voluntary and Nonprofit Organizations 19 (1): 23–42.
Wang, Y. 2013. An analysis of the changes in China's urban and rural family structures: Based on 2010 census data. Social Sciences in China 12: 60–77.
Wang, N. 2017. Indigenization of sociology in China: Debates, core challenges, and the way forward. Sociological Studies 5: 15–38.
Weber, M. 1995. Confucianism and Taoism, Wang Rongfen translated. Beijing: The Commercial Press.
Wen, X. 2014. The Confucian family-centered ethic and generational justice. Nanjing Journal of Social Sciences 11: 47–53.
Wilhelm, M.O., E. Brown, P.M. Rooney, and R. Steinberg. 2008. The intergenerational transmission of generosity. Journal of Public Economics 92 (10-11): 2146–2156.
Yan, Y. 2006. "Chaxu geju" and the notion of hierarchy in Chinese culture. Sociological Studies 4: 201–213.
Yang, F. 2009. The comparison between charity culture and Chinese charity. Shandong Social Sciences 1: 76–79.
Yang, S. 2011. The change of urban families in contemporary China and family cohesion. Journal of Peking University (Philosophy & Social Sciences) 2: 150–158.
Yang, J. 2017. Fertility policy and family change in China. Open Times 3: 12–26.
Yao, J. 2012. "Temporary stem family": The transformation and strategy of family structure in cities. Youth Studies 3: 85–93.
Yu, Y. 2003. Modern interpretation of Chinese thought tradition. Nanjing: Jiangsu Renmin Press.
Zhang, J. 2010. Charisma, publicity, and China society. Chinese Journal of Sociology 5: 1–24.
Zhao, B. 2011. A study on the role of government in charitable donation motivation. Academics 10: 155–160.
Zhou, X. 1988. On the significance of feedback of youth culture in contemporary China. Youth Studies 11: 22–26.
Zhou, X. 2000. The cultural feedback: Parental transmission in transforming society. Sociological Studies 2: 53–68.
Zhou, X. 2011. Cultural repayment of support and the intergenerational inheritance of artifact civilization. Social Sciences in China 6: 109–120.
Zhou, Q., and Y. Lin. 2014. Inheritance and reconstruction: The history and reality of China charity development transformation. Qilu Journal 2: 82–87.
Zhou, X. 2015. From subversion and growth to symbiosis and harmony: Intergenerational influence and social significance of cultural feedback. Hebei Academic Journal 3: 104–110.
Zhu, J., and Y. Liu. 2017. An exploratory research on the scale and major factors of Chinese household donation: An analysis based on China labour force dynamics survey. Chinese Journal of Population Science 1: 47–58.
This study was funded by the National Social Science Fund of China (NSSFC) (grant no. 17CSH061).
School of Public Affairs, Chongqing University, No.174 Shazhengjie, Shapingba, Chongqing, 400044, China
Yongjiao Yang
School of Sociology and Anthropology, Center of Urban Study, Sun Yat-sen University, Guangzhou, China
Yuting Shi
Western Research Base of Sociology, Chongqing Technology and Business University, Chongqing, China
Dong Zhang
Yongjiao Yang designed the study and conducted the research; Yuting Shi contributed to the theory construction and data cleaning; and Dong Zhang contributed to the data analysis and modeling. The authors read and approved the final manuscript.
Correspondence to Yongjiao Yang.
Yang, Y., Shi, Y. & Zhang, D. Intergenerational effects on individual charitable donation: an innovative study on philanthropy in China. J. Chin. Sociol. 7, 22 (2020). https://doi.org/10.1186/s40711-020-00139-2
Downward intergenerational transmission
Upward intergenerational reverse transmission | CommonCrawl |
March 2020, 16(2): 1009-1035. doi: 10.3934/jimo.2018190
A primal-dual interior-point method capable of rapidly detecting infeasibility for nonlinear programs
Yu-Hong Dai 1,2, Xin-Wei Liu 3, and Jie Sun 4,5,6
LSEC, ICMSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
School of Mathematical Sciences, University of Chinese Academy of Sciences
Institute of Mathematics, Hebei University of Technology, Tianjin 300401, China
School of Science, Curtin University, Perth, Australia
School of Business, National University of Singapore, Singapore
Received: July 2018. Revised: August 2018. Published: December 2018.
Fund Project: The first draft of this paper was completed on December 2, 2014. The first author is supported by the Chinese NSF grants (nos. 11631013, 11331012, and 81173633) and the National Key Basic Research Program of China (no. 2015CB856000). The second author is supported by the Chinese NSF grants (nos. 11671116 and 11271107) and the Major Research Plan of the NSFC (no. 91630202). The third author is supported by Grant DP-160101819 of the Australian Research Council.
With the help of a logarithmic-barrier augmented Lagrangian function, we can obtain closed-form solutions for the slack variables of the logarithmic-barrier problems of nonlinear programs. As a result, a two-parameter primal-dual nonlinear system is proposed, which corresponds, respectively, to the Karush-Kuhn-Tucker point and the infeasible stationary point of nonlinear programs as one of the two parameters vanishes. Based on this distinctive system, we present a primal-dual interior-point method capable of rapidly detecting the infeasibility of nonlinear programs. The method generates interior-point iterates without truncating the step. It is proved that our method converges to a Karush-Kuhn-Tucker point of the original problem as the barrier parameter tends to zero; otherwise, the scaling parameter tends to zero, and the method converges to either an infeasible stationary point or a singular stationary point of the original problem. Moreover, the method can rapidly detect the infeasibility of the problem: under suitable conditions, it is superlinearly or quadratically convergent to a Karush-Kuhn-Tucker point if the original problem is feasible, and superlinearly or quadratically convergent to an infeasible stationary point if the problem is infeasible. Preliminary numerical results show that the method is efficient in solving some simple but hard problems; superlinear convergence to an infeasible stationary point is demonstrated on two infeasible problems from the literature.
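For orientation, the sketch below illustrates the generic logarithmic-barrier idea underlying methods of this kind—minimizing a sequence of barrier subproblems f(x) − μ ln c(x) by Newton's method while driving μ to zero—on a one-dimensional toy problem. It is a textbook-style illustration only, not the primal-dual algorithm proposed in this paper.

```python
# Toy problem: minimize f(x) = (x - 2)^2 subject to c(x) = 1 - x >= 0,
# whose constrained minimizer is x* = 1.

def barrier_newton(x, mu, iters=50, tol=1e-10):
    """Newton's method on phi(x) = (x-2)^2 - mu*ln(1-x) for fixed mu."""
    for _ in range(iters):
        c = 1.0 - x
        grad = 2.0 * (x - 2.0) + mu / c      # phi'(x)
        if abs(grad) < tol:
            break
        hess = 2.0 + mu / c ** 2             # phi''(x) > 0, so Newton descends
        step = -grad / hess
        while 1.0 - (x + step) <= 0.0:       # keep the iterate strictly feasible
            step *= 0.5
        x += step
    return x

x, mu = 0.0, 1.0
while mu > 1e-8:            # outer loop: drive the barrier parameter to zero
    x = barrier_newton(x, mu)
    mu *= 0.1
print(x)                     # approaches 1.0, the constrained minimizer
```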
Keywords: Nonlinear programming, constrained optimization, infeasibility, interior-point method, global and local convergence.
Mathematics Subject Classification: Primary: 90C30, 90C51; Secondary: 90C26.
Citation: Yu-Hong Dai, Xin-Wei Liu, Jie Sun. A primal-dual interior-point method capable of rapidly detecting infeasibility for nonlinear programs. Journal of Industrial & Management Optimization, 2020, 16 (2) : 1009-1035. doi: 10.3934/jimo.2018190
Table 1. Output for test problem (TP1)
$l$ $f_l$ $v_l$ $\|\phi_l\|_{\infty}$ $\|\psi_l\|_{\infty}$ $\beta_l$ $\rho_l$ $k$
0 5 16.6132 129.6234 129.6234 0.1000 3.3226 -
1 0.1606 2.0205 4.8082 0.7313 0.1000 0.0972 3
2 -0.0149 2.0002 0.0989 0.0445 0.1000 0.0020 4
3 -0.0036 2.0000 0.0029 0.0018 0.1000 3.1595e-06 3
4 -0.0029 2.0000 3.1674e-06 2.8185e-06 0.1000 1.0000e-09 1
5 0.0018 2.0000 1.0011e-09 6.7212e-10 - - -
Table 2. Output for test problem (TP2)
$l$ $f_l$ $v_l$ $\|\phi_l\|_{\infty}$ $\|\psi_l\|_{\infty}$ $\beta_l$ $\rho_l$ $k$
0 -20 126.6501 2.8052e+03 2.8052e+03 0.1000 6.3325 -
1 -172.5829 172.7978 1.0948e+03 6.2866 0.1000 0.8719 6
8 -0.1999 0.4472 9.2732e-10 9.2732e-10 - - -
Table 3. Output for test problem (TP3)
$l$ $f_l$ $v_l$ $\|\phi_l\|_{\infty}$ $\|\psi_l\|_{\infty}$ $\beta_l$ $\rho_l$ $k$
0 20 2.8284 9.9557 9.9557 0.1000 1 -
1 0.2305 0.4167 0.8900 0.7008 0.0100 1 4
3 0.1690 0.1630 0.0503 0.0022 0.0100 4.7328e-06 1
4 0.8561 2.9531e-04 3.1379e-06 3.1379e-06 0.0100 1.0000e-09 14
5 0.9028 1.2372e-04 9.3463e-08 9.3463e-08 - - -
Table 4. The last $ 4 $ inner iterations corresponding to $ l = 4 $ for test problem (TP4)
$k$ $f_k$ $v_k$ $\|\phi_k\|_{\infty}$ $\|\psi_k\|_{\infty}$ $x_{k1}$ $x_{k2}$
11 0.8500 5.7136e-04 5.6703e-04 5.6703e-04 1.0780 0.0001
12 0.8548 3.0434e-04 1.2222e-05 1.2222e-05 1.0754 -0.0002
Workshop & Seminars
Bridging-Time Scale Techniques and their Applications in Atomistic Computational Science
Poster Contributions
Talk pdfs
Quantum Memory from Quantum Dynamics
NZIAS-MPIPKS Tandem Workshops
Floquet Physics
FFLO-Phase in Quantum Liquids, Quantum Gases, and Nuclear Matter
Festkolloquium
Quantum Sensing with Quantum Correlated Systems
Topological Matter in Artificial Gauge Fields
Topological Phenomena in Novel Quantum Matter: Laboratory Realization of Relativistic Fermions and Spin Liquids
Quantum-Classical Transition in Many-Body Systems: Indistinguishability, Interference and Interactions
Principles of Biological and Robotic Navigation
Topological Patterns and Dynamics in Magnetic Elements and in Condensed Matter
Tensor Product State Simulations of Strongly Correlated Systems
Prospects and Limitations of Electronic Structure Imaging by Angle Resolved Photoemission Spectroscopy
Brain Dynamics on Multiple Scales - Paradigms, their Relations, and Integrated Approaches
Discrete, Nonlinear and Disordered Optics
Quantum Dynamics in Tailored Intense Fields
Chaos and Dynamics in Correlated Quantum Matter
Pattern Dynamics in Nonlinear Optical Cavities
Strong Correlations and the Normal State of the High Temperature Superconductors
Physical Biology of Tissue Morphogenesis - Mechanics, Metabolism and Signaling
Atomic Physics 2017
Dynamical Probes for Exotic States of Matter
Future Trends in DNA-based Nanotechnology
Korrelationstage 2017
New Platforms for Topological Superconductivity with Magnetic Atoms
Many paths to interference: a journey between quantum dots and single molecule junctions
Critical Stability of Quantum Few-Body Systems
Disorder, Interactions and Coherence: Warps and Delights
Multistability and Tipping: From Mathematics and Physics to Climate and Brain
Group Picture
Two-Phase Continuum Models for Geophysical Particle-Fluid Flows
Climate Fluctuations and Non-equilibrium Statistical Mechanics: An Interdisciplinary Dialogue
Joint IMPRS Workshop on Condensed Matter, Quantum Technology and Quantum Materials
Novel Paradigms in Many-Body Physics from Open Quantum Systems
Optimising, Renormalising, Evolving and Quantising Tensor Networks
Single Nanostructures, Nanomaterials, Aerogels and their Interactions: Combining Quantum Physics and Chemistry
Anderson Localization and Interactions
Machine Learning for Quantum Many-body Physics
Stochastic Thermodynamics: Experiment and Theory
Correlated Electrons in Transition-Metal Compounds: New Challenges
Tensor Network based approaches to Quantum Many-Body Systems
Image-based Modeling and Simulation of Morphogenesis
Social Events & group photo
Anyons in Quantum Many-Body Systems
Constrained Many-body Dynamics
Quantum Ferromagnetism and Related Phenomena
Granular and Particulate Networks
Phase Transitions in Polymeric and Protein Systems
Atomic tunneling Systems and fluctuating Spins interacting with superconducting Qubits
Engineering Nonequilibrium Dynamics of Open Quantum Systems
Quantum Criticality and Topology in Correlated Electron Systems
Fluid Physics of Life
Bound states in superconductors and interfaces
Challenges in Nanoscale Physics of Wetting Phenomena
Dynamical Methods in Data-based Exploration of Complex Systems
workshop week - seminar rooms 1 - 3
Registration at guest house 4, library
Welcome reception and dinner at the MPIPKS
Informal discussions
08:45 - 09:00 Roderich Moessner (MPIPKS) & scientific coordinators
Chair: Jonathan Ruhman
09:00 - 09:45 Sergey Borisenko (Leibniz Institute for Solid State and Materials Research (IFW) Dresden)
Nematicity and topology in iron-based superconductors from ARPES
I will give an overview of our recent ARPES results on this topic. In particular, the data on LiFeSe, Ba122 and FeSeTe will be discussed.
09:45 - 10:30 Takasada Shibauchi (University of Tokyo)
Electronic nematicity in the pseudogap state in cuprates
In the quest to understand the superconductivity in cuprates, a key issue is the nature of the enigmatic pseudogap region of the phase diagram. An especially important question is whether the pseudogap state is a distinct thermodynamic phase characterized by broken symmetries below the onset temperature $T^*$. We use torque-magnetometry and elastoresistance measurements to test the rotational symmetry breaking in the pseudogap state in single crystals of YBa$_2$Cu$_3$O$_y$ (YBCO) and Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ (Bi2212). In YBCO, the anisotropic susceptibility within the ab planes was measured by torque magnetometry with exceptionally high precision [1]. The in-plane anisotropy displays a significant increase with a distinct kink at the pseudogap onset temperature $T^*$, showing a remarkable scaling behavior with respect to $T/T^*$ in a wide doping range. Our systematic analysis reveals that the rotational symmetry breaking sets in at $T^*$ in the limit where the effect of orthorhombicity is eliminated. In Bi2212, the nematic susceptibility that describes the electronic anisotropy response to a uniaxial strain was measured by elastoresistance above $T^*$ [2]. We find that the nematic susceptibility increases upon cooling with a kink anomaly at $T^*$, evidencing a second-order transition with nematic fluctuations. These results provide thermodynamic evidence that the pseudogap in cuprates accompanies an electronic nematic order, which differs from the recently reported charge-density-wave transition that accompanies translational symmetry breaking. [1] Y. Sato et al., Nat. Phys. 13, 1074 (2017). [2] K. Ishida et al., preprint.
11:00 - 11:45 Rafael Fernandes (University of Minnesota, Minneapolis)
Vestigial electronic orders in correlated systems: nematicity and beyond
A hallmark of the phase diagrams of correlated electronic systems is the existence of multiple electronic ordered states. In many cases, they cannot be simply described as independent competing phases, but instead display a complex intertwinement. A prime example of intertwined states is the case of primary and vestigial phases. While the former is characterized by a multi-component order parameter, the fluctuation-driven vestigial state is characterized by a composite order parameter formed by higher-order, symmetry-breaking combinations of the primary order parameter. This concept has been widely employed to elucidate nematicity in both iron-based and cuprate superconductors. In this talk, I will present a group-theoretical framework, supplemented by microscopic calculations, that extends this notion to a variety of phases, providing a general classification of vestigial orders of unconventional superconductors and density-waves. Electronic states with scalar and vector chiral order, spin-nematic order, Ising-nematic order, time-reversal symmetry-breaking order, and algebraic vestigial order emerge from this simple underlying principle. I will present a rich variety of possible phase diagrams involving the primary and vestigial orders, and discuss possible realizations of these exotic composite orders in different materials.
11:45 - 12:30 Alain Sacuto (University Paris Diderot)
Energy Scale of Charge Density Wave in Cuprates
The cuprate high temperature superconductors develop spontaneous charge density wave (CDW) order below a temperature $T_{\rm CDW}$ and over a wide range of hole doping ($p$). An outstanding challenge in the field is to understand whether this modulated phase is related to the more exhaustively studied pseudogap and superconducting phases [1,2]. To address this issue, it is important to extract the energy scale $\Delta_{\rm CDW}$ associated with the CDW order, and to compare it with the pseudogap (PG) $\Delta_{\rm PG}$ and with the superconducting gap $\Delta_{\rm SC}$. However, while $T_{\rm CDW}$ is well-characterized from earlier work, little has been known about $\Delta_{\rm CDW}$ until now. Here, we report the extraction of $\Delta_{\rm CDW}$ for several cuprates using electronic Raman spectroscopy. Crucially, we find that upon approaching the parent Mott state by lowering $p$, $\Delta_{\rm CDW}$ increases in a manner similar to the doping dependence of $\Delta_{\rm PG}$ and $\Delta_{\rm SC}$ [3]. This reveals that the above three phases have a common microscopic origin. In addition, we find that $\Delta_{\rm CDW} \approx \Delta_{\rm SC}$ over a substantial doping range, which suggests that the CDW and superconducting phases are intimately related; for example, they may be intertwined or connected by an emergent symmetry [1,4-6]. [1] E. Fradkin et al., Rev. Mod. Phys. 87, 457 (2015). [2] B. Keimer et al., Nature 518, 179 (2015). [3] B. Loret et al., arXiv:1808.08198, accepted in Nature Physics. [4] K. B. Efetov et al., Nat. Phys. 9, 442 (2013). [5] S. Sachdev et al., Phys. Rev. Lett. 111, 027202 (2013). [6] Y. Wang et al., Phys. Rev. Lett. 114, 197001 (2015).
Chair: Erez Berg
14:00 - 14:45 Mathias Scheurer (Harvard University)
Gauge theories for the cuprate superconductors
As a consequence of strong correlations, many aspects of the complex phase diagram of the cuprate high-temperature superconductors are still under debate. In the first part of the talk, I will discuss a non-Abelian gauge theory that we propose [1] as an effective field theory for the cuprates near optimal doping. In this theory, spin-density-wave order is fractionalized into Higgs fields while all low-energy fermionic excitations are electron-like and gauge neutral. The conventional Fermi-liquid state corresponds to the confining phase of the theory at large doping and there is a quantum phase transition to a Higgs phase, describing the pseudogap, at low doping. It will be shown that the topological order of the pseudogap state is very naturally intertwined with charge-density-wave, Ising-nematic, and scalar spin-chirality order. We will also discuss the deconfined quantum criticality in the limit of large numbers of Higgs flavors. The second part of the talk deals with the thermal Hall effect in related gauge theories of the square-lattice antiferromagnet [2,3]. I will show that these approaches can yield a sizeable thermal Hall response, similar to what has recently been seen in experiment [4]. [1] Phys. Rev. B 99, 054516 (2019). [2] Phys. Rev. B 99, 165126 (2019). [3] arXiv:1903.01992 (2019). [4] arXiv:1901.03104 (2019).
14:45 - 15:30 Yuxuan Wang (University of Florida, Gainesville)
A solvable random model for the non-Fermi liquid pairing problem
We show that a random interacting model exhibits solvable non-Fermi liquid behavior and exotic pairing behavior. The model describes the random Yukawa coupling between $N$ quantum dots, each hosting $M$ flavors of fermions, and $N^2$ bosons that become critical at low energies. The diagrammatic expansion is controlled by $1/MN$, and the results become exact in a large-$M$, large-$N$ limit. We find that pairing only develops within a region of the $(M,N)$ plane: even though the pairing interaction is strongly attractive, the incoherence of the fermions can spoil the forming of Cooper pairs, rendering the system a non-Fermi liquid down to zero temperature. By solving the Eliashberg equation and a renormalization group analysis, we show that the transition into the pairing phase exhibits Kosterlitz-Thouless quantum-critical behavior.
15:30 - 16:00 Richard Greene (via Skype) (University of Maryland, College Park)
Strange metal transport properties of an electron-doped cuprate
I report measurements of resistivity, Hall effect, magnetoresistance and thermopower in the electron-doped cuprate La$_{2-x}$Ce$_x$CuO$_4$ for 0.19 ≥ x ≥ 0.08 as a function of temperature. The new and unconventional results are: 1) The normal state magnetoresistance exhibits an anomalous linear-in-H behavior [1] at the same doping and temperature where a linear-in-T resistivity was previously observed for H > H$_{c2}$ [2], i.e. above the Fermi surface reconstruction at x = 0.14 up to the end of the superconducting dome (x ~ 0.175). For doping above the dome conventional Fermi liquid behavior is found (but with intrinsic ferromagnetism below 4 K, arXiv:1902.11235). 2) The normal state Seebeck coefficient, S/T, exhibits an unconventional low temperature -lnT dependence at the same doping where linear-in-T and linear-in-H resistivity is found [3]. Conventional S/T = constant behavior is found above the superconducting dome. 3) The normal state resistivity above T$_c$, from 80 K to 400 K, follows an anomalous $\sim A(x)T^2$ behavior at zero field for all dopings $x$, with no indication of a MIR limit. Fermi liquid theory cannot explain any of these results. Moreover, the magnitude of the anomalous magnetoresistance and thermopower scales with T$_c$, suggesting that the origin of the superconductivity is correlated with the anomalous normal state properties. [1] T. Sarkar et al., Sci. Adv. 2019. [2] K. Jin et al., Nature 476, 73 (2012). [3] P. R. Mandal et al., PNAS 116, 5991 (2019). [4] T. Sarkar, R. L. Greene, and S. Das Sarma, Phys. Rev. B 98, 224503 (2018).
Chair qctces19-colloquium: Falko Pientka (MPIPKS)
16:30 - 17:30 Jörg Schmalian (Karlsruhe Institute of Technology (KIT))
qctces19-colloquium: Critical and incoherent electron systems and superconductivity within the Sachdev-Ye-Kitaev approach
The established and highly successful description of metals is based on Landau's Fermi liquid theory. The relevant phase space for electron-electron collisions is determined by the Pauli blocking of a degenerate Fermi gas due to its Fermi surface. In some strongly correlated systems narrow bands form and the energetics of the system at elevated energies or temperatures becomes dominated by strong interactions and is no longer restricted by Fermi-surface phase-space effects. To develop a well-controlled approach to this regime is an important question in the theory of strongly correlated electrons. In this talk I will give an overview of the description of incoherent and critical electronic systems using the Sachdev-Ye-Kitaev approach. We consider versions of the model where electrons interact with each other, with boson collective modes, and even with phonons. We also comment on the very direct relation to holographic approaches to strongly coupled quantum theories. Finally we address the emergence of superconductivity in such an incoherent metal and show that pairing and superconductivity of a fully incoherent electronic system is allowed and leads to a pairing state with high transition temperature but low condensation energy.
Chair discussion session: Erez Berg
Chair: Kamran Behnia
09:00 - 09:45 Amit Kanigel (Technion - Israel Institute of Technology)
Chiral superconductivity in 4Hb-TaS$_2$
We study the 4Hb structure of TaS$_2$, which is an alternating stack of a Mott insulator, recently reported as a gapless spin liquid (1T-TaS$_2$), and a 2D superconductor (1H-TaS$_2$). We find a superconducting transition at T$_c$ = 2.7 K, which is significantly elevated compared to the value in bulk 2H-TaS$_2$, 0.7 K. Muon spin relaxation experiments show that 4Hb-TaS$_2$ exhibits signatures of time-reversal symmetry breaking setting in abruptly at T$_c$. This suggests that 4Hb-TaS$_2$ is a chiral superconductor and raises the question of the mutual influence of superconductivity and gapless spin-liquid states.
09:45 - 10:30 Andrew Mackenzie (Max Planck Institute for Chemical Physics of Solids)
Superconductivity and normal state properties of Sr$_2$RuO$_4$
The past year has seen rapid developments in our understanding of the long-studied unconventional superconductor Sr$_2$RuO$_4$. I will report on our group's measurements of the strain dependence of the heat capacity and angle-resolved photoemission, and on collaborative work on the strain dependence of muon spin rotation and the NMR Knight shift. The results of the last project have completely rewritten our understanding of the superconducting order parameter, by reversing famous experimental data from 1998 that was a mainstay of the evidence for spin-triplet superconductivity. I will attempt to assess where things stand at present in light of this revised experimental situation.
11:00 - 11:45 Ronny Thomale (Julius-Maximilians-University Würzburg)
Towards a microscopic theory of unconventional superconductivity in Sr$_2$RuO$_4$
Despite its low T$_c$, Sr$_2$RuO$_4$ has been one of the most important seed materials and inspirations for contemporary research on unconventional superconductivity, encompassing triplet pairing, Majorana zero modes in vortices, chiral Majorana edge states, and chiral topological bulk order. In order to appropriately take into account the enhanced diversity of experimental evidence, the theoretical modelling of superconductivity in Sr$_2$RuO$_4$ necessitates refinement at several levels. This includes the consideration of spin-orbit coupling, uniaxial strain, and the systematic treatment of Fermi surface instabilities and electronic fluctuations beyond the mere weak coupling domain. I review the state of the art of several current frontiers along these lines, and how the functional renormalization group has the potential to reach an accurate description of electron pairing in Sr$_2$RuO$_4$ in the physically relevant parametric window of intermediate interaction strength.
11:45 - 12:30 Jonathan Ruhman (Bar-Ilan University)
Superconductivity in ultra low density semimetals due to a ferroelectric quantum critical point
According to standard theory, low-density metals, such as doped semiconductors and semimetals, are not expected to become superconducting for two main reasons: (1) Their density of states is very low. (2) Their Fermi energy is small and can be even smaller than the Debye frequency. Nonetheless, superconductivity is observed in a wide range of low-density metals. In this talk I will first discuss possible resolutions of this puzzle using different pairing mechanisms. I will then focus on a specific example, where I will show how the coupling between an arbitrarily small Fermi surface in a Dirac semimetal and a quantum critical point of the ferroelectric type leads to strong electronic pairing and superconductivity in spite of the fulfillment of points (1) and (2).
Chair: Andrey V. Chubukov
14:00 - 14:45 Avi Klein (University of Minnesota, Minneapolis)
Intertwined pairing states of superconductivity at a nematic quantum critical point
Superconductivity mediated by dynamical fluctuations of a quantum-critical nematic order-parameter is characterized by the existence of multiple pairing states with closely spaced transition temperatures. These states are topologically distinct, with pairing gaps that differ by the number of nodes on the Fermi surface. Nevertheless, they are all in the same lattice symmetry, usually s-wave. I will discuss how the superconducting phase below $T_c$ is a superposition of many such states, and how interference effects modify superconducting properties such as the Fermi surface gap anisotropy.
14:45 - 15:10 Maxim Dzero (Kent State University)
Thermodynamic properties of disordered unconventional superconductors with competing interactions
The interplay between disorder and competing electronic phases in multiband superconductors has recently received renewed interest in the context of iron-based superconductors. In my talk I will present a theory of a disordered unconventional superconductor with competing magnetic order. My discussion will be based on the results obtained for a two-band model with quasi-two-dimensional Fermi surfaces, which allows for a coexistence region in the phase diagram between magnetic and superconducting states in the presence of intraband and interband scattering induced by doping. Within the quasiclassical approximation I will present the analysis of the Eilenberger equations, which include a weak external magnetic field. I will demonstrate that disorder has a crucial effect on the temperature dependence of the magnetic penetration depth as well as the critical current, which is especially pronounced in the coexistence phase.
15:10 - 15:30 Elio Koenig (Rutgers University)
Crystalline symmetry protected helical Majorana modes in the iron pnictides
We propose that propagating one-dimensional Majorana fermions will develop in the vortex cores of certain iron-based superconductors, most notably Li(Fe$_{1-x}$Co$_x$)As. A key ingredient of this proposal is the 3D Dirac cones recently observed in ARPES experiments [P. Zhang et al., Nat. Phys. 15, 41 (2019)]. Using an effective Hamiltonian around the $\Gamma$-Z line we demonstrate the development of gapless one-dimensional helical Majorana modes, protected by C$_4$ symmetry. A topological index is derived which links the helical Majorana modes to the presence of monopoles in the Berry curvature of the normal state. We present various experimental consequences of this theory and discuss its possible connections with cosmic strings.
Chair: Hitesh Changlani
16:00 - 16:45 Yuji Matsuda (Kyoto University)
Half-integer thermal quantum Hall effect in a chiral spin liquid
The quantum Hall effect (QHE) is one of the most remarkable phenomena in contemporary condensed matter physics, which rivals superconductivity in its fundamental significance as a manifestation of quantum mechanics on a macroscopic scale. The quantum Hall state is a topological property of quantum matter. There are two classes of the QHE, where integer and fractional electrical conductance are measured in units of $e^2/h$. Here we report a novel type of quantization of the Hall effect caused by charge-neutral quasiparticles, i.e. Majorana fermions, in an insulating two-dimensional (2D) quantum magnet, $\alpha$-RuCl$_3$ with a honeycomb lattice [1][2]. This material has been suggested to be a candidate Kitaev quantum spin liquid (QSL), where significant entanglement of quantum spins is expected. In the low-temperature regime of the QSL state, the 2D thermal Hall conductance reaches a quantum plateau as a function of the applied magnetic field. The plateau attains the quantization value $\kappa_{xy}/T=1/2(\pi^2k_B^2/3h)$, which is exactly half of that in the integer QHE. In stark contrast to the quantum Hall effect in a 2D electron gas, the half-integer thermal quantum Hall effect occurs even in a magnetic field parallel to the 2D plane [3]. These observations are direct signatures of topologically protected chiral edge currents of emergent Majorana fermions and non-Abelian anyons in the bulk. [1] Y. Kasahara et al., Phys. Rev. Lett. 120, 217205 (2018). [2] Y. Kasahara et al., Nature 559, 227 (2018). [3] T. Yokoi et al., in preparation.
16:45 - 17:30 Natalia Perkins (University of Minnesota, Minneapolis)
Probing Kitaev spin liquids
Motivated by the ongoing effort to search for high-resolution signatures of quantum spin liquids, we investigate the temperature dependence of the Raman and the indirect resonant inelastic x-ray scattering response for the Kitaev honeycomb model. We find that, as a result of spin fractionalization, the dynamical response changes qualitatively at two well-separated temperature scales, $T_L$ and $T_H$, which correspond to the characteristic energies of the two kinds of fractionalized excitations.
Poster session I (focus on odd poster numbers)
Chair: Dmitrii L. Maslov
09:00 - 09:45 Sean Hartnoll (Stanford University)
Simple models of strange transport
I will discuss various instances in which anomalous transport can be captured with simple scattering mechanisms. I will focus on charge transport in ruthenates, and heat transport in complex insulators such as perovskites.
09:45 - 10:30 Erez Berg (The Weizmann Institute of Science)
Transport in strange metals
I will describe theoretical progress on understanding transport in several models of strange metals.
Group photo (to be published on the event's web page)
11:00 - 11:45 Kamran Behnia (CNRS Paris)
Metallicity and superconductivity in strontium titanate
An overview of recent progress in the field will be given.
Lévy flights and hydrodynamic superdiffusion in quantum-critical transport
Departure for the guided tours by tram from the institute
(meeting point: reception of MPIPKS)
Meeting with the guides in Dresden Neustadt and guided tours (3 groups - limited participant numbers):
group 1: guided tour through Dresden incl. visit of the Royal Castle Dresden
group 2: guided tour through Dresden incl. visit of the Zwinger and its Mathematisch-Physikalischer Salon
group 3: guided tour through Dresden incl. tour of the ascent of the dome of Frauenkirche
Buffet-dinner at the restaurant "Palastecke", Schloßstraße 2 (Tel.: +49 351 48417330)
Chair: Oskar Vafek
09:00 - 09:45 Eva Andrei (Rutgers University)
Strategies for transforming the band structure of 2D materials
Historically, materials discovery has been the result of serendipity or painstaking exploration of a large phase space of chemically synthesized compounds. A new era of materials started with the breakthrough isolation of a free-standing 2D material, graphene, followed by many others. The distinctive characteristic of these 2D materials is that, with all the atoms residing at the surface, it is possible to access and manipulate their properties without changing their chemical composition. I will discuss methods of transforming the electronic structure of graphene: inducing magnetism and Kondo screening by removing single carbon atoms; generating flat bands and pseudo-magnetic fields by introducing a twist between layers, by buckling or by introducing strain.
09:45 - 10:30 Bogdan Andrei Bernevig (Princeton University)
Possible Effects of Topology on Moire Lattices
We will present the possible effects of topological properties on the low-energy physics of moiré graphene lattices.
11:00 - 11:45 Elena Bascones (Consejo Superior de Investigaciones Científicas (CSIC))
Exploring correlations in twisted bilayer graphene
Correlated states at different fillings have been observed upon doping a graphene bilayer with a small twist angle, creating a lot of excitement in the scientific community. The stacking misorientation produces a moiré pattern with a superlattice modulation corresponding to thousands of atoms per unit cell. When the rotation angle is close to the so-called magic angle, the low energy electronic bands of the moiré become almost flat. The insulating states, believed to be due to electronic correlations, arise when the system is doped with an integer number of electrons or holes per moiré unit cell. The superconducting states are intercalated between the insulating ones and show nematic features. More recently, STM experiments have uncovered a new correlated state with flat band splitting. Many different theoretical proposals have been put forward to explain these correlated states and the origin of superconductivity. Understanding the nature of these ordered phases requires the use of different techniques. To date most of the experimental data comes from transport experiments and a few STM experiments. Optical conductivity is one of the techniques which can be used to analyze these systems. I will discuss the features in the optical conductivity spectrum which can help identify the nature of the correlated states.
11:45 - 12:15 Eli Fox (Stanford University)
Emergent ferromagnetism in twisted bilayer graphene
When two sheets of graphene are stacked at a small twist angle, the resulting flat superlattice minibands are expected to strongly enhance electron-electron interactions. Here we present evidence that near three-quarters (3/4) filling of the conduction miniband these enhanced interactions drive the twisted bilayer graphene into a ferromagnetic state. In a narrow density range around an apparent insulating state at 3/4, we observe emergent ferromagnetic hysteresis, with a giant anomalous Hall (AH) effect and indications of chiral edge states. Surprisingly, the magnetization of the sample can be reversed by applying a small DC current. Although the AH resistance is not quantized and dissipation is significant, our measurements suggest that the system may be an incipient Chern insulator. We further present magnetotransport measurements in tilted field that may hold clues to the nature of the magnetism, and briefly discuss the appearance of a similar phenomenon in an ABC-stacked trilayer graphene/hexagonal boron nitride moiré superlattice.
12:15 - 12:45 Matthias Punk (Ludwig-Maximilian University of Munich)
Metallic states with topological order in twisted bilayer graphene
Twisted bilayer graphene exhibits an unusual metallic phase with an anomalously low charge carrier density near the Mott insulator, which is reminiscent of the pseudogap phase in underdoped cuprates. Here we present theoretical results for a two-species quantum dimer model on the triangular lattice, which provides a simple effective description of a metallic state with topological order (i.e. a fractionalized Fermi liquid). This state exhibits a small charge carrier concentration as observed in experiments, without breaking any symmetries. We develop an exact solution of this model and use it to predict ordering wave-vectors of potential CDW instabilities.
Chair: Yuxuan Wang
14:00 - 14:45 Senthil Todadri (Massachusetts Institute of Technology (MIT))
Correlations and topology in moire graphene materials
I will discuss my recent theoretical work on the interplay between band topology and strong correlations in moire graphene systems, and its implications for experiments.
14:45 - 15:30 Jian Kang (Florida State University, Tallahassee - National High Magnetic Field Laboratory)
Nonlocal interactions beyond Hubbard model in twisted bilayer graphene near magic angle
Recent experiments on twisted bilayer graphene have discovered a correlated insulator phase and neighboring superconductivity at certain fillings of the charge carriers. To understand the correlation effects, we build the localized Wannier states (WSs) for the four narrow bands around the charge neutrality point and construct the corresponding low energy tight binding model. Furthermore, we project the gate-screened Coulomb interaction onto the constructed localized WSs. Due to the nontrivial topological properties of the four narrow bands, the projected interaction becomes nonlocal and contains new terms beyond the cluster Hubbard model. These new terms lead to strong correlations between different lattice sites even without the hoppings. At one-quarter filling, the largely degenerate ground states are found to be SU(4) ferromagnetic in the strong coupling limit. The kinetic terms select the ground state in which the two valleys with opposite spins are equally mixed, with vanishing magnetic moment per particle. Our results suggest a fundamental difference between twisted bilayer graphene and other unconventional superconductors described by the Hubbard model.
Chair: Yong-Baek Kim
16:00 - 16:30 Saurabh Maiti (Concordia University)
Is the composite fermion state in graphene a Haldane's Chern insulator?
Graphene in the presence of a strong external magnetic field is a special attraction for investigations of the fractional quantum Hall (fQH) states with odd and even denominators of the fraction. Most of the attempts to understand graphene in the strong-field regime were made by exploiting the universal low-energy effective description of Dirac fermions emerging from the nearest-neighbor hopping model of electrons on a honeycomb lattice. We highlight that accounting for the next-nearest-neighbor hopping terms in doped graphene can lead to a unique redistribution of magnetic fluxes within the unit cell of the lattice. While this affects all the fQH states, it has a striking effect at a half-filled Landau-level state: it leads to a composite fermion state that is equivalent to the doped topological Chern insulator on a honeycomb lattice. At energies comparable to the Fermi energy, this state possesses a Haldane gap in the bulk proportional to the next-nearest-neighbor hopping and density of dopants. We propose experiments to detect this gap and the associated chiral boundary mode, and encourage cold-atom setups to test other predictions of the theory.
16:30 - 17:15 Hitesh Changlani (Florida State University, Tallahassee)
The mother of all states of the kagome quantum antiferromagnet
I will present aspects of our theoretical and numerical work in the area of frustrated magnetism [1,2], particularly in the context of the kagome geometry. This area has seen a flurry of research activity owing to several near-ideal material realizations. My focus will be on the existence of an exactly solvable point in the XXZ-Heisenberg model for the ratio of Ising to transverse coupling $J_z/J=-1/2$ [2]. This point in the phase diagram has "three-coloring" states as its exact quantum ground states and is macroscopically degenerate. It exists for all magnetizations and is the origin or "mother" of many of the observed phases of the kagome antiferromagnet. I will revisit aspects of the contentious (and experimentally relevant) Heisenberg case and provide a framework to understand multiple results in terms of three-coloring states [2,3]. [1] K. Kumar, H. J. Changlani, B. K. Clark, E. Fradkin, Phys. Rev. B 94, 134410 (2016). [2] H. J. Changlani, D. Kochkov, K. Kumar, B. K. Clark, E. Fradkin, Phys. Rev. Lett. 120, 117202 (2018). [3] H. J. Changlani, S. Pujari, C.M. Chung, B. K. Clark, Phys. Rev. B 99, 104433 (2019).
Poster session II (focus on even poster numbers)
Chair: Natalia Perkins
09:00 - 09:45 Hidenori Takagi (Max Planck Institute for Solid State Research Stuttgart)
Exotic spin-orbital entangled phases in 5d and 4d transition metal oxides
The exploration of novel phases of interacting electrons (correlated electrons) has long been a major stream of condensed-matter research. Many-body interactions among electrons give rise to a huge variety of phases, grouped into electron-solid, -liquid-crystal, -liquid and -gas states. The wealth of possibilities arises from a complicated interplay of lattice geometry, quantum effects and the multiple degrees of freedom of the electron (charge, spin and orbital). In the past, the two dominant areas of exploration have been the 3d transition-metal (TM) oxides and the 4f intermetallic compounds, but recently 5d TM oxides and related compounds have emerged as the next arena of correlated-electron physics. Significant new physics is expected due to the presence of a large spin-orbit coupling in heavy 5d elements, tying together the otherwise independent spin and orbital degrees of freedom. This can be of order 0.5 eV and is often larger than the crystal-field splitting of the orbital states, resulting in a spin-orbital-entangled state of correlated electrons. The nature of the spin-orbital entanglement depends significantly on the d-electron number and the chemical bonding, and it is anticipated that, in combination with electron correlations, a rich variety of novel electronic phases are waiting to be discovered. To name just a few, the proposed phases include Kitaev quantum spin liquids, correlated topological semimetals, excitonic magnets and multipolar-ordered states. In this talk, I will present our recent exploration of such exotic faces of spin-orbital entangled matter in 5d (and 4d) transition metal oxides. Topics will include the following. (I) Spin-orbital quantum liquid on the honeycomb lattice in 5d$^5$ H(D)$_3$LiIr$_2$O$_6$ [1,2]. (II) The non-magnetic $J_{\rm eff}$ = 0 Mott insulating state [3] in 4d$^4$ Ru$^{4+}$ honeycomb oxides and pressure-induced excitonic magnetism. (III) Multipolar ordering in 5d$^1$ cubic Cs(Rb)$_2$TaCl$_3$ [4]. References: [1] K. Kitagawa, T. Takayama, Y. Matsumoto, A. Kato, R. Takano, Y. Kishimoto, S. Bette, R. Dinnebier, G. Jackeli, and H. Takagi, Nature 554, 341–345 (2018). [2] H. Takagi, T. Takayama, G. Jackeli, G. Khaliullin, and S. N. Nagler, Nature Reviews Physics 1, 264–280 (2019). [3] G. Khaliullin, Phys. Rev. Lett. 111, 197201 (2013). [4] H. Ishikawa, T. Takayama, R. Kremer, J. Nuss, R. Dinnebier, K. Kitagawa, K. Ishii and H. Takagi, submitted, arXiv:1807.08311.
09:45 - 10:30 Yong-Baek Kim (University of Toronto)
Magnetic field induced novel phases in Kitaev magnets
There has been great interest in magnetic field induced quantum spin liquids and other competing phases in Kitaev materials. In particular, the discovery of a neutron scattering continuum and a half-quantized thermal Hall effect in α-RuCl$_3$ has prompted a number of theoretical studies on the effect of a magnetic field in various local moment models with dominant Kitaev interaction. We provide classical and semiclassical analyses of some of such models for large system sizes in two dimensions and compare the results with numerical computations of quantum models for small system sizes. We show the emergence of magnetic orders with fairly large unit cells and discuss a possible relation to quantum spin liquids. We also discuss some preliminary results on quantum models and thermal Hall conductivity in the context of recent experiments on α-RuCl$_3$.
Chair: Saurabh Maiti
11:00 - 11:45 Roderich Moessner (Max Planck Institute for the Physics of Complex Systems)
Intermediate energy spin dynamics
11:45 - 12:30 Dam Son (University of Chicago)
Hydrodynamics of composite fermions
We develop a hydrodynamic theory that allows one to compute various physical properties of the quantum Hall states near filling factors 1/2 and 1/4.
Chair: Revaz Ramazashvili
14:00 - 14:45 Nicola Poccia (Leibniz Institute for Solid State and Materials Research (IFW) Dresden)
Current progress in high temperature superconducting nano-devices using novel "van der Waals technologies"
The strong variations in superconducting critical temperature among complex cuprate perovskites that have identical hole density indicate the importance of lattice modulation effects. The one-dimensional incommensurate lattice modulation of Bi$_2$Sr$_2$CaCu$_2$O$_{8+y}$, where the average atomic positions are perturbed beyond the unit cell, is an ideal setting in which to study the interplay between superconductivity and one-dimensional incommensurate lattice fluctuations. Here we report nano X-ray diffraction imaging of the incommensurate lattice structure in Bi$_2$Sr$_2$CaCu$_2$O$_{8+y}$ (T$_c$ = 90 K) from the bulk down to a 2 u.c. van der Waals heterostructure. Despite the atomically thin heterostructure, the superconducting critical transition temperature is identical to that of the bulk devices, and the long- and short-range incommensurate lattice modulations are still detected. The incommensurate lattice modulations are spatially correlated and fluctuate with respect to the average wavevector, showing a strain dependence on the substrate. We correlate the structural and transport data by measuring the Hall effect down to two unit cells. We establish quantitative agreement between theory and data through the observation of the Hall sign reversal far above T$_c$ of even the optimally doped bulk crystal. We have thus bridged the gap between low temperature superconducting films (where Hall sign reversal occurs above T$_c$ due to superconducting fluctuations) and cuprates (where Hall sign reversal was previously believed to occur below T$_c$ and as the result of vortex dynamics). The newly developed techniques that allow the atomically thin incorporation of BSCCO into van der Waals heterostructures open the possibility of a new generation of extremely sensitive nanodevices for the search and study of high temperature superconductivity and its interplay with other van der Waals crystal systems.
14:45 - 15:15 Alex Levchenko (University of Wisconsin, Madison)
Mesoscopic and anomalous Nernst effect in disordered superconductors
We will discuss mesoscopic and anomalous quantum transport effects at the onset of the superconducting transition, focusing on the observed large Nernst-Ettingshausen signal in disordered thin films. (i) In the vicinity of the transition, as the Ginzburg-Landau coherence length of preformed Cooper pairs diverges, short-ranged mesoscopic fluctuations are equivalent to local fluctuations of the critical temperature. As a result, the dynamical susceptibility function of pair propagation acquires a singular mesoscopic component, and consequently superconducting correlations give rise to enhanced mesoscopic fluctuations of thermodynamic and transport characteristics. In contrast to disordered normal metals, the root-mean-square value of mesoscopic conductivity fluctuations ceases to be universal and displays strong dependence on dimensionality and temperature, and under certain conditions can exceed its quantum normal state value by a large factor. Interestingly, we find a different universality, as the magnetic susceptibility, conductivity, and transverse magnetic thermopower coefficients all display the same temperature dependence. We will discuss an enhancement of mesoscopic effects in the Seebeck thermoelectricity and Hall conductivity fluctuations as mediated by emergent superconductivity. (ii) We will also discuss how skew scattering due to spin-orbit interaction gives rise to a new fluctuation-induced contribution to the Hall conductance and Nernst coefficient near the superconducting transition point.
15:15 - 15:45 Julia Link (Simon Fraser University)
Hydrodynamic transport in Luttinger semimetals
In the hydrodynamic regime it is possible to investigate the universal collision-dominated dynamics of the isolated electron fluid, while the couplings to the lattice and to impurities become secondary. An important transport property is the shear viscosity, which describes whether the electron fluid behaves laminar or turbulent. The ratio of shear viscosity to entropy is bounded from below and is an indicator of how strongly the system is interacting [Kovtun2005]. We determine the shear viscosity in Luttinger semimetals, which have a quadratic energy dispersion. In particular, we focus on the determination of the shear viscosity in the Luttinger-Abrikosov-Beneslavskii phase [Moon]. This phase occurs upon adding the long-range Coulomb interaction to the system, and it is an interacting, scale invariant, non-Fermi-liquid phase. Upon combining the Boltzmann formalism with an RG analysis, we find that the ratio of viscosity to entropy comes very close to the lower bound, which makes Luttinger semimetals a nearly perfect electron fluid. [Kovtun2005] P. K. Kovtun, D. T. Son, and A. O. Starinets, Phys. Rev. Lett. 94, 111601 (2005). [Moon] E.-G. Moon, C. Xu, Y. B. Kim, and L. Balents, Phys. Rev. Lett. 111, 206401 (2013).
16:15 - 16:45 Serguei Brazovskii (LPTMS-CNRS, Paris-Sud University)
From Chiral Anomaly to two-fluid hydrodynamics for electronic Vortices
Many recent experiments have addressed manifestations of electronic crystals, particularly the charge density waves, in nano-junctions, under the electric field effect, at high magnetic fields, together with real-space visualizations by STM and micro X-ray diffraction. This activity has returned interest to stationary or transient states with static and dynamic topologically nontrivial configurations: electronic vortices as dislocations, instantons as phase slip centers, and ensembles of microscopic solitons. Describing and modeling these states and processes calls for an efficient phenomenological theory which takes into account the degenerate order parameter, various kinds of normal carriers, and the electric field. Here we notice that the commonly employed time-dependent Ginzburg-Landau approach suffers from a violation of the charge conservation law, resulting in unphysical generation of particles, which is particularly strong for nucleating or moving electronic vortices. We present a consistent theory which exploits the chiral transformations, taking into account the principal contribution of the fermionic chiral anomaly to the effective action. The resulting equations clarify the partitions of charges, currents and rigidity among the subsystems of the condensate and normal carriers. On this basis we perform numerical modeling of a spontaneously generated coherent sequence of phase slips - the space-time vortices - serving for the conversion between the injected normal current and the collective one.
Closing remarks workshop week
seminar week - seminar room 1, 3 or 4
Seminar room 1
Quantum Matter with and without Quasiparticles
09:45 - 10:00 Scientific coordinators
Opening seminar week
Black board talks and discussions
10:30 - 11:00 Revaz Ramazashvili
11:00 - 11:30 Elliot Christou
11:30 - 12:00 Avi Klein
13:00 - 13:30 Alex Levchenko
13:30 - 14:00 Andrey V. Chubukov
14:00 - 14:30 Dmitrii L. Maslov
Seminar room 3 and 4
Welcome dinner of seminar week
10:30 - 11:30 James Analytis (University of California, Berkeley)
Vanishing Quasiparticles Across a Quantum Critical Point Connecting Two Fermi Liquids
The strange metal in unconventional superconductors is thought to be connected to an underlying quantum critical point. Although the phenomenology is consistent with this idea, the absence of measurements that show a diverging correlation length of a symmetry-breaking order parameter seems to contradict this connection. We show that the nature of the quantum critical point in a class of heavy fermion superconductors is not associated with any kind of symmetry breaking. Rather, it is a critical point connecting two unusual Fermi liquids that have well defined quasiparticles everywhere in the phase diagram except at the transition itself.
14:00 - 15:00 Leo Radzihovsky (University of Colorado, Boulder)
Fractionalized mobility in a vector gauge theory
Motivated by the prediction of fractonic topological defects in a quantum crystal, I will discuss a novel U(1) vector gauge theory description of fractons. The fractionalized mobility is simply imposed by gauge invariance. In a specific gauge, at low energies this vector gauge theory reduces to the previously studied fractonic symmetric tensor gauge theory.
15:30 - 16:30 Ilya Eremin (Ruhr University Bochum)
Knight Shift and Leading Superconducting Instability from Spin Fluctuations in Sr$_2$RuO$_4$
10:30 - 11:30 Liang Fu (Massachusetts Institute of Technology (MIT))
Supermetal
We study the effect of electron interaction in an electronic system with a high-order Van Hove singularity, where the density of states shows a power-law divergence. Owing to scale invariance, we perform a renormalization group (RG) analysis and find a nontrivial metallic behavior where various divergent susceptibilities coexist but no long-range order appears. We term such a metallic state a supermetal. Our RG analysis reveals the Gaussian fixed point and a nontrivial interacting fixed point, which draws an analogy to the $\phi^4$ theory. We further obtain a finite anomalous dimension at the interacting fixed point by a controlled RG analysis, thus establishing an interacting supermetal as a non-Fermi liquid. We discuss experimental evidence for supermetals in moiré materials based on graphene and transition metal dichalcogenides.
14:00 - 15:00 Aharon Kapitulnik (Stanford University)
Thermal Transport, Thermalization, and Possible Signatures of Quantum Chaos in Complex Crystalline Materials
Summer party / barbeque at MPIPKS
10:30 - 11:30 Subir Sachdev (Harvard University)
Theory of a Planckian metal
We present a lattice model of fermions with $N$ flavors and random interactions which describes a Planckian metal at low temperatures, $T \rightarrow 0$, in the solvable limit of large $N$. We begin with quasiparticles around a Fermi surface with effective mass $m^\ast$, and then include random interactions which lead to fermion spectral functions with frequency scaling with $k_B T/\hbar$. The resistivity, $\rho$, obeys the Drude formula $\rho = m^\ast/(n e^2 \tau_{\textrm{tr}})$, where $n$ is the density of fermions, and the transport scattering rate is $1/\tau_{\textrm{tr}} = f \, k_B T/\hbar$; we find $f$ of order unity, and essentially independent of the strength and form of the interactions. The random interactions are a generalization of the Sachdev-Ye-Kitaev models; it is assumed that processes non-resonant in the bare quasiparticle energies only renormalize $m^\ast$, while resonant processes are shown to produce the Planckian behavior.
14:00 - 15:00 Claudia Felser (Max Planck Institute for Chemical Physics of Solids)
Magnetic Weyl Semimetals
Claudia Felser, Johannes Gooth, Kaustuv Manna, Enke Liu and Yan Sun, Max Planck Institute for Chemical Physics of Solids, Dresden, Germany. Topology, a mathematical concept, has recently become a hot topic in condensed matter physics and materials science. One important criterion for the identification of a topological material is, in the language of chemistry, the inert-pair effect of the s-electrons in heavy elements, together with the symmetry of the crystal structure. Besides Weyl and Dirac fermions, new fermions can be identified in compounds via linear and quadratic 3-, 6- and 8-band crossings stabilized by space group symmetries. In magnetic materials the Berry curvature and the classical AHE help to identify interesting candidates. Magnetic Heusler compounds such as Co$_2$YZ, Mn$_3$Sn and Co$_3$Sn$_2$S$_2$ have already been identified as Weyl semimetals. The anomalous Hall angle even helps to identify materials in which a QAHE should be possible in thin films. Besides this k-space Berry curvature, Heusler compounds with non-collinear magnetic structures also possess real-space topological states in the form of magnetic antiskyrmions, which have not yet been observed in other materials.
Interaction meets topology
09:00 - 09:30 Kun Yang
09:30 - 10:00 Pavel Volkov
10:30 - 11:00 Inti Sodemann (MPIPKS)
11:00 - 11:30 Elio Koenig
Closing remarks seminar week and departure
Understanding solutions of the Dirac equation
In one of the lectures that I'm currently taking we encountered the Dirac equation. The general solution was given as $$\psi(x) = \sum_s \int \frac{d^3\mathbf{p}}{(2\pi)^3 \, 2\omega_p} \left[ a_s(p)\, u^s(p)\, e^{-ip\cdot x} + b_s^*(p)\, v^s(p)\, e^{+ip\cdot x} \right],$$ where $$u^s(p)=\begin{pmatrix} \sqrt{\sigma\cdot p}\, \xi^s \\ \sqrt{\bar\sigma\cdot p}\, \xi^s \end{pmatrix} \quad\text{and}\quad v^s(p) = \begin{pmatrix} \sqrt{\sigma\cdot p}\, \xi^s \\ -\sqrt{\bar\sigma\cdot p}\, \xi^s \end{pmatrix}.$$ Note that we defined $\sigma^\mu \equiv (1,\vec{\sigma})$ and $\bar\sigma^\mu \equiv (1,-\vec\sigma)$ and $s\in\{+,-\}$ for $$\xi^+ \equiv \begin{pmatrix}1\\0\end{pmatrix},~\xi^-\equiv\begin{pmatrix}0\\1\end{pmatrix}.$$
My problem is now that I'm a bit confused about how to evaluate the expression $\sqrt{p\cdot\sigma}\xi^s$. If I understood correctly we have $p\cdot \sigma = p_\mu\sigma^\mu$, which makes this expression a matrix. But how am I supposed to take the square root now? So the question boils down to explaining how one can evaluate the expression $\sqrt{\sigma \cdot p}\xi^s$.
Some notes: there was actually no proof given why $u^s(p)$ or $v^s(p)$ should solve the Dirac equation, only a statement that one could prove it using the identity $$(\sigma\cdot p)(\bar\sigma\cdot p)=p^2=m^2.$$ We were using the Weyl representation of the $\gamma$-matrices, if this is relevant.
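For reference, the omitted proof is short once the Weyl-representation gamma matrices are written in block form, $\gamma^\mu = \begin{pmatrix} 0 & \sigma^\mu \\ \bar\sigma^\mu & 0 \end{pmatrix}$ (an assumption consistent with the conventions stated above):

$$(\gamma^\mu p_\mu - m)\,u^s(p) = \begin{pmatrix} -m & \sigma\cdot p \\ \bar\sigma\cdot p & -m \end{pmatrix} \begin{pmatrix} \sqrt{\sigma\cdot p}\,\xi^s \\ \sqrt{\bar\sigma\cdot p}\,\xi^s \end{pmatrix} = \begin{pmatrix} \left[(\sigma\cdot p)\sqrt{\bar\sigma\cdot p} - m\sqrt{\sigma\cdot p}\right]\xi^s \\ \left[(\bar\sigma\cdot p)\sqrt{\sigma\cdot p} - m\sqrt{\bar\sigma\cdot p}\right]\xi^s \end{pmatrix} = 0,$$

because $\sigma\cdot p$ and $\bar\sigma\cdot p$ commute (both are polynomials in $\vec p\cdot\vec\sigma$), so $(\sigma\cdot p)\sqrt{\bar\sigma\cdot p} = \sqrt{\sigma\cdot p}\,\sqrt{(\sigma\cdot p)(\bar\sigma\cdot p)} = m\sqrt{\sigma\cdot p}$ by the stated identity, and likewise for the lower block. The same steps with the sign flipped give $(\gamma^\mu p_\mu + m)\,v^s(p) = 0$.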
field-theory dirac-equation
Sito
You can take the square root of a positive matrix pretty much in the same way that you take the square root of a positive number, via the spectral theorem. – lcv Mar 29 at 17:22
The more usual expressions are $$ u_\alpha = \frac{1}{\sqrt{2E(E+m)}} \begin{pmatrix} (E+m)\chi_\alpha \\ (\sigma\cdot {\bf k})\chi_\alpha \end{pmatrix}, \qquad v_\alpha = \frac{1}{\sqrt{2|E|(|E|+m)}} \begin{pmatrix} -(\sigma\cdot {\bf k})\chi_\alpha \\ (|E|+m)\chi_\alpha \end{pmatrix}. $$ Alternatively one can use the rapidity $s$, with ${\bf k}= m\,{\bf n}\sinh s$ and $E = m\cosh s$, and write $$ u_\alpha= \frac{1}{\sqrt{2m(E+m)}}\begin{pmatrix}(E+m)\chi_\alpha \\ (\sigma\cdot {\bf k})\chi_\alpha\end{pmatrix} =\begin{pmatrix} \cosh(s/2)\,\chi_\alpha \\ \sinh(s/2)\,(\sigma\cdot {\bf n})\chi_\alpha\end{pmatrix}. $$ The second way of writing $u_\alpha$ is designed to stress that the eigenstate depends only on the geometry of the Lorentz boost, and not on the rest mass $m$.
I had a look at P&S ch 3 and he explains what he means in eq 3.49, the second line of which coincides with my "alternatively" expression. The proof is also given there, as he starts from an obvious solution and boosts it. My comment about "not being positive" was mistaken because I thought that $\sigma\cdot {\bf p}$ meant the operator with eigenvalue $\pm|{\bf p}|$ rather than the operator which is written the same, but has positive eigenvalues $E\pm |{\bf p}|$. In the second, $\sigma$ is not the three-vector but is meant to mean $(1,{\bf \sigma})$. You did say this in your question, but I missed it. Sorry for misunderstanding.
mike stone
I'm having a hard time relating what you wrote here to the solutions in my post... Could you maybe elaborate on how to switch between them? – Sito Mar 29 at 19:47
@Sito: I am having a hard time figuring out what your lecturer meant. My "usual" expressions have a direct physical meaning. My $\chi_\alpha$ is the particle spin in its rest frame, and the rest of the spinor solution is the result of its boost from rest to momentum ${\bf k}$. Your lecturer seems to be abandoning physics for a kind of formal "simplicity" which is not actually simple. Notice that lcv's comment does not apply, as $\sigma\cdot {\bf k}$ does not have positive eigenvalues, so it does not have a square root. – mike stone Mar 29 at 22:24
Thank you for providing your form of the solutions. The thing is that we will be working with the one I presented in the question so I need to understand them... Peskin uses the same expression (Chapter 3, eq. 3.50) and writes as explanation "where it is understood that in taking the square root of a matrix, we take the positive root of each eigenvalue." but I honestly still don't really understand how this is supposed to work... – Sito Mar 30 at 15:21
@Sito. I don't have a copy of Peskin to hand. I'll look up what he writes when I get into my office, but again $\sigma\cdot {\bf p}$ does not have all positive eigenvalues, so it does not have "positive roots" – mike stone Mar 30 at 19:20
Thanks for checking. If you want to incorporate this into your answer I will gladly accept it later! – Sito Apr 2 at 19:14
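Following up on the comment thread: Peskin's prescription (take the positive root of each eigenvalue) can be made concrete numerically. Below is a minimal sketch; it assumes NumPy/SciPy are available, the momentum components are arbitrary illustrative numbers, and `sqrtm` computes the principal matrix square root:

```python
import numpy as np
from scipy.linalg import sqrtm

# Pauli matrices; conventions sigma^mu = (1, sigma), sigmabar^mu = (1, -sigma)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

m = 1.0
pvec = np.array([0.3, -0.4, 1.2])             # illustrative spatial momentum
E = np.sqrt(m**2 + pvec @ pvec)               # on-shell energy

p_dot_s = pvec[0]*sx + pvec[1]*sy + pvec[2]*sz
p_sigma = E*I2 - p_dot_s                      # p.sigma    = p_mu sigma^mu
p_sigbar = E*I2 + p_dot_s                     # p.sigmabar = p_mu sigmabar^mu

# Both matrices are Hermitian with positive eigenvalues E - |p| and E + |p|,
# so the principal (positive-eigenvalue) square root is well defined:
rt_s, rt_sb = sqrtm(p_sigma), sqrtm(p_sigbar)

xi = np.array([1, 0], dtype=complex)          # xi^+
u = np.concatenate([rt_s @ xi, rt_sb @ xi])   # u^+(p)

# Check the Dirac equation (gamma.p - m) u = 0 in the Weyl representation,
# where gamma.p has p.sigma and p.sigmabar in the off-diagonal blocks:
gamma_p = np.block([[np.zeros((2, 2)), p_sigma],
                    [p_sigbar, np.zeros((2, 2))]])
print(np.allclose(gamma_p @ u, m * u))        # True
```

Analytically, the same positive root has a closed form one can verify by squaring both sides: $\sqrt{\sigma\cdot p} = (\sigma\cdot p + m)/\sqrt{2(E+m)}$.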
Doing Laboratory Experiments on Quarks
Reference Sensation Constant (Units)
Touching ice $T^{\sf{b}} = 0$ (℃)
Touching steam $T^{\sf{c}} = 100$ (℃)
Not seeing the Sun $U^{\sf{d}} = 0$ (MeV)
Different people in different societies may have profoundly different ways of seeing things. So we make measurements to cope with perceptual variations. Mensuration is also a way to transcend personal sensory limitations. Overall, a systematic quantitative approach to observation is crucial for objectifying the description of sensation. Measurement techniques can be quite arbitrary to start, e.g. measurements of length began by referring to somebody's foot. But nonetheless observational methods have become very dependable because experimental physicists have invested an enormous effort in developing calibration standards and high precision techniques. For example, atomic clocks can be used to make time measurements that are good to about one part in $10^{14}$. By comparison, in 2013 the US economy was 17 trillion dollars or about $10^{15}$ cents. So physicists can be fussy in a way that is like counting every dime spent in the USA per year. When we speak of doing laboratory experiments, we mean that observations are being made and reported in this fastidious style.
For WikiMechanics, discussing laboratory practice starts with the reference sensations that are the benchmarks against which all perceptions are judged and recognized. These sensations are mathematically represented by constants, and sometimes the constants express calibration standards. See the accompanying table for examples, where $T$ denotes the temperature, $U$ is the internal energy, and b, c and d represent the bottom, charmed and down quarks. Numerical values for these constants are established by convention, without any claim of universal validity; they can be altered by collective agreement if expedient. Due to this variety of possibilities, a statement of measurement units is usually included with any complete experimental report. As measurement techniques become more refined, calibration standards are adjusted, and so these constants actually represent historical standards. For example, the internal energy of a down quark is almost always taken as zero, as indicated above. But precise observations of hydrogen show a tiny value of a few micro-electronvolts.
Measuring Quirks
Ice Constant | Freezing Point of Water | $T^{\sf{b}} = 0$ (℃) | (1-6)
Steam Constant | Boiling Point of Water | $T^{\sf{c}} = 100$ (℃) | (1-7)
Sun Constant | Mean Internal Energy of Down Quarks | ${\mathit{Ũ}}^{\sf{D}} = -27 \, \left( \sf{\text{µeV}} \right) \simeq 0 \, \left( \sf{\text{MeV}} \right)$ | (1-3)
Speech Compression - In LPC how does the linear predictive filter work on a general level?
Hi, I'm taking a multimedia systems course and I'm preparing for my exam on Tuesday. I'm trying to get my head around LPC compression on a general level, but I'm having trouble with what is going on with the linear predictive filter part. This is my understanding so far:
LPC works by digitising the analog signal and splitting it into segments. For each of the segments we determine the key features of the signal and try to encode these as accurately as possible. The key features are the pitch of the signal (i.e. the fundamental frequency), the loudness of the signal and whether the sound is voiced or unvoiced. Parameters called vocal tract excitation parameters are also determined, which are used in the vocal tract model to better model the state of the vocal tract which generated the sound. This data is passed over the network and decoded at the receiver. The pitch of the signal is used as input to either a voiced or unvoiced synthesiser, and the loudness data is used to boost the amplitude of this resulting signal. Finally the vocal tract model filters this sound by applying the LPC coefficients which were sent over the network.
In my notes it says that the vocal tract model uses a linear predictive filter and that the nth sample is a linear combination of the previous p samples plus an error term, which comes from the synthesiser.
Does this mean that we keep a running average of the last p samples at both the encoder and decoder? So that at the encoder we only transmit data that corresponds to the difference between this average and the actual signal?
Why is it a linear combination of these previous samples? My understanding is that we extract the loudness, frequency and voiced/unvoiced nature of the sound and then generate these vocal tract excitation parameters by choosing them so that the difference between the actual signal and the predicted signal is as small as possible. Surely an AVERAGE of these previous samples would be a better indication of the next sample?
If there are any holes in my understanding if you could point them out that would be great! Thanks in advance!
audio speech voice compression linear-prediction
LPC voice coders (starting with the old LPC10 standard, which seems to be the one you refer to here) are based on the source-filter model of speech production. Speech can be characterized by the following properties:
The raw sound emitted by the larynx (through vibration of the vocal folds, or just air flowing through it, the vocal folds being opened).
The transfer function of the filter achieved by the articulatory system, further filtering this raw sound.
Early LPC coders (LPC10) adopt the following model of those two steps:
The larynx emits either white noise, characterized by an amplitude $\sigma$; or a periodic impulse train, characterized by an amplitude $\sigma$ and a frequency $f_0$
The transfer function of the articulatory system is of the form $\frac{1}{1 - \sum_k a_k z^{-k}}$, and is thus entirely characterized by the coefficients $a_k$
The principle of early LPC coders (such as LPC10) is thus to estimate these parameters from chunks of the incoming audio signal; transmit these over the network; and have a sound generator reproduce a sound from these parameters on the receiver. Observe that in this process, nothing of the original audio samples is actually transmitted. To make a musical analogy, it's like listening to a piano performance, transcribing it, sending over the score, and getting someone to play it on the other end... The result on the other end will be close to the original performance, but only a representation has been sent over.
Does this mean that we keep a running average of the last p samples at both the encoder and decoder?
No, this is not how it works. On the encoder side, we run a process known as AR (autoregressive) estimation, to estimate the set of coefficients of the AR filter that best matches the spectral envelope of the input signal. This is an attempt at recovering the coefficients of the filtering that has been performed by the articulatory system.
These coefficients are sent over the network (along with pitch, the voiced/unvoiced flag, and loudness). The decoder uses those coefficients to filter a synthetic excitation signal, which can be either white noise (unvoiced frame) or a comb of periodic impulses (voiced frame). These coefficients are used in the following way on the decoder to recover the output signal $y(n)$:
$$y(n) = \text{excitation}(n) + \sum_k a_k \, y(n - k)$$
Observe that since this is an all-pole, IIR filter, the samples linearly combined are previous samples produced by the filter.
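As a concrete illustration, here is a minimal sketch of the decoder's synthesis step in Python with NumPy/SciPy. The coefficient values, frame length, sample rate and pitch below are made-up placeholders for illustration, not parameters of any actual standard:

```python
import numpy as np
from scipy.signal import lfilter

fs = 8000                       # sample rate in Hz (telephone-band speech)
frame_len = 180                 # samples per frame (placeholder value)

# Hypothetical parameters received for one voiced frame:
a = np.array([1.3, -0.6])       # AR coefficients a_1..a_p (illustrative, stable)
sigma, f0 = 0.1, 100.0          # amplitude and pitch frequency in Hz

# Voiced excitation: a periodic impulse train of amplitude sigma
period = int(fs / f0)
excitation = np.zeros(frame_len)
excitation[::period] = sigma

# All-pole synthesis y(n) = excitation(n) + sum_k a_k y(n-k),
# i.e. filtering by 1 / (1 - sum_k a_k z^{-k})
y = lfilter([1.0], np.concatenate(([1.0], -a)), excitation)
```

For an unvoiced frame, the excitation would instead be white noise, e.g. `sigma * np.random.randn(frame_len)`.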
So that at the encoder we only transmit data that corresponds to the difference between this average and actual signal?
There is no "averaging" and there is no transmission of a difference signal. The decoder is a sound synthesizer with some parameters ; and the encoder searches for the set of parameters for this synthesizer that most closely matches the input signal.
Why is it a linear combination of these previous samples?
Other options would indeed be possible here, but the advantages of using an autoregressive model are the following:
In this model we use an all-pole IIR filter to simulate the articulatory system. This is a good model, since it can capture strong spectral peaks with a small number of coefficients; and the articulatory system is indeed capable of emphasizing / attenuating narrow bands of frequencies (see formants). If we had used an all-zeros FIR filter, we would have needed many more coefficients to accurately capture the kind of filter responses that the articulatory system can achieve.
On the encoder end, the problem of estimating the filter coefficients is computationally efficient: it can be done using the Levinson-Durbin recursion on the first few lags of the autocorrelation (a sketch of this step follows the list below). More complex linear models (ARMA) or non-linear models are more computationally expensive or intractable.
On the decoder end, the synthesis is very simple indeed: we just need enough memory to keep track of the p previous samples emitted by the decoder.
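As referenced above, here is a compact sketch of the encoder-side Levinson-Durbin step in Python; the frame `x` is random placeholder data standing in for a windowed speech segment, and the model order is chosen arbitrarily:

```python
import numpy as np

def levinson_durbin(r, p):
    """Solve for AR coefficients a_1..a_p from autocorrelation lags r[0..p],
    for the predictor x_hat(n) = sum_k a_k x(n - k)."""
    a = np.zeros(p + 1)          # a[0] is unused; a[1..p] are the coefficients
    err = r[0]                   # prediction error power, updated each order
    for i in range(1, p + 1):
        acc = r[i] - np.dot(a[1:i], r[i - 1:0:-1])
        k = acc / err            # reflection coefficient for order i
        a_prev = a.copy()
        a[i] = k
        for j in range(1, i):
            a[j] = a_prev[j] - k * a_prev[i - j]
        err *= (1.0 - k * k)
    return a[1:], err

x = np.random.randn(180)                           # stand-in for one frame
r = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation lags r[0..]
a, err = levinson_durbin(r, p=8)                   # order-8 AR estimate
```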
Surely an AVERAGE of these previous samples would be a better indication of the next sample?
This is not true. For example, assume the input signal is a sine wave $y(n) = \sin(\omega n)$. The prediction $\hat{y}(n) = 2\cos(\omega)\, y(n - 1) - y(n - 2)$ has zero error, while $\hat{y}(n) = \frac{y(n - 1) + y(n - 2)}{2}$ is wrong most of the time (in particular, it does not capture the fact that the sine wave is strictly increasing half of the time and strictly decreasing the rest of the time, so $y(n)$ shouldn't be "between" the previous values). Speech signals are not sine waves of course, but you can squeeze much more modelling power out of a model with $p$ parameters (order-$p$ AR) than out of a model with zero (averaging).
Also, it's worth keeping in mind that there's a mathematical definition of "better" (minimizing the expected value of the squared of the error), which yields a procedure for finding the optimal value of the coefficients.
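A quick numerical check of the sine-wave example (the frequency value is arbitrary):

```python
import numpy as np

w = 0.3                                   # angular frequency, rad/sample
y = np.sin(w * np.arange(500))

ar2 = 2 * np.cos(w) * y[1:-1] - y[:-2]    # AR(2) prediction of y[2:]
avg = 0.5 * (y[1:-1] + y[:-2])            # two-sample average prediction

print(np.max(np.abs(y[2:] - ar2)))        # ~1e-15: exact up to rounding
print(np.max(np.abs(y[2:] - avg)))        # large, persistent error
```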
pichenettes
Related questions tagged audio, speech, voice, compression and linear-prediction:
Theory behind Linear Predictive Coding (LPC)
Linear predictive model convolution
Measure of harmonicity in a time-series
Get excitation from two signals
Linear Predictive Coding and Block Size
How to use linear predictive coding to compress voice diphone samples?
How do I go from LPC coefficients to a filter polynomial?
Linear Predictive Coding for general signals
Linear Predictive Coding example in MATLAB
Adaptive Predictive Coding: Transmit signal AND prediction coefficients
Temporal and spatial patterns and influencing factors of intangible cultural heritage: Ancient Qin-Shu roads, Western China
Yuan Liu1,
Mo Chen1 &
Yonggang Tian1
Heritage Science volume 10, Article number: 201 (2022)
The ancient Qin-Shu roads corridor is one of the most important cultural corridors in China. Throughout China's long historical and cultural evolution, the ancestors of today's inhabitants created a rich intangible cultural heritage along this route. Studying this intangible cultural heritage has important theoretical and practical significance for the protection and innovation of cultural heritage in the region. The purpose of this study is to analyze the spatial and temporal distribution characteristics of intangible cultural heritage along the ancient Qin-Shu roads and to explore the main factors affecting its distribution. The nearest neighbor index, kernel density estimation, standard deviation ellipse, location entropy, buffer analysis and other methods were used. The results show the following. (1) The types of intangible cultural heritage of the ancient Qin-Shu roads fall into three echelons. Traditional handicrafts are the most numerous; folk custom and traditional music are the second most numerous; and the remaining categories form the third echelon, among which traditional medicine and sports recreation competition are the scarcest. (2) The overall spatial distribution of intangible cultural heritage along the ancient Qin-Shu roads is agglomerated. Its distribution pattern places the central cities (Xi'an, Chengdu and Chongqing) at the core, with density gradually decreasing toward peripheral districts and counties. The core areas of different types of intangible cultural heritage differ significantly. (3) In the process of historical development, the intangible cultural heritage of the ancient Qin-Shu roads demonstrated an overall pattern of "three rises and three falls". That is, culture flourished during the Qin and Han Dynasties, the Sui, Tang and Five Dynasties, and the Ming and Qing Dynasties, while it developed slowly during the Wei, Jin, Southern and Northern Dynasties, the Song and Yuan Dynasties, and modern times. The overall trajectory of the center of gravity of intangible cultural heritage shifted from the northeast to the southwest. (4) Natural and human factors, such as topography, climate, transportation, traditional villages and population evolution, have an important impact on the spatial pattern of the intangible cultural heritage of the ancient Qin-Shu roads. The results of this study provide a useful reference for the theoretical research and practical management of intangible cultural heritage.
Intangible cultural heritage (hereafter referred to as ICH) is the inheritance and embodiment of living culture. It is also the carrier of national memory and a witness to history. The protection of intangible cultural heritage is essential for maintaining the diversity of human civilization and promoting its reproduction and sustainable development. The ancient Qin-Shu roads were an important channel for military, commercial and cultural exchanges in Chinese history and are among the earliest large-scale traffic relics preserved to date [1]. The cultures of eight historical periods, the prehistoric period, the pre-Qin period, the Qin and Han Dynasties, the Three Kingdoms period, the Southern and Northern Dynasties, the Sui, Tang and Five Dynasties, the Song and Yuan Dynasties, the Ming and Qing Dynasties and modern times, are fused along these roads [2,3,4]. They have accumulated and preserved many precious historical and cultural elements. A rich and unique culture has formed, and a large number of valuable ICH items have been developed and handed down, subtly shaping the production and life of the local people. In the context of globalization in recent years, foreign cultures have impacted local culture to some extent. The uniqueness and distinctiveness of the ICH of the ancient Qin-Shu roads are rapidly weakening and disappearing, and its protection and inheritance face significant challenges. Therefore, studying the ICH of the ancient Qin-Shu roads has extremely important theoretical value and practical significance.
With increasing attention to ICH around the world, the study of ICH has gradually become a popular topic in academia. Most early studies focused on basic aspects, such as the conceptual definition, typological classification, utilization and value evaluation of ICH [5,6,7,8,9]. In recent years, ICH studies have begun to delve into deeper issues, such as the theoretical construction of intangible cultural heritage; reflections and suggestions on intangible cultural heritage protection systems; experience in intangible cultural heritage protection and management [10,11,12]; the impact of intangible cultural heritage declarations on local economic and social development; intangible cultural heritage re-innovation; intangible cultural heritage tourism; and intangible cultural heritage education and teaching [13,14,15]. The scope of study can be as large as a whole continent or as small as a village [16, 17]. However, most scholars prefer to study a specific ICH item or the ICH of a particular region. The research methodology mainly uses literature induction, in-depth interviews, case studies, model simulations, sample surveys and other means [13,14,15,16,17,18,19,20]. In terms of research perspective, studies are mostly conducted from humanities fields, including sociology, ethnology and folklore, economics and management, tourism, and education. These research results are useful for the conservation and development of ICH in a holistic way, but they cannot provide sufficient guidance for ICH at a localized level.
There are few scientific studies of the ancient Qin-Shu roads. The sociological literature mainly focuses on fundamental work such as historical geography, including sorting out the historical context of the ancient Qin-Shu roads [2, 21,22,23], demonstrating route alignments [24,25,26,27,28], and exploring the remains of the routes [29]. For example, Yan examined all the routes of the ancient Qin-Shu roads one by one, with results that had a substantial impact at home and abroad [30]. Li published many works on Shu Road transportation lines and became an expert in Shu Road research [31]. Since the beginning of the twenty-first century, when relevant departments organized the application for "Shu Road" to be listed as a World Cultural Heritage site, related research on the ancient Qin-Shu roads gradually increased. This research has been expanded to include the composition and protection of route heritage [32,33,34,35], tourism development [36,37,38] and other aspects. For example, Shan researched the conservation and status of Shu Road and pointed out the challenges to preserving it [39]. Wang et al. sorted and classified the tourism resources of the ancient Shu Road and initially explored its tourism development and conservation model [40]. Luo advocates that the development of an ancient Qin-Shu roads tourism route can be used to drive industrial upgrading in towns along the route; by radiating from core cities, a small urban agglomeration dominated by "culture + tourism" can be constructed to realize the transformation of historical and cultural resources [41]. It can be seen that scholars mainly focus on the textual research, protection and development of material carriers and have not yet addressed ICH. In addition, the research methods are mainly literature induction and field surveys, rarely involving geographical methods and techniques.
At present, some studies have examined cultural heritage [42,43,44] and intangible cultural heritage [45] through spatial analysis of influencing factors based on geographic information systems (GIS). In a study by Yao et al. [42], 81.9% of Christian cultural heritage was concentrated in Europe. Liang et al. [43] concluded that the number of cultural heritage sites is much higher in Europe and North America than in Asia and the Pacific, and that heritage sites are affected by many factors, such as global political and economic crises. Xu et al. [44] found that China's intangible cultural heritage is distributed more in the economically developed eastern coastal areas and less in the west. Tian et al. [45] argued that the spatial and temporal distribution of national heritage conservation units in China is influenced by a combination of natural factors, such as topography, climate change, hydrology and rivers, and human factors, such as government governance, economic development, and population evolution.
However, there are few studies on the ICH of the ancient Qin-Shu roads, and there is no research on the temporal and spatial distribution. In addition, the spatial heterogeneity of the ICH of the ancient Qin-Shu roads is affected by many complex factors, and these factors have not been fully explored. In view of this, an empirical study of the ancient Qin-Shu roads was carried out by means of spatial analysis. The temporal and spatial pattern of ICH along the ancient Qin-Shu roads and its influencing factors were explored. To this end, this study performed the following steps:
We used statistical data to analyze the structural characteristics of ICH types along the ancient Qin-Shu roads.
Geographical methods and techniques such as the nearest neighbor index and kernel density estimation were used to analyze the distribution characteristics of the ICH of the ancient Qin-Shu roads.
According to the standard deviation ellipse data, the temporal distribution and evolution characteristics of the ancient Qin-Shu roads ICH were analyzed.
The influencing factors of ICH of the ancient Qin-Shu roads were explored by means of location entropy, buffer analysis, and layering.
This study attempts to develop a spatiotemporal analysis of ICH in the field of spatial information and further deepen the concept of ICH protection and development. In addition, this study can provide technical support for the preservation and development of ICH of the ancient Qin-Shu roads and contribute to cultural revitalization.
In ancient China, Shaanxi was called Qin, and Sichuan was called Shu. Therefore, historical documents refer to a series of very important communication lines connecting the Guanzhong Plain and Sichuan Basin in ancient China as "the ancient Qin-Shu roads". It was also known as Zhou Road, Qin Road, and Shu Road in different eras [29].
The ancient Qin-Shu roads start from the Guanzhong region, pass through Chang'an (now Xi'an) and Baoji, converge in Hanzhong, and then continue southward to Chengdu and Chongqing, with a total length of approximately 4000 km [46, 47]. They are a combination of humanity and nature: they pass through the Qinling Mountains, known as the "north–south division of China's geography", cross the Daba Mountains, and span the Yellow River and the Yangtze River, two of China's most important rivers. They are blessed with natural ecological conditions. As a carrier of cultural transmission, they have left the mark of specific eras in all places along the route, especially in folk crafts, poetry and literature, stories and legends, historical events and other ICH [35, 48]. They play an extremely important role in the protection of ICH.
The exact extent of the ancient Qin-Shu roads is not firmly established. In this paper, the delimitation of the ancient Qin-Shu roads determined by Li [24], Li [25], and Wang [49] is used as the research standard; this delimitation is generally recognized by the current academic community. It is believed that there are seven main routes on the ancient Qin-Shu roads: four ancient northern roads crossing the Qinling Mountains (Ziwu Road, Tangluo Road, Bao Inclined Road and Chencang Road) and three ancient southern roads crossing the Daba Mountains (Litchi Road, Micang Road and Jinniu Road). According to the route directions, parts of the ancient Qin-Shu roads belong to areas of Shaanxi, Sichuan and Chongqing in the current geographical administrative divisions. Therefore, the area covered by the ancient Qin-Shu roads in this paper includes 12 prefecture-level cities and 3 districts and counties in southern Shaanxi, northeastern Sichuan and northwestern Chongqing (Fig. 1).
The location of the Ancient Qin-Shu Roads and its topographic features
The data for this paper were collected from the China ICH network (http://www.ihchina.cn), Shaanxi Province ICH network (http://www.sxfycc.com), Sichuan Province ICH network (https://www.ichsichuan.cn) and the national and provincial representative ICH projects announced by the Chongqing Municipal People's Government as of 2021. The ICH projects successfully declared by Shaanxi, Sichuan and Chongqing Provinces along the ancient Qin-Shu roads were collected, including five sets of national and six sets of provincial ICH projects. Some of the ICH are both national ICH and in the provincial ICH list. To more accurately study the geographical distribution characteristics of the ancient Qin-Shu roads, in this study, overlapping items were integrated into one item, and no double counting was performed [50, 51]. Ultimately, 567 ICH items were collected. According to the national classification standards, the categories of ICH items before 2008 were corrected. The projects were uniformly classified into 10 types: folk literature, traditional music, traditional dance, traditional drama, quyi, sports recreation competition, traditional art, traditional handicraft, traditional medicine and folk custom [52].
The geographic coordinates of the ICH declaration units were obtained from the Baidu coordinate selection tool. For ICH projects that could not be accurately located, the location of the administrative center was selected for spatial positioning. Vector base maps of Shaanxi, Sichuan and Chongqing in the national basic Geographic Information System database were adopted as working base maps. ArcGIS 10.2 software was used to establish the ancient Qin-Shu roads ICH database and create the ancient Qin-Shu roads ICH distribution map. In terms of influencing factors, the geomorphic data of the study area were obtained from the Chinese 1:1,000,000 geomorphic spatial distribution data of the Resource and Environmental Science Data Center of the Chinese Academy of Sciences (http://www.resdc.cn/). River data and traffic data were obtained from the latest vector data provided by OpenStreetMap and Bigemap. The climatic and other natural zones were derived from the Atlas of China. The distribution data of the ancient Qin-Shu roads were collated according to the historical Atlas of Shaanxi Province and An Investigation of Qin and Shu ancient roads in Early China. The population data were obtained from the statistical yearbooks of Shaanxi, Sichuan and Chongqing (2020). The list of traditional villages was derived from the list of three groups of traditional villages in China published by the China Traditional Villages Network. For period divisions, we referred to Xu et al. [44] and Ju et al. [53] on the historical period divisions of cultural heritage. ICH can be divided into eight periods: the prehistoric period; the pre-Qin period; the Qin and Han Dynasties; the Wei, Jin, Southern and Northern Dynasties; the Sui, Tang and Five Dynasties; the Song and Yuan Dynasties; the Ming and Qing Dynasties; and modern times. In terms of natural and humanistic influencing factors, based on previous research results [45, 51, 54] and the special characteristics of the local environment of the ancient Qin-Shu roads, topography, climate, rivers, transportation, traditional villages, and population evolution were finally selected for analysis.
The nearest neighbor index
The nearest neighbor index is an important method for judging the spatial distribution type of intangible heritage sites. It is obtained as the ratio of the mean observed distance between each ICH point and its nearest neighbor to the expected mean distance under a random distribution. The calculation formula is as follows [45]:
$$NNI=\frac{\left\{\sum_{i=1}^{n}{Q}_{i}\right\}/n}{0.5/\sqrt{n/A}}$$
$Q_i$ represents the distance from each point to its nearest neighbor, n is the number of points, and A is the area of the region. NNI ≤ 0.5 is generally considered to indicate an agglomerated distribution, NNI ≥ 1.5 a uniform distribution, 0.5 < NNI ≤ 0.8 an aggregation–random distribution, 0.8 < NNI < 1.2 a random distribution, and 1.2 ≤ NNI < 1.5 a randomly discrete distribution.
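As an illustrative sketch (not the ArcGIS implementation itself), the index can be computed from projected point coordinates as follows; the coordinates below are random placeholders:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_index(points, area):
    """NNI = mean observed nearest-neighbor distance divided by the
    expected mean distance under randomness, 0.5 / sqrt(n / A)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d, _ = cKDTree(pts).query(pts, k=2)   # k=2: nearest point other than self
    observed = d[:, 1].mean()
    expected = 0.5 / np.sqrt(n / area)
    return observed / expected

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100_000, size=(567, 2))           # meters, placeholder
print(nearest_neighbor_index(pts, area=100_000 ** 2))  # close to 1 for random points
```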
Kernel density estimation is often used to determine the density of point elements in their surrounding field [27]. It clearly reflects the spatial aggregation trend of ICH items of different types and periods along the ancient Qin-Shu roads. The denser the intangible heritage sites are, the greater the kernel density f(x). The calculation formula is as follows [45, 55]:
$${f}_{n}\left(x\right)=\frac{1}{nh}\sum_{i=1}^{n}k\left(\frac{x-{x}_{i}}{h}\right)$$
$k(\cdot)$ is the kernel function, $h > 0$ is the search radius (bandwidth), and $x - x_i$ is the distance between the estimation point $x$ and the sample point $x_i$.
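A minimal Python sketch of the same idea, using SciPy's Gaussian KDE; note that SciPy chooses the bandwidth by a statistical rule, whereas the ArcGIS tool uses its own default search radius, so the two are not numerically identical. The site coordinates below are placeholders:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
sites = rng.normal(loc=[500_000.0, 3_500_000.0],   # projected coords, placeholder
                   scale=50_000.0, size=(567, 2))

kde = gaussian_kde(sites.T)                        # 2-D kernel density estimator

# Evaluate the density surface on a grid for mapping
gx, gy = np.meshgrid(np.linspace(sites[:, 0].min(), sites[:, 0].max(), 100),
                     np.linspace(sites[:, 1].min(), sites[:, 1].max(), 100))
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
```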
Standard deviation ellipse
The standard deviation ellipse can be used to express spatial characteristics such as the center of gravity position and transition direction of geographical elements [44, 56]. In this paper, standard deviation ellipses are used to analyze the direction of migration of the center of gravity of ICH resources across different dynasties. The calculation formula is as follows [57]:
$$\bar{X}=\frac{\sum_{i=1}^{n}{x}_{i}}{n}, \qquad \bar{Y}=\frac{\sum_{i=1}^{n}{y}_{i}}{n}$$
$x_i$ and $y_i$ represent the coordinates of the intangible heritage points, and n is the total number of intangible heritage points in a given period.
Location entropy
Location entropy is an indicator used to measure the degree of concentration of a factor [58]. In this paper, it is used to compare the proportion of ICH items in each statistical range with the proportion of the area of that range in the total research area. The calculation formula is as follows [59]:
$$R=\frac{{p}_{i}/\sum_{i}^{n}{p}_{i}}{{m}_{i}/\sum_{i}^{n}{m}_{i}}$$
R expresses the advantage in the number of ICH items in each statistical range relative to the average level of the research area, and the index i ranges from 1 to n over the statistical units. When R > 1, the number of ICH items in the statistical range is higher than the average level of the research area; when R = 1, it equals the average level; and when R < 1, it is lower than the average level.
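The computation itself is a one-liner; the sketch below uses the terrain-class counts and area shares reported later in the Results section as illustrative inputs (small differences from the published R values may arise from rounding or from the exact totals used in the original analysis):

```python
import numpy as np

def location_entropy(counts, area_shares):
    """Location quotient R per zone: the zone's share of ICH items
    divided by its share of the total study area."""
    counts = np.asarray(counts, dtype=float)
    shares = np.asarray(area_shares, dtype=float)
    return (counts / counts.sum()) / (shares / shares.sum())

counts = [283, 114, 115, 37, 12, 0, 0]                       # ICH per terrain class
area_shares = [7.35, 9.72, 1.68, 21.84, 31.53, 12.61, 0.11]  # % of study area
print(location_entropy(counts, area_shares).round(2))
```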
Buffer analysis is an analytical method used to determine the proximity of different geographical elements [60, 61]. This paper takes rivers, transportation routes and traditional villages as the analysis objects and intangible cultural heritage projects as the adjacent objects. First, buffer ring layers were created at set distances around rivers, transportation routes and traditional villages, and then the number of ICH items in each buffer zone was calculated by intersection analysis. The calculation formula is as follows:
$${D}_{(x,y)}=\sqrt{{\sum }_{i=1}^{n}{\left[{w}_{i}\left({x}_{i}-{y}_{i}\right)\right]}^{2}}$$
D (x, y) represents the distance between samples x and y, n is the characteristic dimension, and xi and yi are the i-th attributes of samples x and y.
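In practice, this step can be reproduced outside ArcGIS with GeoPandas; the sketch below assumes hypothetical shapefiles of ICH points and rivers, both already in a projected CRS measured in meters (the file names are placeholders):

```python
import geopandas as gpd

ich = gpd.read_file("ich_sites.shp")              # placeholder file names
rivers = gpd.read_file("rivers_level4plus.shp")

prev = 0
for dist in (1_000, 3_000, 10_000):               # 1 km, 3 km, 10 km
    zone = rivers.buffer(dist).unary_union        # merged buffer polygon
    n = ich.within(zone).sum()                    # cumulative count inside
    print(f"ring out to {dist / 1000:.0f} km: {n - prev} items")
    prev = n
```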
Type structure and temporal and spatial pattern characteristics of the ancient Qin-Shu Road ICH
Scale and type structure
To date, 567 national and provincial ICH items have been registered along the ancient Qin-Shu roads, of which 116 are national (20.5%) and 451 are provincial (79.5%). According to the classification standard of the "National ICH List", the ICH along the ancient Qin-Shu roads is divided into 10 categories. By share of the total, these fall into three echelons. The first echelon is traditional handicraft (180 items), the largest of all types, accounting for approximately 31.7% of the total. The second echelon comprises folk custom and traditional music, with 81 and 66 items, accounting for 14.3% and 11.6% of the total, respectively. The third echelon comprises traditional art (52 items), traditional drama (43 items), traditional dance (35 items), quyi (35 items), folk literature (29 items), traditional medicine (27 items), and sports recreation competition (19 items), accounting for 9.2%, 7.6%, 6.2%, 6.2%, 5.1%, 4.8% and 3.4% of the total, respectively. The ICH of the ancient Qin-Shu roads presents the structural characteristics of traditional handicraft as the main body, with folk custom and traditional music as supporting types (Fig. 2). The traditional handicraft, folk custom and traditional music items are rich in quantity and offer a strong sense of experience and potential for development and utilization, and they can become key types for the cultural tourism development of the ancient Qin-Shu roads. Traditional drama, traditional dance, quyi, folk literature, traditional medicine and sports recreation competition are relatively scarce, and the protection and cultural excavation of these projects need to be further strengthened.
Type structure features of intangible cultural heritage in the Ancient Qin-Shu Roads
We further calculated the number of ancient Qin-Shu roads ICH items at the regional level (Table 1). In terms of regional counts, Shaanxi had the largest number of ICH, with 254 items, accounting for 44.8% of the total. There were 189 ICH items in Sichuan, accounting for 33.3% of the total. Chongqing had the fewest, with 124 items, accounting for 21.9% of the total. In terms of type, the region with the most traditional music was Sichuan, with 30 items, and the region with the least was Chongqing, with only 6 items. Traditional dance had the most items in Sichuan (17). The regions with the most traditional drama ICH were Shaanxi and Sichuan, with 17 items in each. The highest number of quyi items was in Sichuan (17), and the lowest was in Chongqing (4). Sports recreation competition did not vary much among the three provinces and cities, with low numbers in all of them. Shaanxi had the most traditional handicraft (89 items), while Sichuan and Chongqing were similar (51 and 40 items, respectively). Traditional medicine was less common in all three regions, and the folk custom ICH of Shaanxi (53 items) far outnumbered those of Sichuan and Chongqing.
Table 1 Numbers-types of intangible cultural heritage in the Ancient Qin-Shu Roads
On the whole, Shaanxi showed traditional handicraft and folk custom as the main types, Sichuan showed traditional handicraft and traditional music as the main types, and Chongqing showed traditional handicraft as the main type. The distribution characteristics of ICH types in these three places show that the spatial distribution of ICH items is closely related to the origins of regional civilization and culture.
From the perspective of traditional handicraft, Shaanxi, as the ancient capital of 13 dynasties, has always been an important cultural and economic center of China. With a large population and easy access to its cities, its restaurant culture flourished. In addition, there have been few wars in the region since ancient times, allowing skills to be passed down in an orderly manner and providing fertile ground for a rich variety of traditional handicrafts. In terms of folk customs, Shaanxi was the location of the capitals of the Zhou, Qin, Han and Tang Dynasties. The ritual systems of these dynasties laid the cornerstone of traditional Chinese etiquette, and the concept of a ritual system is deeply rooted in the hearts of the people. Therefore, Shaanxi folk customs mainly focus on sacrificial activities and festivals; examples include the Yan Emperor sacrificial ceremony and the Lou Guan Tai worship Lao Tze etiquette.
The humid climate, fertile land and developed water system in Sichuan provide good conditions for traditional handicrafts such as sericulture and grain brewing, producing a large number of traditional handicraft ICH items for weaving and brewing. For example, Shu brocade weaving, Jiannanchun traditional brewing and Pi County bean paste making are still practiced in traditional ways. In terms of traditional music, Sichuan has a dense network of rivers and developed shipping traffic. Boatmen and laborers used music to express their emotions; as a result, a large number of traditional music ICH items were produced, such as river songs and Chuanjiang work songs. In addition, Qingcheng Mountain in Sichuan is the birthplace of Taoism in China and has a flourishing Taoist culture. It has produced religious traditional music such as the ancient music of the Qingcheng Cave Scripture and Chengdu Taoist music.
Chongqing is located at the confluence of the Yangtze and Jialing Rivers, a meeting point for materials. The wharf culture is flourishing and has formed a wealth of traditional handicraft related to food. Examples include Chongqing hot pot, Wu wonton traditional techniques, and Liangping Zhang duck traditional handicraft.
Spatial distribution characteristics
Spatial distribution types
According to the calculation formula of the NNI, the Average Nearest Neighbor tool in the ArcGIS 10.2 Spatial Statistics toolbox was used to process each intangible heritage type and the whole set. The NNI of ICH resources was obtained (Table 2); the NNI value for the ancient Qin-Shu roads ICH was 0.1857, less than 0.5, indicating that the ICH of the ancient Qin-Shu roads belongs to the agglomerated distribution type.
Table 2 Nearest neighbor index of intangible cultural heritage in the Ancient Qin-Shu Roads
Different types of ICH resources differ in their distribution patterns. The nearest neighbor index of traditional drama, quyi, traditional handicraft, and traditional medicine was 0.5 or below, showing a clear spatial aggregation. The nearest neighbor index of folk literature, traditional music, traditional dance, sports recreation competition, traditional art, and folk custom resources was between 0.5 and 0.8, tending toward an aggregation–random distribution. On the whole, the ICH of the ancient Qin-Shu roads shows an agglomerated spatial distribution. However, the degree of agglomeration varies among types, and agglomerated and agglomerated–random distributions coexist. The formation of this distribution feature is significantly related to the characteristics of each type of ICH, the difficulty of its dissemination and other factors.
Spatial density distribution characteristics
The Kernel Density tool in the ArcGIS 10.2 Spatial Analyst module was used to estimate the kernel density distribution of the ancient Qin-Shu roads intangible heritage resources as a whole and by type (Figs. 3, 4). As a whole, the intangible heritage resources of the ancient Qin-Shu roads form three high-density circles and five sub-high-density circles. The high-density circles are Xi'an, Chengdu and Chongqing. The sub-high-density circles are located in Baoji, Hanzhong, Ankang, Mianyang, and Dazhou-Liangping Counties. Overall, the provincial capital cities are the cores, and other cities and counties are evenly distributed. As a living culture, ICH is often diffused through the exchange of people and trade, thus creating a radiation zone. The distribution of ICH along the ancient Qin-Shu roads may be related to the large population bases and frequent production activities in the capital cities of Xi'an, Chengdu and Chongqing.
Kernel density of intangible cultural heritage in the Ancient Qin-Shu Roads
Kernel density of different types intangible cultural heritage in the Ancient Qin-Shu Roads
From the perspective of type distribution, folk literature resources formed four core areas in Xi'an-Baoji, Hanzhong and Dazhou, while Chongqing, Chengdu, Shenyang, and Bazhong formed secondary core areas. Traditional music was widely distributed in the ancient Qin-Shu roads area, especially along the southern section, with three high-density core areas formed in Chongqing, Chengdu and Xi'an. Traditional dance and traditional drama were fewer in number and more scattered along the ancient Qin-Shu roads; the former had Baoji and Chongqing as high-density core areas, and the latter had a high-density core area in Chongqing. Quyi had Xi'an-Baoji, Chengdu and Chongqing as high-density core areas and was also densely distributed in Hanzhong and Ankang. Sports recreation competition and traditional medicine were the lowest in quantity and the highest in concentration. The former was distributed in Xi'an-Baoji, Chengdu, Mianyang, Dazhou, Chongqing and Liangping County, with Xi'an as the high-density core area. The latter formed three core areas in Xi'an, Chengdu, and Chongqing and secondary core areas in Hanzhong and Mianyang, with no distribution of these resources in the rest of the region. Traditional art had core areas in Xi'an-Baoji, Chengdu and Chongqing and showed an uneven distribution, with more in the east and less in the west. Traditional handicraft was the most abundant ICH type on the ancient Qin-Shu roads and was widely distributed across 15 cities, counties and districts; Xi'an-Baoji, Hanzhong, Chengdu, and Liangping County formed five core areas, and Chongqing, Ankang, Mianyang, and Nanchong formed secondary core areas. The main representatives of traditional handicraft on the ancient Qin-Shu roads were the techniques for making folk delicacies such as Tong Sheng Xiang mutton and bread pieces in soup, Jiannanchun Chiew brewing, Pixian doubanjiang and Fuling pickle, as well as Xiqin embroidery, Hanzhong rattan weaving, Sichuan figured satin weaving, Chengdu lacquer art and Liangping bamboo mats, which were located mainly in the above areas. Folk custom had the highest distribution density in Xi'an-Baoji, Mianyang, and Chengdu and formed secondary core areas in Dazhou, Chongqing, and Liangping County, with a wider overall distribution.
The above characteristics show that different types of ICH resources have strong regional and type differences in spatial distribution. The internal characteristics and inheritance degree of different types of ICH resources also have an impact on their spatial distribution patterns, which is also an important reason for the formation of the spatial distribution pattern of ICH.
Analysis of the time series pattern
The long history of the region has created rich ICH along the ancient Qin-Shu roads. In terms of the proportion of ICH, the distribution along the ancient Qin-Shu roads was uneven across historical periods, with significant fluctuations. The overall distribution shows a pattern of three rises and three falls, peaking in the Qin and Han Dynasties; the Sui, Tang and Five Dynasties; and the Ming and Qing Dynasties, while falling into troughs in the Wei, Jin, Southern and Northern Dynasties; the Song and Yuan Dynasties; and modern times.
The Qin, Han, Sui and Tang Dynasties were the most prosperous and powerful periods in Chinese history; Shaanxi became the political, economic and cultural center during these periods, resulting in a rich and diverse ICH. The Ming and Qing Dynasties developed on the basis of previous historical periods, thereby inheriting and accumulating a wealth of ICH. As an ancient road network connecting Shaanxi with the surrounding areas, the ancient Qin-Shu roads naturally gave birth to numerous ICH. In the Wei, Jin, Southern and Northern Dynasties, the Song and Yuan Dynasties and modern times, the ancient Qin-Shu roads saw frequent wars and regime changes that undermined economic stability. These events had a negative impact on the transmission and dissemination of ICH, so the number of ICH items from these periods is low.
According to the proportions of ICH types in each historical stage (Table 3), traditional music and folk literature accounted for the largest proportions of ICH resources in the prehistoric period, at 28.57% and 23.81%, respectively, for a total of 52.38%. In the pre-Qin period, traditional handicraft accounted for an absolute majority, at 42.50%. In the Qin and Han Dynasties, traditional music had the highest proportion (27.78%), followed by traditional handicraft (22.22%), the two together accounting for 50.00%. The proportions of traditional art and traditional handicraft in the Wei, Jin, Southern and Northern Dynasties were the largest and equal, each accounting for 30.77%. In the Sui, Tang and Five Dynasties, traditional handicraft accounted for the highest proportion at 23.64%, with traditional music and traditional drama in second place at 16.36% each; the three types together account for 56.36%. In the Song and Yuan Dynasties, traditional music, traditional handicraft and folk custom were the most common, accounting for 17.24% each and 51.72% in total. In the Ming and Qing Dynasties, traditional handicraft became the absolute leader at 34.80%, followed by traditional art at 10.80% and traditional drama at 9.60%. In modern times, traditional handicraft has remained the main body, accounting for as much as 50%.
Table 3 Types and proportion of intangible cultural heritage in the Ancient Qin-Shu Roads
On the longitudinal scale of the historical development of ICH types, folk literature accounted for its highest proportion, 44.00%, in the Ming and Qing Dynasties, followed by the prehistoric period and the Qin and Han Dynasties, each accounting for 20.00%; folk literature ICH from these three periods accounted for 84.00% of the total number of such items. Traditional music was highest in the Ming and Qing Dynasties, the Qin and Han Dynasties, and the Sui, Tang and Five Dynasties, accounting for 26.15%, 23.08% and 13.84%, respectively, and 63.08% in total. The proportions of traditional dance, traditional drama, quyi and traditional art in the Ming and Qing Dynasties were 47.50%, 60.00%, 62.50% and 56.25%, respectively. In the Ming and Qing Dynasties, sports recreation competition accounted for 47.37%, followed by modern times at 26.32%, together accounting for 73.69%. Traditional handicraft appeared mainly in the Ming and Qing Dynasties, followed by modern times and the Qin and Han Dynasties, together accounting for 78.66% of the total.
On the whole, the number of traditional handicrafts has been increasing since ancient times; two notable periods of increase occurred during the pre-Qin period and the Ming and Qing Dynasties. During the first of these two periods, the pre-Qin period, the quantity of traditional handicraft increased by 37.74%. During the second, the Ming and Qing Dynasties, the quantity of traditional handicraft increased by 17.7% compared with the previous historical period. Because traditional handicraft ICH is integrated with daily life, the number of traditional handicraft ICH items has increased as the population has grown. The numbers of folk literature, traditional dance and folk custom ICH decreased with the evolution of history, from 23.81%, 14.29% and 19.05% in the prehistoric period to 2.00%, 2.00% and 2.00% in modern times, respectively. This may be related to the gradual disappearance of the carriers and functions of these ICH.
Changes to the center of gravity
On the whole, the standard deviation ellipse rotation angle of the ICH resources is 35.2°, showing that the ICH resources of the ancient Qin-Shu roads present a long and narrow northeast–southwest pattern. The elliptical center of the standard deviation of ICH resources is located in Nanjiang County, Bazhong city, Sichuan Province (32.083°N, 106.888°E), 655.49 km east of the center of gravity of China (36.03°N, 103.40°E). In terms of the center of gravity and its evolutionary trends across historical stages, the ICH of the eight periods of the ancient Qin-Shu roads presents a trend of northwest–southwest–south–northeast–southwest–southeast (Fig. 5). The Qin and Han Dynasties served as the boundary: before them, ICH resources in all periods were distributed mostly in Shaanxi; after them, ICH resources in all historical periods were distributed mostly in Sichuan. Further analysis of the position of the intangible heritage centroid in each period shows that the center of ICH resources in prehistoric times was located in Nanjiang County, Bazhong city, Sichuan Province (32.565°N, 107.087°E). It was transferred to Nanzheng (32.839°N, 106.947°E) in Hanzhong city, Shaanxi Province, during the pre-Qin period; to Nanjiang (32.620°N, 106.749°E) in Bazhong city, Sichuan Province, during the Qin and Han Dynasties; to Jiange County (31.826°N, 105.618°E) in Guangyuan city, Sichuan Province, during the Wei, Jin, Southern and Northern Dynasties; and to Bazhong in Sichuan during the Sui, Tang and Five Dynasties. From the Song and Yuan Dynasties and the Ming and Qing Dynasties to modern times, the center of ICH resources remained in the Bazhong urban area of Sichuan Province (32.1625°N, 106.717°E; 31.963°N, 106.609°E; 31.995°N, 106.939°E; 31.835°N, 107.282°E). The center of gravity moved 467 km over the eight periods.
Gravity center and direction of intangible cultural heritage in the Ancient Qin-Shu Roads
In general, the historical track of ICH of the ancient Qin-Shu roads is the development path of "Northeast to southwest". The shift of the center of gravity of ICH of the ancient Qin-Shu roads to the southwest after the Sui and Tang dynasties may be related to the beginning of the southward shift of the political and economic center of China during this period. This result is also similar to the evolution of Chinese tangible cultural heritage in historical periods [44].
The influencing factors of ICH spatial distribution along the ancient Qin-Shu roads
Influence of topography on the distribution of ICH
The ancient Qin-Shu roads are located on the second topographic step of China, forming a geomorphic pattern of three rivers and two mountain ranges: the Guanzhong Plain, Weihe Basin and Qinling Mountains to the north; the Hanshui Valley and Danjiang Plain in the center; and the Daba Mountains and Sichuan Basin to the south. The overall landforms are complex and varied, comprising plains, plateaus, hills, and small, medium, large and great undulating mountain ranges, accounting for 7.35%, 9.72%, 1.68%, 21.84%, 31.53%, 12.61% and 0.11% of the area, respectively. Since the areas of the different terrain types differ considerably, this study adopts location entropy for the analysis. Location entropy measures the spatial distribution of a regional element and reflects its degree of agglomeration; the higher the value, the higher the agglomeration level. After overlaying the ICH map and the terrain type map (Fig. 6a), the location entropy results show that there are 283 ICH items on the plains along the ancient Qin-Shu roads, with a location entropy of 6.05; 114 items on plateaus and 115 in hilly areas, with location entropies of 2.68 and 1.29, respectively; and 37 and 12 items in small and medium undulating mountain ranges, with location entropies of 0.19 and 0.08, respectively. The large and great undulating mountain ranges have no ICH distribution.
Various influencing factors and the distribution of intangible cultural heritage in the Ancient Qin-Shu Roads
Topography has a significant impact on the distribution of ICH. The distribution of intangible cultural heritage decreases with increasing altitude. Flat terrain is beneficial to the production and dissemination of ICH, while mountains hinder the development of ICH.
Influence of climate on the distribution of ICH
Based on the 14 climatic zones of China, the ancient Qin-Shu roads can be roughly divided into 4 climatic types: northern temperate, northern subtropical, plateau temperate and middle subtropical, accounting for 17.53%, 25.55%, 0.10% and 56.82% of the area, respectively. After overlaying the ICH map and the climate type map (Fig. 6b), the location entropy results show that there are 169 ICH items in the northern temperate zone, with a location entropy of 5.68, the highest concentration of intangible heritage. There are 304 ICH items in the northern subtropical zone and 88 in the middle subtropical zone, with location entropies of 0.77 and 0.73, respectively. In contrast, there is no ICH distribution in the plateau temperate zone, so its location entropy is 0.
The ICH of the ancient Qin-Shu roads is mainly concentrated in temperate regions with good climate conditions and decreases with increasing climate discomfort. Generally, a temperate climate with four distinct seasons that is warm and humid is more conducive to productive lives and to the formation and dissemination of ICH.
Influence of rivers on the distribution of ICH
There are many rivers along the ancient Qin-Shu roads, including two main streams and five tributaries of the Yellow River and Yangtze River systems: the Yellow River Basin mainly contributes the Wei River, and the Yangtze River Basin the Han, Jialing, Minjiang and Wujiang Rivers. The many interconnected rivers create good conditions for the river network of the ancient Qin-Shu roads and for traffic and commerce along the ancient roads. To clarify the relationship between rivers and ICH, rivers of level 4 and above were taken as the data source, and buffer rings of 1 km, 3 km and 10 km were established using the ArcGIS 10.2 buffer tool (Fig. 6c). The intersection analysis in the Analysis toolbox was then used to count the ICH items in each buffer ring. Within 1 km of a river, 124 ICH items were distributed, accounting for 21.87% of the total; in the 1–3 km ring, 82 items (14.46%); and in the 3–10 km ring, 279 items (49.21%). The results show that rivers have a facilitating effect on the distribution of ICH along the ancient Qin-Shu roads.
The influence of transportation on the distribution of ICH
Based on basic traffic data, the buffer tool was used to establish 15 km buffer zones around expressways, high-speed railways and the ancient roads, and the spatial distributions of the transportation network and ICH projects were superimposed (Fig. 6d–f). The intersection analysis in the Analysis toolbox was then used to count the ICH items in each buffer layer. The results show that there were 404 ICH projects in the expressway and high-speed railway buffer zones, accounting for 71.37% of the total. There were 281 ICH projects in the 15 km buffer zone of the seven north–south ancient roads, accounting for 49.65% of the total. These results show that the ICH of the ancient Qin-Shu roads is distributed along transportation routes. The ICH has a significant relationship with the distribution of modern transportation routes, and ancient traffic may also have affected the distribution of ICH along the ancient Qin-Shu roads.
The influence of population evolution on ICH distribution
The population of ancient societies was both a necessary condition for and an important marker of socioeconomic development. The size of the population often determines the modes of social production and, in turn, the forms of social organization and social structure [62]. In Chinese history, the population reached peaks in the Qin and Han Dynasties; the Sui and Tang Dynasties; the Song and Yuan Dynasties; the Ming and Qing Dynasties; and modern times. The ICH of the ancient Qin-Shu roads from these periods accounts for 93.36% of the total. In the population troughs of the pre-Qin period and the Wei, Jin, Southern and Northern Dynasties, the number of ICH items was small, accounting for only 6.64%. To quantitatively reflect this correlation, population numbers for the historical periods were taken from "Statistics on Household, Field, and Field Assignment in China through the Ages" and "Population Geography of China". A fitting analysis against ICH counts in the various historical periods shows a significant correlation between them: R² = 0.8783 (Fig. 7). The change in population size can explain 87.83% of the variation in ICH, indicating that population change greatly affects the spatial and temporal distribution and evolution of ICH.
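The fit itself is an ordinary least-squares regression; the sketch below shows the R² computation with NumPy. The population and ICH series here are placeholders, since the published figures are read from Fig. 7 rather than tabulated:

```python
import numpy as np

# Placeholder series for the eight historical periods (not the published data)
pop = np.array([2, 20, 60, 50, 80, 100, 300, 500], dtype=float)  # population
ich = np.array([21, 40, 72, 13, 55, 29, 250, 50], dtype=float)   # ICH counts

coef = np.polyfit(pop, ich, deg=1)     # least-squares line
pred = np.polyval(coef, pop)
ss_res = np.sum((ich - pred) ** 2)
ss_tot = np.sum((ich - ich.mean()) ** 2)
print("R^2 =", 1 - ss_res / ss_tot)
```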
The fitting curve of population in historical periods and intangible cultural heritage of the Ancient Qin-Shu Roads
Influence of traditional villages on the distribution of ICH
As places with concentrated population distributions, villages usually retain relatively well-preserved living environments and cultural customs. ICH also reflects the cultural concepts and lifestyles of the original residents of local villages. Based on the five batches of traditional village lists jointly released by the Ministry of Housing and Urban‒Rural Development of China and other departments, buffer zones of 15 km, 30 km and 50 km were established using the ArcGIS 10.2 buffer tool. The intersection analysis in the Analysis toolbox was then used to count the ICH items in each buffer layer. In the buffer zone within 15 km of traditional villages, there are 87 ICH resources, accounting for 15.37% of the total; within 30 km, 297 ICH resources, accounting for 52.47%; and within 50 km, 405 ICH resources, accounting for 71.55% (Fig. 8). The distribution of ICH items along the ancient Qin-Shu roads is well coupled with the distribution of traditional villages.
Intangible cultural heritage of the Ancient Qin-Shu Roads and buffer zone of traditional villages
Comparison with previous studies
ICH is the living embodiment of the particular production methods, lifestyles, character and aesthetic habits of different peoples. Exploring its emergence and evolution is of great significance for promoting cultural protection and development [44]. This paper analyzes the spatial and temporal distribution characteristics of the ICH of the ancient Qin-Shu roads in China and discusses the main factors affecting its distribution. As a case study, the ancient Qin-Shu roads are a living fossil among the world's ancient roads, having made possible the communication and integration between the Sanqin civilization of northern China and the Bashu civilization of southern China. A wealth of ICH has been nurtured along this route. Methods such as the nearest neighbor index, kernel density estimation, standard deviation ellipse, location entropy, and buffer analysis are used to study these questions comprehensively and in depth.
The structure of ICH types reflects that the ancient Qin-Shu roads have a complete range of ICH types, but the distribution is uneven and stepped. The results of this study are consistent with those of Zhang et al. [54] and Wang et al. [63]. In this study, traditional handicraft was the most common ICH type along the ancient roads, accounting for 31.7% of the total. This may be because this type of ICH is closely correlated with daily life and increases as the population grows. Traditional medicine and sports recreation competition were the least common ICH types, accounting for 4.8% and 3.4% of the total, respectively. These two types of ICH mostly rely on word-of-mouth transmission or inheritance among individuals, which is easily lost, resulting in fewer projects of these types.
The spatial distribution of ICH along the ancient Qin-Shu roads is strongly heterogeneous, which is consistent with the results of Marzeion and Levermann [64] and Zhang et al. [54]. Marzeion's study confirmed that the world's cultural sites are characterized by agglomeration and are mainly located in coastal areas. Zhang et al. found that the national ICH of music in Xiangxi, China, is mainly found in the south of Xiangxi, while the provincial ICH is distributed in the west. Along the ancient Qin-Shu roads, the overall distribution of ICH is agglomerated, as are most individual types; only a few show an agglomerated–random distribution. Meanwhile, ICH is mainly concentrated in Xi'an, Chengdu and Baoji. In addition, there are spatial differences in the density of different types of ICH; for example, the density of quyi is characterized by two sub-high-density areas surrounding three high-density areas.
The temporal and spatial changes in heritage essentially reflect the spatial directionality and regional shifts of social and economic development [44]. Different historical periods had a strong influence on the distribution of the ICH of the ancient Qin-Shu roads, and the overall ICH distribution fluctuates over time. This result is consistent with those of Tian et al. [44] and Li et al. [65]. In this study, ICH peaked in three historical periods: the Qin and Han Dynasties; the Sui, Tang and Five Dynasties; and the Ming and Qing Dynasties. The troughs fell in the Wei, Jin, Southern and Northern Dynasties; the Song and Yuan Dynasties; and modern times. The historical track follows a "northeast–southwest" development path. The variation in the quantity of ICH across historical periods may be related to the war and peace of each period, and the shift of the historical track accords with the southward movement of China's political and economic center of gravity after the Sui and Tang Dynasties.
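The "northeast–southwest" track can be traced by computing the mean centre of the ICH items of each period and measuring how far that centre moves between periods. The sketch below uses hypothetical coordinates purely to show the calculation.

```python
import numpy as np

def mean_center(lons, lats):
    """Unweighted mean centre of a set of item coordinates."""
    return np.mean(lons), np.mean(lats)

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in km between two lon/lat points."""
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

# Toy data: centre in an early period vs. a late period.
c1 = mean_center([108.9, 107.2, 109.1], [34.3, 33.1, 34.0])
c2 = mean_center([104.1, 106.5, 103.8], [30.7, 29.6, 30.9])
print(f"centre moved {haversine_km(*c1, *c2):.0f} km")
```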
In terms of the main influencing factors, topography, climate, transportation and population evolution have the most profound impact on the distribution of ICH along the ancient Qin-Shu roads. This result is consistent with the results of previous studies [44, 64, 65]. In the study of Cho and Sung [66], ICH was mainly concentrated in areas with flat terrain, sufficient water and suitable climate, such as coastal areas or plains. The results of Zhang et al. [67] show that topography is an important factor affecting ICH in the Yellow River Basin. In addition, population and transportation accessibility are closely related to the distribution of cultural heritage [68], and they have a significant positive correlation [69].
Overall, the results of this study are consistent with those reported in previous studies, which supports their scientific soundness.
Theoretical and practical implications
Spatial analysis and visual expression research based on GIS have been widely used in various research fields [70,71,72,73,74,75]. This study introduces geography into the theoretical study of intangible cultural heritage, which is of great significance to enrich the theoretical research of intangible cultural heritage. The methods used and the results produced in this study demonstrate effective tools and provide references for obtaining the spatial and temporal distribution characteristics and influencing factors of intangible cultural heritage of the ancient Qin-Shu roads.
This study can provide references and suggestions for the protection and management of ICH. On the one hand, the analysis of the type structure of ICH and the differences in the quantity of each kind can be used to design targeted protection and utilization plans for different categories. On the other hand, the analysis of spatial and temporal distributions can establish relationships among ICH items. These approaches are conducive to forming distinctive ICH areas, deriving cultural tourism brands, and putting ICH tourism resources to use, thereby promoting cultural tourism and regional economic development.
This study focuses on the temporal and spatial distribution of ICH along the ancient Qin-Shu roads and its influencing factors. However, it does not quantify the extent to which these factors influence ICH or their joint effects; future research could address this with the geographical detector method. In addition, there are numerous linear cultural heritage sites along ancient roads worldwide, such as the Silk Road in China and the Frankincense Road in Oman. The natural and human conditions of different regions vary greatly, so how different environmental factors shape the spatial and temporal distribution of the ICH of ancient roads is another direction for future research.
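For reference, the core of the geographical detector method mentioned above is the factor-detector q-statistic, which measures how much of the variance in a variable is explained by a stratifying factor. A compact sketch, with made-up sample data, follows.

```python
import numpy as np

def geodetector_q(values, strata):
    """Factor-detector q-statistic of the geographical detector method:
    q = 1 - sum_h(N_h * var_h) / (N * var), in [0, 1]; larger q means the
    stratifying factor explains more of the spatial variation in `values`."""
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    total = len(values) * values.var()
    within = sum(len(values[strata == h]) * values[strata == h].var()
                 for h in np.unique(strata))
    return 1.0 - within / total

# Hypothetical example: ICH density per grid cell, stratified by terrain.
density = [3, 4, 5, 9, 10, 11, 1, 1, 2]
terrain = ["plain", "plain", "plain", "valley", "valley", "valley",
           "mountain", "mountain", "mountain"]
print(f"q = {geodetector_q(density, terrain):.3f}")
```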
The main conclusions of this study are as follows: (1) The ICH of the ancient Qin-Shu roads is complete in type and distributed in a stepwise manner. Traditional handicraft comprises 180 items, the largest number, accounting for 31.7%. Folk custom and traditional music are the next most numerous, at 81 and 66 items, respectively, together accounting for 25.9%. The remaining categories rank third overall, among which traditional medicine and sports recreation competition are the scarcest, accounting for only 4.8% and 3.4%, respectively. (2) The ICH of the ancient Qin-Shu roads has typical clustering characteristics. On the whole, three high-density circles have formed around Xi'an, Chengdu and Chongqing, and there are significant differences in the core areas of different types of ICH. (3) Over the course of historical development, the ICH of the ancient Qin-Shu roads shows a trend of "three rises and three falls". It peaked in the Qin and Han Dynasties; the Sui, Tang and Five Dynasties; and the Ming and Qing Dynasties, and reached its low points in the Wei, Jin, Southern and Northern Dynasties; the Song and Yuan Dynasties; and modern times. The number of traditional handicraft items increased dynasty by dynasty, while the numbers of folk literature, traditional dance and traditional medicine gradually decreased. Across the eight periods, the overall center of gravity of the ICH moved 467 km from northeast to southwest. (4) Within the natural geographical environment, topography and climate play an important role in the distribution of ICH, while rivers play an auxiliary role. Along the ancient Qin-Shu roads, the higher the altitude, the less the ICH; the more suitable the climate, the more the ICH. Among human geographical factors, transportation, traditional villages and population evolution have a strong impact on ICH. Expressways and high-speed railroads have the greatest influence on the distribution of ICH, with 71.37% of ICH located within their 15 km buffer zone, and 71.55% of the ICH is distributed within the 50 km buffer zone of traditional villages. The significant positive impact of demographic evolution on the generation of ICH can explain 87.83% of the changes in ICH.
ICH: Intangible cultural heritage
Luo L. Research on Spatial differentiation of tourism resources and tourism development in Great Road of Sichuan province from the perspective of heritage corridor. Chengdu Univ Technol. 2020 (in Chinese)
Huang SZ. A historical study of the transportation routes connecting Shensi and Szechuan provinces. Acta Geograp Sin. 1959;23:419–35 (in Chinese).
Li JC. The historical context of the rise and fall of Shu Road traffic. J Sanmenxia Polytech. 2014;13:6–12 (in Chinese).
Sun H. Preliminary discussion on the Shu Road heritage—dates, route and heritage type. Res Herit Pres. 2017;2:1–9 (in Chinese).
Bille M. Assembling heritage: investigating the Unesco proclamation of Bedouin intangible heritage in Jordan. Int J Herit Stud. 2012;18:107–23.
Su X, Li X, Wu Y, Yao L. How is intangible cultural heritage valued in the eyes of inheritors? Scale development and validation. J Hosp Tour Res. 2020;44:806–34.
Lombardo V, Pizzo A, Damiano R. Safeguarding and accessing drama as intangible cultural heritage. ACM J Comput Cult Herit. 2016;9:1–11.
Brezina P. Acoustics of historic spaces as a form of intangible cultural heritage. Antiquity. 2013;87:574–80.
Dimitropoulos K, Tsalakanidou F, Nikolopoulos S, et al. A multimodal approach for the safeguarding and transmission of intangible cultural heritage: the case of i-Treasures. IEEE Intell Syst. 2018;33:3–16.
Xie F. The review of the overseas study on related intangible culture heritage. Guizhou Ethnic Stud. 2011;139:93–8 (in Chinese).
Cozzani G, Pozz IF, Dagnino FM, et al. Innovative technologies for intangible cultural heritage education and preservation: the case of i-Treasures. Pers Ubiquit Comput. 2017;21:1–13.
Thomas B. Whom does heritage empower, and whom does it silence? Intangible cultural heritage at the Jemaa el Fnaa Marrakech. Int J Herit Stud. 2015;22:1–13.
Arizpe L. How to reconceptualize intangible cultural heritage. In: Culture, diversity and heritage: major studies. Springer; 2015. p. 95–99.
Zhang X, Yu H, Chen T, et al. Evaluation of tourism development value of intangible culture heritage resources: a case study of Suzhou City. Prog Geogr. 2016;35:997–1007 (in Chinese).
Pu LC, Li X. A research on educational inheritance of the Epopee Ashima of Yi people. J Res Educ Ethnic Minorities. 2016;27:137–44 (in Chinese).
Arizpe L. Intangible cultural heritage, diversity and coherence. Mus Int. 2004;56:130–6.
Robinson RNS, Clifford C. Authenticity and festival food service experience. Ann Tourism Res. 2012;39:571–600.
Smith L, Morgan A, Meer AVD. Community-driven research in cultural heritage management: the Waanyi women's history project. Int J Herit Stud. 2003;9:65–80.
Fromm AB. Ethnographic museums and intangible cultural heritage return to our roots. J Mar Isl Cult. 2016;5:9–10.
Saleh F, Ryan C. Jazz and knitwear: factors that attract tourists to festivals. Tourism Manag. 1993;14:289–97.
He L. Changes and causes of the traffic roads from Hanzhong basin to Guanzhong plain in Han Dynasty. J Shaanxi Univ Technol (Soc Sci). 2008;26:87–91 (in Chinese).
Wang ZJ. The management of the roads to Sichuan by people in Qin State. J Xianyang Norm Uni. 2012;27:7–11 (in Chinese).
Sun QX. History of the three kingdoms and Shu path. Shaanxi Arch. 2016;01:22–5 (in Chinese).
Li ZQ. On the special position of the old way in each posthouse in Sichuan and Shaanxi. Chin Hist Geogr. 1993;02:151–70 (in Chinese).
Li JC. Research on the early history of the Gu Route. J Shangqiu Voc Tech Coll. 2016;15:103–106, 109 (in Chinese).
Dang Y. The development, change and historical function of Baoxie Road. J Tangdu. 1997;04:76–9 (in Chinese).
Wang YP, Xu GT, Gao T, et al. Archaeological investigation from Luogu road to the ancient Tangluo road in the Qinling mountains. Relics Museol. 2017;03:18–26 (in Chinese).
Li ZQ. The Ziwu road in history. J Northwest Uni (Philos Soc Sci Edit). 1981;02:38–41 (in Chinese).
Editorial Committee of Shaanxi Ancient Qin-Shu Roads Heritage. Shaanxi Ancient Qin-Shu Roads Heritage. Sanqin Press. 2015:1–10 (in Chinese).
Yan GW. Traffic map of tang dynasty. Shanghai: Shanghai Classics Publishing House; 2007. (in Chinese).
Li QZ. History of Shu Road. Northwest University Press. 1985 (in Chinese).
Chen YY. The Ancient Shu Road based on the "Trinity" of the linear cultural heritage protected mode—Jianmenshudao centered. J Chinese Cult. 2014;02:73–9 (in Chinese).
Zhao XN, Guo Y. Situations and approaches of Shudao (Sichuan Section) researches in the perspective of cultural route theory. J Southwest Jiaotong Uni (Soc Sci). 2015;02:32–9 (in Chinese).
Tang F. A rustic opinion of research & protection of Shu Road Heritage. Stud Nat Cult Herit. 2017;02:10–19 (in Chinese).
Liu XT. Research on heritage constitution and conservation of Qin-Shu Ancient Road—a case study on the Fengzhou-Xinhongpu Section of the Lianyun Plank Road. Xi'an Uni Archit Technol. 2017 (in Chinese).
Shang CW. Interpretation and utilization conception of Jinniu Dao as linear cultural heritage. Stud Nat Cult Herit. 2017;2:20–9 (in Chinese).
Feng MY, Tang GH, Li QY, et al. Research of the tourist value of the ancient Shu Road. J China W Norm Univ (Nat Sci). 2007;04:361–364 (in Chinese).
Li YP. Shu Road poetry and the development of Shu Road tourism resources. J Shaanxi Uni Technol (Soc Sci). 2016;03:49–54 (in Chinese).
Shan JX. Actively promoting the protection of Sichuan Road cultural routes and the application for World Heritage status. China Herit News (in Chinese).
Wang Q, Li XB, Liu GY. Cultural itinerary development and protection—a case study of Shu(Sichuan) road. J Sichuan Tour Univ. 2016;01:80–2 (in Chinese).
Luo XD. The historical excavation and creative development of ancient Shu civilization in China starting from the application of Shu Dao as World Heritage site. Humanistic World. 2018;1:24–27 (in Chinese).
Yao Y, Wang X, Lu L, Liu C, Wu Q, Ren H, Yang S, Sun R, Luo L, Wu K. Proportionated distributions in spatiotemporal structure of the world cultural heritage sites: analysis and countermeasures. Sustainability. 2021;13:2148.
Liang Y, Yang R, Wang P, Yang A, Chen G. A quantitative description of the spatial–temporal distribution and evolution pattern of world cultural heritage. Herit Sci. 2021;7:1–14.
Tian XB, Hu J, Xu X, et al. Spatial–temporal distribution characteristics and influence mechanism of key cultural relics protection units in China at different historical periods. Econ Geogr. 2021;41:191–201 (in Chinese).
Xu BC, Pan JH. Spatial distribution characteristics of the intangible cultural heritage in China. Econ Geogr. 2018;38:188–96 (in Chinese).
Zhao YT. Research on route selection of Qin Shu ancient post road network (Guanzhong section) under the guidance of linear cultural heritage. Xi'an Uni Archit Technol. 2021 (in Chinese).
Liu QZ, Wang ZJ. China Shu Road Traffic lines. Sanqin Press. 2015 (in Chinese).
Peng T. Research on the value and heritage of Qinling Section of the Ancient Qinshu Road from the perspective of cultural route. Xi'an Uni Archit Technol. 2021 (in Chinese).
Wang P. Shu road in China. China Travel and Tourism Press. 2008:7 (in Chinese).
Yin D, Shi B, Chen XR. Spatial distribution of sports intangible cultural heritage tourism resources in China—based on GIS spatial analysis. J Beijing Sport Univ. 2018;41:116–22 (in Chinese).
Zhang J, He LX, Xiong KN, et al. Spatial pattern and influencing factors of intangible cultural heritage in Karst Areas: a case study of Guizhou Province. Resour Env Yangtze Basin. 2021;30:1055–68 (in Chinese).
China Intangible Cultural Heritage Network. https://www.ihchina.cn/ (accessed on 24 October 2022) (in Chinese).
Yue J. Temporal and spatial pattern of cultural heritage in Beijing-Tianjin-Hebei region and its influencing factors: a case study of Cultural Relic Protection Unit. Econ Geogr. 2020;40:221–230 (in Chinese).
Zhang XY, Xiang H, Liu R. Spatial pattern and influencing factors of intangible cultural heritage of music in Xiangxi, central China. Herit Sci. 2022;3:2–12.
Zhang C, Yang BG. Fundamentals of quantitative geography. Beijing: Higher Educ Press; 1991. (in Chinese).
Zhang SY, Zhong ZF, Xiong KN, et al. Spatial pattern of the caves in Guizhou Province and their influencing factors. Acta Geogr Sin. 2016;71:1998–2009 (in Chinese).
Wang SB, Guo JK. Spatial and temporal evolution and spatial correlation analysis of the transportation development in China. J Arid Land Res Environ. 2017;31(2):43–9 (in Chinese).
Ni XL, Lv WQ, Zhang D. An empirical study on the development of tourism industry agglomeration in Yunnan Province. J Guangxi Uni (Philos Soc Sci). 2018;40:55–60 (in Chinese).
Wang P, Liu M. The spatial influence of geography on the inheritance of traditional intangible culture: a case study of intangible cultural heritage in Shanxi province. Geogr Res. 2020;39:1807–21 (in Chinese).
Tang GA, Liu XJ, Yan GN, et al. Geographic information systems tutorial. Beijing: Higher Educ Press; 2007. (in Chinese).
Liu J, Huang XF, Fang GU, et al. Spatiotemporal variation of NDVI in the middle reaches of the Tarim River based on GIS buffer function. Arid Zone Res. 2018;35:171–80 (in Chinese).
Wang YR. Economic history of China. Beijing: Higher Educ Press; 2008. (in Chinese).
Wang YD, Yang YC. Study on the characteristics of regional "Non-heritage" and its cooperative development model a case of Shaanxi, Gansu and Xinjiang. Res Dvol Market. 2021;37:904–13 (in Chinese).
Marzeion B, Levermann A. Loss of cultural world heritage and currently inhabited places to sea-level rise. Environ Res Lett. 2014;9:2033–53.
Li JH, Hu MM, Zhang D, et al. The spatial and temporal differentiation characteristics of cultural heritage in the Yellow River Basin. J Arid Land Res Env. 2021;35:194–201 (in Chinese).
Sung C. A study on the characteristics of cultural spaces associated with intangible cultural heritage. Stud Pract Folk. 2017;30:155–86.
Zhang ZW, Li Q, Hu SX. Intangible cultural heritage in the Yellow River basin: its spatial–temporal distribution characteristics and differentiation causes. Sustainability. 2022;14(17):1–7.
Wang JJ. Flood risk maps to cultural heritage: measures and process. J Cult Herit. 2015;16:210–20.
Hong W, Su M. Influence of rapid transit on accessibility pattern and economic linkage at urban agglomeration scale in China. Open Geosci. 2019;11:804–14.
Kug YB. A study of a tourism information system for Korean natural tourist resources using a geographical information system. Int J Tourism Hosp Res. 2006;20:99–117.
Akram U, Quttineh NH, Wennergren U, Tonderski K, Metson GS. Author correction: enhancing nutrient recycling from excreta to meet crop nutrient needs in Sweden—a spatial analysis. Sci Rep. 2020;10:361.
Dhonju HK, Xiao W, Mills JP, et al. Share our cultural heritage (SOCH): Worldwide 3D heritage reconstruction and visualization via web and mobile GIS. Isprs Int J Geo-Inf. 2018;7:2–16.
Ferretti V, Montibeller G. An integrated framework for environmental multi-impact spatial risk analysis. Risk Anal. 2019;39:257–73.
Shen CH. An analysis for features of geospatially rescaled range analysis method and spatial scaling behavior. Nonlinear Dyn. 2017;89:243–54.
Chen X, Liu XB. Quantitative analysis of urban spatial morphology based on GIS regionalization and spatial syntax. J Indian Soc Remote Sens. 2022. https://doi.org/10.1007/s12524-021-01439-x.
The authors thank the research group for the financial support and the reviewers for their useful comments and suggestions.
This work was supported by the Major Theoretical and Practical Problems of Social Science in Shaanxi Province program, Grant No. 2022ND0341; by the Social Science Research of Shaanxi, Grant No. 2020J022; by the basic scientific research business expenses humanities and social sciences project of Northwest A&F University, Grant No. 2452022056; and by the youth cultivation project of the College of Landscape Architecture and Arts, Northwest A&F University.
College of Landscape Architecture and Arts, Northwest A&F University, Yangling, 712100, Shaanxi, China
Yuan Liu, Mo Chen & Yonggang Tian
L.Y.: conceptualization, data collection and quality, and formal analysis; C.M.: interpretation, visualization and methods; T.Y.: editing the manuscript. All authors read and approved the final manuscript.
Correspondence to Yonggang Tian.
Liu, Y., Chen, M. & Tian, Y. Temporal and spatial patterns and influencing factors of intangible cultural heritage: Ancient Qin-Shu roads, Western China. Herit Sci 10, 201 (2022). https://doi.org/10.1186/s40494-022-00840-0
Spatial and temporal pattern
Influencing factors
Ancient Qin-Shu roads
Utilization and protection
Keywords: private data analysis, statistical data privacy, differential privacy, noise addition
Cynthia Dwork
Frank McSherry
Microsoft Research SVC
Kobbi Nissim
Georgetown University; This work was done while the author was at the Department of Computer Science, Ben-Gurion University
We continue a line of research initiated in Dinur and Nissim (2003); Dwork and Nissim (2004); and Blum et al. (2005) on privacy-preserving statistical databases.
Consider a trusted server that holds a database of sensitive information. Given a query function $f$ mapping databases to reals, the so-called {\em true answer} is the result of applying $f$ to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user.
Previous work focused on the case of noisy sums, in which $f = \sum_i g(x_i)$, where $x_i$ denotes the $i$th row of the database and $g$ maps database rows to $[0,1]$. We extend the study to general functions $f$, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the {\em sensitivity} of the function $f$. Roughly speaking, this is the amount that any single argument to $f$ can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case.
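A minimal sketch of the calibration described above: for a real-valued query of sensitivity Δf, the paper's mechanism returns the true answer plus Laplace noise of scale Δf/ε. The function below is illustrative, not the authors' code.

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Return the true answer perturbed by Laplace noise with scale
    sensitivity / epsilon, the calibration proposed in the paper."""
    rng = rng or np.random.default_rng()
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query f(x) = sum_i g(x_i) with g in [0, 1] has
# sensitivity 1, since changing one row changes f by at most 1.
database = np.array([0, 1, 1, 0, 1, 1, 1, 0])
true_count = database.sum()
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))
```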
The first step is a very clean definition of privacy---now known as differential privacy---and measure of its loss. We also provide a set of tools for designing and combining differentially private algorithms, permitting the construction of complex differentially private analytical tools from simple differentially private primitives.
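The definition referred to here can be stated compactly (a paraphrase in standard notation): a randomized mechanism K is ε-differentially private if for all databases D1, D2 differing in a single row and all sets S of outputs,

```latex
% epsilon-indistinguishability on neighboring databases, and the L1
% sensitivity that sets the noise scale:
\Pr[\mathcal{K}(D_1)\in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{K}(D_2)\in S],
\qquad
\Delta f \;=\; \max_{D_1 \sim D_2} \lVert f(D_1) - f(D_2) \rVert_1 .
```

Returning f(D) plus Laplace noise of scale Δf/ε in each coordinate then satisfies this definition.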
Finally, we obtain separation results showing the increased value of interactive statistical release mechanisms over non-interactive ones.
Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. 2017. "Calibrating Noise to Sensitivity in Private Data Analysis". Journal of Privacy and Confidentiality 7 (3):17-51. https://doi.org/10.29012/jpc.v7i3.405.
Grant numbers IIS-1447700
Samuel S Wu, Shigang Chen, Deborah L Burr, Long Zhang, A New Data Collection Technique for Preserving Privacy , Journal of Privacy and Confidentiality: Vol. 7 No. 3 (2016)
Daniel Kifer, Bing-Rong Lin, An Axiomatic View of Statistical Privacy and Utility , Journal of Privacy and Confidentiality: Vol. 4 No. 1 (2012)
Yu-Xiang Wang, Per-instance Differential Privacy , Journal of Privacy and Confidentiality: Vol. 9 No. 1 (2019): Special Issue: Theory and Practice of Differential Privacy (TPDP2016)
Mohammad Alaggan, Sébastien Gambs, Anne-Marie Kermarrec, Heterogeneous Differential Privacy , Journal of Privacy and Confidentiality: Vol. 7 No. 2 (2016): Special Issue on the Theory and Practice of Differential Privacy 2015
Bing-Rong Lin, Dan Kifer, Towards a Systematic Analysis of Privacy Definitions , Journal of Privacy and Confidentiality: Vol. 5 No. 2 (2014)
Brendan Avent, Aleksandra Korolova, David Zeber, Torgeir Hovden, Benjamin Livshits, BLENDER: Enabling Local Search with a Hybrid Differential Privacy Model , Journal of Privacy and Confidentiality: Vol. 9 No. 2 (2019): Differential Privacy, including Special Issue on the Theory and Practice of Differential Privacy 2017
Natalie Shlomo, Statistical Disclosure Limitation: New Directions and Challenges , Journal of Privacy and Confidentiality: Vol. 8 No. 1 (2018): Commemorating Stephen Fienberg
Dinusha Vatsalan, Peter Christen, Christine M. O'Keefe, Vassilios S. Verykios, An Evaluation Framework for Privacy-Preserving Record Linkage , Journal of Privacy and Confidentiality: Vol. 6 No. 1 (2014)
Krishnaram Kenthapadi, Aleksandra Korolova, Ilya Mironov, Nina Mishra, Privacy via the Johnson-Lindenstrauss Transform , Journal of Privacy and Confidentiality: Vol. 5 No. 1 (2013)
Raj Chetty, John N Friedman, A Practical Method to Reduce Privacy Loss When Disclosing Statistics Based on Small Samples , Journal of Privacy and Confidentiality: Vol. 9 No. 2 (2019): Differential Privacy, including Special Issue on the Theory and Practice of Differential Privacy 2017
Cynthia Dwork, Nitin Kohli, Deirdre Mulligan, Differential Privacy in Practice: Expose your Epsilons! , Journal of Privacy and Confidentiality: Vol. 9 No. 2 (2019): Differential Privacy, including Special Issue on the Theory and Practice of Differential Privacy 2017
Cynthia Dwork, Adam Smith, Differential Privacy for Statistics: What we Know and What we Want to Learn , Journal of Privacy and Confidentiality: Vol. 1 No. 2 (2010)
Aloni Cohen, Kobbi Nissim, Linear Program Reconstruction in Practice , Journal of Privacy and Confidentiality: Vol. 10 No. 1 (2020): Special Issue: Theory and Practice of Differential Privacy (TPDP 2018)
Cynthia Dwork, Moni Naor, On the Difficulties of Disclosure Prevention in Statistical Databases or The Case for Differential Privacy , Journal of Privacy and Confidentiality: Vol. 2 No. 1 (2010)
Shiva P. Kasiviswanathan, Adam Smith, On the 'Semantics' of Differential Privacy: A Bayesian Formulation , Journal of Privacy and Confidentiality: Vol. 6 No. 1 (2014)
Cynthia Dwork, Alan Karr, Kobbi Nissim, Lars Vilhuber, On Privacy in the Age of COVID-19 , Journal of Privacy and Confidentiality: Vol. 10 No. 2 (2020)
Cynthia Dwork, Jonathan Ullman, The Fienberg Problem: How to Allow Human Interactive Data Analysis in the Age of Differential Privacy , Journal of Privacy and Confidentiality: Vol. 8 No. 1 (2018): Commemorating Stephen Fienberg
Cynthia Dwork, The Future of the Journal of Privacy and Confidentiality , Journal of Privacy and Confidentiality: Vol. 8 No. 1 (2018): Commemorating Stephen Fienberg
John M. Abowd, Kobbi Nissim, Chris J. Skinner, First Issue Editorial , Journal of Privacy and Confidentiality: Vol. 1 No. 1 (2009): Inaugural Issue
Albert Cheu, Adam Smith, Jonathan Ullman, Manipulation Attacks in Local Differential Privacy , Journal of Privacy and Confidentiality: Vol. 11 No. 1 (2021)
PT-Quantum Mechanics
Filed under: physics, Quantum mechanics — Tags: Carl bender, Perimeter Institute, PIRSA, PT QM, PT Quantum mechanics — sandokan65 @ 14:39
Carl Bender's talks at PIRSA (Perimeter Institute Recorded Seminar Archive) – http://pirsa.org/index.php?p=speaker&name=Carl_Bender
Carl Bender's talk on turbulence, iterated maps, chaos and classical mechanics in classical domain. Includes several interesting experimental demonstrations. – http://artsci.wustl.edu/~spenteco/newVideo/DLP_bender_083109.html
Carl Bender's papers at arXiv – http://arxiv.org/find/quant-ph/1/au:+Bender_C/0/1/0/all/0/1
"Response to Shalaby's Comment on "Families of Particles with Different Masses in PT-Symmetric Quantum Field Theory"" by Carl M. Bender, S. P. Klevansky (arXiv:1103.0338; 2011.03.02) – http://arxiv.org/abs/1103.0338
Abstract: In a recent Comment [arXiv: 1101.3980] Shalaby criticised our paper "Families of Particles with Different Masses in PT-Symmetric Quantum Field Theory" [arXiv:1002.3253]. On examining his arguments, we find that there are serious flaws at almost every stage of his Comment. In view of space and time considerations, we point out the major flaws that render his arguments invalid. Essentially Shalaby is attempting to obtain our results from a variational principle and to find a physical interpretation of his calculation. The variational procedure that he uses is inapplicable, and his description of the physics is wrong. We thus refute his criticism on all levels.
"PT-symmetric quantum state discrimination" by Carl M. Bender, Dorje C. Brody, Joao Caldeira, Bernard K. Meister (arXiv:1011.1871; 2010.11.08) – http://arxiv.org/abs/1011.1871
Abstract: Suppose that a system is known to be in one of two quantum states, $|\psi_1\rangle$ or $|\psi_2\rangle$. If these states are not orthogonal, then in conventional quantum mechanics it is impossible with one measurement to determine with certainty which state the system is in. However, because a non-Hermitian PT-symmetric Hamiltonian determines the inner product that is appropriate for the Hilbert space of physical states, it is always possible to choose this inner product so that the two states $|\psi_1\rangle$ and $|\psi_2\rangle$ are orthogonal. Thus, quantum state discrimination can, in principle, be achieved with a single measurement.
"Tunneling in classical mechanics" by Carl M. Bender, Daniel W. Hook (arXiv:1011.0121; 2010.09.16) – http://arxiv.org/abs/1011.0121
Abstract: A classical particle that is initially in a classically allowed region of a potential is not confined to this region for all time if its energy is complex. Rather, the particle may travel through complex coordinate space and visit other classically allowed regions. Thus, a complex-energy classical particle can exhibit tunneling-like behavior. This tunneling behavior persists as the imaginary part of the energy tends to zero. Hence one may compare complex classical tunneling times with quantum tunneling probabilities. An accurate numerical study of quantum and classical tunneling demonstrates that as the energy increases, the probabilities associated with complex classical tunneling approach the corresponding quantum probabilities.
The Hamiltonian with the quartic double-well potential has a set of lowest energy levels, one of which lies just above the barrier separating the two potential wells; the sextic potential likewise has its own set of lowest energy levels. The geometric characteristics of the quartic potential are the location of the left well minimum, the location of the right well minimum, and the location of the barrier between them; the sextic potential has analogous geometric characteristics.
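A hand-rolled sketch of the complex classical dynamics behind these extracts: the double-well below, V(x) = x^4 - 5x^2, is a stand-in chosen for illustration (the post's exact potentials and numerical values were lost in extraction), and the integrator simply runs Hamilton's equations for H = p^2 + V(x) with complex energy and complex initial data.

```python
import numpy as np

V = lambda x: x**4 - 5 * x**2          # stand-in double-well potential
dV = lambda x: 4 * x**3 - 10 * x

def rhs(state):
    x, p = state
    return np.array([2 * p, -dV(x)])   # dx/dt = dH/dp, dp/dt = -dH/dx

def rk4_trajectory(x0, p0, dt=1e-3, steps=20000):
    """Fixed-step RK4 for complex-valued phase-space trajectories."""
    state = np.array([x0, p0], dtype=complex)
    path = [state.copy()]
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * dt * k1)
        k3 = rhs(state + 0.5 * dt * k2)
        k4 = rhs(state + dt * k3)
        state = state + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(state.copy())
    return np.array(path)

E = -4.0 + 0.1j                        # complex energy
x0 = np.roots([1, 0, -5, 0, -E])[0]    # a turning point: V(x0) = E, p0 = 0
traj = rk4_trajectory(x0, 0.0)
print(traj[:3, 0])                     # complex positions along the path
```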
"Extending PT symmetry from Heisenberg algebra to E2 algebra" by Carl M. Bender, R. J. Kalveks (arXiv:1009.3236 2010.09.16; Int.J.Theor.Phys.50:955-962,2011) – http://arxiv.org/abs/1009.3236
Abstract: The E2 algebra has three elements, J, u, and v, which satisfy the commutation relations $[u,v]=0$, $[u,J]=iv$, and $[v,J]=-iu$. We can construct the Hamiltonian $H=J^2+gu$, where g is a real parameter, from these elements. This Hamiltonian is Hermitian and consequently it has real eigenvalues. However, we can also construct the PT-symmetric and non-Hermitian Hamiltonian $H=J^2+igv$, where again g is real. As in the case of PT-symmetric Hamiltonians constructed from the elements x and p of the Heisenberg algebra, there are two regions in parameter space for this PT-symmetric Hamiltonian, a region of unbroken PT symmetry in which all the eigenvalues are real and a region of broken PT symmetry in which some of the eigenvalues are complex. The two regions are separated by a critical value of g.
"Quantum counterpart of spontaneously broken classical PT symmetry" by Carl M. Bender, Hugh F. Jones (arXiv:1008.0782 2010.08.04; J.Phys.A44:015301,2011) – http://arxiv.org/abs/1008.0782
Abstract: The classical trajectories of a particle governed by the PT-symmetric Hamiltonian $H=p^2+x^2(ix)^\varepsilon$ ($\varepsilon\ge0$) have been studied in depth. It is known that almost all trajectories that begin at a classical turning point oscillate periodically between this turning point and the corresponding PT-symmetric turning point. It is also known that there are regions in $\varepsilon$ for which the periods of these orbits vary rapidly as functions of $\varepsilon$ and that in these regions there are isolated values of $\varepsilon$ for which the classical trajectories exhibit spontaneously broken PT symmetry. The current paper examines the corresponding quantum-mechanical systems. The eigenvalues of these quantum systems exhibit characteristic behaviors that are correlated with those of the associated classical system.
"Almost zero-dimensional PT-symmetric quantum field theories" by Carl M. Bender (arXiv:1003.3881; 2010.03.19) – http://arxiv.org/abs/1003.3881
Abstract: In 1992 Bender, Boettcher, and Lipatov proposed in two papers a new and unusual nonperturbative calculational tool in quantum field theory. The objective was to expand the Green's functions of the quantum field theory as Taylor series in powers of the space-time dimension D. In particular, the vacuum energy for a massless \phi^{2N} (N=1,2,3,…) quantum field theory was studied. The first two Taylor coefficients in this dimensional expansion were calculated {\it exactly} and a set of graphical rules were devised that could be used to calculate approximately the higher coefficients in the series. This approach is mathematically valid and gives accurate results, but it has not been actively pursued and investigated. Subsequently, in 1998 Bender and Boettcher discovered that PT-symmetric quantum-mechanical Hamiltonians of the form H=p^2+x^2(ix)^\epsilon, where \epsilon\geq0, have real spectra. These new kinds of complex non-Dirac-Hermitian Hamiltonians define physically acceptable quantum-mechanical theories. This result in quantum mechanics suggests that the corresponding non-Dirac-Hermitian D-dimensional \phi^2(i\phi)^\epsilon quantum field theories might also have real spectra. To examine this hypothesis, we return to the technique devised in 1992 and in this paper we calculate the first two coefficients in the dimensional expansion of the ground-state energy of this complex non-Dirac-Hermitian quantum field theory. We show that to first order in this dimensional approximation the ground-state energy is indeed real for \epsilon\geq0.
"Families of particles with different masses in PT-symmetric quantum field theory" by C. M. Bender, S. P. Klevansky (arXiv:1002.3253; 2010.07.05) – http://arxiv.org/abs/1002.3253
Abstract: An elementary field-theoretic mechanism is proposed that allows one Lagrangian to describe a family of particles having different masses but otherwise similar physical properties. The mechanism relies on the observation that the Dyson-Schwinger equations derived from a Lagrangian can have many different but equally valid solutions. Nonunique solutions to the Dyson-Schwinger equations arise when the functional integral for the Green's functions of the quantum field theory converges in different pairs of Stokes' wedges in complex field space, and the solutions are physically viable if the pairs of Stokes' wedges are PT symmetric.
"Classical Particle in a Complex Elliptic Potential" by Carl M. Bender, Daniel W. Hook, Karta Singh Kooner (arXiv:1001.1548 2010.01.10; J.Phys.A43:165201,2010) – http://arxiv.org/abs/1001.1548
Abstract: This paper reports a numerical study of complex classical trajectories of a particle in an elliptic potential. This study of doubly-periodic potentials is a natural sequel to earlier work on complex classical trajectories in trigonometric potentials. For elliptic potentials there is a two-dimensional array of identical cells in the complex plane, and each cell contains a pair of turning points. The particle can travel both horizontally and vertically as it visits these cells, and sometimes the particle is captured temporarily by a pair of turning points. If the particle's energy lies in a conduction band, the particle drifts through the lattice of cells and is never captured by the same pair of turning points more than once. However, if the energy of the particle is not in a conduction band, the particle can return to previously visited cells.
"Complex Elliptic Pendulum" by Carl M. Bender, Daniel W. Hook, Karta Kooner (arXiv:1001.0131; 2009.12.31) – http://arxiv.org/abs/1001.0131
Abstract: This paper briefly summarizes previous work on complex classical mechanics and its relation to quantum mechanics. It then introduces a previously unstudied area of research involving the complex particle trajectories associated with elliptic potentials.
"Probability Density in the Complex Plane" by Carl M. Bender, Daniel W. Hook, Peter N. Meisinger, Qing-hai Wang (arXiv:0912.4659; 2010.01.23) – http://arxiv.org/abs/0912.4659
Abstract: The correspondence principle asserts that quantum mechanics resembles classical mechanics in the high-quantum-number limit. In the past few years many papers have been published on the extension of both quantum mechanics and classical mechanics into the complex domain. However, the question of whether complex quantum mechanics resembles complex classical mechanics at high energy has not yet been studied. This paper introduces the concept of a local quantum probability density $\rho(z)$ in the complex plane. It is shown that there exist infinitely many complex contours $C$ of infinite length on which $\rho(z)$ is real and positive. Furthermore, the probability integral $\int_C \rho(z)\,dz$ is finite. Demonstrating the existence of such contours is the essential element in establishing the correspondence between complex quantum and classical mechanics. The mathematics needed to analyze these contours is subtle and involves the use of asymptotics beyond all orders.
"Complex Correspondence Principle" by Carl M. Bender, Daniel W. Hook, Peter N. Meisinger, Qing-hai Wang (arXiv:0912.2069 2009.12.10; Phys.Rev.Lett.104:061601,2010) – http://arxiv.org/abs/0912.2069
Abstract: Quantum mechanics and classical mechanics are two very different theories, but the correspondence principle states that quantum particles behave classically in the limit of high quantum number. In recent years much research has been done on extending both quantum mechanics and classical mechanics into the complex domain. This letter shows that these complex extensions continue to exhibit a correspondence, and that this correspondence becomes more pronounced in the complex domain. The association between complex quantum mechanics and complex classical mechanics is subtle and demonstrating this relationship requires the use of asymptotics beyond all orders.
"PT symmetry and necessary and sufficient conditions for the reality of energy eigenvalues" by Carl M. Bender, Philip D. Mannheim (arXiv:0902.1365; 2009.02.09) – http://arxiv.org/abs/0902.1365
Abstract: Despite its common use in quantum theory, the mathematical requirement of Dirac Hermiticity of a Hamiltonian is sufficient to guarantee the reality of energy eigenvalues but not necessary. By establishing three theorems, this paper gives physical conditions that are both necessary and sufficient. First, it is shown that if the secular equation is real, the Hamiltonian is necessarily PT symmetric. Second, if a linear operator C that obeys the two equations [C,H]=0 and C^2=1 is introduced, then the energy eigenvalues of a PT-symmetric Hamiltonian that is diagonalizable are real only if this C operator commutes with PT. Third, the energy eigenvalues of PT-symmetric Hamiltonians having a nondiagonalizable, Jordan-block form are real. These theorems hold for matrix Hamiltonians of any dimensionality.
"Optimal Time Evolution for Hermitian and Non-Hermitian Hamiltonians" by Carl M. Bender, Dorje C. Brody (arXiv:0808.1823; 2008.08.13) – http://arxiv.org/abs/0808.1823
Abstract: Consider the set of all Hamiltonians whose largest and smallest energy eigenvalues, E_max and E_min, differ by a fixed energy \omega. Given two quantum states, an initial state |\psi_I> and a final state |\psi_F>, there exist many Hamiltonians H belonging to this set under which |\psi_I> evolves in time into |\psi_F>. Which Hamiltonian transforms the initial state to the final state in the least possible time \tau? For Hermitian Hamiltonians, \tau has a nonzero lower bound. However, among complex non-Hermitian PT-symmetric Hamiltonians satisfying the same energy constraint, \tau can be made arbitrarily small without violating the time-energy uncertainty principle. The minimum value of \tau can be made arbitrarily small because for PT-symmetric Hamiltonians the evolution path from the vector |\psi_I> to the vector |\psi_F>, as measured using the Hilbert-space metric appropriate for this theory, can be made arbitrarily short. The mechanism described here resembles the effect in general relativity in which two space-time points can be made arbitrarily close if they are connected by a wormhole. This result may have applications in quantum computing.
"Quantum effects in classical systems having complex energy" by Carl M. Bender, Dorje C. Brody, Daniel W. Hook (arXiv:0804.4169 2008.04.25; J.Phys.A41:352003,2008) – http://arxiv.org/abs/0804.4169
Abstract: On the basis of extensive numerical studies it is argued that there are strong analogies between the probabilistic behavior of quantum systems defined by Hermitian Hamiltonians and the deterministic behavior of classical mechanical systems extended into the complex domain. Three models are examined: the quartic double-well potential, the cubic potential, and the periodic potential. For the quartic potential a wave packet that is initially localized in one side of the double-well can tunnel to the other side. Complex solutions to the classical equations of motion exhibit a remarkably analogous behavior. Furthermore, classical solutions come in two varieties, which resemble the even-parity and odd-parity quantum-mechanical bound states. For the cubic potential, a quantum wave packet that is initially in the quadratic portion of the potential near the origin will tunnel through the barrier and give rise to a probability current that flows out to infinity. The complex solutions to the corresponding classical equations of motion exhibit strongly analogous behavior. For the periodic potential a quantum particle whose energy lies between -1 and 1 can tunnel repeatedly between adjacent classically allowed regions and thus execute a localized random walk as it hops from region to region. Furthermore, if the energy of the quantum particle lies in a conduction band, then the particle delocalizes and drifts freely through the periodic potential. A classical particle having complex energy executes a qualitatively analogous local random walk, and there exists a narrow energy band for which the classical particle becomes delocalized and moves freely through the potential.
"Comment on the Quantum Brachistochrone Problem" by C. M. Bender, D. C. Brody, H. F. Jones, B. K. Meister (arXiv:0804.3487; 2008.04.22) – http://arxiv.org/abs/0804.3487
Abstract: In this brief comment we attempt to clarify the apparent discrepancy between the papers [1] and [2] on the quantum brachistochrone, namely whether it is possible to use a judicious mixture of Hermitian and non-Hermitian quantum mechanics to evade the standard lower limit on the time taken for evolution by a Hermitian Hamiltonian with given energy dispersion between two given states.
"Exact Isospectral Pairs of PT-Symmetric Hamiltonians" by Carl M. Bender, Daniel W. Hook (arXiv:0802.2910 2008.04.27; J.Phys.A41:244005,2008) – http://arxiv.org/abs/0802.2910
Abstract: A technique for constructing an infinite tower of pairs of PT-symmetric Hamiltonians ($n=2,3,4,\ldots$) that have exactly the same eigenvalues is described. The eigenvalue problem for the first Hamiltonian of the pair must be posed in the complex domain, so its eigenfunctions satisfy a complex differential equation and fulfill homogeneous boundary conditions in Stokes' wedges in the complex plane. The eigenfunctions of the second Hamiltonian of the pair obey a real differential equation and satisfy boundary conditions on the real axis. This equivalence constitutes a proof that the eigenvalues of both Hamiltonians are real. Although the eigenvalue differential equation associated with the second Hamiltonian is real, this Hamiltonian exhibits quantum anomalies (terms proportional to powers of $\hbar$). These anomalies are remnants of the complex nature of the equivalent first Hamiltonian. In the classical limit in which the anomaly terms are discarded, the two Hamiltonians of each pair have closed classical orbits whose periods are identical.
Extracts:
The paper considers the first three members ($n=2,3,4$) of the family of PT-symmetric Hamiltonians. Each of these Hamiltonians has the same discrete real spectrum as a corresponding member of another family of Hamiltonians, which are Hermitian.
Example: for the first member of the family one obtains an equivalent Hermitian Hamiltonian containing a quantum-anomaly term (proportional to a power of $\hbar$). The corresponding classical Hamiltonians also have a correspondence between them: namely, each closed orbit of the first one has a corresponding closed orbit of the second one, both orbits with exactly the same periods.
"Does the complex deformation of the Riemann equation exhibit shocks?" by Carl M. Bender, Joshua Feinberg (arXiv:0709.2727 2007.09.17; J.Phys.A41:244004,2008) – http://arxiv.org/abs/0709.2727
Abstract: The Riemann equation $u_t+uu_x=0$, which describes a one-dimensional accelerationless perfect fluid, possesses solutions that typically develop shocks in a finite time. This equation is PT symmetric. A one-parameter PT-invariant complex deformation of this equation, $u_t-iu(iu_x)^\epsilon=0$ ($\epsilon$ real), is solved exactly using the method of characteristic strips, and it is shown that for real initial conditions, shocks cannot develop unless $\epsilon$ is an odd integer.
"No-ghost theorem for the fourth-order derivative Pais-Uhlenbeck oscillator model" by Carl M. Bender, Philip D. Mannheim (arXiv:0706.0207 2007.06.01; Phys.Rev.Lett.100:110402,2008) – http://arxiv.org/abs/0706.0207
Abstract: Contrary to common belief, it is shown that theories whose field equations are higher than second order in derivatives need not be stricken with ghosts. In particular, the prototypical fourth-order derivative Pais-Uhlenbeck oscillator model is shown to be free of states of negative energy or negative norm. When correctly formulated (as a PT-symmetric theory), the theory determines its own Hilbert space and associated positive-definite inner product. In this Hilbert space the model is found to be a fully acceptable quantum-mechanical theory that exhibits unitary time evolution.
"Faster than Hermitian Quantum Mechanics" by Carl M. Bender, Dorje C. Brody, Hugh F. Jones, Bernhard K. Meister (arXiv:quant-ph/0609032 2006.09.05; Phys.Rev.Lett.98:040403,2007) – http://arxiv.org/abs/quant-ph/0609032
Abstract: Given an initial quantum state |psi_I> and a final quantum state |psi_F> in a Hilbert space, there exist Hamiltonians H under which |psi_I> evolves into |psi_F>. Consider the following quantum brachistochrone problem: Subject to the constraint that the difference between the largest and smallest eigenvalues of H is held fixed, which H achieves this transformation in the least time tau? For Hermitian Hamiltonians tau has a nonzero lower bound. However, among non-Hermitian PT-symmetric Hamiltonians satisfying the same energy constraint, tau can be made arbitrarily small without violating the time-energy uncertainty principle. This is because for such Hamiltonians the path from |psi_I> to |psi_F> can be made short. The mechanism described here is similar to that in general relativity in which the distance between two space-time points can be made small if they are connected by a wormhole. This result may have applications in quantum computing.
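For reference, the Hermitian lower bound at issue is usually written as follows (a paraphrase, with $\omega = E_{\max}-E_{\min}$ as in the abstract); for orthogonal initial and final states it reduces to $\pi\hbar/\omega$:

```latex
% Hermitian quantum brachistochrone bound (passage-time lower limit):
\tau \;\ge\; \frac{2\hbar}{\omega}\,
      \arccos\!\big(\lvert\langle \psi_F \mid \psi_I \rangle\rvert\big)
```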
"Equivalence of a Complex -Symmetric Quartic Hamiltonian and a Hermitian Quartic Hamiltonian with an Anomaly" by Carl M. Bender, Dorje C. Brody, Jun-Hua Chen, Hugh F. Jones, Kimball A. Milton, Michael C. Ogilvie (arXiv:hep-th/0605066 2006.05.08; Phys.Rev.D74:025016,2006) – http://arxiv.org/abs/hep-th/0605066
Abstract: In a recent paper Jones and Mateo used operator techniques to show that the non-Hermitian PT-symmetric wrong-sign quartic Hamiltonian has the same spectrum as a conventional Hermitian quartic Hamiltonian. Here, this equivalence is demonstrated very simply by means of differential-equation techniques and, more importantly, by means of functional-integration techniques. It is shown that the linear term in the Hermitian Hamiltonian is anomalous; that is, this linear term has no classical analog. The anomaly arises because of the broken parity symmetry of the original non-Hermitian PT-symmetric Hamiltonian. This anomaly in the Hermitian form of a PT-symmetric quartic Hamiltonian is unchanged if a harmonic term is introduced into the Hamiltonian. When there is a harmonic term, an immediate physical consequence of the anomaly is the appearance of bound states; if there were no anomaly term, there would be no bound states. Possible extensions of this work to quantum field theory in higher-dimensional space-time are discussed.
"Calculation of the Hidden Symmetry Operator for a -Symmetric Square Well" by Carl M. Bender, Barnabas Tan (arXiv:quant-ph/0601123 2006.01.18; J.Phys.A39:1945-1953,2006) – http://arxiv.org/abs/quant-ph/0601123
Abstract: It has been shown that a Hamiltonian with an unbroken PT symmetry also possesses a hidden symmetry that is represented by the linear operator C. This symmetry operator guarantees that the Hamiltonian acts on a Hilbert space with an inner product that is both positive definite and conserved in time, thereby ensuring that the Hamiltonian can be used to define a unitary theory of quantum mechanics. In this paper it is shown how to construct the C operator for the PT-symmetric square well using perturbative techniques.
"PT-Symmetric Versus Hermitian Formulations of Quantum Mechanics" by Carl M. Bender, Jun-Hua Chen, Kimball A. Milton (arXiv:hep-th/0511229 2005.11.23; J.Phys.A39:1657-1668,2006) – http://arxiv.org/abs/hep-th/0511229
Abstract: A non-Hermitian Hamiltonian that has an unbroken PT symmetry can be converted by means of a similarity transformation to a physically equivalent Hermitian Hamiltonian. This raises the following question: In which form of the quantum theory, the non-Hermitian or the Hermitian one, is it easier to perform calculations? This paper compares both forms of a non-Hermitian quantum-mechanical Hamiltonian and demonstrates that it is much harder to perform calculations in the Hermitian theory because the perturbation series for the Hermitian Hamiltonian is constructed from divergent Feynman graphs. For the Hermitian version of the theory, dimensional continuation is used to regulate the divergent graphs that contribute to the ground-state energy and the one-point Green's function. The results that are obtained are identical to those found much more simply and without divergences in the non-Hermitian PT-symmetric Hamiltonian. The $\mathcal{O}(g^4)$ contribution to the ground-state energy of the Hermitian version of the theory involves graphs with overlapping divergences, and these graphs are extremely difficult to regulate. In contrast, the graphs for the non-Hermitian version of the theory are finite to all orders and they are very easy to evaluate.
"Semiclassical analysis of a complex quartic Hamiltonian" by Carl M. Bender, Dorje C. Brody, Hugh F. Jones (arXiv:quant-ph/0509034 2009.09.05; Phys.Rev.D73:025002,2006) – http://arxiv.org/abs/quant-ph/0509034
Abstract: It is necessary to calculate the C operator for a non-Hermitian PT-symmetric Hamiltonian in order to demonstrate that H defines a consistent unitary theory of quantum mechanics. However, the C operator cannot be obtained by using perturbative methods. Including a small imaginary cubic term gives a Hamiltonian whose C operator can be obtained perturbatively. In the semiclassical limit all terms in the perturbation series can be calculated in closed form and the perturbation series can be summed exactly. The result is a closed-form expression for C having a nontrivial dependence on the dynamical variables x and p and on the coupling parameter.
"Reflectionless Potentials and PT Symmetry" by Zafar Ahmed, Carl M. Bender, M. V. Berry (arXiv:quant-ph/0508117 2005.08.16; J.Phys.A38:L627-L630,2005) – http://arxiv.org/abs/quant-ph/0508117
Abstract: Large families of Hamiltonians that are non-Hermitian in the conventional sense have been found to have all eigenvalues real, a fact attributed to an unbroken PT symmetry. The corresponding quantum theories possess an unconventional scalar product. The eigenvalues are determined by differential equations with boundary conditions imposed in wedges in the complex plane. For a special class of such systems, it is possible to impose the PT-symmetric boundary conditions on the real axis, which lies on the edges of the wedges. The PT-symmetric spectrum can then be obtained by imposing the more transparent requirement that the potential be reflectionless.
"Dual PT-Symmetric Quantum Field Theories" by Carl M. Bender, H. F. Jones, R. J. Rivers (arXiv:hep-th/0508105 2005.08.15; Phys.Lett. B625 (2005) 333-340) – http://arxiv.org/abs/hep-th/0508105
Abstract: Some quantum field theories described by non-Hermitian Hamiltonians are investigated. It is shown that for the case of a free fermion field theory with a mass term the Hamiltonian is PT-symmetric. Depending on the mass parameter this symmetry may be either broken or unbroken. When the symmetry is unbroken, the spectrum of the quantum field theory is real. For the PT-symmetric version of the massive Thirring model in two-dimensional space-time, which is dual to the PT-symmetric scalar Sine-Gordon model, an exact construction of the C operator is given. It is shown that the PT-symmetric massive Thirring and Sine-Gordon models are equivalent to the conventional Hermitian massive Thirring and Sine-Gordon models with appropriately shifted masses.
"New Quasi-Exactly Solvable Sextic Polynomial Potentials" by Carl M. Bender, Maria Monou (arXiv:quant-ph/0501053 2005.01.11) – http://arxiv.org/abs/quant-ph/0501053
Abstract: A Hamiltonian is said to be quasi-exactly solvable (QES) if some of the energy levels and the corresponding eigenfunctions can be calculated exactly and in closed form. An entirely new class of QES Hamiltonians having sextic polynomial potentials is constructed. These new Hamiltonians are different from the sextic QES Hamiltonians in the literature because their eigenfunctions obey PT-symmetric rather than Hermitian boundary conditions. These new Hamiltonians present a novel problem that is not encountered when the Hamiltonian is Hermitian: It is necessary to distinguish between the parametric region of unbroken PT symmetry, in which all of the eigenvalues are real, and the region of broken PT symmetry, in which some of the eigenvalues are complex. The precise location of the boundary between these two regions is determined numerically using extrapolation techniques and analytically using WKB analysis.
"Introduction to PT-Symmetric Quantum Theory" by Carl M. Bender (arXiv:quant-ph/0501052 2005.01.11; Contemp.Phys.46:277-292,2005) – http://arxiv.org/abs/quant-ph/0501052
Abstract: In most introductory courses on quantum mechanics one is taught that the Hamiltonian operator must be Hermitian in order that the energy levels be real and that the theory be unitary (probability conserving). To express the Hermiticity of a Hamiltonian, one writes $H=H^{\dagger}$, where the symbol $\dagger$ denotes the usual Dirac Hermitian conjugation; that is, transpose and complex conjugate. In the past few years it has been recognized that the requirement of Hermiticity, which is often stated as an axiom of quantum mechanics, may be replaced by the less mathematical and more physical requirement of space-time reflection symmetry (PT symmetry) without losing any of the essential physical features of quantum mechanics. Theories defined by non-Hermitian PT-symmetric Hamiltonians exhibit strange and unexpected properties at the classical as well as at the quantum level. This paper explains how the requirement of Hermiticity can be evaded and discusses the properties of some non-Hermitian PT-symmetric quantum theories.
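The canonical example behind this review is the Bender-Boettcher family, with parity $\mathcal{P}: x \to -x,\ p \to -p$ and time reversal $\mathcal{T}: x \to x,\ p \to -p,\ i \to -i$:

```latex
% Spectrum real and positive for all eps >= 0 (unbroken PT symmetry);
% eps = 2 gives the wrong-sign quartic -x^4 discussed elsewhere in this post.
H = p^2 + x^2 (ix)^{\varepsilon}, \qquad \varepsilon \ge 0 .
```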
"The C Operator in PT-Symmetric Quantum Theories" by Carl M. Bender, Joachim Brod, Andre Refig, Moritz Reuter (arXiv:quant-ph/0402026 2004.02.03) – http://arxiv.org/abs/quant-ph/0402026
Abstract: The Hamiltonian H specifies the energy levels and the time evolution of a quantum theory. It is an axiom of quantum mechanics that H be Hermitian because Hermiticity guarantees that the energy spectrum is real and that the time evolution is unitary (probability preserving). This paper investigates an alternative way to construct quantum theories in which the conventional requirement of Hermiticity (combined transpose and complex conjugate) is replaced by the more physically transparent condition of space-time reflection (PT) symmetry. It is shown that if the PT symmetry of a Hamiltonian H is not broken, then the spectrum of H is real. Examples of PT-symmetric non-Hermitian quantum-mechanical Hamiltonians are $H=p^2+ix^3$ and $H=p^2-x^4$. The crucial question is whether PT-symmetric Hamiltonians specify physically acceptable quantum theories in which the norms of states are positive and the time evolution is unitary. The answer is that a Hamiltonian that has an unbroken PT symmetry also possesses a physical symmetry represented by a linear operator called C. Using C it is shown how to construct an inner product whose associated norm is positive definite. The result is a new class of fully consistent complex quantum theories. Observables are defined, probabilities are positive, and the dynamics is governed by unitary time evolution. After a review of PT-symmetric quantum mechanics, new results are presented here in which the C operator is calculated perturbatively in quantum mechanical theories having several degrees of freedom.
"Finite-Dimensional PT-Symmetric Hamiltonians" by Carl M. Bender, Peter N. Meisinger, Qinghai Wang (arXiv:quant-ph/0303174; 2003.03.29) – http://arxiv.org/abs/quant-ph/0303174
Abstract: This paper investigates finite-dimensional representations of PT-symmetric Hamiltonians. In doing so, it clarifies some of the claims made in earlier papers on PT-symmetric quantum mechanics. In particular, it is shown here that there are two ways to extend real symmetric Hamiltonians into the complex domain: (i) The usual approach is to generalize such Hamiltonians to include complex Hermitian Hamiltonians. (ii) Alternatively, one can generalize real symmetric Hamiltonians to include complex PT-symmetric Hamiltonians. In the first approach the spectrum remains real, while in the second approach the spectrum remains real if the PT symmetry is not broken. Both generalizations give a consistent theory of quantum mechanics, but if D>2, a D-dimensional Hermitian matrix Hamiltonian has more arbitrary parameters than a D-dimensional PT-symmetric matrix Hamiltonian.
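As a quick illustration of the finite-dimensional case (a minimal sketch, not taken from the paper; the 2x2 parametrization below is one standard choice of a PT-symmetric matrix, with P the swap matrix and T complex conjugation), the eigenvalues are real when the PT symmetry is unbroken and form a complex-conjugate pair when it is broken:

import numpy as np

# H = [[r e^{i t}, s], [s, r e^{-i t}]], with r, s, t real; PT symmetry
# means P H* P = H. Eigenvalues: r cos t +/- sqrt(s^2 - r^2 sin^2 t).
def pt_hamiltonian(r, s, t):
    return np.array([[r * np.exp(1j * t), s],
                     [s, r * np.exp(-1j * t)]])

P = np.array([[0.0, 1.0], [1.0, 0.0]])

for r, s, t in [(1.0, 2.0, 0.5),   # s^2 > r^2 sin^2 t: unbroken, real spectrum
                (2.0, 0.5, 1.0)]:  # s^2 < r^2 sin^2 t: broken, conjugate pair
    H = pt_hamiltonian(r, s, t)
    assert np.allclose(P @ H.conj() @ P, H)  # check PT symmetry
    print(np.round(np.linalg.eigvals(H), 4))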
"Quantised Three-Pillar Problem" by Carl M. Bender, Dorje C. Brody, Bernhard K. Meister (arXiv:quant-ph/0302097; 2003.02.12) – http://arxiv.org/abs/quant-ph/0302097
Abstract: This paper examines the quantum mechanical system that arises when one quantises a classical mechanical configuration described by an underdetermined system of equations. Specifically, we consider the well-known problem in classical mechanics in which a beam is supported by three identical rigid pillars. For this problem it is not possible to calculate uniquely the forces supplied by each pillar. However, if the pillars are replaced by springs, then the forces are uniquely determined. The three-pillar problem and its associated indeterminacy is recovered in the limit as the spring constant tends to infinity. In this paper the spring version of the problem is quantised as a constrained dynamical system. It is then shown that as the spring constant becomes large, the quantum analog of the ambiguity reemerges as a kind of quantum anomaly.
"Calculation of the Hidden Symmetry Operator in PT-Symmetric Quantum Mechanics" by Carl M. Bender, Peter N. Meisinger, Qinghai Wang (arXiv:quant-ph/0211166; 2002.11.26) – http://arxiv.org/abs/quant-ph/0211166
Abstract: In a recent paper it was shown that if a Hamiltonian H has an unbroken PT symmetry, then it also possesses a hidden symmetry represented by the linear operator C. The operator C commutes with both H and PT. The inner product with respect to CPT is associated with a positive norm and the quantum theory built on the associated Hilbert space is unitary. In this paper it is shown how to construct the operator C for the non-Hermitian PT-symmetric Hamiltonian $H=\frac{1}{2}p^2+\frac{1}{2}x^2+i\epsilon x^3$ using perturbative techniques. It is also shown how to construct the operator C for $H=p^2+x^2(ix)^\epsilon$ using nonperturbative methods.
"All Hermitian Hamiltonians Have Parity" by Carl M. Bender, Peter N. Meisinger, Qinghai Wang (arXiv:quant-ph/0211123; 2002.11.26) – http://arxiv.org/abs/quant-ph/0211123
Abstract: It is shown that if a Hamiltonian $H$ is Hermitian, then there always exists an operator P having the following properties: (i) P is linear and Hermitian; (ii) P commutes with H; (iii) $P^2=1$; (iv) the nth eigenstate of H is also an eigenstate of P with eigenvalue $(-1)^n$. Given these properties, it is appropriate to refer to P as the parity operator and to say that H has parity symmetry, even though P may not refer to spatial reflection. Thus, if the Hamiltonian has the form $H=p^2+V(x)$, where $V(x)$ is real (so that H possesses time-reversal symmetry), then it immediately follows that H has PT symmetry. This shows that PT symmetry is a generalization of Hermiticity: All Hermitian Hamiltonians of the form $H=p^2+V(x)$ have PT symmetry, but not all PT-symmetric Hamiltonians of this form are Hermitian.
"Complex Extension of Quantum Mechanics" by Carl M. Bender, Dorje C. Brody, Hugh F. Jones (arXiv:quant-ph/0208076 2002.10.30; EconfC0306234:617-628,2003; Phys.Rev.Lett.89:270401,2002;) – http://arxiv.org/abs/quant-ph/0208076
Abstract: It is shown that the standard formulation of quantum mechanics in terms of Hermitian Hamiltonians is overly restrictive. A consistent physical theory of quantum mechanics can be built on a complex Hamiltonian that is not Hermitian but satisfies the less restrictive and more physical condition of space-time reflection symmetry (PT symmetry). Thus, there are infinitely many new Hamiltonians that one can construct to explain experimental data. One might expect that a quantum theory based on a non-Hermitian Hamiltonian would violate unitarity. However, if PT symmetry is not spontaneously broken, it is possible to construct a previously unnoticed physical symmetry C of the Hamiltonian. Using C, an inner product is constructed whose associated norm is positive definite. This construction is completely general and works for any PT-symmetric Hamiltonian. Observables exhibit CPT symmetry, and the dynamics is governed by unitary time evolution. This work is not in conflict with conventional quantum mechanics but is rather a complex generalisation of it.
"Quantum Complex Henon-Heiles Potentials" by Carl M. Bender, Gerald V. Dunne, Peter N. Meisinger, Mehmet Simsek (arXiv:quant-ph/0101095 2001.01.18; Phys.Lett. A281 (2001) 311-316) – http://arxiv.org/abs/quant-ph/0101095
Abstract: Quantum-mechanical PT-symmetric theories associated with complex cubic potentials such as $V=x^2+y^2+igxy^2$ and $V=x^2+y^2+z^2+igxyz$, where $g$ is a real parameter, are investigated. These theories appear to possess real, positive spectra. Low-lying energy levels are calculated to very high order in perturbation theory. The large-order behavior of the perturbation coefficients is determined using multidimensional WKB tunneling techniques. This approach is also applied to the complex Henon-Heiles potential.
"Variational Ansatz for PT-Symmetric Quantum Mechanics" by Carl Bender, Fred Cooper, Peter Meisinger, Van M. Savage (arXiv:quant-ph/9907008 ; Phys.Lett. A259 (1999) 224-231) – http://arxiv.org/abs/quant-ph/9907008
Abstract: A variational calculation of the energy levels of a class of PT-invariant quantum mechanical models described by the non-Hermitian Hamiltonian $H=p^2-(ix)^N$ with N positive and x complex is presented. Excellent agreement is obtained for the ground state and low-lying excited state energy levels and wave functions. We use an energy functional with a three-parameter class of PT-symmetric trial wave functions in obtaining our results.
"Complex Square Well — A New Exactly Solvable Quantum Mechanical Model" by Carl M. Bender (Washington U.), Stefan Boettcher (Emory U.), H. F. Jones (Imperial C.), Van M. Savage (Washington U.) (arXiv:quant-ph/9906057; J.Phys.A32:6771-6781,1999) – http://arxiv.org/abs/quant-ph/9906057
Abstract: Recently, a class of PT-invariant quantum mechanical models described by the non-Hermitian Hamiltonian $H=p^2+x^2(ix)^\epsilon$ was studied. It was found that the energy levels for this theory are real for all $\epsilon\geq 0$. Here, the limit as $\epsilon\to\infty$ is examined. It is shown that in this limit, the theory becomes exactly solvable. A generalization of this Hamiltonian, $H=p^2+x^{2M}(ix)^\epsilon$ ($M=1,2,3,\dots$), is also studied, and this PT-symmetric Hamiltonian becomes exactly solvable in the large-$\epsilon$ limit as well. In effect, what is obtained in each case is a complex analog of the Hamiltonian for the square well potential. Expansions about the large-$\epsilon$ limit are obtained.
"Large-order Perturbation Theory for a Non-Hermitian PT-symmetric Hamiltonian" by Carl M. Bender, Gerald V. Dunne (arXiv:quant-ph/9812039; J.Math.Phys. 40 (1999) 4616-4621) – http://arxiv.org/abs/quant-ph/9812039
Abstract: A precise calculation of the ground-state energy of the complex PT-symmetric Hamiltonian $H=p^2+\frac{1}{4}x^2+i\lambda x^3$ is performed using high-order Rayleigh-Schrödinger perturbation theory. The energy spectrum of this Hamiltonian has recently been shown to be real using numerical methods. The Rayleigh-Schrödinger perturbation series is Borel summable, and Padé summation provides excellent agreement with the real energy spectrum. Padé analysis provides strong numerical evidence that the once-subtracted ground-state energy considered as a function of $\lambda^2$ is a Stieltjes function. The analyticity properties of this Stieltjes function lead to a dispersion relation that can be used to compute the imaginary part of the energy for the related real but unstable Hamiltonian $H=p^2+\frac{1}{4}x^2-\lambda x^3$.
"PT-Symmetric Quantum Mechanics" by Carl Bender (Washington U.), Stefan Boettcher (Los Alamos and Clark Atlanta U.), Peter Meisinger (Washington U.) (arXiv:quant-ph/9809072; J.Math.Phys. 40 (1999) 2201-2229) – http://arxiv.org/abs/quant-ph/9809072
Abstract: This paper proposes to broaden the canonical formulation of quantum mechanics. Ordinarily, one imposes the condition $H=H^\dagger$ on the Hamiltonian, where $\dagger$ represents the mathematical operation of complex conjugation and matrix transposition. This conventional Hermiticity condition is sufficient to ensure that the Hamiltonian $H$ has a real spectrum. However, replacing this mathematical condition by the weaker and more physical requirement $H=H^{\ddagger}$, where $\ddagger$ represents combined parity reflection and time reversal $PT$, one obtains new classes of complex Hamiltonians whose spectra are still real and positive. This generalization of Hermiticity is investigated using a complex deformation $H=p^2+x^2(ix)^\epsilon$ of the harmonic oscillator Hamiltonian, where $\epsilon$ is a real parameter. The system exhibits two phases: When $\epsilon\geq 0$, the energy spectrum of $H$ is real and positive as a consequence of $PT$ symmetry. However, when $-1<\epsilon<0$, the spectrum contains an infinite number of complex eigenvalues and a finite number of real, positive eigenvalues because $PT$ symmetry is spontaneously broken. Similar qualitative features are exhibited by complex deformations of other standard real Hamiltonians $H=p^2+x^{2N}(ix)^\epsilon$ with $N$ integer; each of these complex Hamiltonians exhibits a phase transition at $\epsilon=0$. These $PT$-symmetric theories may be viewed as analytic continuations of conventional theories from real to complex phase space.
"Quasi-exactly solvable quartic potential" by Carl M. Bender, Stefan Boettcher (CNLS, Los Alamos and CTSPS, Clark Atlanta University) (arXiv:physics/9801007; J.Phys. A31 (1998) L273-L277) – http://arxiv.org/abs/physics/9801007
Abstract: A new two-parameter family of quasi-exactly solvable quartic polynomial potentials $V(x)=-x^4+2iax^3+(a^2-2b)x^2+2i(ab-J)x$ is introduced. Until now, it was believed that the lowest-degree one-dimensional quasi-exactly solvable polynomial potential is sextic. This belief is based on the assumption that the Hamiltonian must be Hermitian. However, it has recently been discovered that there are huge classes of non-Hermitian, PT-symmetric Hamiltonians whose spectra are real, discrete, and bounded below [physics/9712001]. Replacing Hermiticity by the weaker condition of PT symmetry allows for new kinds of quasi-exactly solvable theories. The spectra of the family of quartic potentials discussed here are also real, discrete, and bounded below, and when J is a positive integer the quasi-exact portion of the spectra consists of the lowest J eigenvalues. These eigenvalues are the roots of a Jth-degree polynomial.
"Real Spectra in Non-Hermitian Hamiltonians Having PT Symmetry" by Carl M. Bender, Stefan Boettcher (CNLS, Los Alamos, and CTSPS, Clark Atlanta University) (arXiv:physics/9712001; Phys.Rev.Lett. 80 (1998) 5243-5246) – http://arxiv.org/abs/physics/9712001
Abstract: The condition of self-adjointness ensures that the eigenvalues of a Hamiltonian are real and bounded below. Replacing this condition by the weaker condition of PT symmetry, one obtains new infinite classes of complex Hamiltonians whose spectra are also real and positive. These PT-symmetric theories may be viewed as analytic continuations of conventional theories from real to complex phase space. This paper describes the unusual classical and quantum properties of these theories.
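The reality of the spectrum of the simplest member of this family, H = p^2 + ix^3, is easy to check numerically (a rough finite-difference sketch; Dirichlet truncation of the real axis is assumed, which is legitimate here because the eigenfunctions of this potential decay along the real axis):

import numpy as np

# Discretize H = -d^2/dx^2 + i x^3 on [-L, L] with central differences.
L, N = 8.0, 1200
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
main = 2.0 / dx**2 + 1j * x**3
off = -np.ones(N - 1) / dx**2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

ev = np.linalg.eigvals(H)
ev = ev[np.argsort(ev.real)][:4]
print(np.round(ev, 4))  # ~ 1.1563, 4.1092, 7.5622, 11.3144, Im parts ~ 0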
[…] In line with my critique, I recommend reading Peter Woit, "Some Math and Physics Interactions," Not Even Wrong, 05 Apr 2017. Incidentally, Carl Bender used to have a blog, now abandoned; I recommend reading "PT-Quantum Mechanics," Eikonal Blog, 18 Apr 2011. […]
Pingback by Hacia la prueba de la hipótesis de Riemann usando simulaciones cuánticas | Ciencia | La Ciencia de la Mula Francis — 2017.04.06 @ 06:16
== Editing pages ==

The most detailed explanation is given on the [http://www.mediawiki.org/wiki/Help:Editing_pages corresponding page] of MediaWiki. In MolWiki.org the most important principles are:
* Please follow the [http://www.scitext.com/writing.php general practice] of good scientific writing.
* Prove your statements by including references to peer-reviewed journals.
* Use the preview functionality as much as possible! Try not to save changes just for testing purposes.

=== Molecule-related pages ===

MolWiki.org, among other information, also contains pages related to particular molecules. The naming convention for such pages is based on the [http://en.wikipedia.org/wiki/Hill_system Hill system]. The full name of a molecule-related page should be constructed as a chemical formula according to the Hill system plus an index number, which makes the page unique. Example: [[C10H17B-1]].<br/>
Index numbers are used to distinguish different [http://en.wikipedia.org/wiki/Structural_isomer structural isomers]. Different conformers belonging to the same structural isomer are described in one article.<br/>
It is also reasonable to create an additional page with a human-readable name corresponding to the compound, which [http://www.mediawiki.org/wiki/Help:Redirects redirects] to the main page of the molecule. Example: [[3-Methyl-1-boraadamantane]].<br/>
You can find a 'copy/paste' template for your new article here: [[Help:MolTemplate]]

=== Source code ===

To publish source code with highlighted syntax, use code tags with an indication of the language type:
<pre>
<syntaxhighlight lang="your_language">
Your source code goes here...
</syntaxhighlight>
</pre>
The following example is for Fortran:
<pre>
<syntaxhighlight lang="fortran">
C A Simple program
      program hello
      print *, "Hello World!"
      end program hello
</syntaxhighlight>
</pre>
The result is
<syntaxhighlight lang="fortran">
C A Simple program
      program hello
      print *, "Hello World!"
      end program hello
</syntaxhighlight>

=== Mathematical formulae ===

You can include formulae by using <math></math> tags and LaTeX syntax. The example
<pre>
<math>
\operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt = \frac{e^{-x^2}}{x\sqrt{\pi}}\sum_{n=0}^\infty (-1)^n \frac{(2n)!}{n!(2x)^{2n}}
</math>
</pre>
produces the following output:<br />
<math>
\operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt = \frac{e^{-x^2}}{x\sqrt{\pi}}\sum_{n=0}^\infty (-1)^n \frac{(2n)!}{n!(2x)^{2n}}
</math>
You can click on the formula to enlarge it.

=== Molecular structures ===

MolWiki.org has a Jmol extension, which can be used to show molecular structures interactively. You can upload an XYZ file and visualize it as:
<pre>
<jmol>
<jmolApplet>
<uploadedFileContents>C26NH28.xyz</uploadedFileContents>
</jmolApplet>
</jmol>
</pre>
The example produces the following:<br />
<jmol>
<jmolApplet>
<uploadedFileContents>C26NH28.xyz</uploadedFileContents>
</jmolApplet>
</jmol>
<br />
You can also do this without uploading a file. Instead, paste molecular data right in the text of the page.
The following example
<pre>
<jmol>
<jmolApplet>
<inlineContents>
11
test
F    0.000000000000   -0.000000000000   -1.685455090782
C    0.000000000000   -0.000000000000   -0.385040888542
N   -1.006202819407    1.006202819407    0.139899574177
O   -1.068429240123    2.026449428610   -0.506881519565
O   -1.600251840223    0.695487106176    1.148009747048
N   -0.368295793263   -1.374498612670    0.139899574177
O   -1.220742064599   -1.938511578398   -0.506881519565
O    0.197816418158   -1.733602299174    1.148009747048
N    1.374498612670    0.368295793263    0.139899574177
O    2.289171304722   -0.087937850212   -0.506881519565
O    1.402435422065    1.038115192997    1.148009747048
</inlineContents>
</jmolApplet>
</jmol>
</pre>
creates the window with the structure:
<jmol>
<jmolApplet>
<inlineContents>
11
test
F    0.000000000000   -0.000000000000   -1.685455090782
C    0.000000000000   -0.000000000000   -0.385040888542
N   -1.006202819407    1.006202819407    0.139899574177
O   -1.068429240123    2.026449428610   -0.506881519565
O   -1.600251840223    0.695487106176    1.148009747048
N   -0.368295793263   -1.374498612670    0.139899574177
O   -1.220742064599   -1.938511578398   -0.506881519565
O    0.197816418158   -1.733602299174    1.148009747048
N    1.374498612670    0.368295793263    0.139899574177
O    2.289171304722   -0.087937850212   -0.506881519565
O    1.402435422065    1.038115192997    1.148009747048
</inlineContents>
</jmolApplet>
</jmol>
Sometimes structures defined in [http://wiki.jmol.org/index.php/File_formats/Coordinates#XYZ XYZ] format are visualized by Jmol with some missing bonds. In this case it is better to define the structure in [http://wiki.jmol.org/index.php/File_formats/Coordinates#MOL_and_SD_.28Symyx_MDL.29 MOL] format like this:
<pre>
<jmol>
<jmolApplet>
<inlineContents>
1
 OpenBabel06141312083D

 13 12  0  0  0  0  0  0  0  0999 V2000
    0.0001    0.0000   -0.6042 P   0  0  0  0  0  0  0  0  0  0  0  0
    1.6159   -0.3009    0.2804 C   0  0  0  0  0  0  0  0  0  0  0  0
    2.0189   -1.2740   -0.0163 H   0  0  0  0  0  0  0  0  0  0  0  0
    2.3421    0.4621   -0.0161 H   0  0  0  0  0  0  0  0  0  0  0  0
    1.5107   -0.2814    1.3713 H   0  0  0  0  0  0  0  0  0  0  0  0
   -1.0685   -1.2489    0.2803 C   0  0  0  0  0  0  0  0  0  0  0  0
   -2.1129   -1.1107   -0.0158 H   0  0  0  0  0  0  0  0  0  0  0  0
   -0.7715   -2.2593   -0.0167 H   0  0  0  0  0  0  0  0  0  0  0  0
   -0.9986   -1.1680    1.3712 H   0  0  0  0  0  0  0  0  0  0  0  0
   -0.5474    1.5497    0.2804 C   0  0  0  0  0  0  0  0  0  0  0  0
    0.0941    2.3852   -0.0159 H   0  0  0  0  0  0  0  0  0  0  0  0
   -1.5711    1.7973   -0.0165 H   0  0  0  0  0  0  0  0  0  0  0  0
   -0.5121    1.4486    1.3712 H   0  0  0  0  0  0  0  0  0  0  0  0
  1  6  1  0  0  0  0
  1 10  1  0  0  0  0
  1  2  1  0  0  0  0
  2  5  1  0  0  0  0
  3  2  1  0  0  0  0
  4  2  1  0  0  0  0
  6  9  1  0  0  0  0
  7  6  1  0  0  0  0
  8  6  1  0  0  0  0
 10 13  1  0  0  0  0
 11 10  1  0  0  0  0
 12 10  1  0  0  0  0
</inlineContents>
</jmolApplet>
</jmol>
</pre>
The structure is then visualized with all explicitly defined bonds:
<jmol>
<jmolApplet>
<inlineContents>
1
 OpenBabel06141312083D

 13 12  0  0  0  0  0  0  0  0999 V2000
    0.0001    0.0000   -0.6042 P   0  0  0  0  0  0  0  0  0  0  0  0
    1.6159   -0.3009    0.2804 C   0  0  0  0  0  0  0  0  0  0  0  0
    2.0189   -1.2740   -0.0163 H   0  0  0  0  0  0  0  0  0  0  0  0
    2.3421    0.4621   -0.0161 H   0  0  0  0  0  0  0  0  0  0  0  0
    1.5107   -0.2814    1.3713 H   0  0  0  0  0  0  0  0  0  0  0  0
   -1.0685   -1.2489    0.2803 C   0  0  0  0  0  0  0  0  0  0  0  0
   -2.1129   -1.1107   -0.0158 H   0  0  0  0  0  0  0  0  0  0  0  0
   -0.7715   -2.2593   -0.0167 H   0  0  0  0  0  0  0  0  0  0  0  0
   -0.9986   -1.1680    1.3712 H   0  0  0  0  0  0  0  0  0  0  0  0
   -0.5474    1.5497    0.2804 C   0  0  0  0  0  0  0  0  0  0  0  0
    0.0941    2.3852   -0.0159 H   0  0  0  0  0  0  0  0  0  0  0  0
   -1.5711    1.7973   -0.0165 H   0  0  0  0  0  0  0  0  0  0  0  0
   -0.5121    1.4486    1.3712 H   0  0  0  0  0  0  0  0  0  0  0  0
  1  6  1  0  0  0  0
  1 10  1  0  0  0  0
  1  2  1  0  0  0  0
  2  5  1  0  0  0  0
  3  2  1  0  0  0  0
  4  2  1  0  0  0  0
  6  9  1  0  0  0  0
  7  6  1  0  0  0  0
  8  6  1  0  0  0  0
 10 13  1  0  0  0  0
 11 10  1  0  0  0  0
 12 10  1  0  0  0  0
</inlineContents>
</jmolApplet>
</jmol>
Conversion from XYZ to MOL format can be done easily with the [http://openbabel.org OpenBabel] program:
 $ babel file.xyz file.mol

== Participating in discussion ==

Each article has a dedicated discussion page where you can ask questions, find more information on the topic and talk to other people. [http://www.mediawiki.org/wiki/Help:Talk_pages More information] about discussion pages is on the MediaWiki site.

== Bibliography ==

MolWiki.org has a central repository for references. You can find it in [[Special:SpecialPages]] in the category BibManager. Whenever you want to cite an article, first check whether the repository already contains the corresponding entry. If not, you have to add a new entry manually or import it automatically as a BibTeX string. Note that only registered users can add new entries or edit existing ones. Each entry has its own unique id called ''Citation'', which is then used for making automatic references in articles of MolWiki.org. This id should be defined manually upon creation of a new entry. The ids are written as a composition of the name of the first author of an article, the capital letters of the journal (or book, ...) name and the year of publication. For example, BergerZFN2009. More examples can be found in the [[Special:BibManagerList|overview of BibManager entries]].<br />
<br />
When you want to cite a paper in text, use the bib tag as in this example:
<pre><bib id="BergerZFN2009" /></pre>
which automatically produces a link like <bib id="BergerZFN2009" />.
From generating series to polynomial congruences
Mattarei, Sandro and Tauraso, Roberto (2018) From generating series to polynomial congruences. Journal of Number Theory, 182, pp. 179-205. ISSN 0022-314X
Full content URL: http://dx.doi.org/10.1016/j.jnt.2017.06.007
Licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International
Consider an ordinary generating function $\sum_{k=0}^{\infty}c_kx^k$, of an integer sequence of some combinatorial relevance, and assume that it admits a closed form $C(x)$. Various instances are known where the corresponding truncated sum $\sum_{k=0}^{q-1}c_kx^k$, with $q$ a power of a prime $p$, also admits a closed form representation when viewed modulo $p$. Such a representation for the truncated sum modulo $p$ frequently bears a resemblance with the shape of $C(x)$, despite being typically proved through independent arguments. One of the simplest examples is the congruence $\sum_{k=0}^{q-1}\binom{2k}{k}x^k\equiv(1-4x)^{(q-1)/2}\pmod{p}$ being a finite match for the well-known generating function $\sum_{k=0}^\infty\binom{2k}{k}x^k= 1/\sqrt{1-4x}$. We develop a method which allows one to directly infer the closed-form representation of the truncated sum from the closed form of the series for a significant class of series involving central binomial coefficients. In particular, we collect various known such series whose closed-form representation involves polylogarithms ${\rm Li}_d(x)=\sum_{k=1}^{\infty}x^k/k^d$, and after supplementing them with some new ones we obtain closed-forms modulo $p$ for the corresponding truncated sums, in terms of finite polylogarithms $\pounds_d(x)=\sum_{k=1}^{p-1}x^k/k^d$.
binomial coefficients, harmonic numbers, polylogarithms, generating functions
G Mathematical and Computer Sciences > G110 Pure Mathematics
College of Science > School of Mathematics and Physics
Bijection between irreducible representations and conjugacy classes of finite groups
Is there some natural bijection between irreducible representations and conjugacy classes of finite groups (as in case of $S_n$)?
rt.representation-theory
$\begingroup$ I do not see how it is natural for $S_n$, $n>3$? Surely, you can use either a partition or its dual partition for a representation. Is there a mathematical reason to choose one over another or is it purely historical? $\endgroup$ – Bugs Bunny Jul 23 '12 at 9:37
$\begingroup$ @Bugs Bunny: maybe one can argue that there are two natural choices in this case? It all boils down to interpreting the word "natural", of course. $\endgroup$ – Vladimir Dotsenko Jul 23 '12 at 13:14
$\begingroup$ @Bugs It's fairly natural. For any partition $\lambda$ of $n$, let $S(\lambda)$ be the subgroup $\prod S_{\lambda_i}$ of $S_n$. Let $H(\lambda)$ be the induction of the trivial rep from $S(\lambda)$ to $S_n$. The characters of the $H(\lambda)$ are upper triangular in the characters of the irreps; the conjugacy classes pairs upper triangularly against the $H(\lambda)$. The pairing is forced by wanting these matrices to be upper triangular rather than a permutation of upper triangular. I'm not sure that there is no other pairing which achieves this, but most of them won't work. $\endgroup$ – David E Speyer Jul 23 '12 at 13:30
$\begingroup$ Is there any other case in math where a bijection exists, but there is no natural one? $\endgroup$ – Alexander Chervov Jul 24 '12 at 7:30
$\begingroup$ Here is a quote from Stanley, EC2, section 7.18: "We have described a natural way to index the irreducible characters of $S_n$ by partitions of $n$, while the cycle type of a permutation defines a natural indexing of the conjugacy classes of $S_n$ by partitions of $n$. Hence we have a canonical bijection between the conjugacy classes and the irreducible characters of $S_n$. However, this bijection is essentially 'accidental' and does not have any useful properties. For arbitrary finite groups there is in general no canonical bijection between irreducible characters and conjugacy classes." $\endgroup$ – Sam Hopkins Jan 6 '14 at 18:54
This is a different take on Steven Landsburg's answer. The short version is that conjugacy classes and irreducible representations should be thought of as being dual to each other.
Fix an algebraically closed field $k$ of characteristic not dividing the order of our finite group $G$. The group algebra $k[G]$ is a finite-dimensional Hopf algebra, so its dual is also a finite-dimensional Hopf algebra of the same dimension; it is the Hopf algebra of functions $G \to k$, which I will denote by $C(G)$. (The former is cocommutative but not commutative in general, while the latter is commutative but not cocommutative in general.) The dual pairing $$k[G] \times C(G) \to k$$
is equivariant with respect to the action of $G$ by conjugation, and it restricts to a dual pairing $$Z(k[G]) \times C_{\text{cl}}(G) \to k$$
on the subalgebras fixed by conjugation; $Z(k[G])$ is the center of $k[G]$ and $C_{\text{cl}}(G)$ is the space of class functions $G \to k$. Now:
The maximal spectrum of $Z(k[G])$ can be canonically identified with the irreducible representations of $G$, and the maximal spectrum of $C_{\text{cl}}(G)$ can be canonically identified with the conjugacy classes of $G$.
The second identification should be clear; the first comes from considering the central character of an irreducible representation. Now, the pairing above is nondegenerate, so to every point of the maximal spectrum of $Z(k[G])$ we can canonically associate an element of $C_{\text{cl}}(G)$ (the corresponding irreducible character) and to every point of the maximal spectrum of $C_{\text{cl}}(G)$ we can canonically associate an element of $Z(k[G])$ (the corresponding sum over a conjugacy class divided by its size).
$\begingroup$ @Qiaochu: This is a nice abstract way to re-focus rigorously the original somewhat fuzzy question. But I still feel unable to treat concrete cases involving finite groups of Lie type in any explicit way: e.g., given the family $\mathrm{SL}_2(\mathbb{F}_p)$, how to prescribe an actual bijection (uniformly for all odd primes) and pass to the quotient group by the center as well? Even for $S_n$ I'm reminded that the original Springer Correspondence assigned characters/partitions to cohomology degrees of the flag variety in one way which later got dualized/transposed. $\endgroup$ – Jim Humphreys Jul 24 '12 at 20:47
$\begingroup$ I'm not sure I understand your question. I'm not claiming that a bijection between irreducible representations and conjugacy classes exist. What this argument does is exhibit irreducible representations and conjugacy classes as canonical bases of two vector spaces which are canonically dual to each other. But these bases are not dual to each other, so I don't get a bijection this way. $\endgroup$ – Qiaochu Yuan Jul 24 '12 at 22:39
$\begingroup$ @Qiaochu: I guess I was expecting a more explicit answer to the original question, such as "no" (?). My concern beyond that is whether your interesting higher level viewpoint adds any concrete detail about actual finite groups of interest, starting with symmetric groups. $\endgroup$ – Jim Humphreys Aug 2 '12 at 18:08
$\begingroup$ @Jim: yes, my answer is essentially "no." That is, I'm suggesting that there shouldn't be a nice bijection for arbitrary finite groups. I think one way to see this is to note that there is no reasonable sense in which this is true for infinite groups (e.g. $\text{SU}(2)$ has countably many irreducible representations but uncountably many conjugacy classes). The correct generalization of the duality above is that by Peter-Weyl, for a compact Hausdorff group $G$ the characters of irreducible representations are dense in the space of class functions. $\endgroup$ – Qiaochu Yuan Aug 2 '12 at 18:35
$\begingroup$ Could there more generally be some kind of duality between irreducible representations of $G$ over a field $K$ and the $K$-conjugacy classes of $K$-regular elements of $G$? $\endgroup$ – anon Feb 15 '14 at 3:24
In general there is no natural bijection between conjugacy classes and irreducible representations of a finite group. To see this think of abelian groups for example. The conjugacy classes are the elements of the group, while the irreducible representations are elements of the dual group. These are isomorphic, via the Fourier transform, but not canonically.
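For instance, for $G=\mathbb{Z}/n\mathbb{Z}$ the irreducible characters are $\chi_a(x)=e^{2\pi iax/n}$ with $a\in\mathbb{Z}/n\mathbb{Z}$, so $a\mapsto\chi_a$ is a bijection $G\to\hat{G}$; but it depends on the choice of the primitive root of unity $e^{2\pi i/n}$, and composing with any automorphism $a\mapsto ka$, $\gcd(k,n)=1$, gives an equally good bijection, so none of them is canonical.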
Gjergji Zaimi
$\begingroup$ Nevertheless, among all set-theoretic bijections one has a subclass of "good" bijections which are isomorphisms of the groups G and G^dual. $\endgroup$ – Alexander Chervov Sep 16 '12 at 14:42
I would suggest that no such general natural bijection has been found to date. I am not sure how one would "prove" that such a natural bijection could not be found, notwithstanding Gjergji's answer. I take the view that the equality between the number of irreducible characters (over an algebraically closed field of characteristic zero) and the number of conjugacy classes is most naturally obtained by counting the dimension of the center of the group algebra in two different categorical settings: from the group-theoretic perspective, the natural distinguished basis for the group algebra (the group elements) makes it clear that the dimension of the center is the number of conjugacy classes. On the other hand, from a ring-theoretic perspective, the structure of semi-simple algebras makes it clear that the dimension of the center of the group algebra is the number of isomorphism types of simple modules, that is, the number of irreducible characters. Moving to prime characteristic (still over an algebraically closed field, now of characteristic $p$, say), it is rather more difficult to prove, as R. Brauer did, that the number of isomorphism types of simple modules is the number of conjugacy classes of group elements of order prime to $p.$ However, there are contemporary conjectures in modular representation theory which suggest that there may one day be a different explanation for this equality. In particular, Alperin's weight conjecture suggests counting the number of (isomorphism types of) absolutely irreducible modules in characteristic $p$ in quite a different way, but one which still degenerates to the usual "non-natural" count when the characteristic $p$ does not divide the group order, which is essentially the same as the characteristic zero case. No general conceptual explanation for the conjectural count of Alperin has been found to date, though a number of approaches have been suggested, including a 2-category perspective. But it is not impossible that such an explanation could one day be found, and such an explanation might shed light even on the "easy" characteristic zero situation.
Later edit: In view of some of the comments below on the action of the automorphism group on irreducible characters and on conjugacy classes (which is really an action of the outer automorphism group, since inner automorphisms act trivially in each case), I make some comments on (well-known) properties of these actions, which while not identical, have many compatible features.
Brauer's permutation lemma states that for any automorphism $a$ of the finite group $G,$ the number of $a$-stable complex irreducible characters of $G$ is the same as the number of $a$-stable conjugacy classses. Hence any subgroup of ${\rm Aut}(G)$ has the same number of orbits on irreducible characters as it does on conjugacy classes. The Glauberman correspondence goes further with a group of automorphisms $A$ of order coprime to $|G|$. In that case the $A$-actions on the irreducible characters of $G$ and on the conjugacy classes of $G$ are permutation isomorphic.
While the actions of a general subgroup of the automorphism group are not always as strongly compatible as in the coprime case, various conjectures from modular representation theory suggest that it might be possible to have more compatibilty when dealing with complexes of modules than with individual modules. As a matter of speculation, I have sometimes wondered whether there might be some analogue of Glauberman correspondence in the non-coprime situation for actions on suitable complexes, although I have no idea for a precise formulation at present. Since the dimension of the center of an algebra is invariant under derived equivalence, this is one reason why I do not dismiss the idea of a more subtle explanation for numerical equalities.
Geoff Robinson
$\begingroup$ If I read the discussion at mathoverflow.net/questions/46900/… correctly, no bijection between the conjugacy classes and the irreducible representations respects the action of outer automorphisms. That seems like pretty convincing evidence to me. $\endgroup$ – Qiaochu Yuan Jul 22 '12 at 21:53
$\begingroup$ mathoverflow.net/questions/21606/… seems to say the same thing. $\endgroup$ – Qiaochu Yuan Jul 22 '12 at 21:54
$\begingroup$ @Qiaochu: It's a matter of opinion to some extent. That is some evidence, but I wouldn't consider it conclusive myself. $\endgroup$ – Geoff Robinson Jul 22 '12 at 22:06
$\begingroup$ In my opinion, before I'd call a bijection natural, I'd want it to be invariant under isomorphisms of groups. That is, it should depend only on the group structure, not on what the specific elements of the group are. In particular, then, it should be invariant under arbitrary automorphisms, including outer ones. $\endgroup$ – Andreas Blass Jul 22 '12 at 23:22
$\begingroup$ For what it's worth, even if the original straightforward version of the question has a somewhat negative answer (instead, duality...), as a general methodological attitude I endorse "not giving up tooo easily". Thus, I like Geoff R.'s points. Yet, dangit, there's that pesky factoid about (outer) automorphisms...?!? Ok, well, I myself "concede" that automorphisms' action on repns is natural in all ways that likely concern me (tho' I'm open to persuasion otherwise). So then the scope of that negative result is relevant. Still, "the orbit method"... and all that. Fun stuff... $\endgroup$ – paul garrett Jul 23 '12 at 0:39
Let $k$ be an algebraically closed field whose characteristic is either zero or prime to the order of $G$.
Then the center of the group ring $kG$ has one basis in natural bijective correspondence with the set of irreducible representations of $G$ over $k$, and another basis in natural bijective correspondence with the conjugacy classes of $G$.
1) $kG$ is semisimple (this is called Maschke's Theorem) and Artinian, so it is a direct sum of matrix rings over division rings, hence (because $k$ is algebraically closed) a direct sum of matrix rings over $k$. There is (up to isomorphism) one irreducible representation for each of these matrix rings. Those representations are therefore in natural one-one correspondence with the central idempotents that generate those matrix rings, and these form a basis for the center.
2) For each conjugacy class, we can form the sum of all elements in that conjugacy class. The resulting elements of $kG$ form a basis for the center.
This gives a (non-natural) bijection between irreducible representations and conjugacy classes, because there is a (non-natural) bijection between any two bases for a given finite-dimensional $k$-vector space. I do not see any way you can make this natural.
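Both bases are easy to exhibit by direct computation in a small case (a minimal Python sketch for $S_3$ over $\mathbb{C}$, with the character table hard-coded; an illustration, not part of the argument above):

from itertools import permutations

# Group algebra C[S_3]: elements are dicts {permutation tuple: coefficient}.
G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
def inverse(p):
    r = [0] * 3
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def mult(a, b):  # convolution product in the group algebra
    c = {g: 0.0 for g in G}
    for g, xv in a.items():
        for h, yv in b.items():
            c[compose(g, h)] += xv * yv
    return c

def n_fixed(p):  # 3 fixed pts: identity, 1: transposition, 0: 3-cycle
    return sum(p[i] == i for i in range(3))

# Character table of S_3, keyed by number of fixed points.
chars = {'trivial': {3: 1, 1: 1, 0: 1},
         'sign': {3: 1, 1: -1, 0: 1},
         'standard': {3: 2, 1: 0, 0: -1}}

# Central idempotents e_chi = (dim chi / |G|) sum_g chi(g^{-1}) g.
for name, chi in chars.items():
    e = {g: chi[3] * chi[n_fixed(inverse(g))] / len(G) for g in G}
    ee = mult(e, e)
    assert all(abs(ee[g] - e[g]) < 1e-12 for g in G)  # e^2 = e
    for h in G:  # e commutes with every group element, so e is central
        d1, d2 = mult(e, {h: 1.0}), mult({h: 1.0}, e)
        assert all(abs(d1[g] - d2[g]) < 1e-12 for g in G)
print('three central idempotents, matching the three class sums of S_3')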
Steven Landsburg
$\begingroup$ This only works when $k$ is algebraically closed, and the result itself also only works in that generality. For example, over $\mathbb{R}$, the cyclic group of order $3$ has two irreducible representations but still has three conjugacy classes. The error in the argument is that, for a division ring $\Delta$ occuring in the decomposition of $k[G]$, the contribution of $M_n(\Delta)$ to $Z(k[G])$ is $Z(\Delta)$, which may be larger than $k$. $\endgroup$ – David E Speyer Jul 22 '12 at 22:47
$\begingroup$ David Speyer: Point taken; I've inserted the words "algebraically closed" where I ought to have inserted them in the first place. $\endgroup$ – Steven Landsburg Jul 22 '12 at 23:30
$\begingroup$ May I suggest that you then change "matrix rings over division rings" to "matrix rings over $k$"? Since an algebraically closed field has no nontrivial finite division ring extensions. $\endgroup$ – David E Speyer Jul 23 '12 at 10:08
Expanding slightly on the other answers:
To ask for a "natural" bijection is presumably to ask for a natural isomorphism between two functors from the category of finite groups to the category of sets. First, we have the contravariant functor $S$ that associates to each $G$ the set of isomorphism classes of irreducible representations. Then we have the covariant "functor" $T$ that associates to each $G$ the set of its conjugacy classes.
The first problem is that $T$ is not in fact functorial, because the image of a conjugacy class might not be a conjugacy class. So at the very least we should restrict to some subcategory on which $T$ is functorial, e.g. finite groups and surjective morphisms.
But the key problem still remains: There is no good way to define a natural transformation between two functors of opposite variances. So when I said in my earlier answer that "I do not see any way you can make this natural" I might better have said "This is not a situation in which the notion of naturality makes sense".
All of this, of course, is really just an expansion of Gjergji's and Qiaochu's observations.
$\begingroup$ Representations can be induced (losing irreducibility), so it can also be thought of as a contravariant functor. Many problems still exist but ..... $\endgroup$ – Alexander Chervov Sep 8 '12 at 9:25
$\begingroup$ There is no hope for a unique bijection in general, but there are some "good" bijections in many examples, although I do not know a definitive characterization.... $\endgroup$ – Alexander Chervov Sep 8 '12 at 9:32
Steven's and Gjergji's answers point out that there is no natural bijection; however, possibly this idea should not be thrown into the rubbish completely.

Ideologically, conjugacy classes and irreducible representations are somewhat dual to each other.

Another instance of this "duality" is Kirillov's orbit method - this is an "infinitesimal version" of the duality: orbits in the Lie algebra are infinitesimal versions of conjugacy classes. But pay attention: the orbits are taken not in the Lie algebra g, but in the dual space g^*. This again manifests that irreps and conjugacy classes are dual to each other. However, think of a semi-simple Lie algebra - then g^* and g can be canonically identified...

Another instance is the Langlands parametrization of the unitary irreducible representations of a real Lie group G. They are parametrized by conjugacy classes in the Langlands dual group G^L. Again, these are conjugacy classes in G^L, not in G itself. However, for example, GL = GL^L...
So it might be that one should ask: what are the groups such that conjugacy classes and irreps are in some natural bijection, or something like this?

Here is a natural map from conjugacy classes to representations. But it does not map to irreducible ones, and it is far from being a bijection in general.

A colleague of mine suggested the following - take the vector space of functions on the group which are equal to zero everywhere except on a given conjugacy class "C". We can act on these functions by $f \to g f g^{-1}$ - such an action preserves this class. So we get some representation. In the case of an abelian group this gives the trivial representation; however, in general it might be non-trivial. It always has a trivial component - the function which is constant on "C".

I have not yet thought about how this representation can be further decomposed; maybe it is well-known?
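For what it's worth, the $S_3$ example mentioned in the comments below can be checked numerically (a minimal Python sketch; the character table of $S_3$ is hard-coded):

from itertools import permutations

# S_3 as permutation tuples; compose(p, q) acts as p(q(i)).
G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
def inverse(p):
    r = [0] * 3
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def n_fixed(p):  # 3 fixed pts: identity, 1: transposition, 0: 3-cycle
    return sum(p[i] == i for i in range(3))

irreps = {'trivial': {3: 1, 1: 1, 0: 1},
          'sign': {3: 1, 1: -1, 0: 1},
          'standard': {3: 2, 1: 0, 0: -1}}

C = [p for p in G if n_fixed(p) == 1]  # the class of transpositions

# Character of the conjugation action on functions supported on C:
# chi(g) = number of c in C with g c g^{-1} = c.
chi = {g: sum(compose(compose(g, c), inverse(g)) == c for c in C) for g in G}

for name, table in irreps.items():
    mult = sum(chi[g] * table[n_fixed(g)] for g in G) / len(G)
    print(name, mult)  # trivial 1.0, sign 0.0, standard 1.0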
Alexander Chervov
$\begingroup$ Take a finite abelian group $G$ and fix a non-degenerate pairing $G \times G \to \mathbb{C}^{\times}$. Unlike in the case of semisimple Lie algebras I do not see a canonical way to pick such a pairing. $\endgroup$ – Qiaochu Yuan Jul 22 '12 at 19:21
$\begingroup$ @Qiaochu Yuan I agree. I did not pretend that there always exists some natural bijection. Just wanted to softly point out that "not giving up tooo easily", as Paul Garrett wrote in his comment above. E.g. Take G=Z/2Z in this case we may say there is natural bijection :) $\endgroup$ – Alexander Chervov Jul 23 '12 at 7:39
$\begingroup$ @Geoff Sorry, maybe I am misunderstanding; however, it seems my question was different. I do not take "class functions" (functions constant on a conj. class (is it correct?)) but take functions NON-constant on the conj. class. Pay attention: I am NOT acting by G in the standard way - but act by conjugation f -> g f g^-1, hence non-constant functions are preserved by this action. This is a representation which clearly has the trivial one as a submodule (class functions form the trivial submodule). How is this repr. decomposed? $\endgroup$ – Alexander Chervov Jul 24 '12 at 9:50
$\begingroup$ E.g. take S_3; then the conj. class corresponding to transpositions (just 3 elements) generates the standard 3D representation. We know it is trivial + 2D irreducible representation $\endgroup$ – Alexander Chervov Jul 24 '12 at 9:54
$\begingroup$ OK Alexander. I did not read carefully enough. $\endgroup$ – Geoff Robinson Jul 24 '12 at 10:31
It appears that a similar question was asked at sci.math.research on Tue, 19 Oct 1999. The answer by G. Kuperberg is quite interesting. I hope no one minds if I put it here:
As Torsten Ekedahl explained, it is sometimes the wrong question, but in modified form, the answer is sometimes yes.
For example, consider A_5, or its central extension Gamma = SL(2,5). The two 3-dimensional representations are Galois conjugates and there is no way to choose one or the other in association with the conjugacy classes. However, if you choose an embedding pi of Gamma in SU(2), then there is a specific bijection given by the McKay correspondence. The irreducible representations form an extended E_8 graph where two representations are connected by an edge if you can get from one to the other by tensoring with pi. The conjugacy classes also form an E_8 graph if you resolve the singularity of the algebraic surface C^2/Gamma. The resolution consists of 8 projective lines intersecting in an E_8 graph. If you take the unit 3-sphere S^3 in C^2, then the resolution gives you a surgery presentation of the 3-manifold S^3/Gamma. The surgery presentation then gives you a presentation of Gamma itself called the Wirtinger presentation. As it happens, each of the Wirtinger generators lies in a different non-trivial conjugacy class. In this way both conjugacy classes and irreps. are in bijection with the vertices of E_8.
Alexander Chervov
$\begingroup$ @Greg Kuperberg I hope you don't mind me to put it here. I quoted it in mathoverflow.net/questions/153731/… $\endgroup$ – Alexander Chervov Jan 6 '14 at 13:19
$\begingroup$ See also: mathoverflow.net/questions/172517/… $\endgroup$ – Alexander Chervov Jun 28 '14 at 18:49
Earth, Planets and Space
Monitoring the instrument response of the high-sensitivity seismograph network in Japan (Hi-net): effects of response changes on seismic interferometry analysis
Tomotake Ueno1,
Tatsuhiko Saito1,
Katsuhiko Shiomi1 &
Yoshikatsu Haryu2
Earth, Planets and Space volume 67, Article number: 135 (2015)
More than 10 years have passed since observations began to be recorded by Hi-net, a network of high-sensitivity seismometers located in Japan. Several large earthquakes, including the 2011 Tohoku-Oki earthquake, have been recorded by the network during this period. Age-related degradation and the strong ground motion of large earthquakes may change the instrument response of the high-sensitivity seismometers of Hi-net. Thus, we checked the natural frequency f and damping constant h for each Hi-net sensor and monitored the instrument response for 10 years from 2003 to 2013. Most of the sensors showed a stable instrument response over this period. More than 95 % of the sensors whose responses we could well estimate showed small fluctuations in their natural frequencies and damping constants of within 0.05 Hz and 0.05, respectively. We also found that many Hi-net sensors in northeastern Japan showed slight changes in the instrument response as a result of the 2011 Tohoku-Oki earthquake. Based on the assumption that the instrument responses remained unchanged, the fractional velocity reduction in the subsurface structure was reported by seismic interferometry analysis. To investigate how changes in the instrument response can cause errors in seismic interferometry analysis, we conducted a synthetic test. The results indicate that the instrument response did not result in systematic variation in the time delay observed in the interferometry analysis. This confirmed that the velocity decrease observed as a result of the 2011 Tohoku-Oki earthquake was not due to artificial instrument error.
The National Research Institute for Earth Science and Disaster Prevention (NIED) has operated three nationwide seismic networks since the 1990s. The networks consist of approximately 80 broadband seismographs, 1700 strong-motion seismographs, and 800 high-sensitivity seismographs and are referred to as F-net, KiK-net/K-NET, and Hi-net, respectively. These three seismic networks observe seismic motions for short to long periods and detect earthquakes of small to large magnitudes (Okada et al. 2004).
Hi-net is designed for the detection of small earthquakes and very weak signals from the deep crust. At a Hi-net station, a sensor characterized by a natural frequency of 1 Hz is installed at the bottom of a borehole of 100 m or more in depth to reduce background noise (Obara et al. 2005). The performance and limitations of the sensor dynamic range were investigated by Shiomi et al. (2005). The sensor orientation at the bottom of the borehole cannot be observed by the naked eye but may be accurately estimated by analyzing the long-wavelength waveforms and ambient seismic signals (Shiomi 2013). The Hi-net records have been used to determine the accurate hypocenter locations and mechanisms of small earthquakes (e.g., Yukutake et al. 2008) and have contributed to the new finding of a non-volcanic seismic tremor in southwestern Japan (Obara 2002). Additionally, the records have been used for the construction of subsurface structure models by seismic travel-time tomography (e.g., Matsubara et al. 2009), receiver function analysis (e.g., Shiomi et al. 2004; Ueno et al. 2008), and scattering-wave analysis (e.g., Takahashi et al. 2009). Although the Hi-net sensors were originally designed for regional or small earthquakes, high-quality observations can be successfully achieved using a simulation filter from the Hi-net sensors to the F-net broadband sensors for far-field large earthquakes (Maeda et al. 2011). The waveforms recorded by the Hi-net stations are available on the NIED Hi-net website (http://www.hinet.bosai.go.jp/?LANG=en).
Wegler and Sens-Schönfelder (2007) proposed a method referred to as passive image interferometry (PII), in which fractional changes in the background seismic velocity structure (less than approximately 1 %) can be identified by monitoring a correlation function of the ambient seismic noise. This method has successfully detected temporal changes in the subsurface structure associated with a large earthquake (e.g., Brenguier et al. 2008a), a volcanic eruption (e.g., Brenguier et al. 2008b), and Earth tides (Takano et al. 2014). Since Hi-net provides continuous high-quality seismograms, Hi-net records are suitable for the PII. For example, the PII of the Hi-net records revealed a significant velocity decrease of more than 0.3 % for the 2004 Mw 6.6 mid-Niigata earthquake (Wegler et al. 2009), the 2007 Mw 6.6 Noto Hanto earthquake (Ohmi et al. 2008), and the seismic swarm activity by magma intrusions in the Izu Peninsula, Japan (Ueno et al. 2012). Additionally, for the 2011 Tohoku-Oki earthquake, a significant velocity decrease of approximately 1.5 % was reported based on monitoring by the autocorrelation function (ACF) using Hi-net records (Minato et al. 2012).
These PII results implicitly presumed stable instrument responses of the Hi-net sensors during the analysis period. However, stability is not always assured, because some sensors suffer from ground motions larger than the stroke limitation. This can damage the sensor and change the instrument response. Figure 1 shows records of the 2011 Tohoku-Oki earthquake at the N.KMIH station. The KiK-net waveform indicates strong motions larger than the stroke limitation of the Hi-net sensor, which is 2 mm (Fig. 1b). Thus, although it appears to work correctly, the Hi-net sensor cannot accurately record large ground motions, because of the stroke limitation (Fig. 1c). Such large signals may cause significant changes in the instrument response, which matters for PII because PII is sensitive to very small changes in the velocity structure (approximately 1 %). To maintain a satisfactorily high performance of a seismograph and guarantee accurate PII results, it is necessary to carefully monitor changes in the instrument response and examine the effects of these changes on the PII.
Epicenter and Hi-net station locations and waveform examples. a Distribution of the NIED Hi-net stations and the epicenter of the 2011 Tohoku-Oki earthquake. Vertical waveforms of the 2011 Tohoku-Oki earthquake recorded at the N.KMIH (IWTH23) station with b the KiK-net borehole acceleration seismometer and c the Hi-net velocity seismometer. Both seismometers were placed in the same borehole. Band-pass filters of 0.01–20 Hz were applied to the waveforms, and they were transformed to displacements. The dashed line indicates the reliable stroke width (±15 mm) of the Hi-net vertical sensor (Shiomi et al. 2005)
This study thoroughly investigates the instrument responses of approximately 800 Hi-net sensors using a daily test coil signal over a period of 10 years that includes the 2011 Tohoku-Oki earthquake. Analyzing the test coil signal, we estimated the natural frequency and damping constant as the instrument responses for each sensor of Hi-net. Additionally, we examined how changes in the instrument response can affect the subsurface structure changes detected by PII for the 2011 Tohoku-Oki earthquake.
Evaluation of instrument response
Equivalent circuits and instrument response. a Schematic of a typical Hi-net seismometer, b the voltage used as an input for the test signal, and c the velocity of the coil bobbin relative to the magnet as a result of the input shown in (b). We refer to this waveform as the instrument response or test signal

We estimated the natural frequency f and damping constant h for each Hi-net sensor by using a daily test signal. Figure 2a shows a schematic of a Hi-net seismometer. A coil bobbin, which works as a pendulum in the seismometer, includes a signal coil and a test coil. The signal coil is for the detection of ground motion, which induces a current in the coil. The coil bobbin is displaced by an electrical voltage applied to the test coil. When we apply a voltage e_i to a test coil with length l and resistance R in a magnetic flux density B, where M is the mass of the coil bobbin, the generated Lorentz force per unit mass f, given by

$$ f=\frac{Bl}{MR}{e}_i, $$
displaces the coil bobbin in the vertical direction. The values of B, l, M, and R are constant. When an electrical voltage is applied suddenly as shown in Fig. 2b, the force is produced in a stepwise shape. The equation of motion for the coil bobbin is then given by
$$ \ddot{z}(t)+2h{\omega}_0\dot{z}(t)+{\omega}_0^2z(t)=f\;H(t), $$
where z(t) is the displacement of the bobbin, ω_0 is the natural angular frequency, h is the damping constant, and the function H(t) is defined as
$$ H(t)=\left\{\begin{array}{l}0,\kern1em t<0\\ {}1,\kern1em t\ge 0\end{array}\right.. $$
We then obtain Z(ω) from the Fourier transform of z(t) as
$$ Z\left(\omega \right)=\frac{C_0}{\left({\omega}^2-2h{\omega}_0i\omega -{\omega}_0^2\right)i\omega }, $$
where C_0 is a constant which includes the constant values of B, l, M, and R. Equation (3) shows the displacement spectrum of the coil bobbin. The displacement of the coil bobbin induces the voltage e_0 in the signal coil. The output voltage e_0 produced by this bobbin motion is proportional to the velocity of the coil bobbin v(t) = dz(t)/dt, and the Fourier transform of the velocity is given by
$$ V\left(\omega \right)=\frac{C_0}{\omega^2-2h{\omega}_0i\omega -{\omega}_0^2}. $$
Therefore, by performing the inverse Fourier transform of Eq. (4), we obtain the output voltage, in other words, the instrument response, with respect to the step force given by the right-hand side of Eq. (2).
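For the underdamped case h < 1 relevant to these sensors, this inverse Fourier transform can also be written in closed form (the standard step response of a damped oscillator with zero initial conditions):

$$ v(t)=\frac{f}{\omega_d}{e}^{-h{\omega}_0t} \sin \left({\omega}_dt\right)H(t),\kern2em {\omega}_d={\omega}_0\sqrt{1-{h}^2}, $$

that is, a decaying sinusoid whose period and decay rate are controlled by ω_0 and h, consistent with the waveform shown in Fig. 2c.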
The electrical voltage was applied every morning at 9:00 AM, as shown in Fig. 2b, and a test signal was obtained for each Hi-net sensor, as shown in Fig. 2c. Figure 3 shows examples of the calculated instrument responses (test signals) for three difference sets of natural frequencies f and damping constants h: (f, h) = (1.0 Hz, 0.7), (0.9 Hz, 0.6), and (0.5 Hz, 0.3). The standard Hi-net sensor is set to 1.0 Hz and 0.7. If the instrument response is characterized by f = 0.9 Hz and h = 0.6, the difference from the standard instrument response is not very large, whereas a large difference is recognized for the case of f = 0.5 Hz and h = 0.3.
Examples of calculated instrument responses. Black, red, and blue lines indicate parameters of (f, h) = (1.0 Hz, 0.7), (0.9 Hz, 0.6), and (0.5 Hz, 0.3), respectively
Estimating the parameters of instrument response using a test coil record
We estimated the natural frequency f and damping constant h for the Hi-net sensors using a grid-search method. We calculated the root mean square reduction (rr), which is defined as
$$ \mathrm{r}\mathrm{r}=1-\sqrt{\frac{{\displaystyle \sum_i{\left[S\left({t}_i\right)-O\left({t}_i\right)\right]}^2}}{{\displaystyle \sum_i{\left[O\left({t}_i\right)\right]}^2}}}, $$
where S(t) and O(t) are the calculated and observed instrument responses, respectively. The rr is larger when the calculation better reproduces the observation. It takes a maximum value of 1 when the calculation perfectly matches the observation. By changing f from 0.1 to 2.1 Hz with a step of 0.01 Hz and h from 0.1 to 2.1 with a step of 0.01, we searched for the f and h values that maximize the value of rr.
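The grid search can be sketched as follows (an illustrative Python implementation restricted to the underdamped case h < 1; the sampling rate, window length, amplitude handling, and the synthetic 'observed' signal are simplifying assumptions, not values from this study):

import numpy as np

dt, n = 0.01, 1000                       # assumed 100 Hz sampling, 10 s window
t = np.arange(n) * dt

def step_response(f_hz, h):
    """Velocity of the coil bobbin after a unit step force (underdamped case)."""
    w0 = 2.0 * np.pi * f_hz
    wd = w0 * np.sqrt(1.0 - h * h)
    return np.exp(-h * w0 * t) * np.sin(wd * t) / wd

def rr(obs, calc):
    """Root mean square reduction, Eq. (5)."""
    return 1.0 - np.sqrt(np.sum((calc - obs) ** 2) / np.sum(obs ** 2))

obs = step_response(1.11, 0.68)          # stand-in for an observed test signal

best = max(((rr(obs, step_response(f_hz, h)), f_hz, h)
            for f_hz in np.arange(0.1, 2.11, 0.01)
            for h in np.arange(0.1, 1.0, 0.01)),
           key=lambda s: s[0])
print(best)                              # ~ (1.0, 1.11, 0.68)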
Figure 4a shows the rr distribution as a function of f and h for the instrument response at the N.KMIH station (the station location is shown in Fig. 1a) recorded on 4 June 2010. The maximum rr value is 0.98 for a parameter set of f = 1.11 Hz and h = 0.68. Figure 4b–e compares the observed and calculated instrument responses for each f and h set. Very good agreement between the observed and calculated instrument responses is obtained when the rr value is larger than 0.95. This study assumes that the ranges of f and h that yield rr values larger than 0.95 are the estimated natural frequency and damping constant for an instrument response. We estimated these ranges to be f = 1.05–1.15 Hz and h = 0.6–0.75 for the instrument response at the N.KMIH station in Fig. 4a.
Results of a grid-search method for the determination of the natural frequency f and damping constant h of a Hi-net sensor. a Values of rr as a function of f and h. b–e Observed (black line) and calculated instrument responses for various parameter sets: (b) f = 1.11 Hz and h = 0.68, c f = 1.05 Hz and h = 0.55, d f = 1.25 Hz and h = 0.80, and e f = 1.0 Hz and h = 0.70. Each calculated waveform is colored according to the value of rr
Figure 5 shows an example of the time series of f, h, rr, and the root mean square (RMS) amplitude of the waveform from 2003 to 2013 for the three-component seismograph at the N.KMIH station in the Tohoku region (the station location is shown in Fig. 1a). The rr value is an important index in the stability determination for parameter estimation, as shown in Fig. 4. For the vertical component time series in Fig. 5a, low rr values were estimated occasionally, and the values of f and h were not always stable. This was usually because natural earthquakes, such as aftershocks of the Iwate–Miyagi Nairiku earthquake (Mw 6.9) in 2008 and the 2011 Tohoku-Oki earthquake (Mw 9.1), contaminated the test signals. However, most rr values in the whole period were greater than 0.95, which indicates very good agreement between the observed and calculated instrument responses. The natural frequencies and damping constants during the analyzed period were estimated to be in the ranges of f = 1.05–1.10 Hz and h = 0.65–0.75, respectively. The results of the north–south and east–west components were similar to those of the vertical component, as shown in Fig. 5b, c.
Temporal changes in the instrumental parameters from 2003 to 2013. a Vertical components: (a1) the natural frequency f (Hz) of the test signal at the N.KMIH station, (a2) the damping constant h, and (a3) the rr and the peak (broken line) and RMS (dot) amplitudes 3 s before the test signal. b North–south and c east–west components of the variables in (a). The arrows indicate the dates of the 2008 Iwate–Miyagi Nairiku earthquake and the 2011 Tohoku-Oki earthquake
Figure 6 summarizes the distribution of the instrument responses for all Hi-net stations in this study, including histograms of the average f (Fig. 6(a1)), the standard deviation of f (Fig. 6(a2)), the average h (Fig. 6(a3)), and the standard deviation of h (Fig. 6(a4)) for the vertical sensors over 10 years at each station. Figure 6b, c shows the same quantities for the north–south and east–west components, respectively. More than 95 % of the stations have natural frequencies and damping constants in the ranges f = 1.0–1.2 Hz and h = 0.6–0.75. The standard deviations of f and h for each station are within 0.05 Hz and 0.05, respectively, indicating no significant variation in the instrument response (see Fig. 4). Therefore, we conclude that the Hi-net sensors as a whole are well controlled, with small instrument-to-instrument variations. Additionally, the sensors were very stable over the analyzed 10 years, although slight fluctuations in the instrument response were sometimes recognized during large earthquakes.
Histograms of the natural frequency and damping constant distributions. a Distributions of the vertical components of (a1) the natural frequency f, (a2) the standard deviation of f over 10 years, (a3) the damping constant h, and (a4) the standard deviation of h over 10 years. The values were counted when the value of rr exceeded 0.95. b North–south and c east–west components of the variables in (a)
Influence of changes in instrument response on crustal monitoring by interferometry analysis
Passive image interferometry of the 2011 Tohoku-Oki earthquake
During the 2011 Mw 9.1 Tohoku-Oki earthquake, a large-amplitude seismic wave propagated through eastern Honshu (e.g., Suzuki et al. 2011). Fractional subsurface velocity changes associated with the strong motion and the large stress/strain change were reported after the earthquake (e.g., Brenguier et al. 2014; Nakata and Snieder 2011). We conducted PII using the records of the vertical components of Hi-net. Following the stretching method proposed by Wegler et al. (2009), we estimated the time delay of the ACF from the reference ACF in the lag-time window of 4–15 s for the frequency range of 1–3 Hz and detected the changes in the subsurface seismic velocity structure that occurred during the 2011 Tohoku-Oki earthquake.
Figure 7 shows the ACFs obtained from the continuous Hi-net records of the N.KMIH station during the period 2008–2013. The ACF, which can be regarded as the Green's function for a source and receiver collocated at the station, does not vary significantly and shows high stability over each time period. On close inspection, however, a slight time delay can be observed immediately following the occurrence of the 2011 Tohoku-Oki earthquake. For example, there is a time delay of approximately 0.1 s at a lag time of approximately 8.5 s.
Autocorrelation functions (ACFs) and noise level. a ACFs of the vertical component for the N.KMIH station. The ACFs were made from band-pass-filtered waveforms (passband = 1–3 Hz). The broken line indicates the date of the 2011 Tohoku-Oki earthquake. b Root mean square amplitude of waveforms for the duration of 1 h after band-pass filtering
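As a rough illustration of how such daily ACFs can be produced, the following sketch band-pass filters one day of continuous data and autocorrelates it via the FFT; the preprocessing often used in noise interferometry (e.g., spectral whitening or one-bit normalization) is deliberately omitted.

import numpy as np
from scipy.signal import butter, filtfilt

def daily_acf(day, fs, max_lag=20.0, f1=1.0, f2=3.0):
    """ACF of one day of data (passband f1-f2 Hz), up to max_lag seconds."""
    b, a = butter(4, [f1, f2], btype="bandpass", fs=fs)
    x = filtfilt(b, a, day - day.mean())
    spec = np.fft.rfft(x, n=2 * x.size)           # zero-pad -> linear ACF
    full = np.fft.irfft(spec * np.conj(spec))
    acf = full[: int(max_lag * fs)]
    return acf / acf[0]                           # unit zero-lag amplitude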
The time delays between the reference ACF and the ACFs from 30 October 2010 and 30 October 2011 at each lag time are shown in Fig. 8(a1, a2), respectively. The reference ACF is defined as the mean of the ACFs in 2009 and 2010. For comparison, we first estimated the time delay on 30 October 2010, a day without a large event, as shown in Fig. 8(a1). No systematic variation of the time delay with the lag time is evident in the figure. Conversely, in Fig. 8(a2), which shows the time delay on 30 October 2011, a day roughly 5 months after the 2011 Tohoku-Oki earthquake, it is evident that the time delay increases with increasing lag time. The stretching method in the PII interprets the time delay of an ACF as being caused by a decrease in the background velocity; when the background seismic velocity decreases by α %, the time delay at a lag time t is given by dt = (α/100)t (e.g., Sens-Schönfelder and Wegler 2006). We estimated that the velocity decreased by approximately 0.5 % at the N.KMIH station after the 2011 Tohoku-Oki earthquake (Fig. 8b).
Time series of interferometry and instrument response. a Time delay between the reference ACF and the ACFs on (a1) 30 October 2010 and (a2) 30 October 2011. Error bars indicate the 95 % confidence interval calculated by the bootstrap method using ACFs spanning 1 week. b Fractional velocity change (dv/v %) at the N.KMIH station obtained from ACFs with lag times from 4 to 15 s. The gray zone indicates apparent velocity changes within ±0.1 %, which may be caused by the changes in instrument response caused by the 2011 Tohoku-Oki earthquake. c Monitoring results of the Hi-net vertical seismogram at the N.KMIH station from 2008 to 2013. The blue and green inverted triangles are the natural frequency f and the damping constant h, respectively
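The stretching measurement itself reduces to a one-parameter search. A minimal sketch, assuming `acf` and `acf_ref` are sampled every `dt` seconds and using the sign convention in the text (a velocity decrease of α % delays a phase at lag time t by (α/100)t):

import numpy as np

def velocity_change(acf, acf_ref, dt, t_min=4.0, t_max=15.0):
    """Return dv/v (in %) that best maps acf onto acf_ref in the window."""
    t = np.arange(acf.size) * dt
    win = (t >= t_min) & (t <= t_max)
    best_alpha, best_cc = 0.0, -np.inf
    for alpha in np.linspace(-2.0, 2.0, 401):     # trial dv/v in %
        # undo the delay: sample the current ACF at t*(1 + alpha/100)
        squeezed = np.interp(t * (1.0 + alpha / 100.0), t, acf)
        a, b = squeezed[win], acf_ref[win]
        cc = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        if cc > best_cc:
            best_alpha, best_cc = alpha, cc
    return best_alpha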
Changes in instrument response
Since the ground shaking caused by the 2011 Tohoku-Oki earthquake was too strong for a high-sensitivity seismograph, a change in instrument response must be considered. In fact, a very small change in f and h appears to have occurred at N.KMIH after the earthquake, as shown in Fig. 8c. A change in the instrument response could be misidentified as a change in the subsurface velocity structure resulting from the earthquake. Therefore, we investigate whether a change in the instrument response can mimic a subsurface velocity change.
Figure 9 shows the distribution of the differences between the instrument responses before and after the earthquake. We found that many stations in northeastern Japan had small instrument response changes. This means that the instrument responses for the Hi-net sensors, especially those in northeastern Japan, were affected by the large shaking of the 2011 Tohoku-Oki earthquake. However, the changes in the natural frequencies and damping constants for the sensors were smaller than ±0.05 Hz and ±0.05, respectively.
Spatial distributions of the temporal changes in the instrument response. Differences between the estimated (a) natural frequencies f and (b) damping constants h before and after the 2011 Tohoku-Oki earthquake
Apparent temporal changes in subsurface structure due to changes in instrument response
We then examined the extent to which the change in the instrument response during the 2011 Tohoku-Oki earthquake could affect the velocity change estimated by PII. First, a band-pass filter with a passband of 1–3 Hz was applied to ambient noise recorded by a broadband F-net station (N.KSNF), as shown in Fig. 10a. We then convolved instrument responses characterized by (f, h) = (1.0 Hz, 0.7), (0.9 Hz, 0.6), and (0.5 Hz, 0.3) with the filtered noise to simulate a seismogram of a normal Hi-net sensor, a seismogram of a Hi-net sensor slightly damaged by age-related degradation or large earthquakes, and a seismogram in an extreme case, respectively (black, red, and blue lines, respectively, in Fig. 10b). Figure 10c shows the ACFs of the three types of simulated seismograms, which were calculated using the same method as in Fig. 4.
Examples of an apparent velocity change test. a An observed F-net record. b Simulated Hi-net waveforms, made by convolving three types of instrument responses with parameters f = 1 Hz and h = 0.7 (black line), f = 0.9 Hz and h = 0.6 (red), and f = 0.5 Hz and h = 0.3 (blue). c ACFs of the simulated Hi-net waveforms shown in (b)
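A minimal sketch of this simulation step, applying a hypothetical second-order velocity-sensor response in the frequency domain (the exact Hi-net response, including its gain and anti-alias stages, is not modeled):

import numpy as np

def apply_response(trace, dt, f0, h):
    """Multiply the trace spectrum by H(w) = -w^2 / (w0^2 - w^2 + 2j*h*w0*w)."""
    n = trace.size
    w = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)
    w0 = 2.0 * np.pi * f0
    H = -(w ** 2) / (w0 ** 2 - w ** 2 + 2j * h * w0 * w)
    return np.fft.irfft(np.fft.rfft(trace) * H, n=n)

# three simulated records from the same band-passed F-net trace `noise`:
# apply_response(noise, dt, 1.0, 0.7)   # normal sensor
# apply_response(noise, dt, 0.9, 0.6)   # slightly changed response
# apply_response(noise, dt, 0.5, 0.3)   # unrealistically extreme case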
Figure 11 shows the estimated time delay from the reference ACF with f = 1.0 Hz and h = 0.7 at each lag time for ACFs with f = 0.9 Hz and h = 0.6, which represents a possible change in the instrument responses after the 2011 Tohoku-Oki earthquake, and f = 0.5 Hz and h = 0.3, which represents an unrealistically extreme case. We did not find a systematic variation of the time delay with lag time. A very small apparent velocity change of less than 0.1 % may be obtained by the stretching method when a plausible response change occurs (Fig. 11a). However, the velocity reduction caused by the 2011 Tohoku-Oki earthquake was 0.5 %, as shown in Fig. 8(a2) and 8b. Since this velocity change is significantly larger than 0.1 %, the obtained change in the velocity structure is not an artifact of the instrument response change during the large earthquake. Figure 11b shows the result for an extremely large instrument response change, which yields unstable values of the time delay dt at each lag time. Even this extreme case cannot reproduce the systematic increase of the time delay with lag time obtained by the PII (Fig. 8(a2)). Therefore, we conclude that changes in instrument response caused by age-related degradation or large earthquakes are usually too small to affect the results of the PII.
Apparent time delay of ACFs. Time delays between the reference ACF of the simulated Hi-net waveform with parameters f = 1.0 Hz and h = 0.7 and ACFs of simulated waveforms with parameters a f = 0.9 Hz and h = 0.6 and b f = 0.5 Hz and h = 0.3
We monitored the instrument responses of the Hi-net sensors for 10 years, from 2003 to 2013, by estimating their natural frequencies f and damping constants h from the test signals. We found that the Hi-net sensors are well controlled, with small instrument-to-instrument variation; more than 95 % of the sensors show natural frequencies in the range f = 1.0–1.2 Hz and damping constants in the range h = 0.6–0.8. Furthermore, each Hi-net sensor was very stable over the decade: the standard deviations of the natural frequency and damping constant were within 0.05 Hz and 0.05, respectively. During the 2011 Tohoku-Oki earthquake, large seismic waves exceeding the dynamic range of the Hi-net sensors were observed at stations near the earthquake focal area, and small changes in the instrument response were recognized in the Hi-net sensors after the earthquake. Thus, we conducted a synthetic test to investigate how changes in the instrument response can cause errors in seismic interferometry analysis. The test indicated that the instrument response change did not produce the systematic variation in the time delay obtained from the PII of the 2011 Tohoku-Oki earthquake. This supports the conclusion that the 0.5 % velocity decrease at the N.KMIH station after the 2011 Tohoku-Oki earthquake was not an artifact caused by the large ground shaking.
Brenguier F, Campillo M, Hadziioannou C, Shapiro NM, Nadeau R, Larose E (2008a) Postseismic relaxation along the San Andreas fault at Parkfield from continuous seismological observations. Science 321(5895):1478–1481. doi:10.1126/science.1160943
Brenguier F, Campillo M, Takeda T, Aoki Y, Shapiro NM, Briand X, Emoto K, Miyake H (2014) Mapping pressurized volcanic fluids from induced crustal seismic velocity drops. Science 345(6192):80–82. doi:10.1126/science.1254073
Brenguier F, Shapiro NM, Campillo M, Ferrazzini V, Duputel Z, Coutant O, Nercession A (2008b) Towards forecasting volcanic eruptions using seismic noise. Nat Geosci 1:126–130. doi:10.1038/ngeo104
Maeda T, Obara K, Furumura T, Saito T (2011) Interference of long‐period seismic wavefield observed by the dense Hi-net array in Japan. J Geophys Res 116:B10303. doi:10.1029/2011JB008464
Matsubara M, Obara K, Kasahara K (2009) High-VP/VS zone accompanying non-volcanic tremors and slow-slip events beneath southwestern Japan. Tectonophysics 472:6–17
Minato S, Tsuji T, Ohmi S, Matsuoka T (2012) Monitoring seismic velocity change caused by the 2011 Tohoku-oki earthquake using ambient noise records. Geophys Res Lett 39:L09309. doi:10.1029/2012GL051405
Nakata N, Snieder R (2011) Near-surface weakening in Japan after the 2011 Tohoku-Oki earthquake. Geophys Res Lett 38:L17302. doi:10.1029/2011GL048800
Obara K (2002) Nonvolcanic deep tremor associated with subduction in southwest Japan. Science 296:1679–1681. doi:10.1126/science.1070378
Obara K, Kasahara K, Hori S, Okada Y (2005) A densely distributed high-sensitivity seismograph network in Japan: Hi-net by National Research Institute for Earth Science and Disaster Prevention. Rev Sci Instrum 76:021301. doi:10.1063/1.1854197
Ohmi S, Hirahara K, Wada H, Ito K (2008) Temporal variations of crustal structure in the source region of the 2007 Noto Hanto earthquake, central Japan, with passive image interferometry. Earth Planets Space 60(10):1069–1074
Okada Y, Kasahara K, Hori S, Obara K, Sekiguchi S, Fujiwara H, Yamamoto A (2004) Recent progress of seismic observation networks in Japan: Hi-net, F-net, K-NET and KiK-net. Earth Planets Space 56:xv–xxviii
Sens-Schönfelder C, Wegler U (2006) Passive image interferometry and seasonal variations of seismic velocities at Merapi Volcano, Indonesia. Geophys Res Lett 33, L21302. doi:10.1029/2006GL027797
Shiomi K (2013) New measurements of sensor orientation at NIED Hi-net stations. Report NIED 80:1–20 (in Japanese with English abstract)
Shiomi K, Obara K, Kasahara K (2005) Amplitude saturation of the NIED Hi-net waveforms and simple criteria for recognition. Zishin 57:451–461 (in Japanese)
Shiomi K, Sato H, Obara K, Ohtake M (2004) Configuration of subducting Philippine Sea plate beneath southwest Japan revealed from receiver function analysis based on the multivariate autoregressive model. J Geophys Res 109:B04308. doi:10.1029/2003JB002774
Suzuki W, Aoi S, Sekiguchi H, Kunugi T (2011) Rupture process of the 2011 Tohoku-Oki mega-thrust earthquake (M9.0) inverted from strong-motion data. Geophys Res Lett 38:L00G16. doi:10.1029/2011GL049136
Takahashi T, Sato H, Nishimura T, Obara K (2009) Tomographic inversion of the peak delay times to reveal random velocity fluctuations in the lithosphere: method and application to northeastern Japan. Geophys J Int 178(3):1437–1455. doi:10.1111/j.1365-246X.2009.04227.x
Takano T, Nishimura T, Nakahara H, Ohta Y, Tanaka S (2014) Seismic velocity changes caused by the Earth tide: ambient noise correlation analyses of small-array data. Geophys Res Lett 41:6131–6136. doi:10.1002/2014GL060690
Ueno T, Saito T, Shiomi K, Enescu B, Hirose H, Obara K (2012) Fractional seismic velocity change related to magma intrusions during earthquake swarms in the eastern Izu peninsula, central Japan. J Geophys Res 117:B12305. doi:10.1029/2012JB009580
Ueno T, Shibutani T, Ito K (2008) Configuration of the continental Moho and Philippine Sea slab in Southwest Japan derived from receiver function analysis: relation to subcrustal earthquakes. Bull Seism Soc Am 98(5):2416–2427. doi:10.1785/0120080016
Wegler U, Nakahara H, Sens-Schönfelder C, Korn M, Shiomi K (2009) Sudden drop of seismic velocity after the 2004 Mw6.6 mid-Niigata earthquake, Japan, observed with passive image interferometry. J Geophys Res 114:B06305. doi:10.1029/2008JB005869
Wegler U, Sens-Schönfelder C (2007) Fault zone monitoring with passive image interferometry. Geophys J Int 168:1029–1033. doi:10.1111/j.1365-246X.2006.03284.x
Wessel P, Smith WHF (1998) New, improved version of Generic Mapping Tools released. EOS Trans AGU 79(47):579
Yukutake Y, Takeda T, Obara K (2008) Well-resolved hypocenter distribution using the double-difference relocation method in the region of the 2007 Chuetsu-oki Earthquake. Earth Planets Space 60(9):981–985
We thank K. Nomura for his helpful comments about the Hi-net records and the sensor. M. Yamamoto suggested the possibility of damage to Hi-net sensors by large ground shaking. We also thank two anonymous reviewers for useful comments. We used the Generic Mapping Tools (GMT) (Wessel and Smith 1998) to make the figures in this paper.
National Research Institute for Earth Science and Disaster Prevention, 3-1 Tennodai, Tsukuba-shi, Ibaraki, 305-0006, Japan
Tomotake Ueno, Tatsuhiko Saito & Katsuhiko Shiomi
Association for the Development of Earthquake Prediction, 1-5-18 Sarugaku-cho, Chiyoda-ku, Tokyo, 101-0064, Japan
Yoshikatsu Haryu
Tomotake Ueno
Tatsuhiko Saito
Katsuhiko Shiomi
Correspondence to Tomotake Ueno.
TU analyzed data and drafted the manuscript. TS offered numerous comments about the analysis. KS offered numerous comments about the Hi-net record. YH observed and provided the seismic records of Hi-net. All authors read and approved the final manuscript.
Ueno, T., Saito, T., Shiomi, K. et al. Monitoring the instrument response of the high-sensitivity seismograph network in Japan (Hi-net): effects of response changes on seismic interferometry analysis. Earth Planet Sp 67, 135 (2015). https://doi.org/10.1186/s40623-015-0305-0
Hi-net sensor
Test coil
Instrument response
Seismic interferometry | CommonCrawl |
The Next White (NEW) detector (1804.02409)
F. Monrabal, J.J. Gómez-Cadenas, J.F. Toledo, V. Álvarez, J.M. Benlloch-Rodríguez, S. Cárcel, J.V. Carrión, R. Esteve, R. Felkai, V. Herrero, A. Laing, A. Martínez, M. Musti, M. Querol, J. Rodríguez, A. Simón, C. Sofka, J. Torrent, R. Webb, J.T. White, C. Adams, L. Arazi, C.D.R. Azevedo, K. Bailey, F. Ballester, F.I.G.M. Borges, A. Botas, S. Cebrián, C.A.N. Conde, J. Díaz, M. Diesburg, J. Escada, L.M.P. Fernandes, P. Ferrario, A.L. Ferreira, E.D.C. Freitas, A. Goldschmidt, D. González-Díaz, R. Guenette, R.M. Gutiérrez, K. Hafidi, J. Hauptman, C.A.O. Henriques, A.I. Hernandez, J.A. Hernando Morata, S. Johnston, B.J.P. Jones, L. Labarga, P. Lebrun, N. López-March, M. Losada, J. Martín-Albo, G. Martínez-Lema, A.D. McDonald, C.M.B. Monteiro, F.J. Mora, J. Munoz Vidal, M. Nebot-Guinot, P. Novella, D.R. Nygren, B. Palmeiro, A. Para, J. Pérez, J. Renner, J. Repond, S. Riordan, C. Romo, L. Ripoll, L. Rogers, F.P. Santos, J.M.F. dos Santos, M. Sorel, T. Stiegler, J.F.C.A. Veloso, N. Yahlali
April 6, 2018 physics.ins-det
Conceived to host 5 kg of xenon at a pressure of 15 bar in the fiducial volume, the NEXT-White (NEW) apparatus is currently the largest high pressure xenon gas TPC using electroluminescent amplification in the world. It is also a 1:2 scale model of the NEXT-100 detector scheduled to start searching for $\beta\beta 0\nu$ decays in 136Xe in 2019. Both detectors measure the energy of the event using a plane of photomultipliers located behind a transparent cathode. They can also reconstruct the trajectories of charged tracks in the dense gas of the TPC with the help of a plane of silicon photomultipliers located behind the anode. A sophisticated gas system, common to both detectors, allows the high gas purity needed to guarantee a long electron lifetime. NEXT-White has been operating since October 2017 at the Canfranc Underground Laboratory (LSC), in Spain. This paper describes the detector and associated infrastructures.
Electron drift properties in high pressure gaseous xenon (1804.01680)
NEXT Collaboration: A. Simón, R. Felkai, G. Martínez-Lema, F. Monrabal, D. González-Díaz, M. Sorel, J.A. Hernando Morata, J.J. Gómez-Cadenas, C. Adams, V. Álvarez, L. Arazi, C.D.R. Azevedo, J.M. Benlloch-Rodríguez, F.I.G.M. Borges, A. Botas, S. Cárcel, J.V. Carrión, S. Cebrián, C.A.N. Conde, J. Díaz, M. Diesburg, J. Escada, R. Esteve, L.M.P. Fernandes, P. Ferrario, A.L. Ferreira, E.D.C. Freitas, J. Generowicz, A. Goldschmidt, R. Guenette, R.M. Gutiérrez, K. Hafidi, J. Hauptman, C.A.O. Henriques, A.I. Hernandez, V. Herrero, S. Johnston, B.J.P. Jones, M.Kekic, L. Labarga, A. Laing, P. Lebrun, N. López-March, M. Losada, J. Martín-Albo, A. Martínez, A.D. McDonald, C.M.B. Monteiro, F.J. Mora, J. Muñoz Vidal, M. Musti, M. Nebot-Guinot, P. Novella, D.R. Nygren, B. Palmeiro, A. Para, J. Pérez, M. Querol, J. Renner, J. Repond, S. Riordan, L. Ripoll, J. Rodríguez, L. Rogers, C. Romo, F.P. Santos, J.M.F. dos Santos, C. Sofka, T. Stiegler, J.F. Toledo, J. Torrent, J.F.C.A. Veloso, R. Webb, J.T. White, N. Yahlali
April 5, 2018 hep-ex, nucl-ex, physics.ins-det
Gaseous time projection chambers (TPC) are a very attractive detector technology for particle tracking. Characterization of both drift velocity and diffusion is of great importance to correctly assess their tracking capabilities. NEXT-White is a High Pressure Xenon gas TPC with electroluminescent amplification, a 1:2 scale model of the future NEXT-100 detector, which will be dedicated to neutrinoless double beta decay searches. NEXT-White has been operating at Canfranc Underground Laboratory (LSC) since December 2016. The drift parameters have been measured using $^{83m}$Kr for a range of reduced drift fields at two different pressure regimes, namely 7.2 bar and 9.1 bar. The results have been compared with Magboltz simulations. Agreement at the 5% level or better has been found for drift velocity, longitudinal diffusion and transverse diffusion.
Calibration of the NEXT-White detector using $^{83m}\mathrm{Kr}$ decays (1804.01780)
NEXT Collaboration: G. Martínez-Lema, J.A. Hernando Morata, B. Palmeiro, A. Botas, P. Ferrario, F. Monrabal, A. Laing, J. Renner, J.J. Gómez-Cadenas, C. Adams, V. Álvarez, L. Arazi, C.D.R. Azevedo, K. Bailey, J.M. Benlloch-Rodríguez, F.I.G.M. Borges, S. Cárcel, J.V. Carrión, S. Cebrián, C.A.N. Conde, J. Díaz, M. Diesburg, J. Escada, R. Esteve, R. Felkai, L.M.P. Fernandes, A.L. Ferreira, E.D.C. Freitas, J. Generowicz, A. Goldschmidt, D. González-Díaz, R. Guenette, R.M. Gutiérrez, K. Hafidi, J. Hauptman, C.A.O. Henriques, A.I. Hernandez, V. Herrero, S. Johnston, B.J.P. Jones, M. Kekic, L. Labarga, P. Lebrun, N. López-March, M. Losada, J. Martín-Albo, A. Martínez, A.D. McDonald, C.M.B. Monteiro, F.J. Mora, J. Muñoz Vidal, M. Musti, M. Nebot-Guinot, P. Novella, D.R. Nygren, A. Para, J. Pérez, M. Querol, J. Repond, S. Riordan, L. Ripoll, J. Rodríguez, L. Rogers, C. Romo, F.P. Santos, J.M.F. dos Santos, A. Simón, C. Sofka, M. Sorel, T. Stiegler, J.F. Toledo, J. Torrent, J.F.C.A. Veloso, R. Webb, J.T. White, N. Yahlali
April 5, 2018 hep-ex, physics.ins-det
The NEXT-White (NEW) detector is currently the largest radio-pure high pressure gas xenon time projection chamber with electroluminescent readout in the world. NEXT-White has been operating at Laboratorio Subterr\'aneo de Canfranc (LSC) since October 2016. This paper describes the calibrations performed with $^{83m}\mathrm{Kr}$ decays during a long run taken from March to November 2017 (Run II). Krypton calibrations are used to correct for the finite drift-electron lifetime as well as for the dependence of the measured energy on the event position which is mainly caused by variations in solid angle coverage. After producing calibration maps to correct for both effects we measure an excellent energy resolution for 41.5 keV point-like deposits of (4.55 $\pm$ 0.01) % FWHM in the full chamber and (3.88 $\pm$ 0.04) % FWHM in a restricted fiducial volume. Using naive 1/$\sqrt{E}$ scaling, these values translate into FWHM resolutions of (0.592 $\pm$ 0.001) % FWHM and (0.504 $\pm$ 0.005) % at the $Q_{\beta\beta}$ energy of xenon double beta decay (2458 keV), well within range of our target value of 1%.
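As a quick sanity check of the quoted extrapolation (assuming the naive 1/sqrt(E) scaling stated in the abstract):

for res in (4.55, 3.88):                       # % FWHM at 41.5 keV
    scaled = res * (41.5 / 2458.0) ** 0.5      # % FWHM at Q_bb = 2458 keV
    print(f"{res}% at 41.5 keV -> {scaled:.3f}% at 2458 keV")
# prints 0.591% and 0.504%, matching the values in the abstract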
Measurement of radon-induced backgrounds in the NEXT double beta decay experiment (1804.00471)
NEXT Collaboration: P. Novella, B. Palmeiro, A. Simón, M. Sorel, P. Ferrario, G. Martínez-Lema, F. Monrabal, G. Zuzel, J.J. Gómez-Cadenas, C. Adams, V. Álvarez, L. Arazi, C.D.R Azevedo, K. Bailey, J.M. Benlloch-Rodríguez, F.I.G.M. Borges, A. Botas, S. Cárcel, J.V. Carrión, S. Cebrián, C.A.N. Conde, J. Díaz, M. Diesburg, J.M.F. Dos Santos, J. Escada, R. Esteve, R. Felkai, L.M.P. Fernandes, A.L. Ferreira, E.D.C. Freitas, J. Generowicz, A. Goldschmidt, D. González-Díaz, R. Guenette, R.M. Gutiérrez, K. Hafidi, J. Hauptman, C.A.O. Henriques, A.I. Hernandez, J.A. Hernando Morata, V. Herrero, S. Johnston, B.J.P. Jones, M. Kekic, L. Labarga, A. Laing, P. Lebrun, N. López-March, M. Losada, J. Martín-Albo, A. Martínez, A. McDonald, C.M.B. Monteiro, F.J. Mora, J. Muñoz Vidal, M. Musti, M. Nebot-Guinot, D.R. Nygren, A. Para, J. Pérez, M. Querol, J. Renner, J. Repond, S. Riordan, L. Ripoll, J. Rodríguez, L. Rogers, C. Romo, F.P. Santos, C. Sofka, T. Stiegler, J.F. Toledo, J. Torrent, J.F.C.A. Veloso, R. Webb, J.T. White, N. Yahlali
The measurement of the internal 222Rn activity in the NEXT-White detector during the so-called Run-II period with 136Xe-depleted xenon is discussed in detail, together with its implications for double beta decay searches in NEXT. The activity is measured through the alpha production rate induced in the fiducial volume by 222Rn and its alpha-emitting progeny. The specific activity is measured to be $(37.5\pm 2.3~\mathrm{(stat.)}\pm 5.9~\mathrm{(syst.)})$~mBq/m$^3$. Radon-induced electrons have also been characterized from the decay of the 214Bi daughter ions plating out on the cathode of the time projection chamber. From our studies, we conclude that radon-induced backgrounds are sufficiently low to enable a successful NEXT-100 physics program, as the projected rate contribution should not exceed 0.2~counts/yr in the neutrinoless double beta decay sample.
Demonstration of Single Barium Ion Sensitivity for Neutrinoless Double Beta Decay using Single Molecule Fluorescence Imaging (1711.04782)
A.D. McDonald, B.J.P. Jones, D.R. Nygren, C. Adams, V. Álvarez, C.D.R. Azevedo, J.M. Benlloch-Rodríguez, F.I.G.M. Borges, A. Botas, S. Cárcel, J.V. Carrión, S. Cebrián, C.A.N. Conde, J. Díaz, M. Diesburg, J. Escada, R. Esteve, R. Felkai, L.M.P. Fernandes, P. Ferrario, A.L. Ferreira, E.D.C. Freitas, A. Goldschmidt, J.J. Gómez-Cadenas, D. González-Díaz, R.M. Gutiérrez, R. Guenette, K. Hafidi, J. Hauptman, C.A.O. Henriques, A.I. Hernandez, J.A. Hernando Morata, V. Herrero, S. Johnston, L. Labarga, A. Laing, P. Lebrun, I. Liubarsky, N. López-March, M. Losada, J. Martín-Albo, G. Martínez-Lema, A. Martínez, F. Monrabal, C.M.B. Monteiro, F.J. Mora, L.M. Moutinho, J. Muñoz Vidal, M. Musti, M. Nebot-Guinot, P. Novella, B. Palmeiro, A. Para, J. Pérez, M. Querol, J. Repond, J. Renner, S. Riordan, L. Ripoll, J. Rodríguez, L. Rogers, F.P. Santos, J.M.F. dos Santos, A. Simón, C. Sofka, M. Sorel, T. Stiegler, J.F. Toledo, J. Torrent, Z. Tsamalaidze, J.F.C.A. Veloso, R. Webb, J.T. White, N. Yahlali
Feb. 6, 2018 hep-ex, nucl-ex, physics.ins-det
A new method to tag the barium daughter in the double beta decay of $^{136}$Xe is reported. Using the technique of single molecule fluorescent imaging (SMFI), individual barium dication (Ba$^{++}$) resolution at a transparent scanning surface has been demonstrated. A single-step photo-bleach confirms the single ion interpretation. Individual ions are localized with super-resolution ($\sim$2~nm), and detected with a statistical significance of 12.9~$\sigma$ over backgrounds. This lays the foundation for a new and potentially background-free neutrinoless double beta decay technology, based on SMFI coupled to high pressure xenon gas time projection chambers.
Radiopurity assessment of the energy readout for the NEXT double beta decay experiment (1706.06012)
S. Cebrián, J. Pérez, I. Bandac, L. Labarga, V. Álvarez, C.D.R. Azevedo, J.M. Benlloch-Rodríguez, F.I.G.M. Borges, A. Botas, S. Cárcel, J.V. Carrión, C.A.N. Conde, J. Díaz, M. Diesburg, J. Escada, R. Esteve, R. Felkai, L.M.P. Fernandes, P. Ferrario, A.L. Ferreira, E.D.C. Freitas, A. Goldschmidt, J.J. Gómez-Cadenas, D. González-Díaz, R.M. Gutiérrez, J. Hauptman, C.A.O. Henriques, A.I. Hernandez, J.A. Hernando Morata, V. Herrero, B.J.P. Jones, A. Laing, P. Lebrun, I. Liubarsky, N. López-March, M. Losada, J. Martín-Albo, G. Martínez-Lema, A. Martínez, A.D. McDonald, F. Monrabal, C.M.B. Monteiro, F.J. Mora, L.M. Moutinho, J. Muñoz Vidal, M. Musti, M. Nebot-Guinot, P. Novella, D.R. Nygren, B. Palmeiro, A. Para, M. Querol, J. Renner, L. Ripoll, J. Rodríguez, L. Rogers, F.P. Santos, J.M.F. dos Santos, A. Simón, C. Sofka, M. Sorel, T. Stiegler, J.F. Toledo, J. Torrent, Z. Tsamalaidze, J.F.C.A. Veloso, J.A. Villar, R. Webb, J.T. White, N. Yahlali
Aug. 21, 2017 hep-ex, nucl-ex, physics.ins-det
The Neutrino Experiment with a Xenon Time-Projection Chamber (NEXT) experiment intends to investigate the neutrinoless double beta decay of 136Xe, and therefore requires a severe suppression of potential backgrounds. An extensive material screening and selection process was undertaken to quantify the radioactivity of the materials used in the experiment. Separate energy and tracking readout planes using different sensors allow us to combine the measurement of the topological signature of the event for background discrimination with the energy resolution optimization. The design of radiopure readout planes, in direct contact with the gas detector medium, was especially challenging since the required components typically have activities too large for experiments demanding ultra-low background conditions. After studying the tracking plane, here the radiopurity control of the energy plane is presented, mainly based on gamma-ray spectroscopy using ultra-low background germanium detectors at the Laboratorio Subterr\'aneo de Canfranc (Spain). All the available units of the selected model of photomultiplier have been screened together with most of the components for the bases, enclosures and windows. According to these results for the activity of the relevant radioisotopes, the selected components of the energy plane would give a contribution to the overall background level in the region of interest of at most 2.4 x 10-4 counts keV-1 kg-1 y-1, satisfying the sensitivity requirements of the NEXT experiment.
Application and performance of an ML-EM algorithm in NEXT (1705.10270)
NEXT Collaboration: A. Simón, C. Lerche, F. Monrabal, J.J. Gómez-Cadenas, V. Álvarez, C.D.R. Azevedo, J.M. Benlloch-Rodríguez, F.I.G.M. Borges, A. Botas, S. Cárcel, J.V. Carrión, S. Cebrián, C.A.N. Conde, J. Díaz, M. Diesburg, J. Escada, R. Esteve, R. Felkai, L.M.P. Fernandes, P. Ferrario, A.L. Ferreira, E.D.C. Freitas, A. Goldschmidt, D. González-Díaz, R.M. Gutiérrez, J. Hauptman, C.A.O. Henriques, A.I. Hernandez, J.A. Hernando Morata, V. Herrero, B.J.P. Jones, L. Labarga, A. Laing, P. Lebrun, I. Liubarsky, N. López-March, M. Losada, J. Martín-Albo, G. Martínez-Lema, A. Martínez, A.D. McDonald, C.M.B. Monteiro, F.J. Mora, L.M. Moutinho, J. Muñoz Vidal, M. Musti, M. Nebot-Guinot, P. Novella, D.R. Nygren, B. Palmeiro, A. Para, J. Pérez, M. Querol, J. Renner, L. Ripoll, J. Rodríguez, L. Rogers, F.P. Santos, J.M.F. dos Santos, C. Sofka, M. Sorel, T. Stiegler, J.F. Toledo, J. Torrent, Z. Tsamalaidze, J.F.C.A. Veloso, R. Webb, J.T. White, N. Yahlali
May 29, 2017 hep-ex, physics.ins-det
The goal of the NEXT experiment is the observation of neutrinoless double beta decay in $^{136}$Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of $^{136}$Xe) for events distributed over the full active volume of the TPC.
Secondary scintillation yield of Xenon with sub-percent levels of CO2 additive: efficiently reducing electron diffusion in HPXe optical TPCs for rare-event detection (1704.01623)
C.A.O. Henriques, E.D.C. Freitas, C.D.R. Azevedo, D. González-Díaz, R.D.P. Mano, M.R. Jorge, L.M.P. Fernandes, C.M.B. Monteiro, J.J. Gómez-Cadenas, V. Álvarez, J.M. Benlloch-Rodríguez, F.I.G.M. Borges, A. Botas, S. Cárcel, J.V. Carrión, S. Cebrián, C.A.N. Conde, J. Díaz, M. Diesburg, J. Escada, R. Esteve, R. Felkai, P. Ferrario, A.L. Ferreira, A. Goldschmidt, R.M. Gutiérrez, J. Hauptman, A.I. Hernandez, J.A. Hernando Morata, V. Herrero, B.J.P. Jones, L. Labarga, A. Laing, P. Lebrun, I. Liubarsky, N. López-March, M. Losada, J. Martín-Albo, G. Martínez-Lema, A. Martínez, A.D. McDonald, F. Monrabal, F.J. Mora, L.M. Moutinho, J. Muñoz Vidal, M. Musti, M. Nebot-Guinot, P. Novella, D.R. Nygren, B. Palmeiro, A. Para, J. Pérez, M. Querol, J. Renner, L. Ripoll, J. Rodríguez, L. Rogers, F.P. Santos, J.M.F. dos Santos, A. Simón, C. Sofka, M. Sorel, T. Stiegler, J.F. Toledo, J. Torrent, Z. Tsamalaidze, J.F.C.A. Veloso, R. Webb, J.T. White, N. Yahlali
April 13, 2017 physics.ins-det
We have measured the electroluminescence (EL) yield of Xe-CO2 mixtures, with sub-percent CO2 concentrations. We demonstrate that the EL production is still high in these mixtures, 70% and 35% relative to that produced in pure xenon, for CO2 concentrations around 0.05% and 0.1%, respectively. The contribution of the statistical fluctuations in EL production to the energy resolution increases with increasing CO2 concentration and, for our gas proportional scintillation counter, it is smaller than the contribution of the Fano factor for concentrations below 0.1% CO2. Xe-CO2 mixtures are important alternatives to pure xenon in TPCs based on EL signal amplification with applications in the important field of rare event detection such as directional dark matter, double electron capture and double beta decay detection. The addition of CO2 to pure xenon at the level of 0.05-0.1% can reduce significantly the scale of electron diffusion from 10 mm/sqrt(m) to 2.5 mm/sqrt(m), with high impact on the HPXe TPC discrimination efficiency of the events through pattern recognition of the topology of primary ionisation trails.
Background rejection in NEXT using deep neural networks (1609.06202)
NEXT Collaboration: J. Renner, A. Farbin, J. Muñoz Vidal, J.M. Benlloch-Rodríguez, A. Botas, P. Ferrario, J.J. Gómez-Cadenas, V. Álvarez, C.D.R. Azevedo, F.I.G. Borges, S. Cárcel, J.V. Carrión, S. Cebrián, A. Cervera, C.A.N. Conde, J. Díaz, M. Diesburg, R. Esteve, L.M.P. Fernandes, A.L. Ferreira, E.D.C. Freitas, A. Goldschmidt, D. González-Díaz, R.M. Gutiérrez, J. Hauptman, C.A.O. Henriques, J.A. Hernando Morata, V. Herrero, B. Jones, L. Labarga, A. Laing, P. Lebrun, I. Liubarsky, N. López-March, D. Lorca, M. Losada, J. Martín-Albo, G. Martínez-Lema, A. Martínez, F. Monrabal, C.M.B. Monteiro, F.J. Mora, L.M. Moutinho, M. Nebot-Guinot, P. Novella, D. Nygren, A. Para, J. Pérez, M. Querol, L. Ripoll, J. Rodríguez, F.P. Santos, J.M.F. dos Santos, L. Serra, D. Shuman, A. Simón, C. Sofka, M. Sorel, J.F. Toledo, J. Torrent, Z. Tsamalaidze, J.F.C.A. Veloso, J. White, R. Webb, N. Yahlali, H. Yepes-Ramírez
Oct. 18, 2016 hep-ex, physics.ins-det
We investigate the potential of using deep learning techniques to reject background events in searches for neutrinoless double beta decay with high pressure xenon time projection chambers capable of detailed track reconstruction. The differences in the topological signatures of background and signal events can be learned by deep neural networks via training over many thousands of events. These networks can then be used to classify further events as signal or background, providing an additional background rejection factor at an acceptable loss of efficiency. The networks trained in this study performed better than previous methods developed based on the use of the same topological signatures by a factor of 1.2 to 1.6, and there is potential for further improvement.
Investigation of the Coincidence Resolving Time performance of a PET scanner based on liquid xenon: A Monte Carlo study (1604.04106)
J.J Gomez-Cadenas, J.M. Benlloch-Rodríguez, P. Ferrario, F. Monrabal, J. Rodríguez, J.F. Toledo
Sept. 17, 2016 hep-ex, physics.ins-det
The measurement of the time of flight of the two 511 keV gammas recorded in coincidence in a PET scanner provides an effective way of reducing the random background and therefore increases the scanner sensitivity, provided that the coincidence resolving time (CRT) of the gammas is sufficiently good. The best commercial PET-TOF system today (based in LYSO crystals and digital SiPMs), is the VEREOS of Philips, boasting a CRT of 316 ps (FWHM). In this paper we present a Monte Carlo investigation of the CRT performance of a PET scanner exploiting the scintillating properties of liquid xenon. We find that an excellent CRT of 70 ps (depending on the PDE of the sensor) can be obtained if the scanner is instrumented with silicon photomultipliers (SiPMs) sensitive to the ultraviolet light emitted by xenon. Alternatively, a CRT of 160 ps can be obtained instrumenting the scanner with (much cheaper) blue-sensitive SiPMs coated with a suitable wavelength shifter. These results show the excellent time of flight capabilities of a PET device based in liquid xenon.
PETALO, a new concept for a Positron Emission TOF Apparatus based on Liquid xenOn (1605.09615)
J.M. Benlloch-Rodriguez
May 31, 2016 physics.med-ph
This master thesis presents a new type of Positron Emission TOF Apparatus using Liquid xenOn (PETALO). The detector is based in the Liquid Xenon Scintillating Cell (LXSC). The cell is a box filled with liquid xenon (LXe) whose transverse dimensions are chosen to optimize packing and with a thickness optimized to contain a large fraction of the incoming photons. The entry and exit faces of the box (relative to the incoming gammas direction) are instrumented with large silicon photomultipliers (SiPMs), coated with a wavelength shifter, tetraphenyl butadiene (TPB). The non-instrumented faces are covered by reflecting Teflon coated with TPB. In this thesis we show that the LXSC can display an energy resolution of 5% FWHM, much better than that of conventional solid scintillators such as LSO/LYSO. The LXSC can measure the interaction point of the incoming photon with a resolution in the three coordinates of 1 mm. The very fast scintillation time of LXe (2 ns) and the availability of suitable sensors and electronics permits a coincidence resolution time (CRT) in the range of 100-200 ps, again much better than any current PET-TOF system. The LXSC constitutes the core of a high-sensitivity, nuclear magnetic resonance compatible, PET device, with enhanced Time Of Flight (TOF) sensitivity. | CommonCrawl |
On the joint distribution of order statistics from independent non-identical bivariate distributions
A. R. Omar1
Journal of the Egyptian Mathematical Society volume 27, Article number: 29 (2019)
In this note, the exact joint probability density function (jpdf) of bivariate order statistics from independent non-identical bivariate distributions is obtained. Furthermore, this result is applied to derive the joint distribution of a new sample rank obtained from the rth order statistics of the first component and the sth order statistics of the second component.
Multivariate order statistics, especially bivariate order statistics, have attracted the interest of several researchers; see, for example, [1]. The distribution of bivariate order statistics can be easily obtained from the bivariate binomial distribution, which was first introduced by [2]. Considering a bivariate sample, David et al. [3] studied the distribution of the sample rank for a concomitant of an order statistic. Bairamove and Kemalbay [4] introduced new modifications of the bivariate binomial distribution, which can be applied to derive the distribution of bivariate order statistics if a certain number of observations are within a given threshold set. Barakat [5] derived an exact explicit expression for the product moments (of any order) of bivariate order statistics from an arbitrary continuous bivariate distribution function (df). Bairamove and Kemalbay [6] used the jpdf derived by [5] to obtain the joint distribution of a new sample rank of bivariate order statistics. Moreover, Barakat [7] studied the limit behavior of the extreme order statistics arising from n two-dimensional independent and non-identically distributed random vectors. The class of limit dfs of multivariate order statistics from independent and identical random vectors with random sample size was fully characterized by [8].
Consider n two-dimensional independent random vectors \({{\underline W}_{j}}=(X_{j},Y_{j})\), j=1,2,...,n, with the respective distribution function (df) \(F_{j}(\underline w)=F_{j}(x,y)= P(X_{j}\leq x, Y_{j}\leq y)\), j=1,2,...,n. Let X1:n≤X2:n≤...≤Xn:n and Y1:n≤Y2:n≤...≤Yn:n be the order statistics of the X and Y samples, respectively. The main object of this work is to derive the jpdf of the random vector \(Z_{k,k^{\prime }:n}= (X_{n-k+1:n},Y_{n-k^{\prime }+1:n})\), where 1≤k, k′≤n. Let \(G_{j}(\underline {w})=P({\underline {W}}_{j}>\underline {w})\) be the survival function corresponding to \(F_{j}(\underline {w})\), j=1,2,...,n; let \(\Phi _{k,k':n}(\underline{w})= P(Z_{k,k':n}\leq \underline {w})\); and let F1,j(.), F2,j(.), G1,j(.)=1−F1,j(.), and G2,j(.)=1−F2,j(.) be the marginal dfs and marginal survival functions of \(F_{j}(\underline {w})\) and \(G_{j}(\underline {w})\), j=1,2,...,n, respectively. Furthermore, let \({F_{j}}^{1,.}=\frac {\partial F_{j}(\underline {w})}{\partial {x}}\) and \({F_{j}}^{.,1}=\frac {\partial F_{j}(\underline {w})}{\partial {y}}.\) Also, the jpdf of \((X_{n-k+1:n},Y_{n-k^{\prime }+1:n})\) is conveniently denoted by \(f_{k,k^{\prime }:n}(\underline {w}).\) Finally, the abbreviations min(a, b)=a∧b and max(a, b)=a∨b will be adopted.
The jpdf of non-identical bivariate order statistics
The following theorem gives the exact formula of the jpdf of non-identical bivariate order statistics.
Theorem 1
The jpdf of non-identical bivariate order statistics is given by
$${{} \begin{aligned} f_{k,k':n}(\underline{w})\,=\,\sum_{\theta,\varphi=0}^{1}\sum_{r=r_{**}}^{r^{**}}\sum_{\rho_{\theta,\varphi,r}}\, \Pi_{j=1}^{\theta}{F}^{.,1}_{i_{j}}(\underline{w})\Pi_{j=\theta+1}^{1}(f_{2,i_{j}}(y)\,-\,{F}^{.,1}_{i_{j}}(\underline{w})) \Pi_{j=2}^{\varphi+1}{F}^{1,.}_{i_{j}}(\underline{w})\\ \times\Pi_{j=\varphi+2}^{2}(f_{1,i_{j}}(x)-{F}^{1,.}_{i_{j}}(\underline{w}))\Pi_{j=3}^{k-\theta-r+1} (F_{1,i_{j}}(x)-{F}_{i_{j}}(\underline{w}))\Pi_{j=k-\theta-r+2}^{k-\theta+1}F_{i_{j}}(\underline{w})\\ \times \Pi_{j=k-\theta+2}^{k+k'-\theta-\varphi-r}(F_{2,i_{j}}(y)-{F}_{i_{j}}(\underline{w})) \Pi_{j=k+k'-\theta-\varphi-r+1}^{n}G_{i_{j}}(\underline{w})+\sum_{r=0\vee(k+k'-n-1)}^{(k-1)\wedge(k'-1)}\sum_{\rho_{r}}f_{j}(\underline{w})\\ \Pi_{j=2}^{k-r}(F_{1,i_{j}}(x)-{F}_{i_{j}}(\underline{w})) \times\Pi_{j=k-r+1}^{k}F_{i_{j}}(\underline{w})\Pi_{j=k+1}^{k+k'-r}(F_{2,i_{j}}(y)-{F}_{i_{j}}(\underline{w}))\Pi_{j=k+k'-r+1}^{n}G_{i_{j}}(\underline{w}), \end{aligned}} $$
where \(r_{**}=0\vee (k+k'-\theta -\varphi -n), r^{**}= (k-\theta -1)\wedge (k'-\varphi -1),\ \sum _{\rho }\) denotes summation subject to the condition ρ, and \(\sum _{\rho _{\theta _{1},\theta _{2},\varphi _{1},\varphi _{2},\omega,r}}\) denotes the set of permutations of i1,...,in such that \(i_{j_{1}}<...< i_{j_{n}}.\)
A convenient expression of \(f_{k,k':n}(\underline {w})\) may be derived by noting that the compound event E={x<Xk:n<x+δx, y<Yk′:n<y+δy} may be realized as follows: r; φ1; s1; θ1; ω; θ2; s2; φ2; and t observations must fall, respectively, in the regions I1=(−∞,x]×(−∞,y]; I2=(x, x+δx]×(−∞,y]; I3=(x+δx,∞)×(−∞,y]; I4=(−∞,x]×(y, y+δy]; I5=(x, x+δx]×(y, y+δy]; I6=(x+δx,∞)×(y, y+δy]; I7=(−∞,x]×(y+δy,∞); I8=(x, x+δx]×(y+δy,∞); and I9=(x+δx,∞)×(y+δy,∞), with the corresponding probabilities \(P_{ij}=P({\underline {W}}_{j}\in I_{i}), i=1,2,...,9\). Therefore, the joint density function \(f_{k,k':n}(\underline {w})\) of \((X_{k:n},Y_{k^{\prime }:n})\) is the limit of \(\frac {P(E)}{\delta x\delta y}\) as δx,δy→0, where P(E) can be derived by noting that \(\theta _{1}+\theta _{2}+\omega =\varphi _{1}+\varphi _{2}+\omega =1;\ r+\theta _{1}+s_{2}=k-1;\ r+\varphi _{1}+s_{1}=k'-1;\ r,\theta _{1},s_{2},\varphi _{1},\omega,\theta _{2},s_{1},\varphi _{2},t \geq 0;\ P_{1j}=F_{j}(\underline {w}),\ P_{2j}=F_{j}^{1,.}(\underline {w})\delta x,\ P_{3j}=F_{2,j}(y)-F_{j}(x+\delta x,y),\ P_{4j}= F_{j}^{.,1}(\underline {w})\delta y,\ P_{5j}\cong F_{j}^{1,1}(\underline {w})\delta x\delta y=f_{j}(\underline {w})\delta x\delta y,\ P_{6j}\cong (f_{2,j}(y)-F_{j}^{.,1}(\underline {w}+\delta \underline {w}))\delta y,\) where \(f_{2,j}(y)=\frac {\partial F_{2,j}(y)}{\partial y},\ j=1, 2,...,n,\ \delta \underline {w}=(\delta x,\delta y),\ \underline {w}+\delta \underline {w}=(x+\delta x,y+\delta y),\ P_{7j}=F_{1,j}(x)-F_{j}(x,y+\delta y),\ P_{8j}=(f_{1,j}(x)-F_{j}^{1,.}(\underline {w}+\delta \underline {w}))\delta {x},\ P_{9j}=1-F_{1,j}(x+\delta x)-F_{2,j}(y+\delta y)+F_{j}(\underline {w}).\) Thus, we get
$$ {{} \begin{aligned} f_{k,k':n}(\underline{w})\!&=\!\sum_{\theta_{1},\varphi_{1},\theta_{2},\varphi_{2}=0}^{1}\sum_{r=r_{*}}^{r^{*}} \sum_{\rho_{\theta_{1},\theta_{2},\varphi_{1},\varphi_{2},\omega,r}}\,\Pi_{j=1}^{\theta_{1}}P_{4i_{j}}\Pi_{\theta_{1}+1}^{\theta_{1}+\varphi_{1}}P_{2i_{j}} \Pi_{j=\theta_{1}+\varphi_{1}+1}^{\theta_{1}+\varphi_{1}+\theta_{2}}P_{6i_{j}} \Pi_{j=\theta_{1}+\varphi_{1}+\theta_{2}+1}^{\theta_{1}+\varphi_{1}+\theta_{2}+\varphi_{2}}P_{8i_{j}}\\ &\Pi_{j=\theta_{1}+\varphi_{1}+\theta_{2}+\varphi_{2}\,+\,1}^{\theta_{1}+\varphi_{1}+\theta_{2}+\varphi_{2}+\omega}P_{5i_{j}} \Pi_{j=\theta_{1}+\varphi_{1}+\theta_{2}+\varphi_{2}+\omega+1}^{\theta_{2}+\varphi_{1}+\theta_{2}+\omega+k-r\!-1}P_{7i_{j}} \Pi_{j=\theta_{2}+\varphi_{1}+\varphi_{2}+\omega+k\!-r}^{\varphi_{1}\,+\,\theta_{2}+\varphi_{2}+\omega+k-1}P_{1i_{j}} \Pi_{j=\varphi_{1}\,+\,\theta_{2}+\varphi_{2}+\omega\!+k}^{\theta_{2}\!+\varphi_{2}+\omega+k+k'\!-r\,-\,2}P_{3i_{j}}\\ &\Pi_{j=\theta_{2}+\varphi_{2}+\omega+k+k'-r-1}^{n}P_{9i_{j}}, \end{aligned}} $$
where \(r_{*}=0\vee (k+k'+\theta _{2}+\varphi _{2}+\omega -r-1-n), r^{*}= (k-\theta _{1}-1)\wedge (k'-\varphi _{1}-1),\sum _{\rho }\) denotes summation subject to the condition ρ, and \(\sum _{\rho _{\theta _{1},\theta _{2},\varphi _{1},\varphi _{2},\omega,r}}\) denotes the set of permutations of i1,...,in such that \(i_{j_{1}}<...< i_{j_{n}}\) for each product of the type \(\Pi _{j=j_{1}}^{j_{2}}\). Moreover, if j1>j2, then \(\Pi _{j=j_{1}}^{j_{2}}=1\). But (1) can be written in the following simpler form
$${{} \begin{aligned} P(E)=\sum_{\theta,\varphi=0}^{1}\sum_{r=r_{**}}^{r^{**}}\sum_{\rho_{\theta,\varphi,r}}\,\Pi_{j=1}^{\theta}P_{4i_{j}}\Pi_{j= \theta+1}^{1}P_{6i_{j}}\Pi_{j=2}^{\varphi+1}P_{2i_{j}} \Pi_{j=\varphi+2}^{2}P_{8i_{j}}\Pi_{j=3}^{k-\theta-r+1}P_{7i_{j}}\Pi_{j=k-\theta-r+2}^{k-\theta+1}P_{1i_{j}} \\ \Pi_{j=k-\theta+2}^{k+k'-\theta-\varphi-r}P_{3i_{j}} \Pi_{j=k+k'-\theta-\varphi-r+1}^{n}P_{9i_{j}}+\sum_{r=0\vee(k+k'-n-1)}^{(k-1)\wedge(k'-1)}\sum_{\rho_{r}}P_{5i_{3}}\Pi_{j=2}^{k-r}P_{7i_{j}} \Pi_{j=k-r+1}^{k}P_{1i_{j}} \Pi_{j=k+1}^{k+k'-r}P_{3i_{j}}\Pi_{j=k+k'-r}^{n}P_{9i_{j}}, \end{aligned}} $$
where \(r_{**}=0\vee(k+k'-\theta-\varphi-n)\) and \(r^{**}=(k-\theta-1)\wedge(k'-\varphi-1)\). Therefore,
$$ {{} \begin{aligned} f_{k,k':n}(\underline{w})=\sum_{\theta,\varphi=0}^{1}\sum_{r=r_{**}}^{r^{**}}\sum_{\rho_{\theta,\varphi,r}}\,\Pi_{j=1}^{\theta}P_{4i_{j}} \Pi_{j=\theta+1}^{1}P_{6i_{j}}\Pi_{j=2}^{\varphi+1}P_{2i_{j}} \Pi_{j=\varphi+2}^{2}P_{8i_{j}}\Pi_{j=3}^{k-\theta-r+1}P_{7i_{j}}\\ \Pi_{j=k-\theta-r+2}^{k-\theta+1}P_{1i_{j}}\Pi_{j=k-\theta+2}^{k+k'-\theta-\varphi-r}P_{3i_{j}} \Pi_{j=k+k'-\theta-\varphi-r+1}^{n}P_{9i_{j}}+\sum_{r=0\vee(k+k'-n-1)}^{(k-1)\wedge(k'-1)}\sum_{\rho_{r}}P_{5i_{3}}\Pi_{j=2}^{k-r}P_{7i_{j}}\\ \Pi_{j=k-r+1}^{k}P_{1i_{j}}\Pi_{j=k+1}^{k+k'-r}P_{3i_{j}}\Pi_{j=k+k'-r}^{n}P_{9i_{j}}. \end{aligned}} $$
Thus, we get
$$ \begin{aligned} f_{k,k':n}(\underline{w})=\sum_{\theta,\varphi=0}^{1}\sum_{r=r_{**}}^{r^{**}}\sum_{\rho_{\theta,\varphi,r}}\, \Pi_{j=1}^{\theta}{F}^{.,1}_{i_{j}}(\underline{w})\Pi_{j=\theta+1}^{1}(f_{2,i_{j}}(y)-{F}^{.,1}_{i_{j}}(\underline{w})) \Pi_{j=2}^{\varphi+1}{F}^{1,.}_{i_{j}}(\underline{w})\\ \Pi_{j=\varphi+2}^{2}(f_{1,i_{j}}(x)-{F}^{1,.}_{i_{j}}(\underline{w}))\Pi_{j=3}^{k-\theta-r+1} (F_{1,i_{j}}(x)-{F}_{i_{j}}(\underline{w}))\Pi_{j=k-\theta-r+2}^{k-\theta+1}F_{i_{j}}(\underline{w}) \Pi_{j=k-\theta+2}^{k+k'-\theta-\varphi-r}(F_{2,i_{j}}(y)-{F}_{i_{j}}(\underline{w}))\\ \Pi_{j=k+k'-\theta-\varphi-r+1}^{n}G_{i_{j}}(\underline{w})+\sum_{r=0\vee(k+k'-n-1)}^{(k-1)\wedge(k'-1)}\sum_{\rho_{r}}f_{i_{3}}(\underline{w}) \Pi_{j=2}^{k-r}(F_{1,i_{j}}(x)-{F}_{i_{j}}(\underline{w}))\\ \Pi_{j=k-r+1}^{k}F_{i_{j}}(\underline{w})\Pi_{j=k+1}^{k+k'-r}(F_{2,i_{j}}(y)-{F}_{i_{j}}(\underline{w}))\Pi_{j=k+k'-r+1}^{n}G_{i_{j}}(\underline{w}). \end{aligned} $$
Hence, the proof.
Relation (3) may be written in term of permanents (c.f [9]) as follows:
$$ {{} \begin{aligned} f_{k,k':n}(\underline{w})= \sum_{\theta,\varphi=0}^{1}\sum_{r=r_{**}}^{r^{**}}\frac{1}{(k-\theta-r-1)!r!(k'-\varphi-r-1)!(n-k-k'+\varphi+\theta+r-1)!}\\ \begin{array}{ccccccc}\text{Per} [\underline{U}^{.,1}_{1,1}&(\underline{U}^{1}_{.,1}\,-\,\underline{U}^{.,1}_{1,1})&\underline{U}^{1,.}_{1,1}& \left(\underline{U}^{1}_{1,.}\,-\,\underline{U}^{1,.}_{1,1}\right)& \left(\underline{U}_{1,.}\,-\,\underline{U}_{1,1}\right)&\underline{U}_{1,1}&(\underline{U}_{.,1}\,-\,\underline{U}_{1,1})~\\ ~~ {\theta}~ &~ { 1-\theta}~~&~ {\varphi} ~~&~ {1-\varphi}~~&~ {k-\theta-r-1}~&~ {r}&~ {k'-\varphi-r-1}\\ (1-\underline{U}_{1,.}-\underline{U}_{1,.}+\underline{U}_{1,1}){\vphantom{\underline{U}^{.,1}_{1,1}}}]\\~~ {n-k-k'+\theta+\varphi+r-1} \end{array}\\ +\sum_{r=r_{*}}^{r^{*}}\frac{1}{(k-r)!r!(k'-r)!(n-k-k'+r)!} ~{\renewcommand{\arraystretch}{0.6} \begin{array}{cccccccc}\text{Per} [\underline{U}^{1,1}_{1,1}~&~(\underline{U}_{1,.}-\underline{U}_{1,1})~&\underline{U}_{1,1}~&~ (\underline{U}_{.,1}-\underline{U}_{1,1})~&~ (1-\underline{U}_{1,.}-\underline{U}_{1,.}+\underline{U}_{1,1})]\\ ~ {1}~ &~ {k-r}~~&~ {r} ~~&~ {k'-r}~&~ {n-k-k'+r-1}\end{array}}, \end{aligned}} $$
where \(~\underline U_{1,.}=(F_{11}(x_{1})~~F_{12}(x_{1})~...~ F_{1n}(x_{1}))',\ \underline U_{.,1}=(F_{2,1}(x_{2})~~F_{2,2}(x_{2})~...~ F_{2,n}(x_{2}))',\ \underline U_{1,1}=(F_{1}(\underline x)~~F_{2}(\underline x)~...~ F_{n}(\underline x))'\) and \(\underline 1\) is the n×1 column vector of ones. Moreover, if \({\underline {a}}_{1}, {\underline {a}}_{2},... \) are column vectors, then
$$\begin{array}{cccc} \text{Per}[&{\underline{a}}_1~~&~~{\underline{a}}_2~~&~~...]\\ &~~{i_{1}}~~&~~{i_{2}}~~&~~... \end{array} $$
will denote the matrix obtained by taking i1 copies of \({\underline {a}}_{1},\ i_{2}\) copies of \({\underline {a}}_{2},\) and so on.
Finally, when k=k′=1, in (3), we get
$$\begin{array}{*{20}l} f_{1,1:n}(\underline{w})=\sum_{\rho_{\theta,\varphi,r}}\, (f_{2,i_{1}}(y)-{F}^{.,1}_{i_{1}}(\underline{w})) (f_{1,i_{2}}(x)-{F}^{1,.}_{i_{2}}(\underline{w}))\Pi_{j=3}^{n} G_{i_{j}}(\underline{w})+\sum_{\rho_{r}}f_{i_{3}}(\underline{w})\\ (F_{2,i_{2}}(y)-{F}_{i_{2}}(\underline{w}))\Pi_{j=3}^{n}G_{i_{3}}(\underline{w}). \end{array} $$
Also, for k=k′=n, we get
$$\begin{array}{*{20}l} {} f_{n,n:n}(\underline{w})=\sum_{\rho_{\theta,\varphi,r}}\, {F}^{.,1}_{i_{1}}(\underline{w}){F}^{1,.}_{i_{2}}(\underline{w})\Pi_{j=3}^{n}F_{i_{j}}(\underline{w})+\sum_{\rho_{r}}f_{i_{3}}(\underline{w}) \Pi_{j=2}^{n}F_{i_{j}}(\underline{w}))(F_{2,i_{n+1}}(y)-{F}_{i_{n+1}}(\underline{w})). \end{array} $$
Joint distribution of the new sample rank of Xr:n and Ys:n
Consider n two-dimensional independent vectors \(\underline {W}_{j}=(X_{j},Y_{j}), j= 1,...,n,\) with the respective df \(F_{j}(\underline {w})\) and jpdf \(f_{j}(\underline {w})\). Further assume that (Xn+1,Yn+1), (Xn+2,Yn+2),..., (Xn+m,Yn+m), (m≥1) is another random sample with absolutely continuous df \(G^{*}_{j}(x,y), j= 1,...,m,\) and jpdf gj(x, y). We assume that the two samples (Xn+1,Yn+1),(Xn+2,Yn+2),...,(Xn+m, Yn+m), (m≥1) and (X1,Y1),(X2,Y2),...,(Xn,Yn) are independent.
For 1≤r, s≤n, m≥1, we define the random variables η1 and η2 as follows:
$$\begin{array}{*{20}l} \eta_{1}=\sum_{i=1}^{m}I_{(X_{r:n}-X_{n+i})} \end{array} $$
$$\begin{array}{*{20}l} \eta_{2}=\sum_{i=1}^{m}I_{(Y_{s:n}-Y_{n+i})}, \end{array} $$
where I(x)=1 if x>0 and I(x)=0 if x≤0 is an indicator function. The random variables η1 and η2 are referred to as exceedance statistics. Clearly η1 shows the total number of new X observations Xn+1,Xn+2,..., Xn+m which does not exceed a random threshold based on the rth order statistic Xr:n. Similarly, η2 is the number of new observations Yn+1,Yn+2,...,Yn+m which does not exceed Ys:n.
The random variable ζ1=η1+1 indicates the rank of Xr:n in the new sample Xn+1, Xn+2,..., Xn+m, and the random variable ζ2=η2+1 indicates the rank of Ys:n in the new sample Yn+1,Yn+2,...,Yn+m. We are interested in the joint probability mass function of random variables ζ1 and ζ2. We will need the following representation of the compound event P(ζ1=p,ζ2=q)=P(η1=p−1,η2=q−1).
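Before the formal derivation, a small Monte Carlo sketch may help fix ideas; it draws both samples from a hypothetical bivariate normal (any absolutely continuous bivariate df would do) and tabulates the empirical joint distribution of (ζ1, ζ2):

import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def ranks_once(n, m, r, s, rho=0.5):
    cov = [[1.0, rho], [rho, 1.0]]
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # first sample
    uv = rng.multivariate_normal([0.0, 0.0], cov, size=m)   # new sample
    x_r = np.sort(xy[:, 0])[r - 1]            # X_{r:n}
    y_s = np.sort(xy[:, 1])[s - 1]            # Y_{s:n}
    eta1 = int(np.sum(uv[:, 0] < x_r))        # strict, since I(x)=1 iff x>0
    eta2 = int(np.sum(uv[:, 1] < y_s))
    return eta1 + 1, eta2 + 1                 # (zeta1, zeta2)

counts = Counter(ranks_once(n=10, m=5, r=8, s=8) for _ in range(100_000))
pmf = {pq: c / 100_000 for pq, c in sorted(counts.items())}  # P(zeta1=p, zeta2=q)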
Denote A={Xn+i≤Xr:n},Ac={Xn+i>Xr:n}, B={Yn+i≤Ys:n} and Bc={Yn+i>Ys:n}. Assume that in a fourfold sampling scheme, the outcome of the random experiment is one of the events A or Ac, and simultaneously one of B or Bc, where Ac is the complement of A.
In m independent repetitions of this experiment, if A appears together with B ℓ times, then A and Bc appear together p−ℓ−1 times, B appears together with Ac q−ℓ−1 times, and Ac appears together with Bc m−p−q+ℓ+2 times. This can be described as follows: the four joint events AB, ABc, AcB, and AcBc occur ℓ, p−ℓ−1, q−ℓ−1, and m−p−q+ℓ+2 times, respectively.
Clearly, the random variables η1 and η2 are the numbers of occurrences of the events A and B in m independent trials of the fourfold sampling scheme, respectively. By conditioning on Xr:n=x and Ys:n=y, the joint distribution of η1 and η2 can be obtained from the bivariate binomial distribution by considering the fourfold sampling scheme with events A={Xn+i≤x}, B={Yn+i≤y}, and with respective probabilities
$$\begin{array}{*{20}l} P(AB)=P(X_{n+i}\leq x,Y_{n+i}\leq y),\\ P(AB^{c})=P(X_{n+i}\leq x,Y_{n+i}> y),\\ P(A^{c}B)=P(X_{n+i}> x,Y_{n+i}\leq y),\\ P(A^{c}B^{c})=P(X_{n+i}> x,Y_{n+i}> y). \end{array} $$
Now, we can state the following theorem.
The joint probability mass function of ζ1 and ζ2 is given by
$${{} \begin{aligned} &P(\zeta_{1}=p, \zeta_{2}=q) = P(\eta_{1}= p-1,\eta_{2}=q-1)= \sum_{\ell=max (0, p+q-m-2)}^{min (p-1,q-1)}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\\&\Pi_{j=1}^{\ell}G^{*}_{i_{j}}(x,y) \Pi_{j=\ell+1}^{p-1}[G^{*}_{1,i_{j}}(x)-G^{*}_{i_{j}}(x,y)]\Pi_{j=p}^{q-\ell-1+p}\left[G^{*}_{2,i_{j}}(y)-G^{*}_{i_{j}}(x,y)\right] \Pi_{j=q-\ell+p}^{m+2}\overline{G}^{*}_{1,i_{j}}(x) f_{k,k':n(\underline{w})} dxdy, \end{aligned}} $$
where p, q = 1,...,m+1, and \(f_{k,k':n}(\underline{w})\) is defined in (3).
Consider the fourfold sampling scheme described in Definition (1). By conditioning with respect to Xr:n=x and Ys:n=y, we obtain
$$ {{} \begin{aligned} P(\zeta_{1}=p, \zeta_{2}=q) \equiv P(\eta_{1}= p-1,\eta_{2}=q-1)= P\left\{\sum_{i=1}^{m}I_{(X_{r:n}-X_{n+i})}=p-1,I_{(Y_{r:n}-Y_{n+i})}=q-1\right\}\\ =\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}P\left\{\sum_{i=1}^{m}I_{(X_{r:n}-X_{n+i})}=p-1,I_{(Y_{s:n}-Y_{n+i})}=q-1| X_{r:n}=x, Y_{s:n}=y\right\}\\ \times P\{X_{r:n} = x, Y_{s:n} = y \} dxdy\\ =\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}P\left\{\sum_{i=1}^{m}I_{(x-X_{n+i})}=p-1,I_{(y-Y_{n+i})}=q-1\right\}{dF}_{r,s:n}(x,y). \end{aligned}} $$
On the other hand,
$$ {{} \begin{aligned} P\left(\sum_{i=1}^{m}I_{(x-X_{n+i})}=p-1,I_{(y-Y_{n+i})}=q-1\right)=\sum_{\ell=max (0,p+q-m-2)}^{min(p-1,q-1)}\Pi_{j=1}^{\ell}P_{i_{j}}(AB)\Pi_{j=\ell+1}^{p-1}P_{i_{j}}(AB^{c})\\ \Pi_{j=p}^{q-\ell-2+p}P_{i_{j}}\Pi_{j=q-\ell-1+p}^{m}P_{i_{j}}. \end{aligned}} $$
Substituting (6) in (5), we get
$${{} \begin{aligned} P(\zeta_{1}=p, \zeta_{2}=q) = P(\eta_{1}= p-1,\eta_{2}=q-1)= \sum_{\ell=max (0, p+q-m-2)}^{min (p-1,q-1)}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\Pi_{j=1}^{\ell}G^{*}_{i_{j}}(x,y)\\ \Pi_{j=\ell+1}^{p-1}[G^{*}_{1,i_{j}}(x)-G^{*}_{i_{j}}(x,y)]\Pi_{j=p}^{q-\ell-1+p}[G^{*}_{2,i_{j}}(y)-G^{*}_{i_{j}}(x,y)]\Pi_{j =q-\ell+p}^{m}\overline{G}^{*}_{1,i_{j}}(x) f_{k,k':n(\underline{w})} dxdy, \end{aligned}} $$
where p, q=1,...,m+1. This completes the proof.
Faculty of Science, Department of Mathematics, Girls Branch, Al-Azhar University, Cairo, Egypt
A. R. Omar
The author read and approved the final manuscript.
Correspondence to A. R. Omar.
The author declares that she has no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Omar, A.R. On the joint distribution of order statistics from independent non-identical bivariate distributions. J Egypt Math Soc 27, 29 (2019). https://doi.org/10.1186/s42787-019-0034-9
Keywords: Bivariate order statistics; Random vector.
Pat'sBlog
The mathematical (and other) thoughts of a (now retired) math teacher,
On This Day in Math - September 15
Once a sage asked why scholars always flock to the doors of the rich, whilst the rich are not inclined to call at the doors of scholars. "The scholars," he answered, "are well aware of the use of money, but the rich are ignorant of the nobility of science."
~Al-Biruni
The 258th day of the year; 258 is a sphenic (wedge) number, the product of three distinct prime factors: 258 = 2·3·43. It is also the sum of four consecutive primes: 258 = 59 + 61 + 67 + 71.
(Jim Wilder @Wilderlab pointed out that 2, 5, & 8 are the numbers in the center column of a phone or calculator.) Jim's comment reminded me of a math-type phone joke I saw at Wolfram MathWorld:
"I'm sorry, the number you have dialed is an imaginary number. Please rotate by 90° and try again."
Taking this joke one step further gives the "identity" \( 8*i = \infty \) And that reminds me of this cartoon at Mind Research Institute.
The Number Zoo gives a magic square using 16 consecutive primes, with a constant of 258.
1739 Euler, in a letter to Johann Bernoulli, begins the general treatment of the homogeneous linear differential equation with constant coefficients. *VFR Within a year Euler had completed this treatment by successfully dealing with repeated quadratic factors and turned his attention to the non-homogeneous linear equation. *John E. Sasser, HISTORY OF ORDINARY DIFFERENTIAL EQUATIONS -THE FIRST HUNDRED YEARS
1749 Euler's interest in lotteries began at the latest in 1749 when he was commissioned by Frederick the Great to render an opinion on a proposed lottery that would be similar to the Lottery in Genoa. The first of two letters began 15 September 1749. A second series began on 17 August 1763. E812. Read before the Academy of Berlin 10 March 1763 but only published posthumously in 1862. "Réflexions sur une espèce singulière de loterie nommée loterie génoise." Opera postuma I, 1862, p. 319–335. The paper determined the probability that a particular number be drawn. *Euler's Correspondence Translated by Richard J. Pulskamp, Department of Mathematics & Computer Science, Xavier University,
1782 Lagrange, in a letter to Laplace, told of finishing his Mécanique analytique. Legendre undertook the editing of the work for the press. *VFR
1784 Balloon Corner in London earns its name. 'Vincent' Lunardi, "The Daredevil Aeronaut", demonstrated a hydrogen balloon flight at the Artillery Ground of the Honourable Artillery Company in London before a reported crowd of 200,000 people. With a cat, a dog, and a caged pigeon, he rose into the air with only a partially filled bag and then set down at Welham Green to release the cat, which seems to have become airsick. He then continued to Standon Green End. A stone marks the event in Welham Green:
"NEAR THIS SPOT AT 3.30 IN THE
AFTERNOON OF SEPTEMBER 15TH
1784 VINCENZO LUNARDI THE
ITALIAN BALLOONIST MADE HIS
FIRST LANDING WHILST ON HIS
PIONEER FLIGHT IN THE ENGLISH
HAVING HANDED OUT A CAT AND DOG
THE PARTNERS OF HIS FLIGHT FROM
LONDON HE RE-ASCENDED AND
CONTINUED NORTH EASTWARD."
The 24 mile flight brought Lunardi fame and began the ballooning fad that inspired fashions of the day—Lunardi skirts were decorated with balloon styles, and in Scotland, the Lunardi Bonnet was named after him (balloon-shaped and standing some 600 mm tall), and is even mentioned by Scotland's national poet, Robert Burns (1759–96), in his poem 'To a Louse', written about a young woman called Jenny, who had a louse scampering in her Lunardi bonnet. *Wik
1788 Thomas Paine writes to Thomas Jefferson to discuss shapes for Iron Bridges:
Whether I shall set off a catenarian Arch or an Arch of a Circle I have not yet determined, but I mean to set off both and take my choice. There is one objection against a Catenarian Arch, which is, that the Iron tubes being all cast in one form will not exactly fit every part of it. An Arch of a Circle may be sett off to any extent by calculating the Ordinates, at equal distances on the diameter. In this case, the Radius will always be the Hypothenuse, the portion of the diameter be the Base, and the Ordinate the perpendicular or the Ordinate may be found by Trigonometry in which the Base, the Hypothenuse and right angle will be always given.
Jefferson's reply of Dec 23, 1788 is cited by OED as the first use of "catenary". *Jeff Miller
1846 George Boole, age 30, applied for a professorship at "any of her Majesty's colleges, now in the course of being established in Ireland." Although he had "never studied at a college" he had been a teacher for 15 years and was "familiar with the elementary and the practical as well as the higher Mathematics." Although he was self-taught, the testimonies of DeMorgan, Cayley, and William Thomson showed that he was an accomplished mathematician. In August 1849, he was appointed the first professor of mathematics at Queen's College Cork. The reason for the long delay is unclear. *MacHale, George Boole, His Life and Work, pp. 75-84
1855 Sylvester commenced his duties as professor of mathematics and lecturer in natural philosophy at the Royal Military Academy, Woolwich, and one of the richest research periods of his life began. [Osiris, 1(1936), 101] *VFR
1947 The world's oldest computing society, the Association for Computing Machinery, is founded. With more than 80,000 members today, ACM organizes conferences and educational workshops to exchange information on technology. *CHM
2017 The Cassini space probe, launched in 1997, was named after Giovanni Cassini and became the first probe to orbit Saturn. For over a decade the probe sent back vital information about Saturn, expanding our knowledge of the planet, its rings, and its moons. Fittingly, it crashed into the planet on the day following the anniversary of the death of the astronomer for whom it was named. Earth received @CassiniSaturn's final signal at 7:55am ET. Cassini is now part of the planet it studied. *NASA
BIRTHS

973 Al-Biruni (15 Sept 973, 13 Dec 1048) is one of the major figures of Islamic mathematics. He contributed to astronomy, mathematics, physics, medicine and history. *SAU
1736 Jean-Sylvain Bailly (15 Sep 1736; 12 Nov 1793) French astronomer who computed an orbit for Halley's Comet (1759) and studied the four satellites of Jupiter then known. He was the first Mayor of Paris (1789-91). He was executed by guillotine in Paris during the French Revolution. *TIS
Bailly published his Essay on the theory of the satellites of Jupiter in 1766, an expansion of a presentation he had made to the Academy in 1763. It was followed up in 1771 by a noteworthy dissertation, On the inequalities of light of the satellites of Jupiter, and in 1778 he was elected a foreign member of the Royal Swedish Academy of Sciences. *Wik
1852 Edward Bouchet (15 Sept 1852, New Haven, Conn – 28 Oct 1918, New Haven, Conn) was the first African-American to earn a Ph.D. in physics from an American university and the first African-American to graduate from Yale University, in 1874. He completed his dissertation in Yale's Ph.D. program in 1876, becoming the first African-American to receive a Ph.D. (in any subject). His area of study was physics. Bouchet was also the first African-American to be elected to Phi Beta Kappa.
Bouchet was also among the first 20 Americans (of any race) to receive a Ph.D. in physics and was the sixth to earn a Ph.D. in physics from Yale.
Edward Bouchet was born in New Haven, Connecticut. At that time there were only three schools in New Haven open to black children. Bouchet was enrolled in the Artisan Street Colored School with only one teacher, who nurtured Bouchet's academic abilities. He attended the New Haven High School from 1866–1868 and then Hopkins School from 1868-1870 where he was named valedictorian (after graduating first in his class).
Bouchet was unable to find a university teaching position after college, most likely due to racial discrimination. Bouchet moved to Philadelphia in 1876 and took a position at the Institute for Colored Youth (ICY). He taught physics and chemistry at the ICY for 26 years. The ICY was later renamed Cheyney University. He resigned in 1902 at the height of the W. E. B. Du Bois-Booker T. Washington controversy over the need for an industrial vs. collegiate education for blacks.
Bouchet spent the next 14 years holding a variety of jobs around the country. Between 1905 and 1908, Bouchet was director of academics at St. Paul's Normal and Industrial School in Lawrenceville, Virginia (presently, St. Paul's College). He was then principal and teacher at Lincoln High School in Gallipolis, Ohio from 1908 to 1913. He joined the faculty of Bishop College in Marshall, Texas in 1913. Illness finally forced him to retire in 1916 and he moved back to New Haven. He died there, in his childhood home, in 1918, at the age of 66. He had never married and had no children. *Wik
1883 Esteban Terrades i Illa (15 September 1883, Barcelona – 9 May 1950, Madrid) was a Spanish mathematician, scientist and engineer. He researched and taught widely in the fields of mathematics and the physical sciences, working not only in his native Catalonia, but also in the rest of Spain and in South America. He was also active as a consultant in the Spanish aeronautics, electric power, telephone and railway industries. *Wik
1886 Paul Pierre Lévy (15 Sep 1886; 15 Dec 1971) was a French mining engineer and mathematician. He contributed to probability, functional analysis, partial differential equations and series. He also studied geometry. In 1926 he extended Laplace transforms to broader function classes. He undertook a large-scale work on generalised differential equations in functional derivatives. *TIS
1894 Oskar Benjamin Klein (September 15, 1894 (or 1893?) – February 5, 1977) was a Swedish theoretical physicist. Klein retired as professor emeritus in 1962. He was awarded the Max Planck medal in 1959. He is credited for inventing the idea, part of Kaluza–Klein theory, that extra dimensions may be physically real but curled up and very small, an idea essential to string theory / M-theory. *Wik
1901 Luigi Fantappiè (15 September 1901 – 28 July 1956) was an Italian mathematician, known for work in mathematical analysis and for creating the theory of analytic functionals: he was a student and follower of Vito Volterra. Later in life he proposed scientific theories of sweeping scope. *Wik
1923 Georg Kreisel FRS (born September 15, 1923 in Graz) is an Austrian-born mathematical logician who has studied and worked in Great Britain and America. Kreisel came from a Jewish background; his family sent him to England before the Anschluss, where he studied mathematics at Trinity College, Cambridge and then, during World War II, worked on military subjects. After the war he returned to Cambridge and received his doctorate. He taught at the University of Reading until 1954 and then worked at the Institute for Advanced Study from 1955 to 1957. Subsequently he taught at Stanford University and the University of Paris. Kreisel was appointed a professor at Stanford University in 1962 and remained on the faculty there until he retired in 1985.
Kreisel worked in various areas of logic, and especially in proof theory, where he is known for his so-called "unwinding" program, whose aim was to extract constructive content from superficially non-constructive proofs. *Wik
1926 Jean-Pierre Serre (15 September 1926 - ) born in Bages, France. In 1954 he received a Fields Medal for his work on the homotopy groups of spheres. He also reformulated some of the main results of complex variable theory in terms of sheaves. See International Mathematical Congresses. An Illustrated History, 1893–1986, edited by Donald J. Albers, G. L. Alexanderson and Constance Reid.
1929 Murray Gell-Mann (15 Sep 1929 - ). American theoretical physicist who predicted the existence of quarks. He was awarded the 1969 Nobel Prize for Physics for his contributions to particle physics. His first major contribution to high-energy physics was made in 1953, when he demonstrated how some puzzling features of hadrons (particles responsive to the strong force) could be explained by a new quantum number, which he called "strangeness". In 1964, he (and Yuval Ne'eman) proposed the eightfold way to define the structure of particles. This led to Gell-Mann's postulate of the quark, a name he coined (from a word in James Joyce's Finnegans Wake). *TIS
DEATHS

1883 physicist J. Plateau (14 October 1801 – 15 September 1883) Plateau's problem asks for the minimal surface through a given curve in three dimensions. A minimal surface is the surface through the curve with the least area. Mathematically the problem is still unsolved, but physical solutions are easy: dip a curved wire in a soap solution. The "soap bubble" that results is the minimal surface for that curve. *VFR
In 1829 Joseph Plateau submitted his doctoral thesis to his mentor Adolphe Quetelet for advice. It contained only 27 pages, but formulated a great number of fundamental conclusions. It contained the first results of his research into the effect of colors on the retina (duration, intensity and color), his mathematical research into the intersections of revolving curves (locus), the observation of the distortion of moving images, and the reconstruction of distorted images through counter-revolving discs. Prior to going blind, he was the first person to demonstrate the illusion of a moving image. To do this he used counter-rotating disks with repeating drawn images in small increments of motion on one and regularly spaced slits in the other. He called this device of 1832 the phenakistoscope.
Plateau has often been termed a "martyr for science". In many (popular) publications the blindness of Plateau is ascribed to his experiment of 1829 in which he looked directly into the sun for 25 seconds. Recent research definitively refutes this. The exact date of the blindness is difficult to pin down; it was a gradual process during the year 1843 and early 1844. Plateau published two papers in which he painstakingly described the scientific observations of his own blindness. After 40 years of blindness he still had subjective visual sensations. For his experiments, as well as for the related deskwork, colleagues and family helped him. *Wik
1898 William Seward Burroughs (28 Jan 1855 - 15 Sep 1898) American inventor who created the world's first commercially viable recording adding machine and pioneered its manufacture. He was inspired by his experience in his beginning career as a bank clerk. On 10 Jan 1885 he submitted his first patent (issued as No. 388,116 on 21 Aug 1888) for his mechanical "calculating machine." Burroughs co-founded the American Arithmometer Co in 1886 to develop and market the machine. The manufacture of the first machines was contracted out, and their durability was unsatisfactory. He continued to refine his design for accuracy and reliability, receiving more patents in 1892, and began selling the much-improved model for $475 each. By 1895, 284 machines had been sold, mostly to banks, and 1500 by 1900. The company later became Burroughs Corporation (1905) and eventually Unisys. *TIS
1962 William W(eber) Coblentz (20 Nov 1873, 15 Sep 1962) was an American physicist and astronomer whose work lay primarily in infrared spectroscopy. In 1905 he founded the radiometry section of the National Bureau of Standards, which he headed for 40 years. Coblentz measured the infrared radiation from stars, planets, and nebulae and was the first to determine accurately the constants of blackbody radiation, thus confirming Planck's law. *TIS
*CHM=Computer History Museum
*FFF=Kane, Famous First Facts
*NSEC= NASA Solar Eclipse Calendar
*RMAT= The Renaissance Mathematicus, Thony Christie
*SAU=St Andrews Univ. Math History
*TIA = Today in Astronomy
*TIS= Today in Science History
*VFR = V Frederick Rickey, USMA
*Wik = Wikipedia
*WM = Women of Mathematics, Grinstein & Campbell
It is known that American college students have embraced cognitive enhancement, and some information exists about the demographics of the students most likely to practice cognitive enhancement with prescription stimulants. Outside of this narrow segment of the population, very little is known. What happens when students graduate and enter the world of work? Do they continue using prescription stimulants for cognitive enhancement in their first jobs and beyond? How might the answer to this question depend on occupation? For those who stay on campus to pursue graduate or professional education, what happens to patterns of use? To what extent do college graduates who did not use stimulants as students begin to use them for cognitive enhancement later in their careers? To what extent do workers without college degrees use stimulants to enhance job performance? How do the answers to these questions differ for countries outside of North America, where the studies of Table 1 were carried out?
Up to 20% of Ivy League college students have already tried "smart drugs," so we can expect these pills to feature prominently in organizations (if they don't already). After all, the pressure to perform is unlikely to disappear the moment students graduate. And senior employees with demanding jobs might find these drugs even more useful than a 19-year-old college kid does. Indeed, a 2012 Royal Society report emphasized that these "enhancements," along with other technologies for self-enhancement, are likely to have far-reaching implications for the business world.
And yet aside from anecdotal evidence, we know very little about the use of these drugs in professional settings. The Financial Times has claimed that they are "becoming popular among city lawyers, bankers, and other professionals keen to gain a competitive advantage over colleagues." Back in 2008 the narcolepsy medication Modafinil was labeled the "entrepreneur's drug of choice" by TechCrunch. That same year, the magazine Nature asked its readers whether they use cognitive-enhancing drugs; of the 1,400 respondents, one in five responded in the affirmative.
For illustration, consider amphetamines, Ritalin, and modafinil, all of which have been proposed as cognitive enhancers of attention. These drugs exhibit some positive effects on cognition, especially among individuals with lower baseline abilities. However, individuals of normal or above-average cognitive ability often show negligible improvements or even decrements in performance following drug treatment (for details, see de Jongh, Bolt, Schermer, & Olivier, 2008). For instance, Randall, Shneerson, and File (2005) found that modafinil improved performance only among individuals with lower IQ, not among those with higher IQ. [See also Finke et al 2010 on visual attention.] Farah, Haimm, Sankoorikal, & Chatterjee 2009 found a similar nonlinear relationship of dose to response for amphetamines in a remote-associates task, with low-performing individuals showing enhanced performance but high-performing individuals showing reduced performance. Such ∩-shaped dose-response curves are quite common (see Cools & Robbins, 2004).
"I have a bachelors degree in Nutrition Science. Cavin Balaster's How to Feed a Brain is one of the best written health nutrition books that I have ever read. It is evident that through his personal journey with a TBI and many years of research Cavin has gained a great depth of understanding of the biomechanics of nutrition and how it relates to the structure of the brain and nervous system, as well as how all of the body systems intercommunicate with one another. He then takes this complicated knowledge and breaks it down into a concise and comprehensive book. If you or your loved one is suffering from ANY neurological disorder or TBI please read this book."
Some supplement blends, meanwhile, claim to work by combining ingredients – bacopa, cat's claw, huperzia serrata and oat straw in the case of Alpha Brain, for example – that have some support for boosting cognition and other areas of nervous system health. One 2014 study in Frontiers in Aging Neuroscience, suggested that huperzia serrata, which is used in China to fight Alzheimer's disease, may help slow cell death and protect against (or slow the progression of) neurodegenerative diseases. The Alpha Brain product itself has also been studied in a company-funded small randomized controlled trial, which found Alpha Brain significantly improved verbal memory when compared to adults who took a placebo.
Today piracetam is a favourite with students and young professionals looking for a way to boost their performance, though decades after Giurgea's discovery, there still isn't much evidence that it can improve the mental abilities of healthy people. It's a prescription drug in the UK, though it's not approved for medical use by the US Food and Drug Administration and can't be sold as a dietary supplement either.
My predictions were substantially better than random chance [7], so my default belief - that Adderall does affect me and (mostly) for the better - is borne out. I usually sleep very well and 3 separate incidents of horrible sleep in a few weeks seems rather unlikely (though I didn't keep track of dates carefully enough to link the Zeo data with the Adderall data). Between the price and the sleep disturbances, I don't think Adderall is personally worthwhile.
Cytisine is not known as a stimulant and I'm not addicted to nicotine, so why give it a try? Nicotine is one of the more effective stimulants available, and it's odd how few nicotine analogues or nicotinic agonists there are available; nicotine has a few flaws like short half-life and increasing blood pressure, so I would be interested in a replacement. The nicotine metabolite cotinine, in the human studies available, looks intriguing and potentially better, but I have been unable to find a source for it. One of the few relevant drugs which I can obtain is cytisine, from Ceretropic, at 2x1.5mg doses. There are not many anecdotal reports on cytisine, but at least a few suggest somewhat comparable effects with nicotine, so I gave it a try.
Ongoing studies are looking into the possible pathways by which nootropic substances function. Researchers have postulated that the mental health advantages derived from these substances can be attributed to their effects on the cholinergic and dopaminergic systems of the brain. These systems regulate two important neurotransmitters, acetylcholine and dopamine.
Also known as Arcalion, Bisbuthiamine, or Enerion, Sulbutiamine is a compound of the Sulphur group and is an analog of vitamin B1, which is known to pass the blood-brain barrier easily. Sulbutiamine is found to circulate faster than Thiamine from blood to brain. It is recommended for patients suffering from mental fatigue caused by emotional and psychological stress. The best part about this compound is that it does not have most of the common side effects linked with a few nootropics.
These days, nootropics are beginning to take their rightful place as a particularly powerful tool in the Neurohacker's toolbox. After all, biochemistry is deeply foundational to neural function. Whether you are trying to fix the damage that is done to your nervous system by a stressful and toxic environment or support and enhance your neural functioning, getting the chemistry right is table-stakes. And we are starting to get good at getting it right. What's changed?
Natural nootropic supplements derive from various nutritional studies. Research shows the health benefits of isolated vitamins, nutrients, and herbs. By increasing your intake of certain herbal substances, you can enhance brain function. Below is a list of the top categories of natural and herbal nootropics. These supplements are mainstays in many of today's best smart pills.
Finally, a workforce high on stimulants wouldn't necessarily be more productive overall. "One thinks 'are these things dangerous?' – and that's important to consider in the short term," says Huberman. "But there's also a different question, which is: 'How do you feel the day afterwards?' Maybe you're hyper-focused for four hours, 12 hours, but then you're below baseline for 24 or 48."
Fitzgerald 2012 and the general absence of successful experiments suggests not, as does the general historic failure of scores of IQ-related interventions in healthy young adults. Of the 10 studies listed in the original section dealing with iodine in children or adults, only 2 show any benefit; in lieu of a meta-analysis, a rule of thumb would be 20%, but both those studies used a package of dozens of nutrients - and not just iodine - so if the responsible substance were randomly picked, that suggests we ought to give it a chance of \(20\% \times \frac{1}{\text{dozens}}\) of being iodine! I may be unduly optimistic if I give this as much as 10%.
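To make that last step of arithmetic concrete (reading "dozens" as roughly 30 ingredients, which is my own illustrative assumption, not a figure given above):

$$20\% \times \frac{1}{30} \approx 0.7\%,$$

well below even the 10% just described as possibly over-optimistic.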
Of course, there are drugs out there with more transformative powers. "I think it's very clear that some do work," says Andrew Huberman, a neuroscientist based at Stanford University. In fact, there's one category of smart drugs which has received more attention from scientists and biohackers – those looking to alter their own biology and abilities – than any other. These are the stimulants.
Regardless, while in the absence of piracetam, I did notice some stimulant effects (somewhat negative - more aggressive than usual while driving) and similar effects to piracetam, I did not notice any mental performance beyond piracetam when using them both. The most I can say is that on some nights, I seemed to be less easily tired when writing or editing or n-backing (and I felt less tired at ICON 2011 than at ICON 2010), but those were also often nights I was also trying out all the other things I had gotten in that order from Smart Powders, and I am still dis-entangling what was responsible. (Probably the l-theanine or sulbutiamine.)
As Sulbutiamine crosses the blood-brain barrier very easily, it has a positive effect on the cholinergic and the glutamatergic receptors that are responsible for essential activities impacting memory, concentration, and mood. The compound is also fat-soluble, which means it circulates rapidly and widely throughout the body and the brain, ensuring positive results. Thus, patients with schizophrenia and Parkinson's disease will find the drug to be very effective.
Vitamin B12 is also known as Cobalamin and is a water-soluble essential vitamin. A (large) deficiency of Vitamin B12 will ultimately lead to cognitive impairment [52]. Older people and people who don't eat meat are at a higher risk than young people who eat more meat. And people with depression have less Vitamin B12 than the average population [53].
Barbara Sahakian, a neuroscientist at Cambridge University, doesn't dismiss the possibility of nootropics to enhance cognitive function in healthy people. She would like to see society think about what might be considered acceptable use and where it draws the line – for example, young people whose brains are still developing. But she also points out a big problem: long-term safety studies in healthy people have never been done. Most efficacy studies have only been short-term. "Proving safety and efficacy is needed," she says.
I can test fish oil for mood, since the other claimed benefits like anti-schizophrenia are too hard to test. The medical student trial (Kiecolt-Glaser et al 2011) did not see changes until visit 3, after 3 weeks of supplementation. (Visit 1, 3 weeks, visit 2, supplementation started for 3 weeks, visit 3, supplementation continued 3 weeks, visit 4 etc.) There were no tests in between the test starting week 1 and starting week 3, so I can't pin it down any further. This suggests randomizing in 2 or 3 week blocks. (For an explanation of blocking, see the footnote in the Zeo page.)
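To make the design concrete, here is a minimal Python sketch of one way to lay out such a blocked on/off schedule (the six-block, two-weeks-per-block layout, the treatment labels, and the seed are my illustrative assumptions, not anything specified above):

```python
import random

def blocked_schedule(n_blocks=6, block_weeks=2, seed=2011):
    """Randomly assign each block of consecutive weeks to fish oil or
    placebo, balanced so half the blocks receive each treatment."""
    rng = random.Random(seed)
    arms = ["fish oil", "placebo"] * (n_blocks // 2)
    rng.shuffle(arms)
    return [(f"weeks {i * block_weeks + 1}-{(i + 1) * block_weeks}", arm)
            for i, arm in enumerate(arms)]

for weeks, arm in blocked_schedule():
    print(weeks, "->", arm)
```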
Similarly, we could try applying Nick Bostrom's reversal test and ask ourselves, how would we react to a virus which had no effect but to eliminate sleep from alternating nights and double sleep in the intervening nights? We would probably grouch about it for a while and then adapt to our new hedonistic lifestyle of partying or working hard. On the other hand, imagine the virus had the effect of eliminating normal sleep but instead, every 2 minutes, a person would fall asleep for a minute. This would be disastrous! Besides the most immediate problems like safely driving vehicles, how would anything get done? You would hold a meeting and at any point, a third of the participants would be asleep. If the virus made it instead 2 hours on, one hour off, that would be better but still problematic: there would be constant interruptions. And so on, until we reach our present state of 16 hours on, 8 hours off. Given that we rejected all the earlier buffer sizes, one wonders if 16:8 can be defended as uniquely suited to circumstances. Is that optimal? It may be, given the synchronization with the night-day cycle, but I wonder; rush hour alone stands as an argument against synchronized sleep - wouldn't our infrastructure be much cheaper if it only had to handle the average daily load rather than cope with the projected peak loads? Might not a longer cycle be better? The longer the day, the less we are interrupted by sleep; it's a hoary cliche about programmers that they prefer to work in long sustained marathons during long nights rather than sprint occasionally during a distraction-filled day, to the point where some famously adopt a 28 hour day (which evenly divides a week into 6 days). Are there other occupations which would benefit from a 20 hour waking period? Or 24 hour waking period? We might not know because without chemical assistance, circadian rhythms would overpower anyone attempting such schedules. It certainly would be nice if one had long time chunks in which one could read a challenging book in one sitting, without heroic arrangements.
For Malcolm Gladwell, "the thing with doping is that it allows you to train harder than you would have done otherwise." He argues that we cannot easily call someone a cheater on the basis of having used a drug for this purpose. The equivalent, he explains, would be a student who steals an exam paper from the teacher, and then instead of going home and not studying at all, goes to a library and studies five times harder.
When it comes to coping with exam stress or meeting that looming deadline, the prospect of a "smart drug" that could help you focus, learn and think faster is very seductive. At least this is what current trends on university campuses suggest. Just as you might drink a cup of coffee to help you stay alert, an increasing number of students and academics are turning to prescription drugs to boost academic performance.
Do you start your day with a cup (or two, or three) of coffee? It tastes delicious, but it's also jump-starting your brain because of its caffeine content. Caffeine is definitely a nootropic substance—it's a mild stimulant that can alleviate fatigue and improve concentration, according to the Mayo Clinic. Current research shows that coffee drinkers don't suffer any ill effects from drinking up to about four cups of coffee per day. Caffeine is also found in tea, soda, and energy drinks. Not too surprisingly, it's also in many of the nootropic supplements that are being marketed to people looking for a mental boost. Take a look at these 7 genius brain boosters to try in the morning.
"Love this book! Still reading and can't wait to see what else I learn…and I am not brain injured! Cavin has already helped me to take steps to address my food sensitivity…seems to be helping and I am only on day 5! He has also helped me to help a family member who has suffered a stroke. Thank you Cavin, for sharing all your knowledge and hard work with us! This book is for anyone that wants to understand and implement good nutrition with all the latest research to back it up. Highly recommend!"
Many studies suggest that Creatine helps in treating cognitive decline in individuals when combined with other therapies. It also helps people suffering from Parkinson's and Huntington's disease. Though there are minimal side effects associated with creatine, pretty much like any nootropic, it is not entirely free of side-effects. An overdose of creatine can lead to gastrointestinal issues, weight gain, stress, and anxiety.
Unfortunately, cognitive enhancement falls between the stools of research funding, which makes it unlikely that such research programs will be carried out. Disease-oriented funders will, by definition, not support research on normal healthy individuals. The topic intersects with drug abuse research only in the assessment of risk, leaving out the study of potential benefits, as well as the comparative benefits of other enhancement methods. As a fundamentally applied research question, it will not qualify for support by funders of basic science. The pharmaceutical industry would be expected to support such research only if cognitive enhancement were to be considered a legitimate indication by the FDA, which we hope would happen only after considerably more research has illuminated its risks, benefits, and societal impact. Even then, industry would have little incentive to delve into all of the issues raised here, including the comparison of drug effects to nonpharmaceutical means of enhancing cognition.
"Cavin Balaster knows brain injury as well as any specialist. He survived a horrific accident and came out on the other side stronger than ever. His book, "How To Feed A Brain" details how changing his diet helped him to recover further from the devastating symptoms of brain injury such as fatigue and brain fog. Cavin is able to thoroughly explain complex issues in a simplified manner so the reader does not need a medical degree to understand. The book also includes comprehensive charts to simplify what the body needs and how to provide the necessary foods. "How To Feed A Brain" is a great resource for anyone looking to improve their health through diet, brain injury not required."
Even though smart drugs come with a long list of benefits, their misuse can cause negative side effects. Excess use can cause anxiety, fear, headaches, increased blood pressure, and more. Considering this, it is imperative to study usage instructions: how often can you take the pill, the correct dosage and interaction with other medication/supplements.
Fortunately, there are some performance-enhancing habits that have held up under rigorous scientific scrutiny. They are free, and easy to pronounce. Unfortunately, they are also the habits you were perhaps hoping to forego by using nootropics instead. "Of all the things that are supposed to be 'good for the brain,'" says Stanford neurology professor Sharon Sha, "there is more evidence for exercise than anything else." Next time you're facing a long day, you could take a pill and see what happens.
Proteus Digital Health (Redwood City, Calif.) offers an FDA-approved microchip—an ingestible pill that tracks medication-taking behavior and how the body is responding to medicine. Through the company's Digital Health Feedback System, the sensor monitors blood flow, body temperature and other vital signs for people with heart problems, schizophrenia or Alzheimer's disease.
Integrity & Reputation: Go with a company that sells more than just a brain formula. If a company is just selling this one item, buyer beware! It is an indication that it is just trying to capitalize on a trend and make a quick buck. Also, if a website selling a brain health formula does not have a highly visible 800# for customer service, you should walk away.
Racetams are the best-known smart drugs on the market, and have decades of widespread use behind them. Piracetam is a leading smart drug, commonly prescribed to seniors with Alzheimer's or pre-dementia symptoms – but studies have shown Piracetam's beneficial effects extend to people of all ages, as young as university students. The Racetams speed up chemical exchange between brain cells. Effects include increases in verbal learning, mental clarity, and general IQ. Other members of the Racetam family include Pramiracetam, Oxiracetam, and Aniracetam, which differ from Piracetam primarily in their potency, not their actual effects.
Please browse our website to learn more about how to enhance your memory. Our blog contains informative articles about the science behind nootropic supplements, specific ingredients, and effective methods for improving memory. Browse through our blog articles and read and compare reviews of the top rated natural supplements and smart pills to find everything you need to make an informed decision.
(As I was doing this, I reflected how modafinil is such a pure example of the money-time tradeoff. It's not that you pay someone else to do something for you, which necessarily they will do in a way different from you; nor is it that you have exchanged money to free yourself of a burden of some future time-investment; nor have you paid money for a speculative return of time later in life like with many medical expenses or supplements. Rather, you have paid for 8 hours today of your own time.)
The absence of a suitable home for this needed research on the current research funding landscape exemplifies a more general problem emerging now, as applications of neuroscience begin to reach out of the clinical setting and into classrooms, offices, courtrooms, nurseries, marketplaces, and battlefields (Farah, 2011). Most of the longstanding sources of public support for neuroscience research are dedicated to basic research or medical applications. As neuroscience is increasingly applied to solving problems outside the medical realm, it loses access to public funding. The result is products and systems reaching the public with less than adequate information about effectiveness and/or safety. Examples include cognitive enhancement with prescription stimulants, event-related potential and fMRI-based lie detection, neuroscience-based educational software, and anti-brain-aging computer programs. Research and development in nonmedical neuroscience are now primarily the responsibility of private corporations, which have an interest in promoting their products. Greater public support of nonmedical neuroscience research, including methods of cognitive enhancement, will encourage greater knowledge and transparency concerning the efficacy and safety of these products and will encourage the development of products based on social value rather than profit value.
In general, I feel a little bit less alert, but still close to normal. By 6PM, I have a mild headache, but I try out 30 rounds of gbrainy (haven't played it in months) and am surprised to find that I reach an all-time high; no idea whether this is due to DNB or not, since Gbrainy is very heavily crystallized (half the challenge disappears as you learn how the problems work), but it does indicate I'm not deluding myself about mental ability. (To give a figure: my last score well before I did any DNB was 64, and I was doing well that day; on modafinil, I had a 77.) I figure the headache might be food related, eat, and by 7:30 the headache is pretty much gone and I'm fine up to midnight.
The evidence? Although everyone can benefit from dietary sources of essential fatty acids, supplementation is especially recommended for people with heart disease. A small study published in 2013 found that DHA may enhance memory and reaction time in healthy young adults. However, a more recent review suggested that there is not enough evidence of any effect from omega 3 supplementation in the general population.
The abuse of drugs is something that can lead to large negative outcomes. If you take Ritalin (Methylphenidate) or Adderall (mixed amphetamine salts) but don't have ADHD, you may experience more focus. But what many people don't know is that the drug is very similar to amphetamines. And the use of Ritalin is associated with serious adverse events of drug dependence, overdose and suicide attempts [80]. Taking a drug for another reason than originally intended is stupid, irresponsible and very dangerous.
You'll find several supplements that can enhance focus, energy, creativity, and mood. These brain enhancers can work very well, and their benefits often increase over time. Again, nootropics won't dress you in a suit and carry you to Wall Street. That is a decision you'll have to make on your own. But, smart drugs can provide the motivation boost you need to make positive life changes.
Table 5 lists the results of 16 tasks from 13 articles on the effects of d-AMP or MPH on cognitive control. One of the simplest tasks used to study cognitive control is the go/no-go task. Subjects are instructed to press a button as quickly as possible for one stimulus or class of stimuli (go) and to refrain from pressing for another stimulus or class of stimuli (no go). De Wit et al. (2002) used a version of this task to measure the effects of d-AMP on subjects' ability to inhibit a response and found enhancement in the form of decreased false alarms (responses to no-go stimuli) and increased speed of correct go responses. They also found that subjects who made the most errors on placebo experienced the greatest enhancement from the drug.
Powders are good for experimenting with (easy to vary doses and mix), but not so good for regular taking. I use OO gel capsules with a Capsule Machine: it's hard to beat $20, it works, it's not that messy after practice, and it's not too bad to do 100 pills. However, I once did 3kg of piracetam + my other powders, and doing that nearly burned me out on ever using capsules again. If you're going to do that much, something more automated is a serious question! (What actually wound up infuriating me the most was when capsules would stick in either the bottom or top tray - requiring you to very gingerly pull and twist them out, lest the two halves slip and spill powder - or when the two halves wouldn't lock and you had to join them by hand. In contrast: loading the gel caps could be done automatically without looking, after some experience.)
Natural and herbal nootropics are by far the safest and best smart drugs to ingest. For this reason, they're worth covering first. Our recommendation is always to stick with natural brain fog cures. Herbal remedies for enhancing mental cognition are often side-effect free. These substances are superior for both long-term safety and effectiveness. They are also well-studied and have deep roots in traditional medicine.
Exercise and nutrition also play an important role in neuroplasticity. Many vitamins and ingredients found naturally in food products have been shown to have cognitive enhancing effects. Some of these include vitamins B6 and B12, caffeine, phenethylamine found in chocolate and l-theanine, found in green tea, whose combined effects with caffeine are more extensively researched.
The flanker task is designed to tax cognitive control by requiring subjects to respond based on the identity of a target stimulus (H or S) and not the more numerous and visually salient stimuli that flank the target (as in a display such as HHHSHHH). Servan-Schreiber, Carter, Bruno, and Cohen (1998) administered the flanker task to subjects on placebo and d-AMP. They found an overall speeding of responses but, more importantly, an increase in accuracy that was disproportionate for the incongruent conditions, that is, the conditions in which the target and flankers did not match and cognitive control was needed.
Adderall increases dopamine and noradrenaline availability within the prefrontal cortex, an area in which our memory and attention are controlled. As such, this smart pill improves our mood, makes us feel more awake and attentive. It is also known for its lasting effect – depending on the dose, it can last up to 12 hours. However, note that it is crucial to get confirmation from your doctor on the exact dose you should take.
The Smart Pills Technology is primarily utilized for dairy products, soft drinks, and water, catering in diverse shapes and sizes to various consumers. The rising preference for easy-to-carry liquid foods is expected to boost the demand for these packaging cartons, thereby fueling the market growth. The changing lifestyle of people coupled with the convenience of utilizing carton packaging is projected to propel the market. In addition, Smart Pills Technology has an edge over glass and plastic packaging in terms of environmental friendliness and recyclability of the material, which mitigates wastage and reduces the product cost. Thus, the aforementioned factors are expected to drive the Smart Pills Technology market growth over the projected period.
An entirely different set of questions concerns cognitive enhancement in younger students, including elementary school and even preschool children. Some children can function adequately in school without stimulants but perform better with them; medicating such children could be considered a form of cognitive enhancement. How often does this occur? What are the roles and motives of parents, teachers, and pediatricians in these cases? These questions have been discussed elsewhere and deserve continued attention (Diller, 1996; Singh & Keller, 2010).
Coconut oil was recommended by Pontus Granström on the Dual N-Back mailing list for boosting energy & mental clarity. It is fairly cheap (~$13 for 30 ounces) and tastes surprisingly good; it has a very bad reputation in some parts, but seems to be in the middle of a rehabilitation. Seth Roberts' Buttermind experiment found no mental benefits to coconut oil (and benefits to eating butter), but I wonder.
The hormone testosterone (Examine.com; FDA adverse events) needs no introduction. This is one of the scariest substances I have considered using: it affects so many bodily systems in so many ways that it seems almost impossible to come up with a net summary, either positive or negative. With testosterone, the problem is not the usual nootropics problem that that there is a lack of human research, the problem is that the summary constitutes a textbook - or two. That said, the 2011 review The role of testosterone in social interaction (excerpts) gives me the impression that testosterone does indeed play into risk-taking, motivation, and social status-seeking; some useful links and a representative anecdote:
This formula presents a relatively high price and one bottle of 60 tables, at the recommended dosage of two tablets per day with a meal, a bottle provides a month's supply. The secure online purchase is available on the manufacturer's site as well as at several online retailers. Although no free trials or money back guarantees are available at this time, the manufacturer provides free shipping if the desired order exceeds a certain amount. With time different online retailers could offer some advantages depending on the amount purchased, so an online research is advised before purchase, as to assess the market and find the best solution.
Hericium erinaceus (Examine.com) was recommended strongly by several on the ImmInst.org forums for its long-term benefits to learning, apparently linked to Nerve growth factor. Highly speculative stuff, and it's unclear whether the mushroom powder I bought was the right form to take (ImmInst.org discussions seem to universally assume one is taking an alcohol or hotwater extract). It tasted nice, though, and I mixed it into my sleeping pills (which contain melatonin & tryptophan). I'll probably never know whether the $30 for 0.5lb was well-spent or not.
In our list of synthetic smart drugs, Noopept may be the genius pill to rule them all. Up to 1000 times stronger than Piracetam, Noopept may not be suitable for everyone. This nootropic substance requires much smaller doses for enhanced cognitive function. There are plenty of synthetic alternatives to Adderall and prescription ADHD medications. Noopept may be worth a look if you want something powerful over the counter.
Low-tech methods of cognitive enhancement include many components of what has traditionally been viewed as a healthy lifestyle, such as exercise, good nutrition, adequate sleep, and stress management. These low-tech methods nevertheless belong in a discussion of brain enhancement because, in addition to benefiting cognitive performance, their effects on brain function have been demonstrated (Almeida et al., 2002; Boonstra, Stins, Daffertshofer, & Beek, 2007; Hillman, Erickson, & Kramer, 2008; Lutz, Slagter, Dunne, & Davidson, 2008; Van Dongen, Maislin, Mullington, & Dinges, 2003).
He used to get his edge from Adderall, but after moving from New Jersey to San Francisco, he says, he couldn't find a doctor who would write him a prescription. Driven to the Internet, he discovered a world of cognition-enhancing drugs known as nootropics — some prescription, some over-the-counter, others available on a worldwide gray market of private sellers — said to improve memory, attention, creativity and motivation.
Took pill #6 at 12:35 PM. Hard to be sure. I ultimately decided that it was Adderall because I didn't have as much trouble as I normally would in focusing on reading and then finishing my novel (Surface Detail) despite my family watching a movie, though I didn't notice any lack of appetite. Call this one 60-70% Adderall. I check the next evening and it was Adderall.
I'm wary of others, though. The trouble with using a blanket term like "nootropics" is that you lump all kinds of substances in together. Technically, you could argue that caffeine and cocaine are both nootropics, but they're hardly equal. With so many ways to enhance your brain function, many of which have significant risks, it's most valuable to look at nootropics on a case-by-case basis. Here's a list of 9 nootropics, along with my thoughts on each.
Correlation Matrix Stress Testing: Shrinkage Toward an Equicorrelation Matrix
Mathematical preliminaries
Equicorrelation matrices
Correlation matrix stress testing scenarios
Correlations increasing to one
Correlations decreasing to zero
Correlations decreasing to minus one
Determination of the value of the shrinkage factor
Example - preparing for the next financial crisis
Determination of a data-driven shrinkage factor
Computation of the data-driven shrinkage factor
Computation of the stressed correlation matrix
Financial research has consistently shown that correlations between assets tend to increase during crises and tend to decrease during recoveries [1].
The recent COVID-19 market crash was no exception, as illustrated in the Alvarez Quant Trading blog post Correlations go to One for both the individual constituents of the S&P500 and several broad ETFs commonly used in tactical asset allocation strategies.
From a portfolio management perspective, this implies that it is important to understand the impact of sudden changes in correlations before they materialize.
One way to proceed is to alter a baseline correlation matrix by incorporating extreme correlation changes in its coefficients and evaluate how this altered correlation matrix translates into a different portfolio allocation [2], a higher portfolio value at risk [3], etc.
This procedure is usually called correlation (matrix) stress testing.
However, this altered correlation matrix may become invalid [4] and no longer usable in numerical algorithms!
In a previous post, I mentioned that such an invalid correlation matrix could be fixed thanks to the Nearest Correlation Matrix algorithm, as implemented for example in Portfolio Optimizer.
In this post, I will introduce another method, maybe more intuitive, which relies on the shrinkage of the baseline correlation matrix toward an equicorrelation matrix representing the desired crisis state.
A fully functional Google sheet corresponding to this post is available here.
The main references for this post are two papers: one from Andreas Steiner [5] and one from Kawee Numpacharoen [6], in which the shrinkage method is called the Weighted Average Correlation Matrices method.
An equicorrelation matrix of order $n \geq 2$ is a matrix $C_{\rho} \in \mathcal{M} \left( \mathbb{R}^{n \times n} \right)$ which:
Is unit diagonal: $(C_{\rho})_{i,i} = 1$, $i=1..n$
Is symmetric and has all its off-diagonal coefficients equal to a common parameter $\rho \in [-1,1]$: $(C_{\rho})_{i,j} = \rho$, $i,j=1..n, i \ne j$
Is positive semi-definite
In other words, \(C_{\rho} = \begin{pmatrix} 1 & \rho & \rho & \dots & \rho \\ \rho & 1 & \rho & \dots & \rho \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho & \rho & \rho &\dots & 1 \end{pmatrix}\) , with $\rho \in [-1,1]$ such that $C_{\rho}$ is positive semi-definite.
One interesting property that can be established is that the matrix $C_{\rho}$ is positive semi-definite if and only if $\rho \in [-\frac{1}{n-1}, 1]$, which unfortunately severely restricts the magnitude of the negative values of the parameter $\rho$, especially in high dimension.
Let's quickly demonstrate this result.
The matrix $C_{\rho}$ has two eigenvalues7:
$1 + (n-1) \rho$, of multiplicity 1, associated to the eigenvector $u_1 = (1, \dots, 1) {}^t$
$1 - \rho$, of multiplicity $n-1$8, associated to the $n-1$ eigenvectors $u_2, \dots, u_n$, with \((u_k)_i = \begin{cases} 1 &\text{if } i = 1, \\ -1 &\text{if } i = k, \\ 0 &\text{otherwise.} \end{cases}\)
So, because a symmetric matrix is positive semi-definite if and only if all its eigenvalues are non-negative, the matrix $C_{\rho}$ is positive semi-definite if and only if $1 + (n-1) \rho \geq 0$ and $1 - \rho \geq 0$, that is, $\rho \in [-\frac{1}{n-1}, 1]$.
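To make this concrete, here is a minimal Python sketch (NumPy assumed; a convenience for readers, not something the referenced papers provide) that builds $C_{\rho}$ and checks the positive semi-definiteness bound numerically:

```python
import numpy as np

def equicorrelation_matrix(n, rho):
    """Build the n x n equicorrelation matrix C_rho."""
    return (1.0 - rho) * np.eye(n) + rho * np.ones((n, n))

n = 5
for rho in (-1.0, -1.0 / (n - 1), 0.5, 1.0):
    C = equicorrelation_matrix(n, rho)
    eigenvalues = np.linalg.eigvalsh(C)
    is_psd = bool(np.all(eigenvalues >= -1e-12))  # small tolerance for rounding
    print(f"rho = {rho:+.2f}: eigenvalues = {np.round(eigenvalues, 4)}, PSD = {is_psd}")
```

For $n = 5$, the lower bound is $-\frac{1}{4}$: the matrix with $\rho = -1$ fails the check, while $\rho = -\frac{1}{4}$ passes with a zero eigenvalue.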
The usage of shrinkage estimation of covariance (or correlation) matrices in finance was popularized in the 2000s by Olivier Ledoit and Michael Wolf9.
The basic idea is to form a convex linear combination of two covariance (or correlation) matrices in order to reach a balance between their properties.
More formally, given
A reference correlation matrix $C \in \mathcal{M}(\mathbb{R}^{n \times n})$
A target correlation matrix $T \in \mathcal{M}(\mathbb{R}^{n \times n})$
A shrinkage factor10 $\lambda \in [0,1]$
, shrinking the matrix $C$ toward the matrix $T$ is done by computing the matrix \(S_{\lambda} = (1-\lambda) C + \lambda T\).
Note that the matrix $S_{\lambda}$ is a valid correlation matrix for any $\lambda \in [0,1]$, because it is easily established that $S_{\lambda}$ is
Symmetric: $S_{\lambda} {}^t = S_{\lambda}$
Unit diagonal: $(S_{\lambda})_{i,i} = 1$, $i=1..n$
Positive semi-definite: $x {}^t S_{\lambda} x \geq 0, x \in \mathbb{R}^{n}$
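The shrinkage step itself is a one-liner; the sketch below (Python with NumPy, toy matrices chosen purely for illustration) makes the convex combination explicit:

```python
import numpy as np

def shrink_correlation_matrix(C, T, lam):
    """Convex combination S_lambda = (1 - lambda) * C + lambda * T."""
    if not 0.0 <= lam <= 1.0:
        raise ValueError("The shrinkage factor must lie in [0, 1].")
    return (1.0 - lam) * C + lam * T

# Toy 3-asset baseline matrix, shrunk halfway toward maximum correlation
C = np.array([[ 1.0, 0.3, -0.2],
              [ 0.3, 1.0,  0.1],
              [-0.2, 0.1,  1.0]])
T = np.ones((3, 3))  # the equicorrelation matrix C_1
S = shrink_correlation_matrix(C, T, 0.5)
print(np.round(S, 3))  # still unit-diagonal, symmetric and positive semi-definite
```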
The stylized facts of financial crises described in the introduction lead to considering three (systemic) correlation stress testing scenarios.
This is the typical scenario for global equities during the onset of a major financial crisis, as illustrated in figure 1, taken from the article Covid Crash Risk — What's in a Number? from the Man Institute.
Figure 1. Rolling correlations of the S&P500 Index v.s. the EuroStoxx50 Index. Source: Man Institute
This scenario can be simulated through the shrinkage of a baseline correlation matrix toward the equicorrelation matrix \(C_1 = \begin{pmatrix} 1 & 1 & 1 & \dots & 1 \\ 1 & 1 & 1 & \dots & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & 1 &\dots & 1 \end{pmatrix}\), representing a state of maximum correlation.
This scenario frequently occurs for global equities during the recovery from a major financial crisis, as also illustrated in figure 1 for the Black Monday, US Downgrade and Corona Crash crises.
This scenario is also documented to occur for some asset classes, like bonds11, or in international currency markets12 during periods of market turbulence.
For example, figure 2 taken from Philippe Jorion's book12 illustrates the turbulences around the devaluation of the British Pound in September 1992, which would have been devastating for a market-neutral trader long British Pound - short Deutsche Mark.
Figure 2. Rolling correlations of the British Pound v.s. the Deutsche Mark. Source: Philippe Jorion's book
This scenario can be simulated through the shrinkage of a baseline correlation matrix toward the equicorrelation matrix \(C_0 = \begin{pmatrix} 1 & 0 & 0 & \dots & 0 \\ 0 & 1 & 0 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 &\dots & 1 \end{pmatrix}\), representing a state of maximum decorrelation.
This scenario is the mirror of the first one, and is known to occur for risk factors of fixed income portfolios13 during market distress with aggressive central bank intervention.
Ideally, this scenario could have been simulated through the shrinkage of a baseline correlation matrix toward the matrix \(C_{-1} = \begin{pmatrix} 1 & -1 & -1 & \dots & -1 \\ -1 & 1 & -1 & \dots & -1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -1 & -1 & -1 &\dots & 1 \end{pmatrix}\), representing a state of maximum anticorrelation.
Unfortunately, as demonstrated in the mathematical preliminaries of this post, the matrix $C_{-1}$ is not an equicorrelation matrix because $-1 < -\frac{1}{n-1}$.
So, this scenario can only be partially simulated through the shrinkage of a baseline correlation matrix toward the equicorrelation matrix \(C_{-\frac{1}{n-1}} = \begin{pmatrix} 1 & -\frac{1}{n-1} & -\frac{1}{n-1} & \dots & -\frac{1}{n-1} \\ -\frac{1}{n-1} & 1 & -\frac{1}{n-1} & \dots & -\frac{1}{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -\frac{1}{n-1} & -\frac{1}{n-1} & -\frac{1}{n-1} &\dots & 1 \end{pmatrix}\), representing a state of maximum achievable anticorrelation.
For any correlation stress testing scenario simulated through shrinkage, the shrinkage factor $\lambda$ can be interpreted as a way to incorporate the probability of occurrence of the scenario:
For $\lambda = 0$, $S_{\lambda} = C$, so that the shrunk correlation matrix is equal to the baseline correlation matrix. This can be interpreted as a 0% probability of realisation of the stress scenario.
For $\lambda = 1$, $S_{\lambda} = T$, so that the shrunk correlation matrix is equal to the target equicorrelation matrix. This can be interpreted as a 100% probability of realisation of the stress scenario.
For $\lambda \in ]0,1[$, the shrunk correlation matrix $S_{\lambda}$ departs further from the baseline correlation matrix $C$ and approaches the target correlation matrix $T$ as $\lambda \to 1$. This can be interpreted as a $\lambda$ % probability of realisation of the stress scenario.
So, for example, if the probability of occurrence of a given scenario is evaluated at 50%, the value of the shrinkage factor $\lambda$ could be set to 50%.
The interpretation of the shrinkage factor $\lambda$ as a percentage is described in one of the main references of this post6, but is, by definition, rather subjective. A more data-driven procedure to determine this factor is proposed in the next paragraph.
Suppose that we are managing a portfolio of broad ETFs, in the same universe14 as the J.P. Morgan Efficiente 5 Index.
Suppose also that on 20 August 2021, we would like to assess the impact on our universe of ETFs of a correlations breakdown similar to the one which occurred during the Corona Crash.
Well, because we read this blog post carefully, we would like to compute a stressed correlation matrix $C_{shrunk}$ by shrinking the current asset correlation matrix $C_{current}$ toward the equicorrelation matrix $C_1$, that is
\[C_{shrunk} = (1-\lambda) C_{current} + \lambda C_1\]
with $\lambda \in ]0,1[$ to be determined.
Unfortunately, we have no view on the probability of occurrence of a market crash, so that we are unable to choose a subjective shrinkage factor $\lambda$.
One data-driven possibility is then to compute $\lambda$ so that the (Frobenius) distance15 between the current asset correlation matrix and the stressed correlation matrix is the same as the distance between the pre-Corona Crash asset correlation matrix and the Corona Crash asset correlation matrix.
Formally, we would like
\[d_F(C_{current}, C_{shrunk}) = d_F(C_{pre-Corona \, Crash}, C_{Corona \, Crash})\]
So, given that
\[\begin{aligned} d_F(C_{current}, C_{shrunk}) &= \left\Vert C_{current} - C_{shrunk} \right\Vert_F \\ &= \left\Vert C_{current} - ((1-\lambda) C_{current} + \lambda C_1) \right\Vert_F \\ &= \lambda \left\Vert C_{current} - C_1 \right\Vert_F \\ &= \lambda d_F(C_{current}, C_1)\end{aligned}\]
It implies
\[\lambda = \frac{d_F(C_{pre-Corona \, Crash}, C_{Corona \, Crash})}{d_F(C_{current}, C_1)}\]
Let's first compute $d_F(C_{pre-Corona \, Crash}, C_{Corona \, Crash})$.
For this, we need to define the two exact time periods, pre-Corona Crash and Corona Crash:
For the latter, I chose the period 19 February 2020 - 23 March 2020, which corresponds to the peak of the financial crisis16
For the former, I chose the period 14 January 2020 - 18 February 2020, which corresponds to the period preceding the peak of the financial crisis with the same number of trading days
This leads to $C_{pre-Corona \, Crash}$
Figure 3. Daily returns correlation matrix between 12 ETFs over the period 14 January 2020 - 18 February 2020
and to $C_{Corona \, Crash}$17
Figure 4. Daily returns correlation matrix between 12 ETFs over the period 19 February 2020 - 23 March 2020
so that $d_F(C_{pre-Corona \, Crash}, C_{Corona \, Crash}) \approx 6.03$.
Let's then compute $d_F(C_{current}, C_1)$, for which I chose to define the current period as the period 20 July 2021 - 20 August 202118
Figure 5. Daily returns correlation matrix between 12 ETFs over the period 20 July 2021 - 20 August 2021
so that $d_F(C_{current}, C_1) \approx 9.84$.
Let's finally compute the shrinkage factor $\lambda$, that is, $\lambda \approx \frac{6.03}{9.84} \approx 0.61$.
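Since the data-driven shrinkage factor reduces to a ratio of two Frobenius distances, it can be computed in a few lines; in the sketch below (Python with NumPy), the distances are hard-coded from the figures quoted above, because the underlying 12 x 12 matrices are not reproduced in this post:

```python
import numpy as np

def frobenius_distance(A, B):
    """d_F(A, B) = ||A - B||_F; usable once the actual matrices are at hand."""
    return np.linalg.norm(A - B, ord="fro")

# Distances quoted above, estimated from ETF daily returns over the stated windows
d_crisis = 6.03   # d_F(C_pre-Corona Crash, C_Corona Crash)
d_to_one = 9.84   # d_F(C_current, C_1)

lam = d_crisis / d_to_one
print(f"data-driven shrinkage factor lambda: {lam:.2f}")  # ~0.61

# With C_current at hand, the stressed matrix would then be:
# C_shrunk = (1 - lam) * C_current + lam * np.ones(C_current.shape)
```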
Thanks to the computation of the shrinkage factor, it is now possible to compute the shrunk correlation matrix, for example with the Portfolio Optimizer endpoint /assets/correlation/matrix/shrinkage.
This leads to $C_{shrunk}$
Figure 6. Stressed (shrunk) correlation matrix between 12 ETFs, to prepare for the next crisis
We can notice19 that the increase in correlations between the matrices $C_{current}$ and $C_{shrunk}$ is comparable to the increase in correlations observed during the Corona Crash between the matrices $C_{pre-Corona \, Crash}$ and $C_{Corona \, Crash}$.
We can also notice one possible issue, which is that correlations increase between all the assets, while during the Corona Crash a couple of assets, like gold and long-term treasuries, actually saw their correlations decrease.
Nevertheless, given that each crisis is usually of a different nature, such unexpected changes in correlations might ultimately be desirable!
What to do next with the stressed correlation matrix $C_{shrunk}$ is up to one's imagination or requirements.
One possibility could be to compute an associated optimal portfolio allocation20 and compare it to the current portfolio allocation. This would help determine the robustness of the current portfolio to extreme correlation changes.
1. See Sebastien Page & Robert A. Panariello (2018), When Diversification Fails, Financial Analysts Journal, 74:3, 19-32, and references therein.
2. See Rachel Campbell, Kees Koedijk and Paul Kofman, Increased Correlation in Bear Markets, Financial Analysts Journal, Vol. 58, No. 1, pp. 87-94.
3. See Paul H. Kupiec, Stress Testing in a Value at Risk Framework, The Journal of Derivatives, Aug 1998, 6 (1) 7-24.
4. Usually, non positive semi-definite.
5. See Steiner, Andreas, Manipulating Valid Correlation Matrices.
6. See Kawee Numpacharoen, Weighted Average Correlation Matrices Method for Correlation Stress Testing and Sensitivity Analysis, The Journal of Derivatives, Nov 2013, 21 (2) 67-74.
7. This can be verified by using the definition of an eigenvalue. For example, $C_{\rho} u_1 = (1 + (n-1) \rho) u_1$, so that by definition $u_1$ is an eigenvector associated to the eigenvalue $1 + (n-1) \rho$.
8. The eigenvectors $u_2, \dots, u_n$ are linearly independent, which can easily be verified using the definition of linear independence of a vector set. So, the dimension of the eigenspace associated to the eigenvalue $1 - \rho$ is $n-1$.
9. See Olivier Ledoit and Michael Wolf, Honey, I Shrunk the Sample Covariance Matrix, The Journal of Portfolio Management, Summer 2004, 30 (4) 110-119.
10. Also called shrinkage intensity or shrinkage constant.
11. See Mark Kritzman, Kenneth Lowry and Anne-Sophie Van Royen, Risk, Regimes, and Overconfidence, The Journal of Derivatives, Spring 2001, 8 (3) 32-42.
12. See Philippe Jorion, Value at Risk: The New Benchmark for Managing Financial Risk, McGraw-Hill Professional, 3rd Revised edition.
13. See Bhansali, Vineer and Wise, Mark B., Forecasting Portfolio Risk in Normal and Stressed Markets.
14. Excluding the cash-equivalent ETF SGOV, because it would make no sense to increase the correlation of cash to any other asset.
15. See Frobenius norm.
16. C.f. Wikipedia: "In the 33 days between 19 February and 23 March, [...] the MSCI World Index declined by 34 per cent."
17. On which an extreme increase in most correlations is visible. For example, the correlation between the two ETFs LQD and HYG spikes from -0.59 to 0.41!
18. This period is the most recent period at the time of preparation of this blog post with the same number of trading days as the peak of the Corona Crash time period.
19. E.g., the correlation between the two ETFs SPY and LQD went from -0.56 to 0.24 during the Corona Crash, and increases from -0.16 to 0.55 with the computed stressed correlation matrix.
20. Possibly also revising future asset returns and/or asset volatilities with a similar methodology.
| CommonCrawl |
The Prime state and its quantum relatives
D. García-Martín1,2,3, E. Ribas2, S. Carrazza4,5, J.I. Latorre2,5,6, and G. Sierra3
1Barcelona Supercomputing Center (BSC), Barcelona, Spain.
2Dept. Física Quàntica i Astrofísica, Universitat de Barcelona, Barcelona, Spain.
3Instituto de Física Teórica, UAM-CSIC, Madrid, Spain.
4TIF Lab, Dipartimento di Fisica, Università degli Studi di Milano and INFN Milan, Milan, Italy.
5Quantum Research Centre, Technology Innovation Institute, Abu Dhabi, UAE.
6Center for Quantum Technologies, National University of Singapore, Singapore.
Published: 2020-12-11, volume 4, page 371
Eprint: arXiv:2005.02422v3
Doi: https://doi.org/10.22331/q-2020-12-11-371
Citation: Quantum 4, 371 (2020).
The Prime state of $n$ qubits, ${|\mathbb{P}_n{\rangle}}$, is defined as the uniform superposition of all the computational-basis states corresponding to prime numbers smaller than $2^n$. This state encodes, quantum mechanically, arithmetic properties of the primes. We first show that the Quantum Fourier Transform of the Prime state provides a direct access to Chebyshev-like biases in the distribution of prime numbers. We next study the entanglement entropy of ${|\mathbb{P}_n{\rangle}}$ up to $n=30$ qubits, and find a relation between its scaling and the Shannon entropy of the density of square-free integers. This relation also holds when the Prime state is constructed using a qudit basis, showing that this property is intrinsic to the distribution of primes. The same feature is found when considering states built from the superposition of primes in arithmetic progressions. Finally, we explore the properties of other number-theoretical quantum states, such as those defined from odd composite numbers, square-free integers and starry primes. For this study, we have developed an open-source library that diagonalizes matrices using floats of arbitrary precision.
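As a classical illustration of the state being defined (a sketch for intuition only, unrelated to the authors' accompanying open-source library), the amplitude vector of ${|\mathbb{P}_n\rangle}$ can be constructed as follows in Python, assuming NumPy and SymPy are available:

```python
import numpy as np
from sympy import primerange

def prime_state(n):
    """Amplitude vector of |P_n>: a uniform superposition over all
    computational-basis states labeled by primes smaller than 2**n."""
    primes = list(primerange(2, 2 ** n))
    amplitudes = np.zeros(2 ** n)
    amplitudes[primes] = 1.0 / np.sqrt(len(primes))
    return amplitudes

psi = prime_state(5)        # primes below 32: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
print(np.flatnonzero(psi))  # the prime-labeled basis states
print(np.vdot(psi, psi))    # normalization check: ~1.0
```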
► BibTeX data
@article{GarciaMartin2020primestateits,
  doi = {10.22331/q-2020-12-11-371},
  url = {https://doi.org/10.22331/q-2020-12-11-371},
  title = {The {P}rime state and its quantum relatives},
  author = {Garc{\'{i}}a-Mart{\'{i}}n, D. and Ribas, E. and Carrazza, S. and Latorre, J.I. and Sierra, G.},
  journal = {{Quantum}},
  issn = {2521-327X},
  publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}},
  volume = {4},
  pages = {371},
  month = dec,
  year = {2020}
}
This Paper is published in Quantum under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions. | CommonCrawl |
Importance of recursion in computability theory
It is said that computability theory is also called recursion theory. Why is it called that? Why does recursion have this much importance?
computability terminology history
Kaveh
In the 1920's and 1930's people were trying to figure out what it means to "effectively compute a function" (remember, there were no general purpose computing machines around, and computing was something done by people).
Several definitions of "computable" were proposed, of which three are best known:
The $\lambda$-calculus
Recursive functions
Turing machines
These turned out to define the same class of number-theoretic functions. Because recursive functions are older than Turing machines, and the even older $\lambda$-calculus was not immediately accepted as an adequate notion of computability, the adjective "recursive" was used widely (recursive functions, recursive sets, recursively enumerable sets, etc.).
Later on, there was an effort, popularized by Robert Soare, to change "recursive" to "computable". Thus we nowadays speak of computable functions and computably enumerable sets. But many older textbooks, and many people, still prefer the "recursive" terminology.
So much for the history. We can also ask whether recursion is important for computation from a purely mathematical point of view. The answer is a very definite "yes!". Recursion lies at the basis of general-purpose programming languages (even while loops are just a form of recursion because while p do c is the same as if p then (c; while p do c)), and many fundamental data structures, such as lists and trees, are recursive. Recursion is simply unavoidable in computer science, and in computability theory specifically.
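As a small illustrative sketch (in Python, purely for demonstration), the while-loop-as-recursion identity can be written out directly:

```python
def while_loop(p, c, state):
    """while p do c, written as recursion: if p then (c; while p do c)."""
    if p(state):
        return while_loop(p, c, c(state))
    return state

# Example: compute 5! with a "while" that is really recursion.
final = while_loop(
    p=lambda s: s["i"] <= 5,
    c=lambda s: {"i": s["i"] + 1, "acc": s["acc"] * s["i"]},
    state={"i": 1, "acc": 1},
)
print(final["acc"])  # 120
```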
Andrej Bauer
Computability theory is the study of computable functions :-).
Such functions are usually (in this community) defined as functions that can be expressed with a Turing machine.
More formally, a function $f: \mathbb{N} \to \mathbb{N}$ is computable if there exists a Turing machine $T$ such that if the input to $T$ is $x = 1^n$, the output of $T$ is $1^{f(n)}$.
As it turns out, if you define computable functions in this way (programs), they are equivalent to the set of functions one can obtain by using the rules described here. They are called recursive functions since one of the rules for obtaining such functions is a recursive definition (see the fifth rule on Wikipedia).
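To make that recursion rule concrete, here is a small illustrative Python sketch (a demonstration, not a formal definition) of the primitive recursion operator, building addition from the successor operation:

```python
def primitive_recursion(g, h):
    """The recursion rule: f(x, 0) = g(x); f(x, y+1) = h(x, y, f(x, y))."""
    def f(x, y):
        if y == 0:
            return g(x)
        return h(x, y - 1, f(x, y - 1))
    return f

# Addition built from this rule: add(x, 0) = x, add(x, y+1) = add(x, y) + 1.
add = primitive_recursion(g=lambda x: x, h=lambda x, y, r: r + 1)
print(add(3, 4))  # 7
```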
So the question of why recursion theory is important is equivalent to the question of why computable functions are important. And the answer to the latter should be quite obvious :)
Jernej
| CommonCrawl |
Reducing travel-related SARS-CoV-2 transmission with layered mitigation measures: symptom monitoring, quarantine, and testing
Michael A. Johansson1 (ORCID: orcid.org/0000-0002-5090-7722),
Hannah Wolford1,
Prabasaj Paul1,
Pamela S. Diaz1,
Tai-Ho Chen1,
Clive M. Brown1,
Martin S. Cetron1 &
Francisco Alvarado-Ramy1
BMC Medicine volume 19, Article number: 94 (2021)
Background
Balancing the control of SARS-CoV-2 transmission with the resumption of travel is a global priority. Current recommendations include mitigation measures before, during, and after travel. Pre- and post-travel strategies including symptom monitoring, antigen or nucleic acid amplification testing, and quarantine can be combined in multiple ways considering different trade-offs in feasibility, adherence, effectiveness, cost, and adverse consequences.
Methods
We used a mathematical model to analyze the expected effectiveness of symptom monitoring, testing, and quarantine under different estimates of the infectious period, test-positivity relative to time of infection, and test sensitivity to reduce the risk of transmission from infected travelers during and after travel.
Results
If infection occurs 0–7 days prior to travel, immediate isolation following symptom onset prior to or during travel reduces risk of transmission while traveling by 30–35%. Pre-departure testing can further reduce risk, with testing closer to the time of travel being optimal even if test sensitivity is lower than an earlier test. For example, testing on the day of departure can reduce risk while traveling by 44–72%. For transmission risk after travel with infection time up to 7 days prior to arrival at the destination, isolation based on symptom monitoring reduced introduction risk at the destination by 42–56%. A 14-day quarantine after arrival, without symptom monitoring or testing, can reduce post-travel risk by 96–100% on its own. However, a shorter quarantine of 7 days combined with symptom monitoring and a test on day 5–6 after arrival is also effective (97–100%) at reducing introduction risk and is less burdensome, which may improve adherence.
Conclusions
Quarantine is an effective measure to reduce SARS-CoV-2 transmission risk from travelers and can be enhanced by the addition of symptom monitoring and testing. Optimal test timing depends on the effectiveness of quarantine: with low adherence or no quarantine, optimal test timing is close to the time of arrival; with effective quarantine, testing a few days later optimizes sensitivity to detect those infected immediately before or while traveling. These measures can complement recommendations such as social distancing, using masks, and hand hygiene, to further reduce risk during and after travel.
Coronavirus disease 2019 (COVID-19) was first recognized in late December 2019. By March 2020, the virus causing COVID-19, SARS-CoV-2, had reached 6 continents and almost 70 countries. In response to the global COVID-19 outbreak, governments implemented a variety of mitigation measures including unprecedented social distancing measures, travel health alerts, and travel restrictions at national and sub-national levels [1, 2]. These measures, as well as concern about exposures related to travel, led to major and prolonged reductions in air travel worldwide [3,4,5,6,7]. Spatiotemporally asynchronous waves of COVID-19 have led to dynamic risk and mitigation measures globally with an accompanying interest in identifying risk management steps for travel that can reduce the risk of transmission and address concerns of travelers, travel industry regulators, and public health authorities [8,9,10].
Initial policies for managing translocation of the virus from one destination to another relied on closing borders or restricting entry of travelers from countries with higher incidence rates [11, 12]. Although these approaches may have reduced the importation of some cases and preserved resources, they came with enormous economic and individual impacts [13, 14].
For travelers, personal mitigation actions include wearing masks, social distancing at least 6 ft from others when possible, frequent hand washing or use of alcohol-based hand sanitizer, not touching their face, and avoiding anyone who is sick. Governments, airlines, airports, and other businesses serving travelers have implemented or recommended measures to reduce the risk of COVID-19 associated with air travel [15, 16]. These measures have included enhanced disinfection procedures, employee health assessments, passenger health attestations, screening for fever, illness response protocols, increased spacing between passengers on flights, and other steps to reduce risk of transmission in airports and on conveyances [10, 17]. Symptom-based screening at airports has proven ineffective because those measures miss mild, afebrile, asymptomatic, and pre-symptomatic SARS-CoV-2 infections [18,19,20,21]. Asymptomatic persons may account for 20% to 40% of SARS-CoV-2 infections and can transmit the virus to others [22,23,24,25,26,27], and epidemiological data indicate that infectiousness begins prior to symptom onset for those who do develop symptoms [28,29,30,31,32].
In many destinations, arriving travelers, most of whom are asymptomatic with no specific known exposures, were asked to self-quarantine and reduce contacts as much as possible after arrival. The World Health Organization (WHO) defines quarantine as "the restriction of activities and/or separation from others of the suspect persons... who are not ill, in such a manner as to prevent the possible spread of infection" and indicates that quarantine may be considered for travelers based on risk assessment and local conditions. For known SARS-CoV-2 exposures, WHO recommends quarantine of 14 days from their last exposure based on the limit of the estimated incubation period for SARS-CoV-2 [33]. A 14-day quarantine alone, when implemented immediately post-exposure and strictly adhered to, approaches 100% reduction in post-exposure transmission risk [34, 35]. However, travelers may have little incentive to consistently adhere to these measures at their destinations unless there is the ability to reliably communicate with them, support their needs, and enforce these measures. Monitoring and enforcing adherence to quarantine measures requires tremendous effort and resources by public health entities that may only be feasible and appropriate in certain contexts [36, 37].
Inclusion of SARS-CoV-2 testing as a component of a multi-layered approach to risk-reduction is currently being implemented in various settings. Some businesses and educational institutions are incorporating SARS-CoV-2 screening strategies into their concepts of operations, sometimes including mandatory testing of employees and voluntary testing for customers [38,39,40]. While there is no current international standard for testing travelers, many countries and jurisdictions are requiring arriving travelers to be tested either prior to their departure or after arrival to identify infected persons who are asymptomatic so they can be isolated [41]. Current guidance or requirements vary from country to country, and from state to state within the USA, including the timing of the test prior to or after travel, the type of test used (viral antigen, viral RNA), and the use of negative test results to alleviate additional public health measures, such as quarantine, at the destination [42, 43].
Currently available SARS-CoV-2 tests for detecting active infections include nucleic acid amplification tests (NAAT), such as reverse transcription polymerase chain reaction (RT-PCR) tests, rapid isothermal NAATs, and antigen-based tests. Time to deliver results is hours to days for RT-PCR and minutes to hours for antigen tests, which can also be processed without a specialized laboratory. Several antigen tests for SARS-CoV-2 are currently authorized in the United States for suspected SARS-CoV-2 infection [44, 45]. While rapid antigen tests have advantages over NAATs in terms of cost, simplicity, and turnaround time, they are less likely to detect a positive in individuals with low viral load, i.e., early or late in infection [46]. However, the limited available data on the efficacy of antigen testing in asymptomatic individuals suggests they may have high sensitivity for infectious individuals [46,47,48,49].
SARS-CoV-2 transmission risk related to travel can be viewed in two domains: transmission risk during travel (e.g., by infected travelers while at an airport or on aircraft) and after travel is completed (e.g., introduction or re-introduction of SARS-CoV-2 to the destination location). There is also overlap as transmission risk during travel can lead to new infections, which can increase post-travel risk. Data on strategies for reducing risk associated with travel are scant and there are many potential strategies (e.g., the optimal timing of pre-departure or post-arrival testing or the combination of testing and post-arrival quarantine) [39, 50,51,52]. Mathematical models have provided some insights to the potential impact of quarantine combined with testing [51, 53]. Here, we build upon those models, considering uncertainty in infectious periods and different testing options to assess a suite of possible combined pre- and post-travel strategies to reduce transmission risk from infected travelers.
Methods
First, we characterized component processes related to transmission risk during infection: the relative infectiousness over the course of infection, the proportion of infections resulting in symptoms, the timing of symptom onset for those who have symptoms, and the probability of testing positive over the course of infection.
We used three distinct models to characterize relative infectiousness over time, I(t), specifying each as a density function of daily infectiousness such that the total infectiousness is equal to one and the curves only differ in the temporal distribution of transmission risk (Fig. 1a). We used a Gamma density function to approximate a 10-day infectious period with peak infectiousness on day 5 based on observations from numerous studies [27, 54,55,56,57,58]. We also replicated a within-host infection model by Goyal et al. [59] by simulating infections in 10,000 individuals and recording the probability of being infectious at time steps of 0.1 days. We then fitted a density function to the set of times when these individuals were infectious. This indicated that most people are infectious from days 3 to 7 after the time they were infected, with tapering afterwards. The final model characterizes simulated infectious periods from Clifford et al. [51], based on estimated latent periods (the delay between infection and becoming infectious) [54] and infectious periods [60]. We simulated 10,000 individual-level paired latent and infectious periods then fit an empirical density function to the infectious time points. The Gamma model represents a simple assumption that does not capture individual level variability; however, both the Goyal et al. and Clifford et al. infectiousness models capture the population-level impacts of individual-level variability such that estimates based on these models may more completely reflect the potential impact across many individuals.
Models of relative infectiousness and the probability of testing positive relative to time since SARS-CoV-2 infection. a Infectiousness density functions for a Gamma density function approximating a 10-day infectious period with a peak on day 5 [54,55,56,57,58], a host infection model adopted from Goyal et al. [59], and simulated infectious and latent periods adopted from Clifford et al. [51]. b Models of the probability of a positive test for SARS-CoV-2 relative to time since infection: a distribution estimating positivity by RT-PCR adopted from Clifford et al. [51] and antigen ("Ag") testing curves for each infectiousness curve (a) scaled such that test positivity tracks infectiousness with a maximum sensitivity of 80% at peak infectiousness
We assumed that 70% of all infections result in symptomatic COVID-19 cases [25], σ0, and provided several additional sensitivity estimates assuming that 50% of infections result in symptoms. For the incubation period, σ(t), we used a meta-estimate with a median of 5 days and a Log-Normal distribution based on a meta-analysis by McAloon et al. [61].
For diagnostic testing, ρ(t), we used two models: one directly estimating positivity by RT-PCR and one approximating an antigen detection assay (Fig. 1b). For the RT-PCR model, we used the model generated by Clifford et al. [51] based on data from Kucirka et al. [62]. To approximate an antigen detection assay, we assumed that the assay would have 80% sensitivity, s, for infectious individuals [46,47,48,49] and scaled the probability of testing, ρ, to match the time-course of each infectiousness curve with a peak at 80%:
$$ \rho(t) = s\,I(t)/\max(I). $$
To assess the impact of test sensitivity we also compared this to a 95% sensitivity version of the same model.
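As an illustration of this scaling, the sketch below (Python with NumPy/SciPy assumed; the Gamma parameters are illustrative choices that peak on day 5, not the authors' exact fit) constructs an antigen-type positivity curve from an infectiousness curve:

```python
import numpy as np
from scipy import stats

t = np.arange(0.0, 15.0, 0.1)                        # days since infection
infectiousness = stats.gamma.pdf(t, a=6, scale=1.0)  # density peaking near day 5

s = 0.80  # assumed sensitivity at peak infectiousness
test_positivity = s * infectiousness / infectiousness.max()  # rho(t) = s * I(t) / max(I)

peak_day = t[test_positivity.argmax()]
print(f"peak positivity {test_positivity.max():.2f} on day {peak_day:.1f}")
```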
We then constructed a model capturing these components to assess the impacts of testing, symptom monitoring, and quarantine (Table 1). Infections resulting in travel-related risk could occur before or during a trip and we use one of the infectiousness density functions described above, I(t), which defines relative infectiousness at time t relative to travel based on infection at time τ relative to travel (prior to travel is negative):
$$ I\left(t,\tau \right)=I\left(t-\tau \right). $$
Table 1 Model parameters
Symptom monitoring was assessed as a method to detect and isolate infected individuals and therefore prevent transmission after symptom onset. As described above, we assumed that a proportion of infected individuals develop symptoms (σ0) and develop symptoms at rate σ(t) as defined by the incubation period (described above). The onset of symptoms was assumed to lead to isolation until recovery, resulting in a residual in transmission risk over the transmission window:
$$ {r}_S\left(t,\tau \right)=1-{\sigma}_0\sigma \left(t-\tau \right). $$
Transmission at a time t can also be mitigated through quarantine. We estimated the impact of quarantine as a reduction in risk of a magnitude equal to the adherence $\alpha_Q$ (1 = 100%) during a quarantine of duration $T_Q$ starting at the time of arrival $t_0$, with residual transmission risk:
$$ {r}_Q(t)=1-{\alpha}_Q\ \mathrm{if}\ t\in \left[{t}_0,{t}_0+{T}_Q\right],\ \mathrm{and}\ 1\ \mathrm{otherwise}. $$
Transmission can also be mitigated through test-based detection followed by isolation. For the purposes of the model, we assumed that test results were immediately available and a positive test immediately led to isolation until recovery. Test positivity for each test (described above) was characterized ρ(t) and the corresponding residual in transmission associated with each test k at test time tk is:
$$ {r}_T\left(t,\tau, {t}_k\right)=1-\rho \left({t}_k-\tau \right)\ \mathrm{if}\ t\in \left[{t}_k,\infty \right],\mathrm{and}\ 1\ \mathrm{otherwise}. $$
For a set of tests, K, the residual risk is the product:
$$ {r}_T^K\left(t,\tau \right)={\prod}_{k=1}^K{r}_T\left(t,\tau, {t}_k\right) $$
Here, we assessed two transmission windows: days 0–1 for risk during travel to include potential risk in transit prior to and after airline travel and days 0–28 for risk after travel. The total transmission risk between times t1 and t2 for individuals infected at time τ is:
$$ {I}_0\left(\tau \right)={\int}_{t_1}^{t_2}I\left(t,\tau \right) dt. $$
The transmission risk prevented by protocols including symptom monitoring, quarantine, and testing is:
$$ {I}_{SQT}^K\left(\tau \right)={\int}_{t_1}^{t_2}I\left(t,\tau \right)\left(1-{r}_S\left(t,\tau \right){r}_Q(t){r}_T^K\left(t,\tau \right)\right) dt. $$
For exposure windows in which a unique time of exposure is unknown, we assumed that infection may have occurred at any time in that window with equal probability. We therefore define the risk of exposure ϵ(τ) as uniformly distributed over a window defined by the beginning and end of the exposure period, τ1 and τ2, respectively:
$$ \epsilon \left(\tau \right)=1/\left({\tau}_2-{\tau}_1\right)\ \mathrm{if}\ \tau \in \left[{\tau}_1,{\tau}_2\right],\ \mathrm{and}\ 0\ \mathrm{otherwise}. $$
For example, with a 1-day trip, infection between 7 days pre-departure and the time of departure can be modeled relative to the time of arrival with τ1 = − 8 and τ2 = − 1.
Total infectiousness is then:
$$ {I}_0={\int}_{\tau_1}^{\tau_2}{\int}_{t_1}^{t_2}I\left(t,\tau \right) dtd\tau . $$
The prevented transmission risk is:
$$ {I}_{SQT}^K={\int}_{\tau_1}^{\tau_2}{\int}_{t_1}^{t_2}\epsilon \left(\tau \right)I\left(t,\tau \right)\left(1-{r}_S\left(t,\tau \right){r}_Q(t){r}_T^K\left(t,\tau \right)\right) dtd\tau . $$
Finally, we calculate the proportional reduction in transmission risk as: \( {I}_{SQT}^K/{I}_0. \)
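The authors' analysis code is in R (see the repository referenced below). As an independent, simplified illustration only, the following Python sketch numerically evaluates the proportional reduction $I_{SQT}^K/I_0$ for a known exposure time ($\tau = 0$), combining symptom monitoring, a 7-day quarantine with full adherence, and a single test; all parameter choices here are illustrative stand-ins rather than the paper's fitted models:

```python
import numpy as np
from scipy import stats

dt = 0.01
t = np.arange(0.0, 28.0, dt)  # days since infection (tau = 0)

# Component models (illustrative parameter choices, not the paper's fits):
I = stats.gamma.pdf(t, a=6, scale=1.0)           # infectiousness I(t), peak near day 5
sigma0 = 0.70                                    # proportion ever symptomatic
sigma = stats.lognorm.cdf(t, s=0.47, scale=5.0)  # incubation CDF, median 5 days
rho = 0.80 * I / I.max()                         # antigen-type positivity rho(t)

def residual_risk(quarantine_days=7, alpha=1.0, test_day=5):
    r_S = 1.0 - sigma0 * sigma                              # symptom monitoring
    r_Q = np.where(t <= quarantine_days, 1.0 - alpha, 1.0)  # quarantine
    p_pos = np.interp(test_day, t, rho)                     # positivity at test time
    r_T = np.where(t >= test_day, 1.0 - p_pos, 1.0)         # isolation if positive
    return r_S * r_Q * r_T

reduction = 1.0 - np.sum(I * residual_risk()) / np.sum(I)
print(f"estimated transmission risk reduction: {reduction:.0%}")
```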
All analyses were conducted in R and the code is available at https://github.com/cdcepi/COVID-19-traveler-model.
Results
Reducing transmission risk after a specific known exposure
Before looking at exposure over a range of times, we first assessed the impact of symptom monitoring, quarantine, and testing when the time of infection was known (for example, a brief high-risk contact). Isolating infected individuals at the time of symptom onset, without testing or quarantine, resulted in a reduction in transmission risk of 36–52% (minimum to maximum) accounting for differences in infectiousness over time between models relative to the onset of symptoms and an assumption that 30% of infected individuals never develop symptoms. If the proportion of individuals who never have symptoms was higher, the effect of symptom monitoring decreased. For example, if 50% of individuals never had symptoms, the reduction from symptom monitoring decreased to 26–37%. Quarantine alone implemented immediately following exposure led to higher reductions in transmission risk, from 39 to 75% with 7 days to 90–100% with 14 days. Isolating individuals based on a single positive test result alone produced a 0–67% reduction in transmission, depending on the day of the test relative to the infectious period and the time-specific test sensitivity (Fig. 2). Testing earlier in infection was less effective at detecting infections; later testing means that while the test was more likely to be positive, the infectious period may begin prior to the test, leading to a smaller reduction in risk.
Reductions in total average SARS-CoV-2 transmission risk after infection at a known high-risk exposure time (day 0) without considering travel. Transmission risk reductions are stratified by method of risk reduction including symptom monitoring, quarantine (7 or 14 days), and testing (test on days 1–7). Symptom monitoring is assumed to be ongoing regardless of the test date when implemented and either symptom onset or a positive test result is assumed to result in immediate isolation until the individual is no longer infectious. The bars represent the median estimates and the error bars show the ranges (minima and maxima) across the different infectiousness curves and test positivity curves (when testing was included)
Combining symptom monitoring or quarantine with testing provided added benefit, leading to increased risk reduction, especially with a test at day 3–5 post-exposure with symptom monitoring (47–75% reduction with 30% never symptomatic or 39–73% with 50% never symptomatic) or a test at day 5–7 with a 7-day quarantine (76–95% reduction). A 7-day quarantine with symptom monitoring and a test at day 5–7 further increased the lower bound of likely risk reduction to 91–98% (with 30% never symptomatic, 86–97% with 50% never symptomatic). The effect of moderately different assumptions related to the proportion of infections that never result in symptoms had minimal impacts when symptom monitoring was combined with testing or quarantine, we therefore use the 30% value for this parameter in the following analyses.
Transmission risk during travel
To assess approaches for reducing risk of transmission while traveling, we assumed that exposure may have occurred at any time in the 7 days prior to departure and assessed reductions in transmission risk over a 1-day period following departure. Isolating individuals at the time of symptom onset prior to or during travel resulted in a 30–35% reduction in risk (Fig. 3a). Testing resulted in the greatest reduction of risk when the specimen was collected closest to the time of travel. Testing 3 days prior to travel resulted in a 10–29% reduction in transmission risk compared to a 44–72% reduction with testing on the day of travel. This was also true for testing combined with symptom monitoring, which had higher overall reductions.
Reductions in SARS-CoV-2 transmission during travel. a Reduction in transmission risk during a 1-day trip assuming a 7-day exposure window prior to travel, stratified by method of risk reduction. Individuals developing symptoms are assumed to be isolated and therefore do not travel. b Reductions in transmission risk during a 1-day trip assuming a 7-day exposure window prior to travel comparing the antigen assays with 80% and 95% sensitivity. Ranges indicate uncertainty from the different infectiousness models
We assessed the impact of test sensitivity relative to timing by comparing the antigen-type test model to the same model with higher sensitivity. With the same time-specific pattern but different sensitivity (80% vs. 95%, Fig. 3b), the higher sensitivity test gives a higher reduction in transmission risk if used at the same time. However, the importance of sensitivity is intertwined with timing. The lower sensitivity test was as effective or more effective than a higher sensitivity test if it was performed closer to the time of travel. For example, the test with 80% sensitivity performed 1 day prior to departure was 47–58% effective at reducing transmission risk during travel, while the test with 95% sensitivity performed 3 days prior to departure was 18–35% effective.
Transmission risk after travel
We then considered measures to reduce the risk of SARS-CoV-2 introduction to the destination location from travelers, i.e., transmission risk after traveling (Fig. 4). Assuming infection occurs at an unknown time within a 7-day exposure period prior to arrival (i.e., including possible infection while traveling), a single test on its own was most effective when performed 1 or 2 days post-arrival (29–53% and 29–51% reduction in transmission risk, respectively). This reduction in introduction risk was higher than reductions generated by testing prior to travel; a test 1 day prior to arrival provided a 17–35% reduction in risk and a test 3 days prior to arrival provided a 5–13% reduction (not shown). Tests prior to travel do not detect travelers infected while traveling and were less likely to detect travelers infected close to the time of travel. These travelers are those who are most likely to experience their entire infectious period in the destination location, and therefore, pose the greatest introduction risk.
Reductions in SARS-CoV-2 transmission risk from infected travelers post-arrival. Reduction in transmission risk after arrival assuming a 7-day exposure window prior to arrival, stratified by day of test and symptom monitoring, with and without a 7-day quarantine. Symptom monitoring is assumed to be ongoing before, during, and after travel and either symptom onset or a positive test result is assumed to result in immediate isolation until the individual is no longer infectious
Although a pre-travel test was less effective on its own than a post-travel test, the combination of pre-travel and post-travel tests provided additional risk reduction. A pre-travel test was most effective at reducing transmission risk after travel when performed close to the time of travel (as described above for risk during travel). In the absence of post-arrival quarantine, a second test post-travel was optimal 2–3 days after arrival. The pre-travel test was likely to detect individuals who were infectious upon arrival and the later test was likely to detect those who became infectious after arrival. Combined, these tests can reduce introduction risk by 37–75%. A similar effect can be attained by testing immediately upon arrival and again 2–4 days post-arrival, which reduced introduction risk by 47–82%.
Symptom monitoring and isolation before, during, and after travel, with no other measures in place, reduced introduction risk by 42–56% and was more effective when combined with testing (Fig. 4). For example, a test 1 day post-arrival combined with symptom monitoring before, during, and after travel reduced introduction risk by 57–75%. However, quarantine for 7 days or more on its own was more effective than testing combined with symptom monitoring, regardless of when the test occurred. A 14-day quarantine reduced transmission risk by 96–100%, a 10-day quarantine by 84–100%, and a 7-day quarantine by 64–95% (Fig. 5). Testing and symptom monitoring further enhanced the effectiveness of quarantine. A single test conducted 5–6 days after arrival with symptom monitoring and a 7-day quarantine reduced introduction risk by 97–100% (Fig. 4). The day 5–6 window is optimal because it balances the reduced risk while in quarantine, with higher sensitivity for detecting individuals who may remain infectious at the end of the quarantine period.
Reductions in transmission risk post-arrival assuming a 7-day exposure window prior to arrival and symptom monitoring, stratified by quarantine length, quarantine adherence, and day of test
A 7-day quarantine in conjunction with symptom monitoring and testing had similar effectiveness to a 10-day or 14-day quarantine on its own. Comparing quarantine with imperfect adherence (50%), we found that with symptom monitoring and no test, a 7-day quarantine (70–72%) was likely to be almost as effective as a 14-day quarantine (71–77%; Fig. 5). Combined with a test within 0–3 days after arrival and symptom monitoring, a 7-day quarantine with 50% adherence was estimated to be more effective (77–86%) than a 14-day quarantine with 50% adherence and no test (71–77%) and as effective as a 14-day quarantine with a test (77–88%).
Discussion
Control of SARS-CoV-2 is contingent upon multiple layered mitigation measures. Reducing the risk of transmission associated with travel is critical to reducing the impact related to importations on local health and healthcare systems. This is important when transmission at the destination is low and an introduction could spur additional outbreaks, but also when transmission is already high and health systems may be strained. Reducing risks associated with air travel could pave the way to air industry recovery, as well as offer relief to national economies and reduce social distress [63]. Efforts to control transmission before and after travel rely on individual mitigation measures such as mask use and social distancing before, during, and after travel, but additional control measures, such as testing and quarantine, have also been used by some countries. The fifth meeting of the International Health Regulations Emergency Committee convened by WHO regarding the COVID-19 pandemic stated that for health measures related to international travel, countries should regularly reappraise measures applied to international travel and ensure those measures (including targeted use of diagnostics and quarantine) are risk- and evidence-based [64].
Here, we used a mathematical model to assess the relative impact of three mitigation measures to reduce transmission risk from infected travelers: symptom monitoring, testing, and quarantine. We assessed combinations of these mitigation measures with different estimates of the infectious period, different estimates of test-positivity relative to time of infection, and different assumptions about infection timing and test sensitivity. We frame these results as proportional reductions in transmission risk from infected travelers during or after travel to consider the importance of optimizing mitigation measures to address peak infectiousness (Fig. 1a). On its own, quarantine was the most effective of the three strategies, with a 14-day quarantine almost eliminating risk and a 7-day quarantine being more effective than any single other measure. However, these measures can be more effective when used together. For example, symptom monitoring is relatively easy and further increases the effect of a 7-day quarantine to 88–98% with a 7-day exposure window prior to arrival (Fig. 4).
Testing also provides added benefit but is contingent on the timing and quality of the test. Testing prior to travel reduces transmission risk both while traveling and after travel if testing is done close to the time of travel. Testing closer to the time of travel is more likely to detect individuals who are infectious while traveling and immediately afterwards, but can still miss infected travelers who are in their latent period and do not yet have enough viral shedding to be detected. While testing immediately prior to travel can substantially reduce risk, it poses additional logistical challenges: results must be reliably available prior to travel, and protocols would be needed to effectively isolate individuals who test positive and their close contacts. On the other hand, testing more than 3 days before travel provides little benefit beyond what symptom monitoring can provide, because the individuals who contribute most to transmission risk during and after travel are those still in their latent period or not yet infected at that time, who would test negative. Because of the value of testing close to the time of travel, a lower-sensitivity test with faster results can be more effective than a more sensitive but slower one. This finding is consistent with modeling work by Larremore et al. showing that the limitations of reduced sensitivity can be overcome by testing that identifies infections in time to reduce transmission, in this case closer to the time of travel [65]. This conclusion draws attention to the importance of turnaround times that allow for corresponding decision-making, not just the sensitivity of the test. While test- and setting-specific turnaround times are critical to planning, they are highly varied and were not included here; these results should be considered in that context. For example, a short turnaround time is very important for pre-travel testing but less critical for post-travel testing at days 3 or 4 when individuals are expected to remain in quarantine for 7 days or more.
In the absence of quarantine, or with low adherence to quarantine, post-arrival testing is likely most effective 1–2 days after arrival, balancing early detection against adequate sensitivity for travelers still in their latent period while traveling. With high-adherence quarantine, or potential exposure closer to the time of travel (for example, while traveling), optimal post-arrival test timing is later, 5–6 days after travel. This corresponds to improved sensitivity for detecting individuals who were infected close to the time of arrival and are most likely to be infectious at the end of the quarantine. With exposure up to 7 days prior to travel, we found that optimal test timing was on days 0–2 after arrival with symptom monitoring and no quarantine, days 5–6 with symptom monitoring and quarantine with 100% adherence, and days 0–3 with symptom monitoring and quarantine with 50% adherence. When the exposure time is known more precisely, for example a high-risk contact while traveling, the optimal test time is on days 4–5 after that exposure with or without symptom monitoring, to optimize test sensitivity, and on days 6–7 when combined with quarantine (Fig. 2). Beyond days 7–8 post-infection, the sensitivity for detecting infections in the models considered here begins to decrease (Fig. 1b). Even with quarantine measures in place, tests on or after arrival may have additional roles if quarantine adherence is imperfect or to assist in contact tracing when other travelers are potentially infected. Waiting to test several days after arrival improves the chance of detecting an individual who will be infectious at the end of the quarantine but does not optimize early detection of other infections among travelers.
These results are generally consistent with other analyses of risk associated with travel. Early in the pandemic, it was apparent that symptom screening at airports or other transit hubs could not stop the spread of SARS-CoV-2 [18]. Using an individual-level simulation framework, Clifford et al. found that more than half of infected travelers would not be detected by exit and entry screening based on temperature measurement, observation for illness, and health declaration [51]. Detecting enough infected travelers to avoid uncontrolled importations would require a set of assumptions that are inconsistent with COVID-19 epidemiology: negligible asymptomatic transmission, very high airport symptom-screening sensitivity, and a short incubation period. Clifford et al. also assessed combined measures and estimated that an 8-day quarantine period with an RT-PCR test on day 7 would be nearly as effective as a 14-day quarantine on its own. Other recent work highlights the effectiveness of shorter quarantine periods combined with testing for individuals with known exposures [53, 66, 67]. Across these studies, the specific days for quarantine or testing and the estimated effectiveness varied due to differences in assumptions about the time of exposure, the modeled test characteristics, and the parameters for the infectious period. Nonetheless, all indicate the value of shorter quarantine combined with symptom monitoring and testing, a finding that is helpful both in the travel setting and in other settings with exposure risk.
The model used here has some specific limitations. First, the infectious period of SARS-CoV-2 is not well defined. We therefore considered multiple models of the infectious period, generated by multiple approaches, to reflect uncertainty around this period; yet these models also have limitations, are not exhaustive, and more data are needed for more precise estimates. Moreover, each infectious-period model captures only the average infectious period, which for individual travelers could be substantially different. The most effective measures modeled here are close to 100% effective in the model; however, individual-level variation suggests that none of these approaches would truly be 100% effective. Even with a 14-day quarantine, some individuals will likely be infectious later, or develop symptoms only at the end of the period. Nonetheless, the average parameterization gives the expected average effectiveness across larger numbers of infected travelers, the scale at which policies may be most useful. Testing options are also highly varied; the test options considered here are neither exhaustive nor precisely characterized. Moreover, test turnaround time can vary. We did not model turnaround time; instead, we focused on when the test was performed, so that the result turnaround time can be considered in the context of whatever testing and laboratory resources are available. For example, a test during quarantine should be done sufficiently early that results are available before the end of quarantine, but that delay varies across settings. Our framework, however, can be applied with many other options, or with better-characterized distributions as these become available.
We also did not consider behavioral aspects of prevention, with the exception of adherence to quarantine. For simplicity, we assumed that quarantine was equivalent to individual-level isolation and that symptomatic individuals or those testing positive are isolated immediately. However, individuals may quarantine with others; in that case, symptom onset or a positive test in one individual can indicate exposure for the others during quarantine, and without symptom onset or a positive test, there may be silent secondary transmission resulting in additional post-quarantine risk. Moreover, travelers may have little incentive to consistently adhere to these measures, and notification or enforcement would require substantial effort and resources. Some travelers could attribute symptoms to other etiologies, such as an exacerbation of a pre-existing condition or travel fatigue. Additionally, if negative test results are available prior to the recommended end of quarantine, individuals may be less likely to complete the quarantine, perceiving the test as sufficient evidence of not being infected. While adherence to all measures may be lower in practice than considered here, the relative effectiveness of measures still provides a useful guide. Moreover, the effectiveness of shorter quarantines, especially when combined with symptom monitoring and testing, may be enhanced because a shorter quarantine is less onerous and may drive better adherence [68].
Finally, we focused on comparing the effectiveness of intervention measures for infected travelers, not the reduction in absolute risk, as that varies by location and time. This is therefore not an analysis of the conditions in which these measures should be implemented, nor of the specific logistical and policy challenges that arise in different situations. Quarantine of all travelers can be an effective prevention measure but could also restrict the movement of many travelers who are not infected and, therefore, pose no risk. When the absolute risk of infection in travelers is low and the number of travelers is high, quarantine of travelers without symptoms would predominantly result in the quarantine of uninfected people. Testing is helpful in part because it can reduce the length of quarantine needed for optimal prevention. However, testing can also produce false negatives (missed cases released from quarantine while still infectious) and false positives (individuals who test positive but are not actually infected). The impact of false positives can be partly mitigated by confirmatory testing. It is also possible that some recently recovered individuals will test positive but no longer be infectious (e.g., by RT-PCR, which can detect SARS-CoV-2 RNA after the infectious period has ended); additional testing or assessment of cycle threshold values may help reduce the impact on these individuals [69]. It is also important that authorities carefully consider prioritization of testing resources in the context of other public health needs in resource-limited situations.
A multi-layered approach is needed to control SARS-CoV-2 transmission associated with travel. Infection prevention measures (e.g., social distancing, mask use, hand hygiene, and enhanced cleaning and disinfection) are expected to reduce risk before, during, and after travel. Symptom monitoring, quarantine, and testing can all complement those measures to further reduce risk. Pre-departure SARS-CoV-2 testing can supplement symptom monitoring to identify potentially infectious travelers who do not have symptoms and therefore offers an opportunity to further reduce transmission risk during and after travel. Post-arrival SARS-CoV-2 testing can identify asymptomatic or pre-symptomatic infected travelers, including some who may have tested negative prior to departure, if prior testing took place. Post-arrival testing is likely most effective at days 1–2 without quarantine, but more effective later, at days 5–6, when combined with an effective quarantine of 7 days or longer. A 14-day quarantine is effective on its own, but combined with testing and symptom monitoring (with isolation of those who develop symptoms or test positive), quarantine can be shortened and remain effective. These findings can inform policies for travel until safe and effective vaccines become widely available.
All code required to run the analysis is available at https://github.com/cdcepi/COVID-19-traveler-model.
Devi S. Travel restrictions hampering COVID-19 response. Lancet. 2020;395(10233):1331–2. https://doi.org/10.1016/S0140-6736(20)30967-3.
Studdert DM, Hall MA, Mello MM. Partitioning the curve — interstate travel restrictions during the Covid-19 pandemic. N Engl J Med. 2020;383(13):e83. https://doi.org/10.1056/NEJMp2024274.
TSA checkpoint travel numbers for 2020 and 2019 | Transportation Security Administration. https://www.tsa.gov/coronavirus/passenger-throughput. Accessed 5 Nov 2020.
Air travel in the time of COVID-19. Lancet Infect Dis. 2020;20:993.
Maneenop S, Kotcharin S. The impacts of COVID-19 on the global airline industry: an event study approach. J Air Transp Manag. 2020;89:101920. https://doi.org/10.1016/j.jairtraman.2020.101920.
Sun X, Wandelt S, Zhang A. How did COVID-19 impact air transportation? A first peek through the lens of complex networks. J Air Transp Manag. 2020;89:101928.
Forsyth P, Guiomard C, Niemeier H-M. Covid-19, the collapse in passenger demand and airport charges. J Air Transp Manag. 2020;89:101932. https://doi.org/10.1016/j.jairtraman.2020.101932.
Airlines Seek Gate Checks for Virus to Revive Foreign Travel. Bloomberg.com. 2020. https://www.bloomberg.com/news/articles/2020-09-09/airlines-seek-u-s-airport-virus-tests-to-revive-foreign-travel. Accessed 5 Nov 2020.
Wilson ME, Chen LH. Re-starting travel in the era of COVID-19: preparing anew. J Travel Med. 2020;27(5). https://doi.org/10.1093/jtm/taaa108.
US Department of Transportation. Runway to Recovery: The United States Framework for Airlines and Airports to Mitigate the Public Health Risks of Coronavirus, Guidance Jointly Issued by the U.S. Departments of Transportation, Homeland Security, and Health and Human Services. https://www.transportation.gov/sites/dot.gov/files/2020-07/Runway_to_Recovery_07022020.pdf. Accessed 5 Nov 2020.
Anderson SC, Mulberry N, Edwards AM, Stockdale JE, Iyaniwura SA, Falcao RC, et al. How much leeway is there to relax COVID-19 control measures? medRxiv. 2020. https://doi.org/10.1101/2020.06.12.20129833.
Proclamation on Suspension of Entry as Immigrants and Nonimmigrants of Persons who Pose a Risk of Transmitting 2019 Novel Coronavirus. The White House. https://www.whitehouse.gov/presidential-actions/proclamation-suspension-entry-immigrants-nonimmigrants-persons-pose-risk-transmitting-2019-novel-coronavirus/. Accessed 5 Nov 2020.
Linka K, Peirlinck M, Costabal FS, Kuhl E. Outbreak dynamics of COVID-19 in Europe and the effect of travel restrictions. Comput Methods Biomech Biomed Engin. 2020;23(11):710–7. https://doi.org/10.1080/10255842.2020.1759560.
Chinazzi M, Davis JT, Ajelli M, Gioannini C, Litvinova M, Merler S, et al. The effect of travel restrictions on the spread of the 2019 novel coronavirus (COVID-19) outbreak. Science. 2020;368(6489):395–400. https://doi.org/10.1126/science.aba9757.
Pombal R, Hosegood I, Powell D. Risk of COVID-19 during air travel. JAMA. 2020;324(17):1798. https://doi.org/10.1001/jama.2020.19108.
International Civil Aviation Organization Council Aviation Recovery Task Force. Take-off: guidance for air travel through the COVID-19 public health crisis. 2020. https://www.icao.int/covid/cart/Documents/CART_Report_Take-Off_Document.pdf. Accessed 5 Nov 2020.
Mouchtouri VA, Bogogiannidou Z, Dirksen-Fischer M, Tsiodras S, Hadjichristodoulou C. Detection of imported COVID-19 cases worldwide: early assessment of airport entry screening, 24 January until 17 February 2020. Trop Med Health. 2020;48(1):79. https://doi.org/10.1186/s41182-020-00260-5.
Gostic K, Gomez AC, Mummah RO, Kucharski AJ, Lloyd-Smith JO. Estimated effectiveness of symptom and risk screening to prevent the spread of COVID-19. eLife. 2020;9:e55570. https://doi.org/10.7554/eLife.55570.
Considerations relating to passenger locator data, entry and exit screening and health declarations in the context of COVID-19 in the EU/EEA and the UK. European Centre for Disease Prevention and Control. 2020. https://www.ecdc.europa.eu/en/publications-data/passenger-locator-data-entry-exit-screening-health-declaration. Accessed 5 Nov 2020.
Vilke GM, Brennan JJ, Cronin AO, Castillo EM. Clinical features of patients with COVID-19: is temperature screening useful? J Emerg Med. 2020;59(6):952–6. https://doi.org/10.1016/j.jemermed.2020.09.048.
Dollard P. Risk Assessment and Management of COVID-19 Among Travelers Arriving at Designated U.S. Airports, January 17–September 13, 2020. MMWR Morb Mortal Wkly Rep. 2020;69. https://doi.org/10.15585/mmwr.mm6945a4.
Oran DP, Topol EJ. Prevalence of asymptomatic SARS-CoV-2 infection. Ann Intern Med. 2020;173(5):362–7. https://doi.org/10.7326/M20-3012.
Furukawa NW, Brooks JT, Sobel J. Evidence supporting transmission of severe acute respiratory syndrome coronavirus 2 while presymptomatic or asymptomatic. Emerg Infect Dis. 2020;26(7). https://doi.org/10.3201/eid2607.201595.
Lavezzo E, Franchin E, Ciavarella C, Cuomo-Dannenburg G, Barzon L, Del Vecchio C, et al. Suppression of a SARS-CoV-2 outbreak in the Italian municipality of Vo. Nature. 2020;584(7821):425–9. https://doi.org/10.1038/s41586-020-2488-1.
Buitrago-Garcia D, Egli-Gany D, Counotte MJ, Hossmann S, Imeri H, Ipekci AM, et al. Occurrence and transmission potential of asymptomatic and presymptomatic SARS-CoV-2 infections: a living systematic review and meta-analysis. PLoS Med. 2020;17(9):e1003346. https://doi.org/10.1371/journal.pmed.1003346.
Joshi RK, Ray RK, Adhya S, Chauhan VPS, Pani S. Spread of COVID-19 by asymptomatic cases: evidence from military quarantine facilities. BMJ Mil Health. 2020:bmjmilitary-2020-001669. https://doi.org/10.1136/bmjmilitary-2020-001669.
Johansson MA, Quandelacy TM, Kada S, Prasad PV, Steele M, Brooks JT, et al. SARS-CoV-2 transmission from people without COVID-19 symptoms. JAMA Netw Open. 2021;4(1):e2035057. https://doi.org/10.1001/jamanetworkopen.2020.35057.
Tindale LC, Stockdale JE, Coombe M, Garlock ES, Lau WYV, Saraswat M, et al. Evidence for transmission of COVID-19 prior to symptom onset. eLife. 2020;9:e57149. https://doi.org/10.7554/eLife.57149.
Nishiura H, Linton NM, Akhmetzhanov AR. Serial interval of novel coronavirus (COVID-19) infections. Int J Infect Dis. 2020;93:284–6. https://doi.org/10.1016/j.ijid.2020.02.060.
Zhao S, Gao D, Zhuang Z, Chong MKC, Cai Y, Ran J, et al. Estimating the serial interval of the novel coronavirus disease (COVID-19): a statistical analysis using the public data in Hong Kong from January 16 to February 15, 2020. Front Phys. 2020;8. https://doi.org/10.3389/fphy.2020.00347.
Wei WE. Presymptomatic Transmission of SARS-CoV-2 — Singapore, January 23–March 16, 2020. MMWR Morb Mortal Wkly Rep. 2020;69. https://doi.org/10.15585/mmwr.mm6914e1.
Tong Z-D, Tang A, Li K-F, Li P, Wang H-L, Yi J-P, et al. Potential presymptomatic transmission of SARS-CoV-2, Zhejiang Province, China, 2020. Emerg Infect Dis. 2020;26(5):1052–4. https://doi.org/10.3201/eid2605.200198.
Considerations for quarantine of contacts of COVID-19 cases. https://www.who.int/publications-detail-redirect/considerations-for-quarantine-of-individuals-in-the-context-of-containment-for-coronavirus-disease-(covid-19). Accessed 5 Nov 2020.
Peak CM, Kahn R, Grad YH, Childs LM, Li R, Lipsitch M, et al. Individual quarantine versus active monitoring of contacts for the mitigation of COVID-19: a modelling study. Lancet Infect Dis. 2020;20(9):1025–33. https://doi.org/10.1016/S1473-3099(20)30361-3.
Saldaña F, Flores-Arguedas H, Camacho-Gutiérrez JA, Barradas I. Modeling the transmission dynamics and the impact of the control interventions for the COVID-19 epidemic outbreak. Math Biosci Eng. 2020;17(4):4165–83. https://doi.org/10.3934/mbe.2020231.
Lin C, Mullen J, Braund WE, Tu P, Auerbach J. Reopening safely – lessons from Taiwan's COVID-19 response. J Glob Health. 2020;10:020318. https://doi.org/10.7189/jogh.10.020318.
Lam HY, Lam TS, Wong CH, Lam WH, Leung CME, Au KWA, et al. The epidemiology of COVID-19 cases and the successful containment strategy in Hong Kong – January to May 2020. Int J Infect Dis. 2020;98:51–8. https://doi.org/10.1016/j.ijid.2020.06.057.
Paltiel AD, Zheng A, Walensky RP. Assessment of SARS-CoV-2 screening strategies to permit the safe reopening of college campuses in the United States. JAMA Netw Open. 2020;3(7):e2016818. https://doi.org/10.1001/jamanetworkopen.2020.16818.
Taylor T, Das R, Mueller K, Pransky G, Christian J, Orford R, et al. Safely returning America to work: part I: general guidance for employers. J Occup Environ Med. 2020;62(9):771–9. https://doi.org/10.1097/JOM.0000000000001984.
Murray MT. Mitigating a COVID-19 Outbreak Among Major League Baseball Players — United States, 2020. MMWR Morb Mortal Wkly Rep. 2020;69. https://doi.org/10.15585/mmwr.mm6942a4.
Coronavirus (COVID-19) Travel Restrictions By Country. KAYAK. https://www.kayak.com/travel-restrictions. Accessed 5 Nov 2020.
Visiting Iceland. https://www.covid.is/categories/tourists-travelling-to-iceland. Accessed 5 Nov 2020.
No. 205.2: Quarantine Restrictions on Travelers Arriving in New York. Governor Andrew M. Cuomo. 2020. https://www.governor.ny.gov/news/no-2052-quarantine-restrictions-travelers-arriving-new-york. Accessed 5 Nov 2020.
Centers for Medicare and Medicaid Services. What is CMS's policy regarding laboratories performing antigen tests authorized by the Food and Drug Administration (FDA) under an Emergency Use Authorization (EUA) for use at the point of care (POC) or in patient care settings operating under a Clinical Laboratory Improvement Amendments of 1988 (CLIA) Certificate of Waiver on asymptomatic individuals? https://www.cms.gov/files/document/clia-poc-ag-test-enforcement-discretion.pdf. Accessed 5 Nov 2020.
Food and Drug Administration. Individual EUAs for Antigen Diagnostic Tests for SARS-CoV-2. FDA. 2020. https://www.fda.gov/medical-devices/coronavirus-disease-2019-covid-19-emergency-use-authorizations-medical-devices/vitro-diagnostics-euas. Accessed 5 Nov 2020.
Alemany A, Baró B, Ouchi D, Rodó P, Ubals M, Corbacho-Monné M, et al. Analytical and clinical performance of the panbio COVID-19 antigen-detecting rapid diagnostic test. J Infect. 2021. https://doi.org/10.1016/j.jinf.2020.12.033.
Pilarowski G, Lebel P, Sunshine S, Liu J, Crawford E, Marquez C, the CLIAHUB Consortium, DeRisi J. Performance characteristics of a rapid severe acute respiratory syndrome coronavirus 2 antigen detection assay at a public plaza testing site in San Francisco. J Infect Dis. 2021. https://doi.org/10.1093/infdis/jiaa802.
Prince-Guerra JL. Evaluation of Abbott BinaxNOW Rapid Antigen Test for SARS-CoV-2 Infection at Two Community-Based Testing Sites — Pima County, Arizona, November 3–17, 2020. MMWR Morb Mortal Wkly Rep. 2021;70. https://doi.org/10.15585/mmwr.mm7003e3.
Pray IW. Performance of an Antigen-Based Test for Asymptomatic and Symptomatic SARS-CoV-2 Testing at Two University Campuses — Wisconsin, September–October 2020. MMWR Morb Mortal Wkly Rep. 2021;69. https://doi.org/10.15585/mmwr.mm695152a3.
Burns J, Movsisyan A, Stratil JM, Coenen M, Emmert-Fees KM, Geffert K, et al. Travel-related control measures to contain the COVID-19 pandemic: a rapid review. Cochrane Database Syst Rev. 2020. https://doi.org/10.1002/14651858.CD013717.
Clifford S, Quilty BJ, Russell TW, Liu Y, Chan Y-WD, Pearson CAB, et al. Strategies to reduce the risk of SARS-CoV-2 re-introduction from international travellers. medRxiv. 2020. https://doi.org/10.1101/2020.07.24.20161281.
Dickens BL, Koo JR, Lim JT, Sun H, Clapham HE, Wilder-Smith A, et al. Strategies at points of entry to reduce importation risk of COVID-19 cases and re-open travel. J Travel Med. 2020;27(8). https://doi.org/10.1093/jtm/taaa141.
Ashcroft P, Lehtinen S, Angst DC, Low N, Bonhoeffer S. Quantifying the impact of quarantine duration on COVID-19 transmission. medRxiv. 2020. https://doi.org/10.1101/2020.09.24.20201061.
He X, Lau EHY, Wu P, Deng X, Wang J, Hao X, et al. Temporal dynamics in viral shedding and transmissibility of COVID-19. Nat Med. 2020;26(5):672–5. https://doi.org/10.1038/s41591-020-0869-5.
Casey M, Griffin J, McAloon CG, Byrne AW, Madden JM, McEvoy D, et al. Pre-symptomatic transmission of SARS-CoV-2 infection: a secondary analysis using published data. medRxiv. 2020. https://doi.org/10.1101/2020.05.08.20094870.
Benefield AE, Skrip LA, Clement A, Althouse RA, Chang S, Althouse BM. SARS-CoV-2 viral load peaks prior to symptom onset: a systematic review and individual-pooled analysis of coronavirus viral load from 66 studies. medRxiv. 2020. https://doi.org/10.1101/2020.09.28.20202028.
Walsh KA, Jordan K, Clyne B, Rohde D, Drummond L, Byrne P, et al. SARS-CoV-2 detection, viral load and infectivity over the course of an infection. J Infect. 2020;81:357–71.
Byrne AW, McEvoy D, Collins AB, Hunt K, Casey M, Barber A, et al. Inferred duration of infectious period of SARS-CoV-2: rapid scoping review and analysis of available evidence for asymptomatic and symptomatic COVID-19 cases. BMJ Open. 2020;10(8):e039856. https://doi.org/10.1136/bmjopen-2020-039856.
Goyal A, Reeves DB, Cardozo-Ojeda EF, Schiffer JT, Mayer BT. Wrong person, place and time: viral load and contact network structure predict SARS-CoV-2 transmission and super-spreading events. medRxiv. 2020. https://doi.org/10.1101/2020.08.07.20169920.
Wölfel R, Corman VM, Guggemos W, Seilmaier M, Zange S, Müller MA, et al. Virological assessment of hospitalized patients with COVID-2019. Nature. 2020;581(7809):465–9. https://doi.org/10.1038/s41586-020-2196-x.
McAloon C, Collins Á, Hunt K, Barber A, Byrne AW, Butler F, et al. Incubation period of COVID-19: a rapid systematic review and meta-analysis of observational research. BMJ Open. 2020;10(8):e039652. https://doi.org/10.1136/bmjopen-2020-039652.
Kucirka LM, Lauer SA, Laeyendecker O, Boon D, Lessler J. Variation in false-negative rate of reverse transcriptase polymerase chain reaction–based SARS-CoV-2 tests by time since exposure. Ann Intern Med. 2020;173(4):262–7. https://doi.org/10.7326/M20-1495.
Lamb TL, Winter SR, Rice S, Ruskin KJ, Vaughn A. Factors that predict passengers willingness to fly during and after the COVID-19 pandemic. J Air Transp Manag. 2020;89:101897. https://doi.org/10.1016/j.jairtraman.2020.101897.
Statement on the fifth meeting of the International Health Regulations (2005) Emergency Committee regarding the coronavirus disease (COVID-19) pandemic. https://www.who.int/news/item/30-10-2020-statement-on-the-fifth-meeting-of-the-international-health-regulations-(2005)-emergency-committee-regarding-the-coronavirus-disease-(covid-19)-pandemic. Accessed 5 Nov 2020.
Larremore DB, Wilder B, Lester E, Shehata S, Burke JM, Hay JA, et al. Test sensitivity is secondary to frequency and turnaround time for COVID-19 surveillance. medRxiv. 2020. https://doi.org/10.1101/2020.06.22.20136309.
Quilty BJ, Clifford S, CMMID nCoV working group, Flasche S, Eggo RM. Effectiveness of airport screening at detecting travellers infected with novel coronavirus (2019-nCoV). Euro Surveill. 2020;25(5):2000080.
Wells CR, Townsend JP, Pandey A, Krieger G, Singer B, McDonald RH, et al. Optimal COVID-19 quarantine and testing strategies. medRxiv. 2020. https://doi.org/10.1101/2020.10.27.20211631.
Webster RK, Brooks SK, Smith LE, Woodland L, Wessely S, Rubin GJ. How to improve adherence with quarantine: rapid review of the evidence. Public Health. 2020;182:163–9. https://doi.org/10.1016/j.puhe.2020.03.007.
Bullard J, Dust K, Funk D, Strong JE, Alexander D, Garnett L, et al. Predicting infectious severe acute respiratory syndrome coronavirus 2 from diagnostic samples. Clin Infect Dis. 2020. https://doi.org/10.1093/cid/ciaa638.
No specific funding source was used for this study.
COVID-19 Response, Centers for Disease Control and Prevention, Atlanta, USA
Michael A. Johansson, Hannah Wolford, Prabasaj Paul, Pamela S. Diaz, Tai-Ho Chen, Clive M. Brown, Martin S. Cetron & Francisco Alvarado-Ramy
MAJ, PSD, THC, CMB, MSC, and FAR conceived the study. MAJ, HW, and PP designed and carried out the analyses. MAJ, HW, PP, PSD, and FAR drafted the manuscript. All authors revised the manuscript. The authors read and approved the final manuscript.
Correspondence to Michael A. Johansson.
The findings and conclusions in this report are those of the authors and do not necessarily represent the views of the Centers for Disease Control and Prevention.
Johansson, M.A., Wolford, H., Paul, P. et al. Reducing travel-related SARS-CoV-2 transmission with layered mitigation measures: symptom monitoring, quarantine, and testing. BMC Med 19, 94 (2021). https://doi.org/10.1186/s12916-021-01975-w | CommonCrawl |
Can the unresolved X-ray background be explained by emission from the optically-detected faint galaxies of the GOODS project?
by M. A. Worsley; A. C. Fabian; F. E. Bauer; D. M. Alexander; W. N. Brandt; B. D. Lehmer
The emission from individual X-ray sources in the Chandra Deep Fields and XMM-Newton Lockman Hole shows that almost half of the hard X-ray background above 6 keV is unresolved and implies the existence of a missing population of heavily obscured active galactic nuclei (AGN). We have stacked the 0.5-8 keV X-ray emission from optical sources in the Great Observatories Origins Deep Survey (GOODS; which covers the Chandra Deep Fields) to determine whether these galaxies, which are individually...
Source: http://arxiv.org/abs/astro-ph/0602605v1
A deficit of ultraluminous X-ray sources in luminous infrared galaxies
by W. Luangtip; T. P. Roberts; S. Mineo; B. D. Lehmer; D. M. Alexander; F. E. Jackson; A. D. Goulding; J. L. Fischer
We present results from a Chandra study of ultraluminous X-ray sources (ULXs) in a sample of 17 nearby (D_L < 60 Mpc) luminous infrared galaxies...
Topics: High Energy Astrophysical Phenomena, Astrophysics
Source: http://arxiv.org/abs/1410.1569
The average submillimetre properties of Lyman-alpha Blobs at z=3
by N. K. Hine; J. E. Geach; Y. Matsuda; B. D. Lehmer; M. J. Michalowski; D. Farrah; M. Spaans; S. J. Oliver; D. J. B. Smith; S. C. Chapman; T. Jenness; D. M. Alexander; I. Robson; P. van der Werf
Ly-alpha blobs (LABs) offer insight into the complex interface between galaxies and their circumgalactic medium. Whilst some LABs have been found to contain luminous star-forming galaxies and active galactic nuclei that could potentially power the Ly-alpha emission, others appear not to be associated with obvious luminous galaxy counterparts. It has been speculated that LABs may be powered by cold gas streaming on to a central galaxy, providing an opportunity to directly observe the `cold...
Topics: Astrophysics, Astrophysics of Galaxies
Source: http://arxiv.org/abs/1605.04912
A 100-kpc inverse Compton X-ray halo around 4C60.07 at z=3.79
by Ian Smail; B. D. Lehmer; R. J. Ivison; D. M. Alexander; R. G. Bower; J. A. Stevens; J. E. Geach; C. A. Scharf; K. E. K. Coppin; W. J. M. van Breugel
We analyse a 100-ks Chandra observation of the powerful radio galaxy, 4C60.07 at z=3.79. We identify extended X-ray emission with Lx~10^45 erg/s across a ~90-kpc region around the radio galaxy. The energetics of this X-ray halo and its morphological similarity to the radio emission from the galaxy suggest that it arises from inverse Compton (IC) scattering, by relativistic electrons in the radio jets, of Cosmic Microwave Background photons and potentially far-infrared photons from the dusty...
Source: http://arxiv.org/abs/0908.0819v1
Identification of the Hard X-ray Source Dominating the E > 25 keV Emission of the Nearby Galaxy M31
by M. Yukita; A. Ptak; A. E. Hornschemeier; D. Wik; T. J. Maccarone; K. Pottschmidt; A. Zezas; V. Antoniou; R. Ballhausen; B. D. Lehmer; A. Lien; B. Williams; F. Baganoff; P. T. Boyd; T. Enoto; J. Kennea; K. L. Page; Y. Choi
We report the identification of a bright hard X-ray source dominating the M31 bulge above 25 keV from a simultaneous NuSTAR-Swift observation. We find that this source is the counterpart to Swift J0042.6+4112, which was previously detected in the Swift BAT All-sky Hard X-ray Survey. This Swift BAT source had been suggested to be the combined emission from a number of point sources; our new observations have identified a single X-ray source from 0.5 to 50 keV as the counterpart for the first...
Topics: Astrophysics of Galaxies, Astrophysics, High Energy Astrophysical Phenomena
Supermassive Black Hole Growth in Starburst Galaxies over Cosmic Time: Constraints from the Deepest Chandra Fields
by D. A. Rafferty; W. N. Brandt; D. M. Alexander; Y. Q. Xue; F. E. Bauer; B. D. Lehmer; B. Luo; C. Papovich
We present an analysis of deep multiwavelength data for z ~ 0.3-3 starburst galaxies selected by their 70 um emission in the Extended-Chandra Deep Field-South and Extended Groth Strip. We identify active galactic nuclei (AGNs) in these infrared sources through their X-ray emission and quantify the fraction that host an AGN. We find that the fraction depends strongly on both the mid-infrared color and rest-frame mid-infrared luminosity of the source, rising to ~ 50-70% at the warmest colors and...
ALMA observations of a z~3.1 Protocluster: Star Formation from Active Galactic Nuclei and Lyman-Alpha Blobs in an Overdense Environment
by D. M. Alexander; J. M. Simpson; C. M. Harrison; J. R. Mullaney; I. Smail; J. E. Geach; R. C. Hickox; N. K. Hine; A. Karim; M. Kubo; B. D. Lehmer; Y. Matsuda; D. J. Rosario; F. Stanley; A. M. Swinbank; H. Umehata; T. Yamada
We exploit ALMA 870um observations to measure the star-formation rates (SFRs) of eight X-ray detected Active Galactic Nuclei (AGNs) in a z~3.1 protocluster, four of which reside in extended Ly-alpha haloes (often termed Ly-alpha blobs: LABs). Three of the AGNs are detected by ALMA and have implied SFRs of ~220-410 M_sun/yr; the non-detection of the other five AGNs places SFR upper limits of [...]. The large LABs (> 100 kpc) do not host more luminous star formation than the smaller LABs, despite being an order of...
Variability Selected Low-Luminosity Active Galactic Nuclei in the 4 Ms Chandra Deep Field-South
by M. Young; W. N. Brandt; Y. Q. Xue; M. Paolillo; D. M. Alexander; F. E. Bauer; B. D. Lehmer; B. Luo; O. Shemmer; D. P. Schneider; C. Vignali
The 4 Ms Chandra Deep Field-South (CDF-S) and other deep X-ray surveys have been highly effective at selecting active galactic nuclei (AGN). However, cosmologically distant low-luminosity AGN (LLAGN) have remained a challenge to identify due to significant contribution from the host galaxy. We identify long-term X-ray variability (~month-years, observed frame) in 20 of 92 CDF-S galaxies spanning redshifts z~0.08-1.02 that do not meet other AGN selection criteria. We show that the observed...
The 2 Ms Chandra Deep Field-North Survey and the 250 ks Extended Chandra Deep Field-South Survey: Improved Point-Source Catalogs
by Y. Q. Xue; B. Luo; W. N. Brandt; D. M. Alexander; F. E. Bauer; B. D. Lehmer; G. Yang
We present improved point-source catalogs for the 2 Ms Chandra Deep Field-North (CDF-N) and the 250 ks Extended Chandra Deep Field-South (E-CDF-S), implementing a number of recent improvements in Chandra source-cataloging methodology. For the CDF-N/E-CDF-S, we provide a main catalog that contains 683/1003 X-ray sources detected with wavdetect at a false-positive probability threshold of $10^{-5}$ that also satisfy a binomial-probability source-selection criterion of $P$...
Topics: Astrophysics, High Energy Astrophysical Phenomena, Cosmology and Nongalactic Astrophysics,...
Photometric Redshifts in the Hawaii-Hubble Deep Field-North (H-HDF-N)
by G. Yang; Y. Q. Xue; B. Luo; W. N. Brandt; D. M. Alexander; F. E. Bauer; W. Cui; X. Kong; B. D. Lehmer; J. -X. Wang; X. -B. Wu; F. Yuan; Y. -F. Yuan; H. Y. Zhou
We derive photometric redshifts (z_phot) for sources in the entire ($\sim0.4$ deg$^2$) Hawaii-Hubble Deep Field-North (H-HDF-N) field with the EAzY code, based on point spread function-matched photometry of 15 broad bands from the ultraviolet (U band) to mid-infrared (IRAC 4.5 $\mu$m). Our catalog consists of a total of 131,678 sources. We evaluate the z_phot quality by comparing z_phot with spectroscopic redshifts (z_spec) when available, and find a value of normalized median absolute deviation...
Topics: Astrophysics of Galaxies, Astrophysics
Identifications and Photometric Redshifts of the 2 Ms Chandra Deep Field-South Sources
by B. Luo; W. N. Brandt; Y. Q. Xue; M. Brusa; D. M. Alexander; F. E. Bauer; A. Comastri; A. Koekemoer; B. D. Lehmer; V. Mainieri; D. A. Rafferty; D. P. Schneider; J. D. Silverman; C. Vignali
[Abridged] We present reliable multiwavelength identifications and high-quality photometric redshifts for the 462 X-ray sources in the ~2 Ms Chandra Deep Field-South. Source identifications are carried out using deep optical-to-radio multiwavelength catalogs, and are then combined to create lists of primary and secondary counterparts for the X-ray sources. We identified reliable counterparts for 446 (96.5%) of the X-ray sources, with an expected false-match probability of ~6.2%. A...
Herschel protocluster survey: A search for dusty star-forming galaxies in protoclusters at z=2-3
by Y. Kato; Y. Matsuda; Ian Smail; A. M. Swinbank; B. Hatsukade; H. Umehata; I. Tanaka; T. Saito; D. Iono; Y. Tamura; K. Kohno; D. K. Erb; B. D. Lehmer; J. E. Geach; C. C. Steidel; D. M. Alexander; T. Yamada; T. Hayashino
We present a Herschel/SPIRE survey of three protoclusters at z=2-3 (2QZCluster, HS1700, SSA22). Based on the SPIRE colours (S350/S250 and S500/S350) of 250 $\mu$m sources, we selected high redshift dusty star-forming galaxies potentially associated with the protoclusters. In the 2QZCluster field, we found a 4-sigma overdensity of six SPIRE sources around 4.5' (~2.2 Mpc) from a density peak of H$\alpha$ emitters at z=2.2. In the HS1700 field, we found a 5-sigma overdensity of eight SPIRE sources...
The Evolution of Normal Galaxy X-ray Emission Through Cosmic History: Constraints from the 6 Ms Chandra Deep Field-South
by B. D. Lehmer; A. R. Basu-Zych; S. Mineo; W. N. Brandt; R. T. Eufrasio; T. Fragos; A. E. Hornschemeier; B. Luo; Y. Q. Xue; F. E. Bauer; M. Gilfanov; P. Ranalli; D. P. Schneider; O. Shemmer; P. Tozzi; J. R. Trump; C. Vignali; J. -X. Wang; M. Yukita; A. Zezas
We present measurements of the evolution of normal-galaxy X-ray emission from $z \approx$ 0-7 using local galaxies and galaxy samples in the 6 Ms Chandra Deep Field-South (CDF-S) survey. The majority of the CDF-S galaxies are observed at rest-frame energies above 2 keV, where the emission is expected to be dominated by X-ray binary (XRB) populations; however, hot gas is expected to provide small contributions to the observed-frame < 1 keV emission at $z < 1$. We show that a single...
Topics: Astrophysics, Cosmology and Nongalactic Astrophysics, Astrophysics of Galaxies
A resolved map of the infrared excess in a Lyman Break Galaxy at z=3
by M. P. Koprowski; K. E. K. Coppin; J. E. Geach; N. K. Hine; M. Bremer; S. C. Chapman; L. J. M. Davies; T. Hayashino; K. K. Knudsen; M. Kubo; B. D. Lehmer; Y. Matsuda; D. J. B. Smith; P. P. van der Werf; G. Violino; T. Yamada
We have observed the dust continuum of ten z=3.1 Lyman Break Galaxies with the Atacama Large Millimeter/Submillimeter Array at ~450 mas resolution in Band 7. We detect and resolve the 870um emission in one of the targets with an integrated flux density of S(870)=(192+/-57) uJy, and measure a stacked 3-sigma signal of S(870)=(67+/-23) uJy for the remaining nine. The total infrared luminosities estimated from full spectral energy distribution fits are L(8-1000um)=(8.4+/-2.3)x10^10 Lsun for the...
The Properties and Redshift Evolution of Intermediate-Luminosity Off-Nuclear X-ray Sources in the Chandra Deep Fields
by B. D. Lehmer; W. N. Brandt; A. E. Hornschemeier; D. M. Alexander; F. E. Bauer; A. M. Koekemoer; D. P. Schneider; A. T. Steffen
We analyze a population of intermediate-redshift (z ~ 0.05-0.3), off-nuclear X-ray sources located within optically-bright galaxies in the Great Observatories Origins Deep Survey (GOODS) and Galaxy Evolution from Morphology and SEDs (GEMS) fields. A total of 24 off-nuclear source candidates are classified using deep Chandra exposures from the Chandra Deep Field-North, Chandra Deep Field-South, and Extended Chandra Deep Field-South; 15 of these are newly identified. These sources have average...
The Evolution of AGN Host Galaxies: From Blue to Red and the Influence of Large-Scale Structures
by J. D. Silverman; V. Mainieri; B. D. Lehmer; D. M. Alexander; F. E. Bauer; J. Bergeron; W. N. Brandt; R. Gilli; G. Hasinger; D. P. Schneider; P. Tozzi; C. Vignali; A. M. Koekemoer; T. Miyaji; P. Popesso; P. Rosati; G. Szokoly
We present an analysis of 109 moderate-luminosity (41.9 < Log L{0.5-8.0 keV} < 43.7) AGN in the Extended Chandra Deep Field-South survey, which is drawn from 5,549 galaxies from the COMBO-17 and GEMS surveys having 0.4 < z < 1.1. These obscured or optically-weak AGN facilitate the study of their host galaxies since the AGN provide an insubstantial amount of contamination to the galaxy light. We find that the color distribution of AGN host galaxies is highly dependent upon (1) the...
The Chandra Deep Protocluster Survey: Ly-alpha Blobs are powered by heating, not cooling
by J. E. Geach; D. M. Alexander; B. D. Lehmer; Ian Smail; Y. Matsuda; S. C. Chapman; C. A. Scharf; R. J. Ivison; M. Volonteri; T. Yamada; A. W. Blain; R. G. Bower; F. E. Bauer; A. Basu-Zych
We present the results of a 400ks Chandra survey of 29 extended Ly-alpha emitting nebulae (Ly-alpha Blobs, LABs) in the z=3.09 proto-cluster in the SSA22 field. We detect luminous X-ray counterparts in five LABs, implying a large fraction of active galactic nuclei (AGN) in LABs, f_AGN >= 17% down to L_2-32keV ~ 10^44 erg/s. All of the AGN appear to be heavily obscured, with spectral indices implying obscuring column densities of N_H > 10^23 cm^-2. The AGN fraction should be considered a lower...
Discovery of the Most-Distant Double-Peaked Emitter at z=1.369
by B. Luo; W. N. Brandt; J. D. Silverman; I. V. Strateva; F. E. Bauer; P. Capak; J. Kartaltepe; B. D. Lehmer; V. Mainieri; M. Salvato; G. Szokoly; D. P. Schneider; C. Vignali
We report the discovery of the most-distant double-peaked emitter, CXOECDFS J033115.0-275518, at z=1.369. A Keck/DEIMOS spectrum shows a clearly double-peaked broad Mg II $\lambda2799$ emission line, with FWHM 11000 km/s for the line complex. The line profile can be well fit by an elliptical relativistic Keplerian disk model. This is one of a handful of double-peaked emitters known to be a luminous quasar, with excellent multiwavelength coverage and a high-quality X-ray spectrum. CXOECDFS...
Tracing the Mass-Dependent Star Formation History of Late-Type Galaxies using X-ray Emission: Results from the Chandra Deep Fields
by B. D. Lehmer; W. N. Brandt; D. M. Alexander; E. F. Bell; A. E. Hornschemeier; D. H. McIntosh; F. E. Bauer; R. Gilli; V. Mainieri; D. P. Schneider; J. D. Silverman; A. T. Steffen; P. Tozzi; C. Wolf
We report on the X-ray evolution over the last ~9 Gyr of cosmic history (i.e., since z = 1.4) of late-type galaxy populations in the Chandra Deep Field-North and Extended Chandra Deep Field-South (CDF-N and E-CDF-S, respectively; jointly CDFs) survey fields. Our late-type galaxy sample consists of 2568 galaxies, which were identified using rest-frame optical colors and HST morphologies. We utilized X-ray stacking analyses to investigate the X-ray emission from these galaxies, emphasizing the...
The VLA survey of the Chandra Deep Field South III: X-ray spectral properties of radio sources
by P. Tozzi; V. Mainieri; P. Rosati; P. Padovani; K. I. Kellermann; E. Fomalont; N. Miller; P. Shaver; J. Bergeron; W. N. Brandt; M. Brusa; R. Giacconi; G. Hasinger; B. D. Lehmer; M. Nonino; C. Norman; J. Silverman
We discuss the X-ray properties of the radio sources detected in a deep 1.4 and 5 GHz VLA Radio survey of the Extended Chandra Deep Field South (E-CDFS). Among the 266 radio sources detected, we find 89 sources (1/3 of the total) with X-ray counterparts in the catalog of the 1Ms exposure of the central 0.08 deg^2 (Giacconi et al. 2002; Alexander et al. 2003) or in the catalog of the 250 ks exposure of the 0.3 deg^2 E-CDFS field (Lehmer et al. 2005). For 76 (85%) of these sources we have...
Color-Magnitude Relations of Active and Non-Active Galaxies in the Chandra Deep Fields: High-Redshift Constraints and Stellar-Mass Selection Effects
by Y. Q. Xue; W. N. Brandt; B. Luo; D. A. Rafferty; D. M. Alexander; F. E. Bauer; B. D. Lehmer; D. P. Schneider; J. D. Silverman
[Abridged] We extend color-magnitude relations for moderate-luminosity X-ray AGN hosts and non-AGN galaxies through the galaxy formation epoch in the Chandra Deep Fields. We utilized analyses of color-magnitude diagrams (CMDs) to assess the role of moderate-luminosity AGNs in galaxy evolution. First, we confirm some previous results and extend them to higher redshifts, e.g., there is no apparent color bimodality for AGN hosts from z~0-2, but non-AGN galaxy color bimodality exists up to z~3;...
The Chandra Deep Field-South Survey: 4 Ms Source Catalogs
by Y. Q. Xue; B. Luo; W. N. Brandt; F. E. Bauer; B. D. Lehmer; P. S. Broos; D. P. Schneider; D. M. Alexander; M. Brusa; A. Comastri; A. C. Fabian; R. Gilli; G. Hasinger; A. E. Hornschemeier; A. Koekemoer; T. Liu; V. Mainieri; M. Paolillo; D. A. Rafferty; P. Rosati; O. Shemmer; J. D. Silverman; I. Smail; P. Tozzi; C. Vignali
[abridged] We present point-source catalogs for the 4Ms Chandra Deep Field-South (CDF-S), which is the deepest Chandra survey to date and covers an area of 464.5 arcmin^2. We provide a main source catalog, which contains 740 X-ray point sources that are detected with wavdetect at a false-positive probability threshold of 1E-5 and also satisfy a binomial-probability source-selection criterion of P...
X-rays from the First Massive Black Holes
by W. N. Brandt; C. Vignali; B. D. Lehmer; L. A. Lopez; D. P. Schneider; I. V. Strateva
We briefly review some recent results from Chandra and XMM-Newton studies of the highest redshift (z > 4) active galactic nuclei (AGNs). Specific topics covered include radio-quiet quasars, radio-loud quasars, moderate-luminosity AGNs in X-ray surveys, and future prospects. No significant changes in AGN X-ray emission properties have yet been found at high redshift, indicating that the small-scale X-ray emission regions of AGNs are insensitive to the dramatic changes on larger scales that...
A Chandra Perspective On Galaxy-Wide X-ray Binary Emission And Its Correlation With Star Formation Rate And Stellar Mass: New Results From Luminous Infrared Galaxies
by B. D. Lehmer; D. M. Alexander; F. E. Bauer; W. N. Brandt; A. D. Goulding; L. P. Jenkins; A. Ptak; T. P. Roberts
We present new Chandra observations that complete a sample of seventeen (17) luminous infrared galaxies (LIRGs) with D < 60 Mpc and low Galactic column densities of N_H < 5 X 10^20 cm^-2. The LIRGs in our sample have total infrared (8-1000um) luminosities in the range of L_IR ~ (1-8) X 10^11 L_sol. The high-resolution imaging and X-ray spectral information from our Chandra observations allow us to measure separately X-ray contributions from active galactic nuclei (AGNs) and normal galaxy...
The X-ray-to-Optical Properties of Optically-Selected Active Galaxies Over Wide Luminosity and Redshift Ranges
by A. T. Steffen; I. Strateva; W. N. Brandt; D. M. Alexander; A. M. Koekemoer; B. D. Lehmer; D. P. Schneider; C. Vignali
We present partial-correlation analyses that examine the strengths of the relationships between L_UV, L_X, Alpha_OX, and redshift for optically-selected AGNs. We extend the work of Strateva et al. (2005), which analyzed optically-selected AGNs from the Sloan Digital Sky Survey (SDSS), by including 52 moderate-luminosity, optically-selected AGNs from the COMBO-17 survey with corresponding deep (~250 ks to 1 Ms) X-ray observations from the Extended Chandra Deep Field-South. The COMBO-17 survey...
The Extended Chandra Deep Field-South Survey. Chandra Point-Source Catalogs
by B. D. Lehmer; W. N. Brandt; D. M. Alexander; F. E. Bauer; D. P. Schneider; P. Tozzi; J. Bergeron; G. P. Garmire; R. Giacconi; R. Gilli; G. Hasinger; A. E. Hornschemeier; A. M. Koekemoer; V. Mainieri; T. Miyaji; M. Nonino; P. Rosati; J. D. Silverman; G. Szokoly; C. Vignali
We present Chandra point-source catalogs for the Extended Chandra Deep Field-South (E-CDF-S) survey. The E-CDF-S consists of four contiguous 250 ks Chandra observations covering an approximately square region of total solid angle ~0.3 deg^2, which flank the existing ~1 Ms Chandra Deep Field-South (CDF-S). The survey reaches sensitivity limits of 1.1 X 10^-16 erg/cm^2/s and 6.7 X 10^-16 erg/cm^2/s for the 0.5-2.0 keV and 2-8 keV bands, respectively. We detect 762 distinct X-ray point sources...
The Extended Chandra Deep Field-South Survey: Optical spectroscopy of faint X-ray sources with the VLT and Keck
by J. D. Silverman; V. Mainieri; M. Salvato; G. Hasinger; J. Bergeron; P. Capak; G. Szokoly; A. Finoguenov; R. Gilli; P. Rosati; P. Tozzi; C. Vignali; D. M. Alexander; W. N. Brandt; B. D. Lehmer; B. Luo; D. Rafferty; Y. Q. Xue; I. Balestra; F. E. Bauer; M. Brusa; A. Comastri; J. Kartaltepe; A. M. Koekemoer; T. Miyaji; D. P. Schneider; E. Treister; L. Wisotski; M. Schramm
We present the results of a program to acquire high-quality optical spectra of X-ray sources detected in the E-CDF-S and its central area. New spectroscopic redshifts are measured for 283 counterparts to Chandra sources with deep exposures (t~2-9 hr per pointing) using multi-slit facilities on both the VLT and Keck thus bringing the total number of spectroscopically-identified X-ray sources to over 500 in this survey field. We provide a comprehensive catalog of X-ray sources detected in the...
Modeling the Redshift Evolution of the Normal Galaxy X-ray Luminosity Function
by M. Tremmel; T. Fragos; B. D. Lehmer; P. Tzanavaris; K. Belczynski; V. Kalogera; A. R. Basu-Zych; W. M. Farr; A. Hornschemeier; L. Jenkins; A. Ptak; A. Zezas
Emission from X-ray binaries (XRBs) is a major component of the total X-ray luminosity of normal galaxies, so X-ray studies of high redshift galaxies allow us to probe the formation and evolution of X-ray binaries on very long timescales. In this paper, we present results from large-scale population synthesis models of binary populations in galaxies from z = 0 to 20. We use as input into our modeling the Millennium II Cosmological Simulation and the updated semi-analytic galaxy catalog by Guo...
The X-ray Properties of Early-Type Galaxies in the Extended Chandra Deep Field-South
by B. D. Lehmer; W. N. Brandt; D. M. Alexander; E. F. Bell; D. H. McIntosh; F. E. Bauer; G. Hasinger; V. Mainieri; T. Miyaji; D. P. Schneider; A. T. Steffen
We investigate the evolution over the last 6.3 Gyr of cosmic time (i.e., since z ~ 0.7) of the average X-ray properties of early-type galaxies within the Extended Chandra Deep Field-South (E-CDF-S). Our early-type galaxy sample includes 539 objects with red-sequence colors and Sersic indices larger than n = 2.5, which were selected jointly from the COMBO-17 (Classifying Objects by Medium-Band Observations in 17 Filters) and GEMS (Galaxy Evolution from Morphologies and SEDs) surveys. We utilize...
The X-ray Luminosity Functions of Field Low Mass X-ray Binaries in Early-Type Galaxies: Evidence for a Stellar Age Dependence
by B. D Lehmer; M. Berkeley; A. Zezas; D. M. Alexander; A. Basu-Zych; F. E. Bauer; W. N. Brandt; T. Fragos; A. E. Hornschemeier; V. Kalogera; A. Ptak; G. R. Sivakoff; P. Tzanavaris; M. Yukita
We present direct constraints on how the formation of low-mass X-ray binary (LMXB) populations in galactic fields depends on stellar age. In this pilot study, we utilize Chandra and Hubble Space Telescope (HST) data to detect and characterize the X-ray point source populations of three nearby early-type galaxies: NGC 3115, 3379, and 3384. The luminosity-weighted stellar ages of our sample span 3-10 Gyr. X-ray binary population synthesis models predict that the field LMXBs associated with...
Topics: Astrophysics of Galaxies, High Energy Astrophysical Phenomena, Astrophysics
Tracking Down the Source Population Responsible for the Unresolved Cosmic 6-8 keV Background
by Y. Q. Xue; S. X. Wang; W. N. Brandt; B. Luo; D. M. Alexander; F. E. Bauer; A. Comastri; A. C. Fabian; R. Gilli; B. D. Lehmer; D. P. Schneider; C. Vignali; M. Young
Using the 4 Ms Chandra Deep Field-South (CDF-S) survey, we have identified a sample of 6845 X-ray undetected galaxies that dominates the unresolved ~ 20-25% of the 6-8 keV cosmic X-ray background (XRB). This sample was constructed by applying mass and color cuts to sources from a parent catalog based on GOODS-South HST z-band imaging of the central 6'-radius area of the 4 Ms CDF-S. The stacked 6-8 keV detection is significant at the 3.9 sigma level, but the stacked emission was not detected in...
X-ray Properties of Lyman Break Galaxies in the Great Observatories Origins Deep Survey
by B. D. Lehmer; W. N. Brandt; D. M. Alexander; F. E. Bauer; C. J. Conselice; M. E. Dickinson; M. Giavalisco; N. A. Grogin; A. M. Koekemoer; K. S. Lee; L. A. Moustakas; D. P. Schneider
We constrain the X-ray emission properties of Lyman break galaxies (LBGs) at z ~ 3-6 using the 2 Ms Chandra Deep Field-North and 1 Ms Chandra Deep Field-South. Large samples of LBGs were discovered using HST as part of the Great Observatories Origins Deep Survey (GOODS). Deep optical and X-ray imaging over the GOODS fields have allowed us to place the most significant constraints on the X-ray properties of LBGs to date. Mean X-ray properties of 449, 1734, 629, and 247 LBGs with z ~ 3, 4, 5, and...
An Extragalactic Spectroscopic Survey of the SSA22 Field
by C. Saez; B. D. Lehmer; F. E. Bauer; D. Stern; A. Gonzales; I. Rreza; D. M. Alexander; Y. Matsuda; J. E. Geach; F. A. Harrison; T. Hayashino
We present VLT VIMOS, Keck DEIMOS and Keck LRIS multi-object spectra of 367 sources in the field of the z ~ 3.09 protocluster SSA22. Sources are spectroscopically classified via template matching, allowing new identifications for 206 extragalactic sources, including 36 z > 2 Lyman-break galaxies (LBGs) and Lyman-alpha emitters (LAEs), 8 protocluster members, and 94 X-ray sources from the ~ 400 ks Chandra deep survey of SSA22. Additionally, in the area covered by our study, we have...
The 4 Ms Chandra Deep Field-South Number Counts Apportioned by Source Class: Pervasive Active Galactic Nuclei and the Ascent of Normal Galaxies
by B. D. Lehmer; Y. Q. Xue; W. N. Brandt; D. M. Alexander; F. E. Bauer; M. Brusa; A. Comastri; R. Gilli; A. E. Hornschemeier; B. Luo; M. Paolillo; A. Ptak; O. Shemmer; D. P. Schneider; P. Tozzi; C. Vignali
We present 0.5-2 keV, 2-8 keV, 4-8 keV, and 0.5-8 keV cumulative and differential number counts (logN-logS) measurements for the recently completed ~4 Ms Chandra Deep Field-South (CDF-S) survey, the deepest X-ray survey to date. We implement a new Bayesian approach, which allows reliable calculation of number counts down to flux limits that are factors of ~1.9-4.3 times fainter than the previously deepest number-counts investigations. In the soft band, the most sensitive bandpass in our...
An enhanced merger fraction within the galaxy population of the SSA22 protocluster at z ~ 3.1
by N. K. Hine; J. E. Geach; D. M. Alexander; B. D. Lehmer; S C. Chapman; Y. Matsuda
The over-dense environments of protoclusters of galaxies in the early Universe (z>2) are expected to accelerate the evolution of galaxies, with an increased rate of stellar mass assembly and black hole accretion compared to co-eval galaxies in the average density `field'. These galaxies are destined to form the passive population of massive systems that dominate the cores of rich clusters today. While signatures of accelerated growth of galaxies in the SSA22 protocluster (z=3.1) have been...
Testing the Universality of the Stellar IMF with Chandra and HST
by D. A. Coulter; B. D. Lehmer; R. T. Eufrasio; A. Kundu; T. Maccarone; M. Peacock; A. E. Hornschemeier; A. Basu-Zych; A. H. Gonzalez; C. Maraston; S. E. Zepf
The stellar initial mass function (IMF), which is often assumed to be universal across unresolved stellar populations, has recently been suggested to be "bottom-heavy" for massive ellipticals. In these galaxies, the prevalence of gravity-sensitive absorption lines (e.g. Na I and Ca II) in their near-IR spectra implies an excess of low-mass stars; a bottom-heavy IMF extrapolated to high stellar masses ($m \ge 8$ $M_\odot$) would lead to a corresponding deficit of neutron stars and black holes, and therefore of low-mass X-ray binaries (LMXBs), per...
Submillimeter Array Identification of the Millimeter-Selected Galaxy SSA22-AzTEC1: A Protoquasar in a Protocluster?
by Y. Tamura; D. Iono; D. J. Wilner; M. Kajisawa; Y. K. Uchimoto; D. M. Alexander; A. Chung; H. Ezawa; B. Hatsukade; T. Hayashino; D. H. Hughes; T. Ichikawa; S. Ikarashi; R. Kawabe; K. Kohno; B. D. Lehmer; Y. Matsuda; K. Nakanishi; T. Takata; G. W. Wilson; T. Yamada; M. S. Yun
We present results from Submillimeter Array (SMA) 860-micron sub-arcsec astrometry and multiwavelength observations of the brightest millimeter (S_1.1mm = 8.4 mJy) source, SSA22-AzTEC1, found near the core of the SSA22 protocluster that is traced by Ly-alpha emitting galaxies at z = 3.09. We identify a 860-micron counterpart with a flux density of S_860um = 12.2 +/- 2.3 mJy and absolute positional accuracy that is better than 0.3". At the SMA position, we find radio to mid-infrared...
X-ray Spectral Constraints for z~2 Massive Galaxies: The Identification of Reflection-Dominated Active Galactic Nuclei
by D. M. Alexander; F. E. Bauer; W. N. Brandt; E. Daddi; R. C. Hickox; B. D. Lehmer; B. Luo; Y. Q. Xue; M. Young; A. Comastri; A. Del Moro; A. C. Fabian; R. Gilli; A. D. Goulding; V. Mainieri; J. R. Mullaney; M. Paolillo; D. A. Rafferty; D. P. Schneider; O. Shemmer; C. Vignali
We use the 4 Ms CDF-S survey to place direct X-ray constraints on the ubiquity of z~2 heavily obscured AGNs in K-band selected galaxies and to estimate the space density of intrinsically luminous (L_X > 1E43 erg/s) heavily obscured and Compton-thick AGNs at z~2. Our space-density constraints are conservative lower limits but they are already consistent with the range of predictions from X-ray background models.
Concurrent Supermassive Black Hole and Galaxy Growth: Linking Environment and Nuclear Activity in z = 2.23 H-alpha Emitters
by B. D. Lehmer; A. B. Lucy; D. M. Alexander; P. N. Best; J. E. Geach; C. M. Harrison; A. E. Hornschemeier; Y. Matsuda; J. R. Mullaney; Ian Smail; D. Sobral; A. M. Swinbank
We present results from a ~100 ks Chandra observation of the 2QZ Cluster 1004+00 structure at z = 2.23 (hereafter, 2QZ Clus). 2QZ Clus was originally identified as an overdensity of four optically-selected QSOs at z = 2.23 within a 15x15 arcmin^2 region. Narrow-band imaging in the near-IR revealed that the structure contains an additional overdensity of 22 z = 2.23 H-alpha-emitting galaxies (HAEs), resulting in 23 unique z = 2.23 HAEs/QSOs. Our Chandra observations reveal that 3 HAEs in addition...
Confirmation of a correlation between the X-ray luminosity and spectral slope of AGNs in the Chandra Deep Fields
by C. Saez; G. Chartas; W. N. Brandt; B. D. Lehmer; F. E. Bauer; X. Dai; G. P. Garmire
We present results from a statistical analysis of 173 bright radio-quiet AGNs selected from the Chandra Deep Field-North and Chandra Deep Field-South surveys (hereafter, CDFs) in the redshift range of 0.1 < z < 4. We find that the X-ray power-law photon index (Gamma) of radio-quiet AGNs is correlated with their 2-10 keV rest-frame X-ray luminosity (L_X) at the > 99.5 percent confidence level in two redshift bins (0.3 < z < 0.96 and 1.5 < z < 3.3), and is slightly less...
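A Gamma-L_X correlation of this kind is typically assessed with a rank statistic such as Spearman's rho. The sketch below runs the test on mock data with an injected trend; the sample values are invented, and the paper's actual analysis works from spectral fits to the CDF sources.

```python
import numpy as np
from scipy.stats import spearmanr

# Mock photon indices and luminosities with an injected trend; the paper's
# values come from X-ray spectral fits to 173 CDF AGNs, not this toy model.
rng = np.random.default_rng(1)
log_lx = rng.uniform(42.0, 45.0, 173)
gamma = 1.9 + 0.3 * (log_lx - 43.5) + rng.normal(0.0, 0.3, 173)

rho, p = spearmanr(log_lx, gamma)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
# A correlation at the > 99.5 percent confidence level corresponds to p < 0.005.
```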
The Chandra Deep Protocluster Survey: Evidence for an Enhancement in the AGN Activity in the SSA22 Protocluster at z = 3.09
by B. D. Lehmer; D. M. Alexander; J. E. Geach; Ian Smail; A. Basu-Zych; F. E. Bauer; S. C. Chapman; Y. Matsuda; C. A. Scharf; M. Volonteri; T. Yamada
We present results from a new ultra-deep 400 ks Chandra observation of the SSA22 protocluster at z = 3.09. We have studied the X-ray properties of 234 z ~ 3 Lyman break galaxies (LBGs; protocluster and field) and 158 z = 3.09 Ly-alpha emitters (LAEs) in SSA22 to measure the influence of the high-density protocluster environment on the accretion activity of supermassive black holes (SMBHs) in these UV-selected star forming populations. We individually detect X-ray emission from active galactic...
The 0.3-30 keV Spectra of Powerful Starburst Galaxies: NuSTAR and Chandra Observations of NGC 3256 and NGC 3310
by B. D. Lehmer; J. B. Tyler; A. E. Hornschemeier; D. R. Wik; M. Yukita; V. Antoniou; S. Boggs; F. E. Christensen; W. W. Craig; C. J. Hailey; F. A. Harrison; T. J. Maccarone; A. Ptak; D. Stern; A. Zezas; W. W. Zhang
We present nearly simultaneous Chandra and NuSTAR observations of two actively star-forming galaxies within 50 Mpc: NGC 3256 and NGC 3310. Both galaxies are detected by Chandra and NuSTAR, which together provide the first-ever spectra of these two galaxies spanning 0.3-30 keV. The X-ray emission from both galaxies is spatially resolved by Chandra; we find that hot gas dominates the E < 1-3 keV emission while ultraluminous X-ray sources (ULXs) dominate at E > 1-3 keV. The NuSTAR...
Ly-alpha Emission-Line Galaxies at z = 3.1 in the Extended Chandra Deep Field South
by C. Gronwall; R. Ciardullo; T. Hickey; E. Gawiser; J. J. Feldmeier; P. G. van Dokkum; C. M. Urry; D. Herrera; B. D. Lehmer; L. Infante; A. Orsi; D. Marchesini; G. A. Blanc; H. Francke; P. Lira; E. Treister
We describe the results of an extremely deep, 0.28 deg^2 survey for z = 3.1 Ly-alpha emission-line galaxies in the Extended Chandra Deep Field South. By using a narrow-band 5000 Angstrom filter and complementary broadband photometry from the MUSYC survey, we identify a statistically complete sample of 162 galaxies with monochromatic fluxes brighter than 1.5 x 10^-17 ergs cm^-2 s^-1 and observer-frame equivalent widths greater than 80 Angstroms. We show that the equivalent width distribution of...
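For a rough sense of the equivalent-width selection, an observer-frame EW can be estimated from narrow-band and broad-band flux densities. The sketch below treats the broad band as a pure continuum measure and uses an assumed effective narrow-band width; both are simplifications of the paper's actual photometric procedure.

```python
# Observer-frame equivalent width from narrow-band (NB) and broad-band (BB)
# flux densities. Treating BB as pure continuum and adopting a 50 A effective
# NB width are simplifying assumptions, not the paper's exact procedure.
def equiv_width_obs(f_nb, f_bb, dlam_nb=50.0):
    """Return EW_obs in Angstroms; f_nb and f_bb are f_lambda flux densities."""
    f_cont = f_bb                        # continuum level from the broad band
    f_line = (f_nb - f_cont) * dlam_nb   # line flux captured by the NB filter
    return f_line / f_cont

# A source with a strong NB excess: EW_obs = 100 A, above the 80 A cut.
print(equiv_width_obs(f_nb=3.0e-18, f_bb=1.0e-18))
```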
Inverse Compton X-ray halos around high-z radio galaxies: A feedback mechanism powered by far-infrared starbursts or the CMB?
by Ian Smail; Katherine M. Blundell; B. D. Lehmer; D. M. Alexander
We report the detection of extended X-ray emission around two powerful high-z radio galaxies (HzRGs) at z~3.6 (4C03.24 & 4C19.71) and use these to investigate the origin of extended, Inverse Compton (IC) powered X-ray halos at high z. The halos have X-ray luminosities of Lx~3e44 erg/s and sizes of ~60kpc. Their morphologies are broadly similar to the ~60-kpc long radio lobes around these galaxies suggesting they are formed from IC scattering by relativistic electrons in the radio lobes, of...
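The CMB option in the title rests on a simple scaling: the CMB photon energy density grows as (1+z)^4, so inverse Compton losses against the CMB are hundreds of times stronger at z ~ 3.6 than locally. A back-of-the-envelope check, using standard values rather than numbers from the paper:

```python
# Inverse Compton losses against the CMB scale with its photon energy
# density, U_CMB(z) = U_CMB(0) * (1+z)^4. Standard values, not paper data.
U_CMB_0 = 4.2e-13  # erg/cm^3 at z = 0
z = 3.6
boost = (1.0 + z) ** 4
print(f"U_CMB(z={z}) = {U_CMB_0 * boost:.2e} erg/cm^3 "
      f"({boost:.0f}x the local value)")
```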
"Revealing a Population of Heavily Obscured Active Galactic Nuclei at z=0.5-1 in the Chandra Deep Field-South
by B. Luo; W. N. Brandt; Y. Q. Xue; D. M. Alexander; M. Brusa; F. E. Bauer; A. Comastri; A. C. Fabian; R. Gilli; B. D. Lehmer; D. A. Rafferty; D. P. Schneider; C. Vignali
(abridged) We identify a numerically significant population of heavily obscured AGNs at z~0.5-1 in the Chandra Deep Field-South (CDF-S) and Extended Chandra Deep Field-South by selecting 242 X-ray undetected objects with infrared-based star formation rates (SFRs) substantially higher (a factor of 3.2 or more) than their SFRs determined from the UV after correcting for dust extinction. An X-ray stacking analysis of 23 candidates in the central CDF-S region using the 4 Ms Chandra data reveals a...
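The selection reduces to a cut on the ratio of IR-based to dust-corrected UV-based SFRs among X-ray-undetected sources. A minimal sketch of that logic follows; the function name and inputs are hypothetical, not code from the paper.

```python
# Hypothetical helper illustrating the paper's selection cut.
def is_obscured_agn_candidate(sfr_ir, sfr_uv_corrected, xray_detected,
                              excess_factor=3.2):
    """Flag X-ray-undetected sources with a large IR-over-UV SFR excess."""
    return (not xray_detected) and (sfr_ir >= excess_factor * sfr_uv_corrected)

print(is_obscured_agn_candidate(sfr_ir=40.0, sfr_uv_corrected=10.0,
                                xray_detected=False))  # True
```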
Chandra Stacking Constraints on the Contribution of 24 micron Spitzer Sources to the Unresolved Cosmic X-ray Background
by A. T. Steffen; W. N. Brandt; D. M. Alexander; S. C. Gallagher; B. D. Lehmer
We employ X-ray stacking techniques to examine the contribution from X-ray undetected, mid-infrared-selected sources to the unresolved, hard (6-8 keV) cosmic X-ray background (CXB). We use the publicly available, 24 micron Spitzer Space Telescope MIPS catalogs from the Great Observatories Origins Deep Survey (GOODS) - North and South fields, which are centered on the 2 Ms Chandra Deep Field-North and the 1 Ms Chandra Deep Field-South, to identify bright (S_24 > 80 microJy) mid-infrared...
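Stacking in this context means co-adding counts at the positions of individually undetected sources so that their mean signal rises above the noise. A minimal sketch, using mock counts and background maps in place of the real Chandra products:

```python
import numpy as np

# Co-add counts in small apertures at source positions, then form a net
# signal and a crude Gaussian signal-to-noise ratio.
def stack(counts_map, bkg_map, positions, r=3):
    src = bkg = 0.0
    for y, x in positions:
        src += counts_map[y - r:y + r + 1, x - r:x + r + 1].sum()
        bkg += bkg_map[y - r:y + r + 1, x - r:x + r + 1].sum()
    net = src - bkg
    return net, net / np.sqrt(src + bkg)

rng = np.random.default_rng(2)
counts = rng.poisson(0.2, size=(100, 100)).astype(float)  # mock counts map
bkg = np.full((100, 100), 0.2)                            # mock background map
print(stack(counts, bkg, [(20, 30), (50, 60), (70, 15)]))
```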
A Hard X-ray Study of the Normal Star-Forming Galaxy M83 with NuSTAR
by M. Yukita; A. E. Hornschemeier; B. D. Lehmer; A. Ptak; D. R. Wik; A. Zezas; V. Antoniou; T. J. Maccarone; V. Replicon; J. B. Tyler; T. Venters; M. K. Argo; K. Bechtol; S. Boggs; F. E. Christensen; W. W. Craig; C. Hailey; F. Harrison; R. Krivonos; K. Kuntz; D. Stern; W. W. Zhang
We present results from sensitive, multi-epoch NuSTAR observations of the late-type star-forming galaxy M83 (d=4.6 Mpc), which is the first investigation to spatially resolve the hard (E>10 keV) X-ray emission of this galaxy. The nuclear region and ~ 20 off-nuclear point sources, including a previously discovered ultraluminous X-ray (ULX) source, are detected in our NuSTAR observations. The X-ray hardnesses and luminosities of the majority of the point sources are consistent with hard X-ray...
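The "X-ray hardnesses" referred to here are band ratios of counts. A common convention is sketched below; the specific counts are placeholders, not measurements from the paper.

```python
# Band-ratio hardness: +1 means all counts in the hard band, -1 all soft.
def hardness_ratio(hard_counts, soft_counts):
    return (hard_counts - soft_counts) / (hard_counts + soft_counts)

print(hardness_ratio(hard_counts=120, soft_counts=300))  # ~ -0.43
```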